
Recent advances have positioned Large Language Models (LLMs) as transformative tools for scientific research, capable of addressing complex tasks that require reasoning, problem-solving, and decision-making. These exceptional capabilities suggest that LLMs could serve as scientific research assistants, but they also highlight the need for holistic, rigorous, and domain-specific evaluation to assess their effectiveness in real-world scientific applications.
First, this talk motivates and describes the current effort at Argonne National Laboratory to develop a multifaceted methodology for Evaluating AI models as scientific Research Assistants (EAIRA). This methodology incorporates four primary classes of evaluations: 1) Multiple Choice Questions to assess factual recall; 2) Open Response questions to evaluate advanced reasoning and problem-solving skills; 3) Lab-Style Experiments involving detailed analyses of LLM capabilities as research assistants in controlled environments; and 4) Field-Style Experiments to capture researcher-LLM interactions at scale across a wide range of scientific domains and applications. For each of these four classes of evaluation, we develop testing methods (e.g., benchmarks) and tools for manual and automatic question-answer (QA) generation and validation, as well as for collecting and analyzing researcher-LLM interactions.
We will present a selection of these tools and generated benchmarks, as well as an early analysis of the largest Field-Style Experiment to date, the 1,000 Scientists AI JAM. Together, these complementary methods enable a comprehensive analysis of LLM strengths and weaknesses with respect to scientific knowledge, reasoning abilities, and adaptability. Although developed within a subset of scientific domains, the methodology is designed to generalize to a much wider range of domains.
Cappello received his Ph.D. from the University of Paris XI and joined CNRS as a researcher (CR) in 1994. In 2003, he joined INRIA as a research director (DR). He initiated the Grid’5000 project (https://www.grid5000.fr) in 2003 and served as its director from 2003 to 2008. In 2009, he established the Joint Laboratory on Extreme Scale Computing (JLESC) with Marc Snir. In 2016, Cappello became the director of two Exascale Computing Project (ECP, https://www.exascaleproject.org/) software projects, addressing resilience (VeloC) and lossy compression of scientific data (SZ), that help applications run efficiently on Exascale systems. Cappello now focuses on establishing a methodology to evaluate the knowledge and skills of LLMs used as research assistants. He is an IEEE Fellow and the recipient of the 2024 IEEE CS Charles Babbage Award, the 2024 Euro-Par Achievement Award, the 2022 HPDC Achievement Award, two R&D 100 Awards (2019 and 2021), the 2018 IEEE TCPP Outstanding Service Award, and the 2021 IEEE Transactions on Computers Award for Editorial Service and Excellence.