Prompt engineering of GPT-4 for chemical research: what can/cannot be done?

Bibliographic Details
Main Authors: Kan Hatakeyama-Sato, Naoki Yamane, Yasuhiko Igarashi, Yuta Nabae, Teruaki Hayakawa
Format: Article
Language: English
Published: Taylor & Francis Group 2023-12-01
Series: Science and Technology of Advanced Materials: Methods
Online Access: http://dx.doi.org/10.1080/27660400.2023.2260300
Description
Summary: This paper evaluates the capabilities and limitations of the Generative Pre-trained Transformer 4 (GPT-4) in chemical research. Although GPT-4 exhibits remarkable proficiencies, it is evident that the quality of input data significantly affects its performance. We explore GPT-4’s potential in chemical tasks, such as foundational chemistry knowledge, cheminformatics, data analysis, problem prediction, and proposal abilities. While the language model partially outperformed traditional methods, such as black-box optimization, it fell short against specialized algorithms, highlighting the need for their combined use. The paper shares the prompts given to GPT-4 and its responses, providing a resource for prompt engineering within the community, and concludes with a discussion on the future of chemical research using large language models.
ISSN: 2766-0400