A new study from the University of Arkansas has found that OpenAI’s GPT-4 AI model outperformed human participants in a standardized test designed to measure divergent thinking, a key component of creativity. The research, published in the journal Scientific Reports, administered the Alternative Uses Task to both the AI and human volunteers. In this test, participants are asked to generate creative uses for common objects like a rope or a fork. The AI’s responses were assessed for both the quantity and quality of ideas, factoring in originality and detail. The results showed that GPT-4 produced a greater volume of ideas and, on average, its responses were rated as more original and elaborate than those from humans. The researchers note that this demonstrates AI’s capacity for tasks requiring creative ideation but caution that the test measures only one aspect of creativity and does not account for real-world application or intent. The findings contribute to the ongoing debate about AI’s potential role in creative fields. Read the full article at https://sciencedaily.com/releases/2024/07/240712140242.htm.