AI that mimics human problem solving is a big advance – but comes with new risks and problems
OpenAI recently unveiled its latest artificial intelligence (AI) models, o1-preview and o1-mini (also referred to as “Strawberry”), claiming a significant leap in the reasoning capabilities of large language models (the technology behind Strawberry and OpenAI’s ChatGPT). While the release of Strawberry generated excitement, it also raised critical questions about its novelty, efficacy and potential risks.
Central to this is the model’s ability to employ “chain-of-thought reasoning” – a method similar to a human using a scratchpad, or notepad, to write down intermediate steps when solving a problem.
Chain-of-thought reasoning mirrors human problem solving by breaking down complex tasks into simpler, manageable sub-tasks. The use of scratchpad-like reasoning in large language models is not a new idea.
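To make the scratchpad analogy concrete, here is a minimal Python sketch. It is illustrative only and not OpenAI's implementation: a toy word problem is solved by writing each intermediate step to a scratchpad before the final answer is produced. The function name and the example problem are hypothetical.

```python
# A minimal sketch of the "scratchpad" idea (illustrative, not OpenAI's code):
# instead of jumping straight to an answer, each intermediate step is
# written down before the next one is taken.

def solve_with_scratchpad(apples: int, eaten: int, friends: int) -> int:
    """Share the remaining apples among friends, recording each step."""
    scratchpad = []  # the "notepad" of intermediate steps

    remaining = apples - eaten
    scratchpad.append(f"Step 1: {apples} apples - {eaten} eaten = {remaining} left")

    share = remaining // friends
    scratchpad.append(f"Step 2: {remaining} left / {friends} friends = {share} each")

    for step in scratchpad:
        print(step)
    return share

# Example: 12 apples, 2 eaten, shared among 5 friends -> 2 each
print("Answer:", solve_with_scratchpad(12, 2, 5))
```

In a large language model there is no explicit program logic like this; the "scratchpad" is simply generated text. The model is prompted, or trained, to emit its intermediate steps as tokens before stating a final answer.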
The ability of AI systems to perform chain-of-thought reasoning without being specifically trained to do so was first observed in 2022 by several research groups, including Jason Wei and colleagues at Google Research, and Takeshi Kojima and colleagues at the University of Tokyo and Google.
Before these works, other…