OpenAI o1 model warning issued by scientist: “Particularly dangerous”

https://www.newsweek.com/openai-advanced-gpt-model-potential-risks-need-regulation-experts-1953311

3 Comments

  1. OpenAI’s o1-preview, its new series of “enhanced reasoning” models, has prompted warnings from AI pioneer professor Yoshua Bengio about the potential risks associated with increasingly capable artificial intelligence systems.

    These new models are designed to “spend more time thinking before they respond,” allowing them to tackle complex tasks and solve harder problems in fields such as science, coding, and math.

    * In qualifying exams for the International Mathematical Olympiad (IMO), the new model correctly solved 83 percent of problems, compared to only 13 percent solved by its predecessor, GPT-4o.
    * In competitive programming, the model reached the 89th percentile in Codeforces contests.
    * The model reportedly performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.

    “If OpenAI indeed crossed a ‘medium risk’ level for CBRN (chemical, biological, radiological, and nuclear) weapons as they [report](https://openai.com/index/openai-o1-system-card/), this only reinforces the importance and urgency to adopt legislation like SB 1047 in order to protect the public,” Bengio said in a comment sent to *Newsweek*, referencing the AI safety bill currently proposed in California.

    He said, “The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”