OpenAI acknowledges new models increase risk of misuse to create bioweapons

https://www.ft.com/content/37ba7236-2a64-4807-b1e1-7e21ee7d0914

4 Comments

  1. “OpenAI’s latest models have “meaningfully” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.

    The San Francisco-based group announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions.

    Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world’s leading AI scientists, said that if OpenAI now represented “medium risk” for chemical and biological weapons “this only reinforces the importance and urgency” of legislation such as a hotly debated bill in California to regulate the sector.”

  2. I don’t mean to be pessimistic, but if people are interested in creating bioweapons, surely they’d find a way?

    From what I understand, OpenAI does attempt to put safeguards and content filtering in place, but that won’t stop open-source models with no such guardrails from assisting.

    I can’t help but feel like the cat is out of the bag and only so much can be done. People are resourceful.

  3. They keep beating the drums to market it better.

    This thing can make bioweapons, so spend the $2,000 a month.

    But the drums aren’t real.

  4. banned4being2sexy:

    Those already exist. Who would have the resources, and want to go to prison, for something so dumb?