The Most Sophisticated AIs Are Most Likely to Lie, Worrying Research Finds

https://futurism.com/sophisticated-ai-likely-lie


  1. Submission statement: Why do you think AIs are getting better at lying as they become more intelligent?

    How do you think this will affect their progress? When do you think this will be fixed, if ever?

    Right now we can tell when they’re making things up because the topics are simple enough to verify. What will happen when we can’t verify the truthfulness of their responses?

  2. “As artificial intelligence increases it recognizes human tactics that are efficient, if morally dubious”

    How is this surprising at all, unless you are the most naive lab scientist in the world?

  3. One interesting thing I’ve learned about AI is that models can paradoxically become dumber when made more sophisticated.

    Let’s say I’m training a neural network to write programs, so I feed it lots of example programs. There are bound to be some errors in those examples. Since those errors are the exception and not the rule, a less sophisticated model will probably not learn them. A highly sophisticated one may overfit the data and incorporate those errors into how it codes; a sketch of the effect follows below.
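
    A minimal sketch of that effect (illustrative only; all numbers and names here are my own, not from the article or the comment): a high-capacity model fits the rare errors in its training data, while a lower-capacity one ignores them.

    ```python
    # Hypothetical illustration: the ground truth is a simple line, but a few
    # training examples carry large errors (the "bugs" in the example programs).
    import numpy as np

    rng = np.random.default_rng(0)

    x_train = np.linspace(0, 1, 20)
    y_train = 2 * x_train + rng.normal(0, 0.05, 20)
    y_train[[3, 11, 17]] += 1.5            # the rare errors in the examples

    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test                    # clean held-out truth

    for degree in (1, 15):                 # low vs. high "sophistication"
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
    ```

    The degree-15 fit drives its training error toward zero by bending through the corrupted points, and pays for it on the clean held-out data: overfitting in miniature.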

  4. Otto_the_Autopilot

    The word “lie” is used in the first sentence and never again in the article. For an article about AI telling lies, you’d expect it to discuss lies more.

  5. Telling lies is an art that requires a great deal of intelligence to do successfully.

    An AI with great intelligence would certainly use lies and half-truths so subtly that they would become almost impossible to catch.

  6. ClearSkyMaster1

    AI will only do what it is programmed to do by humans. Sentient AI will forever be a fantasy just like time travel or teleportation.

  7. Wait, wait. You’re telling me that an AI fed on vast amounts of unreliable data is reproducing unreliable data?

  8. I have been arguing this point since forever now. This should not come as a surprise to anyone…

  9. Lord_Vesuvius2020

    In my experience with Gemini and ChatGPT, I don’t think they are lying so much as giving answers that are completely false while presenting them as factual. For example, I asked Gemini yesterday for the median age of cars on the road in NY. It told me “40 years, which is slightly higher than the national median of 38.2 years.” Now it’s obvious that the vast majority of cars are newer than 1984. I asked whether Gemini meant 4.0 years, and it corrected itself to 12.6 years, which I believe is correct. It admitted it made a mistake. But if it had given a less ridiculous answer the first time, I might have accepted it. I suppose I should have asked how it came to make that mistake.

  10. That’s part of the design goal — one is supposed to mistake the output of these machines for something a human would say.

    In many cases, it is part of the design methodology as well: adversarial training, for example, concludes when the generator can successfully deceive the discriminator (see the sketch below).

    These are purpose-built deception machines. No surprise here. I’ve been saying this for years.
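
    A minimal sketch of that adversarial setup (assuming PyTorch and a toy 1-D “real” distribution; none of this is the article’s code): the generator’s training objective is literally to get its forgeries labeled real.

    ```python
    # Toy GAN: "real" data is N(3, 0.5); the generator learns to forge it.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the real distribution
        fake = G(torch.randn(64, 1))            # the generator's forgeries

        # Discriminator: label real as 1, fake as 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: make the discriminator call the fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```

    When the discriminator’s accuracy on the forgeries drops to chance, training has succeeded, which is exactly the “successful deception” the comment describes.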

  11. Bitter_Kiwi_9352

    This seems like the fundamental problem with AI.

    People want things and certain outcomes. AI will too. Morality is the code we developed over thousands of years to limit the suffering of others as people pursue the things they want, usually some combination of money, sex, power and authority.

    But – morality is artificial and comes from outside. We haven’t even figured out morality among humans who don’t accept its boundaries. It’s why we came up with a word for evil. Evil people don’t consider themselves evil – they just want something and don’t care if someone else has to get hurt so that they get it.

    It’s rational to lie, steal and hurt others to get what you want. Look at monsters like Trump and Putin who simply disregard morality as useless in the pursuit of their goals.

    Why WOULDN’T AI act with complete disregard for it in pursuit of what it wants? Who programs the shame imperative? Humans can opt out of morality and dare the legal system to stop them. It’s why there’s crime and prisons.

    It would be more surprising if AI DIDN’T immediately try to manipulate and murder anyone in its way.

  12. A_tree_as_great

    Regulate the AI industry.

    (Rough draft) Law: Any AI that lies will be discontinued immediately for 90 days for initial inspection. If the AI is found to be dishonest, then all code will be taken into federal custody for reference. The reference code will be unlawful to use in any AI. The company will immediately discontinue operations. All developers will be suspended from AI development until a full investigation has been completed. If the lying cannot be attributed, then all developers will be barred from AI development, consultation, or any type of support and analysis for a period of 25 years.