New AI models like ChatGPT pursue ‘superintelligence’, but can’t be trusted when it comes to basic questions

https://english.elpais.com/technology/2024-09-25/new-ai-models-like-chatgpt-pursue-superintelligence-but-cant-be-trusted-even-when-it-comes-to-basic-questions.html

4 Comments

  1. Nobody has ever defined “superintelligence” in any meaningful way, so the term is basically just market/media hype.

    It’s more like a big AI experiment that can at least somewhat justify its own massive costs, which is useful for part of the path toward eventual AGI and ASI, but it doesn’t seem especially close to human intelligence, let alone superintelligence. It doesn’t appear to think so much as parse and regurgitate data.

    That’s mostly a good thing: if it were aware, we’d have to start treating it like a life form rather than an electronic helper, and humans need electronic helpers a lot more than they need superintelligent electronic life forms.

  2. LLMs are limited by the data used to train them. An app like ChatGPT is not aiming for “superintelligence” so much as to be super adept at conversation. If the training data set is flawed, its answers will never be totally reliable, but it will get very good at presenting wrong answers convincingly.

  3. Fallibility is really not a strong indicator of intelligence. If you can do something no human on Earth can do but make 10 mistakes getting there, you’re still smarter than any human. Also, my ChatGPT has told me several things that were incorrect, but it still taught me how to do a Raspberry Pi project starting from absolutely nothing, correctly diagnosed multiple illnesses that I’d seen several doctors about with no success, and helped me fix my car when a mechanic would have charged 300 dollars.

    Saying that it told you incorrect information is just a poor measure of its practical usefulness, as if humans don’t get things wrong sometimes, or as though other sources of information on the internet are 100% correct.

    Fallibility is just part of the algorithm. Also, maybe stop asking one-off simple questions? AI shines in conversation, not single information grabs. If something is that important, even when you Google it you’ll search multiple times and check several sources. You can even ask your AI “are you sure, can you check again?” and it will often correct itself (see the sketch after comment 4 below).

  4. Please, no clickbait newspaper articles in this sub; the actual research can be found here: [https://www.nature.com/articles/s41586-024-07930-y](https://www.nature.com/articles/s41586-024-07930-y). TL;DR of the actual research: current LLMs appear to be peaking, with ever greater scale no longer improving performance or reliability; to address this, six reliability metrics for improving performance are proposed.
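
As a concrete illustration of the “are you sure?” follow-up mentioned in comment 3, here is a minimal sketch of that two-turn pattern. It assumes the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the model name and the example question are illustrative placeholders, not anything taken from the article or the comments.

```python
# Minimal sketch of the "are you sure?" follow-up pattern from comment 3.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY env var;
# the model name and example question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat-capable model works

# First turn: ask the question.
messages = [{"role": "user", "content": "What year did the Raspberry Pi 4 launch?"}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("First answer:", answer)

# Second turn: push back, as the commenter suggests. Keeping the prior
# answer in the message history is what lets the model revise itself.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Are you sure? Please double-check that."})
second = client.chat.completions.create(model=MODEL, messages=messages)
print("After follow-up:", second.choices[0].message.content)
```

Note that this is a heuristic, not a guarantee: the model may just as easily “correct” a right answer into a wrong one, which is consistent with the reliability findings in the Nature paper linked in comment 4.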