Apple’s study proves that LLM-based AI models are flawed because they cannot reason

https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-because-they-cannot-reason?utm_medium=rss

4 Comments

  1. thenewguyonreddit:

    They never could reason, and the only people who believed they could were laymen unfamiliar with how GPTs actually work.

    At their core, they are very fancy prediction and probability engines. That's it. They either predict the next word in a sentence or the next pixel in an image. Most of the time they are right; sometimes they are laughably wrong. Even calling them AI is a huge stretch.
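
    To make the "next-word prediction" point concrete, here is a minimal sketch of inspecting a model's next-token probabilities, assuming the Hugging Face transformers library and the publicly available GPT-2 checkpoint (neither is named in the article or thread):

    ```python
    # Minimal sketch: inspect a GPT's next-token probability distribution.
    # Assumes the Hugging Face "transformers" library and the public GPT-2
    # checkpoint; the thread does not name a specific model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # logits has shape (batch, sequence_length, vocab_size)
        logits = model(**inputs).logits

    # Softmax over the final position gives the probability of each
    # candidate next token -- the "prediction engine" in action.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
    ```

    The top candidate (most likely " Paris") is simply the token with the highest conditional probability given the prompt; no explicit reasoning step is involved anywhere in the computation.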

  2. I don’t disagree with the premise of the article, but when you’re testing an LLM “with a given math question”, you’re unlikely to get good results.