Meta's AI Chief on Limitations of Large Language Models

TapTechNews May 23 - According to a Financial Times report on May 23 local time, Yann LeCun, Meta's chief AI scientist and head of its artificial intelligence research, shared his views on the capabilities of large language models.

LeCun asserted that although these models perform impressively on certain tasks, they are fundamentally limited and will never reach human-level intelligence, with its capacity for reasoning and planning.

He pointed out that such large language models have basic shortcomings: they lack an understanding of logic, have only a limited grasp of the physical world, have no persistent memory, cannot reason, and cannot plan hierarchically. In the pursuit of human-level intelligence, he cautioned, one should not rely too heavily on scaling up large language models. Because these models depend so heavily on their training data, he argued, they are inherently unsafe: they can only give accurate responses within the scope of what they were trained on.

LeCun advocated a fundamental change of approach, focusing on the development of a new generation of artificial intelligence systems designed to endow machines with human-level intelligence. He acknowledged that this vision is ambitious but estimated that it might take about ten years to achieve.

TapTechNews note: Yann LeCun is a French-American computer scientist who won the 2018 Turing Award and has made contributions to fields including machine learning, computer vision, mobile robotics, and computational neuroscience.