Can large language models learn the structures of human language? And what can LLMs teach us about human language acquisition?
Sam Charrington, host of the influential TWIML AI Podcast, turned to Christopher Manning of Stanford University, recipient of the 2024 IEEE John von Neumann Medal, to find out. In this insightful interview, Manning shares his perspective on the intersection of linguistics and LLMs, digs into the question of LLM “intelligence”, and more.
Also, a shout-out to TWIMLAI for including Chris’s acceptance speech from the 2024 IEEE Honors Ceremony in its Resources section.