Yann LeCun, Meta’s Chief AI Scientist and a Turing Award laureate, has long been a pivotal figure in artificial intelligence. During the past year, however, his work has not only continued to push the boundaries of AI research but also sparked critical discussions about how society should approach the opportunities and risks posed by this transformative technology.
Born in 1960 in Soisy-sous-Montmorency, France, LeCun has been a driving force in AI innovation. From founding the New York University Center for Data Science in 2012 and co-founding Meta AI (then Facebook AI Research) in 2013 to shaping the future of open-source artificial intelligence, LeCun’s pragmatic vision makes him Emerge’s Person of the Year.
“On the technical side, he’s been a visionary. There are only a couple of people you could genuinely say that about, and he’s one of them,” New York University Professor of Computer Science Rob Fergus told Decrypt in an interview. “More recently, his advocacy for open-source and open research has been crucial to the Cambrian explosion of startups and people building on these large language models.”
Fergus is an American computer scientist specializing in machine learning, deep learning, and generative models. A professor at NYU’s Courant Institute and researcher at Google DeepMind, he co-founded Meta AI (formerly Facebook Artificial Intelligence Research) with Yann LeCun in September 2013.
LeCun’s impact on AI stretches back decades, encompassing his groundbreaking work in machine learning and neural networks. A former Silver Professor at New York University, he has long championed self-supervised learning—a technique inspired by how humans learn from their surroundings. In 2024, this vision drove advances in AI systems that can perceive, reason, and plan with increasing sophistication, much like living beings.
“There was a phase around 2015 when reinforcement learning was seen as the path to AGI. Yann had a cake analogy: unsupervised learning is the body, supervised learning is the icing, and reinforcement learning is the cherry on top,” Professor Fergus recalled. “Many mocked this at the time, but it’s been proven correct. Modern LLMs are primarily trained with unsupervised learning, fine-tuned with minimal supervised data, and refined using reinforcement learning based on human preferences.”
Whether developing cutting-edge systems like Meta’s open-source large language models, including Llama AI, or addressing the ethical and regulatory challenges of AI, LeCun has become a central figure in the global debate about the role of artificial intelligence.
“It’s been wonderful seeing him up close and all the amazing things he’s done,” Professor Fergus said. “More people should listen to him.”
AI regulations
One of LeCun’s most debated positions this year has been his outspoken opposition to regulating foundational AI models.
“He’s told me that he doesn’t think AI regulations are needed or the right thing,” NYU Professor of Mathematics Russel Caflisch told Decrypt. “I believe he’s an optimist, and he sees all the good things that can come from AI.”
Caflisch, the director of the Courant Institute of Mathematical Sciences at New York University, has known Professor LeCun since 2008 and has witnessed the rise of modern machine learning.
In June, LeCun took to X to assert that regulating the models themselves could stifle innovation and impede technological advancement.
“Making technology developers liable for bad uses of products built from their technology will simply stop technology development,” LeCun said. “It will certainly stop the distribution of open-source AI platforms, which will kill the entire AI ecosystem, not just startups, but also academic research.”
LeCun has argued that regulation should target applications, where risks are context-specific and manageable, rather than the foundational models themselves, an approach he contends would better serve both innovation and safety.
“Yann has done the foundational work that’s made AI successful,” Caflisch said. “His current importance lies in being approachable, articulate, and having a vision for advancing AI toward artificial general intelligence.”
Criticisms of AI fearmongering
LeCun has been vocal in countering what he perceives as overblown fears surrounding AI’s potential dangers.
“He doesn’t give in to fear-mongering and is optimistic about AI, but he’s not a cheerleader either,” Caflisch said. “He’s also promoted a path for improving this through robotics, by collecting data from the physical world.”
In an April appearance on the Lex Fridman Podcast, he dismissed catastrophic predictions often associated with runaway superintelligence or uncontrolled AI systems.
“AI doomers imagine all kinds of catastrophe scenarios of how AI could escape our control and basically kill us all, and that relies on a whole bunch of assumptions that are mostly false,” LeCun said. “The first assumption is that the emergence of superintelligence could be an event: at some point we’re going to figure out the secret, and we’ll turn on a machine that is superintelligent, and because we’ve never done it before, it’s going to take over the world and kill us all. That is false.”
Since the launch of ChatGPT in November 2022, the world has entered what many call an AI arms race. Primed by a century of Hollywood movies foretelling a coming robot apocalypse, and stoked by news that AI developers are working with the U.S. government and its allies to integrate AI into their systems, many fear an AI superintelligence will take over the world.
LeCun, however, disagrees with these views, saying that the smartest AI systems will at first have only the intelligence of a small animal, not the global hivemind of The Matrix.
“It’s not going to be an event. We’re going to have systems that are as smart as a cat, that have all the characteristics of human-level intelligence, but their level of intelligence would be like a cat or a parrot,” LeCun continued. “Then we’re going to work our way up to make those things more intelligent. As we make them more intelligent, we’re also going to put some guardrails in them.”
In a hypothetical doomsday scenario where rogue AI systems emerge, LeCun suggested that if developers can’t agree on how to control AI and one goes rogue, “good” AI could be deployed to fight the rogue ones.
“Yann LeCun says AI language models cannot reason or plan – even models like OpenAI’s o1 – instead they are only doing intelligent retrieval and are not the path towards human-level intelligence.”
— Tsarathustra (@tsarnick) October 23, 2024
The path forward for AI
LeCun advocates for what he calls “Objective-Driven AI,” where AI systems are not just predicting sequences or generating content but are driven by objectives and can understand, predict, and interact with the world with a depth akin to that of living beings. This process involves creating AI systems that develop “world models”—internal representations of how things work—which enable causal reasoning and the ability to plan and adapt strategies in real time.
LeCun has long been a proponent of self-supervised learning as a method for advancing AI toward more autonomous and general intelligence. He envisions AI learning to perceive, reason, and plan at multiple levels of abstraction, which would allow it to learn from vast amounts of unlabeled data, similar to how humans learn from their environment.
“The real AI revolution has not yet arrived,” LeCun said during a speech at the 2024 K-Science and Technology Global Forum in Seoul. “In the near future, every single one of our interactions with the digital world will be mediated by AI assistants.”
Yann LeCun’s contributions to AI in 2024 reflect a drive for technological innovation and pragmatic foresight. His opposition to heavy-handed AI regulation and his rejection of alarmist AI narratives highlight his commitment to driving the field forward. As AI continues to evolve, LeCun’s influence ensures it remains a force for technological progress.
Jason Nelson
https://decrypt.co/297391/emerge-2024-person-year-ai-yann-lecun
2024-12-24 14:01:02