Elon Musk releases code for his AI chatbot Grok. Here’s why it matters

March 20, 2024

SpaceX, Twitter and Tesla CEO Elon Musk arrives for a US Senate bipartisan Artificial Intelligence (AI) Insight Forum at the U.S. Capitol, Sept. 13, 2023, in Washington. (Stefani Reynolds/AFP via Getty Images)

(NEW YORK) — Some of the world’s largest companies and richest people are fighting over a question that will help shape the future of AI: Should firms reveal exactly how their products work?

Elon Musk, the CEO of Tesla and SpaceX, upended the debate in recent days by opting to release the computer code behind his AI chatbot, Grok.

The move contrasts with the approach taken by OpenAI, the company behind popular AI text bot ChatGPT. OpenAI, partly owned by tech giant Microsoft, opted to release comparatively few details about the latest algorithm behind its products.

Elon Musk did not respond to ABC News’ request for comment. Neither did OpenAI.

In a statement earlier this month, OpenAI rejected claims that the company has kept its AI models secret.

“We advance our mission by building widely-available beneficial tools. We’re making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions,” the company said. “We provide broad access to today’s most powerful AI, including a free version that hundreds of millions of people use every day.”

Here’s what to know about Grok, why Elon Musk disclosed the computer code and what it means for the future of AI:

What is Musk’s AI chatbot, Grok?

Last year, Musk launched an artificial intelligence company called xAI, vowing to develop a generative AI program that competes with established offerings like ChatGPT.

On several occasions, Musk has warned against the risks of political bias in AI chatbots, which can shape public opinion and spread misinformation.

However, content moderation itself has become a polarizing topic, and Musk has voiced opinions that place his approach within that hot-button political context, some experts previously told ABC News.

In November, xAI debuted an early version of its first product, Grok, which responds to user prompts with humorous comments modeled on the classic sci-fi novel The Hitchhiker’s Guide to the Galaxy.

Grok is powered by Grok-1, a large language model that generates content based on statistical probabilities learned from scanning vast amounts of text.
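That one-sentence description can be made concrete with a toy sketch. The snippet below is purely illustrative and is not Grok-1’s actual code: the hand-written probability table stands in for the billions of parameters a real model learns from text, but the core step is the same, assign a probability to every candidate next token given the text so far, then sample one.

```python
# Toy illustration of next-token sampling (not Grok's real code).
# A real LLM learns these probabilities from vast amounts of text;
# here they are invented by hand for the example.
import random

# Hypothetical "learned" distribution over next tokens for one context.
next_token_probs = {
    "the answer is": {"42": 0.6, "unknown": 0.3, "simple": 0.1},
}

def sample_next_token(context: str) -> str:
    """Sample the next token in proportion to its learned probability."""
    probs = next_token_probs[context]
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("the answer is"))  # most often prints "42"
```

Generating a full reply simply repeats that sampling step, appending each chosen token to the context.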

To access Grok, users must first purchase a premium subscription to X, the social media platform owned by Musk.

“We believe that it is important to design AI tools that are useful to people of all backgrounds and political views. We also want to empower our users with our AI tools, subject to the law,” xAI said in a blog post in November. “Our goal with Grok is to explore and demonstrate this approach in public.”

Why did Musk make the code openly available?

The decision to release the code behind Grok touches on two issues important to Musk: the threat posed by AI and an ongoing battle with rival company OpenAI.

For years, Musk has warned that AI risks significant societal harm. In 2017, he tweeted: “If you’re not concerned about AI safety, you should be.” And more recently, in March 2023, he signed an open letter warning of the “profound risks to society and humanity” posed by AI.

In remarks on Sunday, Musk appeared to frame the open-source decision as a means of ensuring transparency, protecting against bias and minimizing the danger posed by Grok.

“Still work to do, but this platform is already by far the most transparent & truth-seeking,” Musk said in a post on X.

The move also relates directly to a public feud between Musk and OpenAI.

Musk, who co-founded OpenAI but left the organization in 2018, sued OpenAI and its CEO Sam Altman earlier this month, alleging the company abandoned its mission of benefiting humanity in a sprint toward profits.

Days after filing the lawsuit, Musk said on X that he would drop the case if OpenAI changed its name to “ClosedAI.”

In a statement earlier this month, OpenAI said it plans to move to dismiss all of Musk’s legal claims.

“As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control. Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he’d be supportive of us finding our own path,” OpenAI said.

What are the stakes of the fight over open vs. closed source AI?

The debate over whether to release the computer code behind AI products divides along two competing visions of how to limit harm, remove bias and optimize performance.

On the one hand, proponents of open source say that publicly available code allows a wide community of AI engineers to identify and fix flaws in a system, or tailor it for a purpose separate from its originally intended function.

In theory, open-source code offers programmers an opportunity to improve the security of a given product while ensuring accountability, because everything is visible to the public.

“Whenever somebody’s creating a piece of software, there can be bugs that can be exploited in ways that can cause security vulnerabilities,” Sauvik Das, a professor at Carnegie Mellon University who focuses on AI and cybersecurity, told ABC News. “It doesn’t matter if you’re the most brilliant programmer in the world.”

“If you open source, then you have an entire community of practitioners who poke holes and gradually over time build up patches and defenses,” Das added.

By contrast, supporters of closed source argue that the best way to safeguard AI is to keep the computer code private so it stays out of the hands of bad actors, who might repurpose it for malicious ends.

Closed-source AI also affords a leg up to companies that may want to capitalize on advanced products unavailable to the wider public.

“The closed-source systems are more difficult to redeploy for nefarious reasons simply because they already exist and there are only certain things you can do with them,” Kristian Hammond, a professor of computer science at Northwestern University who studies AI, told ABC News.

Last month, the White House announced it was requesting public comment on the benefits and dangers of open-source AI systems. The move came as part of a sweeping set of AI rules issued by the Biden administration through executive order in October.

Das, of Carnegie Mellon, said the open-source release by Musk may be motivated by both public and personal interests, but the move has sparked a much-needed conversation about this facet of AI safety.

“Even if the motives aren’t necessarily totally pure, the fact that this is raising public consciousness around this idea of open versus closed — and the benefits versus the risks of both — is exactly what we need in society right now in order to raise public awareness,” Das said.

Copyright © 2024, ABC Audio. All rights reserved.

