Galactica: Meta AI's new demo writes racist and inaccurate scientific literature, and the language model was pulled after three days of intense criticism


On November 15, Meta introduced a new large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta had hoped for, Galactica wilted after three days of intense criticism. According to a report by MIT Technology Review, Meta pulled the public demo, which it had encouraged everyone to try, on November 17.

Meta's misstep, and its hubris, shows once again that big tech companies have a blind spot about the serious limitations of large language models. Many studies point to the flaws of this technology, including its tendency to reproduce biases and present falsehoods as facts.

Large language models (LLMs) such as OpenAI's GPT-3 learn to write text by training on millions of examples and picking up the statistical relationships between words. As a result, they can produce documents that appear convincing, but those documents can also be riddled with falsehoods and potentially harmful stereotypes.
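To make that idea concrete, here is a minimal, purely illustrative sketch in Python (not Meta's or OpenAI's actual code, and vastly simpler than a real LLM): a toy bigram model that counts how often words follow one another in a tiny made-up corpus and then samples new text from those frequencies. It produces fluent-looking strings without any notion of whether they are true, which is exactly the limitation critics point to.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the millions of documents a real LLM trains on.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the mat ."
).split()

# Count how often each word follows each other word (bigram statistics).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=10):
    """Sample a continuation word by word, in proportion to observed frequencies."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, freqs = zip(*followers.items())
        word = random.choices(words, weights=freqs)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Sample output: "the cat sat on the mat . the dog ..." -- fluent-looking,
# but the model has no concept of whether any statement it emits is true.
```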

Enter Galactica, an LLM aimed at scientific writing. Its authors trained Galactica on humanity's vast body of scientific knowledge, including more than 48 million papers, textbooks, lecture notes, scientific websites, and encyclopedias. According to the Galactica paper, Meta AI researchers believed that this high-quality data would lead to high-quality output. They presented the model as a new interface for accessing and manipulating what we know about the universe.

While some people found the demo promising and useful, others quickly discovered that anyone could type in racist or potentially offensive prompts and just as easily generate authoritative-sounding content on those topics. For example, someone used it to create a wiki entry about a fictional research paper titled "The benefits of eating crushed glass."

Even when Galactica's output was not offensive to social norms, the model could mangle well-understood scientific facts, producing inaccuracies such as incorrect dates or animal names that would require deep knowledge of the subject to catch. The episode evokes a common ethical dilemma in artificial intelligence: when it comes to potentially harmful generative models, is it up to the general public to use them responsibly, or up to the publishers of those models to prevent misuse?

Like all language models, Galactica is a mindless bot that cannot tell fact from fiction. Within hours, scientists were sharing its biased and incorrect results on social media. "I am both amazed and unsurprised by this new effort," says Chirag Shah of the University of Washington, who studies search technologies. "When it comes to demonstrating these things, they look so fantastic, magical, and intelligent. But people still don't seem to grasp that, in principle, they cannot work the way we claim they do."

"Language models are not really knowledgeable beyond their ability to capture patterns of strings of words and reproduce them probabilistically," Shah explains. "It gives a false sense of intelligence."

Gary Marcus, a cognitive scientist at New York University and an outspoken critic of deep learning, gave his view in a Substack post titled "A Few Words About Bullshit," arguing that the ability of large language models to mimic human-written text is nothing more than a statistical feat.

However, Meta is not the only company advocating that language models can replace search engines. For the past two years, Google has been promoting language models such as PaLM as an information retrieval tool.

This is a tempting idea. But to suggest that the human-like text generated by these models will always contain valid information, as Meta did in promoting Galactica, is reckless and irresponsible. It was an unforced error.

And it wasn't just Meta's marketing team's fault. Yann LeCun, Turing Award winner and Meta's chief scientist, defended Galactica to the end. On the day the model was released, LeCun tweeted: "Type a text and Galactica will generate a paper with relevant references, formulas, and everything." Three days later, he tweeted: "The Galactica demo is offline for now. It's no longer possible to have some fun by casually misusing it. Happy?"

Is this not Meta's Tay moment? Recall that in 2016, Microsoft launched a chatbot on Twitter called Tay, then shut it down 16 hours later after Twitter users turned it into a racist and homophobic troll. Meta's handling of Galactica shows the same naïveté.

"The big tech companies keep doing this, and mark my words, they are not going to stop, because they can," Shah said. "And they feel they have to do it, or someone else will. They think it's the future of information access, even if nobody asked for it."

Last June, Google placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policy, after he raised concerns that an AI chatbot system had become sentient. The engineer, Blake Lemoine, worked for Google's Responsible AI organization and was testing whether its LaMDA model generated discriminatory language or hate speech.

The engineer's concerns stemmed from the persuasive responses the AI system generated about its rights and the ethics of robotics. In April, he shared a document with executives titled "Is LaMDA Sentient?" containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript on his Medium account), which he says shows the system arguing that it is sentient because it has feelings, emotions, and subjective experience.

Source: MIT Technology Review

And you?

It is reckless and irresponsible to suggest, as Meta did in promoting Galactica, that the human-like text produced by these models will always contain valid information. Do you agree with academic Chirag Shah that Meta's promotion of Galactica was wrong? Was it an unintentional mistake?

When it comes to potentially harmful generative models, do you think it is up to the general public to use them responsibly, or up to those who publish them to prevent any misuse?

While some people consider large language models a promising technology, others liken them to mindless bots that cannot distinguish fact from fiction. What is your opinion?

Why do some people see this as a problem and others don’t?

See also:

A Google engineer has been fired after claiming that Google’s LaMDA AI chatbot is sentient and expresses thoughts and feelings equal to those of a human child.

GPT-4: a new version of OpenAI's natural language processing AI could arrive this summer; smaller than GPT-3, it should nonetheless be more powerful

OpenAI offers the GPT-3 natural language processing model in private beta for applications ranging from text generation to code generation and software creation.
