A Google engineer warns the company may lose the AI race to open-source technology

TechBelli

According to a leaked document, freely available open-source software threatens both the tech giant and OpenAI's ChatGPT.



One of Google's engineers has warned the company that it is not in a position to win the artificial intelligence race and risks losing out to freely available AI technology.

According to the engineer's document, which was leaked online, the company had "looked over our shoulders a lot at OpenAI", the creator of the ChatGPT chatbot.

The employee, whom Bloomberg identified as a senior software engineer, claimed that neither company was positioned to win.

“The uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch,” the engineer wrote.

The engineer continued by saying that the open-source community was the "third faction" that posed a threat to Google and OpenAI from a competitive standpoint.

Rather than keeping their work proprietary, open-source developers make it available for anyone to use, enhance, or modify as they see fit. The Linux operating system and LibreOffice, an alternative to Microsoft Office, are two classic examples of open-source software.

The Google engineer claimed that open-source AI developers were "already lapping us", citing tools built on a large language model created by Mark Zuckerberg's Meta. Meta released the model in February on a "noncommercial", case-by-case basis, but it was leaked online shortly afterwards.

The document also stated that the barrier to entry for developing AI models has fallen since Meta's LLaMA model became widely available, going from requiring "the total output of a major research organisation to one person, an evening, and a beefy laptop".

The document also cited websites hosting open-source models for generating visual art. By contrast, neither ChatGPT nor Google's Bard chatbot makes its underlying model available to the public.

“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customisable, more private, and pound-for-pound more capable,” the Google employee wrote.

The engineer continued, "Our best hope is to learn from and collaborate with what others are doing outside Google," adding that people would not pay for a restricted AI model when "free, unrestricted alternatives are comparable in quality." The engineer also cautioned that the company had "no secret sauce," and that "free, unrestricted alternatives are comparable in quality."

Separately this week, the EU was cautioned that failing to safeguard grassroots AI research in its proposed AI law would jeopardise the availability of open-source models. In an open letter coordinated by the German research organisation Large-scale AI Open Network (Laion), the European Parliament was warned that any rules requiring developers to monitor or control the use of their work "could make it impossible to release open-source AI in Europe".

Such limitations, according to the Laion letter, would "entrench large firms", hinder efforts to improve transparency, reduce competition, limit academic freedom, and drive AI investment abroad.

The UK's competition watchdog began an investigation into the AI market on Thursday, focusing on the technology underpinning generative AI tools such as ChatGPT, Bard, and the Stable Diffusion image generator. The Competition and Markets Authority said "open, competitive markets" are necessary to support AI innovation.

The consulting firm SemiAnalysis said it had "verified" the authenticity of the Google engineer's document, which was first shared on a public server on the Discord chat platform, before publishing it online.
