Google arrived a little late to the AI battles, which are still very much in progress. A recent report accuses Google’s Bard of being trained on ChatGPT replies that users shared online; Google denies the charge.
Google Bard is a large language model (LLM) that produces content in response to prompts. This may entail outlining subjects, answering questions, or writing paragraphs of text from a simple user request. Its functionality is quite similar to that of the “new Bing” chatbot and of ChatGPT, the generative AI that took the world by storm last year.
Bard has performed pretty similarly to ChatGPT in its first week of use, which indicates that it is quite unpolished in many areas. Bard frequently makes factual errors, occasionally “hallucinates” and invents gibberish, and never credits any sources.
The manner in which Bard was trained, though, might be a more pressing issue. The Information reports that Jacob Devlin, a former Google AI engineer, objected to the company using ChatGPT data to train Bard.
According to Devlin, the Bard team relied “extensively” on ChatGPT responses posted to ShareGPT, a website where users frequently share answers they’ve gotten from OpenAI’s chatbot. Devlin also worried that such training could make Bard’s responses closely resemble ChatGPT’s.
After raising his concerns with Sundar Pichai, Devlin left Google; he now works for OpenAI. The report adds that “Google also ceased utilizing such data to train Bard.”
OpenAI’s terms of service prohibit using ChatGPT’s output “to construct models that compete with OpenAI,” and other Googlers wary of the situation felt this usage violated those terms.
Google issued a succinct statement to The Verge saying that Bard was not trained on ChatGPT data.
Google’s response doesn’t appear to definitively rule out that Bard was ever trained on ChatGPT data, but it does suggest that, at minimum, this is no longer the case.
The Information story continues by stating that Google’s Brain AI division and DeepMind, the Alphabet-owned AI lab, are collaborating to compete more effectively with OpenAI. According to the report, the “Gemini” project aims to “try to match the capabilities of OpenAI’s GPT-4.” That would entail matching GPT-4’s reported 1 trillion parameters, the learned values that determine a machine-learning model’s computations and serve as a rough measure of its size.
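To make “parameters” concrete, here is a minimal sketch of how a parameter count arises. The layer sizes below are purely illustrative and have nothing to do with the actual architectures of GPT-4, Bard, or Gemini, which have not been disclosed:

```python
# Count the learned parameters (weights + biases) in a tiny
# feed-forward network. Layer sizes are illustrative only.
def dense_layer_params(n_in, n_out):
    # A dense layer learns an n_in x n_out weight matrix
    # plus one bias value per output unit.
    return n_in * n_out + n_out

def network_params(layer_sizes):
    # Sum the parameters of each consecutive pair of layers.
    return sum(dense_layer_params(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# A toy network: 512-dim input -> 1024 hidden units -> 10 outputs.
total = network_params([512, 1024, 10])
print(total)  # 512*1024 + 1024 + 1024*10 + 10 = 535562
```

Scaling this kind of count up through hundreds of much wider layers is how models reach billions, and reportedly trillions, of parameters.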