At Google, there are fears that ChatGPT will spark a revolution in search engines that could end the era of the Internet giant.
It is undeniable: in just a few weeks, OpenAI’s ChatGPT has become the number one artificial-intelligence attraction open to the general public. ChatGPT has an answer for almost everything, although its form is sometimes more convincing than its substance. The powerful algorithms behind this AI both impress and worry.
In addition to the ethical questions, there are economic ones, and it is these that Google’s management is eyeing warily. Taken aback by the AI’s effectiveness and popularity, the Internet giant’s management team has, according to the New York Times, triggered a “code red” for its own search engine. Google’s CEO, Sundar Pichai, reportedly ordered several teams to step up work on the quality of the company’s search engine to arm it against the potential threat posed by ChatGPT. Ironically, before OpenAI’s transformation into a commercial company, a venture overseen in particular by Elon Musk, the development of the GPT AI that drives the chatbot came in part from Google’s own labs.
ChatGPT’s algorithms don’t work like those of regular search engines. ChatGPT seeks to understand users’ questions and intent as formulated in natural language. The chat format, that is to say dialogue, allows the AI to enrich its reasoning and correct course when it misinterprets a question. It is this revolution that scares Google, should it now make its way into search tools. If the firm has become unavoidable, it is precisely because, in its day, it too introduced a technology that upended the Internet and eliminated most other search engines.
Google seeks to develop safe and ad-friendly AI
For the moment, this AI is nothing like a search engine such as Google’s. Its database is purely internal: it bases its answers on what it ingested up to 2021 and does not draw information from the Web. That is why the AI can lie or misinform, asserting fake news with very convincing rhetoric. Despite this, the foundations are there, and this kind of AI could well steal the spotlight. A pity for Google, which also has its own chatbot, based on LaMDA (Language Model for Dialogue Applications), comparable to ChatGPT. Rather than opening its AIs to the public as OpenAI does, Google prefers to integrate them into real products before offering them.
While the firm also wishes, in the long term, for AI to make using a search engine safer, it runs up against a prohibitive incompatibility. Today, 80% of Google’s revenue comes from its advertising network, yet a chatbot’s AI is not at all suited to serving these ads. Its teams will therefore have to redouble their ingenuity to find a solution that lets them fend off a troublemaker like ChatGPT or one of its clones.
And then there is the memory of Meta’s setbacks last summer with BlenderBot 3, a chatbot capable of holding discussions by gleaning information from the Internet to fuel its debates. Less than two days after its launch, the chatbot began making conspiratorial and anti-Semitic remarks. In the end, by trying to make the AI react like a human, the algorithms end up behaving a bit like one, lending credence to massively shared opinions.