ChatGPT worries Google. Is it really credible?

ChatGPT, OpenAI’s AI-powered dialogue tool, has been in the news since its release, to the point that people are talking about direct competition with Google and predicting the imminent death of the Mountain View firm. Yet all of this fits squarely within the normal evolution of search tools. And ChatGPT is still far from competing with the leading engine…

For the past few weeks, OpenAI’s ChatGPT tool has been making headlines with its fairly impressive AI-driven question-and-answer system (even in French). The system certainly has its limits, but it remains effective for a certain number of queries, particularly informational ones.

Here are two examples of questions for which ChatGPT’s answer is of rather good quality:

Questions about SEO and Google, answers from ChatGPT. Source: Abondance

But it must be noted that the answers are often riddled with inaccuracies, even outright errors. Here are a few examples among many others, with explanations in the captions:

Question about Spirou, who was created by Rob-Vel and not Franquin. A bellhop is not really a valet. Spip is a squirrel, not a hamster. The other artists did not wait for Franquin’s death to take over the character. Etc. Source: Abondance

Question about Iznogoud: here Goscinny is the scriptwriter and not the artist (although he was an artist before becoming a scriptwriter). For Iznogoud, he is clearly the scriptwriter, not the artist. Jean Tabary is French (although born in Stockholm), not Franco-Belgian. Source: Abondance

Question about Abraracourcix: the name of the wife of the Gallic village chief is Bonemine, not Bonnaire. Source: Abondance

Question about Klevener wine: no mention at the start of Heiligenstein, the cradle of Klevener since 1742 and the only town producing this wine. Heiligenstein is also located in the Bas-Rhin, not the Haut-Rhin. Etc. Source: Abondance

These few examples (which are not intended as a scientific study of ChatGPT’s reliability, but which could be multiplied almost endlessly) show that this tool can hardly be used without checking its answers against other sources. Reliability is clearly not there (yet)… Ask ChatGPT a question on a subject you know well and you will see this immediately. It is inconceivable today to use ChatGPT as an ordinary search engine for one’s daily information needs.

Nevertheless, the tool is remarkable for the form of the answers it returns (structured, with long descriptive sentences and no spelling mistakes), and in this respect it hints at what it will deliver once its many current errors are corrected. Today, one might estimate that 90% of the content of the answers provided is fairly reliable. But when 10% of an answer is wrong, it is impossible to place unshakeable confidence in the tool. For this reason, it seems impossible to us that it can compete with Google today. On the other hand, it may well foreshadow, quite clearly, what search engines will look like in a few years.

Red alert at Google

In any case, the release of this product triggered a “code red” at Google, and the leaders of the Mountain View firm seem to have asked its engineers to speed up the development (under way, of course, for many months, especially around LaMDA) of an equivalent technology. But it is clear that ChatGPT and Google are not operating at the same scale:

  • the level of errors ChatGPT returns today would not be accepted if Google returned them on its official engine (though it might be on an experimental tool, which will certainly be launched in 2023);
  • the volume of information processed and indexed by Google (hundreds of billions of web pages) bears no comparison with that of ChatGPT;
  • the same goes for the volume of queries processed every second.

Sundar Pichai, CEO of Google, and Jeff Dean, head of AI at Alphabet, are temporizing: “This is an area where we need to be bold and responsible. So we have to find a balance (…) For search, questions of veracity are really important; and for other applications, issues of bias, toxicity and safety are also paramount.” Moreover, Sam Altman, CEO of OpenAI, says much the same and recently acknowledged the limits of his tool: “ChatGPT is incredibly limited, but good enough in some areas to give the deceptive impression of being great. It would be a mistake to rely on it for anything of importance at present (…) there is a lot of work still to be done in terms of robustness and veracity.”

We must therefore look beyond the “wow” factor of ChatGPT (which is clearly real) and see it above all as a vision of what search engines will be in the medium term (bearing in mind that on the Internet, the notion of medium term is relative: we are talking about only a few years here). It is clear that the “10 blue links” will soon disappear, at least in their current form. It is also likely that 2023 will be a pivotal year in this respect, and that Google will show us things soon, perhaps as early as the Google I/O event next May.

It remains to be seen how SEO will adapt to these new standards. Because the adaptation is likely to be significant, and not so easy…