ChatGPT, Open AI, Microsoft, Google, Elon Musk… all about the great battle for leadership in Artificial Intelligence

ChatGPT, based on GPT-3.5 and developed by OpenAI, has monopolized the headlines and the spotlight in Artificial Intelligence in recent weeks.

The algorithm, which can answer any question in conversational mode with responses consistent with the logic of the data and languages it has been trained on, has caused a furor and surprises with the freshness and spontaneity of its answers. OpenAI is a foundation co-founded by Elon Musk and now strongly backed by Microsoft, among others, which promotes the research and development of next-generation neural networks with a high impact on society. Although OpenAI is a non-profit, it is obvious that everyone involved sees enormous business possibilities.

Its derivations with DALL·E 2 produce realistic images from text descriptions, which makes it possible to devise any type of image or, for example, to complete images of which we only have a part, as in the case of paintings.

The power of ChatGPT is such that it could challenge the leadership of Google’s search and indexing engine in a short time, even though their origins, technology, and architecture are similar.

But it is worth knowing that, behind all this, several elements drive this RE-EVOLUTION.

BERT-type self-supervised algorithms

First of all, the emergence of self-supervised algorithms such as BERT (Bidirectional Encoder Representations from Transformers), which Google introduced in 2018 to better interpret our searches and which exceeds an 85% response rate on the queries made.

Let’s remember that Google has been indexing the web for more than two decades, so it knows a little about this. Google’s idea with BERT is to interpret contexts, not just search words, in order to refine results further. Bear in mind that 15% of the searches made every day are new.
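To see what “interpreting contexts, not just words” means in practice, here is a minimal sketch, assuming the openly available bert-base-uncased checkpoint and the Hugging Face transformers library, that compares the vector BERT assigns to the same word in different sentences. The sentences and the helper function are purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative example: the same word receives different vectors in different contexts
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Return the contextual embedding BERT assigns to `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v_river = word_vector("he sat on the bank of the river", "bank")
v_money = word_vector("she deposited money at the bank", "bank")
v_loan  = word_vector("the bank approved the loan", "bank")

cos = torch.nn.functional.cosine_similarity
print(cos(v_river, v_money, dim=0))  # lower: different senses of "bank"
print(cos(v_money, v_loan, dim=0))   # higher: same financial sense
```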

It uses the Transformer, a neural network architecture dating from 2017 that outperforms the recurrent neural networks that struggled with text and language, especially with long paragraphs. The Transformer weighs each word within each sentence against the others and captures their relationships, to the point that it can predict the next word. These models can be trained on long passages and are ideal for translation. For example, GPT-3.5 was trained on 45 TB of text from all over the web, which gives it great power.
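To make the idea of “weighing each word against the others” concrete, here is a minimal sketch of the scaled dot-product attention at the heart of the Transformer, using only NumPy and toy numbers; real models apply this with learned projection matrices across many heads and layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each word (row of Q) is scored against
    every other word (rows of K), the scores are normalized with softmax,
    and the result is a weighted mix of the value vectors V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise word-to-word scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V, weights

# Toy example: a 4-word "sentence", each word as an 8-dimensional vector
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))
context, attn = scaled_dot_product_attention(embeddings, embeddings, embeddings)
print(attn.round(2))  # how strongly each word attends to the others
```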

Ultimately, BERT is the most “popular” of all the Transformers and is a model trained by Google researchers on a massive text corpus. As a result of BERT, derivatives have emerged such as RoBERTa, which is used by Facebook, DistilBERT, and XLNet, which improve performance and computing efficiency and use different training. They all solve the same tasks (a usage sketch follows the list):

  • Summarization of texts.
  • Answering questions.
  • Named-entity recognition and classification.
  • Searching for similar texts.
  • Detection of fakes or malicious messages.
  • Conversational understanding with the user.
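
As an illustration, several of these tasks can be tried directly with the open-source Hugging Face transformers library; the default checkpoints used below are only examples, and any compatible model can be swapped in.

```python
from transformers import pipeline

# Text summarization
summarizer = pipeline("summarization")
print(summarizer(
    "BERT is a bidirectional Transformer model released by Google in 2018. "
    "It is pre-trained on a massive text corpus and can be fine-tuned for many tasks.",
    max_length=30, min_length=10))

# Question answering
qa = pipeline("question-answering")
print(qa(question="Who released BERT?",
         context="BERT was released by Google researchers in 2018."))

# Named-entity recognition
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("OpenAI and Microsoft are collaborating on ChatGPT."))
```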

The second important point is that BERT and this whole family of bidirectional Transformers are open source, which means that anyone can build on them, improve them, and share them. This collective intelligence makes the algorithm evolve and mutate at the speed of viruses, so every 6, 9, or 12 months different versions come out that achieve breakthrough leaps. Now we are heading for GPT-4.

ChatGPT, Open AI, Microsoft, Google, Elon Musk

The GPT-3.5 model created by OpenAI has managed to become popular due to its easy, totally free access and its very realistic conversation, but it must also be considered that Google Research had already launched Meena back in 2020, also based on Transformers, which maintains a very convincing conversation on any subject. The difference lies in its access and popularization: through TensorFlow Hub, perhaps less friendly to the general public, or through the Hugging Face library, much more attractive if you have Python programming knowledge, where it is put to better use and can be adapted to any concrete need, such as creating and selling a product.
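Purely as an assumption about how a reader with Python knowledge might experiment along these lines, here is a minimal sketch using the Hugging Face transformers library with a small, publicly downloadable generative model; GPT-3.5 itself is not downloadable and is only reachable through OpenAI’s API.

```python
from transformers import pipeline

# Small open model (GPT-2) used only as a stand-in for a conversational Transformer
generator = pipeline("text-generation", model="gpt2")

prompt = "Question: What is a Transformer in machine learning?\nAnswer:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```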

At its base, the technical architecture is always the same, except that OpenAI has evolved the model exponentially since the appearance of the first version, GPT-1. Now, with the upcoming release of GPT-4, it refines and resolves many of the issues detected in ChatGPT (GPT-3.5).

It improves accuracy, with greater power in text generation and in answering difficult questions. In fact, it is so similar to human behavior in the type of response it gives that around 65% of users will not be able to distinguish who is behind it. To get an idea of each breakthrough leap, GPT-4 will be 600 times more powerful than its predecessor GPT-3.5, which is the basis of ChatGPT. It is still a few months away from seeing the light, but it will almost certainly arrive before the end of 2023.

On the other hand, the Transformer base is also used to compose music, generate images from text, and generate text from images, so the combined possibilities are enormous. From a business point of view, the only limit is imagination. The possibilities of building products on this base that solve problems for companies and citizens are immense.

If we go back a bit in history, the first assistant was Siri, introduced on the iPhone 4S in 2011 and based on self-supervised learning algorithms and natural language processing (NLP) neural networks that are fine-tuned with each user interaction and reach high thresholds of understanding. Then Alexa arrived in 2014, based on technology from the SAS Institute, on which Amazon bet heavily, joined by many others such as Microsoft’s Cortana in 2014 and Google Home in 2016. But it was in June 2019 when, with the Transformers, what is considered the barrier of human understanding of language, which stands at 87% according to benchmarks such as SuperGLUE, was surpassed.

That means that in a conversation between average humans, 13% of the information or understanding is lost for different reasons: environmental, cultural, and educational barriers, among others. If the algorithm exceeds that percentage, it means it understands a conversation better than a human and loses fewer words, less information, or less knowledge.

From the moment this threshold is exceeded, things get serious, and a race begins to achieve maximum understanding for 100% human interaction, even among the voice assistants themselves.

In short, automating an understanding of language so perfect that we do not notice who is behind it opens a huge door in numerous fields: it can help improve the message, adapt it to the recipient, and achieve greater persuasion, all without the need for preparation, because it can be done while talking, in real time.

For example, a specialized call center could fully automate its activity through an algorithm whose performance would be identical to that of its best-trained worker. Of course, it will always act according to the training carried out and the information and data provided. Let’s also think about the disappearance of buttons and levers, since we will activate everything by voice, from driving a car to maneuvering in a refinery. In addition, if this is combined with digital twins, everything can be done from the sofa while wearing Oculus-type glasses.

In summary, we could talk about multiple examples of benign uses of these technologies, of great help to the development of humanity, because they make our lives more comfortable. But on the dark side, they are also very effective.
