ChatGPT is a prototype AI-based chatbot that can understand natural human language and generate fairly detailed text that appears to be written by a human.
It is built on the latest generation of a family of text-generating AI models known as Generative Pre-trained Transformers (GPT).
It is the product of OpenAI, the artificial intelligence research laboratory founded in late 2015 by Elon Musk and other Silicon Valley investors, most notably Sam Altman, OpenAI's current CEO. Their stated goal is to “advance digital intelligence in the way that is most likely to benefit humanity.”
Musk has since stepped down from the board and distanced himself from the company, denying it access to Twitter’s database (which OpenAI had used to train its models).
How does it work?
Built on machine learning, the GPT system provides information and answers questions through a conversational interface.
The model was trained on a sizable sample of text pulled from the internet.
The new generation was created with a focus on ease of use. OpenAI states that the dialog format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
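The dialog format described above works by resending the full conversation history with each request, which is what lets the model resolve follow-up questions in context. The sketch below illustrates that structure, assuming the OpenAI Python SDK and its public Chat API; the model name, the example questions, and the assistant reply are all hypothetical, and the network call itself is commented out so the sketch runs offline.

```python
# Each turn is a dict with a "role" ("system", "user", or "assistant")
# and the text of that turn. The whole history is sent every time,
# so the model can interpret follow-ups like "he" in the last turn.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who founded OpenAI?"},
    {"role": "assistant", "content": "OpenAI was founded in late 2015 by a group including Elon Musk and Sam Altman."},
    # A follow-up that only makes sense given the turn above:
    {"role": "user", "content": "When did he step down from the board?"},
]

# With the SDK installed and an API key configured, the call would look like:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-3.5-turbo",
#                                        messages=conversation)
# print(reply.choices[0].message.content)

# Print the transcript so far, one "role: content" line per turn.
for turn in conversation:
    print(f'{turn["role"]}: {turn["content"]}')
```

The key design point is that the API is stateless: "memory" of earlier turns exists only because the client replays them, which is also why very long conversations eventually hit the model's context limit.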
How can it be used?
Early users have described ChatGPT as an alternative to Google because of its ability to offer descriptions, answers, and solutions to complex questions, including how to write computer code, handle layout issues, and optimize queries.
In the real world, it could be applied to creating content for websites, responding to customer inquiries, making recommendations, and building automated chatbots.
Sam Altman, CEO of OpenAI, says that the system is still in the early stages of development: “Soon you will be able to have helpful assistants that talk to you, answer questions, and give advice. Later you can have something that goes off and does tasks for you. Eventually you can have something that goes off and discovers new knowledge for you.”
Can it replace people?
It has long been hypothesized that professions dependent on content production – be it playwrights, journalists, professors or programmers – could one day be supplanted by machines.
ChatGPT has already demonstrated that it could pass college exams with flying colors, while programmers have successfully used the tool to tackle coding problems in obscure programming languages in a matter of seconds.
Some people have suggested that the technology could replace journalists because it can generate human-like written text. But as of right now, chatbots lack the nuanced understanding, critical thinking, and ethical judgment required for good journalism.
Additionally, its training data ends in 2021, so it cannot answer questions about more recent events.
ChatGPT can also give completely wrong answers and present misinformation as fact, writing “plausible-sounding but incorrect or nonsensical answers,” the company admits.
All this tells us that this AI system (like many others, after all) still has a long way to go.