ChatGPT: The Good, The Bad and The Helpful

Artificial intelligence (AI) has rapidly advanced in recent years, becoming an increasingly ubiquitous part of our daily lives. From virtual assistants like Siri and Alexa to self-driving cars, AI is changing the way we interact with technology and each other.  

One of the most exciting developments in this field is the creation of chatbots like ChatGPT, which uses deep learning to generate human-like responses to text-based input. It can perform at or near human level on a wide range of academic and professional exams, logic puzzles and mathematical proofs. It can even explain why images are funny or ironic.

GPT – the language model behind ChatGPT – has sparked a media frenzy in recent months because, to put it plainly, it is revolutionizing AI. It is the product of a sophisticated algorithm and a huge neural network that together deliver almost human-like language capabilities.

Microsoft co-founder Bill Gates recently said that he has seen only two technology demonstrations that struck him as truly revolutionary: the first was the graphical user interface, which changed how we interact with computers, and the second was GPT passing an Advanced Placement biology exam in 2022. This was a truly momentous occasion, particularly as most of us never anticipated machines exhibiting human-like capabilities so soon.

For centuries, language has been seen as a uniquely human quality – something that defines reality and existence. Today, we are confronted with a tool that disrupts this belief and ushers in a new era.

So, how does ChatGPT work?  

ChatGPT is a language model built on a neural network that operates loosely like a human brain. When you provide a prompt, it uses a combination of rule-based modelling (similar to how humans understand rules and concepts) and training-based approximation (like how humans imitate behavior and patterns) to find an answer. The algorithm uses the input to select the most probable or best-fitting next token (a sub-word unit), one at a time, until it has produced the required output.
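The token-at-a-time loop described above can be sketched in a few lines of Python. This is a toy illustration only: the probability table below is hand-made, and the function names (`next_token_probs`, `generate`) are invented for this example – in a real model the probabilities come from a trained neural network, not a lookup table.

```python
import random

def next_token_probs(context):
    """Stand-in for the model: map a context to probabilities over next tokens."""
    # Hypothetical probabilities; a real model computes these with billions
    # of learned parameters rather than a fixed table.
    table = {
        (): {"The": 0.6, "A": 0.4},
        ("The",): {"cat": 0.5, "sky": 0.3, "<end>": 0.2},
        ("The", "cat"): {"sat": 0.7, "<end>": 0.3},
        ("The", "sky"): {"is": 0.6, "<end>": 0.4},
    }
    return table.get(tuple(context), {"<end>": 1.0})

def generate(max_tokens=10, seed=0):
    """Pick one token at a time until the model emits an end token."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample in proportion to probability; a purely greedy decoder
        # would instead always take the highest-probability token.
        choices, weights = zip(*probs.items())
        token = rng.choices(choices, weights=weights)[0]
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(seed=1))
```

The important point is the shape of the loop: each new token is chosen using everything generated so far, which is why the output reads as a coherent continuation rather than a bag of unrelated words.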

How is it so successful in creating human-like text?   

There are a few factors that explain the effectiveness of ChatGPT: 

  • Tokens – while the algorithm usually selects the most probable next token based on the data it has been trained on, there is also an element of randomization, which gives its output variety and apparent creativity. 

  • Size – the number of parameters in GPT-3’s neural network is huge: 175 billion. For each token it outputs, it performs on the order of 175 billion calculations – roughly one per parameter. Even if you ask it for the sum of ‘5 + 5’, it still needs to do that many calculations, which is somewhat inefficient compared with a calculator. For reference, the human brain has on the order of 1,000 times more connections (synapses) than GPT-3 has parameters, so while AI is still some way off human intelligence, the gap is much smaller than it used to be. 

  • Training data – GPT was trained on a large portion of the publicly available text on the internet, plus, by some estimates, around 10% of all the books ever written. To give you an idea, the entirety of Wikipedia accounts for only around 3% of the text used to train GPT. When you chat with GPT, you are in effect drawing on much of the text that humanity has digitally produced so far. 

  • Fine-tuning – many humans have been involved in testing and reviewing ChatGPT’s responses, helping it reach the quality and accuracy that we see today. 

Is AI becoming human-like?  

GPT may seem to express human-level intelligence, but it is not there yet. It has no long-term memory, goals, agency, introspection, bodily experience or – crucially – an understanding of human language.

We used to believe that you could not communicate in human language without understanding it, but this no longer seems to be true. It’s not that AI is more intelligent than we thought; rather, we have learnt something new about human language.

Is there a sustainability element to it? 

As with all technology, there is an environmental cost associated with the creation and operation of tools like ChatGPT. The example of asking it for the sum of ‘5 + 5’ is, of course, inefficient. It’s good to explore the tool and see what it can and can’t do, but it is not a resource to be wasted.

However, many of the conversations we’re having recognize that there is a connection between environmental, social and governance issues, and AI. They are intrinsically linked, and AI can support the acceleration of our ESG journeys.  

How is Orbus Software leveraging AI?  

Our flagship product OrbusInfinity is all about decision intelligence and helping organizations to make the best decisions. Decisions are always an expression of language, so we are working very actively to surface these new capabilities in the OrbusInfinity platform.  

Instead of looking at diagrams or reading through reports to find out which applications you should sunset, imagine being able to ask a question and receive a response in natural language. This is now very much within our grasp, and our product team is busy developing ways to make it a reality.

Chief Marketing Officer Dr Thorsten Fuchs discussed this topic in our recent ‘Tech-Talk: Generative AI: What, How and Why?’. To watch the full session, click here.