AI-driven chatbots like ChatGPT and Google Bard are gaining popularity, offering a new generation of conversational tools that can do everything from running web searches to generating creative writing to answering questions on nearly any topic.

ChatGPT, Google Bard, and their counterparts are known as large language models (LLMs). By understanding how LLMs work, you can harness their capabilities while also recognizing their limitations and areas where they shouldn’t be relied upon.

Similar to other AI systems, such as voice recognition and image generation, LLMs are trained on massive amounts of data. Although the companies behind these models remain somewhat secretive about their data sources, some clues can be found. For instance, the research paper introducing LaMDA, the model underlying Google Bard, mentions Wikipedia, public forums, and programming-related documents. Furthermore, Reddit and Stack Overflow plan to start charging for access to their data, implying that LLMs have been making use of these resources for free up until now.

This text data, regardless of its origin, is processed through a neural network, a widely used type of AI engine made up of many interconnected nodes arranged in layers. These networks continually adjust how they interpret data based on various factors, including the outcomes of previous predictions. Most LLMs employ a specific neural network architecture called a transformer, which is particularly well suited to language processing (the GPT in ChatGPT stands for Generative Pre-trained Transformer).
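To see what "continually adjust" means in practice, here is a deliberately tiny, hypothetical sketch in Python: a single artificial neuron nudging one weight until its output matches a target. This is not how any real LLM is implemented, but an LLM performs the same basic kind of adjustment across billions of weights at once.

```python
import numpy as np

# A toy "network": one weight, one input, one desired outcome.
weight = np.random.randn()   # start from a random guess
x, target = 2.0, 10.0
learning_rate = 0.01

for _ in range(200):
    prediction = weight * x              # forward pass
    error = prediction - target          # compare with the desired outcome
    gradient = 2 * error * x             # slope of the squared error
    weight -= learning_rate * gradient   # adjust based on the result

print(round(weight * x, 3))  # ~10.0: the "network" has learned the mapping
```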

Transformers can read immense amounts of text, identify patterns between words and phrases, and predict subsequent words. Comparing LLMs to advanced autocorrect engines is not entirely inaccurate: ChatGPT and Bard may not “know” anything, but they excel at determining word sequences, which can resemble genuine thought and creativity.
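In spirit, though not in scale, that prediction step looks something like the following sketch. The four-word vocabulary and the scores (logits) are invented for illustration: the model rates every candidate word, converts the ratings into probabilities, and emits the most likely continuation.

```python
import numpy as np

vocab = ["dog", "mat", "moon", "sofa"]
# Hypothetical scores a trained model might assign to each candidate
# word after reading "The cat sat on the..."
logits = np.array([1.2, 4.5, 0.3, 3.8])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2%}")

print("prediction:", vocab[int(np.argmax(probs))])  # -> "mat"
```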

A crucial innovation in transformers is the self-attention mechanism, which enables words in a sentence to be considered in relation to one another, resulting in a deeper understanding of the text.
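For the curious, here is a minimal sketch of scaled dot-product self-attention, the core operation from the original transformer paper. The embeddings and weight matrices below are random stand-ins, not values from any real model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of word vectors.

    Each word builds a query, compares it against every word's key, and
    uses the resulting weights to blend all the words' values -- so each
    word's new representation reflects its relationship to the others.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per word
    return weights @ V                              # weighted blend

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # 4 words, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d))  # stand-in word embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): one new vector per word
```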

Built-in randomness and variation ensure that transformer chatbots produce different responses each time. The autocorrect analogy also clarifies how errors can emerge. Fundamentally, ChatGPT and Google Bard cannot distinguish between accurate and inaccurate information; they seek only plausible, natural responses that align with their training data.

Occasionally, a bot will choose a less likely word instead of the most probable one. Overdo this, however, and sentences stop making sense, which is why LLMs continuously self-analyze and self-correct. Responses also depend on the input, allowing users to request simpler or more complex answers.
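In practice, the balance between the safest word and the occasional surprise is commonly governed by a setting called temperature. A hedged sketch, reusing the toy vocabulary and invented scores from earlier:

```python
import numpy as np

def sample_next_word(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a word index: low temperature almost always picks the
    favorite; high temperature gives unlikely words a real chance."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

vocab = ["dog", "mat", "moon", "sofa"]
logits = [1.2, 4.5, 0.3, 3.8]

print(vocab[sample_next_word(logits, temperature=0.2)])  # almost always "mat"
print(vocab[sample_next_word(logits, temperature=1.5)])  # more variety
```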

Generated text can sometimes appear generic or clichéd, as chatbots synthesize responses from vast textual repositories. Consequently, LLM-generated sentences may resemble averages calculated by a spreadsheet, resulting in unremarkable output. For example, asking ChatGPT to imitate a pirate will yield an exaggerated, stereotypical portrayal.

Humans play a role in training LLMs, too: supervisors and end users identify errors, rank responses, and provide high-quality answers for the AI to emulate. This technique, known as reinforcement learning from human feedback (RLHF), enables LLMs to fine-tune their neural networks for better results. Although this technology is still in its early stages, developers are continuously releasing upgrades and improvements.
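Here is a heavily simplified sketch of one piece of RLHF, the reward-model step, in which a human's ranking of two candidate answers becomes a training signal. The reward values are invented for illustration; real systems learn them with a neural network:

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss used to train RLHF reward models:
    small when the human-preferred response scores higher."""
    return -np.log(1 / (1 + np.exp(-(reward_chosen - reward_rejected))))

# Hypothetical scalar rewards for two answers a human labeler has
# ranked, with the first one preferred.
print(preference_loss(2.0, -1.0))   # low loss: model agrees with the human
print(preference_loss(-1.0, 2.0))   # high loss: model disagrees
```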

As LLMs become larger and more intricate, their capabilities advance. GPT-4, the model behind the latest version of ChatGPT, is widely believed to be substantially larger than its predecessor, GPT-3.5, which reportedly has around 175 billion parameters; OpenAI has not disclosed GPT-4's exact size. More parameters generally improve a model's ability to capture relationships between words and generate responses.

Based on how LLMs operate, it’s evident that they excel at replicating the text they’ve been trained on, creating content that appears natural and knowledgeable, though somewhat monotonous. Utilizing their “advanced autocorrect” approach, they’ll accurately provide information most of the time (e.g., “the first president of the United States was…”). However, LLMs can falter when the next most probable word isn’t necessarily the correct one.
