Published on 06/27/2023
Last updated on 06/18/2024

Mastering the Art of Prompting for Large Language Models: Reducing Hallucination and Improving Reasoning 



Language is an integral part of our lives. From our daily conversations to the software applications we use, language shapes our interactions and experiences. Now, imagine a machine that can understand and generate human-like text. This isn't the realm of science fiction anymore, but the fascinating world of Large Language Models (LLMs)! 

Large Language Models, or LLMs, are artificial intelligence models designed to understand and generate human-like text. These models are trained on vast amounts of data and can generate responses to prompts (input) they are given. One of the most prominent examples of an LLM is OpenAI's GPT-3, a model with 175 billion parameters that can write coherent and contextually accurate responses to a wide range of prompts. 


The process of interacting with an LLM is called "prompting." Essentially, a prompt is an instruction or a question that you give to the model. For example, if you ask the model, "What is the capital of France?" the model, having been trained on a broad corpus of text that includes this information, will reply with "Paris." 
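As a sketch of this request-response loop, the snippet below stands in a hard-coded lookup for a real model call; `fake_model` and `ask` are illustrative names, not a real API:

```python
# A minimal sketch of prompting. `fake_model` is a stand-in for a real
# LLM API call (e.g. an HTTP request to a hosted model endpoint); the
# lookup table below is purely illustrative.
def fake_model(prompt: str) -> str:
    """Pretend LLM: answers from a tiny hard-coded knowledge base."""
    knowledge = {"What is the capital of France?": "Paris"}
    return knowledge.get(prompt, "I don't know.")

def ask(prompt: str) -> str:
    # In practice this would send `prompt` to the model and return
    # the generated completion.
    return fake_model(prompt)

print(ask("What is the capital of France?"))  # -> Paris
```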

Challenges in Prompting 

While this may sound simple and straightforward, prompting LLMs can be quite challenging. Here are some common problems that can arise: 

  1. Hallucination: Sometimes, the model "hallucinates" information that wasn't in the prompt or its training data, generating text that seems plausible but is not actually correct or factual. 
  2. Wrong answers: Even though these models are trained on vast amounts of data, they can still give incorrect answers, whether because of an ambiguous prompt or limitations in the model's training data. 
  3. Weak reasoning and arithmetic: Despite their impressive capabilities, LLMs can struggle with complex reasoning or arithmetic tasks. For instance, the model might make errors when asked to solve complicated math problems or reason about multi-step scenarios. 

Understanding these challenges is the first step towards effective prompting. In the following sections, we will delve into specific prompting techniques that can help you get better responses from LLMs. These include chain-of-thought prompting, self-consistency, least-to-most prompting, the Tree of Thoughts method, and Reasoning via Planning. 

Chain-of-Thought Prompting 

Chain-of-thought prompting takes the idea of natural language rationales and combines it with few-shot prompting to improve results on challenging natural language processing tasks. The approach prompts the language model to generate a detailed reasoning process as part of the answer, which is both interpretable and effective in yielding correct responses. It's also been shown to outperform state-of-the-art results from specially designed neural models trained with significantly more annotated examples. However, it's been noted to struggle with tasks that require generalization beyond the complexity of the demonstrated examples. 
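A chain-of-thought prompt can be as simple as prepending a worked example whose answer spells out its reasoning step by step. The sketch below builds such a prompt; the example question and helper names are illustrative, not from any particular library:

```python
# Sketch of constructing a chain-of-thought few-shot prompt. The worked
# example shows its reasoning before stating the answer, which nudges
# the model to do the same for the new question.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example with explicit reasoning, then leave
    the answer slot open for the model to complete."""
    return COT_EXAMPLE + f"Q: {question}\nA:"

prompt = build_cot_prompt("A baker fills 3 trays with 4 rolls each. How many rolls?")
print(prompt)
```

The resulting string would then be sent to the model in place of the bare question.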

Self-Consistency 

Self-consistency is a prompting strategy designed to enhance the performance of chain-of-thought prompting in large language models. Instead of using a naïve greedy decoding approach, self-consistency first samples a diverse set of reasoning paths, then selects the most consistent answer by marginalizing out the sampled reasoning paths. This strategy capitalizes on the idea that complex reasoning problems often have multiple valid lines of thought, all leading to the same correct answer. The implementation of self-consistency has been shown to significantly improve the performance of chain-of-thought prompting across a variety of arithmetic and commonsense reasoning benchmarks, demonstrating its effectiveness in augmenting the problem-solving capabilities of large language models. 
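A minimal sketch of the marginalization step, with hard-coded strings standing in for reasoning paths sampled from the model:

```python
from collections import Counter

# Sketch of self-consistency: sample several reasoning paths (mocked
# below as fixed strings), extract each path's final answer, and keep
# the answer that the most paths agree on.
def final_answer(reasoning_path: str) -> str:
    """Assume each sampled path ends with 'The answer is X.'"""
    return reasoning_path.rsplit("The answer is ", 1)[-1].rstrip(".")

def self_consistency(paths: list[str]) -> str:
    answers = [final_answer(p) for p in paths]
    # Marginalize over the reasoning: majority vote on final answers.
    return Counter(answers).most_common(1)[0][0]

sampled = [
    "2 cans of 3 is 6. 5 + 6 = 11. The answer is 11.",
    "2 * 3 = 6, then 5 + 6 = 11. The answer is 11.",
    "5 + 3 = 8. The answer is 8.",  # a faulty reasoning path
]
print(self_consistency(sampled))  # -> 11
```

Two of the three paths converge on 11, so the outlier produced by the faulty path is voted out.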

Least-to-Most Prompting 

Least-to-most prompting tackles the generalization limitation noted above by reducing a complex problem into a series of simpler subproblems. It consists of two stages: the first stage reduces the problem into a list of subproblems, and the second stage solves these subproblems sequentially, with each answer feeding into the next. By progressively solving these subproblems, the model is able to tackle problems that are more complex than those it has seen in the prompts. This approach has been shown to outperform chain-of-thought prompting on tasks that require symbolic manipulation, compositional generalization, and math reasoning. 
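The two stages can be sketched as follows; the decomposition and the subproblem answers are hard-coded stand-ins for what a real model would generate at each step:

```python
# Sketch of least-to-most prompting with a mock model. Stage 1 splits
# the problem into subproblems; stage 2 solves them in order, feeding
# each solved subproblem back into the context for the next one.
def decompose(problem: str) -> list[str]:
    """Stand-in for the decomposition prompt; a real system would ask
    the LLM to produce this list."""
    return [
        "How many balls are in 2 cans of 3?",
        "How many balls in total if Roger already has 5?",
    ]

def solve_subproblem(question: str, context: list[str]) -> str:
    """Stand-in for the solving prompt; answers here are hard-coded.
    A real system would send `context` + `question` to the LLM."""
    answers = {
        "How many balls are in 2 cans of 3?": "2 * 3 = 6",
        "How many balls in total if Roger already has 5?": "5 + 6 = 11",
    }
    return answers[question]

def least_to_most(problem: str) -> str:
    context: list[str] = []
    for sub in decompose(problem):
        # Each answer is appended so later subproblems can build on it.
        context.append(f"{sub} -> {solve_subproblem(sub, context)}")
    return context[-1].rsplit("= ", 1)[-1]

print(least_to_most("Roger has 5 balls and buys 2 cans of 3. Total?"))  # -> 11
```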

Tree of Thoughts 

The Tree of Thoughts (ToT) is a sophisticated framework developed by researchers at Princeton University and Google DeepMind to enhance the problem-solving capabilities of large language models (LLMs). Unlike traditional LLMs that follow a linear decision-making process, ToT allows these models to explore coherent units of text, or "thoughts", as intermediate steps towards problem-solving. This approach enables the LLM to consider multiple reasoning paths, evaluate potential outcomes, and choose the most promising path. It also provides the flexibility to anticipate future decisions and revise previous ones if a more promising path is identified. 

This dynamic decision-making process is analogous to traversing a tree structure, where each branch represents a potential reasoning path. The ability to explore different "thoughts" or branches significantly improves the LLM's problem-solving abilities, especially in tasks requiring non-trivial planning or search. This advancement in artificial intelligence and natural language processing opens up new possibilities for applying LLMs in complex problem-solving tasks. 
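A toy version of the search: `propose` and `score` are mock stand-ins for the LLM calls that generate candidate thoughts and evaluate partial solutions, and the "thoughts" here are arithmetic moves rather than text:

```python
# Sketch of a breadth-first Tree of Thoughts search with beam pruning.
# In a real system, `propose` and `score` would each be an LLM call.
def propose(path: list[str]) -> list[str]:
    """Generate candidate next thoughts (mocked as fixed options)."""
    return ["add 3", "subtract 1", "double"]

def score(path: list[str]) -> int:
    """Rate a partial path; here: the value reached starting from 1."""
    value = 1
    for step in path:
        if step == "add 3":
            value += 3
        elif step == "subtract 1":
            value -= 1
        elif step == "double":
            value *= 2
    return value

def tree_of_thoughts(depth: int, beam: int = 2) -> list[str]:
    frontier = [[]]  # start with an empty reasoning path
    for _ in range(depth):
        # Expand every kept path with every candidate next thought...
        candidates = [path + [t] for path in frontier for t in propose(path)]
        # ...then keep only the `beam` most promising branches.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

best = tree_of_thoughts(depth=3)
print(best, score(best))  # -> ['add 3', 'double', 'double'] 16
```

Pruning to a beam at each level is what lets the model abandon unpromising branches early instead of committing to a single linear chain.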

Reasoning via Planning 

Reasoning via Planning (RAP) is an innovative framework proposed to bolster the reasoning capabilities of large language models (LLMs). The key insight behind RAP is the repurposing of LLMs to serve as both a world model and a reasoning agent. This dual role allows the LLM to better understand and navigate the vast reasoning space it operates within. A crucial component of RAP is the integration of a principled planning algorithm, which enables strategic exploration and decision-making within this reasoning space. 

The potential of RAP is particularly exciting in the context of Reinforcement Learning (RL). In RL, an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties. The ability of RAP to perform strategic exploration and deliberate planning could significantly enhance the decision-making capabilities of RL agents. This could lead to more effective learning and improved performance in complex tasks, opening up new possibilities for the application of RL in diverse fields. 
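A toy sketch of the world-model-plus-planner split: one function predicts the next state for an action, another scores states, and exhaustive lookahead stands in for a principled planning algorithm such as Monte Carlo tree search. All names, actions, and dynamics here are illustrative:

```python
from itertools import product

# Sketch of Reasoning via Planning: the same "model" plays two roles,
# a world model that predicts the state an action leads to, and a
# reward function that scores states against a goal. The planner
# simulates action sequences with the world model before acting.
ACTIONS = ["+2", "*3"]

def world_model(state: int, action: str) -> int:
    """Mock world model: predicts the state reached by an action."""
    return state + 2 if action == "+2" else state * 3

def reward(state: int, goal: int) -> int:
    """Higher reward the closer the state is to the goal."""
    return -abs(goal - state)

def plan(start: int, goal: int, horizon: int = 3) -> tuple[str, ...]:
    """Exhaustively search action sequences up to `horizon`, simulate
    each with the world model, and return the best-scoring plan."""
    best_plan, best_reward = (), reward(start, goal)
    for seq in product(ACTIONS, repeat=horizon):
        state = start
        for action in seq:
            state = world_model(state, action)
        if reward(state, goal) > best_reward:
            best_plan, best_reward = seq, reward(state, goal)
    return best_plan

print(plan(start=1, goal=11))  # 1 +2 -> 3, *3 -> 9, +2 -> 11
```

The point of the split is that the agent never acts blindly: every candidate sequence is first "imagined" through the world model, mirroring how an RL agent with a learned model can plan before interacting with the real environment.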

By breaking complex problems down systematically and making outputs more consistent, these techniques have set a new standard in the field of AI, making language models more reliable and robust. As we continue to innovate, we can look forward to further advancements that will push the boundaries of what these intelligent systems can achieve. 


