Conquering LLM Hallucinations with Tree of Thought

A few weeks ago I shared a YouTube video called “SmartGPT” with you. It demonstrated a creative approach to making AI models smarter: linking several of them together to boost their ability to reason and solve problems. Today I want to talk about a research paper from Princeton University and Google DeepMind that delves deeper into this idea. The paper, “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” provides an academic take on the same topic.

The authors propose a way to make AI models better at problem-solving: let the model explore several ways of solving a problem and reason about which approach is best. To make this easier to picture, imagine you’re at the base of a huge tree, trying to find the best leaf on it. You climb the trunk and explore different branches until you find the perfect leaf. Each step of that journey is like asking the AI model a question: at every step the model proposes a few candidate next steps (“thoughts”), evaluates how promising each one looks, and backs up to try another branch when a path turns into a dead end. Used this way, the model can produce better answers than it would from a single, direct question.
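To make the tree-climbing analogy concrete, here is a minimal sketch of the general pattern in Python: one prompt proposes candidate “thoughts,” a second prompt scores them, and only the most promising branches are kept at each level. The `llm` callable is a hypothetical placeholder for whatever model API you use, and the prompts, scoring scheme, and parameters are illustrative assumptions rather than the paper’s exact implementation.

```python
from typing import Callable, List, Tuple

# Hypothetical stand-in for a call to a language model.
# In practice this would wrap an API call; here it is just a type alias.
LLMFn = Callable[[str], str]

def propose_thoughts(llm: LLMFn, problem: str, partial: str, k: int = 3) -> List[str]:
    """Ask the model for k candidate next steps ("thoughts") toward a solution."""
    prompt = (
        f"Problem: {problem}\n"
        f"Progress so far: {partial or '(none)'}\n"
        "Suggest one promising next step."
    )
    return [llm(prompt) for _ in range(k)]

def score_thought(llm: LLMFn, problem: str, partial: str, thought: str) -> float:
    """Ask the model to rate how promising a candidate step is (0-10)."""
    prompt = (
        f"Problem: {problem}\n"
        f"Progress so far: {partial or '(none)'}\n"
        f"Proposed next step: {thought}\n"
        "Rate how promising this step is from 0 to 10. Reply with a number only."
    )
    try:
        return float(llm(prompt).strip())
    except ValueError:
        return 0.0  # treat unparsable ratings as "not promising"

def tree_of_thought(llm: LLMFn, problem: str, depth: int = 3, k: int = 3, beam: int = 2) -> str:
    """Search over chains of thoughts, keeping only the best `beam` branches per level."""
    frontier: List[Tuple[float, str]] = [(0.0, "")]  # (score, partial solution)
    for _ in range(depth):
        candidates: List[Tuple[float, str]] = []
        for _, partial in frontier:
            for thought in propose_thoughts(llm, problem, partial, k):
                score = score_thought(llm, problem, partial, thought)
                candidates.append((score, f"{partial}\n{thought}".strip()))
        # Keep the most promising branches; the rest of the tree is abandoned.
        frontier = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam]
    return frontier[0][1]
```

In practice you would plug in your own `llm` function and tune the prompts, depth, and branching for the task at hand; the point is simply that the model is asked many small, checkable questions instead of one big one.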

What’s really exciting about this approach is that it could help with a big problem in AI models: they sometimes produce answers that don’t make sense (we call these “hallucinations”). The idea is to build a system of AI models in which one model checks the others to make sure their answers hold up. The paper doesn’t go into detail on this, but it does describe having an AI model act as a critic, which is close to the idea of a hallucination checker. It’s also similar to what we saw with the SmartGPT approach, where linking AI models together helped solve some common issues.
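The paper doesn’t spell out a hallucination checker, but the general critic pattern might look something like the sketch below: one prompt produces an answer, a second prompt judges it, and the answer is only accepted once the critic signs off. Everything here, the prompts, the YES/NO check, and the retry loop, is an assumption for illustration, not a method from the paper.

```python
from typing import Callable

# Hypothetical placeholder for a language model call, as in the earlier sketch.
LLMFn = Callable[[str], str]

def critic_check(llm: LLMFn, question: str, answer: str) -> bool:
    """Use a second LLM call as a critic: does the answer actually address the question?"""
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Does this answer directly and correctly address the question, "
        "without inventing unsupported facts? Reply YES or NO."
    )
    return llm(prompt).strip().upper().startswith("YES")

def answer_with_check(llm: LLMFn, question: str, max_tries: int = 3) -> str:
    """Generate an answer, then regenerate until the critic accepts it (or we give up)."""
    answer = ""
    for _ in range(max_tries):
        answer = llm(f"Question: {question}\nAnswer concisely.")
        if critic_check(llm, question, answer):
            return answer
    return answer  # fall back to the last attempt if nothing passes the critic
```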

So, what does this all mean for you? It means we might be overreacting a bit when it comes to worrying about hallucinations. We may already have ways to deal with most of these issues, such as fine-tuning an LLM to be a “hallucination checker.” But building a system like this isn’t free: you have to ask the model many more questions to get the same answer, which can be a real limitation with models like GPT-4, as I’ve discussed previously.
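To put a rough number on that overhead, here is the call count for the toy parameters in the earlier sketch. The parameters are assumptions for illustration, not figures from the paper.

```python
# Rough, illustrative call count for the tree-of-thought sketch above.
depth, beam, k = 3, 2, 3                                   # search depth, branches kept, proposals per branch
branches_per_level = [1] + [beam] * (depth - 1)            # the search starts from a single empty branch
total_calls = sum(b * k * 2 for b in branches_per_level)   # one "propose" + one "score" call per thought
print(total_calls)                                         # 30 model calls for one question, versus 1 for a direct prompt
```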

The world of AI is moving fast. New breakthroughs are happening all the time, and it can be hard to keep up. But don’t worry, I’ll do my best to help you stay up-to-date with the most important updates for your business strategies.

Subscribe to our YouTube channel where we post daily videos on the ever-evolving world of AI and large language models.

ABOUT PROLEGO
Prolego is an elite consulting team of AI engineers, strategists, and creative professionals guiding the world’s largest companies through the AI transformation. Founded in 2017 by technology veterans Kevin Dewalt and Russ Rands, Prolego has helped dozens of Fortune 1000 companies develop AI strategies, transform their workforce, and build state-of-the-art AI solutions.

Let’s Future Proof Your Business.