Finally, a search engine better than Google

In the video above, computer scientist and AI researcher Lex Fridman interviews Aravind Srinivas, CEO of Perplexity, an AI-based “answer engine.” Unlike typical search engines, which require you to sort through pages of results to find the information you need, Perplexity provides real-time answers to queries.

One of the pitfalls of current AI technologies like ChatGPT is that they sometimes hallucinate or fabricate information. To reduce that risk, you can provide links to sources and ask the model to verify its claims against them. Perplexity, however, has tackled this issue from the beginning, and while you may still see occasional hallucinations, its answers are grounded in cited evidence.

“(Perplexity) aims to revolutionize the way humans get answers to questions on the Internet by combining search and large language models (LLMs) to produce answers in which every part is backed by a citation to a human-created source on the web,” Fridman said. “This significantly reduces LLM hallucination and makes it much easier and more reliable to use for research and my usual late-night rabbit-hole explorations out of curiosity.”1

Part search engine, part question-answering platform

Fridman describes Perplexity as part search engine (a software system designed to retrieve information from the Internet) and part LLM. An LLM is a type of artificial intelligence system trained on large amounts of text data to understand and generate human-like text. LLMs can perform a variety of language-related tasks, such as answering questions, generating content, and translating between languages.

Unlike standard search engines that provide links, Perplexity tries to answer queries directly, Srinivas explains:2

“Perplexity is best described as an answer engine. You ask a question and you get an answer. The only difference is that all the answers are backed by sources. This is similar to how an academic writes a paper. Now the reference part, the sourcing part, is where the search engine comes in. It uses traditional search to pull up results relevant to the user's query, reads the links, extracts the relevant paragraphs, and feeds them into an LLM…

The LLM takes the relevant paragraphs, looks at the query, and produces a well-structured answer with appropriate footnotes for each sentence, because it has been instructed to do just that: given this set of links and paragraphs, write a concise answer for the user with the appropriate citations.

The magic is that all of this works together in one coordinated product. That’s why we created Perplexity.”
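The workflow Srinivas describes (search, extract, then generate with citations) can be sketched in a few lines of Python. The helper functions below, web_search, extract_relevant_paragraphs, and call_llm, are hypothetical placeholders rather than Perplexity's actual internals; the sketch only illustrates the shape of the pipeline.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    paragraph: str

def web_search(query: str) -> list[str]:
    """Hypothetical: return URLs of the top search results for the query."""
    raise NotImplementedError("plug in a real search API here")

def extract_relevant_paragraphs(url: str, query: str) -> list[str]:
    """Hypothetical: fetch the page and keep only paragraphs relevant to the query."""
    raise NotImplementedError("plug in a real fetch-and-rank step here")

def call_llm(prompt: str) -> str:
    """Hypothetical: call any instruction-following LLM and return its text output."""
    raise NotImplementedError("plug in a real LLM client here")

def answer_with_citations(query: str) -> str:
    # 1. Search: reuse existing web search to find candidate links.
    sources: list[Source] = []
    for url in web_search(query):
        for para in extract_relevant_paragraphs(url, query):
            sources.append(Source(url=url, paragraph=para))

    # 2. Number the sources like footnotes in an academic paper and build
    #    a prompt that instructs the model to cite each sentence.
    context = "\n".join(
        f"[{i + 1}] ({s.url}) {s.paragraph}" for i, s in enumerate(sources)
    )
    prompt = (
        "Using ONLY the numbered sources below, write a concise answer to the "
        "question. Add a footnote marker like [1] after every sentence.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

    # 3. Generate: the LLM reads the paragraphs and writes a cited answer.
    return call_llm(prompt)
```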

Srinivas, who previously worked as an AI researcher at DeepMind, Google, and OpenAI, says he sees Perplexity as a discovery engine that fuels curiosity.3

“The journey doesn't end when you get an answer. In my opinion, the journey begins after you get an answer. You'll see related questions at the bottom, and suggested questions you can ask. Why? Because the answer wasn't enough, or maybe the answer was good enough, but you want to dig deeper and ask more questions. So, in the search box, it says 'where knowledge begins,' because knowledge has no end. It can only expand and grow.”

Breakthrough Advancement in AI

Understand that Perplexity is not perfect and still exhibits some bias, but it significantly outperforms Google on almost every type of search query, especially when it comes to COVID-19 information. Perplexity’s AI-based technology provides more accurate, comprehensive, and nuanced results, making it a better choice for general searches. Its advanced algorithms ensure that users receive the most relevant and insightful information, setting it apart from traditional search engines.

Srinivas explains several ways Perplexity is embracing cutting-edge advances in machine learning and general innovation, including retrieval-augmented generation (RAG), an advanced technique in natural language processing (NLP) that combines the capabilities of LLMs with information retrieval systems to generate more accurate and contextually relevant responses.

This approach is particularly useful for tasks that require accurate and up-to-date information, such as question answering, summarization, and conversational systems. In short, RAG covers the retrieval side of a query, while Perplexity goes a step further. Srinivas says:4

“The principle of Perplexity is that if you don't search, you shouldn't say anything. This is much more powerful than RAG, because RAG just says, 'Okay, use this additional context and write your answer.' But we say, 'Don't use anything else.' That way, we ensure factual support. If the documents you retrieved don't have enough information, we can just say, 'There aren't enough search results to give you a good answer.'”
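In code, the distinction Srinivas draws between plain RAG and this stricter grounding rule might look roughly like the sketch below. The retrieve and generate callables are assumptions standing in for a real search backend and LLM client, not anything Perplexity has published.

```python
def grounded_answer(query: str, retrieve, generate, min_sources: int = 2) -> str:
    """Retrieval-augmented generation with a hard grounding rule: if retrieval
    comes back empty or too thin, refuse to answer rather than letting the
    model fall back on whatever it memorized during training."""
    passages = retrieve(query)          # retrieve: query -> list of source paragraphs

    # Plain RAG would generate regardless; here we enforce the stricter rule
    # "if you don't search (successfully), you shouldn't say anything."
    if len(passages) < min_sources:
        return "There aren't enough search results to give a well-sourced answer."

    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "Do not use any outside knowledge, and cite the context for every claim.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)             # generate: prompt -> model output text
```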

Chain-of-thought reasoning has also been used to take NLP tasks to the next level in terms of performance. In AI, chain-of-thought reasoning refers to the ability of a language model to generate a logical, step-by-step explanation or sequence of thoughts that leads to a conclusion or answer. This approach improves the model’s performance on complex reasoning tasks by encouraging it to articulate intermediate steps in the reasoning process. As Srinivas explains:5

“The chain of thought is a very simple idea. Instead of simply training a model to follow instructions and complete them, what if we could force the model to go through the inference steps and come up with an explanation and arrive at an answer?

It's almost like an intermediate step before arriving at a final answer, and by forcing the model to go through a reasoning path, you ensure that it doesn't overfit to extraneous patterns and that it can answer new questions it hasn't seen before.”
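In practice, chain-of-thought prompting often amounts to little more than asking the model to spell out its intermediate steps before committing to an answer. A minimal sketch, assuming a generic call_llm client like the placeholder in the earlier pipeline sketch:

```python
def chain_of_thought_prompt(question: str) -> str:
    # The only change from a plain prompt is the explicit instruction to
    # reason step by step before committing to a final answer.
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, writing out each intermediate "
        "inference, and only then state the final answer on a line that starts "
        "with 'Answer:'."
    )

# Usage (assumes some call_llm(prompt) -> str client exists):
# reply = call_llm(chain_of_thought_prompt(
#     "A train leaves at 3:40 pm and the trip takes 95 minutes. When does it arrive?"))
# final_answer = reply.split("Answer:")[-1].strip()
```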

The beginning of a real reasoning revolution

It is not yet known whether AI can perform high-level reasoning that is fundamentally similar to human cognitive processes. Getting there, however, will partly require applying more inference compute, which in AI refers to the computational resources and processes involved in running a trained model to make predictions or decisions on new data.

This step is distinct from the training step, which builds and optimizes the model. More precisely, inference is the process by which an AI model applies learned patterns to new data to produce predictions, classifications, or other outputs, for example, using AI to classify images or predict stock prices.
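The split between the two steps can be made concrete with a small example. The snippet below uses scikit-learn purely as a convenient illustration (it is not a tool the article discusses): fit is the training step that builds the model, and predict is the inference step that applies it to new data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training step: build and optimize the model from labeled historical data.
X_train = np.array([[0.10], [0.35], [0.40], [0.75], [0.80], [0.90]])
y_train = np.array([0, 0, 0, 1, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

# Inference step: apply the learned patterns to new, unseen data.
# This is the part that "inference compute" pays for once a model is deployed.
X_new = np.array([[0.20], [0.85]])
print(model.predict(X_new))        # predicted classes, e.g. [0 1]
print(model.predict_proba(X_new))  # class probabilities for each new point
```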

Meanwhile, the computational aspect refers to the computing power required to perform inference. This includes hardware, software frameworks, and algorithms optimized for efficient computation. Srinivas says:6

“Can you have a conversation with an AI that feels like you've talked to Einstein or Feynman? You ask them a difficult question, and they say, 'I don't know.' And then a week later, they've done a lot of research and… they come back and it's just crazy.

I think if we can get to that level of inference compute, where applying more inference compute gives us dramatically better answers, that would be the start of a real reasoning revolution… It's possible. We haven't cracked it, but there's nothing to say we won't.”

Curiosity is the key factor that separates humans from AI

Part of cracking that code is teaching AI how to mimic natural human curiosity. “But what makes humans special is our curiosity,” explains Srinivas. “Even if AI cracks that, it’s still us asking them to explore things. And one of the things that AI hasn’t cracked yet is how to be naturally curious and come up with interesting questions to understand the world, and dig deeper into the world.”7

In addition, there is much controversy and fear surrounding artificial general intelligence (AGI), a type of AI that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence.

While Srinivas says we shouldn’t worry about “AI going rogue and taking over the world,” there is the question of who controls the compute that AGI runs on. “Access to the weights of the model is less important. Access to compute matters more, and the world's compute is becoming concentrated in the hands of fewer and fewer individuals, because not everyone can afford enough compute to answer the hardest questions.”

Srinivas says one sign of AI's increasing intelligence is its ability to create new knowledge, to provide true answers to questions we don't know the answers to, and to help us understand why they're true.

“Could we create an AI that, like Galileo or Copernicus, would question our current understanding and come up with new positions that might be contradictory and misunderstood, but that might end up being true? … And the answer would be so shocking that you wouldn't expect it at all.”8

What does the future hold for search and AI?

We’re already seeing AI tools like Perplexity that are exponentially better than traditional search engines, but the goal for the future, says Srinivas, is not to build better search tools, but to build platforms for knowledge.9

“Even before the Internet, when you zoom out, it was always about the transfer of knowledge. It's bigger than search… So we imagine a future where the entry point for a question isn't just a search box. It might be reading a page, or hearing a page read aloud, and then becoming curious about one element of it and asking a follow-up question about it.

So I think it's really important to understand that our mission is not to change search. Our mission is to make people smarter and to impart knowledge. And the way to do that is to start anywhere. You can start by reading a page. You can start by listening to an article… It's just a journey. It never ends.”

Keep in mind that Perplexity and other AI tools are not meant to replace your critical thinking, but to enhance it. AI is meant to assist, not replace, your intellectual and creative abilities.

Precautions should still be taken, such as not sharing personal or confidential information. But the idea is to augment, not replace, human effort, allowing individuals to focus on the aspects of their work that require uniquely human attributes, such as empathy, strategic thinking, creativity, and curiosity, Srinivas explains.10

“So I think curiosity is what makes humans special, and we want to live up to that. That's the mission of the company, and we're leveraging the power of AI and all of these frontier models to deliver on that. And I believe that even as we have more capable cutting-edge AI, human curiosity is not going anywhere, and we're going to create a world where it makes humans even more special.

With all the additional power, they will become more powerful, more curious, and more knowledgeable in their pursuit of truth, which will lead to the beginning of infinity.”