
Campbell Brown has devoted her career to tracking accurate information, first as a renowned TV journalist and then as Facebook’s first and only dedicated news executive. Now, as she watches AI change the way people consume information, she sees history threatening to repeat itself. This time, she isn’t waiting for someone else to solve the problem.
Her company, Forum AI, which she discussed with TechCrunch’s Tim Fernholz at a recent StrictlyVC evening in San Francisco, evaluates how foundation models perform on what she calls “high-stakes topics” like geopolitics, mental health, finance, and employment: topics that are “dark, nuanced, and complex without a clear yes or no answer.”
The idea is to find the world’s best experts, have them design benchmarks, and then train AI judges to evaluate models at scale. For Forum AI’s geopolitics work, Brown brought in Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is for the AI judges to reach roughly 90% agreement with the human experts, a threshold Forum AI has been able to hit, she says.
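The agreement threshold described above can be sketched in a few lines. This is a hypothetical illustration, not Forum AI’s actual pipeline: it simply compares an AI judge’s verdicts against human expert labels on the same set of model answers and checks whether agreement clears 90%.

```python
# Hypothetical sketch: measure how often an AI judge agrees with
# human experts on the same graded items. Names and data are
# illustrative only.

def agreement_rate(judge_verdicts, expert_verdicts):
    """Fraction of items where the AI judge matches the human expert."""
    assert len(judge_verdicts) == len(expert_verdicts)
    matches = sum(j == e for j, e in zip(judge_verdicts, expert_verdicts))
    return matches / len(judge_verdicts)

# Toy example: 10 model answers scored "pass"/"fail" by each side.
judge  = ["pass", "fail", "pass", "pass", "fail",
          "pass", "pass", "pass", "fail", "pass"]
expert = ["pass", "fail", "pass", "pass", "fail",
          "pass", "fail", "pass", "fail", "pass"]

rate = agreement_rate(judge, expert)
print(f"agreement: {rate:.0%}")            # 9 of 10 match -> 90%
print("meets threshold:", rate >= 0.90)
```

In practice, raw percent agreement is only a starting point; evaluation work of this kind typically also accounts for chance agreement (e.g. Cohen’s kappa) and for how verdicts were rubric-defined by the experts.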
Brown traces the origins of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released, and I remember immediately realizing that this was going to be the conduit through which all information would flow, and it wasn’t very good.” The impact on her own children made the moment feel almost existential. “If we don’t find a way to solve this problem, our children will be really stupid,” she recalled thinking.
What frustrated her most was that accuracy didn’t seem to be anyone’s priority. She said foundation model companies are “extremely focused on coding and math, while news and information are more difficult.” But harder doesn’t make it optional, she argued.
In fact, when Forum AI began evaluating the leading models, the results were not encouraging. She noted that Gemini pulls “articles that have nothing to do with China” from Chinese Communist Party websites, and she pointed to a left-leaning political bias in almost all of the models. She said there were many subtle failures, including missing context, missing perspectives, and unsupported claims presented without acknowledgment. “There is a long way to go,” she said. “But I think there are also very easy fixes that can significantly improve the results.”
Brown spent years at Facebook watching what happens when a platform optimizes for the wrong things. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she created no longer exists. The lesson, even if social media companies ignored it, is that optimizing for engagement is bad for society and leaves many people underinformed.
Her hope is that AI can break that cycle. “It could go either way now,” she said. Companies can either give users what they want, or they can “give people something real, honest and authentic.” She acknowledged that idealistic visions of truth-optimizing AI may sound naive. But she thinks corporations could prove an unlikely ally here: companies using AI for credit decisions, lending, insurance and recruiting are worried about accountability and “will want to optimize to get the right results.”
Those corporate needs are also where Forum AI is staking its business. But translating compliance concerns into consistent revenue remains difficult, especially while much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.
She called the compliance environment “a joke.” When New York City passed its first hiring-bias law requiring AI audits, she said, auditors found that more than half of the violations went undetected. Real-world evaluation, she argued, requires domain expertise to cover not just known scenarios but edge cases, “where people can run into problems they haven’t thought about.” And that takes time. “Smart generalists won’t cut it.”
Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is well positioned to explain the disconnect between the AI industry’s self-image and most users’ reality. “You hear from leaders of big tech companies things like, ‘This technology will change the world,’ ‘It will put you out of a job,’ ‘It will cure cancer,’” she said. “But the average person using a chatbot to ask basic questions still gets a lot of random and incorrect answers.”
Trust in AI is low, and she believes the skepticism is often justified. “In Silicon Valley, the conversation is centered around one topic, but among consumers, it’s a completely different conversation.”
