Oprah aired a special on AI featuring Sam Altman, Bill Gates, and others. Here are the highlights.

Late Thursday night, Oprah Winfrey aired a special on AI, appropriately titled “AI and Our Future.” Guests included OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and current FBI Director Christopher Wray.

The prevailing mood was one of skepticism and caution.

In her opening remarks, Oprah said the AI genie is out of the bottle, for better or worse, and humanity will have to learn to live with the consequences.

“AI is still beyond our control, and to a large extent… beyond our understanding,” she said. “But AI is here, and we will live with technology that can be both our allies and our competitors… We are the most adaptable creatures on this planet. We will adapt. But keep an eye on what’s real. The stakes have never been higher.”

Sam Altman overpromises

In his first Oprah interview, Altman made the dubious claim that today's AI learns concepts from the data it is trained on.

“We show the system a thousand words in sequence and ask it to predict what’s going to come next,” he told Oprah. “The system learns to predict, and it learns basic concepts from that.”

Many experts would dispute this characterization.

AI systems like ChatGPT and o1, the model OpenAI introduced Thursday, do indeed predict the most likely next word in a sentence. But they are statistical machines: they learn patterns in data and make informed guesses, with no intentionality behind them.

Altman may have overstated the capabilities of today's AI systems, but he stressed the importance of figuring out how to safety-test those same systems.

“One of the first things we need to do is figure out how to do safety testing for these systems, just like the government does for airplanes or new drugs,” he said. “I personally talk to government officials every few days.”

Altman’s push for regulation may be self-serving, though. OpenAI opposes California’s AI safety bill, known as SB 1047, saying it would “hinder innovation.” But former OpenAI employees and AI experts like Geoffrey Hinton support the bill, arguing that it would impose necessary safeguards on AI development.

Oprah also asked Altman about his role as head of OpenAI. She asked why people should trust him, and he largely dodged the question, saying his company tries to build trust over time.

Altman has previously said quite directly that people shouldn't trust him or anyone else to ensure that AI works for the good of the world.

The OpenAI CEO later said it was jarring to hear Oprah ask whether he was, as one headline put it, “the most powerful and dangerous man in the world.” He disagreed with the label, but said he felt a responsibility to push AI in a positive direction for humanity.

Oprah on Deepfakes

Deepfakes were, predictably, a major topic of the special.

To demonstrate how convincing synthetic media has become, Brownlee compared sample footage from Sora, OpenAI’s AI-powered video generator, to AI-generated footage from just a few months earlier. The Sora samples were far more polished, illustrating the field’s rapid progress.

“Even now when I watch parts of it, I can tell something is wrong,” Brownlee said of the Sora footage. Oprah said it seemed real to her.

The deepfake demonstration was followed by an interview with Wray, who recalled the moment he was first introduced to AI deepfake technology.

“I was in a conference room, and the (FBI) guys were there showing me how to make an AI-enhanced deepfake,” Wray said. “And they made a video of me saying things I’ve never said before and never will say again.”

Wray pointed to the growing prevalence of AI-assisted sextortion. According to cybersecurity firm ESET, sextortion cases rose 178% between 2022 and 2023, driven in part by AI technology.

“Someone is targeting a teenager, posing as their peer, and using (AI-generated) compromising images to convince the child to send them the real images,” Wray said. “In fact, it's a guy behind a keyboard in Nigeria, and he threatens to blackmail the child once he gets the images, and that if they don't pay up, he'll share the images that will ruin their life.”

Wray also addressed the misinformation surrounding the upcoming US presidential election. He argued that this is “not the time to panic,” and stressed that “everyone in America” has a responsibility to “intensify their focus and attention” on the use of AI and how it could be “used by bad guys against all of us.”

“All too often, we find on social media that someone who looks like Bill from Topeka or Mary from Dayton is actually a Russian or Chinese intelligence agent based out of Beijing or Moscow,” Wray said.

In fact, a Statista poll found that more than a third of U.S. respondents said they had seen misleading information (or suspected misleading information) on a major topic by the end of 2023. This year, AI-generated misleading images of Vice President Kamala Harris and former President Donald Trump were viewed millions of times on social networks, including Facebook.

Bill Gates on AI Innovation

As part of the special's more optimistic turn, Oprah interviewed Microsoft co-founder Bill Gates, who expressed hope that AI will bring about profound changes in education and medicine.

“AI is like a third person sitting at the (medical appointment) and doing the transcription and suggesting the prescription,” Gates said. “So instead of the doctor looking at a computer screen, they’re interacting with the patient and the software is making sure there’s a really good transcription.”

But Gates glossed over the potential for bias arising from flawed training data.

A recent study found that a leading tech company’s speech recognition system was twice as likely to mistranscribe audio from a Black speaker as from a White speaker. Another study found that AI systems reinforce long-held false beliefs about biological differences between Black and White people, a lie that leads clinicians to misdiagnose health problems.

Gates said AI in the classroom is “always available” and “can understand how to motivate you, regardless of your level of knowledge.”

That isn't the reality in many classrooms today.

Last summer, schools and universities rushed to ban ChatGPT over concerns about plagiarism and misinformation. Some have since reversed their bans. But not everyone is convinced of GenAI’s potential for good, pointing to findings such as a survey from the UK’s Safer Internet Centre, which found that more than half of children (53%) had seen people their age use GenAI in a negative way, such as creating believable misinformation or images intended to upset someone.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) late last year urged governments to regulate the use of GenAI in education, including by enforcing age restrictions on users and regulations on data protection and user privacy.