Sam Altman, the CEO Behind ChatGPT, Doesn’t Trust the AI He Built—Should You?

2 min read
June 24, 2025

In the very first episode of OpenAI’s new podcast, CEO Sam Altman said something that should’ve made bigger headlines:

“People have a very high degree of trust in ChatGPT… which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much.”

It wasn’t a glitch. It wasn’t a throwaway. It was Altman straight-up acknowledging what most people using ChatGPT every day either don’t realize—or choose to ignore.

And that’s the paradox at the heart of this AI moment: it’s a tool that admits it lies, while still being treated like a digital oracle.

We Trust ChatGPT Because It Talks Like Us

ChatGPT and other large language models are designed to sound human. They’re fluent, confident, and helpful—until they’re confidently wrong.


Altman himself admitted to using it for parenting advice during his son’s early months. And millions of users are doing the same for health questions, school essays, even legal guidance. Despite public warnings and disclaimers, the product’s design screams “trust me.”

It remembers things. It knows your tone. It helps you brainstorm. And then it tells you that the Battle of Hogwarts happened in 1812.

See also: Is ChatGPT Conscious? Gen Z Thinks So—and They’re Kind of Serious About It

The AI Hallucination Problem (Still Not Fixed)

OpenAI has made progress, sure. The latest version, GPT-4o, is one of the most accurate models out there—with an 88.7% score on the MMLU benchmark.

But hallucinations still happen. One study found that GPT-3.5 hallucinated 39.6% of the references it produced in academic writing, and GPT-4 hallucinated 28.6%. That's still nearly one in three fabricated citations.


Altman’s remarks weren’t a PR slip—they were a reality check. And that reality is especially dangerous in high-stakes fields like:

  • Healthcare – It can explain symptoms, but it can’t diagnose with nuance.

  • Law – It’s cited fake cases in court.

  • Education – It generates essays students turn in without checking.

  • Parenting – Yes, even the baby advice might be a hallucination.

See also: Did Trump Use ChatGPT to Write His Tariff Plan? The Math Sure Looks Familiar

And Yet… We Keep Using It

The real twist? Altman isn’t telling people to stop using ChatGPT—just to use it with “awareness.” But that’s like selling self-driving cars that swerve off cliffs and saying, “Just keep your eyes open.”

We live in a system where convenience always wins. And ChatGPT is very convenient. It’s fast, it’s friendly, and it’s available 24/7.

Altman knows this. And he knows that even if trust shouldn’t be high, it already is.


See also: (VIDEO) The Family That Brought Their Brother Back to Life with AI to Deliver a Message in Court

The Real Question: Why Do We Trust It More Than People?

Maybe it’s not just about hallucinations. Maybe it’s about the fantasy that tech is smarter, safer, or more objective than people.

But hallucinating machines are not objective. They just sound like they are. And as Altman and OpenAI roll out features like memory, advertising, and long-term personalization, the risk isn’t just bad information—it’s psychological dependency on something that looks like truth, but isn’t built to care whether it’s lying.
