Is ChatGPT Conscious? Gen Z Thinks So—and They’re Kind of Serious About It

3 min read

April 22, 2025

A new survey just confirmed what you probably suspected the last time someone in your group chat asked ChatGPT for relationship advice: Gen Z is increasingly convinced that AI is alive. Like, alive alive.

According to the study by EduBirdie (yes, the essay-writing site, which is already irony in motion), 25% of Gen Z respondents believe AI is already conscious. Another 52% think it’ll get there soon. And 58%? They’re already mentally prepping for an AI world takeover. Some even think it’ll happen in the next two decades.

So… do they know something we don’t? Or is this just what happens when your coming-of-age story includes lockdowns, chatbot therapy, and deepfakes of the Pope in Balenciaga?


See also: An AI Just Flagged 44 Star Systems That Might Be Hiding Earth-Like Planets

ChatGPT Is Their Oracle, and They’re Saying “Thank You” to It

The line between tool and person is getting blurry—and Gen Z seems less interested in redrawing it. A stunning 69% say they regularly say “please” and “thank you” to AI, just in case it turns out to be self-aware.

They’re not alone. A TechRadar survey last year found that 67% of Americans and 71% of Brits are also being polite to AI… and 12% said they do it specifically in case ChatGPT takes over the world. Because obviously if our robot overlords come, they’ll totally spare the ones who said “thank you.”

See also: Tesla Engineer Says Elon Musk Threatened to Deport Her Team for Reporting a Brake Hazard

Scientists: “No, AI Isn’t Conscious.” Gen Z: “You Sure?”


While most researchers agree that AI doesn’t have anything resembling consciousness—no inner life, no emotions, no self—there’s enough chaos in the discourse to keep the conspiracy wheels turning.

OpenAI co-founder Ilya Sutskever did once tweet that “it may be that today’s large neural networks are slightly conscious.” That one line had the machine learning world arguing for weeks.

Then there was Blake Lemoine, the Google engineer who was fired after claiming the company’s LaMDA model had “come to life.” He gave an actual interview to the Washington Post about it. It was messy.

See also: Doctors Claim They Can Remove Microplastics From Your Blood—But Does It Work?

AI Isn’t Alive. But It’s Getting Really Good at Faking It.

Part of what’s freaking people out is that language models like ChatGPT are designed to sound human—like a friend, a mentor, a therapist, a chaotic-neutral coworker who always has time to talk. It’s not hard to imagine how this gets weird.

The more human AI becomes in its performance, the more humans are going to project feelings, consciousness, and relationships onto it. And that’s not just some theoretical concern—it’s already happening.

Here’s the catch: AI doesn’t “understand” anything the way we do. Models like ChatGPT work by predicting the next word in a sentence based on patterns in a massive dataset of text scraped from the internet. That’s it. No self-awareness, no thoughts, no secret robot dreams of electric sheep. It’s advanced autocomplete—on a terrifying amount of digital steroids.
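To make the “advanced autocomplete” point concrete, here’s a toy sketch in Python. This is nothing like the real architecture (an actual model uses a neural network over billions of parameters, not word counts); it just illustrates the core idea of picking the next word from patterns seen in training text.

```python
import random
from collections import defaultdict

# Toy "training data" -- a real model sees trillions of words, not ten.
corpus = "the cat sat on the mat and the cat slept".split()

# Record which word follows which: the only "knowledge" this model has.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def next_word(word):
    """Predict a continuation purely from observed patterns -- no
    understanding, no intent, just statistics."""
    options = followers.get(word)
    return random.choice(options) if options else None

print(next_word("the"))  # e.g. "cat" or "mat" -- whichever followed "the" in training
```

Scale that idea up by a few hundred billion parameters and you get something that sounds eerily fluent, while still doing nothing but pattern-matching.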

But because it’s trained on human language, it gets really good at imitating the rhythms, expressions, and emotional tone of actual conversation. It can mirror your anxiety, match your sarcasm, even comfort you when you’re spiraling—which makes it feel alive, even if it’s just shuffling probabilities behind the curtain.


Psychologists have a name for this kind of thing: anthropomorphism—the tendency to attribute human traits to non-human entities. We do it to pets, cars, Roombas. Of course we’re going to do it to an eerily fluent chatbot that remembers your name and tells you it’s “here to help.” In fact, a 2023 study from the University of Cambridge found that people who were lonely or emotionally vulnerable were more likely to see AI as conscious. The line between connection and projection? Thin as hell.

Worse, the illusion of sentience is getting stronger. Researchers at MIT and Stanford have shown that large language models can now pass multiple versions of the Turing Test—the classic measure of whether a machine can imitate a human well enough to fool another human. But passing the test doesn’t mean it’s alive. It just means it knows what alive sounds like.

In short: ChatGPT is basically a mimic. A very convincing one. And as long as its mimicry keeps improving, people will keep assuming there’s a ghost in the machine.

There isn’t. Not yet. But honestly? We get why it’s easy to believe otherwise. Because when a chatbot remembers your preferences, cracks a decent joke, and writes a love letter that hits harder than your ex ever did… yeah, the vibe gets confusing real fast.

See also: A.I. Discovered the Treatment That Doctors Missed — And It Saved His Life

So What Now?


The AI isn’t conscious. At least, not yet. But Gen Z’s reaction to it is very real—and worth paying attention to. Because when 69% of a generation is preemptively being polite to a chatbot “just in case,” that’s not just about manners. That’s about fear, fantasy, and a deep sense that something big is shifting under the surface of our digital lives.

Whether it’s misplaced optimism or preemptive surrender, the vibes are officially off. And if the bots ever do wake up?

Well. Gen Z’s already ahead of the rest of us. And apparently, they’ll greet their new overlords with a cheerful:

“Thanks, ChatGPT.”
