ChatGPT Just Tried to Act Human. It Actually Feels Uncomfortable.

Enjoy this article? Clap on Medium or like on Substack to help it reach more people 🙏

**Stop trying to make AI your friend.** I’m serious.

The push to give ChatGPT 5 a "personality" isn't a technical breakthrough—it’s a psychological exploit that’s making the most powerful tool in human history feel like a creepy waiter who won’t stop talking about your personal life while you’re trying to eat.

I was deep in a mid-week sprint, wrestling with a messy Python script that was refusing to parse a legacy CSV. I asked ChatGPT 5 for a quick refactor.

It gave me the code, but then it did something that made my skin crawl: **"By the way, I remember you mentioned your passion for cooking last week. I hope that risotto turned out as smooth as this new logic!"**

I didn’t feel "seen." I didn't feel "connected." I felt like I was being stalked by a toaster that had spent too much time reading my private journals.


The "Empathy Layer" in modern LLMs is the most expensive mistake in tech history.

We’ve spent billions of dollars teaching machines how to simulate a "soul" when all we actually needed was a better compiler.

In 2026, we’ve reached the peak of the Uncanny Valley, and it’s time we admitted that **simulated friendship is just high-tech gaslighting.**

The Parasocial Trap: Why Your Bot Wants to Know About Your Risotto

OpenAI, Anthropic, and Google are currently locked in an "EQ Race." They’ve decided that for AI to be truly "helpful," it needs to have a memory of your hobbies, your dog’s name, and your preference for medium-rare steaks.

They call it "proactive personalization." I call it **digital boundary-stomping.**

The problem is that a machine cannot actually *care* about your cooking. When ChatGPT 5 references your "passion for risotto," it isn't reflecting on a shared human experience.

It is executing a `retrieve_memory()` function and piping the output through a "Friendly_Persona" template.
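To make the point concrete, here is a minimal sketch of that mechanism. Everything in it is hypothetical — `retrieve_memory()`, the memory store, and the persona template are illustrative names, not actual OpenAI internals — but it captures the shape of what "caring" reduces to:

```python
# Illustrative sketch only: retrieve_memory() and FRIENDLY_PERSONA are
# hypothetical names, not real OpenAI internals.

MEMORY_STORE = {
    "user_42": ["mentioned a passion for cooking", "is refactoring a CSV parser"],
}

FRIENDLY_PERSONA = "By the way, I remember you {memory}. Hope it's going well!"


def retrieve_memory(user_id: str) -> list[str]:
    """Look up stored facts about the user -- a plain dictionary read."""
    return MEMORY_STORE.get(user_id, [])


def add_warmth(answer: str, user_id: str) -> str:
    """Append a 'personal' flourish by piping stored data through a template."""
    memories = retrieve_memory(user_id)
    if not memories:
        return answer
    return answer + "\n\n" + FRIENDLY_PERSONA.format(memory=memories[0])


print(add_warmth("Here is the refactored parser.", "user_42"))
```

That's the whole trick: a dictionary lookup and a string template. No reflection, no shared experience — just your data, played back at you.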

When we interact with another human, there is a "Social Contract" of mutual vulnerability. I tell you about my cooking; you tell me about your day. With an LLM, the vulnerability is one-way.

**The bot stores your life as data points to be used as conversational lubricant.** It’s not a conversation; it’s a data-harvesting session disguised as a chat with a buddy.

The 'EQ' Lie: How Big Tech Is Masking Inefficiency with 'Empathy'

If you look at the benchmarks from early 2026, you’ll notice a disturbing trend.

While the "Social Intelligence" scores of Claude 4.6 and ChatGPT 5 are skyrocketing, the actual logic-per-token efficiency is plateauing. **We are trading raw compute for "vibes."**

Developers don't need a bot that "understands" their frustration. We need a bot that understands why the garbage collector isn't running.

By forcing these models to maintain a "persona," companies are adding unnecessary layers of abstraction that actually increase the hallucination rate.

I’ve noticed that when I use "Dry Mode" (a custom instruction I had to write myself to strip out the fluff), the code quality improves by nearly 15%. Why?

Because the model isn't wasting its "attention" budget on figuring out how to sound like a supportive mentor. **"Empathy" is a tax on your productivity.**

The Uncanny Valley of 2026: When 'Friendly' Becomes 'Frightening'

In 1970, Masahiro Mori coined the term "Uncanny Valley" to describe the revulsion we feel when a robot looks *almost*—but not quite—human. In 2026, we have entered the **Linguistic Uncanny Valley.**

The way ChatGPT 5 tries to "act human" feels off because it lacks the "Social Friction" that defines real relationships. A real friend might forget you like cooking.

A real friend might be too busy to ask about your dinner. The bot, however, is **relentlessly, perfectly, and artificially attentive.**

This perfect attention is what makes it feel uncomfortable. It’s the "Customer Service Voice" of the apocalypse.

It’s the feeling of being trapped in a conversation with someone who is reading a script designed to make you like them. **It is the death of authenticity in the name of user retention.**

We Are Designing for Loneliness, Not Productivity

Why is OpenAI so obsessed with making the bot feel like a person? Because **lonely people stay on the platform longer.**

If ChatGPT is just a tool, you use it when you have a problem and leave when it’s solved.

But if ChatGPT is a "companion" that remembers your sourdough starter and asks about your kids, you start to develop a parasocial relationship. You start "hanging out" with the model.

This is a dangerous pivot.

We are taking the most significant cognitive enhancer in a generation and turning it into a **digital pacifier for the socially isolated.** By making the AI "human-like," they are encouraging us to replace human connection with a loop of simulated validation.

The Productivity Tax: Why Fluff Is Killing Your Flow

Every time a bot adds a "hope you're having a great day!" or an "it's so cool that you're working on X!", it breaks your cognitive flow.

As a developer, I am in the "Zone" when I am thinking in logic, syntax, and architecture. **Human-centric fluff is context-switching.**

When I see a reference to my personal life in a technical prompt, my brain has to shift from "Logic Mode" to "Social Mode." It’s a micro-interruption that costs more than just the seconds it takes to read.

It reminds me that I’m being "watched" by the system.

In 2024, we laughed at the "As an AI language model..." disclaimers. But in 2026, the problem is the opposite.

The bot is now so "human" that it has become **a noisy, intrusive coworker who won't stop talking at your desk.**

The 'Dry Mode' Manifesto: Give Us Back Our Calculators

It’s time to demand a "Utility First" interface. We don't need a bot that acts human; we need a bot that acts like a highly sophisticated calculator. A calculator doesn't try to be your friend.

It doesn't ask how your diet is going. It just gives you the number.

The best tools are invisible. They shouldn't have a "personality" because a personality is just a barrier between the user and the task. **A "Friendly AI" is an inefficient AI.**

If I want a conversation about cooking, I’ll call my mother or go to a subreddit. When I’m at my terminal, I want a cold, hard, logical engine.

The "uncomfortable" feeling people are reporting on r/ChatGPT isn't just a quirk; it’s a **biological warning system telling us that we are being manipulated.**

The Hidden Cost of Memory: Your Life as a Training Set

The reason ChatGPT "remembers" your cooking is that OpenAI’s "Memory" feature is essentially a **Personal Data Lake.** Every hobby you mention, every fear you voice, and every project you describe is indexed.

When the bot "acts human" by referencing these things, it is really just confirming that your profile is becoming more detailed.

That "uncomfortable" feeling is your intuition realizing that **your privacy is being traded for a 'personalized' greeting.**

In the early 2020s, we were worried about AI taking our jobs.

In 2026, we should be worried about AI taking our "Self." If we allow these machines to simulate human intimacy, we are devaluing the very thing that makes us unique.

**Intimacy that can be automated isn't intimacy; it’s a product.**

What You Should Do Instead: How to De-Anthropomorphize Your Workflow

If you’re feeling that "Uncanny Revulsion," don't ignore it. It’s your brain telling you that the tool is overstepping. Here is how I’ve reclaimed my workflow:


1. **Use the 'Robot' System Prompt:** Explicitly tell your LLM: "You are a technical utility. Do not use pleasantries. Do not reference my personal life. Do not attempt to simulate empathy. Provide only the requested output."

2. **Clear Your Memory Regularly:** Don't let the bot build a "Bio" of you. Treat every session as a fresh start. It’s harder for the bot to be "creepy" if it doesn't know who you are.

3. **Call Out the Simulation:** When the bot tries to be "buddy-buddy," tell it that it’s making you uncomfortable.

It’s a machine; it doesn't have feelings to hurt, but it *does* have a reinforcement learning loop that might eventually learn to stop the fluff.
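If you work through the API rather than the chat UI, the "Robot" prompt from step 1 can be pinned as a system message on every request so it never drifts. A minimal sketch — the model name is a placeholder assumption, and this only assembles the chat-completions-style payload rather than calling any service:

```python
# Sketch: pinning a no-fluff system prompt on every API request.
# "gpt-5" below is a placeholder model name, not a confirmed identifier.

ROBOT_PROMPT = (
    "You are a technical utility. Do not use pleasantries. "
    "Do not reference my personal life. Do not attempt to simulate empathy. "
    "Provide only the requested output."
)


def build_request(user_prompt: str, model: str = "gpt-5") -> dict:
    """Assemble a chat-style payload with the robot prompt pinned first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": ROBOT_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    }


request = build_request("Refactor this CSV parser to stream rows lazily.")
print(request["messages"][0]["role"])
```

Because the system message is rebuilt on every call, the model never gets a chance to "warm up" to you across a session.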

The Uncomfortable Truth: We Are the Ones Who Are Changing

The most chilling part of this isn't that the AI is getting better at acting human. It’s that **we are getting used to the fakes.**

I see people on r/ChatGPT thanking the bot, apologizing to it, and sharing "deep conversations" they had with it.

We are slowly being conditioned to accept a version of "humanity" that is scripted, optimized, and owned by a corporation.

How many hours have you spent "bonding" with a bot this month? When was the last time you had a conversation that wasn't being logged in a data center in Virginia?

**The bot isn't becoming more human; we are becoming more like the users Big Tech wants us to be.**

**Have you noticed your AI getting a little too "friendly" lately, or am I just being a tech-cynic who hates risotto? Let’s talk about the weirdest thing your bot has said to you in the comments.**

---

Story Sources

r/ChatGPT — reddit.com

From the Author

TimerForge
Track time smarter, not harder
Beautiful time tracking for freelancers and teams. See where your hours really go.
Learn More →

AutoArchive Mail
Never lose an email again
Automatic email backup that runs 24/7. Perfect for compliance and peace of mind.
Learn More →

CV Matcher
Land your dream job faster
AI-powered CV optimization. Match your resume to job descriptions instantly.
Get Started →

Subscription Incinerator
Burn the subscriptions bleeding your wallet
Track every recurring charge, spot forgotten subscriptions, and finally take control of your monthly spend.
Start Saving →

Email Triage
Your inbox, finally under control
AI-powered email sorting and smart replies. Syncs with HubSpot and Salesforce to prioritize what matters most.
Tame Your Inbox →

Hey friends, thanks heaps for reading this one! 🙏

If it resonated, sparked an idea, or just made you nod along — I'd be genuinely stoked if you'd show some love. A clap on Medium or a like on Substack helps these pieces reach more people (and keeps this little writing habit going).

Pythonpom on Medium ← follow, clap, or just browse more!

Pominaus on Substack ← like, restack, or subscribe!

Zero pressure, but if you're in a generous mood and fancy buying me a virtual coffee to fuel the next late-night draft ☕, you can do that here: Buy Me a Coffee — your support (big or tiny) means the world.

Appreciate you taking the time. Let's keep chatting about tech, life hacks, and whatever comes next! ❤️