**Marcus Webb** — Infrastructure engineer turned tech writer. Writes about AI, DevOps, and security.
**Bottom line:** Watching Khabib Nurmagomedov train Lex Fridman revealed a critical blind spot in our current AI development: the inability to genuinely learn and replicate human instinct and embodied intuition in dynamic, complex physical domains.
Despite advancements in models like ChatGPT 5 and Gemini 2.5 for analytical tasks, the fluid, adaptive, and often non-verbal feedback loops essential for mastering skills like combat sports remain largely outside AI's grasp.
Developers must pivot from trying to *replace* human instinct with AI to building systems that *augment* and *interpret* it, focusing on hybrid intelligence for true skill acquisition.
I spent an entire weekend watching Khabib Nurmagomedov train Lex Fridman.
What I saw wasn't just grappling expertise; it was a visceral reminder of everything our current AI models, even ChatGPT 5 and Gemini 2.5, still fundamentally lack – and it made me rethink our entire approach to AI-driven skill acquisition.
We're chasing ever-larger models, convinced that more data and parameters will unlock true general intelligence, but this specific footage brought a cold, hard dose of reality to that assumption.
For those unfamiliar, Khabib is arguably the greatest lightweight fighter in MMA history, known for his relentless, suffocating grappling.
Lex Fridman is a prominent AI researcher and podcast host, himself a black belt in jiu-jitsu, but clearly operating on a different plane than Khabib.
The YouTube footage, which dropped in late 2025, showed their sessions in intricate detail.
As an infrastructure engineer who spends his days optimizing AI inference pipelines and wrestling with model drift, I approach these new AI frontiers from a systems perspective.
I’m always asking: *how do we actually build this?* And watching Khabib, my core assumption about AI’s trajectory started to unravel.
My initial thought, like many in the AI community, was that with enough high-fidelity sensor data—biomechanics, eye-tracking, real-time pressure mapping—we could feed a large language model, or perhaps a specialized reinforcement learning agent, enough information to *understand* Khabib's "system." We could then theoretically use that understanding to train anyone, perhaps even generate synthetic training partners.
We've seen incredible strides in AI for strategy games and even robotics for highly controlled environments.
But the complexity of a human body responding in milliseconds to another human body, adapting to an effectively infinite space of variables, isn't just a data problem.
It's a problem of embodied cognition, intuition, and an almost pre-cognitive anticipation that defies simple algorithmic decomposition.
Khabib's movements aren't a series of predefined algorithms; they're a fluid, adaptive conversation with his opponent's body. He doesn't just react; he anticipates.
He doesn't just execute a move from a playbook; he feels the subtle shifts in weight, the micro-adjustments in posture, the tension in a limb, and exploits them *before* they fully materialize.
This isn't data processing in the traditional sense. It's a form of pattern recognition so deeply ingrained and so subtly nuanced that it transcends explicit rules.
Imagine trying to train an AI, even one powered by a cutting-edge large action model (LAM) like the experimental version of Gemini 2.5 that Google briefly demonstrated for complex robotics in early 2026, to replicate this.
You could feed it terabytes of video footage, motion capture data, even neural signals. It could predict, with high accuracy, the *most probable* next move in a given scenario.
But Khabib operates in the realm of *improbable* efficiency. He creates openings where none seem to exist, not by brute force, but by an almost imperceptible manipulation of balance and leverage.
This "feel" is the ghost in the machine, and our current AI architectures, for all their impressive gains in language and image generation, struggle profoundly to grasp it.
One of the most striking aspects of the footage was Khabib's coaching style.
He wasn't just demonstrating techniques; he was *feeling* Lex's body, adjusting his grip, his posture, his weight distribution by subtle degrees.
"Feel this," he'd say, guiding Lex's hand an inch higher, shifting his hips a fraction of a degree.
This is a feedback loop that's incredibly difficult to quantify. It's not a numerical error rate; it's a qualitative, embodied understanding of pressure, tension, and flow.
Contrast this with how we typically train AI. We define a loss function, provide vast amounts of labeled data, and allow the model to iteratively reduce its error.
But how do you define a loss function for "feeling the opponent's balance"?
How do you label data for the *intent* behind a feint, or the precise, almost imperceptible shift in weight that sets up a takedown?
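To make the contrast concrete, here is a toy sketch (Python with NumPy, entirely synthetic data) of what a quantifiable training loop looks like: a single number to minimize, and gradient descent to minimize it. Every feature and label here is invented; the point is only that the loop works *because* the error is a number.

```python
import numpy as np

# Toy supervised setup: predict a "next move" score from pose features.
# Training works here because there is a scalar to minimize (mean squared
# error against labels). "Feeling the opponent's balance" has no such
# label, so there is nothing for gradient descent to descend.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))          # hypothetical pose features
w_true = rng.normal(size=8)
y = X @ w_true                         # synthetic "labeled outcomes"

w = np.zeros(8)                        # model weights
lr = 0.01
for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(X)   # gradient of the MSE loss
    w -= lr * grad

mse = float(np.mean((X @ w - y) ** 2))
print(f"final MSE: {mse:.6f}")
```

The loss converges to near zero because "error" was defined up front. The whole argument above is that no one knows how to write that first line of the loss for embodied feel.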
ChatGPT 5 can generate eloquent prose on the *theory* of grappling, but it cannot *feel* a choke.
It can describe the mechanics of a sweep, but it cannot *execute* it with the intuitive timing that comes from thousands of hours of physical practice and direct, embodied feedback.
The unquantifiable nature of this human-to-human transmission of skill is a chasm AI has yet to bridge.
There's an undeniable element of physical discomfort and even pain in high-level combat sports.
It's through pushing physical limits, experiencing failure, and the sheer repetitive grind that the body learns.
Muscles develop memory, reflexes become instantaneous, and the mind-body connection becomes seamless.
This embodied learning, often forged in moments of struggle, is a cornerstone of true mastery in physical domains.
Our AI models are disembodied.
They exist as weights and biases in a neural network, detached from the physical consequences of their "actions." While reinforcement learning agents can simulate environments and learn through virtual rewards and penalties, they don't experience the muscle fatigue, the joint strain, or the psychological pressure that shapes a human athlete.
This isn't a trivial distinction.
The fear of being taken down, the exhaustion of a scramble, the split-second decision under duress—these are integral parts of the learning process that develop the kind of instinct Khabib possesses.
Without a true physical embodiment, without the capacity to *feel* and *experience* these states, AI's ability to develop genuine, adaptive physical intuition remains severely limited.
We can simulate the *outcome* of pain, but not the *experience* of it as a learning mechanism.
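A bare-bones sketch shows what a "virtual penalty" means in practice. This is a one-state bandit rather than a full RL agent, and the dynamics are invented, but it makes the disembodiment literal: the "injury" enters the agent's world only as the number -10 in an averaging rule.

```python
import random

# One-state bandit: the agent samples two actions and tracks the average
# reward of each. Pressing forward risks a -10 "injury" penalty. To the
# agent, that penalty is just a scalar in an update -- no fatigue, no
# strain, no fear, which is exactly the limitation described above.

random.seed(0)
ACTIONS = ["press", "retreat"]
Q = {a: 0.0 for a in ACTIONS}   # running average reward per action
N = {a: 0 for a in ACTIONS}     # sample counts

def step(action):
    # Hypothetical dynamics: pressing pays off 70% of the time.
    if action == "press":
        return -10.0 if random.random() < 0.3 else 2.0
    return 0.5

for _ in range(2000):
    a = random.choice(ACTIONS)          # explore uniformly
    r = step(a)
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]           # incremental sample mean

best = max(ACTIONS, key=Q.get)
print(Q, "->", best)
```

The agent learns to retreat, but only because the arithmetic says so. A human learns the same lesson through an experience the agent never has.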
It’s easy to get swept up in the narrative that AI will simply "learn" any human skill given enough data. We see impressive benchmarks in narrow AI tasks and extrapolate wildly.
But the hype often breaks down when we move from predictable, static environments to dynamic, adversarial ones where human intuition, creativity, and embodied experience are paramount.
AI excels at identifying patterns in *known* data sets and optimizing *defined* variables within a closed system. Give it a billion chess games, and it will dominate.
Give it the entire internet, and ChatGPT 5 will summarize, synthesize, and generate text with uncanny fluency.
But human combat, like many real-world, high-stakes interactions, is an open system. It's inherently unpredictable.
The "data" isn't just what's recorded; it's what's felt, anticipated, and creatively responded to in real-time.
People are getting it wrong by assuming that simply scaling up our current AI paradigms will magically imbue them with these deeply human capabilities.
It's not just about data volume; it's about the fundamental nature of the learning process itself.
So, what does this mean for us, the engineers building the next generation of AI systems? It means a crucial shift in perspective.
Instead of trying to create AI that *replaces* human intuition in complex physical domains, we should focus on AI that *augments* and *interprets* it.
1. **Augment, Don't Replace:** Build AI tools that provide better context and faster feedback for human learners.
Think advanced computer vision systems that can track biomechanical efficiency, identify subtle deviations from optimal form, or highlight missed opportunities in real-time.
These systems don't *do* the training; they make the human trainer and learner *smarter*.
2. **Focus on Interpretation:** Develop models that can interpret nuanced human feedback, even the non-verbal cues.
This might involve multi-modal AI that combines linguistic analysis with physiological data, attempting to build a richer, more contextual understanding of human instruction and correction.
This is an active research area, and models like Claude 4.6 are showing promise in synthesizing complex information, but direct embodied interpretation is still a long way off.
3. **Embrace Hybrid Intelligence:** The future isn't AI *or* human; it's AI *and* human. Use AI for structured analysis, data aggregation, and identifying statistical anomalies.
Then, trust human intuition for dynamic decision-making, creative problem-solving, and the deeply personal act of teaching and learning complex physical skills.
For instance, an AI could analyze a fighter's training logs and suggest optimal recovery protocols, while a human coach provides the emotional support and real-time adjustments needed to perfect a technique.
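As a flavor of point 1, here is a tiny, hypothetical sketch of an augmentation tool: given 2D pose keypoints (from whichever pose-estimation model you prefer), it measures a joint angle and flags deviation from a target the human coach chose. The function names, the tolerance, and the target are mine for illustration, not any real product's API.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def form_feedback(shoulder, elbow, wrist, target_deg, tolerance_deg=10.0):
    """Measure the elbow angle and flag deviation beyond the coach's tolerance."""
    measured = joint_angle(shoulder, elbow, wrist)
    flag = abs(measured - target_deg) > tolerance_deg
    return measured, flag

# Toy keypoints: shoulder, elbow, wrist at a right angle.
measured, flag = form_feedback((0, 0), (1, 0), (1, 1), target_deg=90.0)
print(f"elbow: {measured:.1f} deg, outside tolerance: {flag}")
```

Note the division of labor: the code measures and flags; it says nothing about *why* the angle matters or what correction to give. That judgment stays with the coach, which is the whole point of augmenting rather than replacing.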
The lesson from Khabib's mat, as I saw it, wasn't about the superiority of human over machine. It was about defining the boundaries of what each excels at.
We can build AI systems that are incredibly powerful analytical engines.
We can even build robots that perform complex physical tasks.
But creating an AI that genuinely *learns* and *replicates* the embodied intuition of a master like Khabib—that's a challenge that requires rethinking our fundamental approach to AI, moving beyond just scaling up existing paradigms.
Does relying too heavily on AI for learning, especially in complex, intuitive fields, dull our own capacity for instinct, or is there a hybrid path we're not fully exploring yet?
Let's talk in the comments.