Stop using Null. I am dead serious.
The man who gave us Quicksort, the foundations of concurrent programming, and the very concept of the "billion-dollar mistake" continues to stand by a critical warning to the industry.
It isn't a simple suggestion; it's a message that we are still building our digital world on a foundation of sand.
Sir Tony Hoare’s famous 2009 presentation, 'Null References: The Billion Dollar Mistake,' established a stance that we continue to ignore, and frankly, we should be embarrassed.
I’ve spent fifteen years in the trenches of distributed systems and legacy refactors. I have seen more production outages caused by a "nothing" value than by actual, complex logic failures.
We treat Null like a standard tool in the kit, but it’s actually a virus that has infected every layer of modern software engineering.
In 1965, Hoare was designing the first comprehensive type system for references in ALGOL W.
He later admitted he couldn't resist the temptation to put in a null reference, simply because it was so easy to implement.
He called it his "billion-dollar mistake," and as of March 2026, that estimate remains a conservative benchmark for the technical debt we've accumulated.
When you factor in the security vulnerabilities, the debugging hours, and the catastrophic system failures of the last sixty years, Null has likely cost the global economy hundreds of billions of dollars.
**Every time you see a "NullPointerException" or a "Cannot read property of undefined," you are witnessing a ghost from 1965 haunting your modern architecture.** It’s not a "feature" of programming; it’s a failure of imagination.
The tragedy isn't that Hoare made a mistake; the tragedy is that we’ve turned it into a standard.
We have built entire languages, frameworks, and career paths around checking if something is "nothing." We’ve accepted that a variable can either be what it says it is, or it can be a hidden landmine.
Most developers think they are being "safe" by sprinkling `if (obj != null)` throughout their codebase. You aren't being safe; you are being defensive because your tools are broken.
**Defensive programming is just a polite term for "I don't trust my own types."**
When you allow Null to permeate your system, you are essentially saying that every single function call is a gamble.
You are forcing every developer who touches your code to keep a mental map of which variables might blow up and which might not.
This cognitive load is what kills productivity, not the complexity of the business logic.
Consider the optional-chaining operator (`?.`, often mislabeled the "Elvis operator," which is properly `?:`) or the null-coalescing operator (`??`). These are marketed as "syntactic sugar" to make our lives easier.
In reality, they are **nicotine patches for a Null addiction.** They make it easier to ignore the fact that your data model is fundamentally ambiguous.
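To see the addiction in action, here is a minimal TypeScript sketch (the `User` shape and `greet` function are invented for illustration). The operators make the code compile and "work," but the question of *why* `displayName` can be missing is never answered anywhere:

```typescript
// Hypothetical shape: every layer of optionality here is unexplained.
interface User {
  profile?: { displayName?: string };
}

function greet(user: User): string {
  // `?.` tunnels silently through two possible absences,
  // and `??` papers over the result with a default.
  return `Hello, ${user.profile?.displayName ?? "stranger"}!`;
}

console.log(greet({}));                                  // "Hello, stranger!"
console.log(greet({ profile: { displayName: "Ada" } })); // "Hello, Ada!"
```

Nothing crashes, which is exactly the problem: the ambiguity in the data model survives untouched, one layer deeper.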
Look at the data from recent major system failures.
The catastrophic CrowdStrike incident in July 2024, for instance, reminded the world how memory safety and unhandled pointer states can bring global infrastructure to a halt.
While these aren't always classic 'null' references, they stem from the same root: a lack of robust reference safety.
We are talking about systems running on 128-core processors with terabytes of RAM, yet they can be brought to their knees by a pointer to address zero.
**A "nothing" value should not have the power to delete a billion-dollar company’s uptime.** If your architecture allows a single missing value to cascade into a system-wide failure, you don't have an architecture; you have a house of cards.
Modern languages like Rust and Swift have already proven that we don't need Null. They use "Option" or "Optional" types that force you to explicitly handle the absence of a value.
When I moved a mission-critical service from Java to Rust last year, our unexpected crash rate related to memory safety was effectively eliminated.
We didn't get smarter; our language just stopped letting us be lazy.
The most common argument I hear in favor of Null is convenience.
"I just need to represent that the user hasn't filled this out yet," or "It's faster than creating a Null Object pattern." **"Convenience" is the word we use when we want to offload our technical debt onto the developers who come after us.**
By choosing the "convenient" path of Null, you are choosing to make every future interaction with that data more difficult.
You are choosing to require unit tests for "nothingness." You are choosing to make your API documentation a guessing game.
In the real world, if I ask you for a cup of coffee and you hand me a box that might contain a coffee or might contain a void that erases my existence, I’m not going to call that "convenient." I’m going to call the police.
Yet, in software, we do this to our teammates every single day and call it "best practices."
The reason Null is so pervasive is that most developers code for the "Happy Path"—the 90% of the time when everything works. Null is what happens when the Happy Path hits a wall.
Because we don't want to think about the "Unhappy Path," we just throw a Null and hope the next person handles it.
**If a value is truly optional, your type system should scream it from the rooftops.** It should be impossible to compile the code without acknowledging that the value might be missing.
If your language doesn't support this, you shouldn't be using that language for anything more important than a "Hello World" script in 2026.
We’ve reached a point where "Type Safety" without "Null Safety" is a contradiction in terms. A `String` that can also be `null` is not a `String`; it’s an unlabeled union type, whether your compiler admits it or not.
We need to stop lying to ourselves about what our variables actually represent.
The tech industry has a weird obsession with speed over stability.
We reward the engineer who ships the feature in two days, even if it has three hidden Null landmines, and we ignore the engineer who takes four days to build a robust, Null-free architecture.
This incentive structure has created a market where fragility is normalized. We build fast, we break things, and then we spend 80% of our maintenance budget fixing the things we broke.
**The "move fast and break things" era should have ended the moment we started putting LLMs and autonomous systems in charge of our infrastructure.**
When Tony Hoare issued his famous warning, he wasn't just talking about pointers. He was talking about the responsibility of the engineer.
He was reminding us that our job isn't to write code that *usually* works; it’s to write code that *cannot* fail in ways we haven't defined.
If you want to respect the legacy of the man who literally defined our field, stop using Null today. Here is the blueprint for 2026:
1. **Enforce Strict Null Checks:** If you are in TypeScript, `strictNullChecks` is not optional.
If you are in Java, use `@NonNull` and `@Nullable` annotations and set your build to fail on violations.
2. **Use Option/Maybe Types:** Wrap every optional value in a container. Force yourself and your teammates to `unwrap()` or `map()` over the value.
It makes the "nothing" state a first-class citizen instead of a hidden bug.
3. **The Null Object Pattern:** If a method returns a list, return an empty list, not Null. If it returns a string, return an empty string.
If it returns a user, return an "AnonymousUser" object with safe defaults.
4. **Fail Fast:** If a value is required and it’s missing, throw a specific, descriptive exception at the *entry point* of your system.
Don't let that Null seep into your business logic like a slow-acting poison.
We like to think of ourselves as architects and engineers, but as long as we continue to rely on Null, we are just highly-paid handymen patching leaks with duct tape.
We are using tools from the 1960s to solve problems of the 2030s, and we are surprised when the water keeps getting in.
Tony Hoare gave us the tools to build incredible things, but he also gave us the honesty to admit when we've messed up.
**The best way to heed his warning isn't just to quote him in lectures; it's the systematic removal of `null` from our production branches.**
How much of your current codebase is dedicated to checking for things that shouldn't be there in the first place?
When was the last time you felt truly confident that a change wouldn't trigger a cascading Null failure? If the answer is "never," then you aren't an engineer—you're a gambler.
And the house always wins.
Have you successfully moved a project to a "Zero-Null" architecture, or are you still stuck in the "if (x != null)" cycle of despair?
Let's talk about why we're still afraid to let go of our billion-dollar mistake in the comments.
Hey friends, thanks heaps for reading this one! 🙏
If it resonated, sparked an idea, or just made you nod along — I'd be genuinely stoked if you'd show some love. A clap on Medium or a like on Substack helps these pieces reach more people (and keeps this little writing habit going).
→ Pythonpom on Medium ← follow, clap, or just browse more!
→ Pominaus on Substack ← like, restack, or subscribe!
Zero pressure, but if you're in a generous mood and fancy buying me a virtual coffee to fuel the next late-night draft ☕, you can do that here: Buy Me a Coffee — your support (big or tiny) means the world.
Appreciate you taking the time. Let's keep chatting about tech, life hacks, and whatever comes next! ❤️