Background

In the age of AI, one of the most cited essays is "The Bitter Lesson" by Richard Sutton. The premise is simple: methods that scale with compute keep beating methods that depend on us carefully encoding human knowledge. AlphaZero did not beat Stockfish because we taught it better openings. Speech and image recognition did not get dramatically better because we handcrafted better phonetic rules or bounding boxes. We just threw more data and compute at simple statistical models. The pattern repeats across almost every domain in AI.

And I think Sutton is basically right. Eventually.

But while I agree with the idea (and it seems to be true whether I agree with it or not :D), and the premise does seem to hold eventually, the thesis is not actionable.

How long can you wait?

Let’s take a quick detour into the land of finance.

There is a saying, commonly attributed to Keynes, that "markets can stay irrational longer than you can stay solvent". Suppose you find a flaw in the market, or in specific companies, that you think will cause the market to correct eventually. Should you short it?

For example, look at the AI boom right now. There is a lot of real value being created, but there is also a lot of circular spending and self-reinforcing hype. NVIDIA sells GPUs to the cloud providers. The cloud providers pour billions into model companies. The model companies buy compute from the cloud providers, who then buy more GPUs. Data centers get built to serve the boom, and the boom itself becomes part of the justification for everybody’s valuation. You can believe this party ends badly and still have absolutely no idea when.

It might happen tomorrow, or in 5 years. And that uncertainty is the whole point.

Plenty of smart people shorted the housing market in 2006 and were right about the fundamentals, but some of them went bankrupt before the crash actually happened because they couldn’t sustain the position long enough.

AGI will take away all the knowledge work!

Similarly, a lot of people today are betting on the idea that AI will soon be smart enough that we will not need knowledge workers at all. Every idea will be turned into a polished product instantly. The machine will just “handle” it.

However, the gap between "impressive demo that I coded over the weekend" and "reliable production system" is wider than most people realize. Agents today are not smart enough to make good decisions autonomously. They can follow instructions and generate plausible output, but the moment you need actual judgment (when to deviate from the plan, when a requirement is wrong, when the right answer is to push back on the ask), they fall apart.

Anyone who knows how to code and has used AI coding agents for non-trivial work has seen this firsthand. The code compiles and the tests pass, but the architecture is terrible. You end up painted into a corner with abstractions that don’t serve any real purpose and patterns that fight against the grain of the codebase. I’ve written about this before.

If you cannot code, this failure mode is hard to see at the beginning. As a consultant, many of my clients have been CXOs with a clear product vision who vibe coded a first version. And initially it is exhilarating, because the first version almost always works. It IS impressive. Then they add one more feature, and another, and another, and somewhere around feature five the whole thing starts to collapse. The AI-generated codebase has no coherent architecture, the abstractions were chosen to make the current prompt work rather than to serve the product, and now every new change breaks two old things. By the time they come to me, the fastest path forward is often to throw it away and start over, with the AI as an assistant rather than the architect.

This is essentially the Gell-Mann Amnesia effect applied to AI. Michael Crichton coined the term to describe how you read a newspaper article about your area of expertise, notice it is full of errors, and then turn the page and read the next article as if it is gospel. The same thing happens with AI output: you see it produce confident, plausible-sounding work in a domain you do not know well and assume it is correct.

However, the last time you checked something in your domain, it was… also confident and plausible-sounding. And wrong.

Please don’t use AI for creative work

Today, AI output for anything remotely creative is slop. If you see a pretty website that actually creates an emotion in you, it was almost certainly made by a designer who knows what they are doing. The website that I, a non-designer, create looks like this blog: functional, but entirely uninspiring.

Now, you could make the toupee argument here: "you only notice the bad AI output; the good stuff passes by undetected." There is some truth to that. But even the best AI-generated text usually has a quality that I can only describe as "mid" (as the kids say these days). It keeps drifting toward a safe, average, committee-approved voice. When you train on the aggregate of human writing, you should not be surprised when the result sounds aggregated. Yes, you can massage it into something good eventually, but that still requires taste and a lot of effort. In other words, the human is still doing the creative work.

Winter is Coming, so make hay while the sun is shining

Yes, eventually, general compute will solve most of these problems. But “eventually” doesn’t pay your rent or build your career.

Imagine it is 1997. Deep Blue just beat Kasparov. You are the engineer in charge of building a chess engine for ordinary people to use. The Bitter Lesson would tell you: do not bother encoding opening theory or endgame tablebases, because general search and compute will eventually win. And it would be right. AlphaZero would eventually be created, learn entirely by playing itself with zero encoded human knowledge, and demolish everything that came before it. But that was in 2017. Twenty years later. Were you supposed to sit on your hands for two decades? The people who built Stockfish and Fritz in the interim made real products that real people used and paid for. They did real work that mattered. The Bitter Lesson being eventually true did not make their work worthless.

Today, right now, you can leverage yourself massively using AI. At Netrin, we roughly doubled our velocity as a team. That’s real. That’s happening now. The people who are using these tools effectively are getting a genuine edge over the ones who are waiting for the tools to be perfect.

As a single person, I can now do work that would have required an entire team a few years ago: take a product from ideation with stakeholders all the way to deployment and iteration, because I have agents doing the grunt work. But at the end of it, I am responsible for the code. I understand the critical architectural decisions. I can explain why any given component exists and what would break if you removed it. The AI is my force multiplier, not my replacement.

Do not be a luddite and insist on doing everything manually out of some misplaced pride. Use AI for brainstorming, for first drafts of code, for exploring solution spaces faster than you ever could alone. Let it do the bullshit work, the improvements you always wanted to make but never had time for. But do the deep, creative work yourself.

Most importantly, write your text yourself. AI writing is bland. Bland is perfectly fine when you are writing impersonal code. Nobody cares if a utility function has personality. But when you are talking to a fellow human, whether it is a blog post or an email to your team, your actual self has to come through. The whole point of writing is to communicate something that only you can communicate in the way that only you can communicate it.

After AlexNet in 2012, it still took more than a decade to get to systems that ordinary people experience as broadly useful intelligence¹. In hindsight, that seems short. In career terms, it is an entire professional generation. If you had graduated in 2012, decided that "general compute will eventually solve everything", and sat around waiting, you would have wasted a decade of your prime earning and learning years. The people who built domain expertise, and who understood the tools well enough to use them effectively when they arrived, are the people who benefited most from the AI wave.

Should you wait for AI to get better, or make your money today? I think you should mine for the gold.

Footnotes

  1. Maybe you don’t consider today’s AI true AGI, but ask anyone pre-2021 what AGI would look like, and even today’s simplest systems would surpass that bar.