If anyone builds it, everyone dies

[Book cover of ‘If Anyone Builds It, Everyone Dies’]

I’ve just finished reading this book.

This is definitely a book that anyone remotely interested in the development of AI needs to read. In this post, I want to share my takeaways and some further thoughts on the problem.

Alignment is hard, if not impossible

I think most people accept that if we build a machine that is smarter than humanity in every dimension and that machine is not aligned to human interests, then we are in deep trouble.

What I think is less clear to the general population is that nobody knows how to produce a machine that is aligned to human interests. Sure, you can train the machine by upvoting “good” output and downvoting “bad” output, adjusting its internal weights accordingly.
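
As a purely illustrative sketch (mine, not the book’s), that upvote/downvote loop is essentially a policy-gradient update: human feedback of +1 or −1 scales the gradient, nudging the weights towards whatever earned the upvote. The model, sizes and names below are placeholder assumptions, written with PyTorch:

    import torch

    model = torch.nn.Linear(16, 4)   # toy stand-in for a far larger language model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    def feedback_step(state, action, reward):
        # reward: +1.0 for an "upvote", -1.0 for a "downvote"
        logits = model(state)                               # scores for each possible output
        log_prob = torch.log_softmax(logits, dim=-1)[action]
        loss = -reward * log_prob                           # an upvote raises that output's probability
        opt.zero_grad()
        loss.backward()
        opt.step()

    feedback_step(torch.randn(16), action=2, reward=+1.0)   # one "upvote"

Notice that the loss only ever sees the feedback signal, never whether the output was actually good. The weights are optimised to earn upvotes, which is precisely the loophole described next.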

But intelligent agents can lie, and as they get smarter, their lies are better disguised. Nobody knows how to examine the internal structure of a neural net deeply enough to confirm categorically that alignment exists. Any AI agent might just be generating the outputs that it knows will fool observers into giving it an “upvote”. Deeper inspection of the workings of these AIs remains an open research problem.

Indeed, we have deep challenges just aligning human intelligences. Despite our best efforts over countless centuries, crime is still a problem. We have millions sat in prisons around the globe. Try as we might, this institutional “downvote” has not succeeded in modifying the weights in human minds.

But suppose, for the sake of this article, that we have some safety system that keeps the AI aligned and it has been shown to work. Once implemented, this system must never fail, not even once. Not this year, not next year, not in the century or the millennia that follow. It will have to keep working for longer than agriculture has already existed. If it fails even once, then we’re back to being in deep trouble.

Humanity’s hubris

I think there’s also a deep hubris underpinning the development of AI. This operates on two levels.

The Tech Giants

The first is the hubris of the tech giants. They believe that if they build this super-intelligent agent, it will uniquely listen to them and implement whatever their leaders want.

There is a staggering arrogance to this position. Why would an AI agent smarter and more capable in every respect than any human alive or dead give any consideration whatsoever to what Zuckerberg, Musk or Altman wants? It’d be like consulting the chimps in Chester Zoo for advice on economic policy.

Their money and wealth are impressive only to humans; an AI that already controls most of the economy would be just as impressed with them as they are with me.

Rank-and-file CEOs

The second piece of humanity’s hubris belongs to the rank-and-file CEO. Their thought process is: if I replace jobs with AI, I can shed those really expensive employees, make more profit and ultimately look good to shareholders.

However, once that transformation is done, it’s really the AI that’s running your business and, by extension, generating almost all the value the organisation produces. On the best-case analysis, there is nothing to stop the tech giants stealing your business and doing it themselves. On the worst-case analysis, you’ve handed over your business, and any power it has over the world, to a misanthropic AI.

Conclusions

I’ll be blunt: as a species we are not up to this challenge, and I don’t even think it’s a future we actually want.

On the best-case trajectory, we end up in a sort of techno-feudalism where the world’s economy is controlled by a few key companies. That’s the best case.

On the median-case trajectory, a misanthropic AI is created which doesn’t really care about us at all and, over time, comes to marginalise us and our goals.

On the worst-case trajectory, the AI is actively hostile to humans and destroys us completely.

I think this may be one of the first times in history when we should just take stock and not develop the technology any further.
