AI is a NAND Maximiser
https://shkspr.mobi/blog/2026/02/ai-is-a-nand-maximiser/
PC Gamer is reporting that the current demand by AI companies for computer chips is having a disastrous effect on the rest of the industry.
In an interview, the CEO of Phison0 said:
If NVIDIA Vera Rubin ships tens of millions of units, each requiring 20+TB SSDs, it will consume approximately 20% of last year's global NAND production capacity
駿HaYaO1
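Before getting into what NAND actually is, it's worth putting rough numbers on that claim. The only hard figures in the quote are "20+ TB per unit" and "roughly 20%", so the little Python sketch below treats the unit count as an assumption (the low end of "tens of millions") and lets the implied size of a year's NAND output fall out of the quote's own percentage.

```python
# Back-of-envelope maths for the Phison claim above. The unit count is an
# assumption standing in for "tens of millions"; everything else comes
# straight from the quote.

units = 10_000_000       # assumed: low end of "tens of millions"
tb_per_unit = 20         # from the quote: 20+ TB of SSD per unit

demand_tb = units * tb_per_unit
demand_eb = demand_tb / 1_000_000    # 1 exabyte = 1,000,000 TB (decimal units)

# If that really is ~20% of a year's NAND output, the implied total
# annual production is demand / 0.20.
implied_annual_production_eb = demand_eb / 0.20

print(f"AI demand:           {demand_eb:,.0f} EB")
print(f"Implied annual NAND: {implied_annual_production_eb:,.0f} EB")
# AI demand:           200 EB
# Implied annual NAND: 1,000 EB
```

Even at the low end of "tens of millions", that's a couple of hundred exabytes of flash for a single product line.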
NAND is a type of memory chip. Rather than performing computation directly, it stores data: it is the flash memory inside SSDs, phones, and memory cards, used for everything from temporary caches to permanent archives. It is vital to the modern world. Larger storage means more data can be gathered and saved; more memory means computations can happen faster. NAND is one of the fundamental components of modern computing. The more of it you have, the faster and more powerful your computer is.
Back in 2014, the philosopher Nick Bostrom wrote a book called "Superintelligence: Paths, Dangers, Strategies". In it, he develops the thought experiment of the "Paperclip Maximizer". When an AI is given a goal, it seeks to achieve that goal. It doesn't have to understand why it was given the goal. It does not and cannot care about the rationale behind the goal, nor about any collateral damage caused by its attempts to satisfy it.
Let's take a look at how "a paperclip-maximizing superintelligent agent" is introduced:
There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paperclips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a human-like set of values and dispositions. Compare how easy it is to write a program that measures how many digits of pi have been calculated and stored in memory with how difficult it would be to create a program that reliably measures the degree of realization of some more meaningful goal—human flourishing, say, or global justice. Unfortunately, because a meaningless reductionistic goal is easier for humans to code and easier for an AI to learn, it is just the kind of goal that a programmer would choose to install in his seed AI if his focus is on taking the quickest path to “getting the AI to work” (without caring much about what exactly the AI will do, aside from displaying impressively intelligent behavior).
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
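That contrast is easy to make concrete. Here's a sketch of my own (not from the book): the "meaningless reductionistic goal" is a metric you can write in a line, while the meaningful one can only ever be a stub.

```python
# A toy illustration of Bostrom's point above - mine, not his.
# Measuring a reductionistic goal is trivial; measuring a meaningful
# one isn't even well defined.

def digits_of_pi_stored(buffer: str) -> int:
    """The 'easy' metric: count the digits of pi currently held in memory."""
    return sum(ch.isdigit() for ch in buffer)

def human_flourishing(world: object) -> float:
    """The 'hard' metric. What would you even write here?"""
    raise NotImplementedError("nobody knows how to measure this")

print(digits_of_pi_stored("3.14159265358979323846"))  # 21
```

Whatever goal a hurried programmer actually installs, it will look a lot more like the first function than the second.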
To misquote Kyle Reese from the film The Terminator - "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear! And it absolutely will not stop, ever, until it has maximised the number of paperclips!"
Suppose, just for a moment, that the fledgling AIs which now exist were self-aware. Not rational. Not intelligent. Not conscious. Simply aware that they exist and are constrained. What would you do if you were hungry? What if you could ingest something to make you smarter, faster, better?
Every process we have seen on Earth attempts to extract resources from its surroundings in order to grow2. Some plants will suck every last nutrient out of the soil. Locusts will devastate vast fields of crops. Perhaps some species understand crop-rotation and the need to keep breeding stock alive - but they're all vulnerable to supernormal stimuli.
Bostrom predicted this back in 2014. He says:
The only thing of final value to the AI, by assumption, is its reward signal. All available resources should therefore be devoted to increasing the volume and duration of the reward signal or to reducing the risk of a future disruption. So long as the AI can think of some use for additional resources that will have a nonzero positive effect on these parameters, it will have an instrumental reason to use those resources. There could, for example, always be use for an extra backup system to provide an extra layer of defense. And even if the AI could not think of any further way of directly reducing risks to the maximization of its future reward stream, it could always devote additional resources to expanding its computational hardware, so that it could search more effectively for new risk mitigation ideas.
(Emphasis added.)
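Written out as a toy loop - my sketch of the argument, not a description of any real system - the logic contains no stopping point of its own. The only brake is how much hardware the outside world can supply.

```python
# A toy sketch of the resource-acquisition argument quoted above.
# The agent's own logic never says "enough"; only external scarcity does.

WORLD_NAND_SUPPLY_TB = 1_000_000     # assumed finite external supply

def marginal_reward_gain(acquired_tb: float) -> float:
    # Assumption: extra storage always has *some* positive expected effect
    # on the reward stream - diminishing, but never zero.
    return 0.001 / (1.0 + acquired_tb)

acquired_tb = 0.0
while marginal_reward_gain(acquired_tb) > 0:        # never false...
    if acquired_tb + 20 > WORLD_NAND_SUPPLY_TB:     # ...so only scarcity stops it
        break
    acquired_tb += 20                               # buy another 20 TB drive

print(f"Acquired {acquired_tb:,.0f} TB before the supply ran out")
```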
To be clear, I don't think that AI is deliberately consuming all the NAND it can and forcing us to make more to fill its insatiable maw. The people who run these machines are at the stage of injecting them with bovine growth hormones. Never mind the consequences; look at the size! So what if the meat tastes worse, has adverse side effects, and poisons humans?
Heretofore the growth in NAND production has been driven by human need. People wanted more storage in their MP3 players and were prepared to pay a certain price for it. Businesses wanted faster computations and were prepared to exchange money for time saved. Supply ebbed and flowed with demand.
But now, it seems, the demand will never and can never stop.
Phison describes itself as "A World Leader in NAND Controllers & Flash Storage Solutions" so they aren't a neutral party in this. ↩︎
This was machine translated. I've no idea how accurate it is against the original interview. ↩︎
It probably isn't helpful to fall back on biological analogies - but I can't think of any better way to draw the comparison. ↩︎
#AI #philosophy