
The benefits of AI can be debated. But one thing we're all sure about is that massive server farms packed full of hundreds or thousands of high-end AI GPUs, each consuming hundreds of watts, soak up a lot of power. Right? Not according to Jensen Huang, the CEO of Nvidia, whose catchphrase has become "the more you buy, the more you save," in reference to his company's stratospherically expensive AI chips.

Perhaps inevitably, Huang has a similar take when it comes to the power consumption associated with the latest AI models, which mostly run on Nvidia hardware. Speaking in a Q&A following his Computex keynote, Huang's first point was that Nvidia's GPUs do computations much faster and more efficiently than any alternative. As he puts it, you want to "accelerate everything".

That saves you money, but it also saves you time and power. Next, he distinguished between training AI models and running inference on them, and how the latter can offer dramatically more efficient ways of getting certain computational tasks done. "Generative AI is not about training," he says, "it's about inference.

The goal is not the training, the goal is to inference. When you inference, the amount of energy used versus the alternative way of doing computing is much much lower. For example, I showed you the climate simulation in Taiwan— 3,000 times less power.

Not 30% less, 3,000 times less. This happens in one application after another application." Huang also pointed out that AI training can be done anywhere.
