OpenAI CEO Sam Altman said the company was forced to stagger the rollout of its latest model, GPT-4.5, because OpenAI is “out of GPUs.”
In a post on X, Altman said that GPT-4.5, which he described as “huge” and “expensive,” will require “tens of thousands” more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to ChatGPT Pro subscribers starting Thursday, followed by ChatGPT Plus customers next week.
GPT-4.5 is very expensive, likely in part because of its size. OpenAI charges $75 per million tokens (roughly 750,000 words) fed into the model and $150 per million tokens generated by the model. That’s 30 times the input cost and 15 times the output cost of OpenAI’s workhorse GPT-4o model.
GPT-4.5 pricing is unhinged. If this doesn’t smell like huge models, I’d be disappointed pic.twitter.com/1kk5lpn9gh
– Casper Hansen (@casper_hansen_) February 27, 2025
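For a rough sense of what those per-token rates mean in practice, here is a minimal back-of-the-envelope sketch. The GPT-4o rates used below are inferred from the article’s “30 times input / 15 times output” comparison, and the example request size is hypothetical.

```python
# Back-of-the-envelope cost comparison based on the per-token rates cited above.
# GPT-4o rates are inferred from the article's 30x input / 15x output claim;
# the request size used in the example is hypothetical.

RATES_PER_MILLION_TOKENS = {
    "gpt-4.5": {"input": 75.00, "output": 150.00},
    "gpt-4o": {"input": 2.50, "output": 10.00},  # implied by the 30x/15x comparison
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    rates = RATES_PER_MILLION_TOKENS[model]
    return (input_tokens / 1_000_000) * rates["input"] + \
           (output_tokens / 1_000_000) * rates["output"]

# Hypothetical request: 10,000 input tokens and 2,000 output tokens.
for model in RATES_PER_MILLION_TOKENS:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.3f}")
# gpt-4.5: $1.050   vs.   gpt-4o: $0.045
```

Even a modest request is more than twenty times pricier on GPT-4.5 under these assumed rates, which is what prompted reactions like the one above.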
“We’ve been growing a lot and are out of GPUs,” Altman wrote. “We will add tens of thousands of GPUs next week and roll it out to the Plus tier then… This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
Altman has previously said that a lack of computing capacity is slowing the rollout of the company’s products. OpenAI hopes to address this in the coming years by developing its own AI chips and building out a large network of data centers.