In the two years since ChatGPT thrust large language models (LLMs) into the mainstream, generative AI has seen a flurry of advances; take it from a publication that's tried to keep up with them all. But some of that progress may be slowing in the coming months, according to recent reports that have prompted much discussion in the AI world over the last couple of weeks.

At least three of the top AI companies (OpenAI, Google, and Anthropic) may be hitting a wall of diminishing returns as they try to scale up the next iterations of their flagship models, according to reports in The Information, Bloomberg, and Reuters.

For years, LLM development has hinged on the idea that the more data and computing power are shoveled into training AI systems, the smarter the resulting models will be. But tech companies are now reportedly hitting roadblocks: they've nearly run out of human-authored training data, ballooning training costs are yielding disappointing results, and energy crunches have set back training runs, according to Reuters.

All of that could have big consequences for the future of AI development, as companies may decide to home in on more specialized tasks or pursue different kinds of gains, experts told Tech Brew. Tech stocks that have soared on AI promises may adjust accordingly.

But for businesses still trying to harness generative AI for everyday tasks, the news doesn't necessarily mean much, these experts said. Many companies are already finding that the massive models on the market are often bigger than they need for their purposes, according to Arun Chandrasekaran, a distinguished VP analyst with Gartner.

"I haven't spoken to a single CIO who's told me, 'I want a 10 trillion parameter model rather than an 8 trillion parameter model,'" Chandrasekaran said.

Keep reading here.—PK
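A quick illustrative aside (a sketch of the underlying idea, not something taken from these reports): the "more data and compute yields smarter models" intuition is often formalized in the research literature as a power law, along the lines of the Chinchilla-style form below, where every symbol is our assumption rather than a figure from the coverage.

% Illustrative, assumed scaling-law form (not from the article):
% L is pretraining loss, N is parameter count, D is training tokens;
% E, A, B, alpha, beta are empirically fitted constants.
\[ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} \]

Because the improvement terms shrink as power laws, each additional order of magnitude of parameters or training data buys a smaller absolute drop in loss, which is one way to read the diminishing returns these reports describe.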