Even after two years of interacting with ChatGPT, we still don't know exactly how it determines the text it's going to toss back at you. We know that every time someone enters text into a large language model (LLM), the words are encoded into numbers that then funnel through billions of specially attuned equations. These nodes and weights have been molded by mind-boggling amounts of human writing to represent patterns in language syntax, semantics, and context. The process is much more complicated than that, involving an architecture of multiple neural networks, but the point is that these systems are so massive that divining exactly why ChatGPT (or any other foundation model) spits out the answer that it does is currently functionally impossible.

That "black box" may be fine for generating an email, but what about when AI is applied to a loan approval decision or power grid management? Can we really rely on the technology to aid with high-stakes tasks when we don't even know why it makes the decisions that it does?

Experts say the answer has become a bit more complicated in the last couple of years. While so-called explainability—the idea of a white-box AI with traceable decisions—used to be more of an option in a relatively simpler deep-learning era, the rise of LLMs has changed the thinking around this concept.

With traditional machine learning, one could make a certain trade-off between the higher performance of a more complex model and a simpler algorithm with easily interpretable results, according to Kurt Muehmel, head of AI strategy at data and AI management platform Dataiku. But in the era of foundation models, you could say explainability as a term of art is becoming a bit less…explainable.

"It's getting foggy for a lot of people out there," Muehmel said. "With the advent of LLMs, that's now brought a lot of confusion around the notion of explainability, because we're no longer talking about model selection. No LLM can be explainable in that old sense."
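The trade-off Muehmel describes can be sketched with a toy example. A simple linear "scorecard" model—the kind of easily interpretable algorithm that traditional machine learning allowed—lets you trace a loan decision feature by feature, something no billion-parameter LLM can offer. All feature names and coefficients below are hypothetical, purely for illustration:

```python
# Illustrative sketch (hypothetical names and numbers): a linear scorecard
# whose loan decision is fully traceable, feature by feature.

WEIGHTS = {             # hand-picked, hypothetical coefficients
    "income_k": 0.04,   # per $1k of annual income
    "debt_ratio": -2.5, # penalty for debt-to-income ratio
    "years_employed": 0.3,
}
BIAS = -1.0
THRESHOLD = 0.0

def explain_decision(applicant: dict) -> tuple[bool, dict]:
    """Return the approve/deny decision plus each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = explain_decision(
    {"income_k": 60, "debt_ratio": 0.2, "years_employed": 4}
)
# Every entry in `why` shows exactly how much each input moved the score,
# so a denied applicant can be told precisely which factor was decisive.
```

With a model like this, "explainability" is almost free: the weights *are* the explanation. The trade-off is that such a simple model will usually underperform a large opaque one, which is exactly the tension the foundation-model era has collapsed.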
Keep reading here.—PK