Generative AI is becoming increasingly unavoidable as tech companies weave the technology into all kinds of products. But how much do we actually know about the data on which these models are trained, the rules that govern them, or their resource usage? When Stanford University researchers first looked into questions like these last October, the answer was a resounding "not much." Half a year later, a reassessment found some slightly more encouraging signs, though there's still plenty of room for improvement.

The goal of the transparency index, published by the Stanford Institute for Human-Centered Artificial Intelligence, is to encourage developers of societally consequential AI to reveal more about how these systems work. It comes as governments around the world take more steps to create guardrails around AI, often requiring disclosures about the development process and the expected impact of deployment.

Authors of the Foundation Model Transparency Index graded major players in the AI arms race, such as OpenAI, Google, and Meta, across various measures of openness in their development processes, then assigned each model an overall score out of 100. This time around, the average score on the index climbed to 58 points out of 100, up from 37 points in October, as more companies disclosed or complied with the index's requests for information about hardware, compute power, and energy usage. Still, the study noted that some areas remain under wraps. Keep reading here. —PK