With dozens of new AI laws and rules now in place across the globe, companies are making countless decisions about potential dangers, adding complexity to using AI responsibly. Kathy Baxter, principal architect of Salesforce's responsible AI and tech practice, spoke with us about how her team thinks ahead on AI responsibility, her role advising government agencies, and the state of the regulatory landscape. This conversation has been edited for length and clarity.

You've been working in the responsible AI space for years now. What has changed about your work since everybody went crazy for generative AI?

With generative AI, many of the risks are the same as with predictive AI, but on steroids: even higher risks of, say, bias and toxicity in content creation. There are also some new concerns, like hallucinations, where the model completely makes up information out of thin air. And there's a real sustainability risk, because this technology consumes far more carbon and water than the traditional, smaller predictive models do. So as we go through and think about our products, there's a larger risk space we need to consider.

It has also changed in that it's not as straightforward to address each of these issues. There are some techniques that can help. For example, RAG (retrieval-augmented generation), which grounds our models in our customers' data, can really help with hallucinations, but it's not always sufficient, especially if our customers aren't practicing good data hygiene. If they don't have complete, accurate, up-to-date data, the model can end up hallucinating just as much as if you weren't pointing it to the customer's data at all. So thinking about this increased surface area, and all the ways we need to mitigate it, is basically how our practice has changed. We also have a whole lot more people on our team covering all of this work now.
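The RAG point above can be sketched in a few lines. Everything here is a hypothetical illustration (a toy keyword-overlap retriever, made-up function names and data), not Salesforce's implementation; the point is that grounding only helps when the retrieved data is relevant and current.

```python
# Minimal sketch of RAG-style grounding. All names and data are
# hypothetical illustrations, not a real product's API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    # Keep only documents that actually share terms with the query.
    return [d for d in scored[:top_k] if q_terms & set(d.lower().split())]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Inject retrieved context into the prompt to constrain the model."""
    context = retrieve(query, documents)
    if not context:
        # No relevant, up-to-date data retrieved: the model falls back on
        # general knowledge, which is where hallucination risk returns.
        return f"Answer from general knowledge: {query}"
    joined = "\n".join(f"- {doc}" for doc in context)
    return (
        "Answer ONLY from the context below; say 'unknown' if it is absent.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}"
    )

# Example: with clean customer data, the prompt is grounded.
docs = [
    "Invoice 1042 was paid on 2024-03-01.",
    "Customer Acme renewed their contract in January.",
]
prompt = build_grounded_prompt("When was invoice 1042 paid?", docs)
```

If `docs` were empty or stale, `build_grounded_prompt` would fall back to an ungrounded prompt, which mirrors the data-hygiene caveat above: retrieval only mitigates hallucination when there is accurate data to retrieve.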
As for me personally, I've increasingly worked with our government affairs team to engage with policymakers and governmental groups on questions like: How do we set standards? How do we develop policies to ensure that this technology is safe for everyone?