Like college students who skip assigned reading, generative AI chatbots have a reputation for confidently spouting wrong answers. Amazon Web Services (AWS) is looking to curb that habit with a new tool that forces the AI to show its work.

The tech giant's cloud arm is rolling out an option called a "contextual grounding check" that compels large language models (LLMs) to back up their output with a reference text. Enterprise AI users can set the confidence threshold for accuracy they require, and Amazon claims the tool can filter out as much as 75% of hallucinations on retrieval-augmented generation and summarization tasks.

The tool joins the other customizable guardrails already in place on Amazon's Bedrock generative AI platform, which let users filter out objectionable content such as offensive words, personally identifiable information, or simply irrelevant topics. AWS also announced that these guardrails, first made widely available in April, will now be offered as a standalone API.

Confidence boost: The trustworthiness of AI-generated output continues to be an obstacle as companies scramble to build LLM tools for everything from customer service to summarization. Safeguarding tools like these aim to set AWS's platform apart as a safer choice, especially for companies in highly regulated industries like banking and healthcare, according to AWS VP of AI Products Matt Wood.

"Now we can protect against erroneous, confidently wrong answers that the model might accidentally generate," Wood told Tech Brew at a New York event this week. "We have seen from customers in regulated industries and many others, that as they move these systems to production, guardrails are just a 'do not pass go, do not collect $200' kind of capability."

Keep reading here.—PK
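For readers who want a concrete picture of what the feature described above might look like in code, here is a minimal sketch in Python using boto3. It assumes the guardrail API shapes AWS publishes for Bedrock: a `create_guardrail` call carrying a `contextualGroundingPolicyConfig` with grounding and relevance thresholds, and the standalone `ApplyGuardrail` runtime call that checks a candidate answer against a reference text without invoking a model. The guardrail name, threshold values, and example strings are illustrative assumptions, not details from AWS's announcement.

```python
# Hypothetical sketch of a Bedrock contextual grounding check; names,
# thresholds, and example text are assumptions, not AWS's published demo.
import boto3

bedrock = boto3.client("bedrock")          # control plane: manage guardrails
runtime = boto3.client("bedrock-runtime")  # runtime: apply guardrails to content

# Create a guardrail whose contextual grounding filter blocks responses that
# score below the chosen confidence threshold against the reference text.
guardrail = bedrock.create_guardrail(
    name="grounding-demo",  # illustrative name
    blockedInputMessaging="Sorry, I can't help with that.",
    blockedOutputsMessaging="Sorry, I can't answer that reliably.",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.85},  # answer must be supported by the source
            {"type": "RELEVANCE", "threshold": 0.5},   # answer must address the user's query
        ]
    },
)

# The standalone ApplyGuardrail API checks content on its own: pass the
# reference text, the user's question, and the candidate answer to evaluate.
result = runtime.apply_guardrail(
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    source="OUTPUT",
    content=[
        {"text": {"text": "The refund window is 30 days.", "qualifiers": ["grounding_source"]}},
        {"text": {"text": "How long do I have to return an item?", "qualifiers": ["query"]}},
        {"text": {"text": "You have 90 days to return items.", "qualifiers": ["guard_content"]}},
    ],
)

# "GUARDRAIL_INTERVENED" means the ungrounded answer was caught and blocked.
print(result["action"])
```

In this sketch the candidate answer contradicts the reference text, so the grounding score falls below the threshold and the guardrail intervenes; a supported answer would pass through with an action of "NONE".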