Amazon Bedrock Guardrails expands support for code domain
aws.amazon.com - machine-learning

Amazon Bedrock Guardrails now supports protection against undesirable content within code elements, including user prompts, comments, variables, function names, and string literals. Amazon Bedrock Guardrails provides configurable safeguards for building generative AI applications at scale. These safety controls work seamlessly whether you're using foundation models from Amazon Bedrock or applying them at various intervention points in your application using the ApplyGuardrail API. Currently, Amazon Bedrock Guardrails offers six key safeguards to help detect and filter undesirable content and confidential information, helping you align your AI applications with your organization's responsible AI policies. These safeguards include content filters, denied topics, word filters, sensitive information filters, contextual grounding checks, and Automated Reasoning checks.
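As a minimal sketch of how the standalone ApplyGuardrail API might be invoked on code-bearing text, the snippet below builds a request payload and checks whether the guardrail intervened. The guardrail identifier and version are placeholder values, and `build_request` is a hypothetical helper introduced here for illustration; the `apply_guardrail` call itself is the boto3 `bedrock-runtime` operation named in the announcement.

```python
# Sketch: screening a code snippet with a pre-configured guardrail
# via the ApplyGuardrail API. GUARDRAIL_ID and GUARDRAIL_VERSION are
# placeholders -- substitute your own guardrail's values.

GUARDRAIL_ID = "gr-example123"  # placeholder identifier
GUARDRAIL_VERSION = "1"         # placeholder published version


def build_request(text: str, source: str = "INPUT") -> dict:
    """Build the ApplyGuardrail request payload for a block of text,
    such as a user prompt containing code."""
    return {
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        "source": source,  # "INPUT" for prompts, "OUTPUT" for responses
        "content": [{"text": {"text": text}}],
    }


def passes_guardrail(text: str) -> bool:
    """Return True if the guardrail lets the content through unchanged."""
    # boto3 is imported lazily so build_request stays usable
    # without the AWS SDK installed.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.apply_guardrail(**build_request(text))
    # "GUARDRAIL_INTERVENED" means a filter matched and content was
    # blocked or masked; "NONE" means it passed.
    return response["action"] == "NONE"
```

Because the same payload shape works with `source` set to `"OUTPUT"`, the check can be applied both before a prompt reaches a model and after a response comes back, matching the "various intervention points" described above.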
As organizations adopt AI systems for software development and code automation, they face new security and safety challenges. For example, coding agents often have access to sensitive development environments, repositories, and build systems, making it essential ...

