Astra Security unveils research on AI security: Exposing critical risks and defining the future of large language model pentesting
Astra Security presented its latest research findings on vulnerabilities in Large Language Models (LLMs) and AI applications at CERT-In Samvaad 2025, a prestigious cybersecurity conference, bringing to light the growing risks AI-first businesses face from prompt injection, jailbreaks, and other novel threats.
This research not only contributes to the OWASP Top 10: LLM & Generative AI Security Risks but also forms the basis of Astra’s enhanced testing methodologies aimed at securing AI systems with research-led defense strategies. From fintech to healthcare, Astra’s findings expose how AI systems can be manipulated into leaking sensitive data or making business-critical errors—risks that demand urgent and intelligent countermeasures.
AI is rapidly evolving from a productivity tool to a decision-maker, powering financial approvals, healthcare diagnoses, legal workflows, and even government systems. But with this trust comes a dangerous new frontier of threats.
“The catalyst for our research ...