Beyond the Black Box: Mastering Insecure Output Handling (OWASP LLM05)

1. The Context: Why LLM05 Matters When we talk about AI security, the industry tends to obsess over what goes into the model—think prompt injection or jailbreaking. But the real “silent killer” is often what comes out. If an organization treats an LLM’s output as “trusted” or “safe” simply because it was generated by an AI, they are opening a door to classic web vulnerabilities in a very modern way. ...

5 min · Lakshya Rastogi

Exploiting an Enterprise AI: Chaining Vulnerabilities in a RAG HR Gateway

Quick Links: Source Code: View on GitHub (vulnerable-rag-agent) Author: Connect with Lakshya Rastogi on LinkedIn Executive Summary: The AI Blind Spot As startups and enterprises rapidly integrate Large Language Models (LLMs) into their internal workflows, a critical new attack surface is emerging: the data we trust the AI to process. To demonstrate this risk, I engineered “Happy-HR,” a deliberately vulnerable Retrieval-Augmented Generation (RAG) application. Designed as an internal HR assistant, the bot summarizes candidate resumes by parsing PDF uploads. However, by exploiting how the application handles this untrusted file input, I demonstrated how an external attacker could completely hijack the AI’s core instructions. ...

5 min · Lakshya Rastogi

The AI Integration Risk: Security Consequences of Rapid LLM and ML Adoption

Introduction: The Rush vs. The Risk The Hook The race to AI parity has fundamentally rewired the tech ecosystem. Right now, engineering teams are under immense, top-down pressure to integrate Large Language Models (LLMs) and Machine Learning (ML) capabilities to boost productivity, satisfy investor demands, and maintain a competitive edge. This FOMO-driven development cycle often treats artificial intelligence as a simple, plug-and-play API component. The reality, however, is much more volatile. We are seeing complex, non-deterministic cognitive engines being bolted onto existing architectures at breakneck speed—often heavily prioritizing time-to-market over robust security architecture. ...

6 min · Lakshya Rastogi

The Anatomy of a Prompt Injection: Direct, Indirect, and Jailbreaks

1. Introduction: The AI Architecture Flaw Prompt injection is frequently, and dangerously, misunderstood as a simple “glitch” that makes chatbots say inappropriate things. From an offensive security perspective, it is a critical enterprise vulnerability rooted in a fundamental architectural limitation: Large Language Models (LLMs) cannot inherently distinguish between developer instructions and user-supplied data. Much like classic SQL injection or buffer overflows, when an application blindly concatenates untrusted input with execution commands, the attacker dictates the output. In the era of AI integration, this oversight transforms benign features into remote code execution (RCE) and data exfiltration vectors. ...

3 min · Lakshya Rastogi