HTB Certified Penetration Testing Specialist (CPTS) blog

Waiting for the result.

March 26, 2026 · 1 min · Lakshya Rastogi

Beyond the Black Box: Mastering Insecure Output Handling (OWASP LLM05)

1. The Context: Why LLM05 Matters When we talk about AI security, the industry tends to obsess over what goes into the model—think prompt injection or jailbreaking. But the real “silent killer” is often what comes out. If an organization treats an LLM’s output as “trusted” or “safe” simply because it was generated by an AI, they are opening a door to classic web vulnerabilities in a very modern way. ...

5 min · Lakshya Rastogi

The AI Integration Risk: Security Consequences of Rapid LLM and ML Adoption

Introduction: The Rush vs. The Risk The Hook The race to AI parity has fundamentally rewired the tech ecosystem. Right now, engineering teams are under immense, top-down pressure to integrate Large Language Models (LLMs) and Machine Learning (ML) capabilities to boost productivity, satisfy investor demands, and maintain a competitive edge. This FOMO-driven development cycle often treats artificial intelligence as a simple, plug-and-play API component. The reality, however, is much more volatile. We are seeing complex, non-deterministic cognitive engines being bolted onto existing architectures at breakneck speed—often heavily prioritizing time-to-market over robust security architecture. ...

6 min · Lakshya Rastogi

The Anatomy of a Prompt Injection: Direct, Indirect, and Jailbreaks

1. Introduction: The AI Architecture Flaw Prompt injection is frequently, and dangerously, misunderstood as a simple “glitch” that makes chatbots say inappropriate things. From an offensive security perspective, it is a critical enterprise vulnerability rooted in a fundamental architectural limitation: Large Language Models (LLMs) cannot inherently distinguish between developer instructions and user-supplied data. Much like classic SQL injection or buffer overflows, when an application blindly concatenates untrusted input with execution commands, the attacker dictates the output. In the era of AI integration, this oversight transforms benign features into remote code execution (RCE) and data exfiltration vectors. ...

3 min · Lakshya Rastogi