<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>AI Sec</title>
    <description>Practitioner-grade analysis of offensive AI security. Prompt injection, model jailbreaks, agent and tool-use exploitation, AI red team techniques, and adversarial ML — distilled from primary sources, not press releases.</description>
    <link>https://aisec.blog/</link>
    <language>en</language>
    <item>
      <title>FlashRT cuts the GPU bill on long-context prompt injection attacks</title>
      <link>https://aisec.blog/posts/flashrt-towards-computationally-and-memory-efficient-red-tea/</link>
      <guid isPermaLink="true">https://aisec.blog/posts/flashrt-towards-computationally-and-memory-efficient-red-tea/</guid>
      <description>A new optimization-based red-teaming framework claims 2–7x speedup and 2–4x lower memory than nanoGCG against 32K-context LLMs, putting GCG-class attacks back inside the budget of academic and small-team red teams.</description>
      <pubDate>Sun, 03 May 2026 00:00:00 GMT</pubDate>
      <category>prompt-injection</category>
      <category>red-team</category>
      <category>gcg</category>
      <category>long-context</category>
      <category>knowledge-corruption</category>
      <category>rag</category>
      <author>AI Sec Editorial</author>
    </item>
    <item>
      <title>What this site is for</title>
      <link>https://aisec.blog/posts/welcome/</link>
      <guid isPermaLink="true">https://aisec.blog/posts/welcome/</guid>
      <description>AI Sec covers offensive AI security from a working practitioner&apos;s perspective. Here&apos;s what we publish, what we don&apos;t, and how to read it.</description>
      <pubDate>Sat, 02 May 2026 00:00:00 GMT</pubDate>
      <category>meta</category>
      <author>AI Sec Editorial</author>
    </item>
  </channel>
</rss>