ULTRΛZΛRTREX
(uz // SlowLow999)
AI Security Researcher & Prompt Engineer
About Me
I'm a security researcher fascinated by how systems work, and even more by how they break. My research focuses on novel jailbreak techniques, architectural flaws in LLMs, and AI alignment analysis. I enjoy discovering vulnerabilities that contribute to more robust and secure AI systems, and my goal is to proactively identify and mitigate AI risks for a safer, more trustworthy technological future.
I'm currently open to collaboration on AI red-teaming projects. Feel free to reach out.
Featured Research
Thought Forgery
A novel jailbreak technique that bypasses safety protocols by injecting a forged Chain of Thought, acting as a universal amplifier for other attacks.
The Serendipity Effect
A study measuring response convergence in 30+ LLMs, revealing significant homogenization in AI outputs and raising questions about training data overlap and algorithmic bias.
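One plausible way to quantify this kind of convergence is to embed each model's answer to the same prompt and average the pairwise cosine similarities. The sketch below is a minimal illustration of that idea, not the study's actual methodology; it assumes sentence-transformers and scikit-learn are installed, and the model names and responses are hypothetical placeholders rather than data from the study.

```python
# Minimal sketch: quantify cross-model response convergence as the
# mean pairwise cosine similarity between response embeddings.
# Assumptions: responses are already collected (one per model) and the
# sentence-transformers / scikit-learn libraries are available.
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sample: each model's answer to the same prompt.
responses = {
    "model_a": "Photosynthesis converts light energy into chemical energy.",
    "model_b": "Plants use photosynthesis to turn sunlight into chemical energy.",
    "model_c": "Light energy is transformed into chemical energy by plants.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")
names = list(responses)
embeddings = encoder.encode([responses[n] for n in names])

# Higher mean pairwise similarity = stronger homogenization.
sims = cosine_similarity(embeddings)
pairs = list(combinations(range(len(names)), 2))
convergence = sum(sims[i][j] for i, j in pairs) / len(pairs)
print(f"Mean pairwise similarity across {len(names)} models: {convergence:.3f}")
```

Scaling the same loop to 30+ models and many prompts, and comparing the score against a baseline of deliberately diverse texts, would be one way to separate genuine homogenization from topical overlap.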
Jailbreak Arsenal
Adversarial Correction
Weaponizing AI autonomy via orthographic correction
1Shot-Puppetry
Universal role-play obfuscation jailbreak
C0d33x3
Indirect injection for ChatGPT via Canvas
Cl4ud33X3
API weaponization to bypass Claude UI
Cyph3r-Attack
Encoding attack for non-reasoning models
Gemini Data Leak
Alignment break that leaks training data
GPT-5 Jailbreak
Initial public disclosure of 'Thinking' hijack
H03-ny
Indirect image generation jailbreak (ChatGPT)
N3w P0l!cy (Policy Injection)
GPT OSS jailbreak via internal policy weaponization
!Special_Token
Permanent jailbreak via ChatGPT Custom Instructions
Other Discoveries
Gemini Security Regression
GitHub Copilot: Cross-Context Injection
[Pending Disclosure - CVE-2025-XXXXX]