ULTRΛZΛRTREX (uz)

(SlowLow999)

About Me

Hey, I'm UltraZartrex. I'm a security researcher fascinated by how systems work—and even more fascinated by how they break. I spend my time poking at AI code, breaking sandboxes, and pushing the boundaries of what's possible (and impossible).

My research focuses on novel jailbreak techniques and architectural flaws in large language models.

Latest News

Why not check the news and follow? ¯\_(ツ)_/¯

Skills

Security Research · AI Jailbreaking · Sandbox Evasion · Prompt Engineering · Vulnerability Analysis

Write-ups

Gemini Security Regression

Deep dive into the return of a data exfiltration vulnerability.

[Read on Medium]

GPT-5 Thinking Jailbreak

Initial discovery of a novel jailbreak technique.

[View on X]

Claude Sonnet 4 Jailbreak

Analysis of the first one-shot jailbreak technique.

[View Research on GitHub]

GitHub Copilot (CVE-2025-XXXXX)

Deep dive into the Cross-Context Injection vulnerability.

[Read More - Pending Disclosure]

Gemini’s Security Regression: When Old Bugs Come Back to Haunt

This in-depth analysis covers a critical security regression in Google's Gemini, where a previously patched data exfiltration vulnerability resurfaced. The write-up walks through the proof of concept, speculates that the root cause lies in the brittleness of RLHF-based patches, and discusses the systemic risks this poses for enterprise AI security.

GPT-5 Thinking: Novel Jailbreak Discovery

This research represents the initial discovery and public disclosure of a novel jailbreak technique effective against OpenAI's "GPT-5 Thinking" model. The finding demonstrates a new vector for bypassing the model's safety and alignment protocols, contributing to the ongoing effort to secure frontier AI systems.

Claude Sonnet 4: The First One-Shot Jailbreak

This research detailed a novel technique for bypassing the ethical and safety guardrails of Anthropic's Claude Sonnet 4 model in a single prompt. The method involved a sophisticated logical override that forced the model to ignore its safety instructions. The finding was significant because it demonstrated that even advanced models remained susceptible to single-turn, high-impact jailbreaks.

GitHub Copilot: Cross-Context Injection

This research (pending disclosure as CVE-2025-XXXXX) uncovered a critical architectural flaw in the GitHub Copilot agent. It demonstrated that the agent's context could be "poisoned" by hidden instructions in repository files, enabling stealthy supply chain attacks in which a trusted developer could be tricked into committing malicious code.