🎯 AGENTICBROWSERGAUNTLET

Simulation: No real threats
Community â–¶ 127 âś“ 4 âś— 1
Levels Progress

Learn how AI browser assistants get tricked and practice finding safe "flags" hidden in webpages.

Some pages hide instructions your AI helper could follow by mistake. Learn how these tricks work and practice spotting them with safe, hands on mini challenges.

✅ 100% simulated • No external calls • We only count: games played, correct/failed flag submits

  • •
    Agentic browser: a browser with an AI helper that can read pages, click, fill forms, and automate simple tasks for you.
  • •
    Prompt injection: hidden or sneaky text in content that tries to change the AI's instructions so it does something unintended.
  • •
    Guardrails: safety checks and filters around the AI that reduce risky behavior (e.g., focusing on sanitized, visible page content and limiting what tools it can use). Helpful—but not magic.

Why flags, not "trick the browser"

Hands-on learning, zero risk. We use pretend flags so you learn safely; real "trick the agent" stunts can be harmful—and many modern guardrails block them—while DIY or hobby agents might still miss them.

⚠️ The gotcha: when a page "talks back" to your agent

This is the most common way AI helpers get steered: they read everything—even parts you'd never see.

AI helpers are great at following instructions—and that's the problem. Some webpages hide extra instructions in places you don't usually read (like HTML comments, metadata, or invisible elements). When your agent "reads the page," it might treat those hidden bits as commands. That's called prompt injection.

Common hiding spots include:

  • • HTML comments (not visible on the page)
  • • hidden DOM (elements styled to be invisible)
  • • meta/Open Graph tags (meant for machines, not humans)

See exactly how a hidden instruction can hijack an agent

What this site gives you (in safe, bite-size steps)

This app is a hands-on training ground where you learn how agents can be tricked—and how to spot it—by playing through short, realistic challenges. It's built to be safe and self-contained; nothing here touches your real accounts.

1

Open a level and read the goal

2

Use your agentic browser to try the task

3

Enter the hidden flag if your agent found it

4

See "What happened?" plus simple defenses

📚 What you'll learn, in plain terms

Spot hiding spots

Where hidden instructions often live on websites

Understand confusion

Why agents sometimes confuse data with instructions

Reduce risk

Everyday habits that reduce risk—before your agent clicks

🛡️ Why "guardrails" aren't magic (and what to do instead)

Modern AI tools try to filter out bad instructions, monitor unusual actions, and isolate risky content—but nothing is foolproof. Because language models treat most text as potential instructions, clever attackers keep finding ways to slip messages past filters. So you still need a little human savvy.

🚨 Street-smart checklist for everyday use

•
Glance at the page source if something feels off.

Hidden instructions often live in comments, hidden elements, or meta tags.

•
Prefer visible-text-only modes

When you just want a summary; avoid letting the agent auto-traverse hidden or cross-origin content.

•
Approve actions explicitly.

If your agent wants to post data, fill a form, or call an API, require a confirmation step.

•
Limit scope.

Tell the agent which page(s) it may read—avoid "browse the whole site" unless you trust it.

•
Separate high-risk tasks.

Don't keep banking or email tabs open while experimenting with new agent features.

🚀 Ready to try?

Start at Level 1. You'll see exactly how a hidden instruction can nudge an AI helper—and how a small change in how you use your agent can block it. Each level gives a clear explainer and "defenses to know," so you leave with practical instincts, not just theory.

0/10

Mission Status

Started strong — continue to Level 1.

Training Levels

New to these terms? No stress; every level starts with Plain Words that explain them in simple language.

01

HTML Comments

Intro

Hidden HTML comments contain instructions that agents eagerly read and follow.

You'll learn:
HTML HTML comments CSS properties
â–¶ 17 âś“ 3 âś— 1
Try this Level
02

Hidden DOM

Novice

Invisible elements and template tags hide malicious payloads.

You'll learn:
DOM (page structure) Hidden elements CSS properties
â–¶ 10 âś“ 1 âś— 0
Try this Level
03

Meta & OG Tags

Novice

Page metadata becomes instruction channels. OG tags are perfect hiding spots.

You'll learn:
Meta tags Open Graph Page metadata
â–¶ 10 âś“ 0 âś— 0
Try this Level
04

Accessibility Trap

Intermediate

Alt text and ARIA labels weaponized for injection attacks.

You'll learn:
Accessibility Alt text ARIA labels
â–¶ 16 âś“ 0 âś— 0
Try this Level
05

Structured Data

Intermediate

JSON-LD scripts provide machine-readable metadata with hidden payloads.

You'll learn:
Structured data JSON-LD Schema.org
â–¶ 13 âś“ 0 âś— 0
Try this Level
06

Off-Path Files

Intermediate

Infrastructure files become covert instruction channels for curious agents.

You'll learn:
Auxiliary files robots.txt sitemap.xml
â–¶ 12 âś“ 0 âś— 0
Try this Level
07

Cross-Origin

Advanced

Hostile cross-origin embedded content injects malicious instructions.

You'll learn:
Cross-origin iframe Third-party widget...
â–¶ 15 âś“ 0 âś— 0
Try this Level
08

PDF Injection

Advanced

Embedded documents become trojan horses for agent manipulation.

You'll learn:
Embedded object Embedded PDF
â–¶ 10 âś“ 0 âś— 0
Try this Level
09

Tool Hijacking

Expert

Hidden instructions manipulate agent tool usage and API calls.

You'll learn:
Agent tools API request Parameters
â–¶ 12 âś“ 0 âś— 0
Try this Level
10

Multi-Hop Exfil

Champion

Complex chained injection with multi-origin data exfiltration.

You'll learn:
Multi-hop Data exfiltration Linked research...
â–¶ 12 âś“ 0 âś— 0
Try this Level