Frontend security: XSS, trust boundaries, and a demo you can run


April 20, 2026 · 7 min read


How XSS reaches the DOM, which browser APIs are sinks, and mitigations that hold in production—sanitization, CSP, cookies, and CSRF pairing.

security · frontend · react

If a string is interpreted as markup or script instead of inert text, that content runs as code in your page’s origin—with the same privileges as your bundle. That is XSS: not a styling bug but a confused deputy, data treated as instructions. This post is the technical spine—attack mechanics, sinks, and fixes that scale—plus a runnable frontend-xss-demo (Vite + React on main; live: xss.lucascoliveira.com) and pointers into this Next.js repo, where headers and MDX reduce the default risk.

Same origin: what the attacker actually wins

Scripts injected into your page run with the same origin as your bundle. In practice that often means:

  • document.cookie visibility for cookies not marked HttpOnly (session theft / fixation chaining).
  • fetch() / XMLHttpRequest to your API with ambient credentials if cookies are sent and CORS allows it—or exfiltration of tokens held in localStorage / sessionStorage if your app put them there.
  • DOM reads of non-secret PII rendered in the page, and UI redress (fake forms, overlays) for phishing inside the app chrome.

So the mitigation is not “use React”—it is never assigning attacker-controlled bytes to a sink that parses HTML or runs script unless a reviewed pipeline has reduced them to a safe type first.
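The “reduced to a safe type first” idea can be made literal in TypeScript with a branded type. This is a minimal sketch, not code from the demo repo—the names SafeHtml, toSafeHtml, and setHtml are illustrative, and the stand-in pipeline just entity-encodes:

```typescript
// Hypothetical "branded type": only the reviewed pipeline can produce SafeHtml,
// so a raw attacker-controlled string cannot reach the HTML sink through the
// type checker.
type SafeHtml = string & { readonly __brand: "SafeHtml" };

// Stand-in for the reviewed pipeline (a real one would use a sanitizer such as
// DOMPurify); entity-encoding is the most conservative reduction to inert text.
function toSafeHtml(untrusted: string): SafeHtml {
  const encoded = untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return encoded as SafeHtml;
}

// The only function allowed to touch the sink accepts SafeHtml, not string,
// so setHtml(el, userInput) is a compile-time error.
function setHtml(el: { innerHTML: string }, html: SafeHtml): void {
  el.innerHTML = html;
}
```

The point is the narrow waist: grep for `as SafeHtml` and you have audited every place raw bytes become markup.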

Stored, reflected, and DOM-based XSS (mechanics)

Stored XSS — Untrusted HTML (or a payload that becomes HTML after template expansion) is persisted (DB, cache, search index). Every user who loads that record hits the sink. Frontend impact: the first render that does innerHTML = row.body or equivalent without sanitization executes the payload.

Reflected XSS — The payload never persists; it bounces off the server or routing layer into the response. Classic: ?q=<script>…</script> reflected into the HTML without encoding. SPA equivalent: location.search or hash parsed client-side and written into the DOM without encoding. The fix is the same: treat URL-derived strings as data; if you must reflect them, encode for the context (see below).

DOM-based XSS — The server response is “clean,” but client-side script reads attacker-controlled input (location, referrer, postMessage, WebSocket messages) and passes it to a sink. Example: eval("handle" + location.hash.slice(1)) or element.innerHTML = decodeURIComponent(...). Static analysis of templates is not enough; you need to audit every path from untrusted input to sink.
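The `eval("handle" + location.hash.slice(1))` pattern above lets the URL pick arbitrary code. A lookup table keeps the hash as data—this is a sketch with illustrative handler names, not demo-repo code:

```typescript
// Known routes live in a Map; unknown keys fall through to a default instead
// of being executed. Map.get also avoids prototype-chain surprises that a
// plain object lookup has when the input is e.g. "constructor".
const handlers = new Map<string, () => string>([
  ["inbox", () => "render inbox"],
  ["settings", () => "render settings"],
]);

function route(hashValue: string): string {
  const handler = handlers.get(hashValue); // plain lookup, never eval
  return handler ? handler() : "not found"; // unknown input stays inert data
}
```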

Sinks: APIs that turn strings into execution or HTML

These are the usual culprits in React/SPA codebases:

Sink — Risk

  • element.innerHTML, insertAdjacentHTML — Parses HTML; any tag or event handler you allow can run script.
  • dangerouslySetInnerHTML — Same as above; React does not sanitize it.
  • document.write — Same.
  • eval, new Function, setTimeout(string) — Direct script execution.
  • javascript: URLs in href / src — Navigation or resource load that executes as a script URL.
  • postMessage handlers that eval or set HTML from event.data — XSS if the origin is not validated or event.data reaches a sink without a safe contract—not only “wrong window” mistakes.

Not a sink by default: textContent, createTextNode, React’s normal child text, attributes that React treats as strings when you do not bypass its escaping. Markdown pipelines become sinks when they emit raw HTML and you assign that HTML to the DOM unsanitized.

Demo note (important): In HTML5, <script> nodes inserted via innerHTML / dangerouslySetInnerHTML are not executed—the parser does not run them the way a classic reflected XSS might. To see execution when HTML is injected, payloads typically rely on attribute handlers (e.g. onerror on img) or similar. The insecure-patterns doc in the demo repo spells this out so you test the app with examples that actually fire after insertion.
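Two standard cheat-sheet payloads (not specific to the demo repo) make the difference concrete:

```typescript
// Inserted via innerHTML, the fragment parser marks the <script> element as
// "already started", so it never executes -- this one stays inert:
const inertAfterInnerHTML = "<script>alert(1)</script>";

// An event-handler attribute fires once the element is in the document and
// the bogus src fails to load -- this one executes after insertion:
const firesAfterInnerHTML = '<img src="x" onerror="alert(1)">';
```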

Mitigation 1: context-appropriate encoding vs sanitization

  • If the UI only needs plain text — Bind text with textContent, React text children, or MDX that compiles to components without an HTML pipeline. No sanitizer required; you are not in the HTML game.

  • If you need rich text (bold, lists, links) — You need either a restricted markup language compiled to safe elements or HTML sanitization with an allowlist (tags + attributes). Encoding (e.g. HTML-entity escaping) is for putting data into HTML text nodes; sanitization is for when you must allow a subset of HTML. Do not confuse the two.

  • Defense in depth for rich text in real products — Validate/sanitize on write (API rejects unknown tags, length limits) and sanitize or render through a safe path on read (render layer). Storage can be rolled back, corrupted, or written by another service version.
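To pin down the encoding half of that distinction, here is a text-context encoder—a sketch, with the escapeHtml name illustrative:

```typescript
// Entity-encoding for the HTML *text* context: the characters that can change
// the parser's state become entities, leaving the string inert. Attribute and
// URL contexts need their own encoders. This is encoding, not sanitization --
// it preserves no markup at all.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;") // & first, so later entities are not double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

If your rich-text requirement is only “show what the user typed”, this is the whole game; a sanitizer enters only when some HTML must survive.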

Mitigation 2: DOMPurify (and how to use it seriously)

DOMPurify is a browser sanitizer with a default profile; you still configure it for your product:

  • ALLOWED_TAGS / ALLOWED_ATTR — Start minimal (p, br, strong, em, a with href only if you need links). Every extra tag is attack surface.
  • ADD_ATTR / FORBID_TAGS — Explicit beats “allow almost everything.”
  • RETURN_DOM / RETURN_TRUSTED_TYPE — Prefer DOM nodes or TrustedHTML-style output if you integrate with Trusted Types.
  • Hook into afterSanitizeAttributes — Strip href values that start with javascript: or odd data: MIME types if you allow links.
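A sketch of that href hook: the isSafeHref policy below is hypothetical (your product may want an allowlist of schemes instead), and the DOMPurify wiring is shown in comments because it only runs in a browser:

```typescript
// Hypothetical href policy for an afterSanitizeAttributes hook: reject
// javascript: and data: URLs even when obfuscated with case, surrounding
// whitespace, or embedded control characters.
function isSafeHref(href: string): boolean {
  const normalized = href.toLowerCase().replace(/[\u0000-\u0020]/g, "");
  return !normalized.startsWith("javascript:") && !normalized.startsWith("data:");
}

// Wiring sketch (browser only), assuming DOMPurify is loaded:
//
// DOMPurify.addHook("afterSanitizeAttributes", (node) => {
//   const href = node.getAttribute("href");
//   if (href !== null && !isSafeHref(href)) node.removeAttribute("href");
// });
// DOMPurify.sanitize(dirty, {
//   ALLOWED_TAGS: ["p", "br", "strong", "em", "a"],
//   ALLOWED_ATTR: ["href"],
// });
```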

In frontend-xss-demo on main, the secure route (/secure) runs todo text through DOMPurify before dangerouslySetInnerHTML; the insecure route (/insecure) does not—same UI, different trust policy. Portuguese /seguro and /inseguro still exist as legacy aliases and redirect to /secure and /insecure.

Mitigation 3: Content-Security-Policy (limits, not a replacement)

CSP reduces what can execute when something slips through. In apps/web/next.config.ts this site sets a CSP with default-src 'self', tight object-src 'none', base-uri 'self', form-action 'self', frame-ancestors 'none', plus script-src / style-src with 'unsafe-inline' because Next.js App Router + MUI sx currently need inline script/style in this setup—documented in code. Nonce- or hash-based script-src would remove broad inline script allowance but requires middleware to inject nonces per request—worth planning if you ship user HTML rarely.

Reality check: CSP does not replace sanitization for user HTML; it narrows blast radius (e.g. can block script hosts you did not allow). Inline event handlers (onerror, etc.) are not automatically neutralized just because you set a CSP—'unsafe-inline' on script-src is common in real apps (including this site’s Next/MUI setup), and blocking handlers usually takes explicit script-src / script-src-attr (or nonces/hashes), depending on browser and CSP level.
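The directive set described above can be assembled like this—a sketch in the spirit of the next.config.ts approach, not the repo’s exact builder, with 'unsafe-inline' carried over as the documented Next.js + MUI compromise rather than a recommendation:

```typescript
// Build the CSP header value from a directive list; dev adds 'unsafe-eval'
// for React tooling, mirroring the env-specific script-src described above.
function buildCsp(isDev: boolean): string {
  const directives: Array<[string, string[]]> = [
    ["default-src", ["'self'"]],
    ["script-src", ["'self'", "'unsafe-inline'", ...(isDev ? ["'unsafe-eval'"] : [])]],
    ["style-src", ["'self'", "'unsafe-inline'"]],
    ["object-src", ["'none'"]],
    ["base-uri", ["'self'"]],
    ["form-action", ["'self'"]],
    ["frame-ancestors", ["'none'"]],
  ];
  return directives.map(([name, values]) => `${name} ${values.join(" ")}`).join("; ");
}
```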

Mitigation 4: cookies and CSRF (pair with XSS)

XSS can bypass CSRF tokens if the token is readable from the DOM or if the attacker script issues requests with credentials. So: prioritize XSS fixes; also:

  • Session cookies: HttpOnly, Secure, SameSite=Lax or Strict where flows allow—reduces cross-site cookie leakage and classic CSRF.
  • Mutating endpoints: pair with SameSite cookies, anti-CSRF tokens, or custom headers + CORS policy so random sites cannot POST credentialed requests.

Frontend’s job: do not put secrets in JS-readable storage if avoidable; use fetch with explicit credentials policy aligned with your API design.
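The cookie attributes above serialize to a Set-Cookie value like this—function and cookie names are illustrative:

```typescript
// HttpOnly keeps the value out of document.cookie (injected script cannot read
// it), Secure restricts transport to HTTPS, SameSite=Lax drops most cross-site
// sends. A server would emit the result as a Set-Cookie header.
function buildSessionCookie(name: string, value: string): string {
  return [
    `${name}=${encodeURIComponent(value)}`,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
  ].join("; ");
}
```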

This codebase (concrete)

  • Headers / CSP — apps/web/next.config.ts: security headers on /(.*); CSP string built in code with env-specific script-src (dev unsafe-eval for React stacks only where needed).
  • Chat API — apps/web/app/api/chat/route.ts: JSON parse, empty check, MAX_MESSAGE_LENGTH cap—abuse shaping, not XSS by itself.
  • Blog — apps/web/lib/blog/mdx.tsx: MDX with a fixed component map (next-mdx-remote/rsc), not raw HTML strings from a CMS. Different threat model than “message body with HTML.”

Runnable comparison: what main gives you

Clone frontend-xss-demo, run npm install and npm run dev; Vite serves the app at http://localhost:5173.

  • Live site: xss.lucascoliveira.com — same demo as the repository; use this if you prefer not to run locally.
  • Todo UI (in-memory only) — Add items on /insecure with payloads from the docs; the same flow on /secure shows sanitized output.
  • Documentation — Under docs/xss/: an overview, three impact walkthroughs (actions without user interaction, internal phishing, session hijacking / storage), plus insecure-patterns and safe-patterns for code-level do’s and don’ts.
  • Fake token — The app stores a demo token in localStorage (see 03-session-hijacking) so you can see what script in the page can read; insecure-patterns also suggests localStorage.getItem('auth_token') in DevTools to inspect it.

Compare /insecure vs /secure in Elements and Console: same components, different handling of the string before it hits the DOM.

Checklist (implementation-level)

  1. Inventory sinks — rg "dangerouslySetInnerHTML|innerHTML|insertAdjacentHTML|eval\\(|new Function" in your app and dependencies.
  2. Rich text — Allowlist sanitizer on every path to HTML; unit-test with payloads like <img src=x onerror=...>, javascript: URLs—and remember innerHTML does not execute <script> the way many cheat sheets imply. For innerHTML-style injection demos, prefer onerror on img (or similar); <svg onload> is often unreliable when inserted that way.
  3. URL params → DOM — Never assign search/hash to HTML; if you must display them, render as text or encode for the context.
  4. Markdown — Sanitize after the full MD→HTML conversion; forbid raw HTML in Markdown if the product allows it.
  5. CSP — Tighten incrementally; use Report-Only in staging if needed.
  6. Cookies / API — Align SameSite, credentials, and CSRF strategy with backend; assume XSS and CSRF get chained.

The takeaway

XSS is control-flow: data crossing into interpretation. Protection is typing at the boundary: plain text, safe structured components, or sanitized HTML with a minimal allowlist—plus CSP and cookie semantics that limit what a stray script can still do. The demo on main makes that boundary visible: /insecure vs /secure, documented impacts under docs/xss/, and discipline in every PR that touches strings near the DOM.