Launch readiness · 13 min read

Frontend launch readiness: 14 checks before AI code goes live

Cursor wrote your checkout flow. Claude Code added a settings page in twenty seconds. Codex generated the whole signup form. Everything looks fine in dev — until someone opens it on a phone, in dark mode, with a screen reader, or embedded in a hostile frame. Here’s the deterministic 14-point checklist your AI didn’t run.


Most launch checklists are vibes. “Make sure responsive works. Test dark mode. Run an accessibility audit.” Items vague enough that anyone can tick them off without actually checking anything. Then production breaks.

This one is different. Every check is a deterministic ESLint rule that either passes or fails, identically, on every run over every file — no LLM in the check path, no judgement calls. Run it once, get a 0–100 score and a list of the exact lines to fix. Then run it again on the next AI-generated PR. Then on the one after that.

Fourteen checks across five categories — design tokens, responsive coverage, WCAG 2.2, dark mode, and the frontend-safety basics that a freshly generated React app routinely ships without. We walk through all fourteen below, each with the rule that catches it and a two-line bad/good diff. The closing section turns the whole list into a single command: npx deslint launch-check.

None of this is novel. The rules have existed in some form for years in the ESLint ecosystem. What’s new is the volume of code AI agents are now writing — and the fact that they consistently ship the same fourteen mistakes. The checklist exists because the agents don’t.

Category 1 / 5

Design tokens (3 checks)

Tailwind made the design system visible in className. AI agents make it disappear again with arbitrary values: p-[13px] instead of p-3, bg-[#1A5276] instead of bg-primary. Three checks catch every common drift before it sediments into the codebase.

1

No hardcoded Tailwind spacing

AI agents pick whatever pixel value matches the screenshot they were given, ignoring your spacing scale. Three weeks later your scale has fifteen near-identical values nobody chose, and rhythm collapses. The rule flags p-[Npx], m-[Npx], gap-[Npx], w-[Npx] and friends, and auto-fixes to the nearest scale entry.
What AI ships
<div className="p-[13px] m-[7px] gap-[20px]" />
What it should look like
<div className="p-3 m-2 gap-5" />
2

No hex colors outside the palette

A raw hex inside className ships a brand color that isn’t in your tokens, won’t flip in dark mode, and won’t pass contrast on every surface. The rule catches bg-[#...], text-[rgb(...)], and border-[hsl(...)] and rewrites them to the closest token. CSS variables (var(--brand)) are allowed by default.
What AI ships
<button className="bg-[#1A5276] text-[#fff]" />
What it should look like
<button className="bg-primary text-white" />
3

No magic numbers in grid / flex layout

AI loves grid-cols-[200px_1fr] because it matches the design at one breakpoint. It also breaks the moment a label gets longer or the language switches. The rule flags raw pixel values in grid, flex, and order utilities — CSS functions like minmax() and repeat() pass through.
What AI ships
<div className="grid grid-cols-[200px_1fr] basis-[180px]" />
What it should look like
<div className="grid grid-cols-[var(--sidebar)_1fr] basis-[var(--sidebar)]" />

Category 2 / 5

Responsive (3 checks)

Most AI agents never opened DevTools. They built your UI at one viewport size, the one in their training distribution. Three checks catch the layouts that pretend desktop is everywhere.

4

No fixed-width containers without breakpoints

A literal w-[800px] ships horizontal scroll on every phone in the world. The rule flags any fixed-width container (w, max-w, min-w with a px value) that doesn’t also declare a responsive variant under the configured breakpoints (sm:, md: by default).
What AI ships
<div className="w-[800px]">…</div>
What it should look like
<div className="w-full max-w-[800px] sm:w-auto">…</div>
5

Viewport meta does not block zoom

AI sometimes copy-pastes a user-scalable=no or maximum-scale=1 from a 2016 Stack Overflow answer that was wrong then too. Both block users with low vision from pinch-zooming and fail WCAG 1.4.4. The rule flags the offending viewport meta in the document head.
What AI ships
<meta name="viewport" content="width=device-width, user-scalable=no" />
What it should look like
<meta name="viewport" content="width=device-width, initial-scale=1" />
Caught by: viewport-meta
6

Interactive targets ≥ 24×24

WCAG 2.5.8
A 16×16 close button works on a desktop trackpad and is unhittable on a phone. WCAG 2.2 AA requires interactive targets to be at least 24×24 CSS pixels (or have 24px spacing around them). The rule walks every <button>, <a>, and form control and computes the rendered click box from Tailwind sizing utilities.
What AI ships
<button className="h-4 w-4 p-0">×</button>
What it should look like
<button className="h-6 w-6 p-1 inline-flex items-center justify-center">×</button>

Category 3 / 5

Accessibility — WCAG 2.2 (4 checks)

Accessibility is the first thing AI strips when it’s “cleaning up” code. Every check below cites the WCAG success criterion it enforces, so when reviewers ask what spec line you fail, the answer is in the lint message.

7

Every <img> has meaningful alt

WCAG 1.1.1
AI ships images with no alt attribute, or with placeholder text like alt="image" and alt="photo". The rule treats both as failures, distinguishes them in the message (missing vs. meaningless), and accepts alt="" on decorative images that also carry role="presentation" or aria-hidden.
What AI ships
<img src="/hero.jpg" /> <img src="/hero.jpg" alt="image" />
What it should look like
<img src="/hero.jpg" alt="Two engineers reviewing a dashboard on a laptop" /> <img src="/decoration.svg" alt="" role="presentation" />
Caught by: image-alt-text
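A sketch of how the missing-vs-meaningless distinction can reduce to a function — the placeholder word list is an assumption:

const MEANINGLESS = new Set(["image", "photo", "picture", "img", "graphic"]);

function altProblem(alt: string | undefined, decorative: boolean): string | null {
  if (alt === undefined) return "missing alt attribute";
  if (alt.trim() === "") return decorative ? null : "empty alt on a non-decorative image";
  return MEANINGLESS.has(alt.trim().toLowerCase()) ? "meaningless alt text" : null;
}

altProblem(undefined, false); // "missing alt attribute"
altProblem("image", false);   // "meaningless alt text"
altProblem("", true);         // null — decorative image, passes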
8

Every form input has a programmatic label

WCAG 1.3.1 · 3.3.2
A placeholder is not a label — screen readers don’t announce it once the user starts typing, and the contrast on most placeholder colors fails. The rule walks every input / select / textarea, checks for an associated <label htmlFor>, a wrapping <label> ancestor, or an aria-label/aria-labelledby, and reports the ones that have none.
What AI ships
<input type="email" placeholder="Email" />
What it should look like
<label htmlFor="email">Email</label> <input id="email" type="email" placeholder="Email" />
Caught by: form-labels
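Reduced to a predicate, the association check might look like this sketch — it takes the input’s props plus the set of htmlFor targets collected from the file; wrapping-<label> detection needs the AST and is elided:

type InputProps = { id?: string; "aria-label"?: string; "aria-labelledby"?: string };

function hasProgrammaticLabel(props: InputProps, htmlForIds: Set<string>): boolean {
  return Boolean(
    props["aria-label"] ||
    props["aria-labelledby"] ||
    (props.id && htmlForIds.has(props.id)) // matched by some <label htmlFor>
  );
}

hasProgrammaticLabel({}, new Set());                       // false -> report
hasProgrammaticLabel({ id: "email" }, new Set(["email"])); // true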
9

No generic link text

WCAG 2.4.4
Screen-reader users navigate by listing every link on a page. A list of ten Click here entries reveals nothing. The rule flags click here, here, read more, more, learn more, this link, and the empty / icon-only variants — with a configurable allowlist for project-specific phrasing.
What AI ships
<a href="/docs/api">Click here</a> <a href="/blog/x">Read more</a>
What it should look like
<a href="/docs/api">Read the API reference</a> <a href="/blog/x">Read &quot;Tailwind v4 migration&quot;</a>
Caught by: link-text
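The check itself is a normalization plus a blocklist lookup — a sketch, with the allowlist parameter standing in for the configurable option:

const GENERIC = new Set(["click here", "here", "read more", "more", "learn more", "this link"]);

function isGenericLinkText(text: string, allowlist: string[] = []): boolean {
  const t = text.trim().toLowerCase().replace(/\s+/g, " ");
  if (t === "") return true; // empty / icon-only link with no accessible name
  return GENERIC.has(t) && !allowlist.includes(t);
}

isGenericLinkText("Click here");             // true -> report
isGenericLinkText("Read the API reference"); // false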
10

No outline:none without a focus indicator

WCAG 2.4.7
AI strips outline-none on every interactive element to make designs look “clean,” and forgets to add a replacement. Keyboard users can no longer see where they are. The rule flags any element that nukes the outline without declaring at least one of focus-visible:, focus:ring-*, focus:outline-*, or a corresponding utility.
What AI ships
<button className="outline-none rounded px-3 py-2">Sign in</button>
What it should look like
<button className="outline-none focus-visible:ring-2 focus-visible:ring-primary rounded px-3 py-2">Sign in</button>

Category 4 / 5

Dark mode (1 check)

Either you support dark mode everywhere or you don't ship it. Half-coverage is worse than no coverage — users land on a white modal in a black app and lose trust.

11

Dark mode applied everywhere, not on a sample

You asked the AI to add dark mode. It added dark: variants to half the file and called it done. The rule walks every element with a color or background utility, checks for a paired dark: variant on the same property, and reports the ones still in light mode. Off in the recommended config — turn it to warn when you start shipping dark mode.
What AI ships
<div className="bg-white text-gray-900 border-gray-200">…</div>
What it should look like
<div className="bg-white dark:bg-gray-900 text-gray-900 dark:text-gray-100 border-gray-200 dark:border-gray-800">…</div>

Category 5 / 5

Frontend safety (3 checks)

The basics every shipped app should pass. AI generates these patterns confidently and incorrectly: user comments rendered via dangerouslySetInnerHTML, target=_blank without rel, embedded iframes without sandbox. All three landed as new rules in Deslint 0.8.

12

No dangerouslySetInnerHTML on untrusted data

The single most common XSS path in AI-generated React code: an agent renders a user comment, a markdown blob, or a server response with dangerouslySetInnerHTML and never sanitizes. The rule flags every JSX element that uses the prop, with three deliberate allowlist exceptions: <script type="application/ld+json"> (Schema.org structured data is always dev-controlled), <style> (CSS injection has a different threat model), and Next.js’s <Script> component (inline scripts via the framework’s loading strategy).
What AI ships
<div dangerouslySetInnerHTML={{ __html: comment }} />
What it should look like
<div>{comment}</div>
{/* or, if HTML is genuinely needed */}
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(comment) }} />
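When HTML genuinely is needed, one pattern worth considering is a single choke-point component, so the raw prop never appears anywhere else and review reduces to one file. A sketch assuming the dompurify package (DOMPurify.sanitize is its real API), running client-side:

import DOMPurify from "dompurify";

// The only dangerouslySetInnerHTML in the codebase lives here.
function SafeHtml({ html }: { html: string }) {
  return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />;
}

// usage: <SafeHtml html={comment} />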
13

<a target="_blank"> has rel="noopener noreferrer"

Without noopener, the new tab can navigate the opener via window.opener (reverse tab-nabbing). Without noreferrer, the destination sees the source URL in the Referer header. The rule flags missing rel attributes and partial rel values (only one of the two tokens), and autofixes on JSX.
What AI ships
<a href="https://x.com/u" target="_blank">Profile</a> <a href="https://x.com/u" target="_blank" rel="noreferrer">Profile</a>
What it should look like
<a href="https://x.com/u" target="_blank" rel="noopener noreferrer">Profile</a>
14

<iframe> has a sandbox attribute

An iframe without sandbox runs with no restrictions — the embedded page can run scripts, submit forms with credentials, open popups, and attempt top-level navigation to break out of the parent. The rule flags every <iframe> missing the attribute. Suggestion only: the right sandbox value depends on what the embed needs to do, so we don’t auto-fix.
What AI ships
<iframe src="https://embed.example.com" />
What it should look like
<iframe src="https://embed.example.com" sandbox="" /> {/* or opt-in only what's needed */} <iframe src="https://embed.example.com" sandbox="allow-scripts allow-same-origin" />
Caught by: iframe-sandbox

Run the whole checklist in one command

Reading a 14-point list is helpful exactly once. The goal is for it to run on every PR before anyone else has to think about it. Two commands do that:

$ npx deslint launch-check
Frontend Launch Readiness: 73/100
Spacing 56 · Typography 80 · Responsive 62 · Consistency 95
17 violations, 9 auto-fixable

$ npx deslint fix --all
Fixed 9 violations across 4 files

Both commands run locally, with zero LLM in the check path and zero code leaving your machine. The first command also writes a full HTML report to .deslint/report.html. Once a project clears the checklist locally, wire the rules into the agent loop with the MCP server so Cursor / Claude Code / Codex / Windsurf can’t silently regress what you just fixed, and into the merge gate with the GitHub Action so PRs that drop the score are blocked.
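If you’d rather wire the rules into an existing ESLint setup than adopt the CLI wholesale, a flat-config sketch might look like the following — the import path and severities are assumptions; the five rule ids are the ones cited in the checks above:

// eslint.config.js
import deslint from "deslint";

export default [
  {
    plugins: { deslint },
    rules: {
      "deslint/viewport-meta": "error",
      "deslint/image-alt-text": "error",
      "deslint/form-labels": "error",
      "deslint/link-text": "error",
      "deslint/iframe-sandbox": "warn",
    },
  },
];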

Why a deterministic checklist beats “run it through another AI”

A second LLM reviewing the first one feels productive and isn’t. Two reasons.

Same input, different output. Run the same scan twice and an AI reviewer flags different things on the second pass. Run a deterministic ESLint rule twice and the messages are byte-identical. When the merge gate fails, you can point at a line. When you fix it, the failure goes away.

No exfiltration surface. Every rule on this checklist runs on your machine against your files. No code leaves your laptop, your CI runner, or your air-gapped enterprise environment. The MCP server uses stdio — the same protocol your editor already uses to talk to language servers — so the data path is local, auditable, and indistinguishable from any other lint run.

The rules in this checklist exist because the patterns AI gets wrong are the same ones humans got wrong before AI — we just now hit them an order of magnitude more often. ESLint, Tailwind, and the WCAG specs already encode the answers. A linter is the shape of tool that turns those answers into a one-command checklist you can ship behind.
