Free · No install · Runs locally
Is your AI-built app ready to ship?
Cursor just rewrote your checkout flow. Claude Code added a settings page in eleven seconds. Codex generated the whole signup form. Everything looks fine — until someone opens it on a phone, in dark mode, with a screen reader. Run one command before you push.
Zero install, zero config. Detects React / Vue / Svelte / Angular / plain HTML on its own. Requires Node 20.19+.
What you get back
A single 0-100 score, five category bars, and a Fix Plan that tells you the literal next command to run. No dashboard, no account, no telemetry; the report renders in your terminal and in `.deslint/report.html`.
Real scan output: Fix Plan, per-rule patterns, file hotspots, color palette diff, and a 20-run trend chart. The same file lands at `.deslint/report.html` after every scan.
What it catches that your AI missed
37 deterministic rules across 5 scoring categories — design system, spacing, typography, responsive coverage, consistency — plus the safety basics every shipped app should pass. Every check is plain ESLint underneath, so no LLM ever sees your code.
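To make "plain ESLint underneath" concrete, here is a toy rule in the spirit of `no-arbitrary-spacing`. It's a sketch, not Deslint's source: the regex, the message, and the rule shape are all illustrative assumptions.

```ts
import type { Rule } from "eslint";

// Matches arbitrary Tailwind spacing values like p-[13px] or mt-[7px].
const ARBITRARY_SPACING = /\b[pm][trblxy]?-\[[^\]]+\]/;

const noArbitrarySpacing: Rule.RuleModule = {
  meta: {
    type: "problem",
    docs: { description: "Disallow arbitrary Tailwind spacing values" },
    schema: [],
  },
  create(context) {
    return {
      // With JSX parsing enabled, className values arrive as plain string literals.
      Literal(node) {
        if (typeof node.value === "string" && ARBITRARY_SPACING.test(node.value)) {
          context.report({
            node,
            message: "Arbitrary spacing value; use the nearest spacing token.",
          });
        }
      },
    };
  },
};

export default noArbitrarySpacing;
```

Same input, same output, every run; there is no model in the loop to drift.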
Hardcoded Tailwind values
Your AI guessed `p-[13px]` instead of `p-4`. Three weeks later your design tokens are useless.
Rules: `no-arbitrary-spacing`, `no-arbitrary-colors`, `no-arbitrary-typography`, `no-arbitrary-border-radius`
Output: Auto-fix. One command rewrites them to the nearest token.
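A before/after sketch of what the auto-fix does, assuming Tailwind's default scale; the exact token each value snaps to is the Fix Plan's call, not shown here.

```tsx
// Before: the AI's guesses. Arbitrary one-off values, zero tokens.
<div className="p-[13px] mt-[7px] rounded-[5px]">Checkout</div>

// After the auto-fix: snapped to the nearest tokens on the default scale.
<div className="p-4 mt-2 rounded-md">Checkout</div>
```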
A layout that assumes every screen is a desktop
The AI never opened DevTools. No `md:`, no `sm:`, fixed widths everywhere. Looks fine until someone visits on a phone.
Rules: `responsive-required`, `viewport-meta`, `touch-target-size`
Output: Reports each fixed-width container that needs a breakpoint.
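The kind of container `responsive-required` flags, with one plausible fix. The breakpoints below are illustrative, not what the Fix Plan will necessarily suggest.

```tsx
// Flagged: fixed width and no breakpoints, so it only fits one screen size.
<main className="w-[1200px] grid grid-cols-3 gap-8">{/* cards */}</main>

// Fixed: fluid by default, columns added at real breakpoints.
<main className="w-full max-w-7xl grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-8">
  {/* cards */}
</main>
```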
Buttons without focus rings, links that say "click here"
Accessibility is the first thing AI strips when it's "cleaning up". WCAG failures ship as warnings nobody reads.
Rules: `focus-visible-style`, `link-text`, `image-alt-text`, `form-labels`, `aria-validation`
Output: WCAG 2.2-mapped. Tells you which clause each violation breaks.
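Two of the failures in practice, with hedged fixes. The clause names below are the standard WCAG mappings (2.4.4 Link Purpose, 2.4.7 Focus Visible); the exact report wording is Deslint's.

```tsx
// Fails link-text: "click here" says nothing about the destination (WCAG 2.4.4).
<a href="/pricing">click here</a>
// Passes: the link text carries its purpose.
<a href="/pricing">See pricing</a>

// Fails focus-visible-style: the outline is removed and nothing replaces it (WCAG 2.4.7).
<button className="outline-none">Buy</button>
// Passes: keyboard focus gets a visible ring.
<button className="outline-none focus-visible:ring-2 focus-visible:ring-blue-500">Buy</button>
```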
Dark mode survived… on three components out of forty
You asked the AI to add dark mode. It added `dark:` to half the file and called it done.
Rules: `dark-mode-coverage`
Output: Lists every element that has `bg-*` but no `dark:bg-*` peer.
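The gap `dark-mode-coverage` reports, sketched with the `bg-*` / `dark:bg-*` pairing from above:

```tsx
// Reported: light background, no dark: peer. Dark mode renders this white.
<section className="bg-white text-gray-900">Settings</section>

// Covered: every color utility has a dark-mode counterpart.
<section className="bg-white text-gray-900 dark:bg-gray-900 dark:text-gray-100">Settings</section>
```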
`dangerouslySetInnerHTML` on user-supplied data
The AI rendered a comment field with `dangerouslySetInnerHTML`. You just shipped XSS.
Rules: `no-dangerous-html`, `safe-external-links`, `iframe-sandbox`
Output: Frontend safety basics, flagged before they reach prod.
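The exact trap from above, next to the boring fix. React escapes children by default, so rendering the string as text is safe; no sanitizer needed unless you genuinely must render HTML.

```tsx
// Flagged by no-dangerous-html: user-supplied data rendered as raw HTML is XSS.
function CommentUnsafe({ body }: { body: string }) {
  return <div dangerouslySetInnerHTML={{ __html: body }} />;
}

// Safe: React escapes children, so the comment renders as text, not markup.
function Comment({ body }: { body: string }) {
  return <div>{body}</div>;
}
```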
Same component, six different paddings
Each time the AI regenerates a card it picks a slightly different size. The grid drifts.
Rules: `consistent-component-spacing`, `consistent-border-radius`, `spacing-rhythm-consistency`
Output: Cross-file consistency check, not just one-line lints.
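What drift looks like across three regenerations (file names hypothetical). Each line passes any single-line lint; only a cross-file pass sees that the cards disagree.

```tsx
// ProductCard.tsx (generated Monday)
<div className="rounded-lg p-4">{/* card body */}</div>

// PricingCard.tsx (regenerated Wednesday)
<div className="rounded-xl p-5">{/* card body */}</div>

// TeamCard.tsx (regenerated Friday)
<div className="rounded-md p-6">{/* card body */}</div>
```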
Why this isn't another AI code-review SaaS
- No LLM in the hot path. Every rule is plain ESLint logic — same input, same output, every run. There's no second AI second-guessing the first.
- Zero code egress. Files read from disk, report written to your terminal. No outbound network calls. No account. No usage telemetry.
- Free forever. The CLI, the rules, the GitHub Action, the MCP servers — all open-source under MIT. Pricing exists for orgs that want hosted attestations; the core never becomes a paywall.
Once your launch check passes
Three places to plug Deslint in so the next AI rewrite doesn't silently regress what you just fixed.