Coding agents suck at frontend because translating intent (from UI → prompt → code → UI) is lossy.
For example, suppose you want to make a simple UI change. You describe the element in a prompt, and the coding agent has to translate that description back into code: it searches the codebase for the component you meant before it can make the edit.
Search is a pretty random process since language models have non-deterministic outputs. Depending on the search strategy, these trajectories range from instant (if lucky) to very long. Unfortunately, that means added latency, higher cost, and worse performance.
Today, there are two solutions to this problem. One is improving the agent, which runs into a lot of unsolved research problems and involves training better models (see Instant Grep, SWE-grep). The other is reducing the number of translation steps required, which makes the process faster and more accurate (and the benefit scales with codebase size).
But what if there was a different way?
In my ad-hoc tests, I noticed that referencing the file path (e.g. path/to/component.tsx) or something to grep (e.g. className="flex flex-col gap-5 text-shimmer") made the coding agent much faster at finding what I was referencing. In short: there are shortcuts that cut down the number of search steps!
Turns out, React exposes the source location for elements on the page (in development builds). React Grab walks up the component tree from the element you clicked, collects each component's name and source location (file path + line number), and formats that into a readable stack.
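Here's a rough sketch of the idea in TypeScript. This is my reconstruction, not React Grab's actual source: it assumes a development build where React attaches a `__reactFiber$<id>` key to DOM nodes and fibers carry `_debugSource` (newer React versions expose source info differently), and the helper names are made up.

```ts
// Sketch: walk up React's fiber tree from a clicked element and collect
// component names + dev-mode source locations.

interface DebugSource {
  fileName: string;
  lineNumber: number;
  columnNumber?: number;
}

// Find the fiber node React associates with a DOM element (dev builds
// attach it under a randomized `__reactFiber$...` key).
function getFiber(el: Element): any {
  const key = Object.keys(el).find((k) => k.startsWith("__reactFiber$"));
  return key ? (el as any)[key] : null;
}

// Resolve a readable name: host elements are plain strings ("div"),
// components expose displayName or the function/class name.
function nameOf(fiber: any): string {
  const t = fiber.type;
  return typeof t === "string" ? t : t?.displayName ?? t?.name ?? "Anonymous";
}

// Walk toward the root via fiber.return, emitting one "at ..." line per
// node, with "in file:line" when debug info is available.
function buildStack(el: Element): string[] {
  const lines: string[] = [];
  for (let fiber = getFiber(el); fiber; fiber = fiber.return) {
    const src: DebugSource | undefined = fiber._debugSource;
    lines.push(`at ${nameOf(fiber)}${src ? ` in ${src.fileName}:${src.lineNumber}` : ""}`);
  }
  return lines;
}
```

Joining those lines (plus the element's HTML) is what produces the stack shown below.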
It looks something like this:
```
<selected_element>
## HTML Frame:
<span class="font-bold"> React Grab </span>

## Code Location:
at motion.div
at StreamingText in /[project]/packages/website/components/blocks/streaming-text.tsx
at MessageBlock in /[project]/packages/website/components/blocks/message-block.tsx
at StreamDemo in /[project]/packages/website/components/stream-demo.tsx
</selected_element>
```
When I passed this to Cursor, it instantly found the file and made the change in a couple of seconds. Trying a few other cases gave the same result.

I used the shadcn/ui dashboard as the test codebase. This is a Next.js application with auth, data tables, charts, and form components.
The benchmark consists of 20 test cases designed to cover a wide range of UI element retrieval scenarios. Each test represents a real-world task that developers commonly perform when working with coding agents.
Each test ran twice: once with React Grab enabled (treatment), once without (control). Both conditions used identical codebases and Claude Sonnet 4.5 (in Claude Code).
In the treatment runs, the prompt included the grabbed element. For example:

```
<selected_element>
<a class="ml-auto inline-block text-..." href="#"> Forgot your password? </a>
at a in components/login-form.tsx:46:19
at div in components/login-form.tsx:44:17
at Field in components/ui/field.tsx:87:5
at FieldGroup in components/ui/field.tsx:46:5
at form in components/login-form.tsx:32:11
</selected_element>
```
Without React Grab, the agent must search through the codebase to find the right component. Since language models predict tokens non-deterministically, this search process varies dramatically - sometimes finding the target instantly, other times requiring multiple attempts. This unpredictability adds latency, increases token consumption, and degrades overall performance.
With React Grab, the search phase is eliminated entirely. The component stack with exact file paths and line numbers is embedded directly in the DOM. The agent can jump straight to the correct file and locate what it needs in O(1) time.
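For reference, each benchmark iteration boils down to something like the sketch below. The `runClaudeCode` helper and its metric fields are hypothetical stand-ins, not the actual harness; the real code lives in the benchmarks directory.

```ts
// Sketch of one benchmark test case (hypothetical helper names; see the
// benchmarks directory on GitHub for the real harness).
interface RunMetrics {
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
  durationMs: number;
  toolCalls: number;
}

// Assumed helper: runs Claude Code against the repo with the given prompt
// and reports what the run cost.
declare function runClaudeCode(prompt: string): Promise<RunMetrics>;

async function runTestCase(task: string, grabbedElement: string) {
  // Control: the agent gets only the natural-language task and must
  // search the codebase itself.
  const control = await runClaudeCode(task);

  // Treatment: the same task, plus the <selected_element> block that
  // React Grab produced for the clicked element.
  const treatment = await runClaudeCode(`${task}\n\n${grabbedElement}`);

  // Speedup = fraction of the control duration the treatment saved.
  const speedup = 1 - treatment.durationMs / control.durationMs;
  return { control, treatment, speedup };
}
```

The summary numbers below are just these per-run metrics aggregated across the 20 tests.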
…and it turns out, Claude Code becomes ~55% faster with React Grab!
| Metric | Control | React Grab |
|---|---|---|
| Avg Duration | 17.4s | 7.6s ↓55.9% |
| Total Cost | $0.75 | $0.40 ↓46.8% |
| Avg Tool Calls | 4.4 | 0.5 ↓89.8% |
Below are the latest measurement results from all 20 test cases: a detailed per-test breakdown of tokens, cost (USD), duration, and tool calls for the control and treatment groups, where the React Grab columns show the % change vs. control. Last run: November 20, 2025 at 12:17 PM.
| Test Name | Input Tokens (Control) | Input Tokens (React Grab) | Output Tokens (Control) | Output Tokens (React Grab) | Cost (Control) | Cost (React Grab) | Duration (Control) | Duration (React Grab) | Tool Calls (Control) | Tool Calls (React Grab) |
|---|---|---|---|---|---|---|---|---|---|---|
| Calendar Date Cell | 44,021 | 27,366↓38% | 241 | 88↓63% | $0.04 | $0.02↓50% | 13.1s | 8.5s↓35% | 3 | 1↓67% |
| Drag Handle | 40,546 | 13,544↓67% | 195 | 10↓95% | $0.02 | $0.01↓48% | 10.3s | 7.2s↓29% | 2 | 0↓100% |
| Dropdown Actions | 42,979 | 13,608↓68% | 419 | 10↓98% | $0.03 | $0.01↓76% | 15.7s | 5.8s↓63% | 5 | 0↓100% |
| Editable Target Input | 55,562 | 37,228↓33% | 658 | 69↓90% | $0.07 | $0.05↓25% | 32.4s | 9.1s↓72% | 13 | 1↓92% |
| Field Description Text | 40,510 | 13,592↓66% | 173 | 10↓94% | $0.02 | $0.01↓66% | 13.4s | 5.8s↓56% | 2 | 0↓100% |
| Forgot Password Link | 41,795 | 28,026↓33% | 261 | 69↓74% | $0.03 | $0.02↓25% | 13.5s | 6.9s↓49% | 5 | 1↓80% |
| Full Name Input Field | 26,910 | 13,562↓50% | 87 | 10↓89% | $0.02 | $0.01↓58% | 7.6s | 6.4s↓15% | 1 | 0↓100% |
| GitHub Link Button | 58,942 | 13,536↓77% | 435 | 10↓98% | $0.05 | $0.01↓73% | 19.8s | 6.3s↓68% | 8 | 0↓100% |
| Grayscale Avatar | 42,217 | 27,153↓36% | 256 | 106↓59% | $0.03 | $0.02↓41% | 13.1s | 9.8s↓25% | 3 | 1↓67% |
| Keyboard Shortcut Badge | 68,294 | 13,561↓80% | 522 | 11↓98% | $0.05 | $0.01↓74% | 34.4s | 6.1s↓82% | 10 | 0↓100% |
| OTP Input | 43,227 | 28,413↓34% | 250 | 96↓62% | $0.05 | $0.02↓62% | 16.3s | 9.6s↓41% | 4 | 1↓75% |
| Projects More Button | 61,707 | 13,604↓78% | 249 | 10↓96% | $0.04 | $0.01↓82% | 19.1s | 5.6s↓70% | 3 | 0↓100% |
| Quick Create Button | 55,035 | 28,079↓49% | 220 | 69↓69% | $0.03 | $0.02↓34% | 12.6s | 9.2s↓27% | 3 | 1↓67% |
| Revenue Card Badge | 42,814 | 13,611↓68% | 363 | 10↓97% | $0.03 | $0.01↓77% | 16.3s | 6s↓63% | 6 | 0↓100% |
| Sidebar Trigger Toggle | 64,484 | 13,533↓79% | 363 | 10↓97% | $0.07 | $0.01↓83% | 19.4s | 7.4s↓62% | 6 | 0↓100% |
| Sign Up With Google Button | 41,447 | 13,576↓67% | 166 | 10↓94% | $0.03 | $0.01↓56% | 11.6s | 6.7s↓42% | 2 | 0↓100% |
| Status Badge | 154,625 | 37,212↓76% | 742 | 94↓87% | $0.11 | $0.06↓48% | 45.8s | 9.5s↓79% | 9 | 1↓89% |
| Tabs with Badges | 26,968 | 37,326↑38% | 82 | 69↓16% | $0.02 | $0.06↑225% | 8.8s | 10.1s↑14% | 1 | 1 |
| Team Switcher Dropdown | 26,938 | 13,607↓49% | 117 | 11↓91% | $0.02 | $0.01↓57% | 13s | 6.2s↓52% | 1 | 0↓100% |
| Time Range Toggle | 26,980 | 32,055↑19% | 139 | 108↓22% | $0.02 | $0.04↑110% | 11.2s | 10.1s↓10% | 1 | 1 |
To run the benchmark yourself, check out the benchmarks directory on GitHub.
The best use case I've seen for React Grab is low-entropy adjustments: spacing, layout tweaks, or minor visual changes.
If you iterate on UI frequently, this can make everyday changes feel smoother. Instead of describing where the code is, you can select an element and give the agent an exact starting point.
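For instance, a prompt with a grabbed element might look like this (illustrative, reusing the login-form example from earlier):

```
Make this link use the primary color.

<selected_element>
<a class="ml-auto inline-block text-..." href="#"> Forgot your password? </a>
at a in components/login-form.tsx:46:19
...
</selected_element>
```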
We're finally moving a bit closer to closing the intent-to-output gap (see Inventing on Principle).
There are still a lot of improvements that could be made to this benchmark.
On the React Grab side, there's also a bunch of stuff that could make this even better: grabbing error stack traces when things break, building a Chrome extension so you don't need to modify your app at all, adding screenshots of the element you're grabbing, or capturing runtime state/props.
If you want to help out or have ideas, hit me up on Twitter or open an issue on GitHub.
React Grab is free and open source. Go try it out!