# Example 04: Browser Automation

**Capability:** Claude Code can control a real browser via Playwright MCP, take screenshots, and extract structured data from live pages.

**OpenClaw equivalent:** CDP/Playwright browser automation with screenshot and act commands.

> **Building on Examples 01-03.** You have researched (01), organized (02), and verified (03) data using text-based tools. This example adds visual data capture from live web pages. In the Cumulative Path below, you capture a screenshot of a page referenced in your research.

---

## Prerequisites

Playwright MCP must be enabled. Add this to `.mcp.json`:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

This repo's `.mcp.json` already includes it. If Claude Code does not show Playwright tools at startup, run `npm install` to ensure the package is available, then restart.

---

## The Prompt

```
Navigate to https://news.ycombinator.com, take a screenshot of the
current front page, and list the top 5 stories with their point counts
and submitter names.

Save the list to hacker-news-top5.md with the format:
1. [title] - [points] points by [submitter]

Include a note at the bottom with the timestamp of when you fetched this.
```

---

## What Happens

Claude Code will:

1. Use the Playwright MCP `browser_navigate` tool to load the page
2. Use `browser_take_screenshot` to capture the current state
3. Use `browser_snapshot` to read the page's accessibility tree and extract the story data
4. Use Write to save the structured list to `hacker-news-top5.md`

(Exact tool names may vary slightly between `@playwright/mcp` versions; check the tool list Claude Code shows at startup.)

---

## Why This Matters

Browser automation handles pages that cannot be reached with WebFetch alone: JavaScript-rendered content, login-protected pages, and interactive workflows.

OpenClaw bundles Playwright natively. Claude Code uses the same underlying engine via MCP. The setup takes two minutes; the capability is identical.
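To make the final step concrete, here is a minimal sketch of how the extracted stories get rendered into the requested `hacker-news-top5.md` format. This is illustrative only: the `format_top5` helper and the sample data are assumptions, not part of Claude Code or the MCP server; real values would come from the browser extraction step.

```python
from datetime import datetime, timezone

def format_top5(stories):
    """Render extracted stories in the format the prompt requests:
    1. [title] - [points] points by [submitter]
    with a fetch timestamp at the bottom."""
    lines = [
        f"{i}. {s['title']} - {s['points']} points by {s['submitter']}"
        for i, s in enumerate(stories, start=1)
    ]
    lines.append("")
    lines.append(f"Fetched: {datetime.now(timezone.utc).isoformat()}")
    return "\n".join(lines)

# Placeholder data -- real values come from the browser extraction step
sample = [{"title": "Example Story", "points": 123, "submitter": "alice"}]
print(format_top5(sample))
```

Keeping the timestamp in UTC (rather than local time) makes the "when was this fetched" note unambiguous if the file is shared.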
---

## Carry Forward

Browser automation is your fallback when WebSearch and WebFetch cannot reach the data: JavaScript-rendered pages, login walls, interactive content. Examples 07 (messaging) and 10 (full pipeline) can use browser-captured data as input.

---

## The Cumulative Path

> If you ran Examples 01-03, you have research with source URLs. This prompt
> captures live visual evidence for the report.

```
Read pipeline-output/research-report/findings.md. Pick the top-ranked
item and navigate to its GitHub page (or primary URL). Take a screenshot
of the main page.

Save the screenshot and add a line to findings.md:
"Screenshot captured [today's date] - see [filename]"
```

This step is optional in the cumulative flow. Not every pipeline needs browser automation, but when one does, this is how you add live visual data to research that was gathered via API.

---

## Now Try It Yourself

Replace the demo target with a page relevant to your work:

```
Navigate to [URL you need data from], take a screenshot, and extract
[the specific data you need] into a structured list. Save to
[your-file].md with a timestamp at the bottom.
```

**The pattern you just learned:** URL + extraction target + output file. Use browser automation when the data lives behind JavaScript rendering, login screens, or interactive elements that WebFetch cannot handle.

Ideas worth trying:

- Your company dashboard or analytics page
- A competitor's product page with pricing
- A government portal with public data tables
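If you reuse the URL + extraction target + output file pattern often, it can be captured as a small template helper. This is a sketch: the `browser_prompt` function name and the template wording are assumptions that mirror the bracketed prompt above, not an official API.

```python
def browser_prompt(url: str, target: str, outfile: str) -> str:
    """Fill the URL + extraction target + output file prompt pattern."""
    return (
        f"Navigate to {url}, take a screenshot, and extract "
        f"{target} into a structured list. "
        f"Save to {outfile} with a timestamp at the bottom."
    )

# Reproduce the demo prompt from this example
print(browser_prompt(
    "https://news.ycombinator.com",
    "the top 5 stories with their point counts and submitter names",
    "hacker-news-top5.md",
))
```

Parameterizing the prompt this way keeps the three slots you actually vary (source, target, destination) explicit, which is handy when scripting repeated captures.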