Playwright vs. Selenium for Modern Browser Automation
Selenium is the incumbent. Playwright is the modern default. Here's what's actually different in production work, and the small list of cases where Selenium still wins.
The Short Version
If you're starting a browser automation project in 2026, default to Playwright. Selenium isn't broken — there are still automation pipelines running it productively at significant scale — but the friction is real, the tooling around it shows its age, and most of the things that made Selenium dominant for a decade (driver protocol stability, language coverage, ecosystem) Playwright now matches or exceeds. The cases where Selenium is still the right call are narrow: you're integrating with an existing Selenium grid, you need a browser Playwright doesn't support (real Safari rather than Playwright's WebKit build, a niche embedded engine), or you've inherited a large Selenium codebase that isn't worth a rewrite.
This article is for the developer or engineering lead deciding which tool to pick for a new project, not a feature-by-feature spec sheet. I've shipped production work in both. The differences that matter are operational, not on the marketing page.
What Playwright Actually Improves
Selector Engines
Selenium gives you CSS selectors, XPath, and a small set of locator strategies (by ID, by name, by tag). All of them target the rendered DOM by structure. When the site re-skins, your selectors break.
Playwright adds role-based (page.get_by_role("button", name="Submit")), text-based (page.get_by_text("Continue")), label-based (page.get_by_label("Email address")), placeholder-based, title-based, and test-id locators on top of CSS and XPath. These selectors target the user-visible semantics of the page. They survive re-skins, framework migrations, and most of the normal churn that breaks structural selectors. They also document intent — get_by_role("button", name="Add to Cart") says what the test does in a way "div.btn.btn-primary.product-add" never will.
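As a small sketch of the difference — the form, labels, and button name here are hypothetical — the same signup step written against semantics instead of structure:

```python
def submit_signup(page):
    """Fill and submit a signup form using semantic locators.

    `page` is a Playwright Page; the labels and button name belong to
    an imagined signup form and are purely illustrative.
    """
    # Targets the visible <label> text, not a structural CSS path.
    page.get_by_label("Email address").fill("user@example.com")
    # Targets the accessible role + name; this survives a re-skin that
    # renames "div.btn.btn-primary" to anything else.
    page.get_by_role("button", name="Sign up").click()
```

The Selenium equivalent would be a chain of find_element calls against CSS or XPath, each tied to the current markup.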
This is, by itself, enough reason to choose Playwright for new work. Selector breakage is the single largest source of maintenance toil in browser automation, and Playwright's selector engine cuts it by something like 60–70% on real projects.
Wait Strategy
Selenium's default approach to waiting is WebDriverWait + expected_conditions, plus a lot of explicit time.sleep() calls in code that didn't get cleaned up. The implicit-wait setting is a footgun — it interacts with explicit waits in surprising ways and silently slows everything down.
Playwright auto-waits. Calling page.click("text=Submit") already waits for the element to be attached, visible, stable, and enabled before clicking. There is no separate explicit-wait API to bolt on; the wait is built into every action. The result is dramatically less wait-related code in the average automation script, and almost none of the flakiness that "the click happened before the page was ready" causes in Selenium.
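A before/after sketch makes the point (the selector and button name are hypothetical):

```python
# Selenium: the wait is separate code you must remember to write.
#
#   WebDriverWait(driver, 10).until(
#       EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type=submit]"))
#   ).click()

def submit(page):
    # Playwright: the click itself waits for the button to be attached,
    # visible, stable, and enabled before performing the action.
    page.get_by_role("button", name="Submit").click()
```

There is no wait-related code left to forget, which is exactly why the "clicked before the page was ready" class of flake disappears.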
Network Interception
Playwright exposes the network layer as a first-class API. page.on("request") and page.on("response") let you observe every fetch the page makes. page.route() lets you intercept and rewrite requests on the fly — block analytics, mock API responses for testing, or pull JSON directly out of XHR calls instead of parsing the rendered HTML.
That last point is huge for scraping. Many "scrape this site" problems are really "intercept this XHR response" problems. The site fetches structured JSON internally and renders it into the DOM; your scraper can grab the JSON before it's rendered, which is faster, more accurate, and less fragile than reading it back out of the styled output. Selenium can do this through external proxies (BrowserMob, mitmproxy), but it's not native, and the integration is a friction tax on every project that needs it.
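A minimal sketch of both patterns — the "/api/products" fragment and the "analytics" substring are hypothetical stand-ins for whatever the target site actually fetches:

```python
def capture_api_json(page, url_fragment="/api/products"):
    """Collect parsed JSON bodies from matching responses as the page loads.

    `url_fragment` is a hypothetical endpoint; match whatever XHR the
    target site really issues.
    """
    captured = []

    def on_response(response):
        content_type = response.headers.get("content-type", "")
        if url_fragment in response.url and "application/json" in content_type:
            captured.append(response.json())

    page.on("response", on_response)
    return captured  # fills up as navigation triggers matching fetches


def block_analytics(route):
    """A handler for page.route("**/*", block_analytics)."""
    if "analytics" in route.request.url:
        route.abort()       # the request is never issued
    else:
        route.continue_()   # everything else passes through untouched
```

Register capture_api_json before page.goto() so early fetches aren't missed; the returned list holds structured data you never have to parse back out of the DOM.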
Async + Sync APIs Side by Side
Playwright ships both an async API and a sync API in the same package, and the sync API is implemented as a thin wrapper over the async one. You can write straightforward synchronous code for a CLI scraper and async code for a high-concurrency worker, using the same library and the same docs. Selenium has historically been sync-first; async support is an external project (selenium-async) that lags the main library.
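The two styles side by side, fetching a page title — both come from the same playwright package (imports are shown inside each function only so the two variants read as self-contained units):

```python
import asyncio


def fetch_title_sync(url):
    # Sync API: plain blocking calls, ideal for a CLI scraper.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        title = page.title()
        browser.close()
    return title


async def fetch_title_async(url):
    # Async API: the identical surface with `await`, ideal for a
    # high-concurrency worker pool.
    from playwright.async_api import async_playwright
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url)
        title = await page.title()
        await browser.close()
    return title
```

For a worker pool, the async variant fans out naturally: asyncio.gather(*(fetch_title_async(u) for u in urls)).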
Modern Tooling
Playwright Codegen records browser actions and generates working test code. Playwright Inspector lets you step through a script with a live DOM view. Trace Viewer captures every action plus screenshots, network calls, and console output into an interactive timeline you can scrub. None of this is exotic — Selenium has equivalents (Selenium IDE, various third-party recorders) — but Playwright's tools ship in the box, work well, and are maintained by the same team that maintains the runtime. The Codegen + Trace Viewer combination cuts initial development time significantly on most projects.
Anti-Detect Integration
Most production scraping work against modern targets requires either a real fingerprint browser (Kameleo, Multilogin) or a hardened headless setup (puppeteer-extra-stealth and friends). Playwright connects to Kameleo over CDP with a single browser_type.connect_over_cdp() call. The Kameleo profile handles the fingerprint and the proxy; Playwright handles the scripting. Clean separation, well-documented, works in production.
Selenium can attach to Chrome via the same CDP endpoint, but the integration is less idiomatic, the docs are sparser, and Kameleo's published examples are Playwright-first. If your project will eventually need Kameleo (and most serious scraping projects do), starting with Playwright avoids a tooling switch later.
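The attach step looks roughly like the sketch below. The websocket path is illustrative, not Kameleo's documented format — check Kameleo's docs for the exact CDP endpoint a running profile exposes in your version — and the port is only a placeholder default.

```python
def attach_to_profile(playwright, profile_id, port=5050):
    """Attach Playwright to a running Kameleo profile over CDP.

    The endpoint format below is illustrative; consult Kameleo's
    documentation for the real websocket URL of a running profile.
    """
    endpoint = f"ws://localhost:{port}/playwright/{profile_id}"
    browser = playwright.chromium.connect_over_cdp(endpoint)
    # The profile already owns a browser context; reuse it so the
    # fingerprint and proxy settings it carries stay in effect.
    context = browser.contexts[0]
    return context.pages[0] if context.pages else context.new_page()
```

The point of the pattern is the division of labor: Kameleo owns the fingerprint and the proxy, Playwright owns the script, and connect_over_cdp is the only seam between them.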
Where Selenium Still Wins
It's a short list, but it's a real list.
Existing Selenium Grid infrastructure. If your organization already runs a Selenium Grid (or a SaaS equivalent like Sauce Labs / BrowserStack with Selenium endpoints), the integration cost of switching to Playwright is non-trivial. Playwright has its own service equivalents now, but the migration math has to be done case by case.
Browsers Playwright doesn't support. Playwright supports Chromium, Firefox, and WebKit. WebKit gives you Safari-equivalent rendering on every platform — you don't need a Mac. But if you specifically need actual Safari (for a Safari-specific bug, or for App Store testing), or a browser outside Playwright's three bundled engines, Selenium's broader WebDriver ecosystem still has the edge.
Inherited large Selenium codebases. If you have 50,000 lines of Selenium tests that work, rewriting them in Playwright to gain 20% better selectors is rarely worth the investment. The right move is usually to keep the existing Selenium suite and start any new work in Playwright.
Language coverage at the long tail. Selenium has bindings for nearly every language ever shipped. Playwright officially supports JavaScript/TypeScript, Python, Java, and C#. If you're writing in Ruby, Go, Rust, or PHP, Selenium is still the easier choice — though community Playwright bindings exist for some of these.
Practical Migration Notes
For developers moving from Selenium to Playwright on a new project, a few patterns to internalize:
- Stop writing explicit waits. Auto-wait covers 95% of cases. Reach for page.wait_for_selector() only when you need to wait for something that isn't the target of an immediate action.
- Use get_by_role / get_by_text first. CSS and XPath are still available, but they should be the fallback, not the default.
- Page objects are simpler. A page-object class in Playwright is usually 30–40% shorter than the Selenium equivalent because the auto-wait + better selectors mean less ceremony.
- Use the Trace Viewer when something is flaky. Don't add print() statements; capture a trace and scrub through the timeline. The investment in learning the tool pays back the first time it saves you from chasing a phantom race condition.
- For scraping work, learn page.on("response") early. A surprising number of scrapes get easier when you grab the JSON instead of the rendered output.
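Pulling several of those points together, a Playwright page object tends to collapse to something like this — the page, labels, and button name are hypothetical:

```python
class LoginPage:
    """Hypothetical login page; note the absence of waits and By.* lookups."""

    def __init__(self, page):
        self.page = page

    def log_in(self, email, password):
        # Semantic locators + auto-wait replace the explicit
        # WebDriverWait ceremony a Selenium page object carries.
        self.page.get_by_label("Email").fill(email)
        self.page.get_by_label("Password").fill(password)
        self.page.get_by_role("button", name="Log in").click()
```

Nearly everything that remains is intent; the plumbing Selenium page objects spend lines on is handled by the runtime.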
Wrap-Up
For new browser automation work, the question isn't really "Playwright or Selenium?" It's "do you have a specific reason not to use Playwright?" If the answer is no, default to Playwright and move on. If the answer is yes, the reasons are usually concrete: existing infrastructure, a specific browser requirement, or an inherited codebase. None of those are arguments against Playwright in general — just against switching in your specific case.
For deeper coverage of how I structure Playwright code in production — worker pools, error capture, Kameleo integration — see the Playwright Automation hub and the related articles in Browser Automation and Kameleo Automation.
Need a Custom Automation System?
Need help building a production scraping, browser automation, or AI data extraction system? I build custom Python, Playwright, Kameleo, Undetectable, MySQL, and dashboard-based automation systems for businesses.