Browser Infrastructure for AI Agents

Agent Browser lets you execute agentic workflows on remote browsers that never get blocked. Infinitely scalable, headless or headful, and powered by the world’s most reliable proxy network.

Try Now
No credit card required
150M+ actions performed daily
1M+ concurrent sessions
150M+ IPs in 195 countries
3M+ domains unlocked
2.5PB+ collected daily

Navigate any website like a human would

Advanced Unlocking
  • Seamlessly overcome site restrictions using browser fingerprinting and CAPTCHA solving.
Auto-Scaling
  • Spin up unlimited parallel sessions from any geolocation without losing performance.
Managed Sessions
  • Leverage headful or headless browsers to control context, cookies and tabs.
Standardized Environment
  • Seamless integration through API or MCP, with no need for per-site configuration.

Make the Web AI-Ready

Bright Data Powers the World's Top Brands

Bright Data allows autonomous AI agents to navigate websites, find information, and perform actions automatically in an environment that is simple to integrate, consistent, and reliable.

Power your most complex workflows

Agent interaction

  • Enable agentic task automations
  • Fill forms, search, and more
  • Quick start with low latency
  • Ensure secure, isolated sessions

Stealth browsing

  • Use geolocation proxies
  • Human-like fingerprinting
  • Automatically solve CAPTCHAs
  • Manage cookies & sessions

AI-ready data pipeline

  • Discover relevant data sources
  • Real-time or batch collection
  • Structured or unstructured output
  • Integrate seamlessly via MCP

Headless & Headful browsers for unlimited, cost-effective web access and navigation

Human-like fingerprints

Emulate real users' browsers to simulate a human experience

Stealth mode

Ethically bypass bot detection and solve CAPTCHAs

Low latency sessions

Sub-second connection and stable sessions ensuring smooth interaction

Set referral headers

Simulate traffic originating from popular or trusted websites
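To illustrate what "setting referral headers" means in practice, here is a minimal stdlib-only sketch of attaching a Referer header to a request by hand, the kind of detail Agent Browser manages automatically. The target URL and referrer are placeholders.

```python
import urllib.request

# Hypothetical illustration: a request whose Referer header makes the
# traffic appear to arrive from a search-engine result page.
req = urllib.request.Request(
    "https://example.com/products",
    headers={"Referer": "https://www.google.com/"},
)
print(req.get_header("Referer"))  # -> https://www.google.com/
```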

Manage cookies and sessions

Prevent potential blocks imposed by cookie-related factors

Automatic retries and IP rotation

Continually retry requests, and rotate IPs, in the background
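The retry-and-rotate pattern that Agent Browser runs in the background can be sketched like this. `fetch` is a stand-in, not a real network call: it simulates a target that blocks the first two IPs in the pool.

```python
import itertools

def fetch(url, ip):
    """Simulated request: only the third IP in the pool succeeds."""
    return "ok" if ip == "203.0.113.3" else None

def fetch_with_rotation(url, ips, max_retries=5):
    """Retry the request, rotating round-robin through an IP pool."""
    pool = itertools.cycle(ips)
    for _ in range(max_retries):
        ip = next(pool)
        result = fetch(url, ip)
        if result is not None:
            return result, ip
    raise RuntimeError("all retries exhausted")

body, ip = fetch_with_rotation(
    "https://example.com",
    ["203.0.113.1", "203.0.113.2", "203.0.113.3"],
)
print(body, ip)  # -> ok 203.0.113.3
```

With Agent Browser this loop lives server-side: a blocked request is retried on a fresh IP without the client script changing at all.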

Worldwide geo-coverage

Access localized content from any country, city, state or ASN
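In Bright Data's proxy products, geolocation is commonly selected through a suffix on the zone username (e.g. `-country-us`); the helper below sketches how a country-targeted CDP endpoint would be assembled under that assumption. Confirm the exact syntax in the Agent Browser documentation before relying on it.

```python
def cdp_endpoint(customer_id, zone, password, country=None):
    """Build a CDP WebSocket URL; country targeting via a hypothetical
    username suffix following Bright Data's proxy convention."""
    user = f"brd-customer-{customer_id}-zone-{zone}"
    if country:
        user += f"-country-{country}"  # e.g. "us", "de", "jp"
    return f"wss://{user}:{password}@brd.superproxy.io:9222"

print(cdp_endpoint("CUSTOMER_ID", "ZONE_NAME", "PASSWORD", country="us"))
```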

Browser automation support

Compatible with Playwright, Puppeteer and Selenium

Enterprise-grade security

Browser instances can integrate with enterprise VPN and sign-on

Playwright (Node.js):

const pw = require('playwright');

const SBR_CDP = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:[email protected]:9222';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await pw.chromium.connectOverCDP(SBR_CDP);
    try {
        const page = await browser.newPage();
        console.log('Connected! Navigating to https://example.com...');
        await page.goto('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        await browser.close();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
Playwright (Python):

import asyncio
from playwright.async_api import async_playwright

SBR_WS_CDP = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:[email protected]:9222'

async def run(pw):
    print('Connecting to Scraping Browser...')
    browser = await pw.chromium.connect_over_cdp(SBR_WS_CDP)
    try:
        page = await browser.new_page()
        print('Connected! Navigating to https://example.com...')
        await page.goto('https://example.com')
        print('Navigated! Scraping page content...')
        html = await page.content()
        print(html)
    finally:
        await browser.close()

async def main():
    async with async_playwright() as playwright:
        await run(playwright)

if __name__ == '__main__':
    asyncio.run(main())
Puppeteer (Node.js):

const puppeteer = require('puppeteer-core');

const SBR_WS_ENDPOINT = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:[email protected]:9222';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await puppeteer.connect({
        browserWSEndpoint: SBR_WS_ENDPOINT,
    });
    try {
        const page = await browser.newPage();
        console.log('Connected! Navigating to https://example.com...');
        await page.goto('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        await browser.close();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
Selenium (Node.js):

const { Builder, Browser } = require('selenium-webdriver');

const SBR_WEBDRIVER = 'https://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:[email protected]:9515';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const driver = await new Builder()
        .forBrowser(Browser.CHROME)
        .usingServer(SBR_WEBDRIVER)
        .build();
    try {
        console.log('Connected! Navigating to https://example.com...');
        await driver.get('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await driver.getPageSource();
        console.log(html);
    } finally {
        await driver.quit();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
Selenium (Python):

from selenium.webdriver import Remote, ChromeOptions
from selenium.webdriver.chromium.remote_connection import ChromiumRemoteConnection

SBR_WEBDRIVER = 'https://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:[email protected]:9515'

def main():
    print('Connecting to Scraping Browser...')
    sbr_connection = ChromiumRemoteConnection(SBR_WEBDRIVER, 'goog', 'chrome')
    with Remote(sbr_connection, options=ChromeOptions()) as driver:
        print('Connected! Navigating to https://example.com...')
        driver.get('https://example.com')
        print('Navigated! Scraping page content...')
        html = driver.page_source
        print(html)

if __name__ == '__main__':
    main()

Easily integrate your tech stack

  • Run your Puppeteer, Selenium or Playwright scripts
  • Automated proxy management and web unlocking
  • Get data in unstructured or structured formats
See documentation

Agent Browser

Scalable browser infrastructure with autonomous unlocking

PAY AS YOU GO
$8.4 / GB
No commitment
Start free trial
Pay as you go, with no monthly commitment
69 GB included
$7.14 / GB
$499 Billed monthly
Start free trial
Tailored for teams looking to scale their operations
158 GB included
$6.3 / GB
$999 Billed monthly
Start free trial
Designed for large teams with extensive operational needs
339 GB included
$5.88 / GB
$1999 Billed monthly
Start free trial
Advanced support and resources for mission-critical operations
ENTERPRISE
Elite data services for top-tier enterprise requirements
CONTACT US
  • Account manager
  • Tailor-made packages
  • Premium SLA
  • Priority support
  • Personalized onboarding
  • SSO
  • Customizations
  • Audit logs

FAQ

What is Scraping Browser?

Scraping Browser works like other automated browsers and is controlled by common high-level APIs like Puppeteer and Playwright, but it is the only browser with built-in website unblocking capabilities. Scraping Browser automatically manages all website unlocking operations under the hood, including CAPTCHA solving, browser fingerprinting, automatic retries, header selection, cookies, JavaScript rendering, and more, so you can save time and resources.

Why use an automated browser for data scraping?

Developers use automated browsers when JavaScript rendering of a page or interactions with a website are needed (hovering, changing pages, clicking, taking screenshots, etc.). Browsers are also useful for large-scale data scraping projects that target multiple pages at once.

Is Scraping Browser a headless or a headful browser?

Scraping Browser is a GUI (aka "headful") browser with a graphical user interface. However, developers experience it as headless, interacting with the browser through an API like Puppeteer or Playwright, while the GUI browser itself runs on Bright Data's infrastructure.

What is the difference between headless and headful browsers?

In choosing an automated browser, developers can pick between a headless and a GUI/headful browser. A "headless browser" is a web browser without a graphical user interface. Used with a proxy, headless browsers can scrape data, but they are easily detected by bot-protection software, making large-scale data scraping difficult. GUI browsers like Scraping Browser use a graphical user interface and are less likely to be flagged by bot-detection software.

Why is Scraping Browser ideal for scaling data scraping projects?

Scraping Browser comes with a built-in website unlocking feature that handles blocking for you automatically. Scraping Browser sessions employ automated unlocking and run on Bright Data's servers, so they are ideal for scaling web data scraping projects without requiring extensive infrastructure.

Is Scraping Browser compatible with Puppeteer?

Yes, Scraping Browser is fully compatible with Puppeteer.

Is Scraping Browser compatible with Playwright?

Yes, Scraping Browser is fully compatible with Playwright.

How is Scraping Browser different from Web Unlocker?

Scraping Browser is an automated browser optimized for data scraping that integrates Web Unlocker's automated unlocking capabilities. Web Unlocker handles one-step requests, while Scraping Browser is needed when a developer must interact with a website to retrieve its data. It is also ideal for any data scraping project that requires browsers, scaling, and automated management of all website unblocking actions.