# Build Agents on Cloudflare URL: https://developers.cloudflare.com/agents/ import { CardGrid, Description, Feature, LinkButton, LinkTitleCard, PackageManagers, Plan, RelatedProduct, Render, TabItem, Tabs, TypeScriptExample, } from "~/components"; The Agents SDK enables you to build and deploy AI-powered agents that can autonomously perform tasks, communicate with clients in real time, call AI models, persist state, schedule tasks, run asynchronous workflows, browse the web, query data from your database, support human-in-the-loop interactions, and [a lot more](/agents/api-reference/). ### Ship your first Agent To use the Agent starter template and create your first Agent with the Agents SDK: ```sh # install it npm create cloudflare@latest agents-starter -- --template=cloudflare/agents-starter # and deploy it npx wrangler@latest deploy ``` Head to the guide on [building a chat agent](/agents/getting-started/build-a-chat-agent) to learn how the starter project is built and how to use it as a foundation for your own agents. If you're already building on [Workers](/workers/), you can install the `agents` package directly into an existing project: ```sh npm i agents ``` And then define your first Agent by creating a class that extends the `Agent` class: ```ts import { Agent, AgentNamespace } from 'agents'; export class MyAgent extends Agent { // Define methods on the Agent: // https://developers.cloudflare.com/agents/api-reference/agents-api/ // // Every Agent has built-in state via this.setState and this.sql // Built-in scheduling via this.schedule // Agents support WebSockets, HTTP requests, state synchronization and // can run for seconds, minutes or hours: as long as the tasks need. } ``` Dive into the [Agent SDK reference](/agents/api-reference/agents-api/) to learn more about how to use the Agents SDK package and how to define an `Agent`. ### Why build agents on Cloudflare? We built the Agents SDK with a few things in mind: - **Batteries (state) included**: Agents come with [built-in state management](/agents/api-reference/store-and-sync-state/), with the ability to automatically sync state between an Agent and clients, trigger events on state changes, and read+write to each Agent's SQL database. - **Communicative**: You can connect to an Agent via [WebSockets](/agents/api-reference/websockets/) and stream updates back to the client in real time. Handle a long-running response from a reasoning model, the results of an [asynchronous workflow](/agents/api-reference/run-workflows/), or build a chat app on top of the `useAgent` hook included in the Agents SDK. - **Extensible**: Agents are code. Use the [AI models](/agents/api-reference/using-ai-models/) you want, bring your own headless browser service, pull data from your database hosted in another cloud, add your own methods to your Agent and call them. Agents built with the Agents SDK can be deployed directly to Cloudflare and run on top of [Durable Objects](/durable-objects/) — which you can think of as stateful micro-servers that can scale to tens of millions — and are able to run wherever they need to. Run your Agents close to a user for low-latency interactivity, close to your data for throughput, and/or anywhere in between. --- ### Build on the Cloudflare Platform Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.
Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM. Run machine learning models, powered by serverless GPUs, on Cloudflare's global network. Build stateful agents that guarantee executions, including automatic retries, persistent state that runs for minutes, hours, days, or weeks. --- # Changelog URL: https://developers.cloudflare.com/ai-gateway/changelog/ import { ProductReleaseNotes } from "~/components"; --- # Architectures URL: https://developers.cloudflare.com/ai-gateway/demos/ import { GlossaryTooltip, ResourcesBySelector } from "~/components"; Learn how you can use AI Gateway within your existing architecture. ## Reference architectures Explore the following reference architectures that use AI Gateway: --- # OpenAI Compatibility URL: https://developers.cloudflare.com/ai-gateway/chat-completion/ Cloudflare's AI Gateway offers an OpenAI-compatible `/chat/completions` endpoint, enabling integration with multiple AI providers using a single URL. This feature simplifies the integration process, allowing for seamless switching between different models without significant code modifications. ## Endpoint URL ```txt https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions ``` Replace `{account_id}` and `{gateway_id}` with your Cloudflare account and gateway IDs. ## Parameters Switch providers by changing the `model` and `apiKey` parameters. Specify the model using the `{provider}/{model}` format. For example: - `openai/gpt-4o-mini` - `google-ai-studio/gemini-2.0-flash` - `anthropic/claude-3-haiku` ## Examples ### OpenAI SDK ```js import OpenAI from "openai"; const client = new OpenAI({ apiKey: "YOUR_PROVIDER_API_KEY", // Provider API key // NOTE: the OpenAI client automatically adds /chat/completions to the end of the URL, you should not add it yourself. baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat", }); const response = await client.chat.completions.create({ model: "google-ai-studio/gemini-2.0-flash", messages: [{ role: "user", content: "What is Cloudflare?" }], }); console.log(response.choices[0].message.content); ``` ### cURL ```bash curl -X POST https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions \ --header 'Authorization: Bearer {openai_token}' \ --header 'Content-Type: application/json' \ --data '{ "model": "google-ai-studio/gemini-2.0-flash", "messages": [ { "role": "user", "content": "What is Cloudflare?" } ] }' ``` ### Universal provider You can also use this pattern with the [Universal Endpoint](/ai-gateway/universal/) to add [fallbacks](/ai-gateway/configuration/fallbacks/) across multiple providers. When used in combination, every request will return the same standardized format, whether from the primary or fallback model. This behavior means that you do not have to add extra parsing logic to your app.
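As an illustration, the sketch below posts an array of OpenAI-compatible entries to the [Universal Endpoint](/ai-gateway/universal/); the second entry is only tried if the first fails. Treat this as a sketch rather than a drop-in snippet: the account ID, gateway ID, and tokens are placeholders, and the `compat` provider entries follow the same request shape documented for the Universal Endpoint.

```ts
// A sketch of a Universal Endpoint request with an OpenAI-compatible fallback.
// The account ID, gateway ID, and API keys below are placeholders.
const universalUrl =
  "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}";

const response = await fetch(universalUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify([
    {
      // Primary: Google AI Studio through the OpenAI-compatible schema.
      provider: "compat",
      endpoint: "chat/completions",
      headers: {
        Authorization: "Bearer {google_ai_studio_token}",
        "Content-Type": "application/json",
      },
      query: {
        model: "google-ai-studio/gemini-2.0-flash",
        messages: [{ role: "user", content: "What is Cloudflare?" }],
      },
    },
    {
      // Fallback: only used if the primary request fails.
      provider: "compat",
      endpoint: "chat/completions",
      headers: {
        Authorization: "Bearer {openai_token}",
        "Content-Type": "application/json",
      },
      query: {
        model: "openai/gpt-4o-mini",
        messages: [{ role: "user", content: "What is Cloudflare?" }],
      },
    },
  ]),
});

console.log(await response.json());
```

From a Cloudflare Worker, the same `compat` pattern is available through the AI binding: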
```ts title="index.ts" export interface Env { AI: Ai; } export default { async fetch(request: Request, env: Env) { return env.AI.gateway("default").run({ provider: "compat", endpoint: "chat/completions", headers: { authorization: "Bearer ", }, query: { model: "google-ai-studio/gemini-2.0-flash", messages: [ { role: "user", content: "What is Cloudflare?", }, ], }, }); }, }; ``` ## Supported Providers The OpenAI-compatible endpoint supports models from the following providers: - [Anthropic](/ai-gateway/providers/anthropic/) - [OpenAI](/ai-gateway/providers/openai/) - [Groq](/ai-gateway/providers/groq/) - [Mistral](/ai-gateway/providers/mistral/) - [Cohere](/ai-gateway/providers/cohere/) - [Perplexity](/ai-gateway/providers/perplexity/) - [Workers AI](/ai-gateway/providers/workersai/) - [Google-AI-Studio](/ai-gateway/providers/google-ai-studio/) - [Grok](/ai-gateway/providers/grok/) - [DeepSeek](/ai-gateway/providers/deepseek/) - [Cerebras](/ai-gateway/providers/cerebras/) --- # Getting started URL: https://developers.cloudflare.com/ai-gateway/get-started/ import { Details, DirectoryListing, LinkButton, Render } from "~/components"; In this guide, you will learn how to create your first AI Gateway. You can create multiple gateways to control different applications. ## Prerequisites Before you get started, you need a Cloudflare account. Sign up ## Create gateway Then, create a new AI Gateway. ## Choosing gateway authentication When setting up a new gateway, you can choose between an authenticated and unauthenticated gateway. Enabling an authenticated gateway requires each request to include a valid authorization token, adding an extra layer of security. We recommend using an authenticated gateway when storing logs to prevent unauthorized access and protect against invalid requests that can inflate log storage usage and make it harder to find the data you need. Learn more about setting up an [Authenticated Gateway](/ai-gateway/configuration/authentication/). ## Connect application Next, connect your AI provider to your gateway. AI Gateway offers multiple endpoints for each Gateway you create - one endpoint per provider, and one Universal Endpoint. To use AI Gateway, you will need to create your own account with each provider and provide your API key. AI Gateway acts as a proxy for these requests, enabling observability, caching, and more. Additionally, AI Gateway has a [WebSockets API](/ai-gateway/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets. Below is a list of our supported model providers: If you do not have a provider preference, start with one of our dedicated tutorials: - [OpenAI](/ai-gateway/tutorials/deploy-aig-worker/) - [Workers AI](/ai-gateway/tutorials/create-first-aig-workers/) ## View analytics Now that your provider is connected to the AI Gateway, you can view analytics for requests going through your gateway.
:::note[Note] The cost metric is an estimate based on the number of tokens sent and received in requests. While this metric can help you monitor and predict cost trends, refer to your provider's dashboard for the most accurate cost details. ::: ## Next steps - Learn more about [caching](/ai-gateway/configuration/caching/) for faster requests and cost savings, and [rate limiting](/ai-gateway/configuration/rate-limiting/) to control how your application scales. - Explore how to specify model or provider [fallbacks](/ai-gateway/configuration/fallbacks/) for resiliency. - Learn how to use low-cost, open source models on [Workers AI](/ai-gateway/providers/workersai/) - our AI inference service. --- # Header Glossary URL: https://developers.cloudflare.com/ai-gateway/glossary/ import { Glossary } from "~/components"; AI Gateway supports a variety of headers to help you configure, customize, and manage your API requests. This page provides a complete list of all supported headers, along with a short description. ## Configuration hierarchy Settings in AI Gateway can be configured at three levels: **Provider**, **Request**, and **Gateway**. Since the same settings can be configured in multiple locations, the following hierarchy determines which value is applied: 1. **Provider-level headers**: Relevant only when using the [Universal Endpoint](/ai-gateway/universal/), these headers take precedence over all other configurations. 2. **Request-level headers**: Apply if no provider-level headers are set. 3. **Gateway-level settings**: Act as the default if no headers are set at the provider or request levels. This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for more fine-tuned control, and gateway settings for general defaults. --- # Cloudflare AI Gateway URL: https://developers.cloudflare.com/ai-gateway/ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, } from "~/components"; Observe and control your AI applications. Cloudflare's AI Gateway allows you to gain visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people are using your application with analytics and logging, and then control how your application scales with features such as caching, rate limiting, as well as request retries, model fallback, and more. Better yet - it only takes one line of code to get started. Check out the [Get started guide](/ai-gateway/get-started/) to learn how to configure your applications with AI Gateway. ## Features View metrics such as the number of requests, tokens, and the cost it takes to run your application. Gain insight on requests and errors. Serve requests directly from Cloudflare's cache instead of the original model provider for faster requests and cost savings. Control how your application scales by limiting the number of requests your application receives. Improve resilience by defining request retries and model fallbacks in case of an error. Workers AI, OpenAI, Azure OpenAI, HuggingFace, Replicate, and more work with AI Gateway. --- ## Related products Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network. Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.
## More resources Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Learn how you can build and deploy ambitious AI applications to Cloudflare's global network. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Universal Endpoint URL: https://developers.cloudflare.com/ai-gateway/universal/ import { Render, Badge } from "~/components"; You can use the Universal Endpoint to contact every provider. ```txt https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id} ``` AI Gateway offers multiple endpoints for each Gateway you create - one endpoint per provider, and one Universal Endpoint. The Universal Endpoint requires some adjustments to your schema, but supports additional features, such as retrying a request if it fails the first time, or configuring a [fallback model/provider](/ai-gateway/configuration/fallbacks/). The payload expects an array of messages, and each message is an object with the following parameters: - `provider`: the name of the provider you would like to direct this message to. Can be OpenAI, workers-ai, or any of our supported providers. - `endpoint`: the pathname of the provider API you’re trying to reach. For example, on OpenAI it can be `chat/completions`, and for Workers AI this might be [`@cf/meta/llama-3.1-8b-instruct`](/workers-ai/models/llama-3.1-8b-instruct/). See more in the sections that are specific to [each provider](/ai-gateway/providers/). - `authorization`: the content of the Authorization HTTP Header that should be used when contacting this provider. This usually starts with 'Token' or 'Bearer'. - `query`: the payload as the provider expects it in their official API. ## cURL example A Universal Endpoint request sends an array of provider objects, which are tried in order: for example, a request that goes to the Workers AI Inference API first and falls back to OpenAI if it fails. You can add as many fallbacks as you need by adding another JSON object to the array; the full cURL request is shown in the Hierarchy example below. ## WebSockets API The Universal Endpoint can also be accessed via a [WebSockets API](/ai-gateway/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets.
## WebSockets example ```javascript import WebSocket from "ws"; const ws = new WebSocket( "wss://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/", { headers: { "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN", }, }, ); // Wait for the connection to open before sending the request. ws.on("open", () => { ws.send( JSON.stringify({ type: "universal.create", request: { eventId: "my-request", provider: "workers-ai", endpoint: "@cf/meta/llama-3.1-8b-instruct", headers: { Authorization: "Bearer WORKERS_AI_TOKEN", "Content-Type": "application/json", }, query: { prompt: "tell me a joke", }, }, }), ); }); ws.on("message", function incoming(message) { console.log(message.toString()); }); ``` ## Workers Binding example import { WranglerConfig } from "~/components"; ```toml title="wrangler.toml" [ai] binding = "AI" ``` ```typescript title="src/index.ts" type Env = { AI: Ai; }; export default { async fetch(request: Request, env: Env) { return env.AI.gateway('my-gateway').run({ provider: "workers-ai", endpoint: "@cf/meta/llama-3.1-8b-instruct", headers: { authorization: "Bearer my-api-token", }, query: { prompt: "tell me a joke", }, }); }, }; ``` ## Header configuration hierarchy The Universal Endpoint allows you to set fallback models or providers and customize headers for each provider or request. You can configure headers at three levels: 1. **Provider level**: Headers specific to a particular provider. 2. **Request level**: Headers included in individual requests. 3. **Gateway settings**: Default headers configured in your gateway dashboard. Since the same settings can be configured in multiple locations, AI Gateway applies a hierarchy to determine which configuration takes precedence: - **Provider-level headers** override all other configurations. - **Request-level headers** are used if no provider-level headers are set. - **Gateway-level settings** are used only if no headers are configured at the provider or request levels. This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for fine-tuned control, and gateway settings for general defaults. ## Hierarchy example This example demonstrates how headers set at different levels impact caching behavior: - **Request-level header**: The `cf-aig-cache-ttl` is set to `3600` seconds, applying this caching duration to the request by default. - **Provider-level header**: For the fallback provider (OpenAI), `cf-aig-cache-ttl` is explicitly set to `0` seconds, overriding the request-level header and disabling caching for responses when OpenAI is used as the provider. This shows how provider-level headers take precedence over request-level headers, allowing for granular control of caching behavior. ```bash curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id} \ --header 'Content-Type: application/json' \ --header 'cf-aig-cache-ttl: 3600' \ --data '[ { "provider": "workers-ai", "endpoint": "@cf/meta/llama-3.1-8b-instruct", "headers": { "Authorization": "Bearer {cloudflare_token}", "Content-Type": "application/json" }, "query": { "messages": [ { "role": "system", "content": "You are a friendly assistant" }, { "role": "user", "content": "What is Cloudflare?" } ] } }, { "provider": "openai", "endpoint": "chat/completions", "headers": { "Authorization": "Bearer {open_ai_token}", "Content-Type": "application/json", "cf-aig-cache-ttl": "0" }, "query": { "model": "gpt-4o-mini", "stream": true, "messages": [ { "role": "user", "content": "What is Cloudflare?"
} ] } } ]' ``` --- # Getting started URL: https://developers.cloudflare.com/autorag/get-started/ AutoRAG allows developers to create fully managed retrieval-augmented generation (RAG) pipelines to power AI applications with accurate and up-to-date information without needing to manage infrastructure. ## 1. Upload data or use existing data in R2 AutoRAG integrates with R2 for data import. Create an R2 bucket if you do not have one and upload your data. :::note Before you create your first bucket, you must purchase R2 from the Cloudflare dashboard. ::: To create and upload objects to your bucket from the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2) and select **R2**. 2. Select Create bucket, name the bucket, and select **Create bucket**. 3. Choose to either drag and drop your file into the upload area or **select from computer**. Review the [file limits](/autorag/configuration/data-source/) when creating your knowledge base. _If you need inspiration for what document to use to make your first AutoRAG, try downloading and uploading the [RSS](/changelog/rss/index.xml) of the [Cloudflare Changelog](/changelog/)._ ## 2. Create an AutoRAG To create a new AutoRAG: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/ai/autorag) and select **AI** > **AutoRAG**. 2. Select **Create AutoRAG**, configure the AutoRAG, and complete the setup process. 3. Select **Create**. ## 3. Monitor indexing Once created, AutoRAG will create a Vectorize index in your account and begin indexing the data. To monitor the indexing progress: 1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG. 2. Navigate to the **Overview** page to view the current indexing status. ## 4. Try it out Once indexing is complete, you can run your first query: 1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG. 2. Navigate to the **Playground** page. 3. Select **Search with AI** or **Search**. 4. Enter a **query** to test out its response. ## 5. Add to your application There are multiple ways you can create [RAG applications](/autorag/) with Cloudflare AutoRAG: - [Workers Binding](/autorag/usage/workers-binding/) - [REST API](/autorag/usage/rest-api/) --- # Overview URL: https://developers.cloudflare.com/autorag/ import { CardGrid, Description, LinkTitleCard, Plan, RelatedProduct, LinkButton, Feature, } from "~/components"; Create fully-managed RAG applications that continuously update and scale on Cloudflare. AutoRAG lets you create retrieval-augmented generation (RAG) pipelines that power your AI applications with accurate and up-to-date information. Create RAG applications that integrate context-aware AI without managing infrastructure. You can use AutoRAG to build: - **Product Chatbot:** Answer customer questions using your own product content. - **Docs Search:** Make documentation easy to search and use.
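For example, once an AutoRAG instance has indexed your content, a Worker can serve either use case by querying it through the AI binding. The following is a minimal sketch, assuming an instance named `my-autorag`; refer to the [Workers Binding](/autorag/usage/workers-binding/) documentation for the full API.

```ts
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { searchParams } = new URL(request.url);
    const query = searchParams.get("q") ?? "How do I get started?";

    // Retrieve relevant content from the indexed data source and
    // generate an answer grounded in it.
    const answer = await env.AI.autorag("my-autorag").aiSearch({ query });

    return Response.json(answer);
  },
};
```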
Get started Watch AutoRAG demo
--- ## Features Automatically and continuously index your data source, keeping your content fresh without manual reprocessing. Create multitenancy by scoping search to each tenant’s data using folder-based metadata filters. Call your AutoRAG instance for search or AI Search directly from a Cloudflare Worker using the native binding integration. Cache repeated queries and results to improve latency and reduce compute on repeated requests. --- ## Related products Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network. Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more. Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. --- ## More resources Build and deploy your first Workers AI application. Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Changelog URL: https://developers.cloudflare.com/browser-rendering/changelog/ import { ProductReleaseNotes } from "~/components"; --- # FAQ URL: https://developers.cloudflare.com/browser-rendering/faq/ import { GlossaryTooltip } from "~/components"; Below you will find answers to our most commonly asked questions. If you cannot find the answer you are looking for, refer to the [Discord](https://discord.cloudflare.com) to explore additional resources. ### I see `Cannot read properties of undefined (reading 'fetch')` when using Browser Rendering. How do I fix this? This error occurs because your Puppeteer launch is not receiving the Browser binding or you are not on a Workers Paid plan. To resolve: Pass your Browser binding into `puppeteer.launch`. ### Will Browser Rendering bypass Cloudflare's Bot Protection? No, Browser Rendering requests are always identified as bots by Cloudflare and do not bypass Bot Protection. Additionally, Browser Rendering respects the robots.txt protocol, ensuring that any disallowed paths specified for user agents are not accessed during rendering. If you are attempting to scan your **own zone** and need Browser Rendering to access areas protected by Cloudflare’s Bot Protection, you can create a [WAF skip rule](/waf/custom-rules/skip/) to bypass the bot protection using a header or a custom user agent. ### Why can't I use an XPath selector when using Browser Rendering with Puppeteer? Currently, it is not possible to use XPath to select elements, since this poses a security risk to Workers. As an alternative, use a CSS selector or `page.evaluate`, for example: ```ts const innerHtml = await page.evaluate(() => { return ( // @ts-ignore this runs on browser context new XPathEvaluator() .createExpression("/html/body/div/h1") // @ts-ignore this runs on browser context .evaluate(document, XPathResult.FIRST_ORDERED_NODE_TYPE).singleNodeValue .innerHTML ); }); ``` :::note Keep in mind that `page.evaluate` can only return primitive types like strings, numbers, etc. Returning an `HTMLElement` will not work. ::: ### What are the usage limits and pricing tiers for Cloudflare Browser Rendering and how do I estimate my costs?
You can view the complete breakdown of concurrency caps, request rates, timeouts, and REST API quotas on the [limits page](/browser-rendering/platform/limits/). By default, idle browser sessions close after 60 seconds of inactivity. You can adjust this with the [`keep_alive` option](/browser-rendering/platform/puppeteer/#keep-alive). #### Pricing Browser Rendering is currently free up to the limits above until billing begins. Pricing will be announced in advance. ### Does Browser Rendering rotate IP addresses for outbound requests? No. Browser Rendering requests originate from Cloudflare's global network, but you cannot configure per-request IP rotation. All rendering traffic comes from Cloudflare IP ranges and requests include special headers [(`cf-biso-request-id`, `cf-biso-devtools`)](/browser-rendering/reference/automatic-request-headers/) so origin servers can identify them. ### I see `Error processing the request: Unable to create new browser: code: 429: message: Browser time limit exceeded for today`. How do I fix it? This error indicates you have hit the daily browser-instance limit on the Workers Free plan. [Free plan accounts are capped at 10 minutes of browser use a day](/browser-rendering/platform/limits/#workers-free). Once you exceed that limit, further creation attempts return a 429 until the next UTC day. To resolve: [upgrade to a Workers Paid plan](/workers/platform/pricing/). Paid accounts raise these limits to [10 concurrent browsers and 10 new instances per minute](/browser-rendering/platform/limits/#workers-paid). --- # Browser Rendering URL: https://developers.cloudflare.com/browser-rendering/ import { CardGrid, Description, LinkTitleCard, Plan, RelatedProduct, } from "~/components"; Browser automation for [Cloudflare Workers](/workers/) and [quick browser actions](/browser-rendering/rest-api/). Browser Rendering enables developers to programmatically control and interact with headless browser instances running on Cloudflare’s global network. This facilitates tasks such as automating browser interactions, capturing screenshots, generating PDFs, and extracting data from web pages. ## Integration Methods You can integrate Browser Rendering into your applications using one of the following methods: - **[REST API](/browser-rendering/rest-api/)**: Ideal for simple, stateless tasks like capturing screenshots, generating PDFs, extracting HTML content, and more. - **[Workers Bindings](/browser-rendering/workers-binding-api/)**: Suitable for advanced browser automation within [Cloudflare Workers](/workers/). This method provides greater control, enabling more complex workflows and persistent sessions. Choose the method that best fits your use case. For example, use the [REST API endpoints](/browser-rendering/rest-api/) for straightforward tasks from external applications and use [Workers Bindings](/browser-rendering/workers-binding-api/) for complex automation within the Cloudflare ecosystem. ## Use Cases Browser Rendering can be utilized for various purposes, including: - Fetch HTML content of a page. - Capture screenshot of a webpage. - Convert a webpage into a PDF document. - Take a webpage snapshot. - Scrape specified HTML elements from a webpage. - Retrieve data in a structured format. - Extract Markdown content from a webpage. - Gather all hyperlinks found on a webpage. ## Related products Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
A globally distributed coordination API with strongly consistent storage. Build and deploy AI-powered agents that can autonomously perform tasks. ## More resources Deploy your first Browser Rendering project using Wrangler and Cloudflare's version of Puppeteer. New to Workers? Get started with the Workers Learning Path. Learn about Browser Rendering limits. Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Get started URL: https://developers.cloudflare.com/browser-rendering/get-started/ Browser rendering can be used in two ways: - [Workers Bindings](/browser-rendering/workers-binding-api) for complex scripts. - [REST API](/browser-rendering/rest-api/) for simple actions. --- # Cloudflare for Platforms URL: https://developers.cloudflare.com/cloudflare-for-platforms/ import { Description, Feature } from "~/components" Build your own multitenant platform using Cloudflare as infrastructure Cloudflare for Platforms lets you run untrusted code written by your customers, or by AI, in a secure, hosted sandbox, and give each customer their own subdomain or custom domain. ![Figure 1: Cloudflare for Platforms Architecture Diagram](~/assets/images/reference-architecture/programmable-platforms/programmable-platforms-2.svg) You can think of Cloudflare for Platforms as the exact same products and functionality that Cloudflare offers its own customers, structured so that you can offer it to your own customers, embedded within your own product. This includes: - **Isolation and multitenancy** — each of your customers runs code in their own Worker — a [secure and isolated sandbox](/workers/reference/how-workers-works/) - **Programmable routing, ingress, egress and limits** — you write code that dispatches requests to your customers' code, and can control [ingress](/cloudflare-for-platforms/workers-for-platforms/get-started/dynamic-dispatch/), [egress](/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/) and set [per-customer limits](/cloudflare-for-platforms/workers-for-platforms/configuration/custom-limits/) - **Databases and storage** — you can provide [databases, object storage and more](/workers/runtime-apis/bindings/) to your customers as APIs they can call directly, without API tokens, keys, or external dependencies - **Custom Domains and Subdomains** — you [call an API](/cloudflare-for-platforms/cloudflare-for-saas/) to create custom subdomains or configure custom domains for each of your customers Cloudflare for Platforms is used by leading platforms big and small to: - Build application development platforms tailored to specific domains, like ecommerce storefronts or mobile apps - Power AI coding platforms that let anyone build and deploy software - Customize product behavior by allowing any user to write a short code snippet - Offer every customer their own isolated database - Provide each customer with their own subdomain *** ## Products Let your customers build and deploy their own applications to your platform, using Cloudflare's developer platform. Give your customers their own subdomain or custom domain, protected and accelerated by Cloudflare. --- # Overview URL: https://developers.cloudflare.com/constellation/ import { CardGrid, Description, LinkTitleCard } from "~/components" Run machine learning models with Cloudflare Workers. 
Constellation allows you to run fast, low-latency inference tasks on pre-trained machine learning models natively on Cloudflare Workers. It supports some of the most popular machine learning (ML) and AI runtimes and multiple classes of models. Cloudflare provides a curated list of verified models, or you can train and upload your own. Functionality you can deploy to your application with Constellation: * Content generation, summarization, or similarity analysis * Question answering * Audio transcription * Image or audio classification * Object detection * Anomaly detection * Sentiment analysis *** ## More resources Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Demos and architectures URL: https://developers.cloudflare.com/d1/demos/ import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components" Learn how you can use D1 within your existing application and architecture. ## Featured Demos - [Starter code for D1 Sessions API](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template): An introduction to D1 Sessions API. This demo simulates purchase orders administration. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) :::note[Tip: Place your database further away for the read replication demo] To simulate how read replication can improve a worst case latency scenario, select your primary database location to be in a farther away region (one of the deployment steps). You can find this in the **Database location hint** dropdown. ::: ## Demos Explore the following demo applications for D1. ## Reference architectures Explore the following reference architectures that use D1: --- # Getting started URL: https://developers.cloudflare.com/d1/get-started/ import { Render, PackageManagers, Steps, FileTree, Tabs, TabItem, TypeScriptExample, WranglerConfig } from "~/components"; This guide instructs you through: - Creating your first database using D1, Cloudflare's native serverless SQL database. - Creating a schema and querying your database via the command-line. - Connecting a [Cloudflare Worker](/workers/) to your D1 database using bindings, and querying your D1 database programmatically. You can perform these tasks through the CLI or through the Cloudflare dashboard. :::note If you already have an existing Worker and an existing D1 database, follow this tutorial from [3. Bind your Worker to your D1 database](/d1/get-started/#3-bind-your-worker-to-your-d1-database). ::: ## Quick start If you want to skip the steps and get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/d1-get-started/d1/d1-get-started) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. Use this option if you are familiar with Cloudflare Workers, and wish to skip the step-by-step guidance. You may wish to manually follow the steps if you are new to Cloudflare Workers. ## Prerequisites ## 1. Create a Worker Create a new Worker as the means to query your database. 1. 
Create a new project named `d1-tutorial` by running: This creates a new `d1-tutorial` directory as illustrated below. - d1-tutorial - node_modules/ - test/ - src - **index.ts** - package-lock.json - package.json - testconfig.json - vitest.config.mts - worker-configuration.d.ts - **wrangler.jsonc** Your new `d1-tutorial` directory includes: - A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) in `index.ts`. - A [Wrangler configuration file](/workers/wrangler/configuration/). This file is how your `d1-tutorial` Worker accesses your D1 database. :::note If you are familiar with Cloudflare Workers, or initializing projects in a Continuous Integration (CI) environment, initialize a new project non-interactively by setting `CI=true` as an [environmental variable](/workers/configuration/environment-variables/) when running `create cloudflare@latest`. For example: `CI=true npm create cloudflare@latest d1-tutorial --type=simple --git --ts --deploy=false` creates a basic "Hello World" project ready to build on. ::: 1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Go to your account > **Compute (Workers)** > **Workers & Pages**. 3. Select **Create**. 4. Under **Start from a template**, select **Hello world**. 5. Name your Worker. For this tutorial, name your Worker `d1-tutorial`. 6. Select **Deploy**. ## 2. Create a database A D1 database is conceptually similar to many other SQL databases: a database may contain one or more tables, the ability to query those tables, and optional indexes. D1 uses the familiar [SQL query language](https://www.sqlite.org/lang.html) (as used by SQLite). To create your first D1 database: 1. Change into the directory you just created for your Workers project: ```sh cd d1-tutorial ``` 2. Run the following `wrangler@latest d1` command and give your database a name. In this tutorial, the database is named `prod-d1-tutorial`: :::note The [Wrangler command-line interface](/workers/wrangler/) is Cloudflare's tool for managing and deploying Workers applications and D1 databases in your terminal. It was installed when you used `npm create cloudflare@latest` to initialize your new project. While Wrangler gets installed locally to your project, you can use it outside the project by using the command `npx wrangler`. ::: ```sh npx wrangler@latest d1 create prod-d1-tutorial ``` ```sh output ✅ Successfully created DB 'prod-d1-tutorial' in region WEUR Created your new D1 database. { "d1_databases": [ { "binding": "DB", "database_name": "prod-d1-tutorial", "database_id": "" } ] } ``` This creates a new D1 database and outputs the [binding](/workers/runtime-apis/bindings/) configuration needed in the next step. 1. Go to **Storage & Databases** > **D1 SQL Database**. 2. Select **Create Database**. 3. Name your database. For this tutorial, name your D1 database `prod-d1-tutorial`. 4. (Optional) Provide a location hint. Location hint is an optional parameter you can provide to indicate your desired geographical location for your database. Refer to [Provide a location hint](/d1/configuration/data-location/#provide-a-location-hint) for more information. 5. Select **Create**. :::note For reference, a good database name: - Uses a combination of ASCII characters, shorter than 32 characters, and uses dashes (-) instead of spaces. - Is descriptive of the use-case and environment. For example, "staging-db-web" or "production-db-backend". - Only describes the database, and is not directly referenced in code. ::: ## 3. 
Bind your Worker to your D1 database You must create a binding for your Worker to connect to your D1 database. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to access resources, like D1, on the Cloudflare developer platform. To bind your D1 database to your Worker: You create bindings by updating your Wrangler file. 1. Copy the lines obtained from [step 2](/d1/get-started/#2-create-a-database) from your terminal. 2. Add them to the end of your Wrangler file. ```toml [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "prod-d1-tutorial" database_id = "" ``` Specifically: - The value (string) you set for `binding` is the **binding name**, and is used to reference this database in your Worker. In this tutorial, name your binding `DB`. - The binding name must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding. - Your binding is available in your Worker at `env.` and the D1 [Workers Binding API](/d1/worker-api/) is exposed on this binding. :::note When you execute the `wrangler d1 create` command, the client API package (which implements the D1 API and database class) is automatically installed. For more information on the D1 Workers Binding API, refer to [Workers Binding API](/d1/worker-api/). ::: You can also bind your D1 database to a [Pages Function](/pages/functions/). For more information, refer to [Functions Bindings for D1](/pages/functions/bindings/#d1-databases). You create bindings by adding them to the Worker you have created. 1. Go to **Compute (Workers)** > **Workers & Pages**. 2. Select the `d1-tutorial` Worker you created in [step 1](/d1/get-started/#1-create-a-worker). 3. Select **Settings**. 4. Scroll to **Bindings**, then select **Add**. 5. Select **D1 database**. 6. Name your binding in **Variable name**, then select the `prod-d1-tutorial` D1 database you created in [step 2](/d1/get-started/#2-create-a-database) from the dropdown menu. For this tutorial, name your binding `DB`. 7. Select **Deploy** to deploy your binding. When deploying, there are two options: - **Deploy:** Immediately deploy the binding to 100% of your audience. - **Save version:** Save a version of the binding which you can deploy in the future. For this tutorial, select **Deploy**. ## 4. Run a query against your D1 database ### Populate your D1 database After correctly preparing your [Wrangler configuration file](/workers/wrangler/configuration/), set up your database. Create a `schema.sql` file using the SQL syntax below to initialize your database. 1. Copy the following code and save it as a `schema.sql` file in the `d1-tutorial` Worker directory you created in step 1: ```sql DROP TABLE IF EXISTS Customers; CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT); INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name'); ``` 2. Initialize your database to run and test locally first. 
Bootstrap your new D1 database by running: ```sh npx wrangler d1 execute prod-d1-tutorial --local --file=./schema.sql ``` ```output ⛅️ wrangler 4.13.2 ------------------- 🌀 Executing on local database prod-d1-tutorial () from .wrangler/state/v3/d1: 🌀 To execute on your remote database, add a --remote flag to your wrangler command. 🚣 3 commands executed successfully. ``` :::note The command `npx wrangler d1 execute` initializes your database locally, not on the remote database. ::: 3. Validate that your data is in the database by running: ```sh npx wrangler d1 execute prod-d1-tutorial --local --command="SELECT * FROM Customers" ``` ```sh output 🌀 Mapping SQL input into an array of statements 🌀 Executing on local database prod-d1-tutorial () from .wrangler/state/v3/d1: ┌────────────┬─────────────────────┬───────────────────┐ │ CustomerId │ CompanyName │ ContactName │ ├────────────┼─────────────────────┼───────────────────┤ │ 1 │ Alfreds Futterkiste │ Maria Anders │ ├────────────┼─────────────────────┼───────────────────┤ │ 4 │ Around the Horn │ Thomas Hardy │ ├────────────┼─────────────────────┼───────────────────┤ │ 11 │ Bs Beverages │ Victoria Ashworth │ ├────────────┼─────────────────────┼───────────────────┤ │ 13 │ Bs Beverages │ Random Name │ └────────────┴─────────────────────┴───────────────────┘ ``` Use the Dashboard to create a table and populate it with data. 1. Go to **Storage & Databases** > **D1 SQL Database**. 2. Select the `prod-d1-tutorial` database you created in [step 2](/d1/get-started/#2-create-a-database). 3. Select **Console**. 4. Paste the following SQL snippet. ```sql DROP TABLE IF EXISTS Customers; CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT); INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name'); ``` 5. Select **Execute**. This creates a table called `Customers` in your `prod-d1-tutorial` database. 6. Select **Tables**, then select the `Customers` table to view the contents of the table. ### Write queries within your Worker After you have set up your database, run an SQL query from within your Worker. 1. Navigate to your `d1-tutorial` Worker and open the `index.ts` file. The `index.ts` file is where you configure your Worker's interactions with D1. 2. Clear the content of `index.ts`. 3. Paste the following code snippet into your `index.ts` file: ```typescript export interface Env { // If you set another name in the Wrangler config file for the value for 'binding', // replace "DB" with the variable name you defined. DB: D1Database; } export default { async fetch(request, env): Promise<Response> { const { pathname } = new URL(request.url); if (pathname === "/api/beverages") { // If you did not use `DB` as your binding name, change it here const { results } = await env.DB.prepare( "SELECT * FROM Customers WHERE CompanyName = ?", ) .bind("Bs Beverages") .all(); return Response.json(results); } return new Response( "Call /api/beverages to see everyone who works at Bs Beverages", ); }, } satisfies ExportedHandler<Env>; ``` In the code above, you: 1. Define a binding to your D1 database in your code. This binding matches the `binding` value you set in the [Wrangler configuration file](/workers/wrangler/configuration/) under `d1_databases`. 2.
Query your database using `env.DB.prepare` to issue a [prepared query](/d1/worker-api/d1-database/#prepare) with a placeholder (the `?` in the query). 3. Call `bind()` to safely and securely bind a value to that placeholder. In a real application, you would allow a user to pass the `CompanyName` they want to list results for. Using `bind()` prevents users from executing arbitrary SQL (known as "SQL injection") against your application and deleting or otherwise modifying your database. 4. Execute the query by calling `all()` to return all rows (or none, if the query returns none). 5. Return your query results, if any, in JSON format with `Response.json(results)`. After configuring your Worker, you can test your project locally before you deploy globally. You can query your D1 database using your Worker. 1. Go to **Compute (Workers)** > **Workers & Pages**. 2. Select the `d1-tutorial` Worker you created. 3. Select the **Edit code** icon (**\<\/\>**). 4. Clear the contents of the `worker.js` file, then paste the following code: ```js export default { async fetch(request, env) { const { pathname } = new URL(request.url); if (pathname === "/api/beverages") { // If you did not use `DB` as your binding name, change it here const { results } = await env.DB.prepare( "SELECT * FROM Customers WHERE CompanyName = ?" ) .bind("Bs Beverages") .all(); return new Response(JSON.stringify(results), { headers: { 'Content-Type': 'application/json' } }); } return new Response( "Call /api/beverages to see everyone who works at Bs Beverages" ); }, }; ``` 5. Select **Save**. ## 5. Deploy your application Deploy your application on Cloudflare's global network. To deploy your Worker to production using Wrangler, you must first repeat the [database configuration](/d1/get-started/#populate-your-d1-database) steps after replacing the `--local` flag with the `--remote` flag to give your Worker data to read. This creates the database tables and imports the data into the production version of your database. 1. Create tables and add entries to your remote database with the `schema.sql` file you created in step 4. Enter `y` to confirm your decision. ```sh npx wrangler d1 execute prod-d1-tutorial --remote --file=./schema.sql ``` ```sh output ✔ ⚠️ This process may take some time, during which your D1 database will be unavailable to serve queries. Ok to proceed? y 🚣 Executed 3 queries in 0.00 seconds (5 rows read, 6 rows written) Database is currently at bookmark 00000002-00000004-00004ef1-ad4a06967970ee3b20860c86188a4b31. ┌────────────────────────┬───────────┬──────────────┬────────────────────┐ │ Total queries executed │ Rows read │ Rows written │ Database size (MB) │ ├────────────────────────┼───────────┼──────────────┼────────────────────┤ │ 3 │ 5 │ 6 │ 0.02 │ └────────────────────────┴───────────┴──────────────┴────────────────────┘ ``` 2. Validate the data is in production by running: ```sh npx wrangler d1 execute prod-d1-tutorial --remote --command="SELECT * FROM Customers" ``` ```sh output ⛅️ wrangler 4.13.2 ------------------- 🌀 Executing on remote database prod-d1-tutorial (): 🌀 To execute on your local development database, remove the --remote flag from your wrangler command. 
🚣 Executed 1 command in 0.4069ms ┌────────────┬─────────────────────┬───────────────────┐ │ CustomerId │ CompanyName │ ContactName │ ├────────────┼─────────────────────┼───────────────────┤ │ 1 │ Alfreds Futterkiste │ Maria Anders │ ├────────────┼─────────────────────┼───────────────────┤ │ 4 │ Around the Horn │ Thomas Hardy │ ├────────────┼─────────────────────┼───────────────────┤ │ 11 │ Bs Beverages │ Victoria Ashworth │ ├────────────┼─────────────────────┼───────────────────┤ │ 13 │ Bs Beverages │ Random Name │ └────────────┴─────────────────────┴───────────────────┘ ``` 3. Deploy your Worker to make your project accessible on the Internet. Run: ```sh npx wrangler deploy ``` ```sh output ⛅️ wrangler 4.13.2 ------------------- Total Upload: 0.19 KiB / gzip: 0.16 KiB Your worker has access to the following bindings: - D1 Databases: - DB: prod-d1-tutorial () Uploaded d1-tutorial (3.76 sec) Deployed d1-tutorial triggers (2.77 sec) https://d1-tutorial..workers.dev Current Version ID: ``` You can now visit the URL for your newly created project to query your live database. For example, if the URL of your new Worker is `d1-tutorial..workers.dev`, accessing `https://d1-tutorial..workers.dev/api/beverages` sends a request to your Worker that queries your live database directly. 4. Test that your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `https://d1-tutorial..workers.dev/api/beverages`. 1. Go to **Compute (Workers)** > **Workers & Pages**. 2. Select your `d1-tutorial` Worker. 3. Select **Deployments**. 4. From the **Version History** table, select **Deploy version**. 5. From the **Deploy version** page, select **Deploy**. This deploys the latest version of the Worker code to production. ## 6. (Optional) Develop locally with Wrangler If you are using D1 with Wrangler, you can test your database locally. While in your project directory: 1. Run `wrangler dev`: ```sh npx wrangler dev ``` When you run `wrangler dev`, Wrangler provides a URL (most likely `localhost:8787`) to review your Worker. 2. Go to the URL. The page displays `Call /api/beverages to see everyone who works at Bs Beverages`. 3. Test that your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `localhost:8787/api/beverages`. If successful, the browser displays your data. :::note You can only develop locally if you are using Wrangler. You cannot develop locally through the Cloudflare dashboard. ::: ## 7. (Optional) Delete your database To delete your database: Run: ```sh npx wrangler d1 delete prod-d1-tutorial ``` 1. Go to **Storage & Databases** > **D1 SQL Database**. 2. Select your `prod-d1-tutorial` D1 database. 3. Select **Settings**. 4. Select **Delete**. 5. Type the name of the database (`prod-d1-tutorial`) to confirm the deletion. :::caution Note that deleting your D1 database will stop your application from functioning as before. ::: If you want to delete your Worker: Run: ```sh npx wrangler delete d1-tutorial ``` 1. Go to **Compute (Workers)** > **Workers & Pages**. 2. Select your `d1-tutorial` Worker. 3. Select **Settings**. 4. Scroll to the bottom of the page, then select **Delete**. 5. Type the name of the Worker (`d1-tutorial`) to confirm the deletion.
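If you keep building on this database instead of deleting it, note that the D1 binding exposes more query helpers than the `.all()` call used in this tutorial. The following is a brief sketch: the wrapper function is illustrative, while `first()` and `batch()` come from the [D1 Workers Binding API](/d1/worker-api/).

```ts
// Call this from your Worker's fetch handler, passing in env.DB.
async function exampleQueries(db: D1Database) {
  // first() returns a single row (or null) instead of an array of rows.
  const customer = await db
    .prepare("SELECT * FROM Customers WHERE CustomerId = ?")
    .bind(1)
    .first();

  // batch() sends several prepared statements to D1 in a single round trip.
  const [bs, horn] = await db.batch([
    db.prepare("SELECT * FROM Customers WHERE CompanyName = ?").bind("Bs Beverages"),
    db.prepare("SELECT * FROM Customers WHERE CompanyName = ?").bind("Around the Horn"),
  ]);

  return { customer, bsBeverages: bs.results, aroundTheHorn: horn.results };
}
```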
## Summary In this tutorial, you have: - Created a D1 database - Created a Worker to access that database - Deployed your project globally ## Next steps If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com). - See supported [Wrangler commands for D1](/workers/wrangler/commands/#d1). - Learn how to use [D1 Worker Binding APIs](/d1/worker-api/) within your Worker, and test them from the [API playground](/d1/worker-api/#api-playground). - Explore [community projects built on D1](/d1/reference/community-projects/). --- # Cloudflare D1 URL: https://developers.cloudflare.com/d1/ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components" Create new serverless SQL databases to query from your Workers and Pages projects. D1 is Cloudflare's managed, serverless database with SQLite's SQL semantics, built-in disaster recovery, and Worker and HTTP API access. D1 is designed for horizontal scale out across multiple, smaller (10 GB) databases, such as per-user, per-tenant or per-entity databases. D1 allows you to build applications with thousands of databases at no extra cost for isolating with multiple databases. D1 pricing is based only on query and storage costs. Create your first D1 database by [following the Get started guide](/d1/get-started/), learn how to [import data into a database](/d1/best-practices/import-export-data/), and how to [interact with your database](/d1/worker-api/) directly from [Workers](/workers/) or [Pages](/pages/functions/bindings/#d1-databases). *** ## Features Create your first D1 database, establish a schema, import data and query D1 directly from an application [built with Workers](/workers/). Execute SQL with SQLite's SQL compatibility and D1 Client API. Time Travel is D1’s approach to backups and point-in-time-recovery, and allows you to restore a database to any minute within the last 30 days. *** ## Related products Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Deploy dynamic front-end applications in record time. *** ## More resources Learn about D1's pricing and how to estimate your usage. Learn about what limits D1 has and how to work within them. Browse what developers are building with D1. Learn more about the storage and database options you can build on with Workers. Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Developer Platform. --- # Wrangler commands URL: https://developers.cloudflare.com/d1/wrangler-commands/ import { Render, Type, MetaInfo } from "~/components" D1 Wrangler commands use REST APIs to interact with the control plane. This page lists the Wrangler commands for D1. ## Global commands ## Experimental commands ### `insights` Returns statistics about your queries. ```sh npx wrangler d1 insights --