diff --git a/docs/debugging/faq.mdx b/docs/debugging/faq.mdx
new file mode 100644
index 00000000..37f6dd7c
--- /dev/null
+++ b/docs/debugging/faq.mdx
@@ -0,0 +1,398 @@
+---
+title: Frequently Asked Questions
+subtitle: Common questions and answers from the Skyvern community
+slug: debugging/faq
+---
+
+## Troubleshooting
+
+### Why did my task fail?
+
+Check these in order:
+
+1. **Run status** — Use `get_run(run_id)` to see the status and `failure_reason`
+2. **Timeline** — Use `get_run_timeline(run_id)` to find which block failed
+3. **Artifacts** — Check `screenshot_final` to see where it stopped, and `recording` to watch what happened
+
+Common causes:
+- **`terminated`** — The AI couldn't complete the goal (CAPTCHA, login blocked, element not found)
+- **`timed_out`** — Exceeded `max_steps`. Increase it or simplify the task
+- **`failed`** — System error (browser crash, network failure)
+
+See the [Troubleshooting Guide](/debugging/troubleshooting-guide) for detailed steps.
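Those status-to-cause mappings can be wrapped in a small local triage helper. This is a sketch for your own scripts, not part of the Skyvern SDK; the status strings match those returned by `get_run`:

```python
# Hedged sketch: map a finished run's status to the likely cause listed above.
# Not part of the Skyvern SDK -- just a local helper for debugging scripts.

LIKELY_CAUSES = {
    "terminated": "the AI couldn't complete the goal (CAPTCHA, login blocked, element not found)",
    "timed_out": "exceeded max_steps; increase it or simplify the task",
    "failed": "system error (browser crash, network failure)",
}

def triage(status: str, failure_reason: str = "") -> str:
    """Return a human-readable hint for a finished run."""
    hint = LIKELY_CAUSES.get(status, "no known common cause for this status")
    if failure_reason:
        hint += f" | failure_reason: {failure_reason}"
    return hint
```

You might call `triage(run.status, run.failure_reason or "")` after polling a run to completion.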
+
+
+### Why did the AI click the wrong element?
+
+Multiple similar elements on the page can confuse the AI. Fix this by:
+
+1. **Add visual context** — "Click the **blue** Submit button at the **bottom** of the form"
+2. **Reference surrounding text** — "Click Download in the row that says 'January 2024'"
+3. **Be specific about position** — "Click the first result" or "Click the link in the header"
+
+Check the `llm_response_parsed` artifact to see what action the AI decided to take.
+
+
+### Why did my task time out?
+
+Tasks time out when they exceed `max_steps` (default: 50). This happens when:
+
+- The task is too complex for the step limit
+- The AI gets stuck in a navigation loop
+- Missing completion criteria causes infinite attempts
+
+**Fixes:**
+- Increase `max_steps` in your task parameters
+- Add explicit completion criteria: "COMPLETE when you see 'Order confirmed'"
+- Break complex tasks into multiple workflow blocks
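Combined, the first two fixes might look like the following task parameters. This is a sketch: the prompt text and URL are placeholders, and only `max_steps` and the completion-criteria pattern come from this page:

```python
# Hedged sketch: task parameters for a re-run after a timeout.
# The prompt and URL are illustrative placeholders.
task_params = {
    "prompt": (
        "Add the item to the cart and check out. "
        "COMPLETE when you see 'Order confirmed'."  # explicit completion criteria
    ),
    "url": "https://example.com",
    "max_steps": 100,  # raised from the default of 50
}

# These would then be passed to run_task, e.g.:
#   result = await client.run_task(**task_params)
```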
+
+
+### Why can't Skyvern find an element on the page?
+
+Elements may not be detected if they:
+
+- Load dynamically after the page appears
+- Are inside an iframe
+- Require scrolling to become visible
+- Use unusual rendering (canvas, shadow DOM)
+
+**Fixes:**
+- Add `max_screenshot_scrolls` to capture lazy-loaded content
+- Describe elements visually rather than by technical properties
+- Add explicit wait or scroll instructions in your prompt
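For example, pairing a scroll-and-wait instruction with `max_screenshot_scrolls` (a sketch; the prompt text and URL are placeholders, and `max_screenshot_scrolls` is the parameter named above):

```python
# Hedged sketch: parameters for a page with lazy-loaded content below the fold.
task_params = {
    "prompt": (
        "Scroll to the bottom of the page, wait for the reviews section "
        "to finish loading, then extract the first three reviews."
    ),
    "url": "https://example.com/product",
    "max_screenshot_scrolls": 3,  # capture lazy-loaded content below the fold
}
```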
+
+
+### Why is the extracted data wrong?
+
+Check the `screenshot_final` artifact to verify the AI ended on the right page.
+
+Common causes:
+- Prompt was ambiguous about what to extract
+- `data_extraction_schema` descriptions weren't specific enough
+- Multiple similar elements and the AI picked the wrong one
+
+**Fixes:**
+- Add specific descriptions: `"description": "The price in USD, without currency symbol"`
+- Add visual hints: "The price is displayed in bold below the product title"
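A schema with that level of specificity might look like this. A sketch only: the field names are illustrative, and the JSON-schema shape of `data_extraction_schema` is an assumption:

```python
# Hedged sketch: a data_extraction_schema with specific field descriptions.
# Field names are illustrative; the JSON-schema shape is an assumption.
data_extraction_schema = {
    "type": "object",
    "properties": {
        "price": {
            "type": "number",
            "description": (
                "The price in USD, without currency symbol. "
                "Displayed in bold below the product title."
            ),
        },
        "title": {
            "type": "string",
            "description": "The product title at the top of the page.",
        },
    },
}
```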
+
+
+---
+
+## Workflows
+
+### How do I pass parameters to a workflow?
+
+Define parameters in the workflow editor, then pass them when starting a run:
+
+```python
+result = await client.run_workflow(
+ workflow_id="wpid_123456",
+ parameters={
+ "url": "https://example.com",
+ "username": "user@example.com"
+ }
+)
+```
+
+Parameters are available in prompts using `{{ parameter_name }}` syntax. See [Workflow Parameters](/workflows/workflow-parameters) for details.
+
+
+### How do I handle logins in a workflow?
+
+1. **Store credentials** — Go to Settings > Credentials and add your login
+2. **Reference in workflow** — Select the credential in the Login block's credential dropdown
+3. **Never hardcode** — Don't put passwords directly in prompts
+
+The AI will automatically fill the credential fields when it encounters a login form. See [Credentials](/credentials/introduction) for setup instructions.
+
+
+### Can I schedule workflows to run automatically?
+
+Skyvern doesn't have built-in scheduling yet. Use external schedulers like:
+
+- **Cron jobs** — Call the Skyvern API on a schedule
+- **Zapier/Make** — Trigger workflows from automation platforms
+- **Your backend** — Integrate scheduling into your application
+
+Set up a [webhook](/running-tasks/webhooks-faq) to get notified when scheduled runs complete.
+
+
+### Why aren't my parameters showing up in prompts?
+
+Make sure you're using the parameter placeholder syntax correctly:
+
+- **Correct:** `{{ parameter_name }}`
+- **Wrong:** `{parameter_name}` or `$parameter_name`
+
+Also verify:
+- The parameter name matches exactly (case-sensitive)
+- The parameter is defined in the workflow's parameter list
+- You're passing the parameter when starting the run, not after
+
+
+### How do I stop concurrent runs from sharing one login?
+
+Use the "Run Sequentially" option with a credential key:
+
+1. Enable "Run Sequentially" in workflow settings
+2. Set the sequential key to your credential identifier
+3. Skyvern will queue runs that share the same credential
+
+This prevents concurrent logins that could trigger security blocks.
+
+
+---
+
+## Configuration
+
+### Where do I find my API key?
+
+Go to [Settings](https://app.skyvern.com/settings) in the Skyvern Cloud UI. Your API key is displayed there.
+
+For local installations, run `skyvern init` to generate a local API key, which will be saved in your `.env` file.
+
+
+### How do I change the LLM model?
+
+**Cloud:** The model is managed by Skyvern. Contact support for custom model requirements.
+
+**Self-hosted:** Run `skyvern init llm` to reconfigure, or set these environment variables:
+- `LLM_KEY` — Primary model (e.g., `OPENAI_GPT4O`, `ANTHROPIC_CLAUDE_SONNET`)
+- `SECONDARY_LLM_KEY` — Optional lightweight model for faster operations
+
+
+### How do I set the proxy location?
+
+Set `proxy_location` when running a task:
+
+```python
+result = await client.run_task(
+ prompt="...",
+ url="https://example.com",
+ proxy_location="RESIDENTIAL_US" # or RESIDENTIAL_ISP for static IPs
+)
+```
+
+Available locations include `RESIDENTIAL_US`, `RESIDENTIAL_UK`, `RESIDENTIAL_ISP`, and more. Use `RESIDENTIAL_ISP` for sites that require consistent IP addresses.
+
+
+### Can I increase the step limit?
+
+Yes. Contact support to increase the limit for your account. Higher limits may incur additional costs.
+
+Alternatively, break complex tasks into multiple workflow blocks, each with its own step budget.
+
+
+---
+
+## API & Integration
+
+### How do I know when a run finishes?
+
+Check the run status in a loop:
+
+```python
+import asyncio
+
+while True:
+ run = await client.get_run(run_id)
+ if run.status in ["completed", "failed", "terminated", "timed_out", "canceled"]:
+ break
+ await asyncio.sleep(5)
+```
+
+Or use [webhooks](/running-tasks/webhooks-faq) to get notified when runs complete without polling.
+
+
+### Where do I find the extracted data?
+
+The extracted data is in `run.output`:
+
+```python
+run = await client.get_run(run_id)
+print(run.output) # {"field1": "value1", "field2": "value2"}
+```
+
+For downloaded files, check `run.downloaded_files` which contains signed URLs.
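Because signed URLs expire, it can help to save downloads promptly. A stdlib-only sketch; the attribute names on `run.downloaded_files` entries shown in the comment are assumptions:

```python
# Hedged sketch: derive a local filename from a signed URL and download it.
# Only the URL handling here is exercised; the download itself needs a live URL.
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve  # used for the actual download

def filename_from_signed_url(url: str) -> str:
    """Local filename from a signed URL, ignoring the query string."""
    return os.path.basename(urlparse(url).path)

# For a real run you would iterate run.downloaded_files -- the entry
# attribute names below are assumptions about the response shape:
#   for f in run.downloaded_files:
#       urlretrieve(f.url, filename_from_signed_url(f.url))
```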
+
+
+### How do I get the session recording?
+
+The recording URL is included in the run response:
+
+```python
+run = await client.get_run(run_id)
+print(run.recording_url) # https://skyvern-artifacts.s3.amazonaws.com/.../recording.webm
+```
+
+Note: Signed URLs expire after 24 hours. Request fresh URLs if expired.
+
+
+### How do I run a workflow via the API?
+
+Use the `run_workflow` method:
+
+```python
+result = await client.run_workflow(
+ workflow_id="wpid_123456",
+ parameters={"key": "value"}
+)
+print(result.run_id) # wr_789012
+```
+
+The workflow starts asynchronously. Poll the run status or configure a webhook to track completion.
+
+
+---
+
+## Authentication & Credentials
+
+### Why did the login fail?
+
+Check these common causes:
+
+1. **Credentials expired** — Verify username/password in your credential store
+2. **CAPTCHA appeared** — Use a browser profile with an existing session
+3. **MFA required** — Configure TOTP if the site requires 2FA
+4. **Site detected automation** — Use `RESIDENTIAL_ISP` proxy for trusted IPs
+
+Watch the `recording` artifact to see exactly what happened during login.
+
+
+### How do I deal with CAPTCHAs?
+
+Options to bypass CAPTCHAs:
+
+1. **Browser profile** — Create a profile with an authenticated session that skips login entirely
+2. **Human interaction block** — Pause the workflow for manual CAPTCHA solving
+3. **Residential proxy** — Use `RESIDENTIAL_ISP` for static IPs that sites trust more
+
+Skyvern has built-in CAPTCHA solving for common types, but some sites use advanced anti-bot measures.
+
+
+### How do I set up TOTP for 2FA?
+
+1. Go to Settings > Credentials
+2. Create a new TOTP credential with your authenticator secret
+3. Reference the TOTP credential in your workflow's login block
+
+The AI will automatically fetch and enter the TOTP code when it encounters a 2FA prompt. See [TOTP Setup](/credentials/totp) for detailed instructions.
+
+
+### How do I handle email OTP codes?
+
+Email OTP requires an external integration:
+
+1. Set up an email listener (Zapier, custom webhook, etc.)
+2. When the OTP arrives, send it back to Skyvern via the API
+3. Use a Human Interaction block to wait for the code
+
+Alternatively, check if the site supports TOTP authenticators, which Skyvern handles natively.
+
+
+---
+
+## Browser & Proxy
+
+### Why is the site blocking Skyvern?
+
+Sites may detect automation through:
+
+- IP reputation (datacenter IPs are often blocked)
+- Browser fingerprinting
+- Rate limiting
+
+**Fixes:**
+- Use `RESIDENTIAL_ISP` proxy for consistent, trusted IPs
+- Create a browser profile with an existing authenticated session
+- Slow down requests by adding explicit waits in your prompt
+
+
+### How do I stay logged in across runs?
+
+Use browser profiles:
+
+1. Create a browser profile in Settings > Browser Profiles
+2. Log in manually or via a workflow
+3. Reference the `browser_profile_id` in subsequent runs
+
+The profile preserves cookies, localStorage, and session state across runs.
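A later run might reference the profile like this. A sketch: the parameter name `browser_profile_id` comes from this page, and the value is a placeholder:

```python
# Hedged sketch: reusing a saved browser profile on a later run.
task_params = {
    "prompt": "Open the account dashboard and extract the current balance.",
    "url": "https://example.com/dashboard",
    # Placeholder -- copy the real ID from Settings > Browser Profiles:
    "browser_profile_id": "YOUR_PROFILE_ID",
}

# Passed to run_task, the run reuses the profile's cookies and session state:
#   result = await client.run_task(**task_params)
```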
+
+
+### Can I use my own Chrome browser?
+
+Yes, for self-hosted installations:
+
+```bash
+skyvern init # Choose "connect to existing Chrome" option
+skyvern run server
+```
+
+This connects to Chrome running on your machine, useful for debugging or accessing internal tools.
+
+
+### Why am I getting 504 errors?
+
+This usually indicates high load or network issues:
+
+- Wait a few seconds and retry
+- Check your rate limits
+- For self-hosted: verify your infrastructure has sufficient resources
+
+If persistent, contact support with your run IDs.
+
+
+---
+
+## Self-Hosting & Deployment
+
+### What are the requirements for self-hosting?
+
+- Python 3.11, 3.12, or 3.13
+- PostgreSQL database
+- An LLM API key (OpenAI, Anthropic, Azure, Gemini, or Ollama)
+- Docker (optional, for containerized deployment)
+
+Run `skyvern init` for an interactive setup wizard. See the [Quickstart](/getting-started/quickstart) for full instructions.
+
+
+### How do I upgrade a self-hosted installation?
+
+```bash
+pip install --upgrade skyvern
+alembic upgrade head # Run database migrations
+skyvern run server # Restart the server
+```
+
+For Docker deployments, pull the latest image and restart your containers.
+
+
+### How do I configure a different LLM provider?
+
+Set these environment variables:
+
+```bash
+ENABLE_OPENAI=true # or ENABLE_ANTHROPIC, ENABLE_AZURE, etc.
+OPENAI_API_KEY=sk-...
+LLM_KEY=OPENAI_GPT4O
+```
+
+Skyvern supports OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Gemini, and Ollama. Run `skyvern init llm` for guided configuration.
+
+
+---
+
+## Billing & Pricing
+
+### How does billing work?
+
+Pricing depends on your plan:
+
+- **Pay-as-you-go** — Charged per step or per task completion
+- **Enterprise** — Custom pricing based on volume
+
+View your usage and billing at [Settings > Billing](https://app.skyvern.com/billing).
+
+
+### Why is my credit balance lower than expected?
+
+Credits may be reserved for in-progress runs. Check:
+
+1. Active runs that haven't completed yet
+2. Failed runs that consumed partial credit
+3. Pending charges that haven't settled
+
+Wait for active runs to complete, then check your balance again.
+
+
+### How do I get an invoice?
+
+Go to [Settings > Billing](https://app.skyvern.com/billing) to download invoices. For custom invoice requirements (company name, VAT, etc.), contact support.
+
+
+---
+
+## Still have questions?
+
+
+- [Troubleshooting Guide](/debugging/troubleshooting-guide) — Step-by-step debugging for failed runs
+- Chat with us via the support widget
+
+
diff --git a/docs/debugging/troubleshooting-guide.mdx b/docs/debugging/troubleshooting-guide.mdx
new file mode 100644
index 00000000..7c7874d5
--- /dev/null
+++ b/docs/debugging/troubleshooting-guide.mdx
@@ -0,0 +1,376 @@
+---
+title: Troubleshooting Guide
+subtitle: Common issues, step timeline, and when to adjust what
+slug: debugging/troubleshooting-guide
+---
+
+When a run fails or produces unexpected results, this guide helps you identify the cause and fix it.
+
+## Quick debugging checklist
+
+
+![Quick debugging checklist](/images/debugging-checklist.png)
+
+
+
+
+1. Call `get_run(run_id)` and check `status` and `failure_reason`
+2. Call `get_run_timeline(run_id)` to see which block failed
+3. Check `screenshot_final` to see where it stopped, `recording` to watch what happened
+4. Adjust prompt, parameters, or file a bug based on what you find
+
+
+
+**Key artifacts to check:**
+- `screenshot_final` — Final page state
+- `recording` — Video of what happened
+- `llm_response_parsed` — What the AI decided to do
+- `visible_elements_tree` — What elements were detected
+- `har` — Network requests if something failed to load
+
+---
+
+## Step 1: Check the run
+
+
+```python Python
+import os
+from skyvern import Skyvern
+
+client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))
+
+run = await client.get_run(run_id)
+
+print(f"Status: {run.status}")
+print(f"Failure reason: {run.failure_reason}")
+print(f"Step count: {run.step_count}")
+print(f"Output: {run.output}")
+```
+
+```typescript TypeScript
+import { SkyvernClient } from "@skyvern/client";
+
+const client = new SkyvernClient({
+ apiKey: process.env.SKYVERN_API_KEY,
+});
+
+const run = await client.getRun(runId);
+
+console.log(`Status: ${run.status}`);
+console.log(`Failure reason: ${run.failure_reason}`);
+console.log(`Step count: ${run.step_count}`);
+console.log(`Output: ${JSON.stringify(run.output)}`);
+```
+
+```bash cURL
+curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID" \
+ -H "x-api-key: $SKYVERN_API_KEY" \
+ | jq '{status, failure_reason, step_count, output}'
+```
+
+
+| Status | What it means | Likely cause |
+|--------|---------------|--------------|
+| `completed` | Run finished, but output may be wrong | Prompt interpretation issue |
+| `failed` | System error | Browser crash, network failure |
+| `terminated` | AI gave up | Login blocked, CAPTCHA, page unavailable |
+| `timed_out` | Exceeded `max_steps` | Task too complex, or AI got stuck |
+| `canceled` | Manually stopped | — |
+
+---
+
+## Step 2: Read the step timeline
+
+The timeline shows each block and action in execution order.
+
+
+```python Python
+timeline = await client.get_run_timeline(run_id)
+
+for item in timeline:
+ if item.type == "block":
+ block = item.block
+ print(f"Block: {block.label}")
+ print(f" Status: {block.status}")
+ print(f" Duration: {block.duration}s")
+ if block.failure_reason:
+ print(f" Failure: {block.failure_reason}")
+ for action in block.actions:
+ print(f" Action: {action.action_type} -> {action.status}")
+```
+
+```typescript TypeScript
+const timeline = await client.getRunTimeline(runId);
+
+for (const item of timeline) {
+ if (item.type === "block") {
+ const block = item.block;
+ console.log(`Block: ${block.label}`);
+ console.log(` Status: ${block.status}`);
+ console.log(` Duration: ${block.duration}s`);
+ if (block.failure_reason) {
+ console.log(` Failure: ${block.failure_reason}`);
+ }
+ for (const action of block.actions || []) {
+ console.log(` Action: ${action.action_type} -> ${action.status}`);
+ }
+ }
+}
+```
+
+```bash cURL
+curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/timeline" \
+ -H "x-api-key: $SKYVERN_API_KEY"
+```
+
+
+---
+
+## Step 3: Inspect artifacts
+
+
+```python Python
+artifacts = await client.get_run_artifacts(
+ run_id,
+ artifact_type=["screenshot_final", "llm_response_parsed", "skyvern_log"]
+)
+
+for a in artifacts:
+ print(f"{a.artifact_type}: {a.signed_url}")
+```
+
+```typescript TypeScript
+const artifacts = await client.getRunArtifacts(runId, {
+ artifact_type: ["screenshot_final", "llm_response_parsed", "skyvern_log"]
+});
+
+for (const a of artifacts) {
+ console.log(`${a.artifact_type}: ${a.signed_url}`);
+}
+```
+
+```bash cURL
+curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts?artifact_type=screenshot_final&artifact_type=llm_response_parsed&artifact_type=skyvern_log" \
+ -H "x-api-key: $SKYVERN_API_KEY" \
+ | jq '.[] | "\(.artifact_type): \(.signed_url)"'
+```
+
+
+See [Using Artifacts](/debugging/using-artifacts) for the full list of artifact types.
+
+---
+
+## Common issues and fixes
+
+### Run Status Issues
+
+
+**Symptom:** Status is `completed`, but the extracted data is incorrect or incomplete.
+
+**Check:** `screenshot_final` (right page?) and `llm_response_parsed` (right elements?)
+
+**Fix:**
+- Add specific descriptions in your schema: `"description": "The price in USD, without currency symbol"`
+- Add visual hints: "The price is displayed in bold below the product title"
+
+
+
+**Symptom:** Status is `timed_out`.
+
+**Check:** `recording` (stuck in a loop?) and `step_count` (how many steps?)
+
+**Fix:**
+- Increase `max_steps` if the task genuinely needs more steps
+- Add explicit completion criteria: "COMPLETE when you see 'Order confirmed'"
+- Break complex tasks into multiple workflow blocks
+
+
+
+**Symptom:** Run stays in `queued` status and never starts.
+
+**Likely causes:** Concurrency limit hit, sequential run lock, or high platform load.
+
+**Fix:**
+- Wait for other runs to complete
+- Check your plan's concurrency limits in Settings
+- If using "Run Sequentially" with credentials, ensure prior runs finish
+
+
+
+**Symptom:** Getting `504 Gateway Timeout` errors when creating browser sessions.
+
+**Fix:**
+- Retry after a few seconds — usually transient
+- For self-hosted: check infrastructure resources
+- If persistent, contact support with your run IDs
+
+
+### Element & Interaction Issues
+
+
+**Symptom:** Recording shows the AI clicking something other than intended.
+
+**Check:** `visible_elements_tree` (element detected?) and `llm_prompt` (target described clearly?)
+
+**Fix:**
+- Add distinguishing details: "Click the **blue** Submit button at the **bottom** of the form"
+- Reference surrounding text: "Click Download in the row that says 'January 2024'"
+
+
+
+**Symptom:** `failure_reason` mentions an element couldn't be found.
+
+**Check:** `screenshot_final` (element visible?) and `visible_elements_tree` (element detected?)
+
+**Likely causes:** Dynamic loading, iframe, below the fold, unusual rendering (canvas, shadow DOM)
+
+**Fix:**
+- Add `max_screenshot_scrolls` to capture lazy-loaded content
+- Describe elements visually rather than by technical properties
+- Add wait or scroll instructions in your prompt
+
+
+
+**Symptom:** Dates entered in wrong format (MM/DD vs DD/MM) or wrong date entirely.
+
+**Check:** `recording` (how did AI interact with the date picker?)
+
+**Fix:**
+- Specify exact format: "Enter the date as **15/01/2024** (DD/MM/YYYY format)"
+- For date pickers: "Click the date field, then select January 15, 2024 from the calendar"
+- If site auto-formats: "Type 01152024 and let the field format it"
+
+
+
+**Symptom:** AI keeps clicking through address suggestions in a loop, or selects wrong address.
+
+**Check:** `recording` (is autocomplete dropdown appearing?)
+
+**Fix:**
+- Specify selection: "Type '123 Main St' and select the first suggestion showing 'New York, NY'"
+- Disable autocomplete: "Type the full address without using autocomplete suggestions"
+- Break into steps: "Enter street address, wait for dropdown, select the matching suggestion"
+
+
+### Authentication Issues
+
+
+**Symptom:** Status is `terminated` with login-related failure reason.
+
+**Check:** `screenshot_final` (error message?) and `recording` (fields filled correctly?)
+
+**Likely causes:** Expired credentials, CAPTCHA, MFA required, automation detected
+
+**Fix:**
+- Verify credentials in your credential store
+- Use `RESIDENTIAL_ISP` proxy for more stable IPs
+- Configure TOTP if MFA is required
+- Use a browser profile with an existing session
+
+
+
+**Symptom:** `failure_reason` mentions CAPTCHA.
+
+**Fix options:**
+1. **Browser profile** — Use an authenticated session that skips login
+2. **Human interaction block** — Pause for manual CAPTCHA solving
+3. **Different proxy** — Use `RESIDENTIAL_ISP` for static IPs sites trust more
+
+
+### Data & Output Issues
+
+
+**Symptom:** Extracted data contains information not on the page.
+
+**Check:** `screenshot_final` (data visible?) and `llm_response_parsed` (what did AI claim to see?)
+
+**Fix:**
+- Add instruction: "Only extract data **visibly displayed** on the page. Return null if not found"
+- Make schema fields optional where appropriate
+- Add validation: "If price is not displayed, set price to null instead of guessing"
+
+
+
+**Symptom:** Files download repeatedly, wrong files, or downloads don't complete.
+
+**Check:** `recording` (download button clicked correctly?) and `downloaded_files` in response
+
+**Fix:**
+- Be specific: "Click the **PDF** download button, not the Excel one"
+- Add wait time: "Wait for the download to complete"
+- For multi-step downloads: "Click Download, then select PDF format, then click Confirm"
+
+
+### Environment Issues
+
+
+**Symptom:** Site displays in unexpected language based on proxy location.
+
+**Check:** What `proxy_location` is being used? Does site geo-target?
+
+**Fix:**
+- Use appropriate proxy: `RESIDENTIAL_US` for English US sites
+- Add instruction: "If the site asks for language preference, select English"
+- Set language in URL: `https://example.com/en/`
+
+
+
+**Symptom:** Recording shows AI repeating the same actions without progressing.
+
+**Check:** `recording` (what pattern is repeating?) and conditional block exit conditions
+
+**Fix:**
+- Add termination: "STOP if you've attempted this action 3 times without success"
+- Add completion markers: "COMPLETE when you see 'Success' message"
+- Review loop conditions — ensure exit criteria are achievable
+
+
+---
+
+## When to adjust prompts vs parameters
+
+
+**Adjust the prompt when:**
+ - AI misunderstood what to do
+ - Clicked wrong element
+ - Ambiguous completion criteria
+ - Extracted wrong data
+
+**Adjust parameters when:**
+ - Timed out → increase `max_steps`
+ - Site blocked → change `proxy_location`
+ - Content didn't load → increase `max_screenshot_scrolls`
+
+**File a bug when:**
+ - Visible element not detected
+ - Standard UI patterns don't work
+ - Consistent wrong element despite clear prompts
+
+
+
+---
+
+## Next steps
+
+
+
+ Detailed reference for all artifact types
+
+
+ Write prompts that fail less often
+
+
diff --git a/docs/debugging/using-artifacts.mdx b/docs/debugging/using-artifacts.mdx
new file mode 100644
index 00000000..c68cefed
--- /dev/null
+++ b/docs/debugging/using-artifacts.mdx
@@ -0,0 +1,211 @@
+---
+title: Using Artifacts
+subtitle: Access recordings, screenshots, logs, and network data for every run
+slug: debugging/using-artifacts
+---
+
+When a Skyvern run fails or produces unexpected results, artifacts are your debugging toolkit. Every run automatically captures what happened—recordings of the browser session, screenshots at each step, the AI's reasoning, and network traffic.
+
+This page covers how to retrieve artifacts and what each type tells you.
+
+
+Looking for a specific artifact type? Jump to the [artifact types reference](#artifact-types-reference) to see all available types, then come back here to learn how to retrieve them.
+
+
+---
+
+## Before you start
+
+You need a `run_id` to retrieve artifacts. You get this when you [run a task](/running-automations/run-a-task) or workflow.
+
+---
+
+## Retrieve artifacts
+
+Use `get_run_artifacts` with the `run_id` from your task or workflow run:
+
+
+```python Python
+import os
+from skyvern import Skyvern
+
+client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))
+
+artifacts = await client.get_run_artifacts(run_id)
+
+for artifact in artifacts:
+ print(f"{artifact.artifact_type}: {artifact.signed_url}")
+```
+
+```typescript TypeScript
+import { SkyvernClient } from "@skyvern/client";
+
+const client = new SkyvernClient({
+ apiKey: process.env.SKYVERN_API_KEY,
+});
+
+const artifacts = await client.getRunArtifacts(runId);
+
+for (const artifact of artifacts) {
+ console.log(`${artifact.artifact_type}: ${artifact.signed_url}`);
+}
+```
+
+```bash cURL
+curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts" \
+ -H "x-api-key: $SKYVERN_API_KEY"
+```
+
+
+
+Signed URLs expire after 24 hours. If a URL has expired, call `get_run_artifacts` again to get fresh URLs. When running Skyvern locally, `signed_url` will be `null`—access artifacts directly from your local file system instead.
+
+
+---
+
+## Filter by artifact type
+
+To get only specific artifact types, pass `artifact_type`:
+
+
+```python Python
+# Get only recordings and final screenshots
+artifacts = await client.get_run_artifacts(
+ run_id,
+ artifact_type=["recording", "screenshot_final"]
+)
+```
+
+```typescript TypeScript
+const artifacts = await client.getRunArtifacts(runId, {
+ artifact_type: ["recording", "screenshot_final"],
+});
+```
+
+```bash cURL
+curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts?artifact_type=recording&artifact_type=screenshot_final" \
+ -H "x-api-key: $SKYVERN_API_KEY"
+```
+
+
+---
+
+## Artifact types reference
+
+### Visual artifacts
+
+These show you what the browser looked like at various points.
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `recording` | Full video of the browser session (WebM) | Watch the entire run, see exactly what happened |
+| `screenshot` | Base screenshot capture | General page state capture |
+| `screenshot_final` | Screenshot when the run ended | Check the final state—did it end on the right page? |
+| `screenshot_action` | Screenshot taken after each action | See the page state after each click, type, or navigation |
+| `screenshot_llm` | Screenshot sent to the LLM for decision-making | Understand what the AI "saw" when it made each decision |
+
+**Debugging tip:** If the run failed, start with `screenshot_final` to see where it stopped, then watch the `recording` to see how it got there.
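For example, picking the final screenshot out of the artifact list and downloading it. A sketch: artifact entries are shown as plain dicts here, using the `artifact_type` and `signed_url` field names from the response-fields table on this page:

```python
# Hedged sketch: find the final screenshot among a run's artifacts.
# Entries are plain dicts here; the SDK returns objects with the same fields.
from urllib.request import urlretrieve  # for the actual download

def find_artifact(artifacts, artifact_type):
    """Return the first artifact of the given type, or None."""
    for a in artifacts:
        if a["artifact_type"] == artifact_type:
            return a
    return None

# With real data you would then download it:
#   shot = find_artifact(artifacts, "screenshot_final")
#   if shot and shot["signed_url"]:
#       urlretrieve(shot["signed_url"], "final.png")
```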
+
+### AI reasoning artifacts
+
+These show you why the AI made each decision.
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `llm_prompt` | The prompt sent to the LLM | See exactly what instructions the AI received |
+| `llm_request` | Full request including context | Check if the right context was included |
+| `llm_response` | Raw LLM response | See the AI's unprocessed output |
+| `llm_response_parsed` | Parsed response (extracted actions) | Understand what actions the AI decided to take |
+| `llm_response_rendered` | Parsed response with real URLs | Debug URL targeting issues (see below) |
+
+**Debugging tip:** If the AI did something unexpected, check `llm_prompt` to see if your prompt was interpreted correctly, then `llm_response_parsed` to see what action it extracted.
+
+**About `llm_response_rendered`:** Skyvern hashes URLs before sending them to the LLM to reduce token usage and avoid confusing the model with long query strings. The `llm_response_parsed` artifact contains these hashed placeholders (like `{{_a1b2c3}}`), while `llm_response_rendered` shows the same response with actual URLs substituted back in. Use this when you need to verify which exact URL the AI targeted.
+
+### Page structure artifacts
+
+These show how Skyvern interpreted the page.
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `visible_elements_tree` | DOM tree of visible elements (JSON) | See which elements Skyvern detected |
+| `visible_elements_tree_trimmed` | Trimmed tree (sent to LLM) | See what the AI could "see" |
+| `visible_elements_tree_in_prompt` | Element tree as plain text | Read exactly what was embedded in the prompt |
+| `visible_elements_id_css_map` | CSS selectors for each element | Debug element targeting issues |
+| `visible_elements_id_frame_map` | Element-to-iframe mapping | Debug elements inside iframes |
+| `visible_elements_id_xpath_map` | XPath selectors for each element | Alternative element targeting debug |
+| `hashed_href_map` | Mapping of hashed URLs to original URLs | Decode URL placeholders in LLM responses |
+| `html` | Generic HTML capture | Inspect page HTML |
+| `html_scrape` | Full HTML captured during scraping | Inspect the raw page structure |
+| `html_action` | HTML captured after an action | See how the page changed after each action |
+
+**Debugging tip:** If the AI couldn't find an element, check `visible_elements_tree` to see if it was detected. If not, the element might be in an iframe, dynamically loaded, or outside the viewport.
+
+**About the element tree variants:** The `visible_elements_tree` is a JSON structure, while `visible_elements_tree_in_prompt` is the plain-text representation that gets embedded directly into the LLM prompt. The latter is often easier to read when debugging prompt issues. If you're debugging iframe-related problems, `visible_elements_id_frame_map` shows which frame contains each element ID.
+
+### Log artifacts
+
+These contain structured execution data.
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `skyvern_log` | Formatted execution logs | Read through the run step-by-step |
+| `skyvern_log_raw` | Raw JSON logs | Parse programmatically for automation |
+| `browser_console_log` | Browser console output | Debug JavaScript errors on the page |
+
+**Debugging tip:** `skyvern_log` is human-readable and shows the flow of execution. Start here for an overview before diving into specific artifacts.
+
+### Network artifacts
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `har` | HTTP Archive (network requests/responses) | Debug API calls, check for failed requests, see response data |
+| `trace` | Playwright trace file | Replay the session in Playwright Trace Viewer |
+
+**Debugging tip:** The HAR file captures every network request. If a form submission failed, check the HAR to see if the request was sent and what the server responded.
+
+### File artifacts
+
+| Type | What it contains | Use this to |
+|------|------------------|-------------|
+| `script_file` | Generated Playwright scripts | See the code Skyvern would use to repeat this task |
+
+---
+
+## Artifact response fields
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `artifact_id` | string | Unique identifier |
+| `artifact_type` | string | Type (see tables above) |
+| `uri` | string | Internal storage path |
+| `signed_url` | string \| null | Pre-signed URL for downloading |
+| `task_id` | string \| null | Associated task ID |
+| `step_id` | string \| null | Associated step ID |
+| `run_id` | string \| null | Run ID (e.g., `tsk_v2_...`, `wr_...`) |
+| `workflow_run_id` | string \| null | Workflow run ID |
+| `workflow_run_block_id` | string \| null | Workflow block ID |
+| `organization_id` | string | Your organization |
+| `created_at` | datetime | When created |
+| `modified_at` | datetime | When modified |
+
+---
+
+## Next steps
+
+
+
+ Common issues and how to fix them
+
+
+ Map errors to custom codes for programmatic handling
+
+
diff --git a/docs/docs.json b/docs/docs.json
index f1ebf65e..6eec2570 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -51,6 +51,14 @@
"going-to-production/reliability-tips",
"going-to-production/captcha-bot-detection"
]
+ },
+ {
+ "group": "Debugging",
+ "pages": [
+ "debugging/using-artifacts",
+ "debugging/troubleshooting-guide",
+ "debugging/faq"
+ ]
}
]
},
diff --git a/docs/images/debugging-checklist.png b/docs/images/debugging-checklist.png
new file mode 100644
index 00000000..1e4f5fca
Binary files /dev/null and b/docs/images/debugging-checklist.png differ