Add conditional block support for script caching (v2 - with bug fix) (#4642)
@@ -1,10 +1,30 @@
-# Script Generation Context
+# Script Generation & Caching

## Overview

Script generation converts workflow runs into executable Python code that can be cached and reused. This enables "run with code" mode, where workflows execute via cached scripts instead of the AI agent.

## Key Files

| File | Purpose |
|------|---------|
| `generate_script.py` | Generates Python code from workflow run data |
| `transform_workflow_run.py` | Transforms a DB workflow run into code-gen input |
| `skyvern/services/workflow_script_service.py` | Caching logic and script storage |
| `skyvern/forge/sdk/workflow/service.py` | Regeneration decision logic (`generate_script_if_needed`) |

## Key Constants

- `SCRIPT_TASK_BLOCKS` - Block types that have a task_id and actions (task, navigation, extraction, etc.)
- `BLOCK_TYPES_THAT_SHOULD_BE_CACHED` in `workflow/service.py` - Block types eligible for caching (includes for_loop)

## How Caching Works

1. **Block execution tracking** (service.py:1309-1316): When a block executes via the agent and completes, it is added to `blocks_to_update`
2. **Regeneration decision** (`generate_script_if_needed`): Decides whether to regenerate based on `blocks_to_update` and `missing_labels`
3. **Script generation** (`generate_workflow_script`): Generates code only for blocks that executed this run
4. **Progressive caching**: Only executed blocks are cached; unexecuted blocks remain uncached until they run
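The regeneration decision in step 2 reduces to a simple predicate over the two sets above. A minimal sketch with a hypothetical signature (the real `generate_script_if_needed` in `skyvern/forge/sdk/workflow/service.py` takes richer inputs):

```python
def should_regenerate_script(blocks_to_update: set[str], missing_labels: set[str]) -> bool:
    """Regenerate when any executed block changed, or when the cached
    script is missing a block that appears in the workflow definition."""
    return bool(blocks_to_update or missing_labels)

# Cache is complete and nothing executed anew -> keep the cached script.
print(should_regenerate_script(set(), set()))           # False
# A block executed that is not yet cached -> regenerate.
print(should_regenerate_script({"login_step"}, set()))  # True
```

Either set being non-empty is enough to trigger regeneration; the expensive part is not the decision but the DELETE + CREATE + UPLOAD + INSERT that each regeneration performs (see "Things to Watch Out For").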

## Script Block Requirements for `run_with: code`

For a workflow to execute with cached scripts (`run_with: code`), ALL top-level blocks must have:

@@ -13,6 +33,53 @@ For a workflow to execute with cached scripts (`run_with: code`), ALL top-level

Without these, the system falls back to `run_with: agent`.

## Critical: Two Mechanisms for Detecting New Blocks

| Mechanism | Location | What it catches |
|-----------|----------|-----------------|
| Execution tracking | service.py:1316 | Blocks that EXECUTED and aren't cached |
| `missing_labels` check | service.py:3436-3441 | Blocks in the DEFINITION that aren't cached |

For workflows WITHOUT conditionals, these are equivalent.
For workflows WITH conditionals, they differ - see "Conditional Blocks" below.
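The second mechanism can be sketched as plain set arithmetic. This is a hypothetical helper, not the actual code at service.py:3436-3441, but it shows why the two mechanisms diverge with conditionals: a branch that never executed still shows up in the definition.

```python
def compute_missing_labels(
    definition_labels: set[str],
    cached_labels: set[str],
    uncacheable_labels: set[str],
) -> set[str]:
    """Labels in the workflow DEFINITION with no cached script yet,
    ignoring block types that are never cached (e.g. conditionals)."""
    return definition_labels - cached_labels - uncacheable_labels

missing = compute_missing_labels(
    definition_labels={"login", "extract", "branch_check"},
    cached_labels={"login"},
    uncacheable_labels={"branch_check"},  # e.g. a conditional block
)
print(sorted(missing))  # ['extract']
```

Execution tracking would only report `extract` if it actually ran this run; the definition check reports it regardless.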

## Conditional Blocks

Conditional blocks (`BlockType.CONDITIONAL`) are **NOT cached** - they always run via the agent to evaluate conditions at runtime. However, cacheable blocks inside conditional branches ARE cached when they execute.

### Key Insight: Progressive Branch Caching

With conditionals, not all branches execute in a single run. The caching system handles this via "progressive caching":

- Run 1 takes branch A → caches blocks from A
- Run 2 takes branch B → caches blocks from B (preserving A's cache)
- Eventually every executed branch has cached blocks

This means the workflow DEFINITION has all blocks, but a workflow RUN only executes some of them.
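The run-over-run behavior above amounts to merging each run's executed blocks into the existing cache without touching the rest. A toy sketch (dict of label → script standing in for the real storage):

```python
def merge_into_cache(cache: dict[str, str], executed: dict[str, str]) -> dict[str, str]:
    """Progressive caching: add/refresh scripts only for blocks that executed
    this run, preserving blocks cached by earlier runs."""
    merged = dict(cache)
    merged.update(executed)
    return merged

cache: dict[str, str] = {}
cache = merge_into_cache(cache, {"branch_a_step": "script_a"})  # run 1 takes branch A
cache = merge_into_cache(cache, {"branch_b_step": "script_b"})  # run 2 takes branch B
print(sorted(cache))  # ['branch_a_step', 'branch_b_step']
```

The key property: run 2 never deletes branch A's entry, so coverage only grows as more branches execute.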

## Performance Optimizations

### Batch Task and Action Queries
**Location**: `transform_workflow_run.py`

Previously, the code followed the classic N+1 query pattern: one `get_task()` and one `get_task_actions_hydrated()` call per task block. For a workflow with 20 blocks, that meant 40 DB queries.

Now all queries are batched upfront:
1. Collect all task_ids from workflow_run_blocks
2. Single `get_tasks_by_ids()` call for all tasks
3. Single `get_tasks_actions()` call for all actions
4. Process blocks using the pre-fetched data from dictionaries

**Impact**: Reduces 2N queries to 2 queries.
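Step 4's lookup structure can be sketched as follows. Plain dicts stand in for the real ORM models here; the actual implementation builds `tasks_by_id` and `actions_by_task_id` the same way after its two batch queries.

```python
from collections import defaultdict

def index_tasks_and_actions(
    tasks: list[dict],
    actions: list[dict],
) -> tuple[dict, dict]:
    """Build the lookup dicts used after the two batch queries, so block
    processing never touches the database again."""
    tasks_by_id = {t["task_id"]: t for t in tasks}
    actions_by_task_id: dict[str, list[dict]] = defaultdict(list)
    for action in actions:
        if action.get("task_id"):
            actions_by_task_id[action["task_id"]].append(action)
    return tasks_by_id, actions_by_task_id

tasks = [{"task_id": "t1"}, {"task_id": "t2"}]
actions = [
    {"task_id": "t1", "action_type": "click"},
    {"task_id": "t1", "action_type": "input_text"},
]
tasks_by_id, actions_by_task_id = index_tasks_and_actions(tasks, actions)
print(len(actions_by_task_id["t1"]), len(actions_by_task_id["t2"]))  # 2 0
```

Using a `defaultdict` means a task with no recorded actions yields an empty list rather than a `KeyError`, which matches the `.get(task_id, [])` fallbacks in the transform code.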

### Block-Level Script Generation
**Location**: `service.py:_generate_pending_script_for_block()`

Previously, `generate_or_update_pending_workflow_script()` was called after each action (CLICK, INPUT_TEXT, etc.), generating "pending" script drafts ~10-50x per workflow run.

Now script generation happens at block completion via `_generate_pending_script_for_block()`, called from both `_execute_workflow_blocks()` and `_execute_workflow_blocks_dag()`.

**Impact**: Reduces script-generation frequency by 10-50x while maintaining progressive updates.
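The 10-50x figure follows directly from moving the trigger from per-action to per-block. A tiny model of the call counts (hypothetical helper, purely illustrative):

```python
def generation_calls(actions_per_block: list[int], per_action: bool) -> int:
    """Count script-generation invocations for one workflow run.
    actions_per_block[i] = number of actions executed by block i."""
    if per_action:
        return sum(actions_per_block)  # old behavior: one call per action
    return len(actions_per_block)      # new behavior: one call per completed block

blocks = [12, 8, 20]  # three blocks with varying action counts
print(generation_calls(blocks, per_action=True))   # 40
print(generation_calls(blocks, per_action=False))  # 3
```

The ratio equals the average actions per block, hence roughly 10-50x for typical workflows.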

## Adding New Cacheable Block Types

When adding a new block type that should support cached execution:

@@ -30,3 +97,37 @@ When adding a new block type that should support cached execution:
2. `task_v2_blocks` - task_v2 blocks with child blocks
3. `for_loop_blocks` - ForLoop container blocks
4. `__start_block__` - Workflow entry point

## Things to Watch Out For

1. **Definition vs. execution**: The workflow DEFINITION has all blocks; a workflow RUN only executes some of them (especially with conditionals)

2. **`blocks_to_update` sources**: This set is populated from multiple places - block execution (line 1316), finalize logic, and explicit requests. Understand all sources before modifying it.

3. **Database operations per regeneration**: Each regeneration does DELETE + CREATE + UPLOAD + INSERT. Unnecessary regenerations can flood the database.

4. **`BLOCK_TYPES_THAT_SHOULD_BE_CACHED`**: Not all block types are cached; conditional, wait, and code blocks, among others, are excluded.

5. **Batch-query data mapping**: When using the `tasks_by_id` and `actions_by_task_id` dicts, ensure task_ids are consistent between run_blocks and the queried data.

## Testing Caching Changes

When modifying regeneration or caching logic, test these scenarios:

1. **Same blocks run twice** - Should NOT regenerate on the 2nd run
2. **New block added** - Should regenerate to include the new block
3. **Workflow with conditionals** - Different branches should cache progressively
4. **Block type not in `BLOCK_TYPES_THAT_SHOULD_BE_CACHED`** - Should NOT trigger caching

## Test Commands

```bash
# Run script-related tests
python -m pytest tests/unit/ -k "script" --ignore=tests/unit/test_security.py -v

# Run conditional caching tests specifically
python -m pytest tests/unit/test_conditional_script_caching.py -v

# Run forloop script tests
python -m pytest tests/unit/test_forloop_script_generation.py -v
```

@@ -2028,9 +2028,30 @@ def _build_block_statement(
         stmt = _build_http_request_statement(block)
     elif block_type == "pdf_parser":
         stmt = _build_pdf_parser_statement(block)
+    elif block_type == "conditional":
+        # Conditional blocks are evaluated at runtime by the workflow engine.
+        # Generate a descriptive comment showing this is a runtime branch point.
+        # The blocks inside conditional branches are processed separately when executed.
+        branches = block.get("branches") or block.get("ordered_branches") or []
+        branch_info_lines = []
+        for i, branch in enumerate(branches):
+            next_label = branch.get("next_block_label", "?")
+            condition = branch.get("condition", "")
+            # Truncate long conditions for readability
+            if len(condition) > 50:
+                condition = condition[:47] + "..."
+            branch_info_lines.append(f"# Branch {i + 1}: {condition!r} → {next_label}")
+
+        if branch_info_lines:
+            branch_info = "\n".join(branch_info_lines)
+            comment_text = f"# === CONDITIONAL: {block_title} ===\n# Evaluated at runtime by workflow engine. One branch executes:\n{branch_info}"
+        else:
+            comment_text = f"# === CONDITIONAL: {block_title} ===\n# Evaluated at runtime by workflow engine."
+
+        stmt = cst.SimpleStatementLine([cst.Expr(cst.SimpleString(repr(comment_text)))])
     else:
-        # Default case for unknown block types
-        stmt = cst.SimpleStatementLine([cst.Expr(cst.SimpleString(f"# Unknown block type: {block_type}"))])
+        # Default case for unknown block types - use quoted string literal to avoid libcst validation error
+        stmt = cst.SimpleStatementLine([cst.Expr(cst.SimpleString(f"'# Unknown block type: {block_type}'"))])
 
     return stmt

@@ -1,3 +1,4 @@
+from collections import defaultdict
 from dataclasses import dataclass
 from typing import Any
 
@@ -8,6 +9,7 @@ from skyvern.forge import app
 from skyvern.schemas.workflows import BlockType
 from skyvern.services import workflow_service
 from skyvern.webeye.actions.action_types import ActionType
+from skyvern.webeye.actions.actions import Action
 
 LOG = structlog.get_logger(__name__)
 
@@ -22,6 +24,29 @@ class CodeGenInput:
     task_v2_child_blocks: dict[str, list[dict[str, Any]]]  # task_v2_label -> list of child blocks
 
 
+def _process_action_for_block(
+    action: Action,
+    block_dump: dict[str, Any],
+) -> dict[str, Any]:
+    """Process a single action and add block-specific context like the data extraction goal."""
+    action_dump = action.model_dump()
+    action_dump["xpath"] = action.get_xpath()
+    action_dump["has_mini_agent"] = action.has_mini_agent
+    if (
+        "data_extraction_goal" in block_dump
+        and block_dump["data_extraction_goal"]
+        and action.action_type == ActionType.EXTRACT
+    ):
+        action_dump["data_extraction_goal"] = block_dump["data_extraction_goal"]
+    if (
+        "extracted_information_schema" in block_dump
+        and block_dump["extracted_information_schema"]
+        and action.action_type == ActionType.EXTRACT
+    ):
+        action_dump["data_extraction_schema"] = block_dump["extracted_information_schema"]
+    return action_dump
+
+
 async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organization_id: str) -> CodeGenInput:
     # get the workflow run request
     workflow_run_resp = await workflow_service.get_workflow_run_response(
@@ -54,8 +79,41 @@ async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organiz
     # Create mapping from definition blocks by label for quick lookup
     workflow_run_blocks_by_label = {block.label: block for block in workflow_run_blocks if block.label}
 
+    # Batch fetch all tasks and actions upfront to avoid N+1 queries
+    # First pass: collect all task_ids from workflow run blocks
+    all_task_ids: set[str] = set()
+    for rb in workflow_run_blocks:
+        if rb.block_type in SCRIPT_TASK_BLOCKS and rb.task_id:
+            all_task_ids.add(rb.task_id)
+
+    # Batch fetch all tasks and actions in 2 queries instead of N+1
+    tasks_by_id: dict[str, Any] = {}
+    actions_by_task_id: dict[str, list[Action]] = defaultdict(list)
+
+    if all_task_ids:
+        task_ids_list = list(all_task_ids)
+        # Single query for all tasks
+        tasks = await app.DATABASE.get_tasks_by_ids(task_ids=task_ids_list, organization_id=organization_id)
+        tasks_by_id = {task.task_id: task for task in tasks}
+        LOG.debug(
+            "Batch fetched tasks for code gen",
+            workflow_run_id=workflow_run_id,
+            task_count=len(tasks),
+        )
+
+        # Single query for all actions
+        all_actions = await app.DATABASE.get_tasks_actions(task_ids=task_ids_list, organization_id=organization_id)
+        for action in all_actions:
+            if action.task_id:
+                actions_by_task_id[action.task_id].append(action)
+        LOG.debug(
+            "Batch fetched actions for code gen",
+            workflow_run_id=workflow_run_id,
+            action_count=len(all_actions),
+        )
+
     workflow_block_dump = []
-    actions_by_task = {}
+    actions_by_task: dict[str, list[dict[str, Any]]] = {}
     task_v2_child_blocks = {}
 
     # Loop through workflow run blocks and match to original definition blocks by label
@@ -75,7 +133,8 @@ async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organiz
 
         # For task blocks, add execution data while preserving templated information
         if run_block.block_type in SCRIPT_TASK_BLOCKS and run_block.task_id:
-            task = await app.DATABASE.get_task(task_id=run_block.task_id, organization_id=organization_id)
+            # Use pre-fetched task data (batch fetched)
+            task = tasks_by_id.get(run_block.task_id)
             if task:
                 # Add task execution data but preserve original templated fields
                 task_dump = task.model_dump()
@@ -94,29 +153,9 @@ async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organiz
                     }
                 )
 
-                # Get task actions
-                actions = await app.DATABASE.get_task_actions_hydrated(
-                    task_id=run_block.task_id, organization_id=organization_id
-                )
-                action_dumps = []
-                for action in actions:
-                    action_dump = action.model_dump()
-                    action_dump["xpath"] = action.get_xpath()
-                    action_dump["has_mini_agent"] = action.has_mini_agent
-                    if (
-                        "data_extraction_goal" in final_dump
-                        and final_dump["data_extraction_goal"]
-                        and action.action_type == ActionType.EXTRACT
-                    ):
-                        # use the right data extraction goal for the extract action
-                        action_dump["data_extraction_goal"] = final_dump["data_extraction_goal"]
-                    if (
-                        "extracted_information_schema" in final_dump
-                        and final_dump["extracted_information_schema"]
-                        and action.action_type == ActionType.EXTRACT
-                    ):
-                        action_dump["data_extraction_schema"] = final_dump["extracted_information_schema"]
-                    action_dumps.append(action_dump)
+                # Use pre-fetched actions (batch fetched)
+                actions = actions_by_task_id.get(run_block.task_id, [])
+                action_dumps = [_process_action_for_block(action, final_dump) for action in actions]
                 actions_by_task[run_block.task_id] = action_dumps
             else:
                 LOG.warning("Task not found", task_id=run_block.task_id)
@@ -197,8 +236,8 @@ async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organiz
             child_run_block = child_run_blocks_by_label.get(loop_block_label) if loop_block_label else None
 
             if child_run_block and child_run_block.block_type in SCRIPT_TASK_BLOCKS and child_run_block.task_id:
-                # Get task data for this child block
-                task = await app.DATABASE.get_task(task_id=child_run_block.task_id, organization_id=organization_id)
+                # Use pre-fetched task data (batch fetched)
+                task = tasks_by_id.get(child_run_block.task_id)
                 if task:
                     task_dump = task.model_dump()
                     loop_block_dump.update({k: v for k, v in task_dump.items() if k not in loop_block_dump})
@@ -210,28 +249,9 @@ async def transform_workflow_run_to_code_gen_input(workflow_run_id: str, organiz
                         }
                     )
 
-                    # Get task actions for the child block
-                    actions = await app.DATABASE.get_task_actions_hydrated(
-                        task_id=child_run_block.task_id, organization_id=organization_id
-                    )
-                    action_dumps = []
-                    for action in actions:
-                        action_dump = action.model_dump()
-                        action_dump["xpath"] = action.get_xpath()
-                        action_dump["has_mini_agent"] = action.has_mini_agent
-                        if (
-                            "data_extraction_goal" in loop_block_dump
-                            and loop_block_dump["data_extraction_goal"]
-                            and action.action_type == ActionType.EXTRACT
-                        ):
-                            action_dump["data_extraction_goal"] = loop_block_dump["data_extraction_goal"]
-                        if (
-                            "extracted_information_schema" in loop_block_dump
-                            and loop_block_dump["extracted_information_schema"]
-                            and action.action_type == ActionType.EXTRACT
-                        ):
-                            action_dump["data_extraction_schema"] = loop_block_dump["extracted_information_schema"]
-                        action_dumps.append(action_dump)
+                    # Use pre-fetched actions (batch fetched)
+                    actions = actions_by_task_id.get(child_run_block.task_id, [])
+                    action_dumps = [_process_action_for_block(action, loop_block_dump) for action in actions]
                     actions_by_task[child_run_block.task_id] = action_dumps
                 else:
                     LOG.warning(