Dynamic Workflows

Compose tasks conditionally at runtime—no upfront workflow DSL required.

Hyrex infers task trees automatically when tasks send other tasks; branching, parallelism, retries, and observability come out of the box.

Inferred task trees

You don’t need to declare a workflow up front. When a task sends another task, Hyrex automatically records a parent → child relationship and builds a task tree as your code runs. Your normal control flow determines fan-out, conditional branches, and follow-on steps; there is no orchestration YAML or custom DSL.
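
For instance, the parent → child edge comes from an ordinary send inside a running task. Here is a minimal sketch following the same registry API as the crawl example below; the task names, argument models, and queue name are illustrative:

from hyrex import HyrexRegistry
from pydantic import BaseModel

hy = HyrexRegistry()

class ThumbnailArgs(BaseModel):
    path: str

class UploadArgs(BaseModel):
    path: str

@hy.task(queue="io")
def make_thumbnail(args: ThumbnailArgs):
    ...  # resize the image at args.path

@hy.task(queue="io")
def process_upload(args: UploadArgs):
    # This send happens inside a running task, so Hyrex records
    # process_upload -> make_thumbnail as a parent -> child edge.
    make_thumbnail.send(ThumbnailArgs(path=args.path))

# Sent from outside any task, process_upload becomes a tree root.
process_upload.send(UploadArgs(path="/uploads/photo.jpg"))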

Task trees explained

A task tree forms implicitly from your code: nodes are the tasks you send, and edges are created when one task sends another. Each node knows its parent (parent_id) and the tree root (root_id), letting you trace lineage, correlate failures, and aggregate results without declaring a workflow graph explicitly. Trees make it easy to do the following (the first two patterns are sketched after this list):

  • Fan-out by sending multiple child tasks inside a parent task.
  • Branch by deciding which tasks to send based on inputs or context.
  • Retry transient failures with backoff and limits per task.
  • Observe every node with IDs, attempts, queues, and timing.
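
Here is a minimal sketch of the fan-out and branch patterns, assuming the same @hy.task/send API as the crawl example below; the document-processing task names, queue name, and argument models are illustrative:

from hyrex import HyrexRegistry
from pydantic import BaseModel
from typing import List

hy = HyrexRegistry()

class DocArgs(BaseModel):
    doc_id: str
    chunks: List[str]
    needs_ocr: bool = False

class ChunkArgs(BaseModel):
    doc_id: str
    text: str

class OcrArgs(BaseModel):
    doc_id: str

@hy.task(queue="cpu")
def embed_chunk(args: ChunkArgs):
    ...  # compute and store an embedding for args.text

@hy.task(queue="cpu")
def ocr_document(args: OcrArgs):
    ...  # run OCR over the original file

@hy.task(queue="cpu")
def process_document(args: DocArgs):
    # Fan-out: each send() adds a child node under this task.
    for chunk in args.chunks:
        embed_chunk.send(ChunkArgs(doc_id=args.doc_id, text=chunk))
    # Branch: plain Python control flow decides whether this child exists.
    if args.needs_ocr:
        ocr_document.send(OcrArgs(doc_id=args.doc_id))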

Why it matters

  • Durability: tasks and inferred trees survive restarts and worker crashes.
  • Control: per-task config for queues, timeouts, priorities, and retries.
  • Context: every task can read its metadata (IDs, attempts, queues) for correlation and state; see the sketch after this list.
  • Reuse: copy or reuse tasks across stages without re-implementing logic.
  • Observability: monitor attempts, latency, and status across the tree.
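
A minimal sketch of reading the task context and overriding config at send time, using only the context fields (parent_id, root_id) and the with_config(queue=...) call that appear in the crawl example below; the queue names and logging setup are illustrative:

import logging

from hyrex import HyrexRegistry, get_hyrex_context
from pydantic import BaseModel

log = logging.getLogger(__name__)
hy = HyrexRegistry()

class StepArgs(BaseModel):
    record_id: str

@hy.task(queue="default")
def audited_step(args: StepArgs):
    # parent_id and root_id tie this node back to its tree, so log lines
    # and results can be correlated with the run that spawned them.
    ctx = get_hyrex_context()
    if ctx:
        log.info("record=%s parent=%s root=%s",
                 args.record_id, ctx.parent_id, ctx.root_id)
    return {"record": args.record_id, "rootId": ctx.root_id if ctx else None}

# Per-send overrides use the same with_config(...) seen in the crawl example.
audited_step.with_config(queue="priority").send(StepArgs(record_id="r-123"))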

Great fits

  • Web crawling and site indexing: spawn child crawlers for discovered links (depth-limited trees).
  • Document processing & indexing: fan-out parsing/embeddings per page or chunk, then consolidate.
  • LLM pipelines with tools: branch on model output, call tools, then post-process results.
  • API aggregation & enrichment: fan-out requests to providers, dedupe and reconcile responses.
  • Identity/compliance checks: branch on risk signals; insert human review only when needed.

Example: Web crawling with task trees

Start from a seed URL and send child crawl tasks for each discovered link. The tree is inferred automatically from parent → child task sends; you can cap depth, choose queues, and add dedup with a KV store.

src/hyrex/crawl_page.py
from hyrex import HyrexRegistry, HyrexKV, get_hyrex_context
from pydantic import BaseModel
from typing import List

hy = HyrexRegistry()

class CrawlArgs(BaseModel):
    url: str
    depth: int
    max_depth: int = 2

@hy.task(queue="io")
def crawl_page(args: CrawlArgs):
    ctx = get_hyrex_context()
    # Scope the visited set to this tree via root_id so separate crawls don't collide
    visited_key = f"visited-{ctx.root_id}-{args.url}" if ctx else f"visited-unknown-{args.url}"
    if HyrexKV.get(visited_key):
        return {"url": args.url, "skipped": True}
    HyrexKV.set(visited_key, "1")

    html = fetch_url(args.url)                        # placeholder; see note below
    links: List[str] = extract_links(html, args.url)  # placeholder; see note below

    if args.depth < args.max_depth:
        for link in links:
            # Sending a task within a task creates a child node in the tree
            crawl_page.with_config(queue="io").send(
                CrawlArgs(url=link, depth=args.depth + 1, max_depth=args.max_depth)
            )

    return {
        "url": args.url,
        "depth": args.depth,
        "links": len(links),
        "parentId": ctx.parent_id if ctx else None,
        "rootId": ctx.root_id if ctx else None,
    }

# Kick off the crawl (root task)
crawl_page.send(CrawlArgs(url="https://example.com", depth=0, max_depth=2))

Note: add your own fetch_url/extract_links implementations along with robots.txt handling. Dedup via HyrexKV is optional.
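
For prototyping, here is one stdlib-only sketch of those two helpers; it does no robots.txt checking, rate limiting, or retries, so treat it as a stand-in rather than production code:

from typing import List
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

def fetch_url(url: str) -> str:
    # Naive fetch: no retries, no robots.txt, fixed 10s timeout.
    with urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

class _LinkParser(HTMLParser):
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: List[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html: str, base_url: str) -> List[str]:
    parser = _LinkParser(base_url)
    parser.feed(html)
    # Keep http(s) links only, deduplicated in order of appearance.
    seen: set = set()
    out: List[str] = []
    for link in parser.links:
        if link.startswith(("http://", "https://")) and link not in seen:
            seen.add(link)
            out.append(link)
    return out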
