Context Engineering with Hyrex

Modern LLMs need rich, personalized context to be truly helpful. But gathering user data, preferences, and recent activity from multiple APIs can be slow and complex. What if you could prepare perfect context in parallel, every time?

With Hyrex, you can orchestrate complex context preparation workflows that fetch data from multiple sources simultaneously, process it with AI, and deliver ready-to-use context for your LLM applications. Turn slow, sequential API calls into fast, parallel context engineering.

Context Engineering Flow

Step 1: Define Context Gathering Tasks

Create Hyrex tasks for each piece of context you need: user profiles, recent activities, preferences, and more. These tasks can fetch data from different APIs and even use LLMs to summarize and structure the information.

src/hyrex/context_tasks.py
from hyrex import HyrexRegistry
from openai import OpenAI
import requests
from typing import List, Dict

hy = HyrexRegistry()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@hy.task
def fetch_user_profile(user_id: str) -> Dict:
    """Fetch user profile data"""
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()

@hy.task
def fetch_recent_activities(user_id: str, limit: int = 50) -> List[Dict]:
    """Fetch the user's recent activities"""
    response = requests.get(
        f"https://api.example.com/users/{user_id}/activities?limit={limit}"
    )
    response.raise_for_status()
    return response.json()

@hy.task
def fetch_user_preferences(user_id: str) -> Dict:
    """Fetch user preferences and settings"""
    response = requests.get(f"https://api.example.com/users/{user_id}/preferences")
    response.raise_for_status()
    return response.json()

@hy.task
def build_context_summary(user_id: str, profile: Dict, activities: List[Dict], preferences: Dict) -> str:
    """Use an LLM to create a context summary"""
    context_data = {
        "profile": profile,
        "recent_activities": activities[:10],  # Ten most recent activities
        "preferences": preferences
    }

    prompt = f"""
    Create a concise summary of this user's context for an AI assistant:

    Profile: {context_data['profile']}
    Recent Activities: {context_data['recent_activities']}
    Preferences: {context_data['preferences']}

    Focus on the most relevant information for personalized assistance.
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=500
    )

    return response.choices[0].message.content

@hy.task
def prepare_llm_context(user_id: str) -> Dict:
    """Orchestrate parallel context preparation"""
    # Launch all data-fetching tasks in parallel
    profile_task = fetch_user_profile.send(user_id)
    activities_task = fetch_recent_activities.send(user_id, 50)
    preferences_task = fetch_user_preferences.send(user_id)

    # Wait for all tasks to complete
    profile = profile_task.get()
    activities = activities_task.get()
    preferences = preferences_task.get()

    # Build the final context summary
    context_summary = build_context_summary.send(
        user_id, profile, activities, preferences
    ).get()

    return {
        "user_id": user_id,
        "context_summary": context_summary,
        "raw_data": {
            "profile": profile,
            "activities": activities,
            "preferences": preferences
        }
    }

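If you need the finished context inline, say in a script or an evaluation harness, you can block on the orchestrator task directly. A minimal sketch, assuming the same send()/get() task API used above and the module path from the listing:

# Minimal usage sketch; the import path assumes you run from the project root.
from src.hyrex.context_tasks import prepare_llm_context

def get_context_for_chat(user_id: str) -> str:
    # Enqueue the orchestrator; Hyrex workers run the fetches in parallel
    task = prepare_llm_context.send(user_id)
    # Block until the summary is ready (fine for scripts; APIs should poll instead)
    return task.get()["context_summary"]
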
Step 2: Build Context Preparation APIs

Create endpoints that trigger context preparation workflows. Your LLM applications can call these APIs to prepare personalized context in the background while handling user interactions, so data gathering never delays the user experience.

src/routes/context_api.py
from fastapi import FastAPI
from pydantic import BaseModel
from ..hyrex.context_tasks import hy, prepare_llm_context

app = FastAPI()

class ContextRequest(BaseModel):
    user_id: str
    include_raw_data: bool = False

class BulkContextRequest(BaseModel):
    user_ids: list[str]
    include_raw_data: bool = False

@app.post("/context/prepare")
async def prepare_user_context(request: ContextRequest):
    # Prepare context for AI/LLM consumption
    task = prepare_llm_context.send(request.user_id)

    return {
        "message": "Context preparation started",
        "task_id": task.task_id,
        "user_id": request.user_id
    }

@app.post("/context/bulk-prepare")
async def prepare_bulk_contexts(request: BulkContextRequest):
    task_ids = []
    for user_id in request.user_ids:
        task = prepare_llm_context.send(user_id)
        task_ids.append({
            "user_id": user_id,
            "task_id": task.task_id
        })

    return {
        "message": f"Started context preparation for {len(request.user_ids)} users",
        "tasks": task_ids
    }

@app.get("/context/status/{task_id}")
async def get_context_status(task_id: str):
    # Check whether context preparation is complete
    task = hy.get_task(task_id)

    if task.is_complete:
        return {
            "status": "complete",
            "context_data": task.result
        }
    else:
        return {
            "status": "processing",
            "progress": task.progress
        }

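From the caller's side, context preparation is a two-step dance: POST to start, then poll the status endpoint. A minimal client sketch, assuming the API above is served at a local development address:

# Client sketch using only the endpoints defined above; BASE_URL is an assumption.
import time
import requests

BASE_URL = "http://localhost:8000"  # assumed dev-server address

def wait_for_context(user_id: str, timeout: float = 30.0) -> dict:
    # Kick off context preparation
    resp = requests.post(f"{BASE_URL}/context/prepare", json={"user_id": user_id})
    resp.raise_for_status()
    task_id = resp.json()["task_id"]

    # Poll the status endpoint until the task reports completion
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE_URL}/context/status/{task_id}").json()
        if status["status"] == "complete":
            return status["context_data"]
        time.sleep(0.5)
    raise TimeoutError(f"Context for {user_id} not ready after {timeout}s")
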
Context is ready when you need it!

Now your LLM applications have access to rich, personalized context prepared in advance. Instead of making users wait while you gather data, context is ready the moment they need it, leading to faster, more intelligent responses.

Take it further by caching frequent context patterns, pre-warming context for active users, or using webhooks to update context as user data changes in real-time.
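
As one example, a hypothetical caching layer could sit in front of the workflow so repeat requests skip the API calls entirely. A minimal sketch, extending context_tasks.py; the Redis instance, key scheme, and TTL are illustrative assumptions, not part of Hyrex:

# Illustrative cache layer; Redis location, key scheme, and TTL are assumptions
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300  # rebuild context after five minutes

@hy.task
def get_or_prepare_context(user_id: str) -> dict:
    """Return cached context when fresh, otherwise rebuild it."""
    cached = cache.get(f"context:{user_id}")
    if cached:
        return json.loads(cached)

    # Cache miss: run the full parallel preparation workflow
    context = prepare_llm_context.send(user_id).get()
    cache.setex(f"context:{user_id}", CACHE_TTL_SECONDS, json.dumps(context))
    return context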