
# Production Guide

> Deploy and scale Hyrex in production environments

Everything you need to run Hyrex reliably at scale.

<Tabs>
  <Tab title="Hyrex Cloud">
    ## Production on Hyrex Cloud

    Hyrex Cloud handles infrastructure, scaling, and monitoring automatically.

    ### Environment Setup

    1. **Set Your API Key**

    ```bash
    # Production API key
    export HYREX_API_KEY="prod_hx_..."
    ```
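
    A worker should fail fast when the key is missing. A minimal startup check (the validation logic is illustrative, not part of the Hyrex SDK; the `prod_hx_` prefix follows the example key above):

    ```python
    import os
    import sys

    def require_api_key() -> str:
        """Fail fast at startup if the production key is missing or malformed."""
        key = os.environ.get("HYREX_API_KEY", "")
        if not key:
            sys.exit("HYREX_API_KEY is not set; refusing to start worker")
        if not key.startswith("prod_hx_"):
            # Guard against accidentally pointing a prod deploy at a dev key.
            sys.exit("HYREX_API_KEY does not look like a production key")
        return key
    ```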

    2. **Configure App for Workers**

    ```python
    # hyrex_app.py
    from hyrex import HyrexApp

    app = HyrexApp("production-app")
    ```

    ### Deployment

    Deploy workers using Docker:

    ```dockerfile
    FROM python:3.11-slim

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    COPY . .

    CMD ["hyrex", "run-worker", "hyrex_app:app"]
    ```

    Run with your API key:

    ```bash
    docker run -e HYREX_API_KEY=$HYREX_API_KEY myapp:latest
    ```
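
    For hosts running several workers, Docker Compose can keep them restarting on failure; a minimal sketch (the service name and image tag are illustrative):

    ```yaml
    services:
      worker:
        image: myapp:latest
        command: ["hyrex", "run-worker", "hyrex_app:app"]
        environment:
          - HYREX_API_KEY=${HYREX_API_KEY}
        restart: unless-stopped
    ```

    Scale horizontally with `docker compose up -d --scale worker=3`.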

    ### Monitoring

    Access production metrics at [hyrex.io/cloud](https://hyrex.io/cloud):

    * Task throughput and latency
    * Worker health and utilization
    * Error rates and alerts
    * Queue depths and processing times

    ### Best Practices

    1. **Use separate API keys** for dev/staging/prod
    2. **Set num\_processes** per worker to match the task type: fewer for CPU-bound work, more for I/O-bound work
    3. **Configure alerts** in Hyrex Cloud dashboard
    4. **Use queues** to separate workload types
  </Tab>

  <Tab title="FOSS">
    ## Production with FOSS

    Deploy Hyrex on your infrastructure with PostgreSQL.

    ### Monitoring Setup

    **Hyrex Studio**

    ```bash
    hyrex studio
    ```

    Access at [https://local.hyrex.studio](https://local.hyrex.studio) to monitor:

    * Task queue status
    * Worker health
    * Task execution history
    * Error logs

    ### High Availability

    1. **Database HA**

    * Use PostgreSQL streaming replication
    * Configure automatic failover with Patroni
    * Regular backups with pg\_dump or WAL-G
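
    Backups only help on a schedule. A sketch of a nightly crontab entry using `pg_dump` (the database name and backup path are placeholders):

    ```bash
    # /etc/cron.d/hyrex-backup: nightly logical backup at 02:00
    # (% must be escaped as \% inside a crontab command field)
    0 2 * * * postgres pg_dump -Fc hyrex > /var/backups/hyrex-$(date +\%F).dump
    ```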

    2. **Worker HA**

    * Deploy workers across multiple nodes
    * Run critical tasks on dedicated queues so batch work cannot starve them
    * Configure health checks and auto-restart
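
    The auto-restart above can be delegated to the init system; a sketch of a systemd unit (the unit name, user, and paths are illustrative):

    ```ini
    # /etc/systemd/system/hyrex-worker.service
    [Unit]
    Description=Hyrex worker
    After=network-online.target

    [Service]
    User=hyrex
    WorkingDirectory=/opt/myapp
    ExecStart=/opt/myapp/.venv/bin/hyrex run-worker hyrex_app:app
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    ```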
  </Tab>
</Tabs>

## Production Checklist

### Pre-Deployment

* [ ] Load test your tasks to determine resource needs
* [ ] Configure appropriate task timeouts
* [ ] Set up error tracking (Sentry, etc.)
* [ ] Plan queue structure for workload separation
* [ ] Document task dependencies

### Deployment

* [ ] Use environment variables for configuration
* [ ] Set up secrets management
* [ ] Configure resource limits
* [ ] Enable health checks
* [ ] Set up log aggregation

### Post-Deployment

* [ ] Configure monitoring dashboards
* [ ] Set up alerts for key metrics
* [ ] Create runbooks for common issues
* [ ] Schedule regular performance reviews
* [ ] Plan for capacity scaling
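
For the capacity-scaling item above, a useful starting point is Little's law: busy processes ≈ task arrival rate × average task duration. A sizing sketch (the headroom factor and all numbers are illustrative):

```python
import math

def workers_needed(tasks_per_second: float,
                   avg_task_seconds: float,
                   processes_per_worker: int,
                   headroom: float = 0.3) -> int:
    """Estimate worker count from Little's law plus a safety margin."""
    busy_processes = tasks_per_second * avg_task_seconds
    required = busy_processes * (1 + headroom)
    return math.ceil(required / processes_per_worker)

# 50 tasks/s averaging 0.4 s each on workers with 8 processes:
# 50 * 0.4 = 20 busy processes; +30% headroom = 26; ceil(26 / 8) = 4
print(workers_needed(50, 0.4, 8))  # → 4
```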

## Scaling Strategies

### Queue Design

```python
# Separate queues by priority and resource needs
@hy.task(queue="critical", max_retries=5)
def process_payment(ctx): ...

@hy.task(queue="batch", timeout_seconds=3600)
def generate_report(ctx): ...

@hy.task(queue="io-heavy", max_concurrency=50)
def fetch_external_data(ctx): ...
```

### Worker Scaling

```bash
# Run workers for specific queue patterns
# Process all queues
hyrex run-worker app:app

# Process specific queue pattern
hyrex run-worker app:app --queue_pattern "critical-*"

# Process multiple queue patterns
hyrex run-worker app:app --queue_pattern "email-*" --num_processes 5
hyrex run-worker app:app --queue_pattern "batch-*" --num_processes 20
```
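
If `--queue_pattern` accepts shell-style globs (an assumption here; confirm against your Hyrex version), the queue selection above behaves like Python's `fnmatch`:

```python
from fnmatch import fnmatch

def matching_queues(pattern: str, queues: list[str]) -> list[str]:
    """Queues a worker started with --queue_pattern would poll."""
    return [q for q in queues if fnmatch(q, pattern)]

queues = ["critical-payments", "email-digest", "email-alerts", "batch-reports"]
print(matching_queues("email-*", queues))     # → ['email-digest', 'email-alerts']
print(matching_queues("critical-*", queues))  # → ['critical-payments']
```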

## Next Steps

* Join our [Discord](https://discord.gg/hyrex) for production support
* Check [GitHub](https://github.com/hyrex-labs/hyrex) for updates
* Contact [support@hyrex.io](mailto:support@hyrex.io) for enterprise needs
