# Sizing Estimator
Use this calculator to estimate the compute resources (vCPU and memory) required for your Lakerunner deployment based on your expected telemetry throughput.
## Telemetry Throughput
Enter your expected telemetry volume. Select a time unit that is convenient for you — all values are normalized internally.
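A minimal sketch of what "normalized internally" could mean: every entered rate is converted to a common per-second basis before sizing. The unit names and conversion factors below are illustrative assumptions, not Lakerunner's actual implementation.

```python
# Illustrative normalization of a telemetry rate to a per-second basis.
# Unit table and function name are hypothetical.
SECONDS_PER_UNIT = {
    "second": 1,
    "minute": 60,
    "hour": 3_600,
    "day": 86_400,
}

def normalize_to_per_second(volume: float, unit: str) -> float:
    """Convert a volume expressed per <unit> into a per-second rate."""
    return volume / SECONDS_PER_UNIT[unit]

# e.g. 864 GB/day and 0.01 GB/s describe the same throughput
rate = normalize_to_per_second(864, "day")
```

Because all inputs collapse to the same per-second rate, the choice of time unit never changes the sizing result.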
## Query Capacity
Query capacity depends on your query patterns, concurrency, and data volume. We recommend a minimum of two `query-api` and two `query-worker` pods.
## Estimated Resources
| Category | Pods | Total vCPU | Total Memory |
|---|---|---|---|
| Core Components | 4 | 0.95 vCPU | 0.83 Gi |
| Logs | 2 | 2.00 vCPU | 8.00 Gi |
| Metrics | 4 | 7.00 vCPU | 22.00 Gi |
| Query | 4 | 6.00 vCPU | 16.00 Gi |
| Grand Total | 14 | 15.95 vCPU | 46.83 Gi |
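The grand-total row is just the column-wise sum of the category rows, which can be sanity-checked directly (the figures below are copied from the table above):

```python
# Per-category rows from the table: (pods, total vCPU, total memory in Gi)
rows = {
    "Core Components": (4, 0.95, 0.83),
    "Logs":            (2, 2.00, 8.00),
    "Metrics":         (4, 7.00, 22.00),
    "Query":           (4, 6.00, 16.00),
}

pods = sum(r[0] for r in rows.values())   # 14 pods
vcpu = sum(r[1] for r in rows.values())   # 15.95 vCPU
mem  = sum(r[2] for r in rows.values())   # 46.83 Gi
```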
## Recommendations
### Auto-Scaling
Log, metric, trace, and query components auto-scale based on demand using KEDA. The estimates above represent a maximum footprint at sustained peak load. Actual resource usage will be lower during normal operation.
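For reference, KEDA scaling of a worker Deployment is configured with a `ScaledObject`. The sketch below shows the general shape only; the target name, trigger type, query, and thresholds are illustrative assumptions, not Lakerunner's shipped configuration.

```yaml
# Hypothetical KEDA ScaledObject; names and trigger values are illustrative.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: metrics-worker-scaler
spec:
  scaleTargetRef:
    name: metrics-worker        # hypothetical Deployment name
  minReplicaCount: 1
  maxReplicaCount: 4            # caps at the sized maximum footprint
  triggers:
    - type: postgresql          # illustrative work-queue-depth trigger
      metadata:
        query: "SELECT count(*) FROM work_queue"   # hypothetical queue table
        targetQueryValue: "100"
```

With `maxReplicaCount` set to the pod counts sized above, the table's totals represent a ceiling rather than steady-state usage.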
### Cluster Auto-Scaling
We strongly recommend enabling Kubernetes cluster auto-scaling to automatically add and remove nodes as workload pods scale up and down. This ensures you only pay for the compute capacity you actually need.
### Spot / Preemptible Instances
Lakerunner workloads are well-suited for Spot Instances (AWS) or Preemptible VMs (GCP). These can reduce compute costs by 60-90%. All Lakerunner components are designed to handle interruptions gracefully through work queue-based processing.
Reach out to support@cardinalhq.io for help or with any questions not answered in our documentation.