Lakerunner Sizing Estimator

Use this calculator to estimate the compute resources (vCPU and memory) required for your Lakerunner deployment based on your expected telemetry throughput.

This calculator runs entirely in your browser. No data is sent to any server.

Telemetry Throughput

Enter your expected telemetry volume. Select a time unit that is convenient for you — all values are normalized internally.

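The normalization step can be sketched as a simple unit conversion. This is a hypothetical illustration (the function and type names are not from Lakerunner's source): whatever unit you enter, the rate is divided by that unit's length in seconds.

```typescript
// Supported time units for the throughput inputs (illustrative).
type TimeUnit = "second" | "minute" | "hour" | "day";

const SECONDS_PER_UNIT: Record<TimeUnit, number> = {
  second: 1,
  minute: 60,
  hour: 3_600,
  day: 86_400,
};

// Convert a rate expressed "per <unit>" into a per-second rate.
function toPerSecond(value: number, unit: TimeUnit): number {
  return value / SECONDS_PER_UNIT[unit];
}
```

For example, entering 600,000 events per minute is equivalent to entering 10,000 events per second.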

Query Capacity

Query capacity depends on your query patterns, concurrency, and data volume. We recommend a minimum of 2 query-api and 2 query-worker pods.

query-api pods: 2 (minimum)
query-worker pods: 2

Estimated Resources

Total: 15.95 vCPU

Core Components: 0.95 vCPU (6%)
Logs: 2.00 vCPU (13%)
Metrics: 7.00 vCPU (44%)
Query: 6.00 vCPU (38%)
Category          Pods   Total vCPU   Total Memory
Core Components      4    0.95 vCPU        0.83 Gi
Logs                 2    2.00 vCPU        8.00 Gi
Metrics              4    7.00 vCPU       22.00 Gi
Query                4    6.00 vCPU       16.00 Gi
Grand Total         14   15.95 vCPU       46.83 Gi
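The grand total row is just the column-wise sum of the category rows. A minimal sketch of that aggregation (the interface and variable names are illustrative, not Lakerunner's actual code; the figures are copied from the table above):

```typescript
// One row of the sizing summary: per-category totals.
interface CategorySizing {
  name: string;
  pods: number;
  vcpu: number;   // total vCPU for the category
  memGi: number;  // total memory for the category, in Gi
}

const categories: CategorySizing[] = [
  { name: "Core Components", pods: 4, vcpu: 0.95, memGi: 0.83 },
  { name: "Logs",            pods: 2, vcpu: 2.0,  memGi: 8.0  },
  { name: "Metrics",         pods: 4, vcpu: 7.0,  memGi: 22.0 },
  { name: "Query",           pods: 4, vcpu: 6.0,  memGi: 16.0 },
];

// Sum every column to produce the Grand Total row.
const grandTotal = categories.reduce(
  (acc, c) => ({
    pods: acc.pods + c.pods,
    vcpu: acc.vcpu + c.vcpu,
    memGi: acc.memGi + c.memGi,
  }),
  { pods: 0, vcpu: 0, memGi: 0 },
);
// grandTotal.pods is 14; vcpu ≈ 15.95; memGi ≈ 46.83
```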

Recommendations

Auto-Scaling

Log, metric, trace, and query components auto-scale based on demand using KEDA. The estimates above represent a maximum footprint at sustained peak load. Actual resource usage will be lower during normal operation.

Cluster Auto-Scaling

We strongly recommend enabling Kubernetes cluster auto-scaling to automatically add and remove nodes as workload pods scale up and down. This ensures you only pay for the compute capacity you actually need.

Spot / Preemptible Instances

Lakerunner workloads are well-suited for Spot Instances (AWS) or Preemptible VMs (GCP). These can reduce compute costs by 60-90%. All Lakerunner components are designed to handle interruptions gracefully through work queue-based processing.

Contact support@cardinalhq.io for help or with any questions not answered in our documentation.