Headline throughput
- qail IPC: 237,561 q/s (prepared pipeline path)
- pgx: 239,794 q/s (SendBatch baseline)
- Rust: 355,000 q/s (workspace baseline)
Architecture comparison
pgx (pure Go)
Go app -> pgx driver -> PostgreSQL
Direct client-to-PostgreSQL path.
qail-daemon transport
Go app -> Unix socket + JSON -> qail-daemon (Rust/Tokio) -> PostgreSQL
Extra hop with prepared statement caching in the daemon.
50 Million Query Throughput
| Driver | Queries/s | Total time | Per query |
|---|---|---|---|
| pgx (SendBatch) | 239,794 | 208.5s | 4,170ns |
| qail IPC (PreparedPipeline) | 237,561 | 210.5s | 4,210ns |
| Rust baseline | 355,000 | 141s | 2,817ns |
Test Configuration
| Parameter | Value |
|---|---|
| Total queries | 50,000,000 |
| Batch size | 10,000 queries |
| Query type | `SELECT id, name FROM harbors LIMIT $1` |
| Protocol | Prepared statement plus pipeline mode |
| Connection | Single connection, no pooling |
Transport Iterations
| Mode | Queries/s | Relative to pgx | Notes |
|---|---|---|---|
| qail CGO (original) | 126,000 | 0.53x | CGO overhead and Go-side I/O |
| qail IPC (PipelineFast) | 42,000 | 0.18x | Full Query struct encoded per batch |
| qail IPC (PreparedPipeline) | 237,561 | 0.99x | Prepared statement cache in daemon |
PreparedPipeline notes
The daemon caches the prepared statement after a single Prepare call. Subsequent PreparedPipeline calls send parameter values rather than the full SQL template, which removes most of the JSON overhead for repeated execution.
PreparedPipeline flow
Go client
1. Prepare("SELECT ... LIMIT $1") -> handle
2. PreparedPipeline(handle, [["5"], ["3"], ["1"], ...])
qail-daemon
- Look up cached prepared statement by handle
- Call pipeline_execute_prepared_count()
- Return count
Potential Optimizations
None of these optimizations were applied in the recorded run; the gains below are estimates.
| Optimization | Estimated gain | Status |
|---|---|---|
| MessagePack instead of JSON | +10-15% | Planned |
| Pre-encoded params in Go | +5-10% | Planned |
| Connection pooling in daemon | +5% | Planned |
| Projected upper bound | 260K+ q/s | Projected only |
Interpretation
- This snapshot isolates the cost of the Go-to-daemon transport boundary on a prepared, pipelined workload.
- The daemon path stayed close to direct pgx while preserving a shared Rust execution layer across language bindings.
- Use this report for transport-shape reasoning, not as a replacement for direct workload validation.
- Rerun on your own server topology before treating the ratio as production guidance.