QAIL Documentation
The AST-Native Query Compiler with Built-in Row-Level Security
QAIL compiles typed query ASTs directly to database wire protocols. No application-level SQL string interpolation on the AST path. Built-in multi-tenant data isolation via RLS. The only Rust PostgreSQL driver with AST-level tenant injection.
Latest Updates (March 2026)
- Tenant-first naming is now canonical across docs and gateway flows (`tenant_id`), with legacy `operator_id` compatibility retained.
- Gateway policy evaluation and execution-path fixes were applied to reduce false denies and keep optimized command execution consistent.
- Direct SDK tracks are now first-class: TypeScript, Swift, and Kotlin.
- Node.js native binding and WASM packaging remain deferred.
Philosophy: AST = Meaning
If a database doesn’t let us encode semantic intent, we don’t fake it.
QAIL compiles typed query ASTs directly to database wire protocols with typed value encoding.
SQL String vs SQL Bytes
- SQL string: text query assembled in application code.
- SQL bytes: PostgreSQL protocol message bytes (`Parse`/`Bind`/`Execute` and results) plus encoded bind values.
- QAIL guarantee: AST flow removes app-side SQL interpolation as an injection surface.
- PostgreSQL behavior: server parse/plan/execute still applies normally.
Supported Databases
| Tier | Category | Supported | Driver |
|---|---|---|---|
| 1 | SQL-AST | PostgreSQL | qail-pg — Native wire protocol, AST-to-bytes |
| 2 | Vector-AST | Qdrant | qail-qdrant — gRPC + REST, vector search |
Redis support (`qail-redis`) was removed in `v0.20.0`.
❌ Not Supported
- Oracle, SQL Server, MySQL: Proprietary/Closed protocols.
Quick Example
#![allow(unused)]
use qail_core::{Qail, Operator, SortOrder};
use qail_pg::PgDriver;

async fn example() -> Result<(), Box<dyn std::error::Error>> {
    // Build a query with the AST builder
    let cmd = Qail::get("users")
        .columns(["id", "email", "name"])
        .filter("active", Operator::Eq, true)
        .order_by("created_at", SortOrder::Desc)
        .limit(10);

    // Execute with the qail-pg driver
    let mut driver = PgDriver::connect("localhost", 5432, "user", "db").await?;
    let rows = driver.query(&cmd).await?;
    Ok(())
}
Current Status (Production Ready, Actively Hardened)
| Feature | Status |
|---|---|
| SSL/TLS | ✅ |
| SCRAM-SHA-256 Auth | ✅ |
| Connection Pooling | ✅ |
| AST-Native Migrations | ✅ |
| JSON/JSONB Types | ✅ |
| UUID, Timestamps, INTERVAL | ✅ |
| CTEs (WITH) | ✅ |
| DISTINCT ON | ✅ |
| CASE WHEN | ✅ |
| Ergonomic Builders | ✅ |
| qail-lsp (IDE) | ✅ |
| COPY Protocol | ✅ |
| Arrays (Value::Array) | ✅ |
| Transactions (BEGIN/COMMIT/ROLLBACK) | ✅ |
| Query Plan Caching | ✅ |
| Window Functions (OVER) | ✅ |
| Subqueries & EXISTS | ✅ |
| UPSERT (ON CONFLICT) | ✅ |
| RETURNING Clause | ✅ |
| LATERAL JOIN | ✅ |
| Unix Socket & mTLS | ✅ |
| Savepoints | ✅ |
| UNION/INTERSECT/EXCEPT | ✅ |
| TRUNCATE | ✅ |
| Batch Transactions | ✅ |
| Statement Timeout | ✅ |
| EXPLAIN / EXPLAIN ANALYZE | ✅ |
| LOCK TABLE | ✅ |
| Connection Timeout | ✅ |
| Materialized Views | ✅ |
| Row-Level Security (RLS) | ✅ |
| Multi-Tenant Isolation | ✅ |
| TypedQail<T> Relations | ✅ |
| Protected Columns | ✅ |
| LISTEN/NOTIFY/UNLISTEN | ✅ |
Note: QAIL’s AST-native design eliminates app-side SQL interpolation on the AST path. Query plan caching (`prepare()`, `pipeline_prepared_fast()`) is a PostgreSQL performance optimization, not the primary security boundary.
Why Some SQL Features Don’t Exist in QAIL
QAIL is AST-first, not SQL-string-first. Many traditional SQL “security features” exist to mitigate string-construction risks that AST pipelines avoid by design:
| SQL Feature | Why It Exists | QAIL Replacement |
|---|---|---|
| Parameterized Queries | Prevent string injection | Built in — Value::Param is a typed AST node, not a string hole |
| Prepared Statements (for security) | Separate SQL from data | Not primary defense — AST already separates structure from data |
| Query Escaping | Sanitize user input | Not primary path — values are typed (Value::Text, Value::Int) |
| SQL Validators | Detect malformed queries | AST validation + build-time checks handle this path |
The AST Guarantee
#![allow(unused)]
use qail_core::{Qail, Operator};

fn main() {
    let user_input = "123";
    // SQL String (vulnerable):
    let sql = format!("SELECT * FROM users WHERE id = {}", user_input);
    // QAIL AST (impossible to inject):
    let cmd = Qail::get("users").filter("id", Operator::Eq, user_input);
    // user_input becomes Value::Int(123) or Value::Text("...")
    // — never interpolated into a string
}
Getting Help
🤝 Contributing & Support
We welcome issue reports on GitHub! Please provide detailed descriptions to help us reproduce and fix the problem. We aim to address critical issues within 1-5 business days.
Caution
Release Candidate: QAIL is now in the release-candidate phase. The API is near-stable and battle-tested in production. Breaking changes are expected to be rare and limited to critical correctness/security fixes before 1.0.
Installation
Rust (Recommended)
Add QAIL to your Cargo.toml:
[dependencies]
qail-core = "0.26.2" # AST and builder
qail-pg = "0.26.2" # PostgreSQL driver
CLI
Install the QAIL command-line tool:
cargo install qail
TypeScript SDK
npm install @qail/client
Swift SDK
Use the source package in this repository:
sdk/swift/Package.swift
Kotlin SDK
Use the Gradle module in this repository:
sdk/kotlin/build.gradle.kts
Deferred Bindings
- Node.js native binding: deferred
- WASM packaging: deferred
Verify Installation
qail --version
# qail 0.26.x
Quick Start
Connect to PostgreSQL
use qail_pg::PgDriver;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Connect with password (SCRAM-SHA-256)
let mut driver = PgDriver::connect_with_password(
"localhost", 5432, "user", "database", "password"
).await?;
// Or with SSL/TLS
let mut driver = PgDriver::connect(
"localhost", 5432, "user", "database"
).await?;
Ok(())
}
Execute Your First Query
#![allow(unused)]
use qail_core::Qail;
use qail_core::ast::Operator;
use qail_pg::PgDriver;

async fn first_query(driver: &mut PgDriver) -> Result<(), Box<dyn std::error::Error>> {
    // Build a SELECT query
    let cmd = Qail::get("users")
        .columns(["id", "email"])
        .filter("active", Operator::Eq, true)
        .limit(10);

    // Execute
    let rows = driver.fetch_all(&cmd).await?;
    for row in rows {
        let id: i32 = row.get("id")?;
        let email: String = row.get("email")?;
        println!("{}: {}", id, email);
    }
    Ok(())
}
Use Connection Pooling
#![allow(unused)]
use qail_pg::driver::{PgPool, PoolConfig};
use qail_core::Qail;

async fn pooled() -> Result<(), Box<dyn std::error::Error>> {
    let config = PoolConfig::new("localhost", 5432, "user", "db")
        .password("secret")
        .max_connections(20);
    let pool = PgPool::connect(config).await?;

    // Acquire a connection from the pool
    let mut conn = pool.acquire().await?;
    let probe = Qail::get("users").columns(["id"]).limit(1);
    let _rows = conn.fetch_all(&probe).await?;
    // Connection automatically returned when dropped
    Ok(())
}
Run Migrations
# Pull current schema from database
qail pull postgres://user:pass@localhost/db > schema.qail
# Create a new version with changes
# (edit schema.qail manually)
# Diff and apply
qail diff old.qail new.qail
qail migrate up old.qail:new.qail postgres://...
Schema Layout Examples (Single vs Modular)
Use the built-in samples in this repository:
- Single file: `examples/schema/single/schema.qail`
- Modular directory: `examples/schema/modular/schema/`
Try them:
qail check examples/schema/single/schema.qail
qail check examples/schema/modular/schema
qail check examples/schema/modular/schema.qail
The modular sample includes schema/_order.qail with strict manifest mode:
-- qail: strict-manifest
- Every discovered module must be listed (directly or through listed directories).
Qail Gateway
Qail Gateway is an auto-REST API server that turns your PostgreSQL database into a full-featured API — with zero backend code. Point it at a database, get instant CRUD, real-time subscriptions, and enterprise security.
Full-featured data API — without GraphQL complexity. Binary AST protocol instead of string-based queries.
Quick Start
# Set your database URL and run
DATABASE_URL=postgres://user:pass@localhost:5432/mydb cargo run --release
The gateway auto-discovers all tables and exposes them as REST endpoints:
GET /api/{table} # List (with filters, sort, pagination)
GET /api/{table}/:id # Get by ID
POST /api/{table} # Create
PATCH /api/{table}/:id # Update
DELETE /api/{table}/:id # Delete
GET /api/{table}/_explain # EXPLAIN ANALYZE
GET /api/{table}/aggregate # Aggregations
GET /api/{table}/_aggregate # Aggregations (compat alias)
GET /api/{table}/:id/{child} # Nested resources (FK-based)
POST /api/rpc/{function} # Function RPC with JSON args
Query API
Filtering
All filter operators are supported as query parameters:
GET /api/orders?status=paid # Exact match
GET /api/orders?total=gt.100 # Greater than
GET /api/orders?total=gte.50&total=lte.200 # Range
GET /api/orders?status=in.(paid,shipped) # IN list
GET /api/orders?name=like.*ferry* # Pattern match
GET /api/orders?name=ilike.*FERRY* # Case-insensitive
GET /api/orders?notes=is_null # NULL check
GET /api/orders?status=ne.cancelled # Not equal
GET /api/orders?tags=contains.premium # Array contains
Full-Text Search
GET /api/products?search=ferry+bali # Search all text columns
GET /api/products?search=ferry&search_columns=name,desc # Search specific columns
Uses PostgreSQL’s to_tsvector / websearch_to_tsquery — supports natural language queries.
Sorting
GET /api/orders?sort=created_at # Ascending (default)
GET /api/orders?sort=-total # Descending (prefix -)
GET /api/orders?sort=-status,created_at # Multi-column
Pagination
GET /api/orders?limit=25&offset=50 # Offset-based
GET /api/orders?limit=25&cursor=eyJpZCI6... # Cursor-based
Distinct
GET /api/orders?distinct=status # Distinct values
Relationships & Expansion
FK-Based JOIN Expansion
Automatically resolves foreign key relationships with ?expand=:
GET /api/orders?expand=users # Inline user object
GET /api/orders?expand=users,operators # Multiple relations
GET /api/orders?expand=nested:users # Nested FK expansion
Response includes the full related object inline — no N+1 queries. The gateway performs a server-side JOIN.
expand= is for forward (many-to-one / one-to-one) relations. For reverse one-to-many expansion, use nested: to avoid parent-row duplication.
Nested Resource Routes
Access child resources through parent:
GET /api/operators/:id/orders # All orders for operator
GET /api/users/:id/bookings?status=confirmed # Filtered child resources
Mutations
Create
# Single insert
curl -X POST /api/orders -d '{"user_id": "...", "total": 100}'
# Batch insert
curl -X POST /api/orders -d '[{"total": 100}, {"total": 200}]'
Update
curl -X PATCH /api/orders/:id -d '{"status": "shipped"}'
Upsert
curl -X POST /api/orders -H "X-Upsert: true" \
-H "X-On-Conflict: order_number" \
-d '{"order_number": "ORD-001", "total": 150}'
Delete
curl -X DELETE /api/orders/:id
Returning Clause
All mutations support ?returning=id,status to get back specific columns after the operation.
Aggregations
GET /api/orders/aggregate?fn=count # COUNT(*)
GET /api/orders/aggregate?fn=sum&column=total # SUM(total)
GET /api/orders/aggregate?fn=avg&column=total # AVG
GET /api/orders/aggregate?fn=min&column=created_at # MIN
GET /api/orders/aggregate?fn=max&column=total # MAX
GET /api/orders/aggregate?fn=count&status=paid # Filtered aggregation
GET /api/orders/_aggregate?fn=count # Alias (compat)
Function RPC
POST /api/rpc/search_orders
Body: { "tenant_id": "acme", "limit": 25 } # Named args
POST /api/rpc/rebuild_index
Body: ["orders", true] # Positional args
Authentication & Security
JWT Authentication
The gateway validates Authorization: Bearer <jwt> tokens and extracts tenant context for RLS.
Set JWT_SECRET in the environment:
export JWT_SECRET="your-hs256-secret"
Header-Based Dev Auth
For development, pass claims directly as headers:
curl \
-H "x-user-id: user-123" \
-H "x-user-role: operator" \
-H "x-tenant-id: tenant-abc" \
/api/orders
Row-Level Security (RLS)
Every query is automatically scoped to the authenticated tenant via PostgreSQL’s native RLS. The gateway sets session variables (app.current_tenant_id, app.current_user_id) before each query, and also writes legacy app.current_operator_id for compatibility — no manual WHERE clauses needed.
YAML Policy Engine
Fine-grained access control per table, per role:
policies:
- name: orders_agent_read
table: orders
role: agent
operations: [read]
filter: "tenant_id = $tenant_id"
allowed_columns: ["id", "status", "total", "created_at"]
- name: orders_viewer_read
table: orders
role: viewer
operations: [read]
allowed_columns: ["id", "status"]
Legacy schemas can still use `operator_id` by setting `tenant_column = "operator_id"` in `qail.toml`.
Query Allow-Listing
Lock down which queries can run in production:
# qail.toml
[gateway]
allow_list_path = "allow_list.txt"
Real-Time
WebSocket Subscriptions
Subscribe to table changes via WebSocket (PostgreSQL LISTEN/NOTIFY):
const ws = new WebSocket('ws://localhost:8080/ws');
ws.send(JSON.stringify({
type: 'subscribe',
channel: 'qail_table_orders'
}));
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
// type: "subscribed" | "notification" | "error"
console.log(msg);
};
Live Queries
Auto-refresh query results when underlying data changes:
ws.send(JSON.stringify({
type: 'live_query',
qail: "get orders where status = 'paid' order by created_at desc limit 10",
table: 'orders',
interval_ms: 2000
}));
Event Triggers
Fire webhooks on database mutations:
- name: order_created
table: orders
operations: [create]
webhook_url: "https://api.example.com/hooks/order-created"
retry_count: 3
headers:
X-Secret: webhook-secret-key
- name: order_updated
table: orders
operations: [update]
webhook_url: "https://api.example.com/hooks/order-updated"
Performance
Response Streaming
For large datasets, stream results as NDJSON:
GET /api/large_table?stream=true
Each row is sent as a newline-delimited JSON object — no buffering the entire result set.
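As an illustration of the framing (not the gateway's implementation, which streams rows incrementally rather than joining them in memory), a minimal sketch of NDJSON output; `to_ndjson` and the row shape are assumptions:

```rust
/// Sketch: emit rows as newline-delimited JSON, one object per line,
/// so a client can process each row before the result set completes.
fn to_ndjson(rows: &[(&str, i64)]) -> String {
    rows.iter()
        .map(|(status, total)| format!(r#"{{"status":"{}","total":{}}}"#, status, total))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let rows = [("paid", 100), ("shipped", 200)];
    // Prints two lines, each a complete JSON object.
    println!("{}", to_ndjson(&rows));
}
```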
EXPLAIN Endpoint
Inspect query plans without touching production:
GET /api/orders/_explain?status=paid&expand=users
Returns the PostgreSQL EXPLAIN ANALYZE output for the generated query.
Prepared Statement Caching
The gateway caches prepared statements per query shape, eliminating repeated parse overhead.
Query Cache
LRU cache with TTL and table-level invalidation. Identical queries hit cache instead of the database.
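To illustrate the cache semantics (the real implementation is moka-backed), a hypothetical sketch of TTL expiry plus table-level invalidation; `QueryCache`, `put`, and `invalidate_table` are illustrative names, not the gateway API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical TTL cache: entries expire after `ttl`, and a mutation on
/// a table evicts every cached query that touched it.
struct QueryCache {
    ttl: Duration,
    entries: HashMap<String, (String, Instant)>, // key -> (result JSON, inserted_at)
    table_keys: HashMap<String, Vec<String>>,    // table -> keys to invalidate
}

impl QueryCache {
    fn new(ttl: Duration) -> Self {
        QueryCache { ttl, entries: HashMap::new(), table_keys: HashMap::new() }
    }

    fn put(&mut self, table: &str, key: &str, value: &str) {
        self.entries.insert(key.to_string(), (value.to_string(), Instant::now()));
        self.table_keys.entry(table.to_string()).or_default().push(key.to_string());
    }

    fn get(&self, key: &str) -> Option<&str> {
        match self.entries.get(key) {
            // Serve only entries younger than the TTL.
            Some((v, at)) if at.elapsed() < self.ttl => Some(v.as_str()),
            _ => None, // missing or expired
        }
    }

    /// Called on create/update/delete against `table`.
    fn invalidate_table(&mut self, table: &str) {
        for key in self.table_keys.remove(table).unwrap_or_default() {
            self.entries.remove(&key);
        }
    }
}

fn main() {
    let mut cache = QueryCache::new(Duration::from_secs(60));
    cache.put("orders", "orders?status=paid", "cached-rows");
    assert!(cache.get("orders?status=paid").is_some());
    cache.invalidate_table("orders"); // e.g. after a POST /api/orders
    assert!(cache.get("orders?status=paid").is_none());
}
```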
Observability
Prometheus Metrics
GET /metrics
Exposes request counts, latencies, error rates, and connection pool stats.
Request Tracing
Every response includes:
- `x-request-id` — unique request identifier
- `x-response-time` — duration in milliseconds
Health Check
GET /health
Schema Introspection
GET /api/_schema # Full schema with tables, columns, types, FKs
GET /api/_openapi # Auto-generated OpenAPI 3.0 spec
Benchmark: Why Gateway > GraphQL
The gateway’s ?expand= does server-side JOINs — the same approach as Qail AST but over HTTP:
| Approach | Avg Latency | DB Queries | vs Qail |
|---|---|---|---|
| Qail AST (binary) | 449µs | 1 | baseline |
| Gateway (?expand=) | 635µs | 1 | 1.4× |
| GraphQL + DataLoader | 1.52ms | 3 | 3.4× |
| GraphQL naive (N+1) | 18.2ms | 151 | 40× |
The 1.4× gap is pure JSON serialization overhead. On the wire, the gateway executes the exact same single-query JOIN as the Qail driver.
Unlike GraphQL, the gateway makes N+1 structurally impossible.
?expand=always resolves to a server-side JOIN — there’s no resolver pattern to misconfigure.
Architecture & Invariants
This page describes the gateway’s internal architecture — the request lifecycle, security boundaries, and the invariants that make multi-tenant isolation structural rather than behavioral.
Read time: ~10 minutes.
Recent hardening:
- Policy evaluation now avoids premature deny outcomes when later matching policies allow the operation.
- Handler execution paths consistently run command optimization before DB execution.
Request Lifecycle
Every HTTP request flows through a fixed, ordered pipeline:
Client Request
│
┌─────▼─────────────────────────────────────────────────────┐
│ 1. CORS + Security Headers │
│ X-Content-Type-Options: nosniff │
│ X-Frame-Options: DENY │
│ Body size limit: 2 MiB │
├───────────────────────────────────────────────────────────┤
│ 2. Rate Limiter │
│ Token bucket per IP (configurable burst + rate) │
│ → 429 Too Many Requests on exhaustion │
├───────────────────────────────────────────────────────────┤
│ 3. Authentication │
│ JWT validation (HS256) → extract tenant_id, │
│ user_id, role from claims │
│ Fallback: header-based (dev only, no JWT_SECRET) │
├───────────────────────────────────────────────────────────┤
│ 4. Policy Engine │
│ YAML-defined per-table, per-role access control │
│ Column filtering at AST level │
│ → 403 Forbidden on policy violation │
├───────────────────────────────────────────────────────────┤
│ 5. Tenant Concurrency Gate │
│ Per-tenant semaphore (configurable permits) │
│ Prevents one tenant from consuming all connections │
│ → 429 on tenant saturation │
├───────────────────────────────────────────────────────────┤
│ 6. Connection Acquisition │
│ pool.acquire_with_rls(RlsContext) or │
│ pool.acquire_with_rls_timeout(RlsContext, timeout) │
│ Sets PostgreSQL GUCs: │
│ set_config('app.current_tenant_id', '...', false) │
│ set_config('app.current_operator_id', '...', false) │
│ set_config('app.is_super_admin', '...', false) │
├───────────────────────────────────────────────────────────┤
│ 7. EXPLAIN Pre-Check (reads only) │
│ EXPLAIN on generated SQL before execution │
│ Rejects if cost > explain_max_cost │
│ or rows > explain_max_rows │
│ → QUERY_TOO_EXPENSIVE with structured JSON detail │
├───────────────────────────────────────────────────────────┤
│ 8. Query Execution │
│ AST → SQL transpilation → prepared statement cache │
│ PostgreSQL RLS policies filter rows invisibly │
│ Result row cap: max_result_rows (configurable) │
├───────────────────────────────────────────────────────────┤
│ 9. Connection Release │
│ COMMIT → resets txn-local RLS + statement_timeout │
│ Prepared statement caches remain hot for reuse │
│ Connection returned to pool in clean state │
└───────────────────────────────────────────────────────────┘
│
JSON Response + X-Request-Id + X-Response-Time
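As a sketch of the rate-limiting stage (step 2), here is a minimal per-client token bucket; `TokenBucket` and `try_acquire` are illustrative names, and the exact refill math is an assumption rather than the gateway's implementation:

```rust
use std::time::Instant;

/// Hypothetical token bucket: `burst` is the capacity,
/// `rate` is tokens refilled per second.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(burst: u32, rate_per_sec: f64) -> Self {
        TokenBucket {
            capacity: burst as f64,
            tokens: burst as f64,
            rate: rate_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Returns true if a request may proceed; false maps to 429.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.rate).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2, 1.0); // burst 2, 1 token/sec
    assert!(bucket.try_acquire());
    assert!(bucket.try_acquire());
    assert!(!bucket.try_acquire()); // drained -> 429 Too Many Requests
}
```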
Connection Safety Model
The connection pool enforces a strict lifecycle:
Acquisition
Four public methods:
| Method | RLS | Timeout | Use Case |
|---|---|---|---|
| acquire_with_rls(ctx) | ✅ Set | Default | Normal tenant queries |
| acquire_with_rls_timeout(ctx, ms) | ✅ Set | Custom | Gateway with statement_timeout_ms |
| acquire_system() | ✅ Empty | Default | Schema introspection, migrations |
| acquire_raw() ⚠️ | ❌ None | Default | Advanced/internal paths that set RLS immediately |
Release
Every connection release executes:
COMMIT; -- Ends txn opened by RLS setup; resets transaction-local GUCs
With transaction-local settings (`set_config(..., true)` + `SET LOCAL`), this resets:
- RLS context GUCs
- `statement_timeout`
Prepared statement caches are intentionally preserved for performance while tenant context is reset on each release.
Why acquire_raw() Is Restricted
acquire_raw() returns a connection with no RLS context. If used for tenant queries, it would bypass row-level security entirely.
Every internal call site must include a // SAFETY: comment explaining why raw acquisition is justified (typically: “RLS context is set immediately on the next line”). This convention is enforced via CI:
# Must return empty — every acquire_raw() must have a SAFETY comment
grep -rn "acquire_raw" pg/src/ | grep -v "// SAFETY:"
Core Invariants
These are the properties the system guarantees. If any are violated, it’s a bug.
1. Fail-Closed RLS
Every tenant query runs on a connection where RLS GUCs are set before any SQL executes. If GUC setup fails, the connection is not used — the error propagates to the caller.
There is no code path where a tenant query runs on a connection with another tenant’s context.
2. Cost-Bounded Execution
Read queries are gated by EXPLAIN analysis before execution. If the estimated cost or row count exceeds configured limits, the query is rejected before touching the database.
Limits are configurable per role via [gateway.role_overrides.<role>] in qail.toml, allowing analytics roles to run heavier queries without weakening default safety.
3. Tenant Isolation on Connection Reuse
When a connection is released and re-acquired by a different tenant:
- `COMMIT` resets transaction-local RLS state and timeout
- New RLS context is set for the new tenant
- Query executes under the new tenant context
The integration test test_pool_connection_recycling_isolation verifies this with a pool of size 1, forcing the same physical connection to serve two different tenants sequentially.
4. Bounded Resource Consumption
| Resource | Bound | Mechanism |
|---|---|---|
| Connections | max_connections | Pool size cap |
| Per-tenant connections | tenant_max_concurrent | Semaphore per tenant |
| Result rows | max_result_rows | Row cap per query |
| Query duration | statement_timeout_ms | PostgreSQL SET LOCAL |
| Query cost | explain_max_cost | EXPLAIN pre-check |
| Request body | 2 MiB | Axum body limit layer |
| Cache entries | max_entries | moka TinyLFU eviction |
| Cache entry TTL | ttl_secs | moka time-to-live |
5. Graceful Shutdown
On SIGTERM or Ctrl+C:
- Stop accepting new connections
- Wait for in-flight requests to complete
- Drain the connection pool
- Exit cleanly
No request is silently dropped. No connection is leaked.
Component Map
GatewayState (shared Arc across all handlers)
├── pool: PgPool — Connection pool with RLS-aware acquisition
├── config: GatewayConfig — All configuration (qail.toml + env overrides)
├── policy_engine: PolicyEngine — YAML-defined per-table, per-role policies
├── schema: SchemaRegistry — Auto-discovered table/column/FK metadata
├── cache: QueryCache — moka-backed LRU with table-level invalidation
├── rate_limiter: RateLimiter — Token bucket rate limiting
├── explain_cache: ExplainCache — Cached EXPLAIN results per query shape
├── explain_config: ExplainConfig — Cost/row thresholds for EXPLAIN pre-check
├── tenant_semaphore: TenantSemaphore — Per-tenant concurrency limiter
├── event_engine: EventTriggerEngine — Webhook triggers on mutations
└── user_operator_map: HashMap — JWT user_id → tenant_id resolution cache
Where to Hook Observability
| Signal | Endpoint / Header | Format |
|---|---|---|
| Metrics | GET /metrics | Prometheus |
| Request ID | X-Request-Id response header | UUID |
| Latency | X-Response-Time response header | Duration string |
| Health | GET /health | 200 OK or error |
| Schema | GET /api/_schema | JSON |
| OpenAPI | GET /api/_openapi | OpenAPI 3.0 JSON |
| Cache stats | Via /metrics | hit/miss/entries/weighted_size |
REST API Reference
The gateway auto-discovers all tables and exposes them as REST endpoints under /api/.
Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /api/{table} | List with filters, sort, pagination |
| GET | /api/{table}/:id | Get by primary key |
| GET | /api/{table}/_explain | EXPLAIN ANALYZE |
| GET | /api/{table}/aggregate | Aggregations |
| GET | /api/{table}/_aggregate | Aggregations (compat alias) |
| GET | /api/{table}/:id/{child} | Nested resources (FK-based) |
| POST | /api/rpc/{function} | Function RPC with JSON args |
| POST | /api/{table} | Create (single or batch) |
| PATCH | /api/{table}/:id | Partial update |
| DELETE | /api/{table}/:id | Delete by primary key |
Filtering
All filter operators are supported as query parameters.
Both key-style (column.op=value) and value-style (column=op.value) are accepted:
GET /api/orders?status=paid # Exact match
GET /api/orders?status.eq=paid # Exact match (key-style)
GET /api/orders?total=gt.100 # Greater than
GET /api/orders?total.gte=50&total.lte=200 # Range (key-style)
GET /api/orders?total=gte.50&total=lte.200 # Range
GET /api/orders?status=in.(paid,shipped) # IN list
GET /api/orders?status.in=paid,shipped # IN list (key-style)
GET /api/orders?name=like.*ferry* # Pattern match
GET /api/orders?name=ilike.*FERRY* # Case-insensitive
GET /api/orders?notes=is_null # NULL check
GET /api/orders?notes.is_null=true # NULL check (key-style)
GET /api/orders?status=ne.cancelled # Not equal
GET /api/orders?status.ne=cancelled # Not equal (key-style)
GET /api/orders?tags=contains.premium # Array contains
Operator Reference
| Operator | SQL | Example |
|---|---|---|
| eq (default) | = | ?status=paid |
| ne | != | ?status=ne.cancelled or ?status.ne=cancelled |
| gt / gte | > / >= | ?total=gte.100 |
| lt / lte | < / <= | ?age=lt.30 |
| in | IN (...) | ?status=in.(active,pending) or ?status.in=active,pending |
| like | LIKE | ?email=like.*@gmail* |
| ilike | ILIKE | ?name=ilike.*john* |
| is_null | IS NULL | ?deleted_at=is_null or ?deleted_at.is_null=true |
| contains | @> | ?tags=contains.premium |
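The value-style syntax (`column=op.value`) can be parsed with a simple prefix check. A hypothetical sketch, not the gateway's actual parser:

```rust
/// Sketch: parse a value-style filter (`column=op.value`) into
/// (column, operator, operand). Unrecognized prefixes fall back to `eq`.
fn parse_filter(key: &str, raw: &str) -> (String, String, String) {
    const OPS: [&str; 10] = [
        "eq", "ne", "gt", "gte", "lt", "lte", "in", "like", "ilike", "contains",
    ];
    if let Some((op, value)) = raw.split_once('.') {
        if OPS.contains(&op) {
            return (key.to_string(), op.to_string(), value.to_string());
        }
    }
    if raw == "is_null" {
        return (key.to_string(), "is_null".to_string(), String::new());
    }
    // No recognized operator prefix: treat the whole value as an exact match.
    (key.to_string(), "eq".to_string(), raw.to_string())
}

fn main() {
    assert_eq!(
        parse_filter("total", "gt.100"),
        ("total".to_string(), "gt".to_string(), "100".to_string())
    );
    assert_eq!(
        parse_filter("status", "paid"),
        ("status".to_string(), "eq".to_string(), "paid".to_string())
    );
}
```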
Full-Text Search
GET /api/products?search=ferry+bali # Search all text columns
GET /api/products?search=ferry&search_columns=name,desc # Search specific columns
Uses PostgreSQL’s to_tsvector / websearch_to_tsquery — supports natural language queries.
Sorting
GET /api/orders?sort=created_at # Ascending (default)
GET /api/orders?sort=-total # Descending (prefix -)
GET /api/orders?sort=-status,created_at # Multi-column
GET /api/orders?sort=total:desc,created_at:asc # Multi-column (explicit)
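A minimal sketch of how one `sort=` entry could map to a column and direction, covering both the `-` prefix and the explicit `:asc`/`:desc` suffix (illustrative only, not the gateway's parser):

```rust
/// Sketch: parse one sort entry into (column, descending).
fn parse_sort(entry: &str) -> (String, bool) {
    if let Some(col) = entry.strip_prefix('-') {
        return (col.to_string(), true); // `-total` -> DESC
    }
    match entry.split_once(':') {
        Some((col, "desc")) => (col.to_string(), true),
        Some((col, _)) => (col.to_string(), false), // `:asc` or anything else
        None => (entry.to_string(), false),         // ascending by default
    }
}

fn main() {
    // `sort=-status,created_at` -> [(status, DESC), (created_at, ASC)]
    let parsed: Vec<_> = "-status,created_at".split(',').map(parse_sort).collect();
    assert_eq!(parsed[0], ("status".to_string(), true));
    assert_eq!(parsed[1], ("created_at".to_string(), false));
}
```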
Pagination
GET /api/orders?limit=25&offset=50 # Offset-based
GET /api/orders?limit=25&cursor=eyJpZCI6... # Cursor-based
Column Selection & Distinct
GET /api/orders?select=id,status,total # Return specific columns
GET /api/orders?distinct=status # Distinct values
Relationships & Expansion
FK-Based JOIN Expansion
Automatically resolves foreign key relationships with ?expand=:
GET /api/orders?expand=users # Inline user object
GET /api/orders?expand=users,operators # Multiple relations
GET /api/orders?expand=nested:users # Nested FK expansion
Response includes the full related object inline — no N+1 queries. The gateway performs a server-side JOIN.
expand= is for forward (many-to-one / one-to-one) relations. For reverse one-to-many expansion, use nested: to avoid parent-row duplication.
Nested Resource Routes
Access child resources through parent:
GET /api/operators/:id/orders # All orders for operator
GET /api/users/:id/bookings?status=confirmed # Filtered child resources
Mutations
Create
# Single insert
curl -X POST /api/orders -d '{"user_id": "...", "total": 100}'
# Batch insert
curl -X POST /api/orders -d '[{"total": 100}, {"total": 200}]'
Update
curl -X PATCH /api/orders/:id -d '{"status": "shipped"}'
Upsert
curl -X POST /api/orders -H "X-Upsert: true" \
-H "X-On-Conflict: order_number" \
-d '{"order_number": "ORD-001", "total": 150}'
Delete
curl -X DELETE /api/orders/:id
Returning Clause
All mutations support ?returning=id,status to get back specific columns after the operation.
Aggregations
GET /api/orders/aggregate?fn=count # COUNT(*)
GET /api/orders/aggregate?fn=sum&column=total # SUM(total)
GET /api/orders/aggregate?fn=avg&column=total # AVG
GET /api/orders/aggregate?fn=min&column=created_at # MIN
GET /api/orders/aggregate?fn=max&column=total # MAX
GET /api/orders/aggregate?fn=count&status=paid # Filtered aggregation
GET /api/orders/_aggregate?fn=count # Alias (compat)
Supported functions: count, sum, avg, min, max.
RPC Functions
Call PostgreSQL functions directly:
# Named arguments (object body)
curl -X POST /api/rpc/search_orders \
-d '{"tenant_id":"acme","limit":25}'
# Positional arguments (array body)
curl -X POST /api/rpc/rebuild_index \
-d '["orders", true]'
Accepted body formats:
- JSON object: named args (`arg => value`)
- JSON array: positional args
- Scalar/null: single positional arg
- Empty body: no args
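A hypothetical sketch of how those four body formats might be distinguished (a real implementation would parse the JSON; this only inspects the first non-whitespace character):

```rust
/// Illustrative classification of an RPC request body.
#[derive(Debug, PartialEq)]
enum RpcArgs {
    Named,        // JSON object: arg => value
    Positional,   // JSON array
    SingleScalar, // scalar or null: one positional arg
    None,         // empty body: no args
}

fn classify_rpc_body(body: &str) -> RpcArgs {
    match body.trim().chars().next() {
        Option::None => RpcArgs::None,
        Some('{') => RpcArgs::Named,
        Some('[') => RpcArgs::Positional,
        Some(_) => RpcArgs::SingleScalar,
    }
}

fn main() {
    assert_eq!(classify_rpc_body(r#"{"tenant_id":"acme","limit":25}"#), RpcArgs::Named);
    assert_eq!(classify_rpc_body(r#"["orders", true]"#), RpcArgs::Positional);
    assert_eq!(classify_rpc_body("42"), RpcArgs::SingleScalar);
    assert_eq!(classify_rpc_body(""), RpcArgs::None);
}
```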
Authentication & Security
The gateway provides multiple layers of security: JWT authentication, PostgreSQL Row-Level Security integration, a YAML policy engine, and query allow-listing.
Note: Webhook-based authentication has been removed. JWT (`JWT_SECRET`) is the only supported authentication mechanism. If your reverse proxy needs to authenticate, forward user identity within the JWT — not via custom headers.
JWT Authentication
The gateway validates JWT tokens and extracts tenant context for RLS.
Set JWT_SECRET as an environment variable:
export JWT_SECRET="your-hs256-secret"
The extracted claims (tenant_id, user_id, role) are set as PostgreSQL session variables before every query, enabling native RLS enforcement:
set_config('app.current_tenant_id', '<from JWT>', false);
set_config('app.current_operator_id', '<from JWT>', false); -- legacy compat alias
set_config('app.current_user_id', '<from JWT>', false);
set_config('app.role', '<from JWT>', false);
Header-Based Dev Auth
For development, pass claims directly as headers:
curl \
-H "x-user-id: user-123" \
-H "x-user-role: operator" \
-H "x-tenant-id: tenant-abc" \
/api/orders
Warning: Header-based auth is only active when `QAIL_DEV_MODE=true` is set. This works independently of `JWT_SECRET` — you can have both JWT and dev-mode headers active simultaneously. If a `Bearer` token is provided but fails validation, the request is denied (not degraded to dev-mode or anonymous). The gateway logs a startup warning when dev mode is enabled and does not enforce bind-address restrictions.
Row-Level Security (RLS)
Every query is automatically scoped to the authenticated tenant via PostgreSQL’s native RLS. The gateway sets session variables before each query:
-- Automatically executed before every query:
set_config('app.current_operator_id', '<from JWT>', false);
set_config('app.current_tenant_id', '<from JWT>', false);
set_config('app.current_user_id', '<from JWT>', false);
set_config('app.role', '<from JWT>', false);
Your PostgreSQL RLS policies reference these variables:
CREATE POLICY tenant_isolation ON orders
FOR ALL
USING (tenant_id = current_setting('app.current_tenant_id')::uuid);
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;
No manual WHERE clauses needed. The gateway + RLS combination provides database-level multi-tenancy.
Important: Your application database role must be a non-superuser with
NOBYPASSRLS. Superusers bypass RLS even withFORCE ROW LEVEL SECURITY.
YAML Policy Engine
Fine-grained access control per table, per role:
policies:
- name: orders_agent_read
table: orders
role: agent
operations: [read]
filter: "tenant_id = $tenant_id"
allowed_columns: ["id", "status", "total", "created_at"]
- name: orders_viewer_read
table: orders
role: viewer
operations: [read]
allowed_columns: ["id", "status"]
Compatibility: gateway JWT parsing still accepts legacy `operator_id` claims and maps them into `tenant_id` when `tenant_id` is absent.
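As a sketch of permissive policy matching (consistent with the hardening note that a later matching policy can allow an operation an earlier one did not), assume the request is allowed when any matching policy permits it; `Policy` and `is_allowed` are illustrative names, not the engine's API:

```rust
/// Hypothetical policy record mirroring the YAML above.
struct Policy {
    table: &'static str,
    role: &'static str,
    operations: &'static [&'static str],
}

/// Permissive evaluation: allow when ANY policy matching (table, role)
/// lists the operation, so an earlier non-matching policy cannot
/// produce a premature deny.
fn is_allowed(policies: &[Policy], table: &str, role: &str, op: &str) -> bool {
    policies
        .iter()
        .filter(|p| p.table == table && p.role == role)
        .any(|p| p.operations.contains(&op))
}

fn main() {
    let policies = [
        Policy { table: "orders", role: "agent", operations: &["read"] },
        Policy { table: "orders", role: "viewer", operations: &["read"] },
    ];
    assert!(is_allowed(&policies, "orders", "agent", "read"));
    assert!(!is_allowed(&policies, "orders", "agent", "create")); // -> 403
}
```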
Column Permissions
Control which columns each role can read or write:
- Whitelist: Only the listed columns are returned/allowed.
- Blacklist: All columns except the listed ones are returned/allowed.
Policies are applied at the AST level before the query reaches PostgreSQL — denied columns never leave the database.
Operation Permissions
Control CRUD operations per role per table:
| Permission | Operations |
|---|---|
| read | GET list and single |
| create | POST create |
| update | PATCH update |
| delete | DELETE delete |
Query Allow-Listing
Lock down which queries can run in production:
# qail.toml
[gateway]
allow_list_path = "allow_list.txt"
When enabled, any query pattern not in the allow-list is rejected with 403 Forbidden. This provides defense-in-depth: even if auth is bypassed, only pre-approved query shapes can execute.
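A minimal sketch of allow-list enforcement, under the assumption that `allow_list.txt` holds one query shape per line with `#` comments; the shape format and names here are illustrative:

```rust
use std::collections::HashSet;

/// Load allowed query shapes, skipping blanks and `#` comments.
fn load_allow_list(contents: &str) -> HashSet<String> {
    contents
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .map(str::to_string)
        .collect()
}

/// Any shape not explicitly listed is rejected (403 Forbidden).
fn is_query_allowed(allow: &HashSet<String>, shape: &str) -> bool {
    allow.contains(shape)
}

fn main() {
    let allow = load_allow_list("# allow_list.txt\nget orders\nget users\n");
    assert!(is_query_allowed(&allow, "get orders"));
    assert!(!is_query_allowed(&allow, "delete orders")); // -> 403 Forbidden
}
```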
RPC Contract Hardening
Harden /api/rpc/{function} with strict function naming and signature checks:
[gateway]
rpc_require_schema_qualified = true
rpc_allowlist_path = "rpc_allowlist.txt"
rpc_signature_check = true
rpc_allowlist_path format:
# One function per line (case-insensitive)
api.search_orders
public.health_check
What each control does:
| Setting | Effect |
|---|---|
| rpc_require_schema_qualified | Rejects unqualified calls like search_orders; requires schema.function |
| rpc_allowlist_path | Blocks RPC calls not explicitly listed |
| rpc_signature_check | For named-arg JSON bodies, rejects unknown argument keys not present in PostgreSQL function signatures |
When rpc_signature_check=true, the gateway also uses a parser-only PostgreSQL probe (PREPARE ...; DEALLOCATE) to align overload resolution with PostgreSQL itself before execution.
RPC DevEx endpoint:
`GET /api/_rpc/contracts` returns callable function signatures (identity_args, defaults, variadic, return type) for typed client generation.
RPC result format control:
- Optional header `x-qail-result-format: binary` enables binary column format on RPC responses.
- The default is `x-qail-result-format: text`.
Database Auth/TLS Hardening
Gateway database transport/auth policy can be configured through database_url query parameters:
[gateway]
database_url = """postgresql://app:secret@db.internal:5432/app\
?sslmode=require\
&sslrootcert=/etc/qail/ca.pem\
&channel_binding=require\
&auth_mode=scram_only"""
Supported parameters:
| Parameter | Values | Effect |
|---|---|---|
| `sslmode` | `disable`, `prefer`, `require` (`verify-ca`/`verify-full` map to `require`) | TLS policy |
| `sslrootcert` | file path | Custom CA bundle for server cert validation |
| `sslcert` + `sslkey` | file paths | Enable mTLS client cert auth |
| `channel_binding` | `disable`, `prefer`, `require` | SCRAM channel-binding policy |
| `auth_mode` | `scram_only`, `gssapi_only`, `compat` | Auth policy preset |
| `auth_scram` / `auth_md5` / `auth_cleartext` | boolean | Fine-grained mechanism toggles |
| `auth_kerberos` / `auth_gssapi` / `auth_sspi` | boolean | Enterprise auth mechanism toggles |
| `gss_provider` | `linux_krb5`, `callback`, `custom` | Selects the built-in Linux krb5 provider vs external callback wiring |
| `gss_service` | string (default `postgres`) | Kerberos service used for the host-based target (`service@host`) |
| `gss_target` | string | Optional full host-based target override |
| `gss_connect_retries` | integer (default 2) | Retries for transient GSS/Kerberos connect/auth failures |
| `gss_retry_base_ms` | integer ms (default 150) | Base delay for exponential GSS retry backoff |
| `gss_circuit_threshold` | integer (default 8) | Failures within the window before the local GSS circuit opens |
| `gss_circuit_window_ms` | integer ms (default 30000) | Rolling window for circuit failure counting |
| `gss_circuit_cooldown_ms` | integer ms (default 15000) | Cooldown during which an open circuit blocks new connect attempts |
If either `sslcert` or `sslkey` is provided, both must be set.
If `gss_provider=linux_krb5` is set, build the gateway with the `enterprise-gssapi` feature on Linux.
Startup runs Kerberos preflight checks and emits clear diagnostics for common misconfigurations (missing explicit credential cache/keytab paths, invalid `KRB5_CONFIG`, etc.).
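Assuming a plain doubling schedule (the table only says "exponential backoff", so the exact curve is an assumption), the retry delays derived from `gss_retry_base_ms` look like:

```rust
/// Delay before retry attempt `n` (0-based), assuming simple doubling
/// from the configured base: base, 2×base, 4×base, ...
/// The shift is clamped to avoid overflow on absurd attempt counts.
fn gss_retry_delay_ms(base_ms: u64, attempt: u32) -> u64 {
    base_ms.saturating_mul(1u64 << attempt.min(16))
}

fn main() {
    // gss_retry_base_ms = 200, gss_connect_retries = 3
    for attempt in 0..3 {
        println!("retry {} after {} ms", attempt, gss_retry_delay_ms(200, attempt));
    }
}
```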
Example:
[gateway]
database_url = """postgresql://app@db.internal:5432/app\
?sslmode=require\
&auth_mode=gssapi_only\
&gss_provider=linux_krb5\
&gss_service=postgres\
&gss_connect_retries=3\
&gss_retry_base_ms=200\
&gss_circuit_threshold=8\
&gss_circuit_window_ms=30000\
&gss_circuit_cooldown_ms=15000"""
Security Summary
| Threat | Traditional REST | QAIL Gateway |
|---|---|---|
| SQL injection | Possible (one mistake) | Impossible (binary AST) |
| Tenant data leak | Missing WHERE clause | RLS auto-injected |
| N+1 catastrophe | Default behavior | Structurally impossible |
| Over-fetching | Manual column control | Policy-enforced |
| Query abuse | Rate limiting only | Allow-list + rate limit |
Internal Endpoint Protection (M4)
The /metrics and /health/internal endpoints expose operational details. Protect them in production:
# qail.toml
[gateway]
admin_token = "your-secret-admin-token"
When set, both endpoints require Authorization: Bearer <admin_token>. Without the token, they return 401 Unauthorized.
Alternatively, restrict access via network policy (firewall rules, reverse proxy).
Real-Time & Events
The gateway supports real-time data through WebSocket subscriptions, live queries, and event triggers with webhook delivery.
WebSocket Subscriptions
Subscribe to table changes via WebSocket (backed by PostgreSQL LISTEN/NOTIFY):
const ws = new WebSocket('ws://localhost:8080/ws');
ws.send(JSON.stringify({
type: 'subscribe',
channel: 'qail_table_orders'
}));
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
// type: "subscribed" | "notification" | "error"
console.log(msg);
};
Subscriptions are scoped by your RLS policies — each client only receives events for rows they are authorized to see.
Live Queries
Auto-refresh query results when underlying data changes:
ws.send(JSON.stringify({
type: 'live_query',
qail: "get orders where status = 'paid' order by created_at desc limit 10",
table: 'orders',
interval_ms: 2000
}));
The gateway re-executes the query at the specified interval and pushes updated results only when data has changed.
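One way to implement "push only when data has changed" is to hash each polled result set and compare it against the last pushed hash. This is a sketch of the idea, not necessarily the gateway's actual diffing strategy:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Returns Some(rows) only when the result set differs from the last push.
fn push_if_changed<'a>(rows: &'a [String], last_hash: &mut u64) -> Option<&'a [String]> {
    let mut h = DefaultHasher::new();
    rows.hash(&mut h);
    let current = h.finish();
    if current != *last_hash {
        *last_hash = current;
        Some(rows) // data changed — send a WebSocket message
    } else {
        None // unchanged — suppress the push
    }
}

fn main() {
    let mut last = 0u64;
    let results = vec!["order-1".to_string(), "order-2".to_string()];
    assert!(push_if_changed(&results, &mut last).is_some()); // first poll pushes
    assert!(push_if_changed(&results, &mut last).is_none()); // identical — suppressed
}
```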
Event Triggers
Fire webhooks on database mutations. Define triggers in YAML:
- name: order_created
table: orders
operations: [create]
webhook_url: "https://api.example.com/hooks/order-created"
headers:
X-Secret: webhook-secret-key
retry_count: 3
- name: order_updated
table: orders
operations: [update]
webhook_url: "https://api.example.com/hooks/order-updated"
Webhook Payload
{
"trigger": "order_created",
"table": "orders",
"operation": "INSERT",
"data": {
"new": { "id": "uuid-...", "total": 150.00, "status": "pending" },
"old": null
},
"timestamp": "2025-01-15T10:30:00Z"
}
Delivery Guarantees
- Non-blocking: Webhooks fire asynchronously via `tokio::spawn`. The REST response is never delayed.
- Retry with backoff: Failed deliveries retry with exponential backoff up to the configured count.
- Custom headers: Attach secret keys or auth tokens to webhook requests.
Operations
Each trigger can fire on one or more operations:
| Operation | Fires on | Payload |
|---|---|---|
| `create` | POST /api/{table} | new data |
| `update` | PATCH /api/{table}/:id | new + old data |
| `delete` | DELETE /api/{table}/:id | old data |
NDJSON Streaming
For large datasets, stream results as newline-delimited JSON:
GET /api/events?stream=true
Each row is sent as a separate JSON line with chunked transfer encoding — no buffering:
{"id":"uuid-1","type":"click","timestamp":"2025-01-01T00:00:00Z"}
{"id":"uuid-2","type":"purchase","timestamp":"2025-01-01T00:01:00Z"}
...
Ideal for data exports, ETL pipelines, and processing large tables without memory pressure.
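A client can consume the stream with constant memory by processing one line at a time. This sketch substitutes an in-memory cursor for a real chunked HTTP body:

```rust
use std::io::{BufRead, BufReader, Cursor};

/// Count rows in an NDJSON stream, handling one line at a time
/// so memory stays flat regardless of response size.
fn count_ndjson_rows(body: &str) -> usize {
    BufReader::new(Cursor::new(body))
        .lines()
        .filter_map(Result::ok)
        .filter(|l| !l.trim().is_empty())
        .count()
}

fn main() {
    // Stand-in for a chunked `GET /api/events?stream=true` response body.
    let body = "{\"id\":\"uuid-1\",\"type\":\"click\"}\n{\"id\":\"uuid-2\",\"type\":\"purchase\"}\n";
    assert_eq!(count_ndjson_rows(body), 2);
}
```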
Performance & Observability
The gateway is built for production throughput with prepared statement caching, query caching, rate limiting, and full observability.
Query Cache
LRU cache with configurable TTL and automatic table-level invalidation.
[gateway.cache]
enabled = true
max_entries = 1000
ttl_secs = 60
Behavior
- Cache key: Normalized query string + auth context hash
- Invalidation: Automatic on any mutation (`INSERT`, `UPDATE`, `DELETE`) to the same table
- Eviction: LRU when capacity is reached
- TTL: Configurable per-entry time-to-live
- Correctness: Concurrent reads during a mutation never serve stale data — invalidation is atomic
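The "normalized query string + auth context hash" key can be sketched with the standard library hasher. This is illustrative only; the gateway's real key derivation is not specified here:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a cache key from the normalized query plus the auth context,
/// so two tenants (or roles) never share a cached result.
fn cache_key(normalized_query: &str, tenant_id: &str, role: &str) -> u64 {
    let mut h = DefaultHasher::new();
    normalized_query.hash(&mut h);
    tenant_id.hash(&mut h);
    role.hash(&mut h);
    h.finish()
}

fn main() {
    let a = cache_key("get orders limit 10", "tenant-1", "viewer");
    let b = cache_key("get orders limit 10", "tenant-2", "viewer");
    // Same query, different tenant — different cache entries.
    assert_ne!(a, b);
}
```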
Cache Headers
Responses include cache status headers:
X-Cache: HIT # Served from cache
X-Cache: MISS # Fresh query executed
Prepared Statement Caching
The gateway caches prepared statements per query shape, eliminating repeated parse overhead. This is separate from the query cache — it caches the PostgreSQL server-side prepared statement, not the result.
Rate Limiting
Built-in token-bucket rate limiter keyed by client IP:
[gateway]
rate_limit_rate = 100.0 # Tokens refilled per second
rate_limit_burst = 200 # Maximum burst capacity
When exceeded, the gateway returns:
HTTP/1.1 429 Too Many Requests
Retry-After: 1
{
"error": {
"code": "RATE_LIMITED",
"message": "Too many requests",
"status": 429
}
}
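The token-bucket semantics of `rate_limit_rate` and `rate_limit_burst` can be sketched in a few lines. Time handling is simplified to an explicit elapsed-seconds argument:

```rust
/// Minimal token bucket: `rate` tokens refilled per second, capped at `burst`.
struct TokenBucket {
    tokens: f64,
    rate: f64,
    burst: f64,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        // A fresh bucket starts full, so clients get their burst allowance.
        Self { tokens: burst, rate, burst }
    }

    /// Refill for `elapsed_secs` since the last call, then try to spend
    /// one token. Returning false maps to HTTP 429 + Retry-After.
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + self.rate * elapsed_secs).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // rate_limit_rate = 100.0, with a tiny burst of 2 for the demo.
    let mut bucket = TokenBucket::new(100.0, 2.0);
    assert!(bucket.allow(0.0));  // burst token 1
    assert!(bucket.allow(0.0));  // burst token 2
    assert!(!bucket.allow(0.0)); // bucket empty → 429
    assert!(bucket.allow(0.05)); // 100 × 0.05 = 5 tokens refilled (capped at burst)
}
```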
Request Timeouts
All queries have a configurable timeout (default: 30 seconds, set via statement_timeout_ms in qail.toml). Long-running queries are cancelled and return:
{
"error": {
"code": "TIMEOUT",
"message": "Request timed out",
"status": 408
}
}
Prometheus Metrics
GET /metrics
Exposes request counts, latencies (p50/p95/p99), error rates, cache hit ratios, and connection pool stats in Prometheus format.
Request Tracing
Every response includes tracing headers:
X-Request-Id: req-uuid-... # Unique request identifier
X-Response-Time: 12ms # Duration
Health Check
GET /health
Returns 200 OK when the gateway and database connection are healthy.
Schema Introspection
GET /api/_schema # Full schema: tables, columns, types, FKs
GET /api/_openapi # Auto-generated OpenAPI 3.0 spec
The OpenAPI spec is generated from the live database schema — always up to date.
EXPLAIN Cost Guard
The gateway runs EXPLAIN on read queries before execution to reject expensive queries early.
Configure thresholds in qail.toml:
[gateway]
explain_max_cost = 100000.0 # Reject if estimated cost exceeds this
explain_max_rows = 1000000 # Reject if estimated rows exceed this
Per-role overrides allow analytics roles to run heavier queries:
[gateway.role_overrides.reporting]
explain_max_cost = 500000.0
Rejected queries return a structured QUERY_TOO_EXPENSIVE error:
{
"code": "QUERY_TOO_EXPENSIVE",
"message": "Query rejected: estimated cost 150000 exceeds limit 100000...",
"details": "{\"estimated_cost\":150000,\"cost_limit\":100000,\"estimated_rows\":2000000,\"row_limit\":1000000,\"suggestions\":[\"Add WHERE clauses\",\"Reduce ?expand depth\"]}"
}
The details field is machine-readable JSON — client SDKs can parse it to display actionable suggestions.
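A client SDK might pull the numeric limits out of `details` like this; the string-scanning here is a stand-in for a real JSON parser:

```rust
/// Extract a numeric field from the machine-readable `details` JSON.
/// (Sketch only — production SDKs should use a proper JSON parser.)
fn extract_u64(details: &str, field: &str) -> Option<u64> {
    let needle = format!("\"{}\":", field);
    let start = details.find(&needle)? + needle.len();
    let digits: String = details[start..]
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    digits.parse().ok()
}

fn main() {
    let details = r#"{"estimated_cost":150000,"cost_limit":100000}"#;
    assert_eq!(extract_u64(details, "estimated_cost"), Some(150000));
    assert_eq!(extract_u64(details, "cost_limit"), Some(100000));
    assert_eq!(extract_u64(details, "missing"), None);
}
```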
EXPLAIN Endpoint
Inspect query plans manually:
GET /api/orders/_explain?status=paid&expand=users
Returns the PostgreSQL EXPLAIN ANALYZE output for the generated query.
Error Responses
All errors follow a consistent JSON structure:
{
"error": {
"code": "QUERY_ERROR",
"message": "column \"foo\" does not exist",
"status": 400,
"request_id": "req-uuid-..."
}
}
Error Codes
| Code | HTTP | Description |
|---|---|---|
| `RATE_LIMITED` | 429 | Rate limit exceeded |
| `CONCURRENCY_LIMIT` | 429 | Tenant concurrency limit reached |
| `QUERY_TOO_EXPENSIVE` | 422 | EXPLAIN cost/row estimate exceeded threshold |
| `TIMEOUT` | 408 | Query exceeded statement timeout |
| `PARSE_ERROR` | 400 | Malformed query parameters |
| `QUERY_ERROR` | 400 | Database query failed |
| `UNAUTHORIZED` | 401 | Authentication failed |
| `FORBIDDEN` | 403 | Policy denied access |
| `NOT_FOUND` | 404 | Resource not found |
| `INTERNAL` | 500 | Unexpected server error |
Benchmark: Gateway vs GraphQL
The gateway’s ?expand= does server-side JOINs — same approach as the Qail AST driver but over HTTP:
| Approach | Avg Latency | DB Queries | vs Qail |
|---|---|---|---|
| Qail AST (binary) | 449µs | 1 | baseline |
| Gateway (`?expand=`) | 635µs | 1 | 1.4× |
| GraphQL + DataLoader | 1.52ms | 3 | 3.4× |
| GraphQL naive (N+1) | 18.2ms | 151 | 40× |
The 1.4× gap is pure JSON serialization overhead. On the wire, the gateway executes the exact same single-query JOIN as the Qail driver.
Unlike GraphQL, the gateway makes N+1 structurally impossible.
`?expand=` always resolves to a server-side JOIN — there’s no resolver pattern to misconfigure.
Benchmark: Qail vs GraphQL vs REST
A scientific comparison of seven architectural approaches to the same complex database query, proving why AST-native matters — and showing there’s still room to go faster.
The Query
Get all enabled ferry connections with their origin harbor name, destination harbor name, and operator brand name — 3× LEFT JOIN, filtered by `is_enabled`, sorted by name, limited to 50 rows.
This is a realistic query that every SaaS backend encounters: fetching a list of resources with their related entities.
Results
100 iterations, --release mode, PostgreSQL buffer cache pre-warmed, randomized execution order to eliminate ordering bias. Sorted by median (most stable metric).
| # | Approach | Median | p99 | DB Queries | vs Qail |
|---|---|---|---|---|---|
| 1 | Qail AST (prepared) | 224µs | 633µs | 1 | baseline |
| 2 | Qail AST (fast) | 380µs | 494µs | 1 | 1.7× |
| 3 | Qail AST (uncached) | 465µs | 2.1ms | 1 | 2.1× |
| 4 | REST + ?expand= | 539µs | 1.7ms | 1 | 2.4× |
| 5 | GraphQL + DataLoader | 1.09ms | 2.5ms | ~3 | 4.9× |
| 6 | REST naive | 17.7ms | 54.6ms | ~151 | 78.9× |
| 7 | GraphQL naive | 17.8ms | 161.6ms | ~151 | 79.6× |
The Three Qail Tiers
Qail offers three fetch modes, each trading overhead for features:
| Mode | What It Does | Median | Why Use It |
|---|---|---|---|
| Prepared | Cached statement (skips Parse) — Bind+Execute only | 224µs | Production default. Postgres skips query planning on repeat calls. |
| Fast | Skips ColumnInfo metadata, no Arc clones per row | 380µs | When you know your column layout and want zero allocation overhead. |
| Uncached | Full Parse+Bind+Execute every call | 465µs | Dynamic queries that change per request. Still faster than REST. |
What This Proves
- Prepared statements are the headline — Qail prepared is 2.4× faster than the best possible REST because Postgres skips query planning entirely
- The fast path eliminates Rust overhead — Skipping ColumnInfo + Arc saves ~85µs per call (18% improvement over uncached)
- DataLoader helps but still 4.9× slower — 3 round trips vs 1, and each trip has its own Parse+Bind+Execute cycle
- N+1 is catastrophic — 79× slower with 151 database round trips. The p99 tells the real story: 161ms tail latency on GraphQL naive
- REST+expand ≈ Qail uncached + JSON overhead — when both do Parse+Bind+Execute with the same query, the gap is the JSON serialization cost
Still Tuning
These results are from early Qail development. The `fetch_all_fast` path was added during this benchmark session — a 10-minute optimization that saved 18%. There is significant headroom remaining:
- SIMD row decoding — parsing fixed-width columns with SIMD instructions
- Zero-copy text fields — borrowing from the receive buffer instead of allocating
- Batch prepared execution — pipelining multiple Bind+Execute in a single round trip
- Arena allocation — per-query bump allocator instead of per-row heap allocation
Qail’s architecture (AST → binary wire protocol) is designed so these optimizations compose. Each one is a multiplier, not a replacement.
Security Comparison
| | Qail | GraphQL | REST |
|---|---|---|---|
| SQL Injection | Impossible (binary AST) | String fields vulnerable | String queries vulnerable |
| Tenant Isolation | RLS at protocol level | Per-resolver ACL | Per-middleware ACL |
| Query Abuse | AST validates at compile time | Depth/complexity attacks | IDOR per endpoint |
Approach 1: Qail AST — Prepared (Production Default)
The fastest path. The prepared statement is cached after the first call — subsequent executions skip PostgreSQL’s Parse phase entirely, sending only Bind+Execute.
1 query. 1 round trip. No Parse. No JSON.
#![allow(unused)]
fn main() {
use qail_core::prelude::*;
use qail_core::ast::{Operator, JoinKind, SortOrder};
use qail_pg::PgDriver;
async fn run_qail_prepared(driver: &mut PgDriver) -> Result<Vec<QailRow>, Box<dyn std::error::Error>> {
let cmd = Qail::get("odyssey_connections")
.columns(vec![
"odyssey_connections.id",
"odyssey_connections.name",
"odyssey_connections.description",
"odyssey_connections.is_enabled",
"odyssey_connections.created_at",
])
.join(JoinKind::Left, "harbors AS origin",
"odyssey_connections.origin_harbor_id", "origin.id")
.join(JoinKind::Left, "harbors AS dest",
"odyssey_connections.destination_harbor_id", "dest.id")
.join(JoinKind::Left, "operators",
"odyssey_connections.operator_id", "operators.id")
.column("origin.name AS origin_harbor")
.column("dest.name AS dest_harbor")
.column("operators.brand_name AS operator_name")
.filter("odyssey_connections.is_enabled", Operator::Eq, Value::Bool(true))
.order_by("odyssey_connections.name", SortOrder::Asc)
.limit(50);
// fetch_all_cached: first call does Parse+Bind+Execute,
// subsequent calls skip Parse (cached prepared statement)
let rows = driver.fetch_all_cached(&cmd).await?;
Ok(rows)
}
}
Generated SQL:
SELECT odyssey_connections.id, odyssey_connections.name,
odyssey_connections.description, odyssey_connections.is_enabled,
odyssey_connections.created_at,
origin.name AS origin_harbor,
dest.name AS dest_harbor,
operators.brand_name AS operator_name
FROM odyssey_connections
LEFT JOIN harbors AS origin ON odyssey_connections.origin_harbor_id = origin.id
LEFT JOIN harbors AS dest ON odyssey_connections.destination_harbor_id = dest.id
LEFT JOIN operators ON odyssey_connections.operator_id = operators.id
WHERE odyssey_connections.is_enabled = true
ORDER BY odyssey_connections.name ASC
LIMIT 50
Key insight: This SQL is never generated as a string. The AST encodes directly to PostgreSQL’s binary wire protocol. There is zero injection surface because there are no strings to inject into.
Approach 2: Qail AST — Fast (Zero-Overhead Receive)
Same query as prepared, but uses fetch_all_fast which skips building the ColumnInfo metadata HashMap and avoids Arc::clone() per row. This eliminates ~50 atomic reference count operations per call.
#![allow(unused)]
fn main() {
// Same query as Approach 1, but:
let rows = driver.fetch_all_fast(&cmd).await?;
// Rows come back without ColumnInfo — access by index only, not by name.
// Saves: HashMap allocation + Arc::clone() × num_rows
}
When to use: High-throughput codepaths where you know your column layout at compile time and don’t need name-based column access. The 18% speedup over uncached comes entirely from eliminating Rust-side allocation and reference counting.
Approach 3: Qail AST — Uncached (Full Round Trip)
Full Parse+Bind+Execute on every call. This is the baseline for comparing against REST+expand, since both pay the same PostgreSQL protocol overhead.
#![allow(unused)]
fn main() {
let rows = driver.fetch_all_uncached(&cmd).await?;
// Parse+Bind+Execute every time — Postgres re-plans the query each call
}
465µs vs 224µs: The 2.1× gap between uncached and prepared is pure Postgres query planning overhead. Prepared statements are free performance.
Approach 4: GraphQL Naive — N+1 Resolvers
The classic GraphQL anti-pattern. Each resolver fires a separate query for each related entity:
- Root query: Fetch all connections (1 query → 50 rows)
- Per-row: For each connection, resolve origin harbor (50 queries)
- Per-row: For each connection, resolve destination harbor (50 queries)
- Per-row: For each connection, resolve operator (50 queries)
Total: ~151 queries per request.
#![allow(unused)]
fn main() {
async fn run_graphql_naive(driver: &mut PgDriver)
-> Result<(usize, Duration, usize), Box<dyn std::error::Error>>
{
// Step 1: Root query
let root_cmd = Qail::get("odyssey_connections")
.filter("is_enabled", Operator::Eq, Value::Bool(true))
.order_by("name", SortOrder::Asc)
.limit(50);
let connections = driver.fetch_all_uncached(&root_cmd).await?;
// Step 2: N+1 — resolve each relation individually
for conn in &connections {
let origin_id = conn.text(2);
let dest_id = conn.text(3);
let op_id = conn.get_string(11);
// GET /harbors/:origin_id
driver.fetch_all_uncached(
&Qail::get("harbors")
.filter("id", Operator::Eq, Value::String(origin_id))
.limit(1)
).await?;
// GET /harbors/:dest_id
driver.fetch_all_uncached(
&Qail::get("harbors")
.filter("id", Operator::Eq, Value::String(dest_id))
.limit(1)
).await?;
// GET /operators/:op_id
if let Some(oid) = op_id {
driver.fetch_all_uncached(
&Qail::get("operators")
.filter("id", Operator::Eq, Value::String(oid))
.limit(1)
).await?;
}
}
// ...
}
}
Why this is the default: Most GraphQL tutorials teach exactly this pattern. Junior developers follow it because it’s “clean” — each resolver is independent. The N+1 problem is invisible until production load hits. The p99 of 161ms means 1% of your requests take over 160ms — for a single database query.
Approach 5: GraphQL + DataLoader — Batched Queries
The standard optimization. DataLoader collects all IDs from the root query, then fires batched WHERE id IN (...) queries:
- Root query: Fetch all connections (1 query)
- Batch: All unique harbor IDs → `WHERE id IN ($1, $2, ...)` (1 query)
- Batch: All unique operator IDs → `WHERE id IN ($1, $2, ...)` (1 query)
Total: 3 queries per request.
#![allow(unused)]
fn main() {
async fn run_graphql_dataloader(driver: &mut PgDriver)
-> Result<(usize, Duration, usize), Box<dyn std::error::Error>>
{
let connections = driver.fetch_all_uncached(&root_cmd).await?;
// Collect unique IDs (DataLoader batching)
let mut harbor_ids = HashSet::new();
let mut operator_ids = HashSet::new();
for conn in &connections {
harbor_ids.insert(conn.text(2)); // origin
harbor_ids.insert(conn.text(3)); // destination
if let Some(oid) = conn.get_string(11) {
operator_ids.insert(oid);
}
}
// Batch: SELECT * FROM harbors WHERE id IN (...)
let harbor_ids_list: Vec<String> = harbor_ids.into_iter().collect();
driver.fetch_all_uncached(
&Qail::get("harbors").filter("id", Operator::In,
Value::Array(harbor_ids_list.iter()
.map(|s| Value::String(s.clone())).collect()))
).await?;
// Batch: SELECT * FROM operators WHERE id IN (...)
let op_ids_list: Vec<String> = operator_ids.into_iter().collect();
if !op_ids_list.is_empty() {
driver.fetch_all_uncached(
&Qail::get("operators").filter("id", Operator::In,
Value::Array(op_ids_list.iter()
.map(|s| Value::String(s.clone())).collect()))
).await?;
}
// ...
}
}
Still 3 round trips. In a local benchmark, network latency is ~0ms. In production (App → RDS across AZs), each round trip adds 1-2ms. DataLoader’s 3-trip approach adds 3-6ms of pure network latency that Qail’s single-trip approach avoids entirely.
Approach 6: REST Naive — Sequential Calls + JSON
Simulates a frontend client that:
- `GET /api/connections` → JSON → parse
- For each connection: `GET /api/harbors/:id` → JSON → parse (×50 origins)
- For each connection: `GET /api/harbors/:id` → JSON → parse (×50 destinations)
- For each connection: `GET /api/operators/:id` → JSON → parse (×50 operators)
Total: ~151 queries + JSON serialization/deserialization overhead.
#![allow(unused)]
fn main() {
async fn run_rest_naive(driver: &mut PgDriver)
-> Result<(usize, Duration, usize), Box<dyn std::error::Error>>
{
let connections = driver.fetch_all_uncached(&root_cmd).await?;
// Serialize root response to JSON
let mut conn_data: Vec<String> = Vec::with_capacity(connections.len());
for conn in &connections {
conn_data.push(format!(
r#"{{"id":"{}","odyssey_id":"{}","name":"{}"}}"#,
conn.text(0), conn.text(1), conn.text(4)
));
}
// N+1: resolve each relation + JSON each response
for conn in &connections {
let origin_id = conn.text(2);
let dest_id = conn.text(3);
let origin_rows = driver.fetch_all_uncached(
&Qail::get("harbors")
.filter("id", Operator::Eq, Value::String(origin_id))
.limit(1)
).await?;
// Simulate JSON deserialization
if let Some(h) = origin_rows.first() {
let _ = format!(r#"{{"name":"{}"}}"#, h.text(1));
}
// ... repeat for dest + operator
}
// ...
}
}
Approach 7: REST + ?expand= — Server-Side JOIN
The optimized REST pattern. The server performs the JOIN (same query as Qail AST) and returns denormalized JSON:
GET /api/connections?expand=harbors,operators
1 query + JSON serialization overhead.
#![allow(unused)]
fn main() {
async fn run_rest_expand(driver: &mut PgDriver)
-> Result<(usize, Duration), Box<dyn std::error::Error>>
{
// Same JOIN query as Qail (server-side)
let cmd = build_join_query();
let rows = driver.fetch_all_uncached(&cmd).await?;
// Simulate full JSON response serialization
let mut json_out = String::with_capacity(4096);
json_out.push('[');
for (i, r) in rows.iter().enumerate() {
if i > 0 { json_out.push(','); }
json_out.push_str(&format!(
r#"{{"id":"{}","name":"{}","origin":"{}","dest":"{}","operator":"{}"}}"#,
r.text(0), r.text(1), r.text(5), r.text(6), r.text(7),
));
}
json_out.push(']');
std::hint::black_box(&json_out); // prevent compiler optimization
// ...
}
}
The 2.4× gap vs Qail prepared is JSON + query planning. REST+expand pays for both JSON serialization AND Parse+Bind+Execute (no prepared statement). Against Qail uncached, the gap narrows to just 1.15× — proving JSON overhead is minimal; the real win is prepared statements.
Methodology
Randomized Execution Order
Approaches run in Fisher-Yates shuffled order to eliminate Postgres buffer cache ordering bias. Previous benchmarks showed the first approach could appear slower due to cold cache pages.
Cache Equalization
A 10-iteration global warmup loads all table data pages into PostgreSQL’s buffer cache before any timed approach runs:
#![allow(unused)]
fn main() {
println!("⏳ Global warmup (loading data pages into Postgres buffer cache)...");
for _ in 0..10 {
let _ = driver.fetch_all_uncached(&warmup_join).await?;
let _ = driver.fetch_all_uncached(&warmup_harbors).await?;
let _ = driver.fetch_all_uncached(&warmup_operators).await?;
}
println!("✓ Buffer cache warm — all approaches start equal");
}
Statistical Rigor
Each approach reports median (most stable, resistant to outliers) and p99 (tail latency, shows worst-case behavior). The results table is sorted by median, not average, to avoid outlier distortion.
Fair Protocol
All seven approaches use Qail’s PgDriver internally. This isolates the architectural difference (1 query vs N+1 vs batched, prepared vs uncached) from protocol differences. We’re measuring the cost of the pattern, not the driver.
RLS Bypass
Row-Level Security is bypassed (SET app.is_super_admin = 'true') so timing measures pure query performance. In production, Qail enforces RLS at the protocol level — GraphQL and REST must implement it at the application layer.
Network Latency
This is a local benchmark (app and database on the same machine). Network latency = 0ms. In a real cloud deployment (e.g., AWS App → RDS), each database round trip adds 1-2ms of network latency:
| Approach | Local Median | Estimated Cloud Latency |
|---|---|---|
| Qail prepared (1 trip) | 224µs | ~1.5ms |
| Qail uncached (1 trip) | 465µs | ~2ms |
| DataLoader (3 trips) | 1.09ms | ~5-7ms |
| Naive N+1 (151 trips) | 17.8ms | ~150-300ms |
In production, the gap between Qail and N+1 widens from 79× to potentially 100-200×.
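The cloud estimates above are simple back-of-envelope arithmetic — local median plus one round-trip cost per database trip (the ~1.5ms RTT is an assumption from the text, not a measurement):

```rust
/// Estimated cloud latency: local compute time plus per-round-trip network cost.
fn estimated_cloud_ms(local_ms: f64, round_trips: u32, rtt_ms: f64) -> f64 {
    local_ms + round_trips as f64 * rtt_ms
}

fn main() {
    // Qail prepared: 1 trip at 0.224ms local + ~1.5ms RTT ≈ 1.7ms
    println!("qail prepared ≈ {:.1}ms", estimated_cloud_ms(0.224, 1, 1.5));
    // Naive N+1: 151 trips — the network cost dominates completely
    println!("naive N+1    ≈ {:.1}ms", estimated_cloud_ms(17.8, 151, 1.5));
}
```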
Run It Yourself
DATABASE_URL=postgresql://user:pass@localhost:5432/yourdb \
cargo run --example battle_comparison --features chrono,uuid --release
The full benchmark source is at pg/examples/battle_comparison.rs.
AST Builder API
The recommended way to use QAIL. Build queries as typed Rust structs.
Query Types
| Method | SQL Equivalent |
|---|---|
| `Qail::get()` | SELECT |
| `Qail::add()` | INSERT |
| `Qail::set()` | UPDATE |
| `Qail::del()` | DELETE |
| `Qail::put()` | UPSERT (INSERT ON CONFLICT) |
| `Qail::make()` | CREATE TABLE |
| `Qail::truncate()` | TRUNCATE |
| `Qail::explain()` | EXPLAIN |
| `Qail::explain_analyze()` | EXPLAIN ANALYZE |
| `Qail::lock()` | LOCK TABLE |
| `Qail::listen()` | LISTEN (Pub/Sub) |
| `Qail::notify()` | NOTIFY (Pub/Sub) |
| `Qail::unlisten()` | UNLISTEN (Pub/Sub) |
| `Qail::export()` | COPY TO |
SELECT Queries
#![allow(unused)]
fn main() {
use qail_core::{Qail, Operator, SortOrder};
let cmd = Qail::get("users")
.columns(["id", "email", "name"])
.filter("active", Operator::Eq, true)
.order_by("created_at", SortOrder::Desc)
.limit(10)
.offset(20);
}
INSERT Queries
#![allow(unused)]
fn main() {
let cmd = Qail::add("users")
.columns(["email", "name"])
.values(["alice@example.com", "Alice"])
.returning(["id", "created_at"]);
}
UPDATE Queries
#![allow(unused)]
fn main() {
let cmd = Qail::set("users")
.set_value("status", "active")
.set_value("verified_at", "now()")
.where_eq("id", 42);
}
DELETE Queries
#![allow(unused)]
fn main() {
let cmd = Qail::del("users")
.where_eq("id", 42);
}
Pub/Sub (LISTEN/NOTIFY)
#![allow(unused)]
fn main() {
// Subscribe to a channel
let listen = Qail::listen("orders");
// → LISTEN orders
// Send a notification
let notify = Qail::notify("orders", "new_order:123");
// → NOTIFY orders, 'new_order:123'
// Unsubscribe
let unlisten = Qail::unlisten("orders");
// → UNLISTEN orders
}
Builder Methods
Column Selection
| Method | Description |
|---|---|
| `.columns([...])` | Select specific columns |
| `.column("col")` | Add a single column |
| `.select_all()` | SELECT * |
| `.column_expr(expr)` | Add an expression as a column |
| `.returning([...])` | RETURNING clause |
| `.returning_all()` | RETURNING * |
Filtering
| Method | Description |
|---|---|
| `.filter(col, op, val)` | WHERE condition |
| `.or_filter(col, op, val)` | OR condition |
| `.where_eq(col, val)` | WHERE col = val |
| `.eq(col, val)` | Shorthand for `= val` |
| `.ne(col, val)` | Shorthand for `!= val` |
| `.gt(col, val)` | `> val` |
| `.gte(col, val)` | `>= val` |
| `.lt(col, val)` | `< val` |
| `.lte(col, val)` | `<= val` |
| `.is_null(col)` | IS NULL |
| `.is_not_null(col)` | IS NOT NULL |
| `.like(col, pattern)` | LIKE pattern |
| `.ilike(col, pattern)` | ILIKE pattern (case-insensitive) |
| `.in_vals(col, [...])` | IN (values) |
| `.filter_cond(condition)` | Add a raw Condition struct |
Sorting & Pagination
| Method | Description |
|---|---|
| `.order_by(col, dir)` | ORDER BY |
| `.order_asc(col)` | ORDER BY col ASC |
| `.order_desc(col)` | ORDER BY col DESC |
| `.limit(n)` | LIMIT n |
| `.offset(n)` | OFFSET n |
| `.fetch_first(n)` | FETCH FIRST n ROWS ONLY |
| `.fetch_with_ties(n)` | FETCH FIRST n ROWS WITH TIES |
Joins
| Method | Description |
|---|---|
| `.left_join(table, left, right)` | LEFT JOIN |
| `.inner_join(table, left, right)` | INNER JOIN |
| `.left_join_as(table, alias, left, right)` | LEFT JOIN with alias |
| `.inner_join_as(table, alias, left, right)` | INNER JOIN with alias |
| `.join(kind, table, left, right)` | Generic join |
| `.join_on(related)` | Auto-inferred FK join |
| `.join_on_optional(related)` | FK join (no-op if no relation) |
Grouping & Aggregation
| Method | Description |
|---|---|
| `.group_by([...])` | GROUP BY columns |
| `.having_cond(condition)` | HAVING clause |
| `.distinct_on([...])` | DISTINCT ON columns |
| `.distinct_on_all()` | DISTINCT ON all columns |
Mutations
| Method | Description |
|---|---|
| `.values([...])` | INSERT values |
| `.set_value(col, val)` | SET col = val (UPDATE) |
| `.default_values()` | INSERT with DEFAULT VALUES |
| `.on_conflict(...)` | ON CONFLICT handling |
Advanced
| Method | Description |
|---|---|
| `.table_alias(alias)` | FROM table AS alias |
| `.for_update()` | SELECT … FOR UPDATE |
| `.for_share()` | SELECT … FOR SHARE |
| `.for_no_key_update()` | FOR NO KEY UPDATE |
| `.for_key_share()` | FOR KEY SHARE |
| `.tablesample_bernoulli(pct)` | TABLESAMPLE BERNOULLI |
| `.tablesample_system(pct)` | TABLESAMPLE SYSTEM |
| `.repeatable(seed)` | REPEATABLE (seed) |
| `.only()` | FROM ONLY table |
| `.overriding_system_value()` | OVERRIDING SYSTEM VALUE |
| `.overriding_user_value()` | OVERRIDING USER VALUE |
| `.update_from([...])` | UPDATE … FROM tables |
| `.delete_using([...])` | DELETE … USING tables |
| `.with_ctes(ctes)` | WITH (Common Table Expressions) |
| `.with_rls(&ctx)` | Inject RLS context |
Materialized Views
#![allow(unused)]
fn main() {
// Create
let view = Qail::create_materialized_view(
"monthly_stats",
Qail::get("orders")
.columns(["date_trunc('month', created_at) AS month", "sum(total)"])
.group_by(["month"])
);
// Refresh
let refresh = Qail::refresh_materialized_view("monthly_stats");
// Drop
let drop = Qail::drop_materialized_view("monthly_stats");
}
Expression Types
QAIL v0.14.2 provides 100% PostgreSQL expression coverage. All expression types are native AST nodes that encode directly to wire protocol bytes.
Coverage
| Category | Coverage |
|---|---|
| Expressions | 100% |
| DML (SELECT, INSERT, UPDATE, DELETE) | 100% |
| DDL (CREATE, DROP, ALTER) | 100% |
Basic Expressions
Column Reference
#![allow(unused)]
fn main() {
use qail_core::ast::Expr;
// Simple column
let expr = Expr::Named("email".to_string());
// → email
// With alias
let expr = Expr::Aliased {
name: "users.email".to_string(),
alias: "user_email".to_string(),
};
// → users.email AS user_email
}
Literals
#![allow(unused)]
fn main() {
use qail_core::ast::{Expr, Value};
let expr = Expr::Literal(Value::Int(42));
// → 42
let expr = Expr::Literal(Value::String("hello".into()));
// → 'hello'
}
Aggregate Functions
#![allow(unused)]
fn main() {
use qail_core::ast::{Expr, AggregateFunc};
let expr = Expr::Aggregate {
func: AggregateFunc::Count,
col: "*".into(),
distinct: false,
filter: None,
alias: Some("total".into()),
};
// → COUNT(*) AS total
}
With FILTER Clause (v0.14.2+)
#![allow(unused)]
fn main() {
let expr = Expr::Aggregate {
func: AggregateFunc::Sum,
col: "amount".into(),
distinct: false,
filter: Some(vec![condition]), // WHERE condition
alias: Some("filtered_sum".into()),
};
// → SUM(amount) FILTER (WHERE ...) AS filtered_sum
}
Window Functions
#![allow(unused)]
fn main() {
use qail_core::ast::{Expr, WindowFrame, FrameBound};
let expr = Expr::Window {
func: "SUM".into(),
params: vec![Expr::Named("amount".into())],
partition: vec!["department".into()],
order: vec![order_spec],
frame: Some(WindowFrame::Rows {
start: FrameBound::UnboundedPreceding,
end: FrameBound::CurrentRow,
}),
alias: Some("running_total".into()),
};
// → SUM(amount) OVER (
// PARTITION BY department
// ORDER BY date
// ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
// ) AS running_total
}
CASE Expressions
#![allow(unused)]
fn main() {
let expr = Expr::Case {
when_clauses: vec![
(condition1, Value::String("A".into())),
(condition2, Value::String("B".into())),
],
else_value: Some(Value::String("C".into())),
alias: Some("grade".into()),
};
// → CASE WHEN ... THEN 'A' WHEN ... THEN 'B' ELSE 'C' END AS grade
}
New in v0.14.2
Array Constructor
#![allow(unused)]
fn main() {
let expr = Expr::ArrayConstructor {
elements: vec![
Expr::Named("col1".into()),
Expr::Named("col2".into()),
],
alias: Some("arr".into()),
};
// → ARRAY[col1, col2] AS arr
}
Row Constructor
#![allow(unused)]
fn main() {
let expr = Expr::RowConstructor {
elements: vec![
Expr::Named("id".into()),
Expr::Named("name".into()),
],
alias: Some("person".into()),
};
// → ROW(id, name) AS person
}
Subscript (Array Access)
#![allow(unused)]
fn main() {
let expr = Expr::Subscript {
expr: Box::new(Expr::Named("tags".into())),
index: Box::new(Expr::Literal(Value::Int(1))),
alias: Some("first_tag".into()),
};
// → tags[1] AS first_tag
}
Collation
#![allow(unused)]
fn main() {
let expr = Expr::Collate {
expr: Box::new(Expr::Named("name".into())),
collation: "C".into(),
alias: None,
};
// → name COLLATE "C"
}
Field Access (Composite Types)
#![allow(unused)]
fn main() {
let expr = Expr::FieldAccess {
expr: Box::new(Expr::Named("address".into())),
field: "city".into(),
alias: Some("city".into()),
};
// → (address).city AS city
}
Type Casting
#![allow(unused)]
fn main() {
let expr = Expr::Cast {
expr: Box::new(Expr::Named("id".into())),
target_type: "TEXT".into(),
alias: None,
};
// → id::TEXT
}
JSON Access
#![allow(unused)]
fn main() {
let expr = Expr::JsonAccess {
column: "data".into(),
path_segments: vec![
("user".into(), false), // ->
("name".into(), true), // ->>
],
alias: Some("name".into()),
};
// → data->'user'->>'name' AS name
}
GROUP BY Modes
| Mode | SQL | Status |
|---|---|---|
GroupByMode::Simple | GROUP BY a, b | ✓ |
GroupByMode::Rollup | GROUP BY ROLLUP(a, b) | ✓ |
GroupByMode::Cube | GROUP BY CUBE(a, b) | ✓ |
GroupByMode::GroupingSets | GROUP BY GROUPING SETS ((a, b), (c)) | ✓ v0.14.2 |
#![allow(unused)]
fn main() {
use qail_core::ast::GroupByMode;
// GROUPING SETS
let mode = GroupByMode::GroupingSets(vec![
vec!["year".into(), "month".into()],
vec!["year".into()],
vec![], // grand total
]);
// → GROUP BY GROUPING SETS ((year, month), (year), ())
}
DDL Actions
| Action | SQL | Status |
|---|---|---|
Action::Make | CREATE TABLE | ✓ |
Action::Drop | DROP TABLE | ✓ |
Action::Index | CREATE INDEX | ✓ |
Action::CreateView | CREATE VIEW AS | ✓ v0.14.2 |
Action::DropView | DROP VIEW | ✓ v0.14.2 |
Action::CreateMaterializedView | CREATE MATERIALIZED VIEW | ✓ |
Action::RefreshMaterializedView | REFRESH MATERIALIZED VIEW | ✓ |
Action::DropMaterializedView | DROP MATERIALIZED VIEW | ✓ |
Action::Truncate | TRUNCATE | ✓ |
Action::Explain | EXPLAIN | ✓ |
Action::ExplainAnalyze | EXPLAIN ANALYZE | ✓ |
Action::Lock | LOCK TABLE | ✓ |
Action::Listen | LISTEN channel | ✓ |
Action::Notify | NOTIFY channel, 'payload' | ✓ |
Action::Unlisten | UNLISTEN channel | ✓ |
#![allow(unused)]
fn main() {
use qail_core::ast::{Qail, Action};
// Create view
let mut cmd = Qail::get("orders")
.columns(["customer_id", "SUM(amount) AS total"])
.group_by(["customer_id"]);
cmd.action = Action::CreateView;
cmd.table = "customer_totals".into();
// → CREATE VIEW customer_totals AS SELECT ...
}
Expression Builders
QAIL provides ergonomic builder functions for constructing AST expressions without verbose struct creation.
Import
#![allow(unused)]
fn main() {
use qail_core::ast::builders::*;
}
Column References
#![allow(unused)]
fn main() {
// Named column
col("phone_number")
// Star (*)
star()
}
Aggregate Functions
#![allow(unused)]
fn main() {
// COUNT(*)
count()
// COUNT(DISTINCT column)
count_distinct("phone_number")
// COUNT(*) FILTER (WHERE ...)
count_filter(vec![
eq("direction", "outbound"),
gt("created_at", now_minus("24 hours")),
]).alias("messages_sent_24h")
// Other aggregates
sum("amount")
avg("score")
min("price")
max("quantity")
}
All aggregates support:
- .distinct() — Add DISTINCT modifier
- .filter(conditions) — Add FILTER clause
- .alias("name") — Add AS alias
Time Functions
#![allow(unused)]
fn main() {
// NOW()
now()
// INTERVAL 'duration'
interval("24 hours")
// NOW() - INTERVAL 'duration' (common pattern)
now_minus("24 hours")
// NOW() + INTERVAL 'duration'
now_plus("7 days")
}
Type Casting
#![allow(unused)]
fn main() {
// expr::type
cast(col("amount"), "float8")
// With alias
cast(col("amount"), "float8").alias("amount_f")
}
CASE WHEN Expressions
#![allow(unused)]
fn main() {
// Simple CASE
case_when(gt("score", 80), text("A"))
.otherwise(text("F"))
.alias("grade")
// Multiple WHEN clauses
case_when(gt("score", 90), text("A"))
.when(gt("score", 80), text("B"))
.when(gt("score", 70), text("C"))
.otherwise(text("F"))
.alias("grade")
}
Condition Helpers
#![allow(unused)]
fn main() {
// Equality
eq("status", "active") // status = 'active'
ne("status", "deleted") // status != 'deleted'
// Comparisons
gt("created_at", now_minus("24 hours")) // created_at > NOW() - INTERVAL '24 hours'
gte("age", 18) // age >= 18
lt("price", 100) // price < 100
lte("quantity", 10) // quantity <= 10
// IN / NOT IN
is_in("status", ["delivered", "read"]) // status IN ('delivered', 'read')
not_in("type", ["spam", "junk"]) // type NOT IN ('spam', 'junk')
// NULL checks
is_null("deleted_at") // deleted_at IS NULL
is_not_null("email") // email IS NOT NULL
// Pattern matching
like("name", "John%") // name LIKE 'John%'
ilike("email", "%@gmail%") // email ILIKE '%@gmail%'
}
Function Calls
#![allow(unused)]
fn main() {
// Generic function
func("MY_FUNC", vec![col("a"), col("b")])
// COALESCE
coalesce(vec![col("nickname"), col("name"), text("Anonymous")])
// NULLIF
nullif(col("value"), int(0))
}
Binary Expressions
#![allow(unused)]
fn main() {
// Arithmetic
binary(col("price"), BinaryOp::Mul, col("quantity"))
// With alias
binary(
cast(col("success"), "float8"),
BinaryOp::Div,
cast(col("total"), "float8")
).alias("success_rate")
}
Literals
#![allow(unused)]
fn main() {
int(42) // Integer literal
float(3.14) // Float literal
text("hello") // String literal (quoted)
}
Complete Example
Here’s a complex analytics query using all the builders:
#![allow(unused)]
fn main() {
use qail_core::Qail;
use qail_core::ast::BinaryOp;
use qail_core::ast::builders::*;
let stats = Qail::get("whatsapp_messages")
.columns([
count_distinct("phone_number").alias("total_contacts"),
count().alias("total_messages"),
count_filter(vec![
eq("direction", "outbound"),
gt("created_at", now_minus("24 hours")),
]).alias("messages_sent_24h"),
count_filter(vec![
eq("direction", "inbound"),
eq("status", "received"),
]).alias("unread_messages"),
]);
let cmd = Qail::get("stats")
.with_cte("stats", stats)
.columns([
col("total_contacts"),
col("total_messages"),
case_when(gt("messages_sent_24h", 0),
binary(
cast(col("successful"), "float8"),
BinaryOp::Div,
cast(col("messages_sent_24h"), "float8")
)
).otherwise(float(0.0)).alias("delivery_rate"),
]);
}
This replaces 40+ lines of raw SQL with type-safe, compile-time checked Rust code.
QAIL: The AST-Native Approach
QAIL takes a unique approach to building SQL queries: instead of strings or macros, queries are constructed as a typed Abstract Syntax Tree.
The AST-Native Difference
| Approach | How Queries Work |
|---|---|
| String-based | SQL written as text, parameterized at runtime |
| Macro-based | DSL macros expand to SQL at compile time |
| AST-Native | Typed AST compiles directly to wire protocol |
What QAIL Enables
Native PostgreSQL Features
#![allow(unused)]
fn main() {
use qail_core::{Qail, builders::*};
// Native JSON operators (->, ->>)
json_path("metadata", ["vessel_bookings", "0", "key"])
// COALESCE with type safety
coalesce([col("booking_number"), text("N/A")])
// String concatenation
concat([col("first_name"), text(" "), col("last_name")])
// Type casting
cast(col("total_fare"), "float")
// CASE WHEN expressions
case_when(gt("score", 80), text("pass"))
.otherwise(text("fail"))
}
Full Query Example
A production WhatsApp integration query with JSON access, string concat, and type casts:
#![allow(unused)]
fn main() {
use qail_core::{Qail, Operator, builders::*};
let route = coalesce([
concat([
json_path("o.metadata", ["vessel_bookings", "0", "depart_departure_loc"]),
text(" → "),
json_path("o.metadata", ["vessel_bookings", "0", "depart_arrival_loc"]),
]),
text("Route"),
]).alias("route");
let cmd = Qail::get("orders")
.table_alias("o")
.column_expr(col("o.id"))
.column_expr(coalesce([col("o.booking_number"), text("N/A")]).alias("booking_number"))
.column_expr(cast(col("o.status"), "text").alias("status"))
.column_expr(route)
.column_expr(coalesce([
json_path("o.metadata", ["vessel_bookings", "0", "depart_travel_date"]),
text("TBD")
]).alias("travel_date"))
.filter_cond(cond(json("o.contact_info", "phone"), Operator::Eq, param(1)))
.or_filter_cond(cond(
replace(json("o.contact_info", "phone"), text("+"), text("")),
Operator::Eq,
param(1)
))
.order_desc("o.created_at")
.limit(10);
let orders = pool.fetch_all::<OrderRow>(&cmd).await?;
}
QAIL Highlights
| Feature | QAIL Approach |
|---|---|
| Safety | Structural - no SQL strings to inject |
| JSON | Native json(), json_path() operators |
| Expressions | coalesce(), concat(), cast() builders |
| CTEs | with_cte() for complex queries |
| Async | Full async/await support |
| Type Validation | ColumnType enum with compile-time checks |
ColumnType Validation
QAIL validates types at build time:
#![allow(unused)]
fn main() {
pub enum ColumnType {
Uuid, Text, Varchar(Option<u16>), Int, BigInt,
Serial, BigSerial, Bool, Float, Decimal(Option<(u8,u8)>),
Jsonb, Timestamp, Timestamptz, Date, Time, Bytea,
}
// Compile-time validation
ColumnType::Uuid.can_be_primary_key() // true
ColumnType::Jsonb.can_be_primary_key() // false - caught at build time
ColumnType::Jsonb.supports_indexing() // false - warned before migration
}
When to Use QAIL
QAIL shines for:
- Complex PostgreSQL queries with JSON, CTEs, aggregates
- Type-safe query building with IDE support
- Production systems where safety is critical
- Projects that need advanced SQL features without string literals
Text Syntax
For CLI and LSP usage. Parses to AST internally.
Keywords
| Keyword | Description | Example |
|---|---|---|
get | SELECT query | get users fields * |
set | UPDATE query | set users values ... |
del | DELETE query | del users where ... |
add | INSERT query | add users values ... |
fields | Select columns | fields id, email |
where | Filter conditions | where active = true |
order by | Sort results | order by name desc |
limit | Limit rows | limit 10 |
offset | Skip rows | offset 20 |
left join | Left outer join | left join profiles |
Examples
Simple Select
get users fields *
→ SELECT * FROM users
Filtered Query
get users
fields id, email, name
where active = true
order by created_at desc
limit 50
Join Query
get users
inner join bookings
fields id, email, bookings.total
where created_at >= 2024-01-01
Insert
add users values (email = "alice@example.com", name = "Alice")
Update
set users values (status = "active") where id = 42
PostgreSQL Driver
The qail-pg crate provides a native PostgreSQL driver with AST-native wire protocol encoding. It communicates directly with Postgres at the wire level — no libpq, no ORM, and no app-side SQL interpolation on the AST path.
Features
- AST-Native — Direct AST to wire protocol on the primary query path
- Zero-Alloc — Reusable buffers, no heap allocation per query
- LRU Statement Cache — Bounded cache (100 max), auto-evicts
- SSL/TLS — Full TLS with mutual TLS (mTLS) support
- Password Auth Modes — Supports SCRAM-SHA-256, MD5, and cleartext server flows
- Enterprise Auth Policy — Configure allowed auth mechanisms and SCRAM channel binding mode
- Kerberos/GSS/SSPI Hooks — Protocol-level support with pluggable token providers (legacy + stateful)
- Built-in Linux Kerberos Provider — Optional enterprise-gssapi feature for native krb5/GSS flow
- Protocol Negotiation — Requests startup protocol 3.2 by default with one-shot fallback to 3.0 on explicit server version rejection
- Cancel-Key Compatibility — Native bytes cancel-key APIs with legacy i32 wrappers retained for 4-byte keys
- Connection Pooling — Efficient resource management with RLS-safe checkout
- COPY Protocol — Bulk insert for high throughput (1.63M rows/sec)
- Pipeline Execution — Multiple queries per round-trip
- Cursors — Stream large result sets
- Transactions — BEGIN/COMMIT/ROLLBACK
Architecture
Understanding which type to use is the most important concept:
PgPool (manages N connections, handles checkout/return)
└── PooledConnection (call `release().await` for deterministic pool return)
    └── PgConnection (raw TCP/TLS stream, wire protocol I/O)
PgDriver (convenience wrapper over a single PgConnection)
| Type | Use When |
|---|---|
PgDriver | Quick scripts, benchmarks, single-connection use cases |
PgConnection | You need raw control (TLS, mTLS, Unix sockets, manual lifecycle) |
PgPool | Production code — multi-connection, concurrent workloads |
PooledConnection | You called pool.acquire*() — call release().await |
Rule of thumb: If you’re building a server, use PgPool. Everything else is for specialized cases.
Connection Methods
Choose based on your deployment:
| Scenario | Method | Notes |
|---|---|---|
Local dev (pg_hba.conf = trust) | PgDriver::connect() | No password required |
| Password auth (most common) | PgDriver::connect_with_password() | Auto cleartext / MD5 / SCRAM-SHA-256 |
| Cloud DB (RDS, Cloud SQL, Supabase) | PgConnection::connect_tls() | Server-side TLS |
| Zero-trust / mTLS | PgConnection::connect_mtls() | Client certificate |
| Enterprise policy (TLS/auth/channel binding) | PgDriver::connect_with_options() | Explicit TLS mode + auth controls |
| Unix socket (same host) | PgConnection::connect_unix() | Lowest latency |
.env / DATABASE_URL | PgDriver::connect_env() | Parses URL format |
| Custom config | PgDriver::builder() | Builder pattern for full control |
Basic Connection
#![allow(unused)]
fn main() {
use qail_pg::PgDriver;
// Trust mode (no password)
let driver = PgDriver::connect("localhost", 5432, "user", "db").await?;
// With password (auto-detects MD5 or SCRAM-SHA-256)
let driver = PgDriver::connect_with_password(
"localhost", 5432, "user", "db", "password"
).await?;
// From DATABASE_URL env var
let driver = PgDriver::connect_env().await?;
}
SSL/TLS
#![allow(unused)]
fn main() {
use qail_pg::PgConnection;
// Standard TLS — verifies server certificate
let conn = PgConnection::connect_tls("localhost", 5432, "user", "db").await?;
}
Mutual TLS (Client Certificates)
#![allow(unused)]
fn main() {
use qail_pg::{PgConnection, TlsConfig};
let config = TlsConfig {
client_cert_pem: cert_bytes,
client_key_pem: key_bytes,
ca_cert_pem: Some(ca_bytes),
};
let conn = PgConnection::connect_mtls("localhost", 5432, "user", "db", config).await?;
}
Enterprise Auth/TLS Policy
#![allow(unused)]
fn main() {
use qail_pg::{AuthSettings, ConnectOptions, PgDriver, ScramChannelBindingMode, TlsMode};
let options = ConnectOptions {
tls_mode: TlsMode::Require,
auth: AuthSettings {
allow_cleartext_password: false,
allow_md5_password: false,
allow_scram_sha_256: true,
channel_binding: ScramChannelBindingMode::Require,
..AuthSettings::default()
},
..Default::default()
};
let driver = PgDriver::connect_with_options(
"db.internal",
5432,
"app_user",
"app_db",
Some("secret"),
options,
)
.await?;
}
Kerberos / GSSAPI Token Hook
#![allow(unused)]
fn main() {
use qail_pg::{
AuthSettings, ConnectOptions, EnterpriseAuthMechanism, PgDriver,
};
fn gss_provider(
mech: EnterpriseAuthMechanism,
challenge: Option<&[u8]>,
) -> Result<Vec<u8>, String> {
// Plug your krb5/gssapi token generation here.
// Return initial token when challenge=None, then continue tokens per challenge.
let _ = (mech, challenge);
Err("not wired yet".to_string())
}
let options = ConnectOptions {
auth: AuthSettings::gssapi_only(),
gss_token_provider: Some(gss_provider),
..Default::default()
};
let _driver = PgDriver::connect_with_options(
"db.internal", 5432, "app_user", "app_db", None, options
).await?;
}
Stateful GSS Provider (Per-Session Context)
#![allow(unused)]
fn main() {
use std::sync::Arc;
use qail_pg::{AuthSettings, ConnectOptions, GssTokenRequest, PgDriver};
let provider = Arc::new(move |req: GssTokenRequest<'_>| -> Result<Vec<u8>, String> {
// req.session_id is stable for one auth handshake.
// Keep per-session context in your own map if your GSS stack requires it.
let _ = req;
Err("wire your stateful provider".to_string())
});
let options = ConnectOptions {
auth: AuthSettings::gssapi_only(),
gss_token_provider_ex: Some(provider),
..Default::default()
};
let _driver = PgDriver::connect_with_options(
"db.internal", 5432, "app_user", "app_db", None, options
).await?;
}
Built-in Linux Kerberos Provider (Feature-Gated)
#![allow(unused)]
fn main() {
#[cfg(all(feature = "enterprise-gssapi", target_os = "linux"))]
{
use qail_pg::{
AuthSettings, ConnectOptions, LinuxKrb5ProviderConfig, PgDriver,
linux_krb5_preflight, linux_krb5_token_provider,
};
let gss_cfg = LinuxKrb5ProviderConfig {
service: "postgres".to_string(),
host: "db.internal".to_string(),
target_name: None, // optional override, e.g. Some("postgres@db.internal".into())
};
let report = linux_krb5_preflight(&gss_cfg)?;
for warning in &report.warnings {
eprintln!("Kerberos preflight warning: {}", warning);
}
let provider = linux_krb5_token_provider(gss_cfg)?;
let options = ConnectOptions {
auth: AuthSettings::gssapi_only(),
gss_token_provider_ex: Some(provider),
..Default::default()
};
let _driver = PgDriver::connect_with_options(
"db.internal", 5432, "app_user", "app_db", None, options
).await?;
}
}
Unix Socket
#![allow(unused)]
fn main() {
use qail_pg::PgConnection;
let conn = PgConnection::connect_unix(
"/var/run/postgresql", // socket directory
"user",
"db"
).await?;
}
AST-Native Queries
All queries are constructed through the typed AST — no raw SQL strings.
#![allow(unused)]
fn main() {
use qail_core::Qail;
let cmd = Qail::get("users").select_all().limit(10);
// Fetch all rows
let rows = driver.fetch_all(&cmd).await?;
// Fetch one row
let row = driver.fetch_one(&cmd).await?;
// Execute mutation (returns affected rows)
let affected = driver.execute(&cmd).await?;
}
Statement Cache (LRU)
Prepared statements are cached automatically. The AST is hashed by structure, so identical query shapes reuse the same prepared statement.
Cached execution includes one-shot self-heal for common server-side invalidation cases
(prepared statement does not exist, cached plan must be replanned): local cache state is
cleared and the statement is retried once automatically.
#![allow(unused)]
fn main() {
// Cache is bounded (default: 100 statements)
// Auto-evicts least recently used when full
let (size, capacity) = driver.cache_stats(); // (42, 100)
// Manual clear if needed
driver.clear_cache();
}
| Method | Description |
|---|---|
fetch_all() | Uses cache (~25,000 q/s) |
fetch_all_with_format(cmd, ResultFormat::Binary) | Cached fetch with binary column format |
fetch_all_uncached() | Skips cache |
fetch_all_uncached_with_format(...) | Uncached fetch with text/binary result format |
cache_stats() | Returns (current, max) |
clear_cache() | Frees all cached statements |
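The shape-hashing idea behind the cache can be sketched in plain Rust. This is a hypothetical simplification, not the qail-pg cache internals: the hash covers the query's structure (table, columns, filter columns) and deliberately excludes bind values, so repeated executions with different parameters reuse one prepared statement.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical query shape: structure only, no bind values.
#[derive(Hash)]
struct QueryShape<'a> {
    table: &'a str,
    columns: Vec<&'a str>,
    filter_cols: Vec<&'a str>,
}

fn shape_key(shape: &QueryShape) -> u64 {
    let mut h = DefaultHasher::new();
    shape.hash(&mut h);
    h.finish()
}

fn main() {
    // Same shape (bind values would differ at run time): same cache key.
    let q1 = QueryShape { table: "users", columns: vec!["id"], filter_cols: vec!["email"] };
    let q2 = QueryShape { table: "users", columns: vec!["id"], filter_cols: vec!["email"] };
    assert_eq!(shape_key(&q1), shape_key(&q2));

    // Different structure: different prepared statement.
    let q3 = QueryShape { table: "orders", columns: vec!["id"], filter_cols: vec!["email"] };
    assert_ne!(shape_key(&q1), shape_key(&q3));
}
```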
Pipeline Methods
Pipelining sends multiple queries in a single network round-trip. This is the key to high throughput. Choose based on your needs:
Do you need return values from each query?
├── Yes
│ └── pipeline_ast() — full parse/bind/execute per query, returns rows
│
└── No (fire-and-forget mutations)
├── Repeating the same query shape with different params?
│ ├── pipeline_ast_cached() — hash-based statement reuse
│ └── pipeline_prepared_fast()— named prepared statement reuse
│
└── Different query shapes?
├── pipeline_ast_fast() — parse+bind+execute, discard results
├── pipeline_simple_fast() — simple query protocol (no params)
└── pipeline_bytes_fast() — pre-encoded buffers (fastest possible)
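Why one round-trip matters can be seen with a back-of-envelope latency model (the numbers below are illustrative assumptions, not qail benchmarks):

```rust
// Illustrative model: sequential execution pays the network round-trip once
// per query; pipelining pays it once per batch.
fn total_ms(queries: u32, rtt_ms: f64, per_query_ms: f64, pipelined: bool) -> f64 {
    let round_trips = if pipelined { 1 } else { queries };
    f64::from(round_trips) * rtt_ms + f64::from(queries) * per_query_ms
}

fn main() {
    // 100 mutations, 1 ms RTT, 0.05 ms server time each (assumed numbers):
    let sequential = total_ms(100, 1.0, 0.05, false); // ~105 ms
    let pipelined = total_ms(100, 1.0, 0.05, true);   // ~6 ms
    assert!(sequential > 100.0 && pipelined < 10.0);
}
```

With a non-trivial RTT, total time is dominated by round-trips, which is why the fire-and-forget variants above batch everything into one flush.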
Quick Reference
| Method | Returns Rows? | Statement Caching | Relative Speed |
|---|---|---|---|
pipeline_ast() | ✅ | Hash-based | ★★★ |
pipeline_ast_fast() | ❌ | None | ★★★★ |
pipeline_ast_cached() | ❌ | Hash + LRU | ★★★★★ |
pipeline_bytes_fast() | ❌ | Pre-encoded | ★★★★★ |
pipeline_prepared_fast() | ❌ | Named | ★★★★★ |
pipeline_prepared_zerocopy() | ❌ | Named + zero-copy | ★★★★★ |
pipeline_prepared_ultra() | ❌ | Named + ultra | ★★★★★+ |
Example: Pipelined Inserts
#![allow(unused)]
fn main() {
use qail_core::{Qail, QailCmd};
let commands: Vec<QailCmd> = users.iter().map(|u| {
Qail::add("users")
.set("name", &u.name)
.set("email", &u.email)
.build()
}).collect();
// Fire-and-forget — fastest for bulk mutations
let affected = driver.pipeline_ast_fast(&commands).await?;
// With results — slower but returns inserted rows
let rows = driver.pipeline_ast(&commands).await?;
}
Connection Pooling
#![allow(unused)]
fn main() {
use qail_pg::{PgPool, PoolConfig};
use qail_core::Qail;
let config = PoolConfig::new("localhost", 5432, "user", "db")
.password("secret")
.tls_mode(qail_pg::TlsMode::Require)
.auth_settings(qail_pg::AuthSettings::scram_only())
.max_connections(20)
.min_connections(5);
let pool = PgPool::connect(config).await?;
// Acquire connection (return deterministically with release())
let mut conn = pool.acquire().await?;
let probe = Qail::get("users").columns(["id"]).limit(1);
let _ = conn.fetch_all(&probe).await?;
conn.release().await;
// Check idle count
let idle = pool.idle_count().await;
}
Pool with RLS (Multi-Tenant)
#![allow(unused)]
fn main() {
use qail_core::RlsContext;
let ctx = RlsContext {
// Legacy field name; tenant-first naming (tenant_id) is canonical in current docs.
operator_id: "tenant-123".into(),
agent_id: Some("agent-456".into()),
is_super_admin: false,
};
// Acquire + set RLS context in one call
// Call release() after query work to reset context and return to pool
let mut conn = pool.acquire_with_rls(&ctx).await?;
conn.release().await;
}
Important: When using acquire_with_rls(), the RLS context is automatically cleared when the connection is returned to the pool. This prevents cross-tenant data leakage — a connection used by Tenant A will never carry Tenant A’s context when checked out by Tenant B.
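The isolation invariant can be shown with a toy model (hypothetical; not the qail-pg pool). The point is only the ordering: the tenant context is wiped before a connection re-enters the idle set.

```rust
// Toy model of RLS-safe checkout: context is reset on release.
#[derive(Default)]
struct Conn {
    tenant: Option<String>,
}

#[derive(Default)]
struct Pool {
    idle: Vec<Conn>,
}

impl Pool {
    fn acquire_with_rls(&mut self, tenant: &str) -> Conn {
        let mut conn = self.idle.pop().unwrap_or_default();
        conn.tenant = Some(tenant.to_string()); // the real driver sets session RLS vars here
        conn
    }
    fn release(&mut self, mut conn: Conn) {
        conn.tenant = None; // reset before reuse, so Tenant A never leaks to Tenant B
        self.idle.push(conn);
    }
}

fn main() {
    let mut pool = Pool::default();
    let conn_a = pool.acquire_with_rls("tenant-a");
    pool.release(conn_a);
    assert!(pool.idle[0].tenant.is_none()); // context cleared on return

    let conn_b = pool.acquire_with_rls("tenant-b");
    assert_eq!(conn_b.tenant.as_deref(), Some("tenant-b"));
}
```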
Pool Configuration
#![allow(unused)]
fn main() {
use std::time::Duration;
let config = PoolConfig::new("localhost", 5432, "user", "db")
.idle_timeout(Duration::from_secs(600)) // 10 min
.acquire_timeout(Duration::from_secs(30)) // 30 sec
.connect_timeout(Duration::from_secs(10)); // 10 sec
}
| Option | Default | Description |
|---|---|---|
max_connections | 10 | Maximum pool size |
min_connections | 1 | Minimum idle connections |
idle_timeout | 10 min | Stale connections auto-discarded |
acquire_timeout | 30 sec | Max wait for connection |
connect_timeout | 10 sec | Max time to establish new connection |
max_lifetime | 30 min | Max age of any connection |
test_on_acquire | true | Ping connection before returning |
Bulk Insert (COPY Protocol)
High-performance bulk insert using PostgreSQL’s COPY protocol. Benchmarked at 1.63M rows/sec for 100M rows.
#![allow(unused)]
fn main() {
use qail_core::Qail;
use qail_core::ast::Value;
let cmd = Qail::add("users").columns(&["name", "email"]);
let rows = vec![
vec![Value::Text("Alice".into()), Value::Text("a@x.com".into())],
vec![Value::Text("Bob".into()), Value::Text("b@x.com".into())],
];
let count = driver.copy_bulk(&cmd, &rows).await?;
// count = 2
}
Performance Comparison
| Operation | Rows/sec | Notes |
|---|---|---|
| COPY bulk insert | 1.63M | Native COPY protocol |
| Pipelined INSERT | 180K | Extended Query |
| Single INSERT | 22K | Per-statement |
Cursor Streaming
Stream large result sets in batches:
#![allow(unused)]
fn main() {
let cmd = Qail::get("logs").select_all();
let batches = driver.stream_cmd(&cmd, 1000).await?;
for batch in batches {
for row in batch {
// Process row
}
}
}
Transactions
#![allow(unused)]
fn main() {
let mut conn = pool.acquire().await?;
conn.begin_transaction().await?;
// ... queries ...
conn.commit().await?;
// Or, on error, roll back instead of committing:
// conn.rollback().await?;
}
Row Decoding
By Index
#![allow(unused)]
fn main() {
let name = row.get_string(0);
let age = row.get_i32(1);
}
By Column Name (Recommended)
#![allow(unused)]
fn main() {
// Safer — column order changes don't break code
let name = row.get_string_by_name("name");
let age = row.get_i32_by_name("age");
let email = row.get_string_by_name("email");
// Check if NULL
if row.is_null_by_name("deleted_at") { ... }
}
Available get_by_name methods:
- get_string_by_name, get_i32_by_name, get_i64_by_name
- get_f64_by_name, get_bool_by_name
- get_uuid_by_name, get_json_by_name
- is_null_by_name, column_index
Supported Types
| Rust Type | PostgreSQL Type |
|---|---|
i16/i32/i64 | INT2/INT4/INT8 |
f32/f64 | FLOAT4/FLOAT8 |
bool | BOOLEAN |
String | TEXT/VARCHAR |
Vec<u8> | BYTEA |
Uuid | UUID |
Timestamp | TIMESTAMPTZ |
Date | DATE |
Time | TIME |
Json | JSONB |
Inet | INET |
Cidr | CIDR |
MacAddr | MACADDR |
Numeric | NUMERIC/DECIMAL |
✅ AST-Only Driver
Raw SQL helper APIs (execute_raw, fetch_raw) were removed.
#![allow(unused)]
fn main() {
// ✅ Use AST-native transaction APIs
driver.begin().await?;
// ... execute QAIL commands ...
driver.commit().await?;
}
API Decision Guide
Quick reference: “I want to do X → use Y.”
Connecting
| I want to… | Use |
|---|---|
| Connect with just host/port/user/db | PgDriver::connect() |
| Connect with a password | PgDriver::connect_with_password() |
Use DATABASE_URL from env | PgDriver::connect_env() |
| Connect over TLS (cloud DB) | PgConnection::connect_tls() |
| Use client certificates (mTLS) | PgConnection::connect_mtls() |
| Connect via Unix socket | PgConnection::connect_unix() |
| Customize everything | PgDriver::builder() |
| Run a production server | PgPool::connect(config) |
Querying
| I want to… | Use |
|---|---|
| Get rows back | driver.fetch_all(&cmd) |
| Get exactly one row | driver.fetch_one(&cmd) |
| Run a mutation (INSERT/UPDATE/DELETE) | driver.execute(&cmd) |
Use SQL text anyway (outside qail-pg) | tokio-postgres / sqlx for that service |
Bulk Operations
| I want to… | Use |
|---|---|
| Insert thousands of rows fast | driver.copy_bulk(&cmd, &rows) (COPY protocol) |
| Pipeline many inserts (no results needed) | driver.pipeline_ast_fast(&commands) |
| Pipeline inserts (need the inserted rows) | driver.pipeline_ast(&commands) |
| Pipeline the same query shape with many param sets | driver.pipeline_ast_cached(&commands) |
Connection Pool
| I want to… | Use |
|---|---|
| Get a pooled connection | pool.acquire() |
| Get a connection with RLS tenant context | pool.acquire_with_rls(&ctx) |
| Check pool status | pool.idle_count(), pool.stats() |
Transactions
| I want to… | Use |
|---|---|
| Start a transaction | conn.begin_transaction() |
| Commit | conn.commit() |
| Roll back | conn.rollback() |
Multi-Tenant (RLS)
| I want to… | Use |
|---|---|
| Set tenant context on connection | pool.acquire_with_rls(&ctx) |
| Manually set RLS | driver.set_rls_context(&ctx) |
| Clear RLS context | driver.clear_rls_context() |
| Define RLS policies in schema | policy name on table for select using $$ ... $$ in .qail |
| Generate RLS setup SQL | rls_setup_sql(&table, &policy) |
Performance Tips
- Use the pool — PgPool reuses connections and caches prepared statements.
- Use pipeline_ast_fast() for bulk mutations — one round-trip instead of N.
- Use copy_bulk() for truly massive inserts (>10K rows) — 10x faster than pipelining.
- Use fetch_all() (cached), not fetch_all_uncached() — statement caching gives ~2x speedup.
- Use acquire_with_rls() in multi-tenant apps — the context auto-clears when the connection returns to the pool, preventing cross-tenant leaks.
Migrations
QAIL supports two migration workflows:
- Schema-Diff (State-Based): Compare standard schema files (good for evolving production DBs)
- File-Based (Sequential): Apply .qail files from the migrations/ directory (good for hybrid setups)
1. Schema-Diff Workflow (State-Based)
QAIL uses an intent-aware .qail schema format that solves the ambiguity problem of state-based migrations.
The Problem with JSON/State-Based Migrations
// v1: {"users": {"username": "text"}}
// v2: {"users": {"name": "text"}}
Did we rename username → name or delete + add? JSON can’t express intent.
The Solution: .qail Schema Format
# schema.qail - Human readable, intent-aware
table users {
id serial primary_key
name text not_null
email text unique
}
# Migration hints express INTENT
rename users.username -> users.name
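The effect of a rename hint can be sketched with a hypothetical single-table diff (not the qail engine, and simplified to a fixed table name): without the hint, the diff would emit a DROP plus an ADD; with it, the pair collapses into one RENAME COLUMN.

```rust
use std::collections::BTreeSet;

// Hypothetical single-table sketch of hint-aware diffing (not the qail engine).
fn diff(old: &[&str], new: &[&str], renames: &[(&str, &str)]) -> Vec<String> {
    let old_set: BTreeSet<&str> = old.iter().copied().collect();
    let new_set: BTreeSet<&str> = new.iter().copied().collect();
    let renamed_old: BTreeSet<&str> = renames.iter().map(|(f, _)| *f).collect();
    let renamed_new: BTreeSet<&str> = renames.iter().map(|(_, t)| *t).collect();

    let mut out = Vec::new();
    for (from, to) in renames {
        if old_set.contains(from) && new_set.contains(to) {
            out.push(format!("ALTER TABLE users RENAME COLUMN {from} TO {to};"));
        }
    }
    for col in new_set.difference(&old_set).filter(|c| !renamed_new.contains(*c)) {
        out.push(format!("ALTER TABLE users ADD COLUMN {col};"));
    }
    for col in old_set.difference(&new_set).filter(|c| !renamed_old.contains(*c)) {
        out.push(format!("ALTER TABLE users DROP COLUMN {col};"));
    }
    out
}

fn main() {
    let stmts = diff(
        &["id", "username"],
        &["id", "name", "created_at"],
        &[("username", "name")],
    );
    assert_eq!(stmts[0], "ALTER TABLE users RENAME COLUMN username TO name;");
    assert_eq!(stmts[1], "ALTER TABLE users ADD COLUMN created_at;");
    assert_eq!(stmts.len(), 2); // no spurious DROP of 'username'
}
```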
Single File vs Modular Directory
Qail supports both:
- Single file: schema.qail
- Modular: schema/*.qail (recursive), optional schema/_order.qail
Modular schema is useful when one file becomes very large. If _order.qail exists, listed modules load first; in strict mode, every module must be listed.
Repository examples:
- examples/schema/single/schema.qail
- examples/schema/modular/schema/
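The _order.qail loading rule described above can be sketched as follows (a hypothetical simplification, not the qail CLI): listed modules load first, the rest follow in sorted order, and strict mode rejects unlisted modules.

```rust
// Hypothetical sketch of modular schema ordering (not the qail CLI).
fn load_order(all: &[&str], ordered: &[&str], strict: bool) -> Result<Vec<String>, String> {
    let mut out: Vec<String> = ordered.iter().map(|m| m.to_string()).collect();
    let mut rest: Vec<&str> = all
        .iter()
        .copied()
        .filter(|m| !ordered.contains(m))
        .collect();
    if strict && !rest.is_empty() {
        return Err(format!("strict mode: unlisted modules {rest:?}"));
    }
    rest.sort();
    out.extend(rest.into_iter().map(|m| m.to_string()));
    Ok(out)
}

fn main() {
    let order = load_order(&["b.qail", "a.qail", "core.qail"], &["core.qail"], false).unwrap();
    assert_eq!(order, vec!["core.qail", "a.qail", "b.qail"]);
    // Strict mode: every module must appear in _order.qail.
    assert!(load_order(&["a.qail"], &[], true).is_err());
}
```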
Workflow
1. Pull Current Schema
qail pull postgres://user:pass@localhost/db > v1.qail
2. Create New Version
Edit v2.qail with your changes and any migration hints:
table users {
id serial primary_key
name text not_null # was 'username'
email text unique
created_at timestamp not_null
}
rename users.username -> users.name
3. Preview Migration
qail diff v1.qail v2.qail
# Output:
# ALTER TABLE users RENAME COLUMN username TO name;
# ALTER TABLE users ADD COLUMN created_at TIMESTAMP NOT NULL;
4. Apply Migration
qail migrate up v1.qail:v2.qail postgres://...
5. Rollback (if needed)
qail migrate down v1.qail:v2.qail postgres://...
2. File-Based Workflow (Sequential)
For hybrid projects or simple setups, you can use sequential .qail files in the migrations/ directory.
Structure
migrations/
├── 001_initial_schema.up.qail
├── 001_initial_schema.down.qail
├── 002_add_users.up.qail
└── 002_add_users.down.qail
Applying Migrations
# Applies all pending .up.qail files
qail migrate apply
Generating from Sync Rules
Hybrid projects can auto-generate migrations for sync triggers:
qail sync generate
# Creates: migrations/00X_qail_sync_triggers.up.qail
Migration Hints
| Hint | Description |
|---|---|
rename table.old -> table.new | Rename column (not drop+add) |
transform expr -> table.col | Data transformation hint |
drop confirm table.col | Explicit drop confirmation |
3. Drift Detection (v0.15.8)
Compare a live database against a .qail schema file to find unexpected drift:
qail diff _ schema.qail --live --url postgres://localhost/mydb
# Drift detection: [live DB] → schema.qail
# → Introspecting live database...
# 80 tables, 287 indexes introspected
#
# ✅ No drift detected — live DB matches schema file.
If drift exists, it shows categorized changes with risk levels:
# 🔴 HIGH — missing column (was dropped outside migrations)
# 🟡 MEDIUM — index mismatch
# 🟢 LOW — default value difference
4. Migration Reset (v0.15.8)
Nuclear option for development — drops everything and recreates from schema:
qail migrate reset schema.qail postgres://...
# Phase 1: DROP all tables (FK-ordered)
# Phase 2: CLEAR migration history
# Phase 3: CREATE from schema
# ✓ Reset complete
⚠️ Warning: This is destructive. Use only in development or staging.
5. Migration Status (v0.15.8)
Rich tabular view of migration history:
qail migrate status postgres://...
# 📋 Migration Status — mydb
# ┌──────────┬────────────────────┬─────────────────────┬──────────────┐
# │ Version │ Name │ Applied At │ Checksum │
# ├──────────┼────────────────────┼─────────────────────┼──────────────┤
# │ 001 │ qail_queue │ 2026-02-01 10:00:00 │ a3b8d1... │
# └──────────┴────────────────────┴─────────────────────┴──────────────┘
Named Migrations
Named migrations provide version-controlled migration files with metadata for better tracking.
Creating a Named Migration
qail migrate create add_user_avatars --depends 002_add_users --author orion
Output:
📝 Creating Named Migration
✓ Created: migrations/20251226071129_add_user_avatars.qail
Migration: 20251226071129_add_user_avatars
Depends: 002_add_users
Author: orion
Migration File Format
-- migration: 20251226071129_add_user_avatars
-- depends: 002_add_users
-- author: orion
-- created: 2025-12-26T07:11:29+08:00
+table avatars {
id UUID primary_key
user_id UUID not_null references(users.id)
url TEXT not_null
}
Metadata Fields
| Field | Description |
|---|---|
| `migration` | Unique name (`timestamp_description`) |
| `depends` | Comma-separated list of dependencies |
| `author` | Author of the migration |
| `created` | ISO 8601 timestamp |
CLI Options
qail migrate create <name>
-d, --depends <migration> # Dependencies (comma-separated)
-a, --author <name> # Author attribution
Dependency Resolution
QAIL validates dependencies before applying migrations:
- Checks all dependencies exist
- Detects circular dependencies
- Applies in topological order
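The ordering step can be sketched as a standard topological sort with cycle detection (an illustrative model only, not QAIL's actual implementation; the `topo_order` name is made up):

```rust
use std::collections::HashMap;

/// Order migrations so every dependency runs first (Kahn's algorithm).
/// Returns Err on a missing or circular dependency.
fn topo_order(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, String> {
    // Count unresolved dependencies per migration, and record dependents.
    let mut pending: HashMap<&str, usize> = HashMap::new();
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (name, reqs) in deps {
        pending.entry(*name).or_insert(0);
        for r in reqs {
            if !deps.contains_key(r) {
                return Err(format!("{name} depends on missing migration {r}"));
            }
            *pending.entry(*name).or_insert(0) += 1;
            dependents.entry(*r).or_default().push(*name);
        }
    }
    // Start with migrations that have no dependencies.
    let mut ready: Vec<&str> = pending
        .iter()
        .filter(|(_, c)| **c == 0)
        .map(|(n, _)| *n)
        .collect();
    ready.sort(); // deterministic order among independent migrations
    let mut order = Vec::new();
    while let Some(n) = ready.pop() {
        order.push(n.to_string());
        for d in dependents.get(n).into_iter().flatten() {
            let c = pending.get_mut(d).unwrap();
            *c -= 1;
            if *c == 0 {
                ready.push(*d);
            }
        }
    }
    // Anything left unprocessed sits on a cycle.
    if order.len() != deps.len() {
        return Err("circular dependency detected".into());
    }
    Ok(order)
}
```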
Data-Safe Migrations
QAIL provides enterprise-grade migration safety features that protect your data during schema changes.
Overview
| Feature | Description |
|---|---|
| Impact Analysis | Shows exactly what data will be affected |
| Pre-Migration Backup | Option to backup before destructive changes |
| Record-Level Backup | JSONB-based data backup in database |
| Shadow Database | Blue-green migrations for zero-downtime |
Phase 1: Impact Analysis & Backup Prompt
When running migrations with destructive operations, QAIL analyzes the impact:
$ qail migrate up old.qail:new.qail postgres://...
🚨 Migration Impact Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DROP COLUMN users.email → 1,234 values at risk
DROP TABLE sessions → 5,678 rows affected
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total: 6,912 records at risk
Choose an option:
[1] Proceed (I have my own backup)
[2] Backup to files (_qail_snapshots/)
[3] Backup to database (with rollback support)
[4] Cancel migration
Options Explained
- [1] Proceed - Continue without a QAIL backup (you manage your own)
- [2] File Backup - Export affected data to the `_qail_snapshots/` directory
- [3] Database Backup - Store data in the `_qail_data_snapshots` table (enables true rollback)
- [4] Cancel - Abort the migration
Phase 2: Record-Level Database Backup
When you choose option [3], QAIL creates a snapshot table:
-- Automatically created
CREATE TABLE _qail_data_snapshots (
id SERIAL PRIMARY KEY,
migration_version VARCHAR(255) NOT NULL,
table_name VARCHAR(255) NOT NULL,
column_name VARCHAR(255),
row_id TEXT NOT NULL,
value_json JSONB NOT NULL,
snapshot_type VARCHAR(50) NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
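As an illustration, values captured for a dropped column could later be restored with plain SQL until the future `qail rollback --data` flow ships. This is a hedged sketch, not a QAIL command: the migration version string is hypothetical, and it assumes the column has been re-added and `row_id` holds the primary key as text:

```sql
-- Hedged sketch: restore a dropped column's values from the snapshot table.
UPDATE users u
SET email = s.value_json #>> '{}'          -- extract the JSONB scalar as text
FROM _qail_data_snapshots s
WHERE s.migration_version = '003_drop_email'  -- hypothetical version name
  AND s.table_name  = 'users'
  AND s.column_name = 'email'
  AND s.row_id      = u.id::text;
```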
What Gets Backed Up
| Operation | Backup Content |
|---|---|
| DROP COLUMN | Column values with row IDs |
| DROP TABLE | Full table as JSONB objects |
| ALTER TYPE | Original values before cast |
True Data Rollback
After migration, you can restore data:
# Schema rollback (adds column back)
qail migrate down new.qail:old.qail postgres://...
# Data rollback (restores values)
# Coming in future release: qail rollback --data
Phase 3: Shadow Database (Blue-Green)
For zero-downtime migrations, use shadow database mode:
# Step 1: Create shadow, apply migrations, sync data
qail migrate shadow old.qail:new.qail postgres://...
🔄 Shadow Migration Mode
━━━━━━━━━━━━━━━━━━━━━━━━━━
[1/4] Creating shadow database: mydb_shadow ✓
[2/4] Applying migration to shadow... ✓
[3/4] Syncing data from primary to shadow...
✓ users (1,234 rows)
✓ orders (5,678 rows)
✓ Synced 2 tables, 6,912 rows
[4/4] Shadow ready for validation
Shadow URL: postgres://...mydb_shadow
Available Commands:
qail migrate promote → Switch traffic to shadow
qail migrate abort → Drop shadow, keep primary
Shadow Workflow
- Create Shadow - New database with new schema
- Apply Migrations - Run DDL on shadow only
- Sync Data - Copy data from primary
- Validate - Test your application against shadow
- Promote or Abort - Make the decision
Promote (Go Live)
$ qail migrate promote postgres://...
🚀 Promoting Shadow to Primary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[1/3] Renaming mydb → mydb_old_20241226 ✓
[2/3] Renaming mydb_shadow → mydb ✓
[3/3] Keeping old database as backup
✓ Shadow promoted successfully!
Old database preserved as: mydb_old_20241226
To clean up: DROP DATABASE mydb_old_20241226
Abort (Rollback)
$ qail migrate abort postgres://...
🛑 Aborting Shadow Migration
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Dropping shadow database: mydb_shadow
✓ Shadow database dropped. Primary unchanged.
Comparison with Other Tools
| Feature | QAIL | Prisma | SeaORM | Liquibase |
|---|---|---|---|---|
| Schema Migrations | ✅ | ✅ | ✅ | ✅ |
| Impact Analysis | ✅ | ❌ | ❌ | ❌ |
| Pre-Migration Backup | ✅ | ❌ | ❌ | ❌ |
| Record-Level Backup | ✅ | ❌ | ❌ | ❌ |
| Shadow Database | ✅ | ❌ | ❌ | ❌ |
| True Data Rollback | ✅ | ❌ | ❌ | ❌ |
Best Practices
- Always use database backup for production migrations
- Test in shadow before promoting
- Keep old database for 24-48 hours after promotion
- Use transactions (QAIL does this automatically)
Configuration
No configuration required! Features are enabled by default when destructive operations are detected.
Migration Impact Analyzer
Prevents runtime errors by scanning your codebase before running migrations.
Why Use It?
Dropping a table or column is easy—but if your code still references it, you’ll get runtime errors. The analyzer:
- Scans your codebase for QAIL AST/text query usage and SQL markers
- Detects breaking changes like dropped tables/columns
- Shows exact file:line locations with code snippets
- Prevents downtime by catching issues before production
Usage
qail migrate analyze old.qail:new.qail --codebase ./src
Real-World Example
Testing against a production codebase:
🔍 Migration Impact Analyzer
Schema: 001_initial_schema.up.qail → breaking_change.qail
Codebase: ~/api.fortunebali.com/src
Scanning codebase...
🔍 Analyzing files...
├── 🦀 main.rs (AST: 60 refs)
└── 1 files analyzed
Found 60 query references
⚠️ BREAKING CHANGES DETECTED
Affected files: 1
┌─ DROP TABLE admin_otps (11 references) ─────────────────────────┐
│ ❌ main.rs:397 → Qail::del("admin_otps")
│ ❌ main.rs:402 → Qail::add("admin_otps")
│ ❌ main.rs:403 → .columns(["email", "code_hash", "expires_at"])
│ ... and 8 more
└──────────────────────────────────────────────────────────────────┘
┌─ DROP TABLE inquiries (11 references) ─────────────────────────┐
│ ❌ main.rs:238 → Qail::add("inquiries")
│ ❌ main.rs:239 → .columns(["name", "email", ...])
│ ... and 9 more
└──────────────────────────────────────────────────────────────────┘
┌─ DROP COLUMN portfolio.status (2 references) ─────────────────┐
│ ❌ main.rs:179 → uses status in .columns(["id" +8])
│ ⚠️ RAW SQL main.rs:225 → "SELECT id, title, status FROM..."
└──────────────────────────────────────────────────────────────────┘
What would you like to do?
1. Run anyway (DANGEROUS - will cause 5 runtime errors)
2. Dry-run first (show SQL, don't execute)
3. Let me fix the code first (exit)
Semantic Scanning Modes
| Mode | Badge | Detection |
|---|---|---|
| Rust Semantic | 🦀 | QAIL semantic scanner for Qail::get(), Qail::add(), call-flow and loop-aware query diagnostics |
| Text Literal | 📘📍🐍 | String-literal extraction with comment masking for SQL/QAIL marker detection in non-Rust files |
The analyzer auto-detects file types and uses the most appropriate scanning method.
`syn` is no longer required on the runtime analyzer path for this semantic detection flow.
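The text-literal mode can be approximated as: mask comments, then flag string literals that contain SQL keywords. A deliberately simplified sketch (no escape sequences or block comments; `sql_literals` is a made-up name, not the shipped scanner):

```rust
/// Extract SQL-looking string literals from one line of source,
/// ignoring anything after a `//` comment marker.
fn sql_literals(line: &str) -> Vec<String> {
    // Mask the trailing line comment (simplified: assumes `//` is not inside a string).
    let code = match line.find("//") {
        Some(i) => &line[..i],
        None => line,
    };
    let mut out = Vec::new();
    for (i, chunk) in code.split('"').enumerate() {
        // Odd-indexed chunks sit between a pair of double quotes.
        if i % 2 == 1 {
            let up = chunk.to_uppercase();
            if ["SELECT", "INSERT", "UPDATE", "DELETE", "DROP"]
                .iter()
                .any(|k| up.contains(k))
            {
                out.push(chunk.to_string());
            }
        }
    }
    out
}
```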
Rollback Safety Analysis
Warning
Data-Destructive Changes Cannot Be Rolled Back!
Some migrations are irreversible. The analyzer identifies:
| Change | Rollback Safe? | Why |
|---|---|---|
| `ADD COLUMN` | ✅ Yes | Can `DROP COLUMN` |
| `DROP COLUMN` | ❌ No | Data lost permanently |
| `DROP TABLE` | ❌ No | Data lost permanently |
| `RENAME` | ✅ Yes | Can rename back |
| `ADD INDEX` | ✅ Yes | Can drop index |
| `TRUNCATE` | ❌ No | Data lost permanently |
Breaking Change Types
| Change Type | Severity | Description |
|---|---|---|
| `DROP TABLE` | 🔴 Critical | Table referenced in code → runtime errors |
| `DROP COLUMN` | 🔴 Critical | Column queries will fail |
| `RENAME TABLE` | 🟡 Warning | Code needs updating |
| `RENAME COLUMN` | 🟡 Warning | Code needs updating |
| `TYPE CHANGE` | 🟡 Warning | May cause type mismatch |
CI/CD Integration
For GitHub Actions, use --ci flag for annotations:
- name: Check migration safety
run: qail migrate analyze $OLD:$NEW --codebase ./src --ci
This outputs GitHub Actions annotations that appear inline in PR diffs:
::error file=src/main.rs,line=225,title=Breaking Change::Column 'portfolio.status' is being dropped but referenced here
Best Practices
- Always run before `migrate up`:
  qail migrate analyze old.qail:new.qail --codebase ./src
  qail migrate up old.qail:new.qail $DATABASE_URL
- Use with `migrate plan` for a full preview:
  qail migrate plan old.qail:new.qail      # See SQL
  qail migrate analyze old.qail:new.qail   # Check codebase
  qail migrate up old.qail:new.qail $URL   # Apply
- Handle irreversible changes carefully:
  - Backup data before `DROP TABLE` or `DROP COLUMN`
  - Consider soft-delete (add a `deleted_at` column) instead of hard delete
Foreign Key Validation
QAIL provides compile-time validation for foreign key references, ensuring your schema is consistent before migrations run.
Defining Foreign Keys
Use the builder API to define foreign key constraints:
#![allow(unused)]
fn main() {
use qail_core::migrate::{Column, ColumnType, ForeignKey, FkAction};
let user_id = Column::new("user_id", ColumnType::Uuid)
.references("users", "id")
.on_delete(FkAction::Cascade)
.on_update(FkAction::NoAction);
}
FK Actions
| Action | SQL | Description |
|---|---|---|
| `FkAction::NoAction` | `NO ACTION` | Reject the delete/update if dependent rows exist (default) |
| `FkAction::Cascade` | `CASCADE` | Delete/update child rows |
| `FkAction::SetNull` | `SET NULL` | Set FK column to NULL |
| `FkAction::SetDefault` | `SET DEFAULT` | Set FK column to its default value |
| `FkAction::Restrict` | `RESTRICT` | Same as NO ACTION but checked immediately |
Schema Validation
Call validate() to check all FK references exist:
#![allow(unused)]
fn main() {
let mut schema = Schema::new();
schema.add_table(Table::new("users")
.column(Column::new("id", ColumnType::Uuid).primary_key()));
schema.add_table(Table::new("posts")
.column(Column::new("id", ColumnType::Uuid).primary_key())
.column(Column::new("user_id", ColumnType::Uuid)
.references("users", "id")));
// Validate all FK references
match schema.validate() {
Ok(()) => println!("Schema is valid"),
Err(errors) => {
for e in errors {
eprintln!("Error: {}", e);
}
}
}
}
Error Messages
If a FK references a non-existent table or column:
FK error: posts.user_id references non-existent table 'users'
FK error: posts.author_id references non-existent column 'users.author_id'
Best Practices
- Always validate before migrating
  #![allow(unused)]
  fn main() {
      let schema = parse_qail(&content)?;
      schema.validate()?;
  }
- Use `Cascade` carefully - it can delete more data than expected
- Prefer `SetNull` for optional relationships
  #![allow(unused)]
  fn main() {
      .references("categories", "id")
      .on_delete(FkAction::SetNull)
  }
First-Class Relations
QAIL enables implicit joins through schema-defined foreign key relationships. Instead of manually specifying join conditions, define relationships once in schema.qail and use join_on() to automatically infer the join.
The Dream
#![allow(unused)]
fn main() {
// ❌ Before: Explicit join conditions
Qail::get("users")
.left_join("posts", "users.id", "posts.user_id")
// ✅ After: Implicit joins via schema
Qail::get("users")
.join_on("posts")
}
1. Define Relations in Schema
Use ref: syntax to declare foreign key relationships:
table users {
id UUID primary_key
email TEXT not_null
}
table posts {
id UUID primary_key
user_id UUID ref:users.id
title TEXT
}
The ref:users.id annotation tells QAIL that posts.user_id references users.id.
2. Load Relations at Runtime
Before using join_on(), load the schema relations:
#![allow(unused)]
fn main() {
use qail_core::schema;
// Load at application startup
schema::load_schema_relations("schema.qail")?;
}
3. Use Implicit Joins
#![allow(unused)]
fn main() {
use qail_core::Qail;
// Auto-infers: LEFT JOIN posts ON users.id = posts.user_id
let query = Qail::get("users")
.columns(["users.id", "users.email", "posts.title"])
.join_on("posts");
// Forward and reverse relations work automatically
let posts_with_users = Qail::get("posts")
.join_on("users"); // Infers: LEFT JOIN users ON posts.user_id = users.id
}
API Reference
join_on(table)
Joins a related table using the schema-defined foreign key. Panics if no relation exists.
#![allow(unused)]
fn main() {
Qail::get("users").join_on("posts")
}
join_on_optional(table)
Same as join_on(), but returns self unchanged if no relation exists (no panic).
#![allow(unused)]
fn main() {
Qail::get("users").join_on_optional("comments") // No-op if no relation
}
How It Works
- Schema Parsing: `build.rs` parses `ref:` annotations and stores them as `ForeignKey` entries
- Runtime Registry: `schema::load_schema_relations()` populates a global `RelationRegistry`
- Lookup: `join_on()` calls `lookup_relation()` to find the join condition
- Bidirectional: Both forward (`posts.user_id → users.id`) and reverse directions are checked
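A minimal model of that bidirectional lookup (illustrative only; QAIL's real `RelationRegistry` internals differ):

```rust
use std::collections::HashMap;

/// (child_table, parent_table) -> (child_col, parent_col),
/// as parsed from `ref:` annotations.
struct RelationRegistry {
    fks: HashMap<(String, String), (String, String)>,
}

impl RelationRegistry {
    /// Find the join columns between `from` and `to`, checking both directions.
    fn lookup(&self, from: &str, to: &str) -> Option<(String, String)> {
        // Forward: `from` holds the FK (posts.user_id -> users.id)
        if let Some((fk, pk)) = self.fks.get(&(from.to_string(), to.to_string())) {
            return Some((fk.clone(), pk.clone()));
        }
        // Reverse: `to` holds the FK, so flip the column pair
        self.fks
            .get(&(to.to_string(), from.to_string()))
            .map(|(fk, pk)| (pk.clone(), fk.clone()))
    }
}
```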
Typed Codegen
QAIL can generate fully typed Rust modules from your schema.qail file, enabling compile-time checked table and column references.
Setup
1. Add build script
Create or update your build.rs:
fn main() {
let out_dir = std::env::var("OUT_DIR").unwrap();
qail_core::build::generate_typed_schema(
"schema.qail",
&format!("{}/schema.rs", out_dir)
).unwrap();
println!("cargo:rerun-if-changed=schema.qail");
}
2. Include generated module
In your lib.rs or main.rs:
#![allow(unused)]
fn main() {
include!(concat!(env!("OUT_DIR"), "/schema.rs"));
}
Usage
Generated Structure
From this schema:
table users {
id UUID primary_key
email TEXT not_null
age INT
}
table posts {
id UUID primary_key
user_id UUID ref:users.id
title TEXT
}
QAIL generates:
#![allow(unused)]
fn main() {
pub mod users {
pub struct Users;
impl Table for Users { ... }
pub const table: Users = Users;
pub const id: TypedColumn<uuid::Uuid> = ...;
pub const email: TypedColumn<String> = ...;
pub const age: TypedColumn<i32> = ...;
}
pub mod posts {
pub struct Posts;
pub const table: Posts = Posts;
pub const user_id: TypedColumn<uuid::Uuid> = ...;
}
}
Using Typed References
#![allow(unused)]
fn main() {
use schema::{users, posts};
// Tables and columns are type-safe
Qail::get(users::table)
.columns([users::id, users::email])
.join_on(posts::table)
}
Type Mapping
| QAIL Type | Rust Type |
|---|---|
| `UUID` | `uuid::Uuid` |
| `TEXT`, `VARCHAR` | `String` |
| `INT`, `INTEGER` | `i32` |
| `BIGINT` | `i64` |
| `FLOAT`, `REAL` | `f32` |
| `DOUBLE` | `f64` |
| `BOOL` | `bool` |
| `TIMESTAMP` | `chrono::DateTime<Utc>` |
| `JSON`, `JSONB` | `serde_json::Value` |
Logic-Safe Relations (Scenario B)
QAIL codegen now generates compile-time relationship checking using the RelatedTo<T> trait.
How It Works
When schema.qail contains foreign key references:
table posts {
user_id UUID ref:users.id
}
The codegen produces:
#![allow(unused)]
fn main() {
// Forward: child -> parent
impl RelatedTo<users::Users> for posts::Posts {
fn join_columns() -> (&'static str, &'static str) { ("user_id", "id") }
}
// Reverse: parent -> children
impl RelatedTo<posts::Posts> for users::Users {
fn join_columns() -> (&'static str, &'static str) { ("id", "user_id") }
}
}
Compile-Time Safety
This enables “logic-safe” joins that fail at compile time:
#![allow(unused)]
fn main() {
// ✅ Compiles - tables are related
Qail::get(users::table).join_related(posts::table)
// ❌ Compile Error: "Users: RelatedTo<Products> is not satisfied"
Qail::get(users::table).join_related(products::table)
}
Data Access Policies (Phase 4)
QAIL now supports compile-time data governance using the protected keyword.
Schema Definition
Mark sensitive columns with protected:
table users {
id UUID primary_key
email TEXT not_null
password_hash TEXT protected
two_factor_secret TEXT protected
}
Generated Types
Protected columns get TypedColumn<T, Protected> instead of TypedColumn<T, Public>:
#![allow(unused)]
fn main() {
// Public - accessible by default
pub const email: TypedColumn<String, Public> = ...;
// Protected - requires capability witness
pub const password_hash: TypedColumn<String, Protected> = ...;
}
Policy Hierarchy
| Policy | Description | Use Case |
|---|---|---|
| `Public` | Default, no restrictions | Normal data |
| `Protected` | Requires `AdminCap` witness | Passwords, secrets |
| `Restricted` | Requires `SystemCap` witness | Audit-critical data |
Capability Witness API
Access protected columns using the builder pattern:
#![allow(unused)]
fn main() {
use qail_core::typed::{CapabilityProvider, WithCap};
// In your auth middleware (Root of Trust):
let admin_cap = CapabilityProvider::mint_admin(); // After JWT verification
// Build query with typed table reference (no strings!)
let query = Qail::get(users::table) // ✓ Typed, not string
.with_cap(&admin_cap) // Prove authorization
.column(users::email) // Public - always allowed
.column_protected(users::password_hash) // Protected - now allowed!
.build();
}
Root of Trust
Important
`AdminCap` and `SystemCap` have sealed constructors (private fields). They can only be minted via `CapabilityProvider::mint_*()`. Place this in a single, auditable auth layer.
#![allow(unused)]
fn main() {
// In your AuthService (the ONLY place that can mint capabilities):
impl AuthService {
pub fn verify_admin(&self, token: &str) -> Result<AdminCap, AuthError> {
let claims = self.verify_jwt(token)?;
if claims.role == "admin" {
Ok(CapabilityProvider::mint_admin())
} else {
Err(AuthError::Forbidden)
}
}
}
}
Compile-Time Enforcement
Attempting to access protected columns without capability fails at compile time:
#![allow(unused)]
fn main() {
// ❌ Compile Error: Protected: PolicyAllowedBy<NoCap> is not satisfied
Qail::get(users::table)
.with_cap(&NoCap)
.column_protected(users::password_hash)
}
Connection Pooling
Efficient connection reuse with built-in multi-tenant safety.
Configuration
#![allow(unused)]
fn main() {
use qail_pg::{PgPool, PoolConfig};
let config = PoolConfig::new("localhost", 5432, "user", "database")
.password("secret")
.max_connections(20)
.min_connections(5);
}
Or load from qail.toml:
#![allow(unused)]
fn main() {
let pool = PgPool::from_config().await?;
}
Acquiring Connections
Always use RLS-aware methods for tenant queries:
#![allow(unused)]
fn main() {
use qail_core::rls::RlsContext;
// Tenant-scoped connection — RLS is set before any query runs
let ctx = RlsContext::operator(operator_id);
let mut conn = pool.acquire_with_rls(ctx).await?;
// With custom statement timeout (milliseconds)
let mut conn = pool.acquire_with_rls_timeout(ctx, 30_000).await?;
// System connection — no tenant context (for schema introspection, migrations)
let mut conn = pool.acquire_system().await?;
}
Warning: Never use `acquire_raw()` for tenant queries. It returns a connection with no RLS context, bypassing row-level security. This method is crate-internal only.
Connection Lifecycle
Every connection follows a strict lifecycle:
acquire_with_rls(ctx)
→ set_config('app.current_operator_id', '...', false)
→ set_config('app.is_super_admin', '...', false)
→ execute queries (RLS policies filter rows automatically)
→ release()
→ DISCARD ALL (clears ALL server-side state)
→ clear client-side caches
→ return to pool
DISCARD ALL destroys prepared statements, temp tables, GUCs, and all session state. This guarantees zero state leakage between tenants sharing the same physical connection.
Pool Stats
#![allow(unused)]
fn main() {
let stats = pool.stats();
println!("Active: {}, Idle: {}", stats.active, stats.idle);
}
Best Practices
- Create pool once at application startup
- Share via `Arc` across threads/tasks
- Don't hold connections longer than needed
- Always use `acquire_with_rls()` for tenant queries, never `acquire_raw()`
- Set an appropriate pool size; CPU cores × 2 is a good start
#![allow(unused)]
fn main() {
use std::sync::Arc;
let pool = Arc::new(PgPool::connect(config).await?);
// Clone Arc for each task
let pool_clone = pool.clone();
tokio::spawn(async move {
let ctx = RlsContext::operator(op_id);
let conn = pool_clone.acquire_with_rls(ctx).await?;
// ...
});
}
Hybrid Architecture
QAIL bridges the gap between relational data (PostgreSQL) and vector search (Qdrant) using a robust Outbox Pattern.
Overview
Instead of dual-writing to both databases (which is prone to inconsistencies), QAIL uses a transactional outbox approach:
- Transactional Write: You write to your PostgreSQL tables.
- Auto-Trigger: A database trigger captures the change and writes a "sync event" to the `_qail_queue` table in the same transaction.
- Async Worker: The `qail worker` daemon polls the queue and reliably syncs data to Qdrant.
Setup
1. Initialize Hybrid Project
qail init
# Select "3. Hybrid (PostgreSQL + Qdrant)" when prompted
This creates a migrations/001_qail_queue.up.qail file defining the _qail_queue table.
2. Configure Sync Rules
Define which tables should sync to Qdrant in qail.toml:
[[sync]]
source_table = "products" # Table to watch
trigger_column = "description" # Only sync if this column changes (optional optimization)
target_collection = "products_search" # Qdrant collection name
embedding_model = "candle:bert-base" # (Coming soon)
3. Generate Triggers
Run the command to generate PostgreSQL triggers based on your rules:
qail sync generate
# Creates migrations/002_qail_sync_triggers.up.qail
This generates triggers that:
- INSERT: Adds an `UPSERT` event to the queue.
- UPDATE: Adds an `UPSERT` event (only if `trigger_column` changed).
- DELETE: Adds a `DELETE` event to the queue.
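The generated triggers are conceptually similar to this hand-written sketch for the `products` rule above. This is illustrative only — the actual generated DDL may differ, and the function/trigger names here are made up:

```sql
-- Illustrative outbox trigger for the [[sync]] rule on products
CREATE OR REPLACE FUNCTION qail_sync_products() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO _qail_queue (ref_table, ref_id, operation, payload, status)
    VALUES ('products', OLD.id::text, 'DELETE', to_jsonb(OLD), 'pending');
    RETURN OLD;
  END IF;
  IF TG_OP = 'UPDATE' THEN
    -- Only sync when the configured trigger_column actually changed
    IF OLD.description IS NOT DISTINCT FROM NEW.description THEN
      RETURN NEW;
    END IF;
  END IF;
  INSERT INTO _qail_queue (ref_table, ref_id, operation, payload, status)
  VALUES ('products', NEW.id::text, 'UPSERT', to_jsonb(NEW), 'pending');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_qail_sync
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW EXECUTE FUNCTION qail_sync_products();
```

Because the `INSERT INTO _qail_queue` runs inside the same transaction as the row change, the event is committed or rolled back atomically with the data — the core guarantee of the outbox pattern.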
4. Apply Migrations
Apply the migration files to your database:
qail migrate apply
5. Start Worker
Start the background worker to process the queue:
qail worker
Resilience
The qail worker is designed for production reliability:
- At-Least-Once Delivery: Events are only removed from the queue after successful sync to Qdrant.
- Circuit Breaker: If Qdrant goes down, the worker pauses and retries with exponential backoff (up to 30s).
- Per-Item Error Handling: A single failed item won't crash the worker; it will be marked with an error and retried later.
- Batch Processing: Updates are sent to Qdrant in batches for high throughput.
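The retry delay described above can be modeled as capped exponential backoff (a sketch of the documented policy — 1s doubling up to the 30s cap — not the worker's exact code; `backoff_delay` is a made-up name):

```rust
use std::time::Duration;

/// Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
fn backoff_delay(attempt: u32) -> Duration {
    // checked_shl guards against overflow for very large attempt counts.
    let secs = 1u64.checked_shl(attempt).unwrap_or(u64::MAX).min(30);
    Duration::from_secs(secs)
}
```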
The _qail_queue Table
| Column | Type | Description |
|---|---|---|
| `id` | `SERIAL` | Sequence ID for ordering |
| `ref_table` | `TEXT` | Source table name |
| `ref_id` | `TEXT` | Primary key of the source row |
| `operation` | `TEXT` | `UPSERT` or `DELETE` |
| `payload` | `JSONB` | Full row data (snapshot) |
| `status` | `TEXT` | `pending`, `processing`, `failed` |
| `retry_count` | `INT` | Number of failed attempts |
| `error_message` | `TEXT` | Last error message |
Type System
QAIL provides type conversion between Rust and PostgreSQL.
Supported Types
| Rust Type | PostgreSQL Type | Notes |
|---|---|---|
| `String` | `TEXT`, `VARCHAR` | UTF-8 |
| `i32` | `INT4` | 32-bit integer |
| `i64` | `INT8`, `BIGINT` | 64-bit integer |
| `f64` | `FLOAT8` | Double precision |
| `bool` | `BOOLEAN` | |
| `Uuid` | `UUID` | 128-bit |
| `Timestamp` | `TIMESTAMP` | Microsecond precision |
| `Date` | `DATE` | |
| `Time` | `TIME` | |
| `Json` | `JSON`, `JSONB` | |
| `Decimal` | `NUMERIC` | Arbitrary precision |
Compile-Time Type Safety
QAIL uses the ColumnType enum for compile-time validation in schema definitions:
#![allow(unused)]
fn main() {
use qail_core::migrate::{Column, ColumnType};
// ✅ Compile-time enforced - no typos possible
Column::new("id", ColumnType::Uuid).primary_key()
Column::new("name", ColumnType::Text).not_null()
Column::new("email", ColumnType::Varchar(Some(255))).unique()
// Available types:
// Uuid, Text, Varchar, Int, BigInt, Serial, BigSerial,
// Bool, Float, Decimal, Jsonb, Timestamp, Timestamptz, Date, Time, Bytea
}
Validation at compile time:
- `primary_key()` validates the type can be a PK (UUID, INT, SERIAL)
- `unique()` validates the type supports indexing (not JSONB, BYTEA)
Usage
Reading Values
#![allow(unused)]
fn main() {
use qail_pg::types::{Timestamp, Uuid, Json};
for row in rows {
let id: i32 = row.get("id")?;
let uuid: Uuid = row.get("uuid")?;
let created: Timestamp = row.get("created_at")?;
let data: Json = row.get("metadata")?;
}
}
Temporal Types
#![allow(unused)]
fn main() {
use qail_pg::types::{Timestamp, Date, Time};
// Timestamp with microsecond precision
let ts = Timestamp::from_micros(1703520000000000);
// Date only
let date = Date::from_ymd(2024, 1, 15);
// Time only
let time = Time::from_hms(14, 30, 0);
}
JSON
#![allow(unused)]
fn main() {
use qail_pg::types::Json;
let json = Json(r#"{"key": "value"}"#.to_string());
}
Custom Types
Implement FromPg and ToPg for custom types:
#![allow(unused)]
fn main() {
use qail_pg::types::{FromPg, ToPg, TypeError};
impl FromPg for MyType {
fn from_pg(bytes: &[u8], oid: u32, format: i16) -> Result<Self, TypeError> {
// Decode from wire format
}
}
impl ToPg for MyType {
fn to_pg(&self) -> (Vec<u8>, u32, i16) {
// Encode to wire format
(bytes, oid, format)
}
}
}
Compile-Time Type Safety
New in v0.14.20 — Full Diesel-like type checking for QAIL queries
QAIL now supports compile-time type validation through generated schema files, similar to Diesel but with AST-native architecture.
Quick Start
1. Generate Schema
qail types schema.qail -o src/schema.rs
2. Use Type-Safe Builders
#![allow(unused)]
fn main() {
use crate::schema::users;
let query = Qail::get(users::TABLE)
.typed_column(users::id())
.typed_column(users::email())
.typed_eq(users::active(), true) // Compile-time: active must be bool
.typed_gt(users::age(), 18); // Compile-time: age must be numeric
}
Schema Generation
Input: schema.qail
table users {
id uuid primary_key
email text not_null unique
name text
active boolean default(true)
age integer
created_at timestamptz default(now())
}
Output: schema.rs
#![allow(unused)]
fn main() {
pub mod users {
use qail_core::typed::{TypedColumn, Table};
pub const TABLE: &str = "users";
pub fn id() -> TypedColumn<uuid::Uuid> {
TypedColumn::new("id")
}
pub fn email() -> TypedColumn<String> {
TypedColumn::new("email")
}
pub fn active() -> TypedColumn<bool> {
TypedColumn::new("active")
}
pub fn age() -> TypedColumn<i32> {
TypedColumn::new("age")
}
}
}
Type-Safe Methods
| Method | Description | Example |
|---|---|---|
| `typed_eq(col, val)` | Type-safe equality | `typed_eq(users::active(), true)` |
| `typed_ne(col, val)` | Type-safe not-equal | `typed_ne(users::status(), "banned")` |
| `typed_gt(col, val)` | Type-safe greater-than | `typed_gt(users::age(), 18)` |
| `typed_lt(col, val)` | Type-safe less-than | `typed_lt(users::balance(), 0.0)` |
| `typed_gte(col, val)` | Greater-than or equal | `typed_gte(users::score(), 100)` |
| `typed_lte(col, val)` | Less-than or equal | `typed_lte(users::priority(), 5)` |
| `typed_column(col)` | Add typed column | `typed_column(users::email())` |
SQL to Rust Type Mapping
| SQL Type | Rust Type |
|---|---|
| `uuid` | `uuid::Uuid` |
| `text`, `varchar` | `String` |
| `integer`, `int4` | `i32` |
| `bigint`, `int8` | `i64` |
| `smallint`, `int2` | `i16` |
| `boolean`, `bool` | `bool` |
| `real`, `float4` | `f32` |
| `double precision`, `float8` | `f64` |
| `numeric`, `decimal` | `f64` |
| `timestamptz`, `timestamp` | `chrono::DateTime<Utc>` |
| `date` | `chrono::NaiveDate` |
| `jsonb`, `json` | `serde_json::Value` |
| `bytea` | `Vec<u8>` |
Reserved Keywords
Rust reserved keywords are automatically escaped:
| Column Name | Generated Function |
|---|---|
| `type` | `fn r#type()` |
| `fn` | `fn r#fn()` |
| `struct` | `fn r#struct()` |
Compile-Time Errors
Type mismatches are caught at compile time:
#![allow(unused)]
fn main() {
// ✅ Compiles - active is bool
query.typed_eq(users::active(), true);
// ❌ Compile error - age is i32, not string
query.typed_eq(users::age(), "eighteen");
// error[E0277]: the trait bound `&str: ColumnValue<i32>` is not satisfied
}
Integration with Existing Code
Type-safe methods can be mixed with dynamic methods:
#![allow(unused)]
fn main() {
let query = Qail::get(users::TABLE)
.typed_eq(users::active(), true) // Type-safe
.filter("created_at", Operator::Gte, "2024-01-01") // Dynamic
.typed_column(users::email());
}
Row-Level Security (RLS)
New in v0.15 — The first Rust PostgreSQL driver with built-in multi-tenant data isolation
QAIL injects tenant context at the AST level, ensuring every query is automatically scoped to the correct tenant, agent, or user — without manual WHERE clauses.
The Problem
Every multi-tenant SaaS app needs data isolation. Traditional solutions are fragile:
#![allow(unused)]
fn main() {
// ❌ Manual WHERE clauses — easy to forget, impossible to audit
let sql = "SELECT * FROM bookings WHERE tenant_id = $1";
// ❌ Every query must remember to add the filter
let sql = "SELECT * FROM invoices"; // BUG: leaks ALL tenant data
}
One missed WHERE clause = cross-tenant data leak. In a codebase with 200+ queries, this is a ticking time bomb.
The Solution: AST-Level RLS
QAIL solves this at the driver level:
#![allow(unused)]
fn main() {
use qail_core::rls::{RlsContext, SuperAdminToken};
// Create context from authenticated session
let ctx = RlsContext::tenant(tenant_id);
// Every query is automatically scoped
let query = Qail::get("bookings")
.columns(["id", "customer", "status"])
.with_rls(&ctx); // ← RLS injected at AST level
// Generated SQL: SELECT ... FROM bookings
// But the connection sets app.current_tenant_id/app.current_agent_id
// PostgreSQL RLS policy handles the rest
}
RlsContext Constructors
| Constructor | Scope | Use Case |
|---|---|---|
| `RlsContext::tenant(id)` | Single tenant | Tenant dashboard |
| `RlsContext::agent(id)` | Single agent | Agent portal |
| `RlsContext::tenant_and_agent(t, ag)` | Both | Agent within tenant |
| `RlsContext::operator(id)` | Legacy alias | Backward compatibility |
| `RlsContext::global()` | Shared/global rows (`tenant_id IS NULL`) | Public/reference data |
| `RlsContext::super_admin(token)` | Bypasses RLS | Internal platform-only ops |
Query Methods
#![allow(unused)]
fn main() {
let ctx = RlsContext::tenant_and_agent(tenant_id, agent_id);
ctx.has_tenant(); // true
ctx.has_agent(); // true
ctx.bypasses_rls(); // false
let global = RlsContext::global();
global.is_global(); // true
global.bypasses_rls();// false
let token = SuperAdminToken::for_system_process("admin_task");
let admin = RlsContext::super_admin(token);
admin.bypasses_rls(); // true
}
How It Works
┌─────────────────────────────────────────────────┐
│ Application Code │
│ │
│ Qail::get("bookings").with_rls(&ctx) │
│ ↓ │
│ AST Builder adds RLS context to query │
│ ↓ │
│ PgDriver::execute() │
│ ├─ set_config('app.current_tenant_id', '<uuid>', true) │
│ ├─ set_config('app.current_operator_id', '<uuid>', true) │
│ ├─ set_config('app.current_agent_id', '<uuid>', true) │
│ └─ Execute query on SAME connection │
│ ↓ │
│ PostgreSQL RLS Policy │
│ CREATE POLICY tenant_isolation ON bookings │
│ USING (tenant_id = current_setting( │
│ 'app.current_tenant_id')::uuid) │
│ ↓ │
│ Only matching rows returned │
└─────────────────────────────────────────────────┘
Compatibility note: the gateway/driver still writes legacy operator GUCs and accepts `operator_id` in JWTs while tenant-first naming is rolled out.
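On the database side, the policy shown in the diagram only takes effect once RLS is enabled on the table. A typical setup looks like this (standard PostgreSQL DDL; the policy name is illustrative):

```sql
ALTER TABLE bookings ENABLE ROW LEVEL SECURITY;
-- Without FORCE, the table owner bypasses RLS; enable it for defense in depth.
ALTER TABLE bookings FORCE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON bookings
  USING (tenant_id = current_setting('app.current_tenant_id', true)::uuid);
```

Passing `true` as the second argument to `current_setting` makes it return NULL instead of erroring when the GUC is unset, so a connection without tenant context simply matches no rows.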
Why AST-Level?
| Approach | Reliability | Audit | Performance |
|---|---|---|---|
| Manual WHERE | ❌ Easy to forget | ❌ Grep every query | ✅ Fast |
| ORM middleware | ⚠️ Can be bypassed | ⚠️ Framework-specific | ⚠️ Overhead |
| QAIL RLS | ✅ Structural | ✅ Single entry point | ✅ Native PG |
QAIL’s approach is structural — the RLS context is part of the query pipeline, not an afterthought bolted onto SQL strings.
Combined with TypedQail
RLS works with the typed API too:
#![allow(unused)]
fn main() {
use schema::{bookings, users};
let query = Qail::typed(bookings::table)
.join_related(users::table) // Compile-time safe join
.typed_column(bookings::id())
.typed_column(bookings::status())
.with_rls(&ctx) // Multi-tenant isolation
.build();
}
Comparison with Other Drivers
| Feature | QAIL | sqlx | Diesel | SeaORM |
|---|---|---|---|---|
| Built-in RLS | ✅ | ❌ | ❌ | ❌ |
| AST-level injection | ✅ | N/A | N/A | N/A |
| `with_rls()` API | ✅ | N/A | N/A | N/A |
| Session variable management | ✅ Auto | Manual | Manual | Manual |
| Connection-scoped context | ✅ | Manual | Manual | Manual |
QAIL is the only Rust PostgreSQL driver with built-in Row-Level Security support.
Centralized Configuration
Qail uses a single qail.toml file as the source of truth for all components. Configuration follows a layered priority:
Environment Variables > qail.toml > Hardcoded Defaults
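The priority chain amounts to a per-key lookup that tries each layer in order. The helper below is an illustrative sketch of that rule, not QAIL's internal loader.

```rust
// Sketch of the layering rule: environment variable first, then the
// qail.toml value, then the hardcoded default.
use std::env;

fn resolve(env_key: &str, toml_value: Option<&str>, default: &str) -> String {
    env::var(env_key)                             // 1. env var wins if set
        .ok()
        .or_else(|| toml_value.map(String::from)) // 2. then qail.toml
        .unwrap_or_else(|| default.to_string())   // 3. then the default
}

fn main() {
    // If DATABASE_URL is set it wins; otherwise the TOML value is used.
    let url = resolve(
        "DATABASE_URL",
        Some("postgres://localhost/mydb"),
        "postgres://localhost/postgres",
    );
    println!("{url}");
}
```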
Quick Start
Generate a config with qail init, or create qail.toml manually:
[project]
name = "my-app"
mode = "postgres"
[postgres]
url = "postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST:-localhost}:5432/mydb"
max_connections = 20
min_connections = 2
idle_timeout_secs = 600
acquire_timeout_secs = 30
connect_timeout_secs = 10
test_on_acquire = false
[postgres.rls]
default_role = "app_user"
super_admin_role = "super_admin"
[qdrant]
url = "http://localhost:6333"
grpc = "localhost:6334"
max_connections = 10
[gateway]
bind = "0.0.0.0:8080"
cors = true
policy = "policies.yaml"
max_result_rows = 10000
statement_timeout_ms = 30000
explain_max_cost = 100000.0
explain_max_rows = 1000000
max_expand_depth = 3
tenant_max_concurrent = 10
# Per-role guard overrides — override global defaults for specific roles.
# Any field not specified here falls back to the global [gateway] default.
[gateway.role_overrides.reporting]
max_result_rows = 100000
statement_timeout_ms = 120000
explain_max_cost = 500000.0
[gateway.role_overrides.admin]
max_expand_depth = 5
[gateway.cache]
enabled = true
max_entries = 1000
ttl_secs = 60
[[sync]]
source_table = "products"
trigger_column = "description"
target_collection = "products_search"
embedding_model = "candle:bert-base"
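The override-merge rule described in the comments above (any field a role leaves unset falls back to the global [gateway] value) can be sketched as option-typed overrides folded over global defaults. The struct and field names here are illustrative, not QAIL's actual config types.

```rust
// Global guard defaults, as in the [gateway] section.
#[derive(Clone, Debug, PartialEq)]
struct Guards {
    max_result_rows: u64,
    statement_timeout_ms: u64,
    max_expand_depth: u32,
}

// Per-role overrides, as in [gateway.role_overrides.<role>]:
// None means "not specified — use the global value".
#[derive(Default)]
struct GuardOverrides {
    max_result_rows: Option<u64>,
    statement_timeout_ms: Option<u64>,
    max_expand_depth: Option<u32>,
}

fn effective(global: &Guards, role: &GuardOverrides) -> Guards {
    Guards {
        max_result_rows: role.max_result_rows.unwrap_or(global.max_result_rows),
        statement_timeout_ms: role
            .statement_timeout_ms
            .unwrap_or(global.statement_timeout_ms),
        max_expand_depth: role.max_expand_depth.unwrap_or(global.max_expand_depth),
    }
}
```

Under this rule the admin role from the example keeps the global row and timeout limits while raising only max_expand_depth.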
Environment Variable Expansion
Use ${VAR} syntax inside TOML values to reference environment variables:
| Syntax | Behavior |
|---|---|
| ${VAR} | Required — errors if VAR is not set |
| ${VAR:-default} | Optional — uses default if VAR is not set |
| $$ | Literal $ character |
Example:
[postgres]
url = "postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST:-localhost}:5432/mydb"
Set DB_USER and DB_PASSWORD in your .env or shell. DB_HOST falls back to localhost if unset.
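The three expansion rules can be sketched as a small scanner. This is an illustrative implementation of the table's semantics, not QAIL's actual parser.

```rust
use std::env;

// Expands ${VAR}, ${VAR:-default}, and $$ per the table above.
// A required variable that is unset produces an error.
fn expand(input: &str) -> Result<String, String> {
    let mut out = String::new();
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c != '$' {
            out.push(c);
            continue;
        }
        match chars.peek() {
            // "$$" collapses to a literal '$'
            Some('$') => {
                chars.next();
                out.push('$');
            }
            Some('{') => {
                chars.next();
                // Collect up to (and consume) the closing '}'
                let body: String = chars.by_ref().take_while(|&c| c != '}').collect();
                // "${VAR:-default}" splits into a name and a fallback
                let (name, fallback) = match body.split_once(":-") {
                    Some((n, d)) => (n, Some(d)),
                    None => (body.as_str(), None),
                };
                match (env::var(name), fallback) {
                    (Ok(v), _) => out.push_str(&v),
                    (Err(_), Some(d)) => out.push_str(d),
                    (Err(_), None) => return Err(format!("{name} is not set")),
                }
            }
            // A lone '$' passes through unchanged
            _ => out.push('$'),
        }
    }
    Ok(out)
}
```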
Environment Variable Overrides
These env vars always override their TOML counterparts, regardless of ${VAR} expansion:
| Env Var | Overrides |
|---|---|
| DATABASE_URL | [postgres].url |
| QDRANT_URL | [qdrant].url |
| QAIL_BIND | [gateway].bind |
This lets you keep one qail.toml across dev/staging/prod and switch databases purely via env vars.
Usage in Rust
PostgreSQL Pool (one-liner)
#![allow(unused)]
fn main() {
use qail_pg::driver::pool::PgPool;
let pool = PgPool::from_config().await?;
}
Manual Config Loading
#![allow(unused)]
fn main() {
use qail_core::config::QailConfig;
use qail_pg::driver::pool::PoolConfig;
let qail = QailConfig::load()?; // reads ./qail.toml
let pg_config = PoolConfig::from_qail_config(&qail)?; // parse postgres section
let pool = PgPool::connect(pg_config).await?;
}
Qdrant
#![allow(unused)]
fn main() {
use qail_qdrant::pool::PoolConfig as QdrantPoolConfig;
let qail = QailConfig::load()?;
if let Some(config) = QdrantPoolConfig::from_qail_config(&qail) {
let pool = QdrantPool::new(config).await?;
}
}
Gateway
#![allow(unused)]
fn main() {
use qail_gateway::config::GatewayConfig;
let qail = QailConfig::load()?;
let gw = GatewayConfig::from_qail_config(&qail);
}
Section Reference
| Section | Required | Description |
|---|---|---|
| [project] | Yes | Project name, mode (postgres/qdrant/hybrid), schema path |
| [postgres] | Yes | Database URL and pool tuning |
| [postgres.rls] | No | RLS role names |
| [qdrant] | No | Qdrant REST + gRPC endpoints |
| [gateway] | No | HTTP server bind, CORS, cache |
| [[sync]] | No | Vector sync rules (hybrid mode) |
Generated Files
qail init creates:
- qail.toml — project config with commented-out optional sections
- .env.example — documents all supported env var overrides
CLI Commands
The qail command-line tool — v0.26.2.
Installation
cargo install qail
Commands
qail init
Initialize a new QAIL project. Auto-detects running PostgreSQL instances on the host and in Docker or Podman containers.
# Interactive mode — scans for running databases
qail init
# 🪝 QAIL Project Initialization
# Scanning for PostgreSQL instances...
# ✓ Found 2 instance(s):
# 1. 🖥 host localhost:5432 (host)
# 2. 🐳 docker localhost:5433 (docker) — my-pg
# 3. Enter URL manually
# Select [1-3]:
# Non-interactive mode (CI/scripting)
qail init --name myapp --mode postgres --url postgres://localhost/mydb
qail init --name myapp --mode hybrid --url postgres://localhost/mydb --deployment docker
Options:
- --name <NAME>: Project name
- --mode <MODE>: Database mode (postgres, qdrant, hybrid)
- --url <URL>: Database URL (skips interactive prompt)
- --deployment <TYPE>: Deployment type (host, docker, podman)
Generates qail.toml and necessary migration files.
qail exec
Execute QAIL statements against a database:
# Inline queries
qail exec "get users'id'email[active = true]" --url postgres://...
qail exec "add users fields name, email values 'Alice', 'a@test.com'" --url postgres://... --tx
qail exec "set users[id = 1] fields name = 'Bob'" --url postgres://...
# Count rows
qail exec "cnt orders[status = 'paid']" --url postgres://...
# → SELECT COUNT(*) FROM orders WHERE status = 'paid'
# JSON output (pipe-friendly)
qail exec "get users" --url postgres://... --json
qail exec "get users" --url postgres://... --json | jq '.[].email'
# From file
qail exec -f seed.qail --url postgres://...
# Dry-run (preview generated SQL)
qail exec "get users'*'" --dry-run
# 📋 Parsed 1 QAIL statement(s)
# 🔍 DRY-RUN MODE — Generated SQL:
# SELECT * FROM users
# With SSH tunnel
qail exec "get users" --url postgres://remote/db --ssh user@bastion
Syntax:
add <table> fields <col1>, <col2> values <val1>, <val2>
set <table>[id = $1] fields name = 'new', updated_at = now
del <table>[id = $1]
get <table>'id'name[active = true]
cnt <table>[active = true]
Value Types:
| Type | Examples |
|---|---|
| Strings | 'hello', "world" |
| Numbers | 42, 3.14, -1 |
| Booleans | true, false |
| Null | null |
| Parameters | $1, $2, :name |
| Intervals | 24h, 7d, 30m |
| JSON | ["a", "b"], {"key": "val"} |
| Timestamp | now |
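As a sketch of how the interval literals in the table (24h, 7d, 30m) might normalize to seconds. This is illustrative only; QAIL's actual value parser may differ.

```rust
// Maps an interval literal like "24h" to a number of seconds.
// Returns None for anything that is not <digits><unit>.
fn interval_secs(lit: &str) -> Option<u64> {
    if lit.len() < 2 || !lit.is_ascii() {
        return None; // need at least one digit plus a one-character unit
    }
    let (num, unit) = lit.split_at(lit.len() - 1);
    let n: u64 = num.parse().ok()?;
    match unit {
        "s" => Some(n),
        "m" => Some(n * 60),
        "h" => Some(n * 3_600),
        "d" => Some(n * 86_400),
        _ => None,
    }
}
```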
Options:
- -f, --file <FILE>: Path to .qail file with statements
- -u, --url <URL>: Database connection URL
- --json: Output SELECT results as JSON array
- --tx: Wrap all statements in a transaction
- --dry-run: Preview generated SQL without executing
- --ssh <USER@HOST>: SSH tunnel via bastion host
qail pull
Extract schema from a live database:
qail pull postgres://user:pass@localhost/db > schema.qail
qail diff
Compare two schemas or detect drift against a live database:
# Compare two schema files
qail diff old.qail new.qail
qail diff old.qail new.qail --format json
# Live drift detection (introspects running database)
qail diff _ schema.qail --live --url postgres://localhost/mydb
# Drift detection: [live DB] → schema.qail
# → Introspecting live database...
# 80 tables, 287 indexes introspected
# ✅ No drift detected — live DB matches schema file.
Options:
- --format <FMT>: Output format (sql, json, pretty)
- --live: Use live database introspection as “old” schema
- --url <URL>: Database URL (required with --live)
qail check
Validate a schema file or preview migration safety:
# Validate schema
qail check schema.qail
# ✓ Schema is valid
# Tables: 80 | Columns: 1110 | Indexes: 287
# ✓ 82 primary key(s)
# Validate modular schema directory
qail check schema/
# Fallback mode: if schema.qail is missing, sibling schema/ is used
qail check schema.qail
# Check migration safety
qail check old.qail:new.qail
# ✓ Both schemas are valid
# Migration preview: 4 operation(s)
# ✓ 3 safe operation(s)
# ⚠️ 1 reversible operation(s)
Migrate Commands
qail migrate status
View migration history with rich tabular output:
qail migrate status postgres://...
# 📋 Migration Status — mydb
# ┌──────────┬────────────────────┬─────────────────────┬──────────────┐
# │ Version │ Name │ Applied At │ Checksum │
# ├──────────┼────────────────────┼─────────────────────┼──────────────┤
# │ 001 │ qail_queue │ 2026-02-01 10:00:00 │ a3b8d1... │
# │ 002 │ add_users │ 2026-02-05 14:32:00 │ f81d4f... │
# └──────────┴────────────────────┴─────────────────────┴──────────────┘
qail migrate up
Apply migrations:
qail migrate up v1.qail:v2.qail postgres://...
# With codebase check (warns about breaking references)
qail migrate up v1.qail:v2.qail postgres://... -c ./src
qail migrate down
Rollback migrations:
qail migrate down v1.qail:v2.qail postgres://...
qail migrate plan
Preview migration SQL without executing (dry-run):
qail migrate plan old.qail:new.qail
# 📋 Migration Plan (dry-run)
# ┌─ UP (2 operations) ─────────────────────────────────┐
# │ 1. ALTER TABLE users ADD COLUMN verified BOOLEAN
# │ 2. CREATE INDEX idx_users_email ON users (email)
# └─────────────────────────────────────────────────────┘
# Save to file
qail migrate plan old.qail:new.qail --output migration.sql
qail migrate analyze
Analyze codebase for breaking changes before migrating:
qail migrate analyze old.qail:new.qail --codebase ./src
# 🔍 Migration Impact Analyzer
# Scanning codebase... Found 395 query references
#
# ⚠️ BREAKING CHANGES DETECTED
# ┌─ DROP TABLE promotions (6 references) ─────────────┐
# │ ❌ src/repository/promotion.rs:89 → INSERT INTO...
# │ ❌ src/repository/promotion.rs:264 → SELECT...
# └────────────────────────────────────────────────────┘
qail migrate apply
Apply file-based migrations from migrations/ directory:
qail migrate apply
# → Found 1 migration(s) to apply
# ✓ Connected to mydb
# → 001_qail_queue.up.qail... ✓
# ✓ All migrations applied successfully!
qail migrate reset
Nuclear option — drop all objects, clear history, re-apply target schema:
qail migrate reset schema.qail postgres://...
# ⚠️ This will DROP all tables, clear migration history, and recreate from schema.
# Phase 1: DROP all tables...
# Phase 2: CLEAR migration history...
# Phase 3: CREATE from schema...
# ✓ Reset complete
qail migrate shadow
Test migrations against a shadow database:
qail migrate shadow v1.qail:v2.qail postgres://shadow-db/...
Other Commands
qail explain
Parse and explain a QAIL query:
qail "get users'id'email[active = true]"
# SELECT id, email FROM users WHERE active = true
qail repl
Interactive QAIL REPL — type queries, see SQL in real-time:
qail repl
# 🪝 QAIL REPL v0.26.2
# Type QAIL queries, see SQL output.
# qail> get users[active = true]
# → SELECT * FROM users WHERE active = true
qail types
Generate typed Rust schema from .qail file:
qail types schema.qail > src/generated/schema.rs
qail watch
Watch schema file for changes and auto-generate migrations:
qail watch schema.qail --url postgres://... --auto-apply
qail lint
Check schema for best practices:
qail lint schema.qail
# 🔍 Schema Linter
# ⚠ 144 warning(s)
# ℹ 266 info(s)
| Check | Level | Description |
|---|---|---|
| Missing primary key | 🔴 ERROR | Every table needs a PK |
| Missing created_at/updated_at | ⚠️ WARNING | Audit trail columns |
| _id column without references() | ⚠️ WARNING | FK integrity |
| Uppercase table names | ⚠️ WARNING | Use snake_case |
qail sync generate
Generate trigger migrations from [[sync]] rules in qail.toml (Hybrid mode):
qail sync generate
# ✓ Created migrations/002_qail_sync_triggers.up.qail
qail worker
Start the background worker to sync PostgreSQL → Qdrant (Hybrid mode):
qail worker --interval 1000 --batch 100
Global Options
| Flag | Description |
|---|---|
| -d, --dialect | Target SQL dialect (pg, mysql) |
| -f, --format | Output format (sql, ast, json) |
| -v, --verbose | Verbose output |
| --version | Show version |
| --help | Show help |
API Reference
Full API documentation is generated from source code.
Rust Crates
| Crate | Description | Docs |
|---|---|---|
| qail-core | AST, Builder, Parser | docs.rs |
| qail-pg | PostgreSQL driver | docs.rs |
| qail-gateway | Auto-REST gateway | docs.rs |
| qail | CLI and tooling | docs.rs |
SDKs
| SDK | Status | Distribution |
|---|---|---|
| TypeScript (@qail/client) | Supported | npm |
| Swift (sdk/swift) | Supported | Source package |
| Kotlin (sdk/kotlin) | Supported | Source module |
| Node.js native binding | Deferred | Not published |
| WASM binding | Deferred | Not published |
Generate Local Docs
cargo doc --no-deps --open
Key Types
qail-core
- QailCmd - Query command builder
- Operator - Comparison operators
- SortOrder - ASC/DESC
- Expr - Expression AST nodes
qail-pg
- PgDriver - Database connection
- PgPool - Connection pool
- PgRow - Result row
- PgError - Error types
Source Code
View the source on GitHub:
Changelog
For the full project changelog, see the repository file:
Current Highlights (v0.26.2)
- PostgreSQL startup now requests protocol 3.2 by default, with a one-shot downgrade retry to 3.0 on explicit protocol-version rejection.
- Native startup decode/handling for the backend NegotiateProtocolVersion ('v') message is now implemented.
- Cancel-key handling is bytes-native (4..=256 bytes), with public bytes-based cancel APIs for protocol 3.2 correctness.
- Legacy i32 cancel-key accessors and cancel APIs are retained as compatibility wrappers for 4-byte key behavior.
- Hardening/integration tests now cover protocol negotiation, downgrade boundaries, and startup/copy/replication harnesses under the 3.2 default.
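A minimal sketch of the bytes-native cancel-key shape described above, alongside the legacy i32 wrapper behavior. The BackendKey type and method names are illustrative, not qail-pg's actual API.

```rust
// Protocol 3.2 allows a variable-length cancel key; the legacy i32
// accessor is only meaningful for the old fixed 4-byte keys.
struct BackendKey {
    bytes: Vec<u8>, // 4..=256 bytes under protocol 3.2
}

impl BackendKey {
    // Rejects keys outside the protocol-3.2 length bounds.
    fn new(bytes: Vec<u8>) -> Option<Self> {
        (4..=256).contains(&bytes.len()).then(|| BackendKey { bytes })
    }

    // Legacy compatibility wrapper: only a 4-byte key fits in an i32;
    // longer 3.2-era keys yield None instead of being truncated.
    fn as_i32(&self) -> Option<i32> {
        let arr: [u8; 4] = self.bytes.as_slice().try_into().ok()?;
        Some(i32::from_be_bytes(arr))
    }
}
```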