FAQ

Common questions about using walrust.

Walrust is a lightweight SQLite replication tool written in Rust. It continuously backs up SQLite databases to S3-compatible storage by watching WAL (Write-Ahead Log) files and uploading changes as LTX files.

Both walrust and Litestream use WAL-based replication with the LTX file format. Key differences:

Aspect             walrust   Litestream
Memory (1 DB)      19 MB     36 MB
Memory (100 DBs)   20 MB     160 MB
Language           Rust      Go
Config format      TOML      YAML

See Migration from Litestream for detailed comparison.

Walrust is actively developed and used in production environments. Reliability measures include:

  • Unit and integration tests for core functionality
  • Chaos testing with fault injection (walrust-dst)
  • Property-based testing for invariants
  • The same LTX format as Litestream
  • SHA256 checksums for data integrity verification

As with any backup tool, test restores regularly and maintain a disaster recovery plan.

Walrust works with any SQLite database in WAL mode. This includes:

  • Raw SQLite databases
  • Turso local databases
  • Python apps using sqlite3
  • Node.js apps using better-sqlite3
  • Any application using SQLite

No. Walrust requires WAL mode to capture incremental changes. Enable it with:

PRAGMA journal_mode=WAL;
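For example, from Python's built-in sqlite3 module (the database path is illustrative; the setting is persistent, so it only needs to be run once per database file):

```python
import sqlite3

# Switch the database to WAL mode; SQLite reports the resulting mode back.
conn = sqlite3.connect("app.db")
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # prints "wal"
conn.close()
```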

No. Walrust works with any S3-compatible storage:

  • AWS S3
  • Tigris (Fly.io’s object storage)
  • Cloudflare R2
  • MinIO (self-hosted)
  • Backblaze B2
  • DigitalOcean Spaces

See S3 Providers for setup guides.

Three ways:

  1. CLI arguments (quick, one-off commands)
  2. Environment variables (for credentials)
  3. Config file (walrust.toml for complex setups)

See Configuration Reference for all options.

Yes! Pass multiple paths:

walrust watch app.db users.db analytics.db --bucket my-backups

Or use wildcards in a config file:

[[databases]]
path = "/data/*.db"

Walrust uses one process for all databases with minimal memory overhead.

Default: every 3600 seconds (1 hour). Configure with:

[sync]
snapshot_interval = 1800 # 30 minutes

You can also trigger snapshots based on:

  • WAL frame count (max_changes)
  • Idle time (on_idle)
  • Time since last change (max_interval)
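Combining these in walrust.toml might look like the sketch below. snapshot_interval is documented above; the exact value types for max_changes, on_idle, and max_interval are assumptions:

```toml
[sync]
snapshot_interval = 1800   # time-based: every 30 minutes
max_changes = 10000        # frame-based: snapshot after 10k WAL frames
on_idle = true             # snapshot when the database goes idle
max_interval = 7200        # at most 2 hours between snapshots with changes
```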

How much data will I lose if my server crashes?


Depends on wal_sync_interval:

  • Default (1 second): Up to 1 second of data
  • Aggressive (0.5 seconds): Up to 0.5 seconds of data

WAL changes are batched and uploaded on this interval. Lower values = less data loss but more S3 API calls.
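For example (assuming the option lives in the [sync] table like the other sync settings shown on this page):

```toml
[sync]
wal_sync_interval = 0.5  # seconds; halves the worst-case data-loss window
```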

walrust restore mydb --bucket my-backups -o restored.db

This downloads the latest snapshot and applies all incremental LTX files.

Can I restore to a specific point in time?


Yes, using point-in-time recovery (PITR):

walrust restore mydb \
--bucket my-backups \
-o restored.db \
--point-in-time "2024-01-15T10:30:00Z"

Walrust will restore to the closest transaction before that timestamp.

Run periodic test restores:

test-restore.sh
#!/bin/bash
set -euo pipefail
walrust restore mydb --bucket my-backups -o /tmp/test.db
# integrity_check prints "ok" on success; sqlite3 exits 0 even when the
# check finds corruption, so compare the output rather than the exit code.
result=$(sqlite3 /tmp/test.db "PRAGMA integrity_check;")
if [ "$result" = "ok" ]; then
  echo "Backup verified successfully"
  rm /tmp/test.db
else
  echo "Backup verification FAILED"
  exit 1
fi

Schedule this with cron or CI.
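A cron entry for a daily check might look like this (the script path and log location are illustrative):

```
# Run the restore check every day at 03:00
0 3 * * * /usr/local/bin/test-restore.sh >> /var/log/walrust-verify.log 2>&1
```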

Use the verify command:

walrust verify mydb --bucket my-backups

This checks:

  • File existence
  • SHA256 checksums
  • TXID continuity
  • LTX header validity

You can also enable automated verification:

[sync]
validation_interval = 86400 # Verify daily

It depends on:

  • Database size
  • Write rate
  • Retention policy

Example: A 100 MB database with moderate writes and default retention (24 hourly + 7 daily + 12 weekly + 12 monthly snapshots) uses roughly:

~100 MB (latest snapshot)
+ ~50 MB (hourly incrementals)
+ ~300 MB (older snapshots)
= ~450 MB total

To keep storage costs down:

  1. Aggressive retention:

[retention]
hourly = 6   # Keep only last 6 hours
daily = 3    # Last 3 days
weekly = 4   # Last 4 weeks
monthly = 3  # Last 3 months

  2. Auto-compaction:

[sync]
compact_after_snapshot = true

  3. Manual compaction:

walrust compact mydb --bucket my-backups --force

Walrust uses Grandfather-Father-Son (GFS) rotation to keep storage bounded:

Tier      Default   Keeps
Hourly    24        Last 24 hours
Daily     7         One per day for a week
Weekly    12        One per week for 12 weeks
Monthly   12        One per month beyond that

Run walrust compact to delete old snapshots according to this policy.

In transit: Yes, HTTPS by default.

At rest: Depends on your S3 provider:

  • AWS S3: Enable server-side encryption (SSE-S3 or SSE-KMS)
  • Tigris: Enabled by default
  • MinIO: Configure encryption in MinIO settings

Walrust doesn’t do client-side encryption (yet). Use your S3 provider’s encryption features.

  • Single database: ~19 MB
  • 10 databases: ~19 MB
  • 100 databases: ~20 MB

Walrust shares S3 clients and file watchers. Memory remains relatively constant regardless of database count.

  • Idle: <1%
  • Active syncing: 2-5% on modern hardware
  • High write rate (10K+ writes/sec): 10-20%

If CPU is high, increase monitor_interval or wal_sync_interval.
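A sketch of those two knobs (their placement under [sync] and the units are assumptions):

```toml
[sync]
monitor_interval = 2.0    # check WAL files less often
wal_sync_interval = 2.0   # batch more WAL frames per upload
```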

Can walrust keep up with high write rates?


Yes. Benchmarks show walrust handles:

  • 10K+ writes/sec with 500 concurrent databases
  • 4% average CPU usage
  • <1 second sync latency (P95)

See Benchmark Results for details.

No. Walrust watches the WAL file externally and doesn’t interfere with SQLite operations. Your app continues writing normally.

Read replicas are local databases that poll S3 for changes and stay in sync with the primary database. Useful for:

  • Offloading read queries
  • Running analytics without affecting production
  • Disaster recovery (warm standby)
walrust replicate s3://my-bucket/mydb --local replica.db --interval 5s

This polls S3 every 5 seconds, downloads new LTX files, and applies them to the local database.

Freshness = wal_sync_interval (primary) + interval (replica) + S3 propagation time

Example:

  • Primary syncs every 1 second
  • Replica polls every 5 seconds
  • S3 eventual consistency: ~1 second

Total lag: ~7 seconds (P95)

For near-real-time replication, use --interval 1s.
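The arithmetic above can be sketched as a small helper (illustrative only, not part of the walrust API):

```python
def worst_case_lag(wal_sync_interval: float,
                   replica_poll_interval: float,
                   s3_propagation: float) -> float:
    """Worst-case replica staleness in seconds, per the formula above."""
    return wal_sync_interval + replica_poll_interval + s3_propagation

# Defaults from the example: 1 s primary sync + 5 s poll + ~1 s S3 propagation
print(worst_case_lag(1.0, 5.0, 1.0))  # 7.0
```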

No. Replicas are read-only. Walrust will reject writes to replica databases to prevent conflicts.

Install via pip:

pip install walrust

Use the Python API:

from walrust import Walrust

# Create instance
ws = Walrust("s3://my-bucket", endpoint="https://fly.storage.tigris.dev")

# Snapshot
ws.snapshot("/path/to/app.db")

# List databases
dbs = ws.list()

# Restore
ws.restore("app", "/path/to/restored.db")

See Python API Reference for full documentation.

Yes! Same Python API works in notebooks:

from walrust import snapshot, restore

# Backup
snapshot("analysis.db", "s3://my-bucket")

# Later... restore
restore("analysis", "analysis-restored.db", "s3://my-bucket")

Use systemd, Docker, or Kubernetes. See Deployment Guide for examples.

Recommended: systemd

[Service]
ExecStart=/usr/local/bin/walrust watch /data/app.db --bucket my-backups
Restart=always
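A fuller unit file might look like this. Only the ExecStart and Restart lines come from this page; the [Unit]/[Install] sections, the credentials file path, and RestartSec are assumptions to adapt:

```ini
[Unit]
Description=walrust SQLite replication
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/walrust watch /data/app.db --bucket my-backups
Restart=always
RestartSec=5
# S3 credentials (AWS_ACCESS_KEY_ID etc.) via an environment file
EnvironmentFile=/etc/walrust/credentials.env

[Install]
WantedBy=multi-user.target
```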

Yes. Mount your database volume:

services:
  walrust:
    image: walrust
    command: watch /data/app.db --bucket my-backups
    volumes:
      - app-data:/data:ro
    environment:
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

Should walrust run as a separate process or in-app?


Separate process (recommended):

  • Easier to restart independently
  • Simpler deployment
  • Works with any language

In-app (Python only):

  • Fewer moving parts
  • Tighter integration
  • Good for simple deployments

For production, run walrust as a separate process (sidecar, systemd service, etc.).

  1. Check credentials:

echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY

  2. Verify bucket exists:

walrust list --bucket my-backups

  3. Enable debug logging:

export RUST_LOG=walrust=debug
walrust watch app.db --bucket my-backups

See Troubleshooting Guide for more.

Enable logging:

export RUST_LOG=walrust=info
walrust watch app.db --bucket my-backups

Log levels: error, warn, info, debug, trace

Check exit codes in your systemd service:

sudo journalctl -u walrust -n 50

Walrust uses structured exit codes (0-6) to indicate different error types. See Troubleshooting for details.

Can I use walrust with Raft or distributed SQLite?


No. Walrust is designed for single-node SQLite databases. For distributed setups, consider:

  • rqlite (Raft-based distributed SQLite)
  • LiteFS (FUSE-based replication)
  • Primary-replica with walrust (one primary, multiple read replicas)

Not built-in. Use your S3 provider’s server-side encryption:

  • AWS S3: SSE-S3 or SSE-KMS
  • Tigris: Enabled by default
  • MinIO: Configure via encryption settings

Client-side encryption may be added in a future version.

Yes! Walrust is open source (Apache 2.0). See the GitHub repo for:

  • Issues and feature requests
  • Pull requests
  • Development setup