🦀 Rust / Systems February 1, 2026 8 min read

From Compile-Time SQL to Dynamic Entities: Why I Switched to SeaORM

#rust#seaorm#sqlx#architecture#database#backend#systems-engineering
BASH
$ cargo check
error: environment variable `DATABASE_URL` not set

It was a Friday afternoon. I had just written a new migration, updated the schema, and tried to run cargo check before pushing. The CI was already red. The .sqlx cache was out of date. I hadn’t run cargo sqlx prepare.

Again.

That was the third time that week. That was the moment I started seriously looking at SeaORM.

I’ll say it plainly: sqlx is brilliant. Raw SQL, verified against a live database at compile time. If it compiles, the query is valid. The types align. The columns exist. It feels like Rust’s promise applied directly to your database layer.

But software engineering isn’t about absolute truths. It’s about shifting bottlenecks.

As my backend grew from a handful of endpoints into a full IoT device management platform — dynamic filters, partial updates, complex relations — the friction points of sqlx started to cost more than its compile-time guarantees were worth. I was writing boilerplate to handle a simple PATCH request. I was fighting SQLX_OFFLINE metadata in CI. I was spending time on the query builder instead of the domain logic.

So I migrated to SeaORM. Here’s what I actually learned.

The Real Price of sqlx at Scale

1. The Compile-Time Tax

The query!() macro requires an active database connection during compilation — or a pre-generated .sqlx directory in offline mode. For a small project, fine. For a growing backend with dozens of endpoints and schema changes every few days, this becomes a genuine pain.

Every schema change requires a perfectly synced dev database, a fresh migration run, and then cargo sqlx prepare before you push. Forget that step — and I did, repeatedly — and your CI pipeline fails.
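The ritual, for reference (assuming sqlx-cli is installed and DATABASE_URL points at your dev database):

```shell
# Apply the new migration to the dev database
cargo sqlx migrate run

# Regenerate the offline query metadata in .sqlx/
cargo sqlx prepare

# Commit the refreshed cache, or CI (running with SQLX_OFFLINE=true) goes red
git add .sqlx && git commit -m "chore: refresh sqlx query cache"
```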

Worse, procedural macros that touch the network slow down compile times. That tight feedback loop that makes Rust enjoyable? You start losing it.

2. Dynamic Queries Are a Nightmare

This is where sqlx really breaks down for me. The query!() macro only works for static SQL strings. The moment you need optional filters — say, a REST endpoint where status, firmware_version, and location_id are all optional — you can’t use it.
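To be fair, the static case is where the macro shines. A sketch, assuming a devices table with a non-null text status column — and note that this only compiles with DATABASE_URL set or a .sqlx cache present, which is exactly the tax described above:

```rust
use sqlx::PgPool;

// Compile-time checked: sqlx verifies this SQL against the dev database
// (or the .sqlx offline cache) and generates a typed anonymous record,
// so `row.status` below is a real, type-checked field.
async fn device_status(pool: &PgPool, device_id: i64) -> Result<String, sqlx::Error> {
    let row = sqlx::query!(
        "SELECT status FROM devices WHERE id = $1",
        device_id
    )
    .fetch_one(pool)
    .await?;
    Ok(row.status)
}
```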

You fall back to QueryBuilder:

RUST
// The sqlx way: manual string assembly
use sqlx::{QueryBuilder, Postgres};

let mut query_builder: QueryBuilder<Postgres> = QueryBuilder::new(
    "SELECT id, mac_address, status, firmware_version FROM devices WHERE 1=1"
);

if let Some(status) = filter.status {
    query_builder.push(" AND status = ");
    query_builder.push_bind(status);
}

if let Some(firmware) = filter.firmware_version {
    query_builder.push(" AND firmware_version = ");
    query_builder.push_bind(firmware);
}

if let Some(loc_id) = filter.location_id {
    query_builder.push(" AND location_id = ");
    query_builder.push_bind(loc_id);
}

// All compile-time type checking is gone at this point.
let devices = query_builder.build_query_as::<Device>()
    .fetch_all(&pool)
    .await?;

You’re manually appending strings. Managing spaces. Hoping you didn’t miss a comma. The compile-time safety that made sqlx attractive in the first place is completely gone — replaced with boilerplate that’s hard to read and easy to break.

Enter SeaORM

SeaORM isn’t a replacement for sqlx. It’s built on top of it. The stack looks like this:

  • SeaORM — the entity-relational layer. Macros, ActiveModel, relations.
  • SeaQuery — the dynamic SQL query builder. Constructs query ASTs independent of any specific database engine.
  • sqlx — the underlying driver. Connection pooling, async runtime, wire protocol.

You still get sqlx’s battle-tested connection handling. You interact with it through a Rust-native type system instead of raw strings.

Dynamic Queries, Done Right

Here’s that same filter logic in SeaORM:

RUST
use sea_orm::{EntityTrait, QueryFilter, Condition, ColumnTrait};

let mut condition = Condition::all();

if let Some(status) = filter.status {
    condition = condition.add(device::Column::Status.eq(status));
}

if let Some(firmware) = filter.firmware_version {
    condition = condition.add(device::Column::FirmwareVersion.eq(firmware));
}

if let Some(loc_id) = filter.location_id {
    condition = condition.add(device::Column::LocationId.eq(loc_id));
}

let devices = Device::find()
    .filter(condition)
    .all(&db)
    .await?;

No raw strings. device::Column::Status is a generated enum — if I rename the column and regenerate entities, this code fails to compile. I’ve got my type safety back, without writing a single SQL fragment by hand.

Under the hood, SeaORM passes this to sea-query, which builds an Abstract Syntax Tree and then compiles valid SQL for whatever backend you’re targeting — Postgres, MySQL, or SQLite. The output is guaranteed to be syntactically correct.

Solving the PATCH Problem

Reading data dynamically in sqlx is annoying. Writing data dynamically is worse.

A PATCH request might update only status. Or only firmware_version. Or both. In sqlx, you’re back to building dynamic UPDATE statements, tracking which fields were provided, managing commas, binding values in the right order. It’s error-prone every single time.
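To make that concrete, here is roughly what a two-field PATCH looks like with QueryBuilder. A sketch, with a hypothetical DevicePatch payload:

```rust
use sqlx::{PgPool, Postgres, QueryBuilder};

struct DevicePatch {
    status: Option<String>,
    firmware_version: Option<String>,
}

async fn patch_device(pool: &PgPool, id: i64, payload: DevicePatch) -> Result<(), sqlx::Error> {
    let mut qb: QueryBuilder<Postgres> = QueryBuilder::new("UPDATE devices SET ");
    // Separated(", ") handles the commas between SET clauses for us,
    // but everything else is still manual string assembly.
    let mut sep = qb.separated(", ");

    if let Some(status) = payload.status {
        sep.push("status = ");
        sep.push_bind_unseparated(status);
    }
    if let Some(firmware) = payload.firmware_version {
        sep.push("firmware_version = ");
        sep.push_bind_unseparated(firmware);
    }
    // Edge case you must guard by hand: if BOTH fields are None, this
    // produces invalid SQL ("UPDATE devices SET WHERE ...").

    qb.push(" WHERE id = ");
    qb.push_bind(id);
    qb.build().execute(pool).await?;
    Ok(())
}
```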

SeaORM solves this with ActiveModel and ActiveValue:

RUST
// Simplified; SeaORM's actual enum bounds V: Into<Value>.
pub enum ActiveValue<V> {
    Set(V),      // This field was explicitly provided — include it in the query
    NotSet,      // Not in the request — skip it entirely
    Unchanged(V), // Loaded from DB but not modified
}

When a PATCH request comes in, I convert the DTO into an ActiveModel, marking only the fields that were actually sent:

RUST
use sea_orm::{ActiveModelTrait, Set, ActiveValue::NotSet};

let mut device_active_model = device::ActiveModel {
    id: Set(payload.id),
    status: if let Some(s) = payload.status { Set(s) } else { NotSet },
    firmware_version: if let Some(f) = payload.firmware_version { Set(f) } else { NotSet },
    ..Default::default()
};

// SeaORM generates UPDATE devices SET status = $1 WHERE id = $2
// (only the Set fields — nothing more)
let updated_device = device_active_model.update(&db).await?;

SeaORM inspects the ActiveModel at runtime and generates the minimal UPDATE statement. If only status is Set, that’s the only column in the query. No manual tracking. No dynamic SQL assembly. It just works.

What Happened to My CI/CD

Because SeaORM’s entity macros (#[derive(DeriveEntityModel)]) expand purely from your Rust source rather than connecting to a live database at compile time, my compile times dropped. I stopped needing SQLX_OFFLINE caches. I stopped needing a services: postgres block in GitHub Actions just to run cargo clippy.

That alone was worth a lot.

But the biggest win was in testing. Testing database logic in sqlx basically requires integration tests — spin up a transaction, run queries against a real database, roll back. Slow. Occasionally flaky.

SeaORM ships a MockDatabase. Because all DB interaction goes through ConnectionTrait, I can inject a mock connection that expects specific operations and returns predefined results:

RUST
#[cfg(test)]
mod tests {
    use sea_orm::{DatabaseBackend, MockDatabase, MockExecResult};
    use super::*;

    #[tokio::test]
    async fn test_update_device_status() {
        let db = MockDatabase::new(DatabaseBackend::Postgres)
            .append_exec_results([
                MockExecResult {
                    last_insert_id: 0,
                    rows_affected: 1,
                },
            ])
            .into_connection();

        let result = update_device_status(&db, 1, "ONLINE".to_string()).await;
        assert!(result.is_ok());
    }
}

Unit tests that run in milliseconds. No Docker required. Real coverage of my service layer without touching a database.

The Trade-Offs (Keeping It Real)

No abstraction is free. Here’s what you’re actually giving up.

Macro complexity. SeaORM leans hard on the macro system. When you hit a type mismatch inside a derived macro, the compiler errors are not fun. It takes some time before reading them feels natural.

Runtime allocation overhead. SeaORM constructs an AST at runtime before compiling it to SQL. For 99% of web applications, this is completely irrelevant — nanoseconds against network latency. But if you’re building an ultra-low-latency system and every microsecond matters, raw sqlx is still your friend.

Complex analytical queries. For heavy reporting — window functions, deep GROUP BYs, multi-level CTEs — ORMs tend to get in the way. SeaORM has an escape hatch: Statement::from_sql_and_values for raw SQL, or drop down to sea-query directly, or grab the underlying sqlx::Pool and use query! for that one specific endpoint where you need it.
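For reference, the escape hatch looks like this. A sketch with a hypothetical per-location aggregate; the devices table and its columns are assumed from the earlier examples:

```rust
use sea_orm::{DatabaseConnection, DbBackend, DbErr, FromQueryResult, Statement};

// Plain result struct: raw SQL rows are mapped into it by column name.
#[derive(Debug, FromQueryResult)]
struct DeviceCount {
    location_id: i64,
    total: i64,
}

// Raw SQL with bound parameters, executed through the same connection.
async fn busy_locations(db: &DatabaseConnection) -> Result<Vec<DeviceCount>, DbErr> {
    DeviceCount::find_by_statement(Statement::from_sql_and_values(
        DbBackend::Postgres,
        r#"SELECT location_id, COUNT(*) AS total
           FROM devices
           GROUP BY location_id
           HAVING COUNT(*) > $1"#,
        [10.into()],
    ))
    .all(db)
    .await
}
```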

Wrapping Up

Choosing between sqlx and SeaORM isn’t a performance debate. It’s a lifecycle debate.

sqlx optimizes for strictness at the database boundary. It’s excellent for smaller projects or services with stable, static query patterns where you want maximum visibility into the SQL hitting your database.

SeaORM optimizes for maintainability at scale. Dynamic filtering, partial updates, proper unit testing — these stop being friction and start being solved problems. The type safety moves from SQL strings into the Rust AST, and you keep the compiler guarantees without the boilerplate.

The code I don’t have to write is just as important as the code I do.

That’s what SeaORM gave me: more time on the business logic, less time fighting the query builder at 2 AM.

BASH
$ cargo check
   Compiling myapp v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 4.32s

No database required.


Working on a similar project or have questions about Rust or SeaORM? Reach out — qcynaut@gmail.com