Switch between backends

This tutorial shows you how to run the same expression on different execution engines. You’ll learn when to choose each backend, see how Xorq moves data between them using Apache Arrow, and compare backend performance to find the best fit for your workload.

After completing this tutorial, you’ll know how to pick the right backend and understand the performance trade-offs.

Important

This tutorial requires DuckDB support. Install with pip install "xorq[duckdb]" or pip install "xorq[examples]" for all tutorial dependencies.

Why switch backends?

Different backends excel at different tasks. DuckDB handles analytical queries efficiently, Pandas works great for small datasets and prototyping, and DataFusion gives you custom UDF capabilities.

Xorq lets you write your expression once and run it anywhere. Same code, different engines.

Tip

Xorq uses Apache Arrow to move data between backends without serialization overhead. This makes backend switching fast and memory-efficient.

To see this in practice, you’ll run the same expression on the iris dataset across three backends: embedded, DuckDB, and Pandas.

Run on the embedded backend

You’ll start with Xorq’s default embedded backend. This uses DataFusion, an in-memory query engine optimized for Arrow operations.

import xorq.api as xo


con = xo.connect()


iris = xo.examples.iris.fetch(backend=con)


expr = (
    iris
    .filter(xo._.sepal_length > 6)
    .group_by("species")
    .agg(avg_width=xo._.sepal_width.mean())
)


result = expr.execute()
print(f"Backend: {con}")
print(result)
1. Connect to the embedded backend (uses DataFusion).
2. Load the iris dataset into this backend.
3. Build a filter and aggregation expression.
4. Execute on the embedded backend.

The embedded backend is the default. It’s fast, supports all Xorq features, and doesn’t require external setup.

Switch to DuckDB

Now you’ll run the same expression on DuckDB. DuckDB excels at analytical queries and works well with larger datasets.


duckdb_con = xo.duckdb.connect()


iris_duck = xo.examples.iris.fetch(backend=duckdb_con)


duck_expr = (
    iris_duck
    .filter(xo._.sepal_length > 6)
    .group_by("species")
    .agg(avg_width=xo._.sepal_width.mean())
)


duck_result = duck_expr.execute()
print(f"\nBackend: {duckdb_con}")
print(duck_result)
1. Connect to DuckDB (in-memory by default).
2. Load iris data into DuckDB.
3. Build the same expression as before.
4. Execute on DuckDB.

Notice how the expression code is identical. Only the backend connection changed.

Note

This DuckDB connection is in-memory. To use a persistent database file, pass database="my_db.duckdb" to connect().

Switch to Pandas

Now you’ll run the same expression on Pandas. Pandas is great for small datasets and interactive analysis, making it perfect for prototyping and working with data that fits in memory.


pandas_con = xo.pandas.connect()


iris_pandas = xo.examples.iris.fetch(backend=pandas_con)


pandas_expr = (
    iris_pandas
    .filter(xo._.sepal_length > 6)
    .group_by("species")
    .agg(avg_width=xo._.sepal_width.mean())
)


pandas_result = pandas_expr.execute()
print(f"\nBackend: {pandas_con}")
print(pandas_result)
1. Connect to the Pandas backend.
2. Load data into Pandas.
3. Same expression, different backend.
4. Execute on Pandas.

So far, you’ve loaded data separately into each backend. But what if you start analysis in one backend and need to switch to another mid-workflow? That’s where data transfer comes in.

Move data between backends

Sometimes you need to move data from one backend to another. Xorq makes this easy with .into_backend().

First, see what you can do on the embedded backend:


con = xo.connect()
iris = xo.examples.iris.fetch(backend=con)


result = (
    iris
    .filter(xo._.sepal_length > 6)
    .select("species", "sepal_length", "petal_length")
    .group_by("species")
    .agg(
        avg_sepal=xo._.sepal_length.mean(),
        count=xo._.species.count()
    )
    .execute()
)

print("Processing on embedded backend:")
print(result)
1. Load data into the embedded backend.
2. Chain operations: filter, select columns, group, and aggregate.

This works well for most queries. But sometimes you need features that only certain backends provide. That’s when you move data.

Now see why you’d move to DuckDB for temporal joins (AsOf joins):


stock_prices = xo.memtable({
    "symbol": ["AAPL", "AAPL", "AAPL", "GOOGL", "GOOGL"],
    "time": [10, 20, 30, 15, 25],
    "price": [150.0, 151.5, 149.8, 2800.0, 2805.0]
})

trades = xo.memtable({
    "symbol": ["AAPL", "AAPL", "GOOGL"],
    "trade_time": [12, 28, 18],
    "volume": [100, 50, 200]
})


duckdb_con = xo.duckdb.connect()
prices_db = stock_prices.into_backend(duckdb_con)
trades_db = trades.into_backend(duckdb_con)


result = trades_db.asof_join(
    prices_db,
    on=trades_db.trade_time >= prices_db.time,
    predicates="symbol"
).execute()

print("\nAsOf join (matches each trade to most recent price):")
print(result[["symbol", "trade_time", "time", "price", "volume"]])
1. Create sample stock prices and trades with timestamps.
2. Move both tables to DuckDB using .into_backend().
3. AsOf join on the temporal condition (trade_time >= time), matching by symbol.

The output shows each trade matched with the most recent price before the trade time:

  symbol  trade_time  time   price  volume
0  GOOGL          18    15  2800.0     200
1   AAPL          12    10   150.0     100
2   AAPL          28    20   151.5      50

What happened? The trade at time 12 gets the price from time 10 (most recent before 12). The trade at time 28 gets the price from time 20 (most recent before 28). This is an “as-of” temporal join—a DuckDB feature not available in the embedded backend.
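If you are more familiar with pandas, the same matching logic can be reproduced with pandas.merge_asof (a pandas function, not part of Xorq), which also picks the most recent right-hand row at or before each left-hand key:

```python
import pandas as pd

prices = pd.DataFrame({
    "symbol": ["AAPL", "AAPL", "AAPL", "GOOGL", "GOOGL"],
    "time": [10, 20, 30, 15, 25],
    "price": [150.0, 151.5, 149.8, 2800.0, 2805.0],
})
trades = pd.DataFrame({
    "symbol": ["AAPL", "AAPL", "GOOGL"],
    "trade_time": [12, 28, 18],
    "volume": [100, 50, 200],
})

# merge_asof requires both frames to be sorted on the join keys.
matched = pd.merge_asof(
    trades.sort_values("trade_time"),
    prices.sort_values("time"),
    left_on="trade_time",
    right_on="time",
    by="symbol",
    direction="backward",  # most recent price at or before the trade
)
print(matched[["symbol", "trade_time", "time", "price", "volume"]])
```

The matched prices are the same as in the DuckDB result above: trade at 12 gets the price from time 10, trade at 18 the price from time 15, and trade at 28 the price from time 20.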

.into_backend() transfers data between backends using Arrow’s zero-copy protocol.

Tip

Move data to a different backend when you need specific features (like DuckDB’s AsOf joins for temporal data) or better performance for your query type.

Compare backend performance

You’ll time the same query on different backends to see performance characteristics.

import time

def time_query(backend):
    """Time a single query execution on the given backend."""
    iris = xo.examples.iris.fetch(backend=backend)
    expr = (
        iris
        .filter(xo._.sepal_length > 5)
        .group_by("species")
        .agg(
            count=xo._.species.count(),
            avg_width=xo._.sepal_width.mean()
        )
    )

    start = time.perf_counter()
    result = expr.execute()
    elapsed = time.perf_counter() - start

    return elapsed, len(result)


con = xo.connect()
duck = xo.duckdb.connect()
pandas_con = xo.pandas.connect()


print("Timing comparison:")
print("-" * 50)


t1, rows1 = time_query(con)
print(f"Embedded:  {t1:.4f}s - {rows1} rows")

t2, rows2 = time_query(duck)
print(f"DuckDB:    {t2:.4f}s - {rows2} rows")

t3, rows3 = time_query(pandas_con)
print(f"Pandas:    {t3:.4f}s - {rows3} rows")
1. Connect to all three backends.
2. Print a comparison header.
3. Time the same query on each backend.

For small datasets like iris, performance differences are minimal. With larger datasets, you’ll see DuckDB and the embedded backend outperform Pandas.
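Single wall-clock readings are noisy. If you want a slightly more robust comparison, a common pattern (plain Python, independent of Xorq) is to repeat the measurement with time.perf_counter and keep the best run:

```python
import time

def best_of(fn, repeats=5):
    """Run fn several times and return the fastest elapsed time in seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Usage sketch: wrap any zero-argument callable, e.g.
#   best_of(lambda: expr.execute())
elapsed = best_of(lambda: sum(range(100_000)))
print(f"fastest of 5 runs: {elapsed:.6f}s")
```

Taking the minimum of several runs filters out one-off interference (caching, scheduling) better than a single measurement does.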

Choose the right backend

Each backend has different strengths. The embedded backend works well for most tasks. For analytical queries on larger datasets or temporal operations like AsOf joins, DuckDB is a better fit. Pandas works best for small datasets and quick prototyping.

Warning

Not all backends support every operation. For example, some complex window functions might work in DuckDB but not in Pandas. Check the documentation if you hit an unsupported operation error.

Now that you understand when to use each backend, here’s a complete workflow that ties everything together.

Complete example

Here’s a full example showing backend switching:

import xorq.api as xo

# Connect to backends
embedded = xo.connect()
duckdb_con = xo.duckdb.connect()

# Load data in the embedded backend
data = xo.examples.iris.fetch(backend=embedded)

# Build expression
expr = (
    data
    .filter(xo._.sepal_length > 6)
    .group_by("species")
    .agg(avg_width=xo._.sepal_width.mean())
)

# Execute on the embedded backend
result1 = expr.execute()
print("Embedded result:", result1)

# Move to DuckDB and execute there
data_in_duck = data.into_backend(duckdb_con)
expr_duck = (
    data_in_duck
    .filter(xo._.sepal_length > 6)
    .group_by("species")
    .agg(avg_width=xo._.sepal_width.mean())
)
result2 = expr_duck.execute()
print("DuckDB result:", result2)

Next steps

Now you know how to switch backends. Continue learning: