Friday, January 23, 2026

PgBench on MongoDB through a Foreign Data Wrapper


Disclaimer: This is an experiment, not a benchmark, and certainly not architectural advice. Translation layers don't improve performance, whether you emulate MongoDB on PostgreSQL or PostgreSQL on MongoDB.

I wanted to test the performance of the mongo_fdw foreign data wrapper for PostgreSQL, and rather than writing a specific benchmark, I used PgBench.

The default PgBench workload is not representative of a real application, because all sessions update the same row (the global branch balance), but it's useful for testing lock contention. This is where MongoDB shines, since it provides ACID guarantees without locking. I forced the situation by running pgbench -c 50, with 50 client connections competing to update these rows.
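To see why this workload is fully contended, here is a quick illustrative sketch (Python, not part of the benchmark): pgbench picks the branch with `random(1, 1 * :scale)`, so at the default scale factor of 1 every client draws bid = 1 and all sessions hit the same branch row.

```python
import random

SCALE = 1     # pgbench default scale factor, as used in this experiment
CLIENTS = 50  # matches pgbench -c 50

# Each pgbench transaction picks its branch with: \set bid random(1, 1 * :scale)
# With scale 1, every draw returns 1, so all 50 sessions update one row.
bids = [random.randint(1, 1 * SCALE) for _ in range(CLIENTS)]
assert set(bids) == {1}  # total contention on pgbench_branches
print(f"{CLIENTS} concurrent clients, {len(set(bids))} distinct branch row(s) updated")
```

Running it prints `50 concurrent clients, 1 distinct branch row(s) updated`: the branch update serializes all 50 connections.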

To compare, I ran the same pgbench command on two PostgreSQL databases:

  • PostgreSQL tables created with pgbench -i, and the benchmark run with pgbench -T 60 -c 50
  • PostgreSQL foreign tables storing their rows in MongoDB collections, through the MongoDB Foreign Data Wrapper, and the same pgbench command with -n, as there's nothing to VACUUM on MongoDB.



Setup (Docker)

I was using my laptop (MacBook Pro, Apple M4 Max), with a local MongoDB Atlas deployment.

I compiled mongo_fdw from EDB's repository to add it to the PostgreSQL 18 image with the following Dockerfile:

FROM docker.io/postgres:18 AS build
# Install build dependencies, including system libmongoc/libbson so autogen.sh doesn't compile them itself
RUN apt-get update && apt-get install -y --no-install-recommends wget unzip ca-certificates make gcc cmake pkg-config postgresql-server-dev-18 libssl-dev libzstd-dev libmongoc-dev libbson-dev libjson-c-dev libsnappy1v5 libmongocrypt0 && rm -rf /var/lib/apt/lists/*
# Build environment
ENV PKG_CONFIG_PATH=/tmp/mongo_fdw/mongo-c-driver/src/libmongoc/src:/tmp/mongo_fdw/mongo-c-driver/src/libbson/src
ENV LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
ENV MONGOC_INSTALL_DIR=${LD_LIBRARY_PATH}
ENV JSONC_INSTALL_DIR=${LD_LIBRARY_PATH}
# Get MongoDB Foreign Data Wrapper sources
RUN apt-get update && apt-get install -y --no-install-recommends wget unzip ca-certificates make gcc cmake pkg-config postgresql-server-dev-18 libssl-dev libzstd-dev libmongoc-dev libjson-c-dev libsnappy1v5 libmongocrypt0
ADD https://github.com/EnterpriseDB/mongo_fdw/archive/refs/heads/master.zip /tmp/sources.zip
RUN mkdir -p /tmp/mongo_fdw && unzip /tmp/sources.zip -d /tmp/mongo_fdw
# Build MongoDB Foreign Data Wrapper
WORKDIR /tmp/mongo_fdw/mongo_fdw-master
# Remove useless ping
RUN sed -i -e '/Ping the database using/d' -e 's?if (entry->conn != NULL)?/*&?' -e 's?return entry->conn?*/&?' connection.c
# Build with MongoDB client
RUN ./autogen.sh && make USE_PGXS=1 && make USE_PGXS=1 install
# Final stage
FROM docker.io/postgres:18
COPY --from=build /usr/share/postgresql/18/extension/mongo_fdw* /usr/share/postgresql/18/extension/
COPY --from=build /usr/lib/postgresql/18/lib/mongo_fdw.so /usr/lib/postgresql/18/lib/
RUN apt-get update && apt-get install -y libmongoc-1.0-0 libbson-1.0-0 libmongocrypt0 libsnappy1v5 libutf8proc-dev && rm -rf /var/lib/apt/lists/*


I built this image (docker build -t pachot/postgres_mongo_fdw .) and started it, linking it to a MongoDB Atlas container:

# start MongoDB Atlas (use Atlas CLI)
atlas deployments setup mongo --type local --port 27017 --force

# start PostgreSQL with Mongo FDW linked to MongoDB
docker run -d --link mongo:mongo --name mpg -p 5432:5432 \
 -e POSTGRES_PASSWORD=sillybenchmark pachot/postgres_mongo_fdw


I created a separate database for each test:

export PGHOST=localhost
export PGPASSWORD=sillybenchmark
export PGUSER=postgres

psql -c 'create database pgbench_mongo_fdw'
psql -c 'create database pgbench_postgres'

For the PostgreSQL baseline, I initialized the database with pgbench -i pgbench_postgres, which creates the tables with primary keys and inserts 100,000 accounts into a single branch.

For MongoDB, I defined the collections as foreign tables, connected with psql pgbench_mongo_fdw:


DROP EXTENSION IF EXISTS mongo_fdw CASCADE;

-- Enable the FDW extension
CREATE EXTENSION mongo_fdw;

-- Create FDW server pointing to the MongoDB host
CREATE SERVER mongo_srv
    FOREIGN DATA WRAPPER mongo_fdw
    OPTIONS (address 'mongo', port '27017');

-- Create user mapping for the current Postgres user
CREATE USER MAPPING FOR postgres
    SERVER mongo_srv
    OPTIONS (username 'postgres', password 'sillybenchmark');

-- Foreign tables for the pgbench schema
CREATE FOREIGN TABLE pgbench_accounts(
    _id name,
    aid int, bid int, abalance int, filler text
)
SERVER mongo_srv OPTIONS (collection 'pgbench_accounts');

CREATE FOREIGN TABLE pgbench_branches(
    _id name,
    bid int, bbalance int, filler text
)
SERVER mongo_srv OPTIONS (collection 'pgbench_branches');

CREATE FOREIGN TABLE pgbench_tellers(
    _id name,
    tid int, bid int, tbalance int, filler text
)
SERVER mongo_srv OPTIONS (collection 'pgbench_tellers');

CREATE FOREIGN TABLE pgbench_history(
    _id name,
    tid int, bid int, aid int, delta int, mtime timestamp, filler text
)
SERVER mongo_srv OPTIONS (collection 'pgbench_history');


On the MongoDB server, I created the user and the collections mapped from PostgreSQL (using mongosh):

db.createUser( {
  user: "postgres",
  pwd: "sillybenchmark",
  roles: [ { role: "readWrite", db: "test" } ]
} )
;

db.dropDatabase("test");
use test;

db.pgbench_branches.createIndex({bid:1},{unique:true});
db.pgbench_tellers.createIndex({tid:1},{unique:true});
db.pgbench_accounts.createIndex({aid:1},{unique:true});
db.createCollection("pgbench_history");

Because pgbench -i truncates tables, which the MongoDB Foreign Data Wrapper doesn't support, I instead used INSERT commands (via psql pgbench_mongo_fdw) similar to those run by pgbench -i:

\set scale 1

INSERT INTO pgbench_branches (bid, bbalance, filler)
  SELECT bid, 0, ''
  FROM generate_series(1, :scale) AS bid;

INSERT INTO pgbench_tellers (tid, bid, tbalance, filler)
  SELECT tid, ((tid - 1) / 10) + 1, 0, ''
  FROM generate_series(1, :scale * 10) AS tid;

INSERT INTO pgbench_accounts (aid, bid, abalance, filler)
  SELECT aid, ((aid - 1) / 100000) + 1, 0, ''
  FROM generate_series(1, :scale * 100000) AS aid;

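The integer divisions in these INSERTs reproduce pgbench's mapping of tellers and accounts to branches: 10 tellers and 100,000 accounts per branch. A small illustrative Python sketch of the same arithmetic (not part of the load script):

```python
# Mirror the SQL integer division from the INSERT ... generate_series statements:
# teller -> branch:  ((tid - 1) / 10) + 1
# account -> branch: ((aid - 1) / 100000) + 1
scale = 1

teller_branch = {tid: ((tid - 1) // 10) + 1 for tid in range(1, scale * 10 + 1)}
assert set(teller_branch.values()) == {1}  # at scale 1, all 10 tellers belong to branch 1

# The last account of the dataset still maps to branch `scale`
last_aid = scale * 100000
assert ((last_aid - 1) // 100000) + 1 == scale
print(f"scale={scale}: {len(teller_branch)} tellers, all in branch {scale}")
```

So at scale 1 the whole dataset hangs off a single branch row, which is exactly what makes the workload contended.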

Here is what I ran; the results follow:


docker exec -it mpg \
 pgbench    -T 60 -P 5 -c 50 -r -U postgres -M prepared pgbench_postgres

docker exec -it mpg \
 pgbench -n -T 60 -P 5 -c 50 -r -U postgres -M prepared pgbench_mongo_fdw





PostgreSQL (tps = 4085, latency average = 12 ms)

Here are the results of the standard pgbench benchmark on PostgreSQL tables:

franck.pachot % docker exec -it mpg \
 pgbench    -T 60 -P 5 -c 50 -r -U postgres -M prepared pgbench_postgres

pgbench (18.1 (Debian 18.1-1.pgdg13+2))
starting vacuum...end.
progress: 5.0 s, 3847.4 tps, lat 12.860 ms stddev 14.474, 0 failed
progress: 10.0 s, 4149.0 tps, lat 12.051 ms stddev 12.893, 0 failed
progress: 15.0 s, 3940.6 tps, lat 12.668 ms stddev 12.576, 0 failed
progress: 20.0 s, 3500.0 tps, lat 14.300 ms stddev 16.424, 0 failed
progress: 25.0 s, 4013.0 tps, lat 12.462 ms stddev 13.175, 0 failed
progress: 30.0 s, 3437.4 tps, lat 14.539 ms stddev 25.607, 0 failed
progress: 35.0 s, 4421.9 tps, lat 11.308 ms stddev 12.100, 0 failed
progress: 40.0 s, 4485.0 tps, lat 11.140 ms stddev 12.031, 0 failed
progress: 45.0 s, 4286.2 tps, lat 11.654 ms stddev 13.244, 0 failed
progress: 50.0 s, 4008.6 tps, lat 12.476 ms stddev 13.586, 0 failed
progress: 55.0 s, 4551.8 tps, lat 10.959 ms stddev 13.791, 0 failed
progress: 60.0 s, 4356.2 tps, lat 11.505 ms stddev 15.813, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: prepared
number of clients: 50
number of threads: 1
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 245035
number of failed transactions: 0 (0.000%)
latency average = 12.234 ms
latency stddev = 14.855 ms
initial connection time = 38.862 ms
tps = 4085.473436 (without initial connection time)
statement latencies in milliseconds and failures:
         0.000           0 \set aid random(1, 100000 * :scale)
         0.000           0 \set bid random(1, 1 * :scale)
         0.000           0 \set tid random(1, 10 * :scale)
         0.000           0 \set delta random(-5000, 5000)
         0.036           0 BEGIN;
         0.058           0 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
         0.039           0 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
        10.040           0 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
         1.817           0 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
         0.041           0 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
         0.202           0 END;

The run averages about 4,000 transactions per second with 12 ms latency. Most of the latency comes from the tellers and branches updates, where all connections target the same few rows and cannot execute concurrently.
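As a sanity check on these numbers, Little's law (in-flight requests = throughput × latency) should recover the 50 client connections. This is just arithmetic on the reported summary, nothing new measured:

```python
# Little's law: concurrency = throughput * latency
tps = 4085.473436          # reported tps (without initial connection time)
latency_s = 12.234 / 1000  # reported latency average, converted to seconds

concurrency = tps * latency_s
assert abs(concurrency - 50) < 1  # matches pgbench -c 50
print(f"tps * latency = {concurrency:.1f} busy clients")
```

This prints `tps * latency = 50.0 busy clients`: the 50 sessions are saturated, mostly queuing on the contended rows.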




MongoDB (tps = 4922, latency average = 10 ms)

Here is the same run, with foreign tables reading from and writing to MongoDB instead of PostgreSQL:

franck.pachot % docker exec -it mpg \
 pgbench -n -T 60 -P 5 -c 50 -r -U postgres -M prepared pgbench_mongo_fdw

pgbench (18.1 (Debian 18.1-1.pgdg13+2))
progress: 5.0 s, 4752.1 tps, lat 10.379 ms stddev 4.488, 0 failed
progress: 10.0 s, 4942.9 tps, lat 10.085 ms stddev 3.356, 0 failed
progress: 15.0 s, 4841.7 tps, lat 10.292 ms stddev 2.256, 0 failed
progress: 20.0 s, 4640.4 tps, lat 10.744 ms stddev 3.498, 0 failed
progress: 25.0 s, 5011.3 tps, lat 9.943 ms stddev 1.724, 0 failed
progress: 30.0 s, 4536.0 tps, lat 10.996 ms stddev 8.739, 0 failed
progress: 35.0 s, 4862.1 tps, lat 10.248 ms stddev 2.062, 0 failed
progress: 40.0 s, 5080.6 tps, lat 9.812 ms stddev 1.740, 0 failed
progress: 45.0 s, 5238.3 tps, lat 9.513 ms stddev 1.673, 0 failed
progress: 50.0 s, 4957.9 tps, lat 10.055 ms stddev 2.136, 0 failed
progress: 55.0 s, 5184.8 tps, lat 9.608 ms stddev 1.550, 0 failed
progress: 60.0 s, 4998.5 tps, lat 9.970 ms stddev 2.296, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: prepared
number of clients: 50
number of threads: 1
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 295288
number of failed transactions: 0 (0.000%)
latency average = 10.122 ms
latency stddev = 3.487 ms
initial connection time = 45.401 ms
tps = 4921.889293 (without initial connection time)
statement latencies in milliseconds and failures:
         0.000           0 \set aid random(1, 100000 * :scale)
         0.000           0 \set bid random(1, 1 * :scale)
         0.000           0 \set tid random(1, 10 * :scale)
         0.000           0 \set delta random(-5000, 5000)
         0.121           0 BEGIN;
         2.341           0 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
         0.339           0 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
         2.328           0 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
         2.580           0 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
         2.287           0 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
         0.126           0 END;

MongoDB doesn't wait on locks, so all statements show similar response times. This yields higher throughput and lower latency, with the extra layer's overhead offset by the faster storage engine.
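Putting the two summaries side by side (arithmetic on the reported numbers only):

```python
# Summary figures copied from the two pgbench outputs above
pg    = {"tx": 245035, "tps": 4085.473436, "lat_ms": 12.234}
mongo = {"tx": 295288, "tps": 4921.889293, "lat_ms": 10.122}

speedup = mongo["tps"] / pg["tps"]
assert 1.15 < speedup < 1.25  # roughly 20% more throughput through the FDW
print(f"throughput ratio: {speedup:.2f}x, "
      f"latency: {pg['lat_ms']} ms -> {mongo['lat_ms']} ms")
```

About 20% more transactions in the same 60 seconds, despite every statement crossing the foreign data wrapper.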

In the Dockerfile, I patched the foreign data wrapper's connection.c after noticing an unnecessary ping in the call stack. Running on MongoDB collections was still faster than PostgreSQL with the original code. A PostgreSQL foreign data wrapper, while useful, is rarely optimized: it adds latency and offers limited transaction control and pushdown optimizations. It can still be acceptable to offload some tables to MongoDB collections until you replace your SQL and connect directly to MongoDB.


Anyway, remember that benchmarks can be made to support almost any conclusion, including its opposite. What really matters is understanding how your database works. Here, high transaction concurrency on a saturated CPU favors MongoDB's optimistic locking.
