In MongoDB, a write concern of w:1 means that a write operation is considered successful as soon as the primary node acknowledges it, without waiting for the data to be replicated to secondary nodes. While this reduces latency, it also introduces the risk that, if the primary fails before replication occurs, the written data may be lost. In replica sets with multiple voters, such writes can be rolled back if a failure happens before a majority acknowledges the change.
This is not the default setting. Most clusters (Primary-Secondary-Secondary) use an implicit w:majority write concern, which guarantees durability in the event of a zone failure. The implicit default write concern is w:1 only when an arbiter is present (Primary-Secondary-Arbiter) or when the topology lowers the number of data-bearing voters.
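As a rough sketch (my own simplification of the rule described in the MongoDB documentation, not server code), the implicit default can be pictured like this: the default is "majority" unless the set contains arbiters and the number of data-bearing voters is not larger than the voting majority:

```python
def majority(voting_members: int) -> int:
    """Voting majority: one plus half the voting members, rounded down."""
    return voting_members // 2 + 1

def implicit_default_write_concern(data_bearing_voters: int, arbiters: int) -> dict:
    """Sketch of the implicit default write concern rule (simplified)."""
    voting = data_bearing_voters + arbiters
    if arbiters > 0 and data_bearing_voters <= majority(voting):
        return {"w": 1}
    return {"w": "majority"}

# Primary-Secondary-Secondary: implicit default is "majority"
print(implicit_default_write_concern(data_bearing_voters=3, arbiters=0))
# Primary-Secondary-Arbiter: implicit default drops to w:1
print(implicit_default_write_concern(data_bearing_voters=2, arbiters=1))
```

This is why a PSA topology silently weakens the durability guarantee: only two data-bearing voters exist, which is not more than the voting majority of three.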
For performance reasons, you may sometimes write with w:1. However, it is important to understand the consequences this setting can have in certain failure scenarios. To clarify, here is an example.
I started a three-node replica set using Docker:
docker community create lab
docker run -d --network lab --name m1 --hostname m1 mongo --bind_ip_all --replSet rs
docker run -d --network lab --name m2 --hostname m2 mongo --bind_ip_all --replSet rs
docker run -d --network lab --name m3 --hostname m3 mongo --bind_ip_all --replSet rs
docker exec -it m1 mongosh --host m1 --eval '
rs.initiate( {_id: "rs", members: [
{_id: 0, priority: 3, host: "m1:27017"},
{_id: 1, priority: 2, host: "m2:27017"},
{_id: 2, priority: 1, host: "m3:27017"}]
});
'
I created a collection with one “old” document:
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.drop();
db.myCollection.insertOne(
{identify: "previous"},
{writeConcern: {w: "majority", wtimeout: 15000}}
);
'
I checked that the document is there:
docker exec -it m1 mongosh --host m1 --eval 'db.myCollection.find()'
[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]
I disconnected two nodes, so I no longer had a majority. However, I quickly inserted a new document before the primary stepped down and became a secondary that can no longer accept new writes:
docker community disconnect lab m2
docker community disconnect lab m3
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.insertOne(
{identify: "new"},
{writeConcern: {w: "1", wtimeout: 15000}}
);
'
Note the use of writeConcern: {w: 1} to explicitly reduce consistency. Without this setting, the default is “majority”. In that case, the write operation would have waited until a timeout, allowing the application to recognize that durability could not be guaranteed and that the write was unsuccessful.
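To make the difference concrete, here is a toy simulation (plain Python, not driver code) of how the acknowledgment wait behaves: with w:1 the primary's own ack is enough, while a majority requirement with unreachable secondaries ends in a timeout error:

```python
import time

class WriteConcernTimeout(Exception):
    """Raised when the acknowledgment wait exceeds the simulated wtimeout."""

def wait_for_acks(acks_received: int, acks_required: int, wtimeout_ms: int,
                  poll_ms: int = 10) -> bool:
    """Toy model: poll until enough acknowledgments have arrived or wtimeout
    expires. acks_received stays fixed here, simulating secondaries that
    never respond while they are partitioned away."""
    deadline = time.monotonic() + wtimeout_ms / 1000
    while True:
        if acks_received >= acks_required:
            return True
        if time.monotonic() >= deadline:
            raise WriteConcernTimeout("waiting for replication timed out")
        time.sleep(poll_ms / 1000)

# w:1: the primary's own acknowledgment satisfies the write concern at once
assert wait_for_acks(acks_received=1, acks_required=1, wtimeout_ms=100)

# w:"majority" on a 3-node set needs 2 acks; with both secondaries
# unreachable only the primary acknowledges, so the wait times out
try:
    wait_for_acks(acks_received=1, acks_required=2, wtimeout_ms=100)
except WriteConcernTimeout as err:
    print("write concern error:", err)
```

The second case is what the application would observe with the default write concern in this experiment: an error after wtimeout, rather than a silent acknowledgment of a write that may later disappear.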
With writeConcern: {w: 1}, the operation was acknowledged and the data became visible:
docker exec -it m1 mongosh --host m1 --eval 'db.myCollection.find()'
[
{ _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
{ _id: ObjectId('691dfa0ff09d463d36fa3350'), name: 'new' }
]
Keep in mind that this is visible when using the default ‘local’ read concern, but not when using ‘majority’:
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.find().readConcern("majority")
'
[
{ _id: ObjectId('691df945727482ee30fa3350'), name: 'old' }
]
I checked the oplog to verify that the idempotent version of my change was present:
docker exec -it m1 mongosh --host m1 local --eval '
db.oplog.rs
.find({ns: "test.myCollection"}, {op: 1, o: 1, t: 1})
.sort({ ts: -1 });
'
[
{
op: 'i',
o: { _id: ObjectId('691dfa0ff09d463d36fa3350'), name: 'new' },
t: Long('1')
},
{
op: 'i',
o: { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
t: Long('1')
}
]
The primary node accepted w:1 writes only briefly, during the interval between losing quorum and stepping down. Afterwards, it automatically switches to SECONDARY, and since no quorum is present, there is no PRIMARY. This state can persist for some time:
docker exec -it m1 mongosh --host m1 --eval '
rs.status().members
'
[
{
_id: 0,
name: 'm1:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 1172,
optime: { ts: Timestamp({ t: 1763572239, i: 1 }), t: Long('1') },
optimeDate: ISODate('2025-11-19T17:10:39.000Z'),
optimeWritten: { ts: Timestamp({ t: 1763572239, i: 1 }), t: Long('1') },
optimeWrittenDate: ISODate('2025-11-19T17:10:39.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:10:39.685Z'),
lastDurableWallTime: ISODate('2025-11-19T17:10:39.685Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:10:39.685Z'),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
configVersion: 1,
configTerm: 1,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 1,
name: 'm2:27017',
health: 0,
state: 8,
stateStr: '(not reachable/healthy)',
uptime: 0,
optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastHeartbeat: ISODate('2025-11-19T17:26:03.626Z'),
lastHeartbeatRecv: ISODate('2025-11-19T17:10:37.153Z'),
pingMs: Long('0'),
lastHeartbeatMessage: 'Error connecting to m2:27017 :: caused by :: Could not find address for m2:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
configVersion: 1,
configTerm: 1
},
{
_id: 2,
name: 'm3:27017',
health: 0,
state: 8,
stateStr: '(not reachable/healthy)',
uptime: 0,
optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastHeartbeat: ISODate('2025-11-19T17:26:03.202Z'),
lastHeartbeatRecv: ISODate('2025-11-19T17:10:37.153Z'),
pingMs: Long('0'),
lastHeartbeatMessage: 'Error connecting to m3:27017 :: caused by :: Could not find address for m3:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
configVersion: 1,
configTerm: 1
}
]
When there is no primary, no further writes are accepted, even if you set writeConcern: {w: 1}:
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.insertOne(
{identify: "new"},
{writeConcern: {w: "1", wtimeout: 15000}}
);
'
MongoServerError: not primary
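The window in which the primary still accepted w:1 writes can be pictured with a toy timer. This is my own simplification: the real server uses the electionTimeoutMillis setting (10 seconds by default) among other checks to decide when a primary that cannot reach a majority must step down:

```python
ELECTION_TIMEOUT_S = 10  # replica set electionTimeoutMillis default (10s)

def primary_accepts_writes(seconds_since_majority_last_seen: float,
                           election_timeout_s: float = ELECTION_TIMEOUT_S) -> bool:
    """Toy model: a primary keeps accepting writes (including w:1 writes)
    only while it has heard from a majority within the election timeout;
    after that it steps down to SECONDARY and rejects all writes."""
    return seconds_since_majority_last_seen < election_timeout_s

# Just after the partition: the primary still takes w:1 writes
assert primary_accepts_writes(2)
# Once the timeout elapses it has stepped down: "MongoServerError: not primary"
assert not primary_accepts_writes(30)
```

This is why the risky w:1 write in the experiment had to be issued quickly: after roughly ten seconds without a reachable majority, the node is a secondary and the insert fails as shown above.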
The system may remain in this state for some time. When at least one sync replica comes back online, it will pull the oplog and synchronize the write to the quorum, making the acknowledged write durable.
Using writeConcern: {w: 1} boosts performance, as the primary does not wait for acknowledgments from other nodes. This write concern tolerates a single node failure since the quorum remains, and can even withstand another brief failure. However, if a failure persists, further writes are not accepted, reducing the risk of unacknowledged writes. Usually, when a node recovers, it synchronizes through the oplog, and the primary resumes accepting writes.
In the common scenario where brief, transient failures may occur, using writeConcern: {w: 1} means the database remains available if the failure is only a momentary glitch. However, the point here is to illustrate the worst-case scenario. If one node accepts a write that is not acknowledged by any other node, and this node fails before any others recover, that write may be lost.
To illustrate this possible scenario, I first disconnected this node and then reconnected the remaining ones:
docker community disconnect lab m1
docker community join lab m2
docker community join lab m3
In this worst-case scenario, a new quorum is formed with a state that predates when the write could be synchronized to the replicas. However, progress continues because a new primary is established:
docker exec -it m2 mongosh --host m2 --eval '
rs.status().members
'
[
{
_id: 0,
name: 'm1:27017',
health: 0,
state: 8,
stateStr: '(not reachable/healthy)',
uptime: 0,
optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
lastHeartbeat: ISODate('2025-11-19T17:39:02.913Z'),
lastHeartbeatRecv: ISODate('2025-11-19T17:10:38.153Z'),
pingMs: Long('0'),
lastHeartbeatMessage: 'Error connecting to m1:27017 :: caused by :: Could not find address for m1:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
configVersion: 1,
configTerm: 1
},
{
_id: 1,
name: 'm2:27017',
health: 1,
state: 1,
stateStr: 'PRIMARY',
uptime: 1952,
optime: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
optimeDate: ISODate('2025-11-19T17:38:56.000Z'),
optimeWritten: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
optimeWrittenDate: ISODate('2025-11-19T17:38:56.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:38:56.678Z'),
lastDurableWallTime: ISODate('2025-11-19T17:38:56.678Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:38:56.678Z'),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: 'Could not find member to sync from',
electionTime: Timestamp({ t: 1763573886, i: 1 }),
electionDate: ISODate('2025-11-19T17:38:06.000Z'),
configVersion: 1,
configTerm: 2,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 2,
name: 'm3:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 58,
optime: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
optimeDurable: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
optimeWritten: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
optimeDate: ISODate('2025-11-19T17:38:56.000Z'),
optimeDurableDate: ISODate('2025-11-19T17:38:56.000Z'),
optimeWrittenDate: ISODate('2025-11-19T17:38:56.000Z'),
lastAppliedWallTime: ISODate('2025-11-19T17:38:56.678Z'),
lastDurableWallTime: ISODate('2025-11-19T17:38:56.678Z'),
lastWrittenWallTime: ISODate('2025-11-19T17:38:56.678Z'),
lastHeartbeat: ISODate('2025-11-19T17:39:02.679Z'),
lastHeartbeatRecv: ISODate('2025-11-19T17:39:01.178Z'),
pingMs: Long('0'),
lastHeartbeatMessage: '',
syncSourceHost: 'm2:27017',
syncSourceId: 1,
infoMessage: '',
configVersion: 1,
configTerm: 2
}
]
This replica set has a primary and is accepting new writes with a new Raft term (configTerm: 2). However, during recovery, it ignored a pending write from the previous term (configTerm: 1) that originated from an unreachable node.
A write made with w:1 after the quorum was lost but before the primary stepped down was lost:
docker exec -it m2 mongosh --host m2 --eval '
db.myCollection.find()
'
[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]
After reconnecting the first node, it enters recovery mode and synchronizes with the other nodes, all of which are on term 2:
docker community join lab m1
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.find()
'
MongoServerError: Oplog collection reads are not allowed while in the rollback or startup state.
The rollback process uses the ‘Recover To A Timestamp’ algorithm to restore the node to the latest majority-committed point. While rolling back, the node transitions to the ROLLBACK state, suspends user operations, finds the common point with the sync source, and recovers to the stable timestamp.
After recovery, changes made in term 1 that did not receive quorum acknowledgment are truncated from the oplog. This behavior is an extension to the standard Raft algorithm:
docker exec -it m1 mongosh --host m1 local --eval '
db.oplog.rs
.find({ns: "test.myCollection"}, {op: 1, o: 1, t: 1})
.sort({ ts: -1 });
'
[
{
op: 'i',
o: { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
t: Long('1')
}
]
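The common-point search during rollback can be sketched as follows. This is my own simplified illustration, not the server's actual algorithm: find the last oplog entry shared with the sync source and truncate the local suffix that was never replicated:

```python
def rollback_to_common_point(local_oplog: list, sync_source_oplog: list):
    """Sketch: find the last entry (identified by ts and term) present in
    both logs, then truncate the local suffix the sync source never saw."""
    source_entries = {(e["ts"], e["t"]) for e in sync_source_oplog}
    common = -1
    for i, entry in enumerate(local_oplog):
        if (entry["ts"], entry["t"]) in source_entries:
            common = i
    kept = local_oplog[:common + 1]
    rolled_back = local_oplog[common + 1:]
    return kept, rolled_back

# The term-1 write replicated to all nodes, plus the w:1 write that
# only ever existed on the old primary
local = [{"ts": 1, "t": 1, "o": {"name": "old"}},
         {"ts": 2, "t": 1, "o": {"name": "new"}}]   # never replicated
source = [{"ts": 1, "t": 1, "o": {"name": "old"}}]

kept, lost = rollback_to_common_point(local, source)
print("kept:", [e["o"]["name"] for e in kept])         # ['old']
print("rolled back:", [e["o"]["name"] for e in lost])  # ['new']
```

In the real server the rolled-back writes are not simply discarded in memory: the node recovers to the stable timestamp on disk and then re-applies oplog entries from the sync source, which is exactly what makes the acknowledged-but-unreplicated document vanish.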
A w:1 write that was visible at one point, and acknowledged to the client, but never actually committed to the quorum, has now disappeared:
docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.find()
'
[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]
With writeConcern: {w: 1}, the developer must be aware that such an issue can arise if a write occurs immediately after quorum is lost and the primary fails before the other nodes recover.
While SQL databases typically abstract physical concerns such as persistence and replication, MongoDB shifts more responsibility to developers. By default, acknowledged writes are considered durable only once a majority of nodes confirm they are synced to disk.
In some cases, strict write guarantees are unnecessary and can be relaxed for improved performance. Developers can adjust the write concern to suit their application's needs. When using writeConcern: {w: 1}, this affects two aspects of ACID:
- Durability: If there is a failure impacting both the primary and replicas, and only replicas recover, writes not acknowledged by replicas may be rolled back, similar to PostgreSQL's synchronous_commit = local.
- Isolation: Reads with ‘local’ read concern may see writes that were acknowledged to the client but not yet confirmed by a majority, and that may still be rolled back. There is no PostgreSQL equivalent to MongoDB's ‘majority’ read concern (MVCC visibility tracking what was applied on the replicas).
Although writeConcern: {w: 1} is sometimes described as permitting ‘dirty reads’, this term is misleading, as it is commonly used as a synonym for ‘read uncommitted’ in relational databases. In SQL databases with a single read-write instance, ‘read uncommitted’ refers to single-server isolation (the I in ACID). However, with writeConcern: {w: 1} and a ‘majority’ read concern, uncommitted reads do not occur, and only committed changes are visible to other sessions. The real problem involves durability (the D in ACID) in the context of a replica set. With traditional SQL database replication, writes might be visible before all peers (replica, WAL, application) have fully acknowledged them, since there is no single atomic operation covering them all. MongoDB's w:1 is similar, and calling it a ‘dirty read’ is useful to highlight the implications for developers.
