Releases: mongodb/node-mongodb-native
v6.12.0
6.12.0 (2024-12-10)
The MongoDB Node.js team is pleased to announce version 6.12.0 of the mongodb
package!
Release Notes
@mongodb-js/zstd@2.0 is now supported for zstd compression
The new @mongodb-js/zstd@2.0 release can now be used with the driver for zstd compression.
Populate ServerDescription.error field when primary marked stale
We now attach an error to the newly created ServerDescription object when marking a primary as stale. This helps with debugging SDAM issues when monitoring SDAM events.
BSON upgraded to v6.10.1
See: https://github.com/mongodb/js-bson/releases/tag/v6.10.1
Socket read stream set to object mode
Socket data was being read with a stream set to buffer mode when it should be set to object mode to prevent inaccurate data chunking, which may have caused message parsing errors in rare cases.
SOCKS5: MongoNetworkError wrap fix
If the driver encounters an error while connecting to a SOCKS5 proxy, it wraps the SOCKS5 error in a MongoNetworkError. In some circumstances, this resulted in the driver wrapping a MongoNetworkError inside another MongoNetworkError.
The driver no longer double-wraps errors in MongoNetworkError.
Features
- NODE-6593: add support for zstd@2.0 (#4346) (ea8a33f)
- NODE-6605: add error message when invalidating primary (#4340) (37613f1)
Bug Fixes
- NODE-6583: upgrade to BSON v6.10.1 to remove internal unbounded type cache (#4338) (249c279)
- NODE-6600: set object mode correctly for message chunking in SizedMessageTransform (#4345) (5558573)
- NODE-6602: only wrap errors from SOCKS in network errors (#4347) (ed83f36)
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.11.0
6.11.0 (2024-11-22)
The MongoDB Node.js team is pleased to announce version 6.11.0 of the mongodb
package!
Release Notes
Client Side Operations Timeout (CSOT)
We've been working hard to simplify how setting timeouts works in the driver and are excited to finally put Client Side Operation Timeouts (CSOT) in your hands! We're looking forward to hearing your feedback on this new feature during its trial period in the driver, so feel free to file Improvements, Questions or Bug reports on our Jira Project or leave comments on this community forum thread: Node.js Driver 6.11 Forum Discussion!
CSOT is the common drivers solution for timing out the execution of an operation at the different stages of an operation's lifetime. At its simplest, CSOT allows you to specify a single option, timeoutMS, that determines when the driver will interrupt an operation and return a timeout error.
For example, when executing a potentially long-running query, you would specify timeoutMS as follows:
await collection.find({}, {timeoutMS: 600_000}).toArray(); // Ensures that the find will throw a timeout error if all documents are not retrieved within 10 minutes
// Potential Stack trace if this were to time out:
// Uncaught MongoOperationTimeoutError: Timed out during socket read (600000ms)
// at Connection.readMany (mongodb/lib/cmap/connection.js:427:31)
// at async Connection.sendWire (mongodb/lib/cmap/connection.js:246:30)
// at async Connection.sendCommand (mongodb/lib/cmap/connection.js:281:24)
// at async Connection.command (mongodb/lib/cmap/connection.js:323:26)
// at async Server.command (mongodb/lib/sdam/server.js:170:29)
// at async GetMoreOperation.execute (mongodb/lib/operations/get_more.js:58:16)
// at async tryOperation (mongodb/lib/operations/execute_operation.js:203:20)
// at async executeOperation (mongodb/lib/operations/execute_operation.js:73:16)
// at async FindCursor.getMore (mongodb/lib/cursor/abstract_cursor.js:590:16)
Warning
This feature is experimental and subject to change at any time. We do not recommend using this feature in production applications until it is stable.
What's new?
timeoutMS
The main new option introduced with CSOT is the timeoutMS option. This option can be applied directly as a client option, as well as at the database, collection, session, transaction and operation layers, following the same inheritance behaviours as other driver options.
When the timeoutMS option is specified, it will always take precedence over the following options:
- socketTimeoutMS
- waitQueueTimeoutMS
- wTimeoutMS
- maxTimeMS
- maxCommitTimeMS
Note, however, that timeoutMS DOES NOT unconditionally override the serverSelectionTimeoutMS option.
When timeoutMS is specified, the duration of time allotted to the server selection and connection checkout portions of command execution is defined by min(serverSelectionTimeoutMS, timeoutMS) if both are > 0. A zero value for either timeout represents an infinite timeout. A finite timeout will always be used unless both timeouts are specified as 0. Note also that the driver has a default value for serverSelectionTimeoutMS of 30000.
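The interaction between these two timeouts can be sketched as a small helper (illustrative only, not a driver API; a zero value stands for an infinite timeout):

```javascript
// Illustrative sketch of min(serverSelectionTimeoutMS, timeoutMS) with
// 0 meaning "infinite", per the rule described above. Not a driver API.
function effectiveServerSelectionTimeout(serverSelectionTimeoutMS, timeoutMS) {
  if (serverSelectionTimeoutMS > 0 && timeoutMS > 0) {
    return Math.min(serverSelectionTimeoutMS, timeoutMS);
  }
  // A finite timeout is always used unless both are 0 (infinite).
  if (serverSelectionTimeoutMS > 0) return serverSelectionTimeoutMS;
  if (timeoutMS > 0) return timeoutMS;
  return 0; // both infinite
}

console.log(effectiveServerSelectionTimeout(30_000, 10_000)); // 10000
console.log(effectiveServerSelectionTimeout(0, 10_000)); // 10000
console.log(effectiveServerSelectionTimeout(30_000, 0)); // 30000
console.log(effectiveServerSelectionTimeout(0, 0)); // 0 (infinite)
```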
After server selection and connection checkout are complete, the time remaining bounds the execution of the remainder of the operation.
Note
Specifying timeoutMS is not a hard guarantee that an operation will take exactly the duration specified. In the circumstances identified below, the driver's internal cleanup logic can result in an operation exceeding the duration specified by timeoutMS.
- AbstractCursor.toArray() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
- AbstractCursor.[Symbol.asyncIterator]() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
- MongoClient.bulkWrite() - can take up to 2 * timeoutMS in error scenarios when the driver must clean up cursors used internally.
- CSFLE/QE - can take up to 2 * timeoutMS in rare error scenarios when the driver must clean up cursors used internally when fetching keys from the keyvault or listing collections.
In the AbstractCursor.toArray case and the AbstractCursor.[Symbol.asyncIterator] case, this occurs because these methods close the cursor when they finish returning their documents. As detailed in the following section, this results in the timeout being refreshed before the killCursors command is sent to close the cursor on the server.
The MongoClient.bulkWrite and autoencryption implementations use cursors under the hood and so inherit this issue.
Cursors, timeoutMS and timeoutMode
Cursors require special handling with the new timeout paradigm introduced here. Cursors can be configured to interact with CSOT in two ways.
The first, 'cursorLifetime' mode, uses timeoutMS to bound the entire lifetime of a cursor and is the default timeout mode for non-tailable cursors (find, aggregate, listCollections, etc.). This means that the initialization of the cursor and all subsequent getMore calls MUST finish within timeoutMS or a timeout error will be thrown. Note, however, that the closing of a cursor, either as part of a toArray() call or manually via the close() method, resets the timeout before sending a killCursors operation to the server.
e.g.
// This will ensure that the initialization of the cursor and retrieval of all documents occur within 1000ms, throwing an error if this time limit is exceeded
const docs = await collection.find({}, {timeoutMS: 1000}).toArray();
The second, 'iteration' mode, uses timeoutMS to bound each next/hasNext/tryNext call, refreshing the timeout after each call completes. This is the default mode for all tailable cursors (tailable find cursors on capped collections, change streams, etc.). e.g.
// Each turn of the async iterator will take up to 1000ms before it throws
for await (const doc of cappedCollection.find({}, {tailable: true, timeoutMS: 1000})) {
// process document
}
Note that timeoutMode is also configurable on a per-cursor basis.
GridFS and timeoutMS
GridFS streams interact with timeoutMS in a similar manner to cursors in 'cursorLifetime' mode, in that timeoutMS bounds the entire lifetime of the stream.
In addition, GridFSBucket.find, GridFSBucket.rename and GridFSBucket.drop all support the timeoutMS option and behave in the same way as other operations.
Sessions, Transactions, timeoutMS and defaultTimeoutMS
ClientSessions have a new option: defaultTimeoutMS, which specifies the timeoutMS value to use for:
- commitTransaction
- abortTransaction
- withTransaction
- endSession
Note
If defaultTimeoutMS is not specified, then it will inherit the timeoutMS of the parent MongoClient.
When using ClientSession.withTransaction, the timeoutMS can be configured either in the options on the withTransaction call or inherited from the session's defaultTimeoutMS. This timeoutMS will apply to the entirety of the withTransaction callback, provided that the session is correctly passed into each database operation. If the session is not passed into an operation, that operation will not respect the configured timeout. Also be aware that trying to override the timeoutMS at the operation level for operations using the explicit session inside the withTransaction callback will result in an error being thrown.
const session = client.startSession({defaultTimeoutMS: 1000});
const coll = client.db('db').collection('coll');
// ❌ Incorrect; will throw an error
await session.withTransaction(async function(session) {
await coll.insertOne({x:1}, { session, timeoutMS: 600 });
})
// ❌ Incorrect; will not respect timeoutMS configured on session
await session.withTransaction(async function(session) {
await coll.insertOne({x:1}, {});
})
ClientEncryption and timeoutMS
The ClientEncryption class now supports the timeoutMS option. If timeoutMS is provided when constructing a ClientEncryption instance, it will be used to govern the lifetime of all operations performed on that instance; otherwise, it will inherit from the timeoutMS set on the MongoClient provided to the ClientEncryption constructor.
If timeoutMS is set on both the client and provided to ClientEncryption directly, the option provided to ClientEncryption takes precedence.
const encryption = new ClientEncryption(new MongoClient('localhost:27027'), { timeoutMS: 1_000 });
await encryption.createDataKey('local'); // will not take longer than 1_000ms
const encryption = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 1_000 }));
await encryption.createDataKey('local'); // will not take longer than 1_000ms
const encryption = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 5_000 }), { timeoutMS: 1_000 });
await encryption.createDataKey('local'); // will not take longer than 1_000ms
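The precedence rule in these three examples reduces to a one-line sketch (an illustrative helper, not a driver API):

```javascript
// The timeoutMS given to ClientEncryption directly wins; otherwise the
// MongoClient's timeoutMS is inherited. Illustrative only.
function resolveClientEncryptionTimeout(clientTimeoutMS, clientEncryptionTimeoutMS) {
  return clientEncryptionTimeoutMS ?? clientTimeoutMS;
}

console.log(resolveClientEncryptionTimeout(5_000, 1_000)); // 1000
console.log(resolveClientEncryptionTimeout(1_000, undefined)); // 1000
```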
Limitations
At the time of writing, when using the driver's auto-connect feature alongside CSOT, the time taken for the command performing the auto-connection will not be bound by the configured timeoutMS. We made this design choice because the client's connection logic handles a number of potentially long-running I/O and other setup operations, including reading certificate files, DNS lookups, instantiating server monitors, and launching external processes for client encryption.
We recommend manuall...
v6.10.0
6.10.0 (2024-10-21)
The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb
package!
Release Notes
Warning
Server versions 3.6 and lower will now receive a compatibility error on connection. Support for MONGODB-CR authentication has also been removed.
Support for new client bulkWrite API (8.0+)
A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher.
This API is meant to replace the existing bulk write API on the Collection, as it supports a bulk write across multiple databases and collections in a single call.
Usage
Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options.
The models have a structure as follows:
Insert One
Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId automatically.
{
namespace: '<db>.<collection>',
name: 'insertOne',
document: Document
}
Update One
{
namespace: '<db>.<collection>',
name: 'updateOne',
filter: Document,
update: Document | Document[],
arrayFilters?: Document[],
hint?: Document | string,
collation?: Document,
upsert: boolean
}
Update Many
Note that write errors occurring when an update many model is present are not retryable.
{
namespace: '<db>.<collection>',
name: 'updateMany',
filter: Document,
update: Document | Document[],
arrayFilters?: Document[],
hint?: Document | string,
collation?: Document,
upsert: boolean
}
Replace One
{
namespace: '<db>.<collection>',
name: 'replaceOne',
filter: Document,
replacement: Document,
hint?: Document | string,
collation?: Document
}
Delete One
{
namespace: '<db>.<collection>',
name: 'deleteOne',
filter: Document,
hint?: Document | string,
collation?: Document
}
Delete Many
Note that write errors occurring when a delete many model is present are not retryable.
{
namespace: '<db>.<collection>',
name: 'deleteMany',
filter: Document,
hint?: Document | string,
collation?: Document
}
Example
Below is a mixed model example of using the new API:
const client = new MongoClient(process.env.MONGODB_URI);
const models = [
{
name: 'insertOne',
namespace: 'db.authors',
document: { name: 'King' }
},
{
name: 'insertOne',
namespace: 'db.books',
document: { name: 'It' }
},
{
name: 'updateOne',
namespace: 'db.books',
filter: { name: 'it' },
update: { $set: { year: 1986 } }
}
];
const result = await client.bulkWrite(models);
The bulk write specific options that can be provided to the API are as follows:
- ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
- verboseResults: Optional boolean that indicates whether to provide verbose results. Defaults to false.
- bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
- let: Optional document of parameter names and values that can be accessed using $$var. No default.
The object returned by the bulk write API is:
interface ClientBulkWriteResult {
// Whether the bulk write was acknowledged.
readonly acknowledged: boolean;
// The total number of documents inserted across all insert operations.
readonly insertedCount: number;
// The total number of documents upserted across all update operations.
readonly upsertedCount: number;
// The total number of documents matched across all update operations.
readonly matchedCount: number;
// The total number of documents modified across all update operations.
readonly modifiedCount: number;
// The total number of documents deleted across all delete operations.
readonly deletedCount: number;
// The results of each individual insert operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly insertResults?: ReadonlyMap<number, ClientInsertOneResult>;
// The results of each individual update operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly updateResults?: ReadonlyMap<number, ClientUpdateResult>;
// The results of each individual delete operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly deleteResults?: ReadonlyMap<number, ClientDeleteResult>;
}
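Since the verbose result maps are keyed by the index of the corresponding model in the models array, reading them looks like the following (a sketch with hypothetical data, not output from a real bulk write):

```javascript
// Hypothetical verbose insert results, keyed by model index as described above.
const insertResults = new Map([
  [0, { insertedId: 'author-id' }],
  [1, { insertedId: 'book-id' }]
]);

// Look up the result for the model at index 1 in the models array.
const bookResult = insertResults.get(1);
console.log(bookResult.insertedId); // 'book-id'

// Models with no entry in the map did not perform a successful insert.
console.log(insertResults.has(2)); // false
```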
Error Handling
Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error has the following properties:
- writeConcernErrors: An array of documents for each write concern error that occurred.
- writeErrors: A map keyed by the index of the model provided, whose values are the individual write errors.
- partialResult: The client bulk write result at the point where the error was thrown.
Schema assertion support
interface Book {
name: string;
authorName: string;
}
interface Author {
name: string;
}
type MongoDBSchemas = {
'db.books': Book;
'db.authors': Author;
}
const model: ClientBulkWriteModel<MongoDBSchemas> = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { title: 'Practical MongoDB Aggregations', authorName: 3 }
  // error: `authorName` cannot be a number
};
Notice how authorName is type checked against the Book type because namespace is set to "db.books".
Allow SRV hostnames with fewer than three '.'-separated parts
In an effort to make internal networking solutions, such as Kubernetes deployments, easier to use, the client now accepts SRV hostname strings with one or two '.'-separated parts.
await new MongoClient('mongodb+srv://mongodb.local').connect();
For security reasons, the returned addresses of SRV strings with fewer than three parts must end with the entire SRV hostname and contain at least one additional domain level. This added validation ensures that the returned address(es) come from a known host. In future releases, we plan to extend this validation to SRV strings with three or more parts as well.
// Example 1: Validation fails since the returned address doesn't end with the entire SRV hostname
'mongodb+srv://mySite.com' => 'myEvilSite.com'
// Example 2: Validation fails since the returned address is identical to the SRV hostname
'mongodb+srv://mySite.com' => 'mySite.com'
// Example 3: Validation passes since the returned address ends with the entire SRV hostname and contains an additional domain level
'mongodb+srv://mySite.com' => 'cluster_1.mySite.com'
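The rule illustrated by these examples can be sketched as a small predicate (illustrative only, not the driver's actual SRV resolution code):

```javascript
// Returns true when a returned SRV address is acceptable for an SRV hostname
// with fewer than three '.'-separated parts: it must end with the entire SRV
// hostname and contain at least one additional domain level. Illustrative only.
function isAcceptableSrvAddress(srvHost, returnedAddress) {
  return returnedAddress !== srvHost && returnedAddress.endsWith('.' + srvHost);
}

console.log(isAcceptableSrvAddress('mySite.com', 'myEvilSite.com')); // false
console.log(isAcceptableSrvAddress('mySite.com', 'mySite.com')); // false
console.log(isAcceptableSrvAddress('mySite.com', 'cluster_1.mySite.com')); // true
```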
Explain now supports maxTimeMS
Driver CRUD commands can be explained by providing the explain option:
collection.find({}).explain('queryPlanner'); // using the fluent cursor API
collection.deleteMany({}, { explain: 'queryPlanner' }); // as an option
However, if maxTimeMS was provided, the value was applied to the command being explained, and consequently the server could take more than maxTimeMS to respond to the explain.
Now, maxTimeMS can be specified as a new option for explain commands:
collection.find({}).explain({ verbosity: 'queryPlanner', maxTimeMS: 2000 }); // using the fluent cursor API
collection.deleteMany({}, {
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
}); // as an option
If a top-level maxTimeMS option is provided in addition to the explain maxTimeMS, the explain-specific maxTimeMS is applied to the explain command, and the top-level maxTimeMS is applied to the explained command:
collection.deleteMany({}, {
  maxTimeMS: 1000,
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
});
// the actual command that gets sent to the server looks like:
{
explain: { delete: <collection name>, ..., maxTimeMS: 1000 },
verbosity: 'queryPlanner',
maxTimeMS: 2000
}
Find and Aggregate Explain in Options is Deprecated
Note
Specifying explain for cursors in the operation's options is deprecated in favor of the .explain() methods on cursors:
collection.find({}, { explain: true })
// -> collection.find({}).explain()
collection.aggregate([], { explain: true })
// -> collection.aggregate([]).explain()
db.aggregate([], { explain: true })
// -> db.aggregate([]).explain()
Fixed writeConcern.w set to 0 unacknowledged write protocol trigger
The driver now correctly handles w=0 writes as 'fire-and-forget' messages, where the server does not reply and the driver does not wait for a response. This change eliminates the possibility of encountering certain rare protocol format, BSON type, or network errors that could previously arise during server replies. As a result, w=0 operations now involve less I/O, specifically no socket read.
In addition, when command monitoring is enabled, the reply field of a CommandSucceededEvent for an unacknowledged write will always be { ok: 1 }.
Fixed indefinite hang bug for high write load scenarios
When performing large and numerous write operations, the driver will likely encounter buffering at the socket layer. The logic that waited until buffered writes were complete would mistakenly drop 'data' events (reads from the socket), causing the driver to hang indefinitely or until a socket timeout. Using the pause and resume mechanisms exposed by Node.js streams, we have eliminated the possibility of data events going unhandled.
Shout out to @hunkydoryrepair for debugging and finding this issue!
Fixed change stream infinite resume
Before this fix, when a change stream failed to establish a cursor on the server, the driver would infinitely attempt to resume the change stream. Now, when the aggregate to establish the change stream fails, the driver will throw an error and close the change stream.
`ClientSession.commitTransactio...
v6.9.0
6.9.0 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.9.0 of the mongodb
package!
Release Notes
Driver support of upcoming MongoDB server release
Increased the driver's max supported Wire Protocol version and server version in preparation for the upcoming release of MongoDB 8.0.
MongoDB 3.6 server support deprecated
Warning
Support for 3.6 servers is deprecated and will be removed in a future version.
Support for explicit resource management
The driver now natively supports explicit resource management for MongoClient, ClientSession, ChangeStreams and cursors. Additionally, on compatible Node.js versions, explicit resource management can be used with cursor.stream() and the GridFSDownloadStream, since these classes inherit resource management from Node.js' readable streams.
This feature is experimental and subject to changes at any time. This feature will remain experimental until the proposal has reached stage 4 and Node.js declares its implementation of async disposable resources as stable.
To use explicit resource management with the Node driver, you must:
- Use Typescript 5.2 or greater (or another bundler that supports resource management)
- Enable tslib polyfills for your application
- Either use a compatible Node.js version or polyfill Symbol.asyncDispose (see the TS 5.2 release announcement for more information).
Explicit resource management is a feature that ensures that resources' disposal methods are always called when the resources' scope is exited. For driver resources, explicit resource management guarantees that the resources' corresponding close
method is called when the resource goes out of scope.
// before:
{
  const client = await MongoClient.connect('<uri>');
  try {
    const session = client.startSession();
    try {
      const cursor = client.db('my-db').collection('my-collection').find({}, { session });
      try {
        const doc = await cursor.next();
      } finally {
        await cursor.close();
      }
    } finally {
      await session.endSession();
    }
  } finally {
    await client.close();
  }
}
// with explicit resource management:
{
await using client = await MongoClient.connect('<uri>');
await using session = client.startSession();
await using cursor = client.db('my-db').collection('my-collection').find({}, { session });
const doc = await cursor.next();
}
// outside of scope, the cursor, session and mongo client will be cleaned up automatically.
The full explicit resource management proposal can be found here.
Driver now supports auto selecting between IPv4 and IPv6 connections
Users on Node.js versions that support the autoSelectFamily and autoSelectFamilyAttemptTimeout options (Node 18.13+) can now provide them to the MongoClient, and they will be passed through to socket creation. autoSelectFamily defaults to true; autoSelectFamilyAttemptTimeout is not defined by default. Example:
const client = new MongoClient(process.env.MONGODB_URI, { autoSelectFamilyAttemptTimeout: 100 });
Allow passing through the allowPartialTrustChain Node.js TLS option
This option is now exposed through the MongoClient constructor's options parameter and controls the X509_V_FLAG_PARTIAL_CHAIN OpenSSL flag.
Fixed enableUtf8Validation option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
Add duration indicating time elapsed between connection creation and when the connection is ready
ConnectionReadyEvent now has a durationMS property that represents the time between the connection creation event and when the connection ready event is fired.
Add duration indicating time elapsed between the beginning and end of a connection checkout operation
ConnectionCheckedOutEvent/ConnectionCheckFailedEvent now have a durationMS property that represents the time between checkout start and success/failure.
Create native cryptoCallbacks 🔐
Node.js bundles OpenSSL, which means we can access the crypto APIs from C++ directly, avoiding the need to define them in JavaScript and call back into the JS engine to perform encryption. Now, when running the bindings in a version of Node.js that bundles OpenSSL 3 (which should correspond to Node.js 18+), the cryptoCallbacks option will be ignored and C++-defined callbacks will be used instead. This improves the performance of encryption dramatically, as much as 5x faster. 🚀
This improvement was made to mongodb-client-encryption@6.1.0, which is available now!
Only permit mongocryptd spawn path and arguments to be own properties
We have added some defensive programming to the options that specify the spawn path and spawn arguments for mongocryptd, due to the sensitivity of the system resource they control, namely launching a process. Now, mongocryptdSpawnPath and mongocryptdSpawnArgs must be own properties of autoEncryption.extraOptions. This makes it more difficult for a global prototype pollution bug related to these options to occur.
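The spirit of this check can be illustrated with a plain own-property lookup (a sketch, not the driver's actual validation code):

```javascript
// Only own properties of extraOptions are honored; values inherited via the
// prototype chain (e.g. from a polluted prototype) are ignored.
function readOwnOption(extraOptions, name) {
  return Object.hasOwn(extraOptions, name) ? extraOptions[name] : undefined;
}

// Value comes from the prototype chain, not the object itself: ignored.
const polluted = Object.create({ mongocryptdSpawnPath: '/evil/mongocryptd' });
console.log(readOwnOption(polluted, 'mongocryptdSpawnPath')); // undefined

// Own property: honored.
const explicit = { mongocryptdSpawnPath: '/usr/local/bin/mongocryptd' };
console.log(readOwnOption(explicit, 'mongocryptdSpawnPath')); // '/usr/local/bin/mongocryptd'
```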
Support for range v2: Queryable Encryption supports range queries
Queryable encryption range queries are now officially supported. To use this feature, you must:
- use a version of mongodb-client-encryption > 6.1.0
- use a Node driver version > 6.9.0
- use an 8.0+ MongoDB enterprise server
Important
Collections and documents encrypted with range queryable fields with a 7.0 server are not compatible with range queries on 8.0 servers.
Documentation for queryable encryption can be found in the MongoDB server manual.
insertMany and bulkWrite accept ReadonlyArray inputs
This improves the TypeScript developer experience: developers tend to use ReadonlyArray because it helps identify where mutations are made and, when noUncheckedIndexedAccess is enabled, leads to a better type-narrowing experience.
Please note that the array is read-only but the documents are not: the driver adds _id fields to your documents unless you request that the server generate the _id with forceServerObjectId.
Fix retryability criteria for write concern errors on pre-4.4 sharded clusters
Previously, the driver would erroneously retry writes on pre-4.4 sharded clusters based on a nested code in the server response (error.result.writeConcernError.code). Per the common drivers specification, retryability should be based on the top-level code (error.code). With this fix, the driver avoids unnecessary retries.
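The corrected criterion can be sketched as follows (illustrative only; the retryable code shown, 91 ShutdownInProgress, is one example from the specification):

```javascript
// Retryability is decided by the top-level error code only, never by the
// nested error.result.writeConcernError.code. Illustrative sketch.
const RETRYABLE_CODES = new Set([91]); // 91 = ShutdownInProgress

function shouldRetryWrite(error) {
  return RETRYABLE_CODES.has(error.code);
}

// Top-level code is retryable: retry.
console.log(shouldRetryWrite({ code: 91 })); // true

// Only the nested write concern error code is retryable: do not retry.
console.log(shouldRetryWrite({ code: 8000, result: { writeConcernError: { code: 91 } } })); // false
```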
The LocalKMSProviderConfiguration's key property accepts Binary for auto encryption
In #4160 we fixed a type issue where a local KMS provider accepted a BSON Binary instance at runtime but the TypeScript types inaccurately permitted only Buffer and string. The same change has now been applied to AutoEncryptionOptions.
BulkOperationBase (superclass of UnorderedBulkOperation and OrderedBulkOperation) now reports the length property in TypeScript
The length getter for these classes was defined manually using Object.defineProperty, which hid it from TypeScript. Thanks to @sis0k0 we now have the getter defined on the class, which is functionally the same but a greatly improved DX when working with types. 🎉
MongoWriteConcernError.code is overwritten by nested code within MongoWriteConcernError.result.writeConcernError.code
MongoWriteConcernError is now correctly formed such that the original top-level code is preserved:
- If no top-level code exists, MongoWriteConcernError.code is set to MongoWriteConcernError.result.writeConcernError.code
- If a top-level code is passed into the constructor, it is not changed or overwritten by the nested writeConcernError.code
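These two rules amount to the following selection logic (a sketch, not the driver's constructor; the codes shown are hypothetical):

```javascript
// Preserve a top-level code when present; otherwise fall back to the nested
// writeConcernError code. Illustrative only.
function resolveWriteConcernErrorCode(topLevelCode, result) {
  return topLevelCode ?? result?.writeConcernError?.code;
}

console.log(resolveWriteConcernErrorCode(64, { writeConcernError: { code: 100 } })); // 64
console.log(resolveWriteConcernErrorCode(undefined, { writeConcernError: { code: 100 } })); // 100
```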
Optimized cursor.toArray()
Prior to this change, toArray() simply used the cursor's async iterator API, which parses BSON documents lazily (see more here). toArray(), however, eagerly fetches the entire set of results, pushing each document into the returned array. As such, toArray does not have the same benefits from lazy parsing as other parts of the cursor API.
With this change, when toArray() accumulates documents, it empties the current batch of documents into the array before calling the async iterator again, which means each iteration will fetch the next batch rather than wrap each d...
v6.8.2
6.8.2 (2024-09-12)
The MongoDB Node.js team is pleased to announce version 6.8.2 of the mongodb
package!
Release Notes
Fixed mixed use of cursor.next() and cursor[Symbol.asyncIterator]
In 6.8.0, we inadvertently prevented the use of cursor.next() along with for await syntax to iterate cursors. If your code used the following pattern and the call to cursor.next() retrieved all your documents in the first batch, the for-await loop would never be entered. This issue is now fixed.
const firstDoc = await cursor.next();
for await (const doc of cursor) {
// process doc
// ...
}
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.8.1
6.8.1 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.8.1 of the mongodb
package!
Release Notes
Fixed enableUtf8Validation option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.8.0
6.8.0 (2024-06-27)
The MongoDB Node.js team is pleased to announce version 6.8.0 of the mongodb
package!
Release Notes
Add ReadConcernMajorityNotAvailableYet to retryable errors
ReadConcernMajorityNotAvailableYet (error code 134) is now a retryable read error.
ClientEncryption.createDataKey() and other helpers now support named KMS providers
KMS providers can now be associated with a name and multiple keys can be provided per-KMS provider. The following example configures a ClientEncryption object with multiple AWS keys:
const clientEncryption = new ClientEncryption(keyVaultClient, {
  'aws:key1': {
    accessKeyId: ...,
    secretAccessKey: ...
  },
  'aws:key2': {
    accessKeyId: ...,
    secretAccessKey: ...
  }
});

clientEncryption.createDataKey('aws:key1', { ... });
Named KMS providers are supported for the AWS, Azure, GCP, KMIP and local KMS providers. Named KMS providers cannot be used if the application is using the automatic KMS provider refresh capability.
This feature requires mongodb-client-encryption>=6.0.1.
KMIP data keys now support a delegated option
When creating a KMIP data key, delegated can now be specified. If true, the KMIP provider will perform encryption/decryption of the data key locally, ensuring that the encryption key never leaves the KMIP server.
clientEncryption.createDataKey('kmip', { masterKey: { delegated: true } } );
This feature requires mongodb-client-encryption>=6.0.1.
Cursor responses are now parsed lazily 🦥
MongoDB cursors (find, aggregate, etc.) operate on batches of documents equal to batchSize. Each time the driver runs out of documents for the current batch, it gets more (getMore) and returns each document one at a time through APIs like cursor.next() or for await (const doc of cursor).
Prior to this change, the Node.js driver was designed in such a way that the entire BSON response was decoded after it was received. Parsing BSON, just like parsing JSON, is a synchronous blocking operation. This means that throughout a cursor's lifetime, invocations of .next() that need to fetch a new batch hold up on parsing batchSize (default 1000) documents before returning to the user.
In an effort to provide more responsiveness, the driver now decodes BSON on demand. When a batch arrives, the driver initially parses only metadata: the batch's size and whether there are more documents to iterate after it. After that, each document is parsed out of the BSON only as the cursor is iterated.
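The effect can be sketched with a toy generator: instead of decoding every document in a batch up front, each one is decoded only when the consumer asks for it. Here JSON stands in for BSON, and lazyBatch is a hypothetical illustration, not driver code:

```javascript
// Toy model of on-demand decoding: the batch arrives as encoded
// documents, but each one is decoded only when iterated.
function* lazyBatch(encodedDocs) {
  for (const raw of encodedDocs) {
    yield JSON.parse(raw); // decoding happens here, per document
  }
}

const batch = lazyBatch(['{"_id":1}', '{"_id":2}', '{"_id":3}']);
const first = batch.next().value; // only the first document has been parsed
console.log(first); // { _id: 1 }
```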
A perfect example of where this comes in handy is our beloved mongosh
! 💚
test> db.test.find()
[
{ _id: ObjectId('665f7fc5c9d5d52227434c65'), ... },
...
]
Type "it" for more
That Type "it" for more message now prints after parsing only the documents displayed, rather than after the entire batch is parsed.
Add Signature to GitHub Releases
The GitHub release for the mongodb package now contains a detached signature file for the NPM package (named mongodb-X.Y.Z.tgz.sig) on every major and patch release to 6.x and 5.x. To verify the signature, follow the instructions in the 'Release Integrity' section of the README.md file.
The LocalKMSProviderConfiguration's key property accepts Binary
A local KMS provider accepted a BSON Binary instance at runtime, but the TypeScript types inaccurately only permitted Buffer and string.
Clarified cursor state properties
The cursor has a few properties that represent its current state from the perspective of the driver and server. This change corrects an issue that never made it into a release, but we would like to take the opportunity to re-highlight what each of these properties means.
- cursor.closed - cursor.close() has been called, and there are no more documents stored in the cursor.
- cursor.killed - cursor.close() was called while the cursor still had a non-zero id, and the driver sent a killCursors command to free server-side resources.
- cursor.id == null - The cursor has yet to send its first command (ex. find, aggregate).
- cursor.id.isZero() - The server sent the driver a cursor id of 0, indicating the cursor no longer exists on the server side because all data has been returned to the driver.
- cursor.bufferedCount() - The number of documents stored locally in the cursor.
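As a mnemonic, the combinations above can be read off any object with the same shape. describeCursorState is a hypothetical helper for illustration only; it is not part of the driver:

```javascript
// Illustrative only: summarize cursor state from the properties above.
// The `cursor` argument is any object exposing the same fields.
function describeCursorState(cursor) {
  if (cursor.id == null) return 'not started: no command sent yet';
  if (cursor.id.isZero() && cursor.bufferedCount() === 0) {
    return 'exhausted: server returned all data and local buffer is empty';
  }
  if (cursor.killed) return 'killed: closed while a server-side cursor existed';
  return 'active: documents remain on the server or in the local buffer';
}

// A cursor whose server side is done and whose buffer is drained:
const exhausted = {
  id: { isZero: () => true },
  killed: false,
  bufferedCount: () => 0
};
console.log(describeCursorState(exhausted));
```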
Features
- NODE-5718: add ReadConcernMajorityNotAvailableYet to retryable errors (#4154) (4f32dec)
- NODE-5801: allow multiple providers per type (#4137) (4d209ce)
- NODE-5853: support delegated KMIP data key option (#4129) (aa429f8)
- NODE-6136: parse cursor responses on demand (#4112) (3ed6a2a)
- NODE-6157: add signature to github releases (#4119) (f38c5fe)
Bug Fixes
- NODE-5801: use more specific key typing for multiple KMS provider support (#4146) (465ffd9)
- NODE-6085: add TS support for KMIP data key options (#4128) (f790cc1)
- NODE-6241: allow Binary as local KMS provider key (#4160) (fb724eb)
- NODE-6242: close becomes true after calling close when documents still remain (#4161) (e3d70c3)
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.7.0
6.7.0 (2024-05-29)
The MongoDB Node.js team is pleased to announce version 6.7.0 of the mongodb
package!
Release Notes
Support for MONGODB-OIDC Authentication
MONGODB-OIDC
is now supported as an authentication mechanism for MongoDB server versions 7.0+. The currently supported facets to authenticate with are callback authentication, human interaction callback authentication, Azure machine authentication, and GCP machine authentication.
Azure Machine Authentication
The MongoClient must be instantiated with authMechanism=MONGODB-OIDC in the URI or in the client options. The auth mechanism properties TOKEN_RESOURCE and ENVIRONMENT are required, and an optional username may also be provided. Example:
const client = new MongoClient('mongodb+srv://<username>@<host>:<port>/?authMechanism=MONGODB-OIDC&authMechanismProperties=TOKEN_RESOURCE:<azure_token>,ENVIRONMENT:azure');
await client.connect();
GCP Machine Authentication
The MongoClient must be instantiated with authMechanism=MONGODB-OIDC in the URI or in the client options. The auth mechanism properties TOKEN_RESOURCE and ENVIRONMENT are required. Example:
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC&authMechanismProperties=TOKEN_RESOURCE:<gcp_token>,ENVIRONMENT:gcp');
await client.connect();
Callback Authentication
The user can provide a custom callback to the MongoClient that returns a valid response with an access token. The callback is provided as an auth mechanism property and has the signature of:
const oidcCallback = async (params: OIDCCallbackParams): Promise<OIDCResponse> => {
// params.timeoutContext is an AbortSignal that will abort after 30 seconds for non-human and 5 minutes for human.
// params.version is the current OIDC API version.
// params.idpInfo is the IdP info returned from the server.
// params.username is the optional username.
// Make a call to get a token.
const token = ...;
return {
accessToken: token,
expiresInSeconds: 300,
refreshToken: token
};
}
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC', {
authMechanismProperties: {
OIDC_CALLBACK: oidcCallback
}
});
await client.connect();
For callbacks that require human interaction, set the callback to the OIDC_HUMAN_CALLBACK
property:
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC', {
authMechanismProperties: {
OIDC_HUMAN_CALLBACK: oidcCallback
}
});
await client.connect();
Fixed error when useBigInt64=true was set on Db or MongoClient
Fixed an issue where setting useBigInt64=true on a MongoClient or Db caused an internal function, compareTopologyVersion, to throw an error when encountering a bigint value.
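The underlying pitfall is general to JavaScript: relational comparison between Number and BigInt works, but arithmetic that mixes the two throws a TypeError. A minimal sketch of a safe comparison of counters that may be either type (illustrative only, not the driver's actual compareTopologyVersion implementation):

```javascript
// Mixing types in arithmetic throws: 1n - 2 → TypeError.
// Coercing both operands to BigInt first makes the comparison safe
// whether the counters arrived as number (the default) or as bigint
// (useBigInt64: true).
function compareCounters(a, b) {
  const x = BigInt(a);
  const y = BigInt(b);
  if (x === y) return 0;
  return x < y ? -1 : 1;
}

console.log(compareCounters(1n, 2)); // -1
console.log(compareCounters(3, 3n)); // 0
```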
Features
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.6.2
6.6.2 (2024-05-15)
The MongoDB Node.js team is pleased to announce version 6.6.2 of the mongodb
package!
Release Notes
Server Selection performance regression due to incorrect RTT measurement
Starting in version 6.6.0, when using the stream server monitoring mode, heartbeats were incorrectly timed as having a duration of 0, leading server selection to view every server as equally desirable.
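A heartbeat duration measured against a monotonic clock cannot collapse to a constant 0 for real work. A minimal sketch of such a measurement in Node.js (illustrative only, not the driver's monitoring code):

```javascript
// Measure an operation's duration with the monotonic clock so the
// result is a real elapsed time rather than a constant.
function timeOperation(fn) {
  const start = process.hrtime.bigint();
  fn();
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // milliseconds
}

const ms = timeOperation(() => {
  for (let i = 0; i < 1e6; i++); // stand-in for a round trip
});
console.log(ms > 0); // true
```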
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.6.1
6.6.1 (2024-05-06)
The MongoDB Node.js team is pleased to announce version 6.6.1 of the mongodb
package!
Release Notes
ref()-ed timer keeps event loop running until client.connect() resolves
When the MongoClient is first starting up (client.connect()), monitoring connections begin the process of discovering servers to make them selectable. The ref()-ed serverSelectionTimeoutMS timer keeps Node.js' event loop running while the monitoring connections are created. In the last release, we inadvertently unref()-ed this initial timer, which allowed Node.js to exit before the monitors could create connections.
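The ref()/unref() behavior can be observed directly on any Node.js timer; Timeout.hasRef() reports whether the timer will keep the process alive (a minimal sketch, unrelated to the driver's internals):

```javascript
// A ref()-ed timer keeps the event loop (and therefore the process)
// running until it fires; unref()-ing it lets the process exit early.
const timer = setTimeout(() => {}, 1000);

console.log(timer.hasRef()); // true: timers are ref()-ed by default
timer.unref();
console.log(timer.hasRef()); // false: Node.js may exit before it fires
timer.ref();
console.log(timer.hasRef()); // true again

clearTimeout(timer); // clean up so this snippet exits immediately
```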
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.