With the cosmos-spark-connector, it seems we cannot write 2 MB of data as a single document into Cosmos DB via bulk import. After reading the source code (https://github.com/Azure/azure-cosmosdb-spark/blob/1247559cae6b843a1ddc9fae9827222578abd42d/src/main/scala/com/microsoft/azure/cosmosdb/spark/CosmosDBSpark.scala#L153), it looks like the customer must set the "MaxMiniBatchImportSizeKB" option explicitly; otherwise the maximum size is 100 KB.
val maxMiniBatchImportSizeKB: Int = writeConfig.get[String](CosmosDBConfig.MaxMiniBatchImportSizeKB)
  .getOrElse(CosmosDBConfig.DefaultMaxMiniBatchImportSizeKB.toString)
  .toInt
As far as I know, the maximum size of a single document in Cosmos DB is 2 MB. If a customer sets a large value for "MaxMiniBatchImportSizeKB", what will happen? Also, could the default value of "DefaultMaxMiniBatchImportSizeKB" be changed to 2048?
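For reference, this is roughly how a caller would override that limit today. This is only a sketch: the endpoint, key, database, and collection values are placeholders, and the exact option key strings (e.g. "BulkImport", "MaxMiniBatchImportSizeKB") are assumptions based on the connector README and CosmosDBConfig, so please verify them against the source.

import org.apache.spark.sql.{SaveMode, SparkSession}
import com.microsoft.azure.cosmosdb.spark._
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark.config.Config

val spark = SparkSession.builder().appName("cosmos-bulk-write").getOrCreate()
import spark.implicits._

// Example DataFrame; a real payload could approach the 2 MB per-document cap.
val df = Seq(("id-1", "x" * 1000000)).toDF("id", "payload")

// Placeholder connection settings plus the mini-batch size override.
val writeConfig = Config(Map(
  "Endpoint"                 -> "https://<account>.documents.azure.com:443/",
  "Masterkey"                -> "<master-key>",
  "Database"                 -> "<database>",
  "Collection"               -> "<collection>",
  "BulkImport"               -> "true",  // assumed key for the bulk import path
  "MaxMiniBatchImportSizeKB" -> "2048"   // raise the mini-batch limit to the 2 MB document cap
))

df.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)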
Thanks!
Hi, @FabianMeiswinkel,
Could you please take a look at the questions above? Thanks!