@zhong0x - That's an interesting question; I was wondering about this myself.
The only workaround I found was to use the Time To Live (TTL) capability of Cosmos DB. Basically, enabling TTL at the container level and then setting the 'ttl' field of the existing items to a very small value will do the trick.
To enable TTL at the container level, follow this tutorial.
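If you prefer to do this programmatically rather than through the portal, here is a minimal sketch using the azure-cosmos v4 Java SDK from Scala. The endpoint, key, database and container names below are hypothetical placeholders, not values from this repo:

```scala
import com.azure.cosmos.CosmosClientBuilder

// Hypothetical placeholders: substitute your own endpoint, key, database and container.
val client = new CosmosClientBuilder()
  .endpoint("https://your-account.documents.azure.com:443/")
  .key(sys.env("COSMOS_KEY"))
  .buildClient()

val container = client.getDatabase("your-database").getContainer("your-container")

// Read the current container properties, enable TTL (-1 = on, but items never
// expire unless they carry their own 'ttl' field), and write the properties back.
val properties = container.read().getProperties
properties.setDefaultTimeToLiveInSeconds(-1)
container.replace(properties)

client.close()
```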
Afterwards you could read the records that you want to delete using:
```scala
val recordsToDelete = spark.read.cosmosDB(yourCosmosDbReadConfig)
```
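Here `yourCosmosDbReadConfig` (and the `yourCosmosDbWriteConfig` used below) are azure-cosmosdb-spark `Config` objects. A minimal sketch, with hypothetical account details and an illustrative filter query:

```scala
import com.microsoft.azure.cosmosdb.spark.schema._   // brings the cosmosDB read/write methods into scope
import com.microsoft.azure.cosmosdb.spark.config.Config

// Hypothetical placeholders: substitute your own endpoint, key, database and container.
val yourCosmosDbReadConfig = Config(Map(
  "Endpoint"     -> "https://your-account.documents.azure.com:443/",
  "Masterkey"    -> "<your-master-key>",
  "Database"     -> "your-database",
  "Collection"   -> "your-container",
  // Only read the items you actually want to expire (illustrative filter).
  "query_custom" -> "SELECT * FROM c WHERE c.status = 'obsolete'"
))

val yourCosmosDbWriteConfig = Config(Map(
  "Endpoint"   -> "https://your-account.documents.azure.com:443/",
  "Masterkey"  -> "<your-master-key>",
  "Database"   -> "your-database",
  "Collection" -> "your-container",
  "Upsert"     -> "true"   // overwrite the existing documents instead of failing on conflicts
))
```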
Then you could force the value of ttl to 1 (one second) and write the records back:
```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.lit

val recordsMarkedForDeletion = recordsToDelete.withColumn("ttl", lit(1)) // expire after 1 second
recordsMarkedForDeletion.write.mode(SaveMode.Overwrite).cosmosDB(yourCosmosDbWriteConfig)
```
The above snippet marks the selected items so that Cosmos DB's TTL mechanism removes them. Note that expiry is performed by a background process using leftover request units, so the actual deletion may not be instantaneous.
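If you want to verify that the expiry actually happened, one illustrative option is to re-run the same filtered read once the TTL window has passed and check that nothing comes back:

```scala
// Re-read with the same filter after the 1-second TTL has elapsed; an empty
// result means the TTL background process has already purged the items.
val remaining = spark.read.cosmosDB(yourCosmosDbReadConfig)
println(s"Items not yet purged: ${remaining.count()}")
```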