Enforce storage quota before upload #8385
base: master
Conversation
📝 Walkthrough

This pull request enhances the dataset upload process by incorporating the total file size into storage quota validations. The frontend now calculates the total file size of the files to be uploaded and passes this information to the backend. Consequently, several backend components, including the reserve and upload methods across multiple controllers and services, have been updated to consider the incoming upload size when validating available storage. Additionally, a new error message is introduced to handle cases where the upload exceeds the reserved space.
@fm3 Would you please take a look at this PR? The changes in the frontend are very minimal right now, so it should be OK if no frontend dev takes a look at it, imo.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)

- app/controllers/WKRemoteDataStoreController.scala (1 hunks)
- conf/messages (1 hunks)
- frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (9)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1)

64-64: LGTM! The addition of `None` for the `totalFileSize` parameter is correct, as this service is used for composing datasets rather than direct uploads.

conf/messages (1)

113-113: LGTM! The error message is clear and provides helpful guidance to users when they exceed their reserved upload size.
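For reference, the entry under review presumably follows the usual Play messages format; only the key `dataset.upload.moreBytesThanReserved` appears in the diffs below, so the wording here is an assumption:

```
# Hypothetical wording; only the message key is confirmed elsewhere in this review.
dataset.upload.moreBytesThanReserved=This chunk would exceed the total file size reserved for this upload.
```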
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)

38-38: LGTM! The addition of the `totalFileSize` field to `ReserveUploadInformation` is appropriate.
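For orientation, a minimal sketch of the extended case class; the real definition in `UploadService.scala` carries more fields, so everything here beyond the class name and the new `totalFileSize` field is an assumption:

```scala
// Hypothetical sketch; the actual ReserveUploadInformation has additional fields.
// totalFileSize stays optional so that older clients (e.g. old wklibs versions)
// that do not send it keep working.
case class ReserveUploadInformation(
    uploadId: String, // id the chunks of this upload are registered under
    organization: String, // owning organization, used for the quota lookup
    totalFileCount: Long, // number of files the client intends to upload
    totalFileSize: Option[Long] // new: total upload size in bytes
)
```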
129-132: LGTM! The Redis key definitions for tracking total and current upload sizes follow the existing naming convention.

651-652: LGTM! The cleanup of Redis keys in `removeFromRedis` is complete and correct.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2)
125-125: LGTM! Added `totalFileSize` parameter for storage quota validation. The addition of the `totalFileSize` parameter in the `ReserveUploadInformation` constructor aligns with the PR's objective to enforce storage quota before upload.

417-417: LGTM! Added `totalFileSize` parameter for storage quota validation. The addition of the `totalFileSize` parameter in the `ReserveUploadInformation` constructor is consistent with the changes in the manual upload method.

frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2)
319-319: LGTM! Correctly calculates total file size. The implementation uses `reduce` to calculate the total size of all files, which is an efficient approach.

328-328: LGTM! Added `totalFileSize` to upload information. The calculated total file size is correctly included in the `reserveUploadInformation` object sent to the backend.
```diff
@@ -73,7 +73,7 @@ class WKRemoteDataStoreController @Inject()(
         uploadInfo.organization) ~> NOT_FOUND
       usedStorageBytes <- organizationDAO.getUsedStorage(organization._id)
       _ <- Fox.runOptional(organization.includedStorageBytes)(includedStorage =>
-        bool2Fox(usedStorageBytes <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
+        bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
```
🛠️ Refactor suggestion
Consider requiring `totalFileSize` for uploads.

The current implementation uses `getOrElse(0L)`, which could silently pass the quota check if `totalFileSize` is `None`. Consider making `totalFileSize` a required field for uploads to prevent accidentally bypassing the quota check.
```diff
-bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)
+uploadInfo.totalFileSize match {
+  case Some(size) => bool2Fox(usedStorageBytes + size <= includedStorage)
+  case None => Fox.failure("dataset.upload.totalFileSizeRequired")
+}
```

Also add this message to conf/messages:

```
dataset.upload.totalFileSizeRequired=Total file size must be provided for uploads to enforce storage quota.
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
+uploadInfo.totalFileSize match {
+  case Some(size) => bool2Fox(usedStorageBytes + size <= includedStorage)
+  case None => Fox.failure("dataset.upload.totalFileSizeRequired")
+} ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
```
```scala
totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
  runningUploadMetadataStore
    .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
    .flatMap(alreadyUploadedAmountOfBytes =>
      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
} ?~> "dataset.upload.moreBytesThanReserved"
```
Fix potential race condition in size validation.
The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.
```diff
-totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
-_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
+    _ <- Fox.runOptional(totalFileSize) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(uploadId),
+                                               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}
```
Also add a method to DataStoreRedisStore:
```scala
def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
```
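As a starting point, here is a minimal sketch of such a lock built on Redis `SET NX PX` with a token-checked release; it uses the Jedis client purely for illustration, so the `withLock` signature above and the `DataStoreRedisStore` integration would still need adapting:

```scala
import java.util.{Collections, UUID}
import redis.clients.jedis.Jedis
import redis.clients.jedis.params.SetParams

// Illustrative sketch only, not webknossos code.
// Acquire: SET key token NX PX ttl. Release: delete the key only if our token
// still matches, done atomically via a Lua script so we never free another
// caller's lock after our TTL has expired.
class SimpleRedisLock(jedis: Jedis) {
  private val releaseScript =
    """if redis.call('get', KEYS[1]) == ARGV[1]
      |then return redis.call('del', KEYS[1])
      |else return 0 end""".stripMargin

  def withLock[T](key: String, ttlMillis: Long = 30000L)(f: => T): Option[T] = {
    val token = UUID.randomUUID().toString
    val acquired = "OK" == jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis))
    if (acquired) {
      try Some(f) // run the critical section while holding the lock
      finally jedis.eval(releaseScript, Collections.singletonList(key), Collections.singletonList(token))
    } else None // lock is held elsewhere; caller decides whether to retry
  }
}
```

A production version would also need retries with backoff and possibly TTL renewal for long critical sections, which is why the comment above points to battle-tested implementations such as Redisson.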
Yeah that might actually be a thing. Will take a look at it
@MichaelBuessemeyer Thanks for considering this feedback! Let me know if you need any help with implementing the distributed locking solution.
(_/)
(^.^)
(> <)

Thanks for the feedback!
✏️ Learnings added

```
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.315Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
```
```scala
_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
  Fox.combined(
    List(
      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                        String.valueOf(fileSize)),
      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
    ))
}
_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
```
Fix duplicate initialization of `currentUploadedTotalFileSize`.

The `currentUploadedTotalFileSize` key is initialized twice:

- inside the `Fox.runOptional(reserveUploadInfo.totalFileSize)` block
- immediately after the block
```diff
 _ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
   Fox.combined(
     List(
       runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                         String.valueOf(fileSize)),
       runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
     ))
 }
-_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```scala
_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
  Fox.combined(
    List(
      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                        String.valueOf(fileSize)),
      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
    ))
}
```
…ableminds/webknossos into enforce-storage-quota-before-upload
```diff
@@ -324,6 +325,7 @@ class DatasetUploadView extends React.Component<PropsWithFormAndRouter, State> {
       organization: activeUser.organization,
       totalFileCount: formValues.zipFile.length,
       filePaths: filePaths,
+      totalFileSizeInBytes,
```
Hi @markbader,

here are the changes I mentioned that would need to be added to wklibs when uploading a dataset. Simply put: the initial reserve-upload request now needs a new parameter called `totalFileSizeInBytes`, which is the sum of all files' sizes in bytes. The rest is backend work, and no further changes are needed.
Old wklibs versions should still work, as this parameter is optional for now :)
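To illustrate, the reserve-upload request body would then carry the extra field, roughly like this (a sketch only: the field names besides `totalFileSizeInBytes` come from the frontend diff above, and all values are placeholders):

```scala
import play.api.libs.json.Json

// Hypothetical reserve-upload payload as a client such as wklibs might send it.
val reserveUploadBody = Json.obj(
  "organization" -> "sample_organization", // placeholder
  "totalFileCount" -> 2, // placeholder
  "filePaths" -> Json.arr("color/data.zip", "segmentation/data.zip"), // placeholders
  "totalFileSizeInBytes" -> 1073741824L // new: sum of all files' sizes in bytes
)
```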
Currently, the storage quota enforcement does not consider the size of the current upload. With these changes, the upload request sends its total size, so the check can be done including the size of the dataset that is to be uploaded. The backend automatically rejects the reservation in case the total sum would overflow the allowed storage quota. Additionally, the backend automatically rejects chunks which would overflow the reserved amount of bytes for the upload.
URL of deployed dev instance (used for testing):
Steps to test:

- Open `dataset_upload_view.tsx` and edit line 319 to force the reported total file size to be fewer bytes than the dataset you now want to upload. Then save and reload the frontend.
- Afterwards, undo the `dataset_upload_view.tsx` file changes.
- In `application.conf`, set `reportUsedStorage.enabled = true`.
- Set the organization's plan to `Team` or so (not sure whether that's necessary).

TODOs:

- Rename `totalFileSize` to `totalFileSizeInBytes`. Do you have an improvement idea?
Issues: