Problem
We are currently developing an intelligent time registration system (DillyDally), where an employee of a given company can log their work hours and the system creates an overview of all worklogs. We are hosting the project on Google Firebase and storing all data in Firestore, the NoSQL document database. On average, one employee at Prolike submits 5-10 logs per day, so the data will grow quickly: a company with 1,000 employees will generate approximately 2.6 million logs annually.
Problem 1:
We need to figure out how to fetch ALL worklogs from Firestore; performance is the core requirement here. We need to analyze how we can fetch 50k, 500k, 5 million, and 50 million documents smoothly.
Problem 2:
If we don't fetch ALL data at once, how can we calculate the sum of all work hours efficiently?
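One common Firestore pattern that avoids fetching everything just to compute a sum is to maintain a running aggregate that is updated on every write, e.g. with `FieldValue.increment()` on a summary document written in the same batch as the worklog itself. The pure update logic can be sketched as below (field names such as `hours`, `totalHours`, and `logCount` are our own assumptions, not part of DillyDally):

```javascript
// Sketch of an incremental aggregate, assuming each worklog has an
// `hours` field. In Firestore, the same effect is achieved with
// FieldValue.increment() on a summary document, written in the same
// batch as the worklog, so the total is always a single-document read.
function applyWorklog(summary, worklog) {
  return {
    totalHours: summary.totalHours + worklog.hours,
    logCount: summary.logCount + 1,
  };
}

// Usage: replay three logs against an empty summary.
let summary = { totalHours: 0, logCount: 0 };
for (const log of [{ hours: 7.5 }, { hours: 8 }, { hours: 6 }]) {
  summary = applyWorklog(summary, log);
}
console.log(summary); // { totalHours: 21.5, logCount: 3 }
```

With this pattern the sum never has to be recomputed from the raw logs, regardless of how many millions of documents exist.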
Features
A comparison between scenarios, measured in time:
- Fetching ALL data at once
- Fetching data with pagination (100 documents at a time)
- Your suggestion?
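The pagination scenario is typically implemented with Firestore query cursors (`orderBy` + `startAfter` + `limit`), since Firestore has no efficient offset-based paging. Below is a runnable sketch of the cursor loop against an in-memory stand-in for the collection; with the real SDK, `fetchPage` would be something like `db.collection('worklogs').orderBy('createdAt').startAfter(cursor).limit(pageSize).get()` (collection and field names are assumptions):

```javascript
// Cursor-based pagination loop. `fetchPage(cursor, pageSize)` stands in
// for one Firestore query; the loop itself is identical with the real SDK.
async function fetchAllPaginated(fetchPage, pageSize = 100) {
  const all = [];
  let cursor = null;
  for (;;) {
    const page = await fetchPage(cursor, pageSize);
    all.push(...page);
    if (page.length < pageSize) break; // last page reached
    cursor = page[page.length - 1].id; // next query starts after this doc
  }
  return all;
}

// In-memory stand-in: 250 fake worklogs, ordered by id.
const logs = Array.from({ length: 250 }, (_, i) => ({ id: i, hours: 8 }));
const fakeFetchPage = async (cursor, pageSize) => {
  const start = cursor === null ? 0 : cursor + 1;
  return logs.slice(start, start + pageSize);
};

fetchAllPaginated(fakeFetchPage).then(docs => {
  console.log(docs.length); // 250
});
```

Each iteration costs one round trip, which is where pagination loses to a single bulk fetch at small sizes and wins at large ones.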
Benchmark
We should figure out exactly when it no longer pays off to fetch all data at once compared to pagination.
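A minimal way to find that crossover point is to time both strategies over growing synthetic datasets. The harness below is a sketch; the in-memory timings only illustrate the shape of the measurement, and with real Firestore the two fetch functions would be replaced by actual queries:

```javascript
// Benchmark sketch: time "fetch all at once" vs. "paginate" over
// synthetic data. The timing scaffold is unchanged with real queries.
async function timeIt(label, fn) {
  const t0 = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  console.log(`${label}: ${result.length} docs in ${ms.toFixed(2)} ms`);
  return ms;
}

async function benchmark(size, pageSize = 100) {
  const logs = Array.from({ length: size }, (_, i) => ({ id: i, hours: 8 }));
  const fetchAll = async () => logs.slice();
  const fetchPaged = async () => {
    const all = [];
    for (let start = 0; start < logs.length; start += pageSize) {
      all.push(...logs.slice(start, start + pageSize)); // one "round trip" per page
    }
    return all;
  };
  await timeIt(`all-at-once (${size})`, fetchAll);
  await timeIt(`paginated   (${size})`, fetchPaged);
}

benchmark(50000);
```

Running this at 50k, 500k, etc. and plotting both series would show where the two curves cross.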
Values
With this analysis, our system will support a large number of entries, and we will be able to sell it to customers.
On Feb 21, 2020, astyltsvig changed the title to "Dilly Dally GET data optimization with BIG DATA - Pagination vs get-everything-at-once".
Materials