Currently, user login information is stored as JSON files within Google Cloud Platform (GCP) storage buckets. This works for a small amount of user login information, but thinking ahead, if the number of user logins grows, we may encounter the following challenges:
Below are three possible ideas along with their pros and cons. Regardless of the solution chosen, updates will likely need to be made to users_repository.py and users_controller.py.
Migrate user login information to a cloud-based relational database. This way, we not only have all the benefits of GCP storage buckets (e.g. data replication, secure storage), but we also have the benefits of using relational databases. Relational databases make it simple and efficient to query data (e.g. SQL) and maintain unique primary keys (e.g. usernames). Something to consider with this approach is the cost of cloud-based relational databases.
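To illustrate the relational-database benefits mentioned above, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a managed cloud database (e.g. Postgres on Cloud SQL); the table and column names are assumptions, not the project's actual schema:

```python
import sqlite3

# In-memory SQLite stands in for a cloud relational database here;
# the users schema below is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE users (
        username TEXT PRIMARY KEY,   -- uniqueness enforced by the database
        password_hash TEXT NOT NULL,
        role TEXT NOT NULL
    )"""
)
conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", "hash123", "admin"))
conn.commit()

# A lookup becomes a single indexed query instead of scanning JSON files.
row = conn.execute(
    "SELECT role FROM users WHERE username = ?", ("alice",)
).fetchone()
print(row[0])  # admin

# A duplicate username is rejected automatically by the primary key.
try:
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", "x", "viewer"))
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print("duplicate rejected:", duplicate_rejected)
```

The same schema and queries would carry over largely unchanged to whichever cloud database is chosen, which is part of the appeal of this option.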
Migrate the user login information into the trails-viz-data repo. The benefit of this approach is that anyone with access to the trails-viz-data repo can directly add users or change user info without needing to run any functions or store any database keys. Something to consider with this approach is that it is not very scalable, because GitHub repos have file-size constraints. Also, we would continue to have the same issues listed above with the GCP storage buckets.
Migrate the user login information into a local relational database on the DigitalOcean droplet (e.g. SQLite). The benefit is that we don't have to pay for an external database service, and we still get the benefits of using a relational database. Something to consider is that adding new users or changing user info may be more difficult for developers, because they will need to SSH into the DigitalOcean droplet to interact with the local database. Also, since it is a local database, we no longer have the benefits of a cloud-based database (e.g. data replication, query processing on a dedicated database server).
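For the SQLite option, the key mechanic is that the database is just a file on the droplet's disk: one process (e.g. an admin over SSH) writes it, and a later process (e.g. the web app) reopens the same file and sees the committed rows. A minimal sketch, assuming a hypothetical schema; a temp file is used so the example runs anywhere, whereas the real path on the droplet (e.g. something like /srv/.../users.db) would be a deployment choice:

```python
import os
import sqlite3
import tempfile

# Stand-in for the database file that would live on the droplet's disk.
db_path = os.path.join(tempfile.mkdtemp(), "users.db")

# First connection: create the (hypothetical) schema and add a user,
# as an admin might do over SSH with the sqlite3 CLI.
with sqlite3.connect(db_path) as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        "username TEXT PRIMARY KEY, password_hash TEXT NOT NULL)"
    )
    conn.execute("INSERT INTO users VALUES (?, ?)", ("bob", "hash456"))
# The with-block commits the transaction on success.

# Second connection: a separate open of the same file (as the web app
# would do) sees the committed row.
with sqlite3.connect(db_path) as conn:
    row = conn.execute(
        "SELECT password_hash FROM users WHERE username = ?", ("bob",)
    ).fetchone()
print(row[0])  # hash456
```

Note that this single-file model is exactly why the replication caveat above matters: backing up or replicating the data means copying the file ourselves rather than relying on a managed service.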
Slack discussion found here.