Replies: 3 comments 5 replies
-
It's rather unusual to have tables/queries with more than 128 values. It normally indicates that your database is heavily denormalized, which is an antipattern for relational databases. The usual way to go there is to normalize your database schema: for example, instead of a settings table with one column per setting, have a table with one row per setting. Increasing the number of supported columns is not an option, as that would lead to another explosion of compile times. The correct way forward here is to wait for language-level support for variadic generics.
-
If you really want super-wide tables, consider using ClickHouse, which can handle up to 10k columns: https://altinity.com/blog/too-wide-or-not-too-wide-that-is-the-clickhouse-question. You really only want that if you have to denormalize multiple massive tables (billions of rows) into one big wide table for performance reasons, which has become a lot less common these days as materialized views get you a long way. That said, if you can fix your table design, do that in the first place. In Postgres, you can use a few tricks: array types, custom types, composite types, and table inheritance.
Array types, custom types, and composite types are all well supported in Diesel. I don't know about table inheritance, but if I were to guess, Diesel most likely treats the sub-tables as ordinary tables while ignoring the super-table, as long as you don't write wrapper classes for it.
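To make the composite-type trick concrete, here is a minimal Postgres schema sketch (all table, type, and column names are hypothetical, invented for illustration): many per-setting columns collapse into a single array-of-composite column, which also keeps the Diesel-visible column count low.

```sql
-- Sketch, assuming a Postgres backend and hypothetical names.
-- One composite type models a single notification preference...
CREATE TYPE notification_pref AS (
    channel text,
    enabled boolean,
    digest  boolean
);

-- ...and the settings table stores one array of them instead of
-- dozens of individual boolean columns.
CREATE TABLE user_settings (
    user_id       bigint PRIMARY KEY REFERENCES users (id),
    notifications notification_pref[] NOT NULL DEFAULT '{}'
);
```

An upsert then writes the whole array in one statement rather than touching many columns or rows, which is the write-reduction mentioned below.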
-
You might be able to reduce the number of columns using arrays of composite types, as you are dealing mostly with configuration. See this example repo: https://github.com/marvin-hansen/bazel-diesel-postgres. This also nicely reduces the number of writes, as you only upsert one array of composite types instead of multiple records.
As you are dealing with only a few tens of thousands of rows, proper caching may do more for performance than the underlying database. Take a look at Dragonfly and check if it fits your requirements: https://www.dragonflydb.io
Diesel code is way more robust, easier to test, and better to maintain, so you are definitely on the right track.
On Wed, Aug 21, 2024 at 9:24 AM Roardom ***@***.***> wrote:
Thanks for this advice.
The wide tables in our database don't reach beyond a few tens of thousands of rows in the largest of production instances currently (so the "normalized" version would only be a few million rows), so ClickHouse probably isn't the right direction. However, a handful of user settings are fetched frequently (and cached for 30 seconds) on every HTTP request to generate each user's site layout/navbar/stylesheet.
Currently we're on MySQL, but migration to Postgres should be relatively straightforward and might be the best long-term solution, especially with the composite type support (thanks for mentioning this alternative), although our large multi-million-row tables are very critical, handle a few thousand upserts per second from our API, and might fare differently.
Being able to maintain invariants using the type system is definitely an advantage over some of the other offered alternatives. The EAV (entity-attribute-value) model, which the normalized model resembles, is usually seen as an anti-pattern, and treating all settings as strings requires a bunch of .parse().unwrap(), which is very unlike Rust. It also makes much more intuitive sense to select from the settings table the columns for the settings needed for that HTTP request, instead of selecting one row per setting needed. Am I missing something here?
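The .parse().unwrap() objection above can be illustrated with a small stdlib-only Rust sketch (function and field names are hypothetical, not from any real codebase): an EAV-style string map pushes fallible parsing to every read site, while a wide-row-style struct lets the type system carry the guarantee.

```rust
use std::collections::HashMap;

// EAV style: every value is stored as a string, so each read must
// parse and handle failure at runtime.
fn posts_per_page_eav(settings: &HashMap<String, String>) -> Option<u32> {
    settings.get("posts_per_page")?.parse().ok()
}

// Wide-row style: the column type already guarantees the value is a u32,
// so no parsing is needed after the row is loaded.
struct UserSettings {
    posts_per_page: u32,
    dark_mode: bool,
}

fn main() {
    let mut eav = HashMap::new();
    eav.insert("posts_per_page".to_string(), "25".to_string());
    assert_eq!(posts_per_page_eav(&eav), Some(25));

    // A malformed value is only caught at runtime in the EAV model.
    eav.insert("posts_per_page".to_string(), "twenty-five".to_string());
    assert_eq!(posts_per_page_eav(&eav), None);

    // The typed row cannot hold a malformed value in the first place.
    let typed = UserSettings { posts_per_page: 25, dark_mode: true };
    assert_eq!(typed.posts_per_page, 25);
    assert!(typed.dark_mode);
}
```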
-
With the 128-column limit, what workarounds do people use to tailor their database schema to Diesel's requirement?
For example, on a FOSS project I maintain, the users table itself has 46 columns, and each user has 1-to-1 relations with their general site customization settings (18 columns), their notification settings (32 columns), and their privacy settings (43 columns). While none of these tables has reached 128 columns yet, combined they are over 128 columns, they require more joins than necessary, and it's still something that remains on my mind.
Or another example: storing global site config settings in the database, one column per setting, which can be anything from site title, logo, contact email, default CSS theme, registration enable/disable, outbound email rate limit, user comment posting rate limit, default chatroom, ip/port for external services, and 3rd-party API keys, among hundreds of other site config settings.
While some may suggest putting these settings in an .env, that prevents administrators from editing them from the website admin panel, makes backups harder, and requires writing upgrade notes for any changes in the .env that could have been handled automatically via a database migration.
Another option would be using a table with one column for a key and another for the value, but that prevents e.g. adding a foreign key constraint between the default chatroom and the chatrooms table, or ensuring that a value is an int and not a string.
Another option yet is having multiple site config tables, grouped into batches of 128 settings or less.
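The key/value drawback mentioned above can be sketched in SQL (table and column names are hypothetical, chosen only to mirror the examples in this post): the key/value layout erases per-setting types and foreign keys, while a typed single-row config table keeps those constraints in the schema.

```sql
-- Sketch, assuming hypothetical table names.
-- Key/value layout: everything is text, so no FK to chatrooms and no
-- guarantee that a rate limit is an integer.
CREATE TABLE site_config_kv (
    key   text PRIMARY KEY,
    value text NOT NULL
);

-- Typed singleton-row layout: each setting is a column with its own
-- type and constraints.
CREATE TABLE site_config (
    singleton            boolean PRIMARY KEY DEFAULT true CHECK (singleton),
    site_title           text    NOT NULL,
    registration_enabled boolean NOT NULL,
    outbound_email_rate  integer NOT NULL CHECK (outbound_email_rate >= 0),
    default_chatroom_id  bigint  REFERENCES chatrooms (id)
);
```

The CHECK on the boolean primary key is one common way to guarantee the table holds at most one row.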
I'm interested in hearing what other people have done to work around this limitation.