Range function resolution fails for column (tstzrange) and parameter (tsrange) #561
Comments
@dpsenner This is by design. You should be able to configure the column type explicitly on the property. Could you give that a try and let us know how it goes? |
@dpsenner We also strongly encourage users to take a look at our plugin for NodaTime, which provides a significant improvement in date and time handling. |
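For context, wiring up the NodaTime plugin looks roughly like the following; this is a minimal sketch (the context class name and connection string are placeholders) using the Npgsql.EntityFrameworkCore.PostgreSQL.NodaTime package:

```csharp
using Microsoft.EntityFrameworkCore;

// Minimal sketch: enabling the NodaTime plugin so NodaTime types (Instant, etc.)
// are mapped to PostgreSQL date/time types.
public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseNpgsql(
            "Host=localhost;Database=app;Username=app;Password=secret", // placeholder
            o => o.UseNodaTime());
}
```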
Thanks for having a look. We attempted to change our database models to use NodaTime, but it causes the same exception:

I'm going to give your suggestion a shot tomorrow. |
I just gave the column type a shot nonetheless and it improved the situation somewhat:

builder.Property(t => t.During)
    .IsRequired()
    .HasColumnName("during")
    .HasColumnType("tstzrange");

The original was:

builder.Property(t => t.During)
    .IsRequired()
    .HasColumnName("during");

Now values can be inserted and fetched from the database, but querying with a where clause does not work:

Instant dateTimeOffsetInRange = Instant.FromDateTimeOffset(DateTimeOffset.Now);
List<SomeRecord> userTransponderRecords =
    DbContext.SomeRecords
        .Where(t => t.During.Contains(dateTimeOffsetInRange))
        .ToList();

and produces the following stacktrace:
|
@dpsenner Yes... we've seen this issue elsewhere when passing parameters that need non-default mappings. I'll see what we can do to detect/add type casts to the SQL generation phase. In the meantime, if this is a showstopper, my preferred workaround is to define a custom overloaded operator to patch the type resolution:

CREATE TYPE example_enum AS ENUM ('1', '2', '3', '4');
--
-- overloading the equality operator
--
CREATE FUNCTION example_enum_as_text(example_enum, TEXT) RETURNS BOOLEAN AS
$$ SELECT $1 :: TEXT = $2 :: TEXT; $$
LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT;
CREATE OPERATOR = (
procedure = example_enum_as_text,
leftarg = example_enum,
rightarg = TEXT,
HASHES,
MERGES);
--
-- overloading the equality operator (flipped)
--
CREATE FUNCTION example_enum_as_text(TEXT, example_enum) RETURNS BOOLEAN AS
$$ SELECT $1 :: TEXT = $2 :: TEXT; $$
LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT;
CREATE OPERATOR = (
procedure = example_enum_as_text,
leftarg = TEXT,
rightarg = example_enum,
HASHES,
MERGES);

Related |
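To illustrate how the overloaded operator is meant to be used (the table and column names here are invented for the example):

```sql
-- Hypothetical table using the enum type defined above.
CREATE TABLE example_table (id INT, example_column example_enum);

-- With the custom '=' operator in place, a TEXT-typed value or parameter can be
-- compared against the enum column without an explicit cast to example_enum.
SELECT * FROM example_table WHERE example_column = '2'::TEXT;
```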
@austindrenski correctly analyzed this: EF Core is indeed missing a way to explicitly specify the store type for cases of non-default mappings. However, with NodaTime you may be able to work around this... Instead of |
Finally, as @austindrenski wrote above, make sure you know exactly why you want to use a timezone-aware type in the first place. |
I'm going to close this as there's nothing more to do in Npgsql (although the discussion can definitely continue). |
Thanks to both of you again for looking into this. Unfortunately, @roji, neither approach works for the following query:

var instant = ...
var userTransponderRecords = DbContext
    .SomeRecords
    .Where(t => t.During.Contains(instant))
    .ToList();

@austindrenski @roji Would you please share your insights on these data types? Under what circumstances have you found the timezone-aware data types to be problematic, and in what respect are they problematic for you? Have you worked out guidelines on when to use the data types with time zone and when to use the ones without?

To me it is clear that both data types store the information as UTC in the database, and that time zones only affect how data is read from and written to the database store. We deliberately chose the timezone-aware data types because they provide automatic conversion from and into the database client's timezone. In the first version we explicitly implemented all database model properties by using

I hope your input either confirms that our findings were valid but cannot be implemented yet because the dependencies are not ready for it, or makes us reconsider this decision so that we exclusively use timestamps without time zone and implement conversions in the backend wherever a feature forces us to do so. |
@dpsenner You can find the answer and examples in npgsql/npgsql#1803. Long story short, |
@dpsenner as I wrote in #63, if you want to know what I think about using timezone-aware types (and PostgreSQL's timestamptz in particular), take a look there.

Regarding the broader question: the best practice (IMHO) is to make sure any and all timezone conversions happen at the very edge of your application, e.g. when rendering a timestamp for user display based on that user's browser's configured timezone. Any incoming timestamps should be normalized to UTC timestamps as soon as possible, and the rest of your code should simply work with the assumption that they are UTC. This isn't to say that |
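A minimal sketch of that edge-of-application approach with NodaTime; the helper class and method names are illustrative, not from the thread:

```csharp
using System;
using NodaTime;

public static class TimestampBoundary
{
    // Normalize an incoming DateTimeOffset (e.g. from an API request) to an Instant
    // as early as possible; an Instant is an unambiguous point on the UTC timeline.
    public static Instant Normalize(DateTimeOffset incoming)
        => Instant.FromDateTimeOffset(incoming);

    // Apply a timezone only at the very edge, when rendering for a specific user.
    public static ZonedDateTime ForDisplay(Instant stored, string userTimeZoneId)
        => stored.InZone(DateTimeZoneProviders.Tzdb[userTimeZoneId]);
}
```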
@austindrenski it's hard to make sweeping recommendations without taking into account any specific needs etc. But yes, my personal opinion is that in the general case, unless your application has special needs and cannot be UTC-everywhere:
Note that this is my personal take on things and I'm sure many would disagree. |
Incidentally, I came to the same conclusions. If the team lead agrees, we are going to set sail with NodaTime instants, tsrange and timestamp without time zone. Where required we will introduce additional columns for client local times and/or the timezone id of the client that created or updated that specific entity. |
@dpsenner good to hear it, not everyone agrees with my views on date/time handling :) Note that if you want to represent the client's timezone inside your database, you need a separate column in the database no matter what, since the timestamp column itself does not store the timezone. Another important point is that

@austindrenski have we actually done any change in this issue? If not, can you please remove the label and milestone? |
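As an illustration of the separate-column idea (the entity and property names are made up):

```csharp
using NodaTime;

// Illustrative entity: the moment itself is stored as a UTC Instant, while the
// client's timezone travels in its own text column, since the timestamp column
// alone cannot carry that information.
public class AuditRecord
{
    public int Id { get; set; }

    // Always UTC; mapped to a PostgreSQL timestamp column by the NodaTime plugin.
    public Instant OccurredAt { get; set; }

    // IANA timezone ID of the client that created/updated the record, e.g. "Europe/Rome".
    public string ClientTimeZoneId { get; set; }
}
```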
@roji, that's what we went through when we discussed the proposal. :-) It took a significant number of arguments until everyone understood the underlying problems, and only then were we able to come to an agreement. Unfortunately we also have the functional requirement to track the original time, in the local timezone, at which something was done. Currently we have three possible solutions for this and I would like to know what you think about them:

We have also discussed storing the offset (in hours and minutes), but we are still unsure about the implications and whether we actually should do so. The reasoning behind the options above is that we surely need a system time that every participant can agree on. But we also need a local timestamp that easily allows us to correlate "4 am in America" with "4 am in Europe" and "4 am in China", which is not possible with UTC and the timezone id alone. Daylight saving time would, for instance, shift the timestamps during summer and "4 am in Europe" would suddenly become "3 am in Europe". |
IMO this is the safest and most useful option. A UTC timestamp is a unique moment in time that is the same in every place in the world and completely independent of the time zone or of whether daylight saving time is in effect. timestamp without time zone doesn't do any magic to your timestamp, so it's the best representation for UTC timestamps. |
Thanks for the interesting discussion. I agree with what @Brar wrote above; here's some more detail from my side. Your first two options (if I understand them correctly) seem very similar - in both cases you have one column of type

But there's one more reason to prefer storing UTC. Timezones actually change from time to time: a country in GMT+2 can decide to become GMT+1 tomorrow (this is rare, but it happens). So if you store a local timestamp in the database and the timezone changes, well, you're kinda screwed :) If you store UTC and only convert to local timestamps in your program when needed, then as long as your timezone database is up to date you'll always be OK.

I frankly don't see any added value in the 3rd option - it's needless duplication (and room for error). Once you have a (UTC) timestamp and a timezone, you can always easily convert back and forth, from UTC to local (according to the stored timezone) and back.

Here I'm a bit confused. First, "4 AM in America" is a time with a timezone, not a timestamp (there is no date). Once you put a date in there, "4 AM on 1/1/2018 in EST", daylight savings no longer shifts things around, because a given date was always either in daylight savings or not... I also don't see the point of storing the offset (hours, minutes) - you're already storing the timezone (as an ID), which is much better.

To sum it up, just use UTC timestamps everywhere, all the time. If you have to know a user's preferred timezone (for display purposes or anything else), store that separately and use it to translate the UTC timestamp as required, only where required. In NodaTime terms (and I definitely recommend you use NodaTime here), you're storing an |
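A small sketch of that round trip with NodaTime (the timestamp and timezone values are illustrative):

```csharp
using System;
using NodaTime;

public static class RoundTripDemo
{
    public static void Main()
    {
        // A UTC instant as it would come back from the database, plus the stored
        // timezone ID.
        Instant stored = Instant.FromUtc(2018, 6, 6, 12, 0);
        DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/Rome"];

        // UTC -> local, according to the stored timezone (DST for that date is applied).
        ZonedDateTime local = stored.InZone(zone);

        // Local -> UTC again; the round trip recovers the original instant.
        Instant roundTripped = local.ToInstant();

        Console.WriteLine($"{local} round-trips to {roundTripped} (same as {stored})");
    }
}
```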
The reasoning behind two timestamp columns was that when a user reads that timestamp today it says "4 am", but if daylight saving rules change overnight, tomorrow he will read the same timestamp as "3 am". Having the local time stored explicitly will keep that record at "4 am" forever, which is also what the user would expect. Please correct me if I'm wrong. |
@dpsenner unless I'm totally mistaken about daylight savings (DST), every time zone has a DST definition: DST starts at a certain date, and stops at another date. When you read a UTC timestamp from the database (let's call it

Long story short, date/time conversions aren't affected by the current timestamp - if DST changes between today and tomorrow, you will see the same thing. The only thing that matters is the timestamp itself, and the timezone. |
I suggest you do a quick test to confirm the above... Write a trivial console program that uses NodaTime, and see what happens as you play around with your machine's date (in and out of DST). You shouldn't see any change in behavior in the program. |
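A sketch of what such a test could look like: it converts one timestamp that falls inside DST and one that falls outside, and the output does not depend on the machine's current date (the timezone and dates are illustrative):

```csharp
using System;
using NodaTime;

public static class DstCheck
{
    public static void Main()
    {
        DateTimeZone rome = DateTimeZoneProviders.Tzdb["Europe/Rome"];

        // One instant during DST, one outside of it; run this today or in six
        // months and the printed local times stay the same.
        Instant summer = Instant.FromUtc(2018, 7, 1, 2, 0); // CEST, UTC+2 -> 04:00 local
        Instant winter = Instant.FromUtc(2018, 1, 1, 2, 0); // CET,  UTC+1 -> 03:00 local

        Console.WriteLine(summer.InZone(rome));
        Console.WriteLine(winter.InZone(rome));
    }
}
```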
@dpsenner converting a UTC timestamp to a local timestamp is more than just adding an offset in hours. It's best to rely on tools that are backed by time zone databases, which have all the knowledge of those details and do the heavy lifting for you. |
Not to distract from the wider discussion, but while every time zone has a definition of DST, not every locale in the time zone necessarily observes it: |
To supplement what @Brar said above, NodaTime also uses the IANA timezone database, which is pretty much the standard outside of the Windows world. The advantage of NodaTime is a sane date/time type hierarchy along with the timezone database - basically everything is taken care of for you.
Bah, date/time insanity... That's why it should all be delegated to a library which actually knows about all that complexity... |
Does anyone happen to know how the timezone database is updated in the postgres database? The documentation for NodaTime makes this clear, but I can't find any resources that document where postgres takes its timezone database from. On Unix it'll use tzdata, but what does it do on Windows? Is there an API to know which timezone database version is currently in effect? I ask because I'd like to evaluate how we could synchronize the timezone databases, so that we have a plan for the day when the timezone databases used by postgres and the application disagree about how a specific timezone id works. |
At least if I'm understanding your scenario properly, you're not supposed to care about PostgreSQL at all, since you'll never be asking it to do any sort of conversions for you. This is precisely why you should avoid the timezone-aware types.

Don't involve the database in any timezone-aware logic unless you need to - I'm not saying it should never happen, just that you want to avoid it if at all possible. |
Not sure if there's anything more authoritative, but it's discussed here: https://github.com/postgres/postgres/blob/master/src/timezone/README |
You've got a point. There are, however, several database functions that return timestamptz values, and there are therefore scenarios where the query generator causes bad timestamps to be read from the database. |
Are you aware of

Edit: It is actually a good practice to set your whole PostgreSQL server to UTC (https://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-TIMEZONE) to avoid any timestamp magic. |
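For reference, pinning the timezone to UTC can be done at several levels (the database name below is a placeholder):

```sql
-- Per session:
SET timezone TO 'UTC';

-- Per database (affects new sessions):
ALTER DATABASE mydb SET timezone TO 'UTC';

-- Server-wide: set timezone = 'UTC' in postgresql.conf.
```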
To supplement @Brar's comment (which is very correct), PostgreSQL

However, as @Brar suggests, if you use |
As a last comment I wanted to share an insight that I gained while trying to find a valid way to convert a UTC timestamp into a zoned timestamp without any magic conversions happening to the result. This is what I came up with; it works regardless of the session timezone:

set timezone to 'UTC';
select
  '2018-06-06T12:00:00'::timestamp at time zone 'UTC' at time zone 'Europe/Rome',  -- works
  '2018-06-06T12:00:00Z'::timestamp at time zone 'UTC' at time zone 'Europe/Rome'; -- works

while the following is broken even though it appears to be the right syntax at first sight:

set timezone to 'UTC';
select
  '2018-06-06T12:00:00Z'::timestamp at time zone 'Europe/Rome', -- broken
  '2018-06-06T12:00:00'::timestamp at time zone 'Europe/Rome';  -- broken

Happy coding and thanks for the productive conversation we had! |
With the following database table:
The following model:
causes the following exception on insert: