Desired data transformations in analysis workflow #11
I really like bullet 3 (doing a partial replacement) as this would reduce the amount of filtering necessary. Do we always require the user to have a second file that they're merging with the main? Or can we give them the option to "type in a value" that is applied to a bunch of rows? (Aka, set all specified rows to "Active") As for the Julia wrappers - I think first we should just document the SQL/Julia way of doing things. If people have that cookbook, they can do a lot.
A file is not mandatory. For your example we need filter and set; set is something we can do now, but filter is not yet possible. So that's definitely another requirement.
Good point, I think maybe now it looks a bit confusing because you don't see all the steps one after the other, and it's also a rather advanced feature. How about I first create a few example sessions based on the test datasets, and then you can create a tutorial out of it? The tutorial can have more conceptual explanations to make it easier to follow what's happening under the hood.
I think with respect to saving the data files for later, we can come up with a decision and then build it into the workflow and documentation - explaining why we do it the way we do it. But of course taking whatever way they would do it naturally and automating it / adding meta information would be nicest to the user.
@suvayu This is bizarre:
But if I run it by line in the Julia terminal, it works fine.
I think C. should be an optional argument, so the user can choose whether to fill missing values from the original source or leave them blank (or set to 0 maybe).
I think I've seen this error before. I would like to understand it better. Could you open a bug report and provide a recipe to reproduce? Even if it's not an actual bug, it would be best to understand the behaviour.
If they don't want to fill the missing values, can't they use the replace columns transform (B)?
Sure, I'll make a bug report. Yes, that's what I mean - to me this should be an option for B.
Okay, that makes sense. I think the current implementation assumes (or maybe implements) an … I guess we should uncheck B and convert it to an issue as well :-p
Actually B should be merged with C. I'll do it
@suvayu Another situation I discussed with someone from OPERA the other day: … This might not be a problem with the way we're separating raw data/model/scenarios, but it might still be an issue. I'm not sure.
What is the difference between "updating all the tables" and "changing numbers in tables"? Does the first mean changing the schema, i.e. changing columns, what constitutes a unique row, etc.? Whereas the second would mean updating the source dataset to the newest release, changing the weather year or inflation calculation, etc.?
Yeah, it's kind of hard to relate their structure to ours. One guy is changing the structure of the data, so maybe making new rows and updating old rows, maybe duplicating a table and filling it in with new information. The other guy is updating the source data, maybe in the same tables where the structure is changing.

So the dumb method to merge them is to copy-paste the new-structure tables into the updated source database. BUT that would lose any possible updates in that same table. And as I understand it, the structural changes sort of break the data/scenario until they're complete, so it isn't really possible to do them simultaneously in the same database. (At least with their way of working.)

EXAMPLE ATTEMPT: …
Like I said, with the way we're going to work, this might not even be a problem.
Or maybe Person 1 is just adding rows, but Person 2 doesn't want those rows interfering with what he's working on as they appear over time. Basically a version control issue.
Adding columns doesn't really interfere with what person 2 is doing, but removing a column would. As for adding rows, depending on the operation it may or may not interfere; basically any kind of aggregation will break. But there's a technique where you add a date column that is your "version" (since it's a date, a common name is "asof"); then you can pin your queries to an asof date, as in the sketch below. Note that this technique assumes no data is ever removed. I also don't know what other knock-on effects it might have; e.g. it's possible normal queries become a bit more complex and you incur a small performance hit.
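A minimal sketch of the asof idea in DuckDB SQL; the table and column names (`demand`, `asset`, `value`) are just illustrative:

```sql
-- Every row records when it became valid; nothing is ever deleted.
CREATE TABLE demand (asset VARCHAR, value DOUBLE, asof DATE);

-- Person 2 pins queries to a fixed date, so rows added later
-- (with a newer asof) never interfere:
SELECT asset, value
FROM demand
WHERE asof <= DATE '2024-01-01'
-- keep only the latest version of each asset up to that date
QUALIFY row_number() OVER (PARTITION BY asset ORDER BY asof DESC) = 1;
```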
Also, another technique would be for person 1 to do their changes as logic (transformations) instead of working directly on the tables. I don't think this will cover everything.
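For instance, the changes could live in a view on top of the untouched base table (a sketch, with an invented correction factor):

```sql
-- Person 1's restructuring is expressed as logic, not as edits to demand;
-- person 2 keeps querying the base table undisturbed.
CREATE VIEW demand_restructured AS
SELECT asset, value * 1.05 AS value  -- e.g. some hypothetical correction
FROM demand;
```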
Terminology
- dataset: data with a schema (column names & types) in a local file; could be CSV, Excel, Parquet, JSON(LD), or a database
- transform: takes a dataset and updates its values, or changes its schema (renaming columns, or setting data types)
- workflow: a sequence of transforms applied to a (set of) dataset(s)
Desired input
It would be good to collect the kind of data transformations that a
user might want to do before using the final result as TEM input.
- Replace (values in) some columns
  e.g. you want to change the assets that are investable, so you create a new CSV with only the assets you want to change, and the corresponding `investable` column. This transform will create a new table where the `investable` column has the new values for the assets you want to change, and the old values for the assets you haven't specified (see the sketch after this list).
- Apply a mathematical function
  e.g. maybe you want to normalise a column
- … a tabular dataset
We can create issues from this list, and mark them "Now" or "Soon".
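As a sketch, the first two transforms could look like this in DuckDB SQL; the table, file, and column names (`assets`, `investable_changes.csv`, `capacity`) are assumptions for illustration:

```sql
-- Partial replacement: keep every column of assets, but take the new
-- investable value where the CSV specifies one, the old one otherwise.
CREATE TABLE assets_v2 AS
SELECT a.* EXCLUDE (investable),
       coalesce(n.investable, a.investable) AS investable
FROM assets a
LEFT JOIN read_csv('investable_changes.csv') n USING (asset);

-- Mathematical function: normalise a column by its maximum.
UPDATE assets_v2
SET capacity = capacity / (SELECT max(capacity) FROM assets_v2);
```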
Workflow wishlist
(Migrated from #289)
Future outlook
The idea is to provide canned Julia recipes (functions) that wrap the
complexity of some of these steps. Then a future user can simply
write a Julia script that looks somewhat like this:
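For instance (every name below is hypothetical, just to show the shape such a script could take):

```julia
using TulipaIO

# open a project database (hypothetical helper)
con = open_project("my_analysis.duckdb")

# load a source and apply canned recipes (all hypothetical)
assets = read_source(con, "assets.csv")
replace_cols!(con, assets, "investable_changes.csv")  # partial replacement
normalise!(con, assets, :capacity)                    # mathematical function

# write out the final TEM input
write_tem_input(con, "tem_input/")
```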
This is of course an overly simplified example; think of it as "Eventually".
Unsolved issues
As a user works on a project and tries out different data processing steps, they will probably create many tables in DuckDB. This will easily turn into a mess. There are two mitigations:
- use TEMPORARY tables (tables that are transient, and disappear after a session) under circumstances when we know an intermediate table isn't useful later
- …
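In DuckDB that would look something like this (table names illustrative):

```sql
-- gone when the session ends, so it can't clutter the database
CREATE TEMPORARY TABLE assets_scratch AS
SELECT * FROM assets WHERE investable;
```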
Dev setup
The transformations mentioned earlier can be in SQL or Julia, or a mix of both. So it would be good to have SQL recipes that do some of the simpler ones; for the more custom ones we can use Julia, and combine the two to get the best of both worlds.
To develop the SQL recipes, the simplest setup is to use the DuckDB CLI (`$ duckdb`) and try to work with CSV files, e.g. something like below:
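For example (file and column names illustrative):

```sql
-- inside the duckdb shell: load a CSV and try a recipe on it
CREATE TABLE assets AS SELECT * FROM read_csv('assets.csv');
SELECT asset, investable FROM assets LIMIT 5;
```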
The other option is to use the Julia client: install TulipaIO as usual and use its functions.
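A sketch using the DuckDB Julia client directly; TulipaIO's functions would build on top of this:

```julia
using DuckDB, DataFrames

# in-memory database; pass a file path instead to persist between sessions
con = DBInterface.connect(DuckDB.DB, ":memory:")
DBInterface.execute(con, "CREATE TABLE assets AS SELECT * FROM read_csv('assets.csv')")

# materialise a query result as a DataFrame for further work in Julia
df = DataFrame(DBInterface.execute(con, "SELECT * FROM assets"))
```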
This addresses the 2nd & 3rd part of our goals: doing more mathematically complex tasks in Julia, and bringing them together in SQL.
CC: @clizbe