%% proto_crudl.config
% Currently, only PostgreSQL is supported. Set these to the details and credentials of the database from which you
% will generate the Erlang CRUDL (Create, Read, Update, Delete, List/Lookup) and protobuffer code.
{database, [{provider, postgres},
{host, "localhost"},
{port, 5432},
{database, "proto_crudl"},
{username, "proto_crudl"},
{password, "proto_crudl"}]}.
% proto_crudl has several options for the code it generates. First, it can generate mapping code using either records
% or maps. Records are recommended for relatively static data structures where the columns/fields are known, while
% maps tend to be more flexible. Next, if you are using proto_crudl with gpb, the generated code will include the
% _pb.hrl files and use the records generated by gpb; otherwise it will inline the record definitions. Because
% Erlang does not have namespacing, it is possible to have collisions when outputting the db and pb files. To get
% around this, you will need to set `use_package` to true. Please note that you will also need to set this option
% in the gpb options if using gpb.
{output, [{source_path, "apps/example/src"},
{include_path, "apps/example/include"},
{use_records, true}, % will generate maps code if false, otherwise records
{use_package, true},
{use_gpb, true}, % turn this on when use_records is true to use the gpb records
{suffix, "_db"}]}.
% PLEASE NOTE: If you are using `use_gpb`, do not use the `maps` option in your rebar.config `gpb_opts` config.
% proto_crudl will generate either proto2 or proto3 files. It is important to understand that proto_crudl natively
% maps to a relational database, where NULL is a valid value and is intrinsic to relational normal form. Using
% surrogate sequences/serial values as primary keys is common, and a 0, 1, or many relationship requires that a
% foreign key value be NULL, not a default value of 0 (the scalar int default backing the serial datatype).
% proto_crudl does not translate foreign relationships into embedded messages, as this also becomes problematic:
% you can end up pushing a great deal of data to a client just to display a list of account names.
{proto, [{path, "apps/example/proto"},
{java_package, "com.example"},
{version, "proto2"}]}.
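% For example, with proto2 a nullable foreign key can simply be left unset, whereas a proto3 scalar always carries
% its default value of 0. A hypothetical sketch (not actual generated output):
%
% message Address {
%   required int64 address_id = 1;
%   optional int64 user_id = 2; // NULL in the database maps to "not set" in proto2
% }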
{generator, [
{schemas, ["public", "test_schema"]},
% proto_crudl will read all the tables in one or more schemas. When generating the Erlang code, each module will be
% written to a subdirectory named after the schema. So if your output is 'output', then your code will be written to
% 'output/public/foo_db.erl' and 'output/test_schema/foo_db.erl' respectively. Please note that proto_crudl
% supports tables with the same name in different schemas, with foreign relationships between them.
{excluded, ["public.excluded", "public.foo", "spatial_ref_sys"]},
% A list of tables to exclude from the schemas being read. NOTE that currently, if an excluded table exists in both
% schemas, it will be excluded from both.
{options, [{version_column, "version"}, indexed_lookups, check_constraints_as_enums]},
% The options include the following: {version_column, Name}, indexed_lookups, check_constraints_as_enums
% -------------------------------------------------------------------------------------------------
%
% {version_column, version}:
% Enabling record versioning is recommended. Record versions are key in a distributed system where multiple
% concurrent clients might read records and then, at varying points, may or may not update them. At issue is the
% staleness of your record at the moment you update it: in a distributed system it is very likely that another
% update beat you to it. The version is a record-scoped counter that guards against an update landing before yours.
%
% For those tables that already have a `version` column, proto_crudl will automatically generate the SQL and code
% to guard against a prior update by making sure the versions are the same. If they are not, the code will throw
% an error noting the stale version, and the client can simply re-read the record, apply the changes, and update again.
%
% If proto_crudl does not find the `version` column on each table in the database, it will automatically
% inject the version column into every table that does not already have it defined.
%
% NOTE: It is recommended that you enable record versioning. This ensures that when a message/struct that has
% embedded messages is updated, the generated code properly handles the possibility that the record in the
% database might have been changed.
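% For example, a version-guarded UPDATE would be built on SQL along these lines (an illustrative sketch, not
% actual generated output):
%
% UPDATE test_schema.user
% SET enabled = $1, version = version + 1
% WHERE user_id = $2 AND version = $3;
%
% Zero rows updated indicates that the caller held a stale version.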
%
% indexed_lookups:
% Setting this to true will result in a method being generated for any index whose name is prefixed with 'lookup_'
% or 'list_', based on whether or not you want paginated code. You should only apply this to indexes that are not
% foreign key or primary key indexes; those are handled differently by proto_crudl. For indexes that you do NOT
% want generated as accessors, do not prefix their name with the keyword.
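% For example, an index named with the 'lookup_' prefix (an illustrative sketch):
%
% CREATE UNIQUE INDEX lookup_email ON test_schema.user ( email );
%
% would result in a generated lookup accessor, while an index named 'user_email_idx' would be left alone.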
%
% check_constraints_as_enums:
% Setting this to true will result in proto_crudl evaluating any CHECK constraint on columns. If the constraint is
% an IN statement (i.e. a valid value constraint), proto_crudl will generate the protobuffer with an enum type whose
% labels will match the valid values and map accordingly to the field position of the enum.
%
% For example:
%
% CREATE TABLE "public".workflow (
% workflow_id bigint GENERATED ALWAYS AS IDENTITY NOT NULL ,
% task_no integer NOT NULL ,
% milestone_no integer NOT NULL ,
% project_id bigint NOT NULL ,
% change_id bigint NOT NULL ,
% description text NOT NULL ,
% workflow_type varchar NOT NULL ,
% CONSTRAINT pk_workflow PRIMARY KEY ( workflow_id )
% );
% ALTER TABLE "public".workflow ADD CONSTRAINT check_workflow_type CHECK ( workflow_type IN ('UNKNOWN', 'BUSINESS', 'INGEST', 'LABEL', 'TRAIN', 'INFER') );
%
% will generate the following enum in the .proto file, along with the Erlang code to map between them:
%
% enum WorkflowType {
% UNKNOWN = 0;
% BUSINESS = 1;
% INGEST = 2;
% LABEL = 3;
% TRAIN = 4;
% INFER = 5;
% }
{exclude_columns, [
    {"test_schema.user", ["pword_hash", "geog"]}
]},
% There are occasionally columns that have sensitive values, like a password hash, that you do not want as part of
% the default SELECT (which in turn means they will be absent from the generated INSERT and UPDATE functions/queries).
% NOTE: To mutate these columns, you can use a custom query or a SQL transform.
{upserts, [{"test_schema.address", {"unq_address",
                {do_update, ["notes"]}}},
           {"test_schema.user", {"lookup_email",
                {do_update, ["enabled", "my_array", "number_value", "user_type"]}}},
           {"public.product", {"lookup_sku", do_nothing}}
]},
% Generate an UPSERT on the listed tables using the named UNIQUE constraint, followed by the list of columns to
% UPDATE ON CONFLICT.
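% For example, the upsert for "test_schema.address" would be built on SQL along these lines (an illustrative
% sketch, not actual generated output; column lists elided):
%
% INSERT INTO test_schema.address (...)
% VALUES (...)
% ON CONFLICT ON CONSTRAINT unq_address
% DO UPDATE SET notes = EXCLUDED.notes;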
{extensions, [{"test_schema.user", "100 to 199"}, {"public.foo", "100 to 110"}]},
% Extend the generated protobuffer messages. This will generate each listed message with an `extensions` clause
% covering the given tag range.
{mapping, [
{"test_schema.user", [
{get_pword_hash, "SELECT pword_hash FROM test_schema.user WHERE email = $email"},
{update_pword_hash, "UPDATE test_schema.user SET pword_hash = $pword_hash WHERE email = $email AND version = $version"},
{reset_pword_hash, "UPDATE test_schema.user SET pword_hash = NULL WHERE email = $email AND version = $version"},
{disable_user, "UPDATE test_schema.user SET enabled = false WHERE email = $email"},
{enable_user, "UPDATE test_schema.user SET enabled = true WHERE email = $email"},
{delete_user_by_email, "DELETE FROM test_schema.user WHERE email = $email"},
{set_token, "UPDATE test_schema.user SET user_token = uuid_generate_v4() WHERE user_id = $user_id AND version = $version RETURNING user_token, version"},
{find_nearest, "SELECT *, ST_Y(geog::geometry) AS lat, ST_X(geog::geometry) AS lon FROM test_schema.user "
"WHERE ST_DWithin(geog, Geography(ST_MakePoint($lon, $lat)), $radius) "
"ORDER BY geog <-> ST_POINT($lon, $lat)::geography"}
]},
{"test_schema.user_product_part", [
{list_users_created_on, "SELECT u.user_id, u.created_on FROM test_schema.user u, test_schema.user_product_part upp "
"WHERE u.user_id = upp.user_id AND u.created_on = $created_on"},
{list_users_updated_on, "SELECT u.user_id, u.created_on FROM test_schema.user u, test_schema.user_product_part upp "
"WHERE u.user_id = upp.user_id AND u.updated_on = $updated_on"}
]}
]},
% Custom query mappings. Each entry will generate a function that returns a result map of column/value pairs from
% the provided query. For UPDATE and DELETE, it will return the operation's response. If you have any questions,
% you can build the example code and review the generated output.
% WARNING: Custom queries should include the versioning clause where necessary.
{transforms, [
{"test_schema.user", [
% For the select transform, we need to know the datatype of the product of the transform. This is needed for
% generating the protobufs
{select, [{"lat", "decimal", "ST_Y(geog::geometry)"},
{"lon", "decimal", "ST_X(geog::geometry)"}]},
{insert, [{"geog", "geography", "ST_POINT($lon, $lat)::geography"}]},
{update, [{"geog", "geography", "ST_POINT($lon, $lat)::geography"}]}]},
{"public.foo", [
{select, [{"foobar", "integer", "1"}]}]}
]}
% proto_crudl supports applying transformations to values as they are read from and written to the table by the
% data mapping code. A good use of this is converting a postgis geog value to lat and lng. While you can write
% custom query mappings to get the lat,lng, a transform puts the logic in the CRUD itself. Because proto_crudl
% operates through database SQL operations, it only supports transformations between columns that can be legally
% expressed in SQL. This implementation is not terribly sophisticated; it is important to understand that the
% record map for a given table is defined by its SELECT statement, which covers all columns except those
% specifically excluded. This means that when you specify a select transform whose columns do not exist in the
% table, they will be generated in the record map and returned on each read. You can then use those virtual
% columns produced by the SELECT/READ on any writes (INSERT/UPDATE). The transforms above demonstrate how to
% support lat,lon with a postgis geography column.
%
% NOTE: Any columns that are referenced in a function must be preceded by a $.
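% As an illustration, with the select transform for test_schema.user the generated default SELECT would be built
% on SQL along these lines (a sketch, not actual generated output; column lists elided):
%
% SELECT ..., ST_Y(geog::geometry) AS lat, ST_X(geog::geometry) AS lon FROM test_schema.user;
%
% and the insert transform would write the virtual lat/lon columns back with:
%
% INSERT INTO test_schema.user (..., geog) VALUES (..., ST_POINT($lon, $lat)::geography);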
]}.