- [Datalog] Repeated cardinality many attributes for the same entity. #284
- [Datalog] Apply type coercion according to schema to ensure correct storage of values. #285
- [Datalog] Correct query caching when referenced content may have changed. #288 [Thx @andersmurphy]
- [Pod] Typos in search code. #291
- [Pod] Consistent `entity` behavior in pod as in JVM. #283
- [Datalog] Allow `:offset 0`.
- [Datalog] Implement `empty` on Datom so it can be walked. #286
- [Datalog] Query functions resolve their arguments recursively. #287
- [Native] Remove `:aot` to avoid potential dependency conflict.
- [Datalog] `:offset` and `:limit` support, #126, #117 (see the example below)
- [Datalog] `:order-by` support, #116
- [Datalog] `count-datoms` function to return the number of datoms of a pattern
- [Datalog] `cardinality` function to return the number of unique values of an attribute
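A hypothetical sketch of the `:order-by`, `:offset` and `:limit` additions, assuming a connection `conn` with a `:person/name` attribute; their placement inside the query form is an assumption, so treat the exact syntax as illustrative.

```clojure
(require '[datalevin.core :as d])

;; Page 3 of results, 10 per page, ordered by name. The :order-by,
;; :offset and :limit keys are assumed to live in the query form itself;
;; consult the query documentation for the exact shape.
(d/q '[:find ?e ?name
       :where [?e :person/name ?name]
       :order-by [?name :asc]
       :offset 20
       :limit 10]
     (d/db conn))
```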
- [KV] Added version number to kv-info, in preparation for auto-migration.
- [Datalog] Cache query results by default, can use dynamic var `q/*cache?*` to turn it off.
- [Datalog] Empty results after querying empty database before transact. #269
- [Datalog] Handle multiple variables assigned to the same cardinality many attribute. #272
- [Datalog] Return maps regression. #273
- [Datalog] Regression in dealing with non-existent attributes. #274
- [KV] Change default write option to be the safest, the same as LMDB defaults, i.e. synchronous flush to disk on commit.
- [KV] Expose `sync` function to force a synchronous flush to disk, useful when non-default flags for writes are used.
- [Pod] Added `clear` to bb pod.
- [JVM] Support Java 8 in uberjar, following Clojure supported Java version.
- [Pod] Added missing arity in `get-conn` [Thx @aldebogdanov]
- [Native] compile native image with UTF-8 encoding on Arm64 Linux and Windows.
- [Platform] native image on Arm64 Linux. [Thx @aldebogdanov]
- [Benchmark] ported Join Order Benchmark (JOB) from SQL
- [Datalog] Planner: nested logic predicates.
- [Datalog] Planner: multiple predicates turned into ranges.
- [Datalog] Planner: missing ranges turned into predicates.
- [Datalog] Planner: need to first try target var to find index for `:ref` plan.
- [Datalog] Planner: fail to unify with existing vars in certain cases. #263
- [Datalog] Planner: skip initial attribute when it does not have a var.
- [Datalog] Planner: target var may be already bound for link step.
- [Datalog] Planner: missing bound var in merge scan.
- [Datalog] `like` function failed to match in certain cases.
- [Datalog] `clear` function also clears the meta DBI
- [Datalog] Planner: execute initial step if result size is small during planning, controlled by dynamic var `init-exec-size-threshold` (default 1000), above which the same number of samples is collected instead. This significantly improves subsequent join size estimation, as these initial steps hugely impact the final plan.
- [Datalog] Planner: search full plan space initially, until the number of plans considered in a step reaches `plan-space-reduction-threshold` (default 990), then greedy search is performed in later stages, as these later ones have less impact on performance. This provides a good balance between planning time and plan quality, while avoiding potential out-of-memory issues during planning.
- [Datalog] Planner: do parallel processing whenever appropriate during planning and execution (regular JVM only).
- [LMDB] Lock env when creating a read only txn to have safer concurrent reads.
- [Datalog] maintain an estimated total size and a representative sample of entity ids for each attribute, processed periodically according to `sample-processing-interval` (default 3600 seconds).
- [Datalog] reduce default `*fill-db-batch-size*` to 1 million datoms.
- [KV] throw exception when transacting `nil`, #267
- [Datalog] Planner: column attributes should be a set of equivalent attributes and variables.
- [Datalog] Planner: convert ranges back to correct predicates.
- [Datalog] Handle `like`, `in` within complex logic expressions.
- [Datalog] Optimize `not`, `and` and `or` logic functions that involve only one variable.
- [Datalog] Handle bounded entity IDs in reverse reference and value equality scans, #260
- [Datalog] Added `:result` to `explain` result map.
- [Datalog] `like` function similar to the LIKE operator in SQL: `(like input pattern)` or `(like input pattern opts)`. Match pattern accepts wildcards `%` and `_`. The `opts` map has key `:escape` that takes an escape character, default is `\!`. The pattern is compiled into a finite state machine that does non-greedy (lazy) matching, as opposed to the default in Clojure/Java regex. This function is further optimized by rewriting it into index scan range boundaries for patterns that have a non-wildcard prefix. Similarly, a `not-like` function is provided. (See the example after this list.)
- [Datalog] `in` function that is similar to the IN operator in SQL: `(in input coll)`, which is optimized as index scan boundaries. Similarly, `not-in`.
- [Datalog] `fill-db` function to bulk-load a collection of trusted datoms, and `*fill-db-batch-size*` dynamic var to control the batch size (default 4 million datoms). The same var also controls `init-db` batch size.
- `read-csv` function, a drop-in replacement for `clojure.data.csv/read-csv`. This CSV parser is about 1.5X faster and is more robust in handling quoted content.
- Same `write-csv` for completeness.
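A minimal sketch of `like` and `in` in queries, assuming a connection `conn` and a hypothetical `:person/name` attribute:

```clojure
(require '[datalevin.core :as d])

;; Names starting with "Al" (e.g. "Alice", "Alan"); the non-wildcard
;; prefix lets the engine rewrite this into an index scan range.
(d/q '[:find ?name
       :where
       [?e :person/name ?name]
       [(like ?name "Al%")]]
     (d/db conn))

;; Names within a given collection, optimized into index scan boundaries.
(d/q '[:find ?e
       :where
       [?e :person/name ?name]
       [(in ?name ["Alice" "Bob" "Carol"])]]
     (d/db conn))
```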
- [Datalog] Wrong variable may be returned when bounded variables are involved. #259
- [KV] Change default initial DB size to 1 GiB.
- [Platform] Use local LMDB library on FreeBSD if available [thx @markusalbertgraf].
- [Datalog] `min` and `max` query predicates handle all comparable data.
- [Datalog] Port applicable fixes from Datascript up to 1.7.1.
- update deps.
- [Datalog] planner generates incorrect step when bound variable is involved in certain cases.
- [Datalog] `explain` throws when zero result is determined prior to actual planning. [Thx @aldebogdanov]
- [Datalog] regression in staged entity transactions for refs, #244 [Thx @den1k]
- [Datalog] added query graph to `explain` result map.
- [Datalog] `explain` function to show query plan.
- [Platform] Embedded library for Linux on Aarch64, cross-compiled using zig.
- [KV] Broken embedded library on Windows.
- [Datalog] more robust concurrent writes on server.
- [KV] Flags are now sets instead of vectors.
- Update deps
DB Upgrade is required.
- [Datalog] Query optimizer to improve query performance, particularly for complex queries. See details. #11
- [Datalog] More space efficient storage format, leveraging LMDB's dupsort feature, resulting in about 20% space reduction and faster counting of data entries.
- [Datalog] `search-datoms` function to look up datoms without having to specify an index.
- [KV] Expose LMDB dupsort feature, i.e. B+ trees of B+ trees, #181, as the following functions that work only for dbi opened with `open-list-dbi`: `put-list-items`, `del-list-items`, `visit-list`, `get-list`, `list-count`, `key-range-list-count`, `in-list?`, `list-range`, `list-range-count`, `list-range-filter`, `list-range-first`, `list-range-some`, `list-range-keep`, `list-range-filter-count`, `visit-list-range`, `operate-list-val-range`. (See the example after this list.)
- [KV] `key-range` function that returns a range of keys only.
- [KV] `key-range-count` function that returns the number of keys in a range.
- [KV] `visit-key-range` function that visits keys in a range for side effects.
- [KV] `range-some` function that is similar to `some` for a given range.
- [KV] `range-keep` function that is similar to `keep` for a given range.
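A minimal sketch of the dupsort (list) API above, assuming an open KV store `lmdb` and hypothetical dbi and key names; treat the exact argument order as an assumption and check the API docs.

```clojure
(require '[datalevin.core :as d])

;; Open a dbi that stores a sorted list of values per key.
(d/open-list-dbi lmdb "scores")

;; Store several values under one key, then read them back.
(d/put-list-items lmdb "scores" "alice" [85 92 99] :string :long)

(d/get-list lmdb "scores" "alice" :string :long)
;;=> [85 92 99]

(d/list-count lmdb "scores" "alice" :string)
;;=> 3
```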
- [Datalog] Change VEA index back to VAE.
- [Datalog] `:eavt`, `:avet` and `:vaet` are no longer accepted as index names, use `:eav`, `:ave` and `:vae` instead. Otherwise, it's misleading, as we don't store tx id.
- [KV] Change default write setting from `:mapasync` to `:nometasync`, so that the database is more crash resilient. In case of a system crash, only the last transaction might be lost, but the database will not be corrupted. #228
- [KV] Upgrade LMDB to the latest version, now tracking the mdb.master branch, as it includes important fixes for dupsort, such as https://bugs.openldap.org/show_bug.cgi?id=9723
- [KV] `datalevin/kv-info` dbi to keep meta information about the databases, as well as information about each dbi, such as flags, key-size, etc. #184
- [KV] Functions that take a predicate have a new argument `raw-pred?` to indicate whether the predicate takes a raw KV object (default), or a pair of decoded values of k and v (more convenient).
- [Datalog] Query results are now spillable to disk. #166
- [Search] Functions in `search-utils` namespace are now compiled instead of being interpreted, to improve performance.
- Support older Clojure version.
- [Server] Recover options after automatic reconnect. #241
- [Datalog] Concurrent writes of large data values.
- [Datalog] `:closed-schema?` option to allow declared attributes only, default is `false`. [Thx @andersmurphy]
- [Datalog] ported applicable improvements from Datascript up to 1.6.3
- [Datalog] `:validate-data? true` not working for some data types. [Thx @andersmurphy]
- [Datalog] ported applicable fixes from Datascript up to 1.6.1
- bump deps
- [Datalog] Add `:db.fulltext/autoDomain` boolean property to attribute schema, default is `false`. When `true`, a search domain specific to this attribute will be created, with a domain name the same as the attribute name, e.g. "my/attribute". This enables the same `fulltext` function syntax as Datomic, i.e. `(fulltext $ :my/attribute ?search)`. (See the example after this list.)
- [Search] Add `:search-opts` option to `new-search-engine` option argument, specifying default options passed to the `search` function.
- [Datalog] Add `:db.fulltext/domains` property to attribute schema, #176
- [Datalog] Add `:search-domains` to connection option map, a map from domain names to search engine option maps.
- [Datalog] Add `:domains` option to `fulltext` built-in function option map
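A minimal sketch of the Datomic-style `fulltext` syntax enabled by `:db.fulltext/autoDomain`, using a hypothetical `:note/text` attribute; the `[[?e ?a ?v]]` binding shape of the result is an assumption.

```clojure
(require '[datalevin.core :as d])

(def conn
  (d/get-conn "/tmp/notes-db"
              {:note/text {:db/valueType :db.type/string
                           :db/fulltext  true
                           :db.fulltext/autoDomain true}}))

(d/transact! conn [{:note/text "Datalevin is a simple durable Datalog database"}])

;; Per-attribute full-text search, Datomic style; the binding shape
;; [[?e ?a ?v]] is assumed here.
(d/q '[:find ?v
       :in $ ?search
       :where [(fulltext $ :note/text ?search) [[?e ?a ?v]]]]
     (d/db conn) "datalog")
```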
- [Datalog] Removed problematic caching in pull api implementation
- [Datalog] Create search engines on-demand. #206
- deps conflict
- [Datalog] `<`, `>`, `<=`, `>=` built-in functions handle any comparable data, not just numbers.
- [Datalog] Better fix for #224 [Thx @dvingo]
- bump deps
- [All] Do not interfere with the default print-methods of regular expression, byte array and big integer. #230
- [Datalog] `:xform` in pull expression not called for `:cardinality/one` ref attributes, #224. [Thx @dvingo]
- [Datalog] `:validate-data?` does not recognize homogeneous tuple data type, #227.
- [All] BigDec decoding out of range error for some values in JVM 8. #225.
- [KV] Add JVM shutdown hook to close DB. per #228
- [Datalog] TxReport prints differently from the actual value, #223.
- [Server] re-open server search engine automatically, #229
- [Datalog] Handle refs in heterogeneous and homogeneous tuples, #218. [Thx @garret-hopper]
- [All] Remove some clojure.core redefinition warnings. [Thx @vxe]
- [Test] Fix windows tests.
- [main] Added an `--nippy` option to dump/load database in nippy binary format, which handles some data anomalies, e.g. keywords with spaces in them, non-printable data, etc., and produces a smaller dump file, #216
- [KV] More robust bigdec data type encoding on more platforms
- [All] Create a backup db directory `dtlv-re-index-<unix-timestamp>` inside the system temp directory when `re-index`, #213
- [Search] Graceful avoidance of proximity scoring when positions are not indexed
- Remove Clojure 1.11 features to accommodate older Clojure
- [Search] Consider term proximity in relevance when `:index-position?` search engine option is `true`. #203
- [Search] `:proximity-expansion` search option (default `2`) can be used to adjust the search quality vs. time trade-off: the bigger the number, the higher the quality, but the longer the search time.
- [Search] `:proximity-max-dist` search option (default `45`) can be used to control the maximal distance between terms that would still be considered as belonging to the same span.
- [Search] `create-stemming-token-filter` function to create stemmers, using the Snowball stemming library that supports many languages. #209 (See the example after this list.)
- [Search] `create-stop-words-token-filter` function to take a customized stop words predicate.
- [KV, Datalog, Search] `re-index` function that dumps and loads data with new settings. Should only be called when no other threads or programs are accessing the database. #179
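A minimal sketch of wiring a stemming token filter into a search engine, assuming an open KV store `lmdb`; the `create-analyzer` option keys and the "english" language argument are assumptions based on the `search-utils` namespace.

```clojure
(require '[datalevin.core :as d]
         '[datalevin.search-utils :as su])

;; Analyzer with an English stemmer, so "databases" can match "database".
;; A :tokenizer may also need to be supplied; option keys are assumed here.
(def analyzer
  (su/create-analyzer
    {:token-filters [(su/create-stemming-token-filter "english")]}))

(def engine (d/new-search-engine lmdb {:analyzer analyzer}))
```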
- [KV] More strict type check for transaction data, throw when transacting un-thawable data. #208
- [Main] Remove `*datalevin-data-readers*` dynamic var, use Clojure's `*data-readers*` instead.
- [Native] Rollback GraalVM to 22.3.1, as 22.3.2 is missing apple silicon.
- [Datalog] Unexpected heap growth due to caching error. #204
- [Datalog] More cases of map size reached errors during transaction. #196
- [Datalog] Existing datoms still appear in `:tx-data` when unchanged. #207
- [Datalog] Disable cache during transaction, to save memory and avoid disrupting concurrent write processes.
- [Native] upgrade GraalVM to 22.3.2
- [Lib] update deps.
- [KV] When `open-kv`, don't grow `:mapsize` when it is the same as the current size.
- [Server] automatically reopen DBs for a client that was previously removed from the server.
- [Search] `:include-text?` option to store original text. #178.
- [Search] `:texts` and `:texts+offsets` keys to `:display` option of `search` function, to return original text in search results.
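A minimal sketch of these options, assuming an open KV store `lmdb`; doc refs and texts are illustrative.

```clojure
(require '[datalevin.core :as d])

(def engine
  (d/new-search-engine lmdb {:include-text? true}))

(d/add-doc engine :doc-1 "The quick brown fox jumps over the lazy dog")
(d/add-doc engine :doc-2 "The lazy dog sleeps all day")

;; Return the stored original text alongside the doc refs.
(d/search engine "lazy dog" {:display :texts})
```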
- [Main] more robust `dump` and `load` of Datalog DB on Windows.
- [KV] `:max-readers` option to specify the maximal number of concurrent readers allowed for the db file. Default is 126.
- [KV] `:max-dbs` option to specify the maximal number of sub-databases (DBI) allowed for the db file. Default is 128. It may induce slowness if too big a number of DBIs are created, as a linear scan is used to look up a DBI.
- [Datalog] `clear` after db is resized.
- [KV] transacting data more than one order of magnitude larger than the initial map size in one transaction. #196
- [Pod] serialize TxReport to regular map. #190
- [Server] migrate old sessions that do not have `:last-active`.
- [Datalog] `datalog-index-cache-limit` function to get/set the limit of Datalog index cache. Helpful to disable cache when bulk transacting data. #195
- [Server] `:idle-timeout` option when creating the server, in ms, default is 24 hours. #122
- [Datalog] error when Clojure collections are used as lookup refs. #194
- [Datalog] correctly handle retracting then transacting the same datom in the same transaction. #192
- [Datalog] error deleting entities that were previously transacted as part of some EDN data. #191.
- [Lib] update deps.
- [KV] added tuple data type that accepts a vector of scalar values. This supports range queries, i.e. having expected ordering by first element, then second element, and so on. This is useful, for example, as path keys for indexing content inside documents. When used in keys, the same 511 bytes limitation applies.
- [Datalog] added heterogeneous tuple `:db/tupleTypes` and homogeneous tuple `:db/tupleType` types. Unlike Datomic, the number of elements in a tuple is not limited to 8, as long as they fit inside a 496 bytes buffer. In addition, instead of using `nil` to indicate minimal value like in Datomic, one can use `:db.value/sysMin` or `:db.value/sysMax` to indicate minimal or maximal values, useful for range queries. #167 (See the example after this list.)
- [Main] dynamic var `*datalevin-data-readers*` to support loading custom tag literals. (thx @respatialized)
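A minimal sketch of tuple schema declarations, with hypothetical attribute names; pairing `:db/valueType :db.type/tuple` with the tuple-type keys follows Datomic's convention and is assumed here.

```clojure
(require '[datalevin.core :as d])

(def schema
  {;; heterogeneous tuple: one declared type per position
   :point/coord   {:db/valueType  :db.type/tuple
                   :db/tupleTypes [:db.type/long :db.type/long]}
   ;; homogeneous tuple: a single element type, any length within 496 bytes
   :path/segments {:db/valueType :db.type/tuple
                   :db/tupleType :db.type/string}})

(def conn (d/get-conn "/tmp/tuple-db" schema))

(d/transact! conn [{:point/coord   [3 4]
                    :path/segments ["usr" "local" "bin"]}])
```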
- [Main] dump and load big integers.
- [Datalog] avoid unnecessary caching, improve transaction speed up to 25% for large transactions.
- [Native] upgrade Graalvm to 22.3.1
- [Native] static build on Linux. #185
- [Lib] update deps.
- [Datalog] error when large `:db/fulltext` value is added then removed in the same transaction.
- [Search] `search-utils/create-ngram-token-filter` now works. #164
- [Datalog] large datom value may throw off full-text indexing. #177
- [Datalog] intermittent `:db/fulltext` values transaction error. #177
DB Upgrade is required.
- [Search] Breaking search index storage format change. Data re-indexing is necessary.
- [Search] significant indexing speed and space usage improvement: for default setting, 5X faster bulk load speed, 2 orders of magnitude faster `remove-doc`, and 10X disk space reduction; when term positions and offsets are indexed: 3X faster bulk load and 40 percent space reduction.
- [Search] added caching for term and document index access, resulting in 5 percent query speed improvement on average, 35 percent improvement at median.
- [Search] `:index-position?` option to indicate whether to record term positions inside documents, default `false`.
- [Search] `:check-exist?` argument to `add-doc` to indicate whether to check the existence of the document in the index, default `true`. Set it to `false` when importing data to improve ingestion speed.
- [Datalog] increasing indexing time problem for `:db/fulltext` values. #151
- [Search] error when indexing huge documents.
- [KV] spillable results exception in certain cases.
- [Search] `doc-refs` function.
- [Search] `search-index-writer`, as well as related `write` and `commit` functions, for client/server, as it makes little sense to bulk load documents across the network.
- [Native] allow native compilation on apple silicon
- [Datalog] db print-method. (thx @den1k)
- [Datalog] intermittent concurrent transaction problems
- [CI] adjust CI workflow for the latest Graalvm
- [Datalog] moved entity and transaction ids from 32 bits to 64 bits integers, supporting much larger DB. #144
- [Datalog] wrapped `transact!` inside `with-transaction` to ensure ACID and improve performance
- [Native] updated to the latest GraalVM 22.3.0. #174
- [KV] `get-range` regression when results are used in `sequence`. #172
- [Datalog] Ported all applicable Datascript improvements since 0.8.13 up to now (1.4.0). Notably, added composite tuples feature, new pull implementation, many bug fixes and performance improvements. #3, #57, #168
- bump deps
- [Server] error when granting permission to a db created by `create-database` instead of by opening a connection URI
- [Datalog] avoid printing all datoms when printing a db
- [Doc] clarify that `db-name` is unique on the server. (thx @dvingo)
- avoid `(random-uuid)`, since not everyone is on Clojure 1.11 yet.
- typo that prevented build on CI
- [KV] spill test that prevents tests on MacOS CI from succeeding.
- [KV] broken deleteOnExit for temporary files
- [KV] clean up spill files
DB Upgrade is required.
- [Platform] embedded library support for Apple Silicon.
- [KV] A new range function `range-seq` that has a similar signature as `get-range`, but returns a `Seqable`, which lazily reads data items into memory in batches (controlled by the `:batch-size` option). It should be used inside `with-open` for proper cleanup. #108 (See the example after this list.)
- [KV] The existing eager range functions, `get-range` and `range-filter`, now automatically spill to disk when memory pressure is high. The results, though mutable, still implement `IPersistentVector`, so there is no API level change. The spill-to-disk behavior is controlled by the `spill-opts` option map when opening the db, allowing `:spill-threshold` and `:spill-root` options.
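A minimal sketch of `range-seq`, assuming an open KV store `lmdb` with a hypothetical "events" dbi keyed by longs; the argument shape mirrors `get-range` per the note above.

```clojure
(require '[datalevin.core :as d])

;; Lazily stream a key range in batches; with-open releases the
;; underlying resources when the Seqable is no longer needed.
(with-open [items (d/range-seq lmdb "events" [:closed 1 1000] :long)]
  (doseq [[k v] items]
    (println k v)))
```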
- [KV] write performance improvement
- [KV] Upgrade LMDB to 0.9.29
- [Client] `:client-opts` option map that is passed to the client when opening remote databases.
- [KV] `with-transaction-kv` does not drop prior data when DB is resizing.
- [Datalog] `with-transaction` does not drop prior data when DB is resizing.
- [Native] Add github action runner image ubuntu-20.04 to avoid using too new a glibc version (2.32) that does not exist on most people's machines.
- [KV] `with-transaction-kv` does not crash when DB is resizing.
- [KV] `with-transaction-kv` macro to expose explicit transactions for the KV database. This allows arbitrary code within a transaction to achieve atomicity, e.g. to implement compare-and-swap semantics, etc., #110 (See the example after this list.)
- [Datalog] `with-transaction` macro, the same as the above for the Datalog database
- [KV] `abort-transact-kv` function to rollback writes from within an explicit KV transaction.
- [Datalog] `abort-transact` function, same for Datalog transaction.
- [Pod] Missing `visit` function
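A minimal compare-and-swap sketch using `with-transaction-kv` and `abort-transact-kv`, assuming an open KV store `lmdb` with a hypothetical "counters" dbi:

```clojure
(require '[datalevin.core :as d])

(d/with-transaction-kv [kdb lmdb]
  (let [current (d/get-value kdb "counters" :visits)]
    (if (= current 41)
      ;; expected value seen: write the new value atomically
      (d/transact-kv kdb [[:put "counters" :visits 42]])
      ;; unexpected value: roll back everything done in this transaction
      (d/abort-transact-kv kdb))))
```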
- [Server] Smaller memory footprint
- bump deps
- [Datalog] `fulltext-datoms` function that returns datoms found by full text search query, #157
- [Search] Don't throw for blank search query, return `nil` instead, #158
- [Datalog] Correctly handle transacting empty string as a full text value, #159
- Datalevin is now usable in deps.edn, #98 (thx @ieugen)
- [Datalog] Caching issue introduced in 0.6.20 (thx @cgrand)
- [Pod] `entity` and `touch` functions to babashka pod; these return regular maps, as the `Entity` type does not exist in a babashka script. #148 (thx @ngrunwald)
- [Datalog] `:timeout` option to terminate on deadline for query/pull. #150 (thx @cgrand).
- [Datalog] Entity equality requires DB identity in order to better support reactive applications, #146 (thx @den1k)
- bump deps
- [Search] corner case of search in document collection containing only one term, #143
- [Datalog] entity IDs had a smaller than expected range; now they cover the full 32 bit integer range, #140
- [Datalog] Persistent `max-tx`, #142
- [Datalog] `tx-data->simulated-report` to obtain a transaction report without actually persisting the changes. (thx @TheExGenesis)
- [KV] Support `:bigint` and `:bigdec` data types, corresponding to `java.math.BigInteger` and `java.math.BigDecimal`, respectively.
- [Datalog] Support `:db.type/bigdec` and `:db.type/bigint`, correspondingly, #138.
- Better documentation so that cljdoc can build successfully. (thx @lread)
- [Datalog] Additional arity to `update-schema` to allow renaming attributes. #131
- [Search] `clear-docs` function to wipe out the search index, as it might sometimes be faster to rebuild the search index than to update individual documents. #132
- `datalevin.constants/*data-serializable-classes*` dynamic var, which can be used with `binding` if additional Java classes are to be serialized as part of the default `:data` data type. #134
- [Datalog] Allow passing option map as `:kv-opts` to underlying KV store when `create-conn`
- bump deps
- [Datalog] `clear` function on server. #133
- [Datalog] Changed `:search-engine` option map key to `:search-opts` for consistency [Breaking]
- [Search] Handle empty documents
- [Datalog] Handle safe schema migration, #1, #128
- bump deps
- [KV] visitor function for `visit` can return a special value `:datalevin/terminate-visit` to stop the visit. (See the example below.)
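A minimal sketch of an early-terminating visit, assuming an open KV store `lmdb` with a hypothetical "events" dbi; the visitor here ignores the raw KV entry and just counts visits.

```clojure
(require '[datalevin.core :as d])

(let [counter (volatile! 0)]
  (d/visit lmdb "events"
           (fn [_kv]
             (vswap! counter inc)
             ;; stop the scan once 100 entries have been visited
             (when (= @counter 100)
               :datalevin/terminate-visit))
           [:all])
  @counter)
```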
- Fixed adding created-at schema item for upgrading Datalog DB from versions prior to 0.6.4 (thx @jdf-id-au)
- [breaking] Simplified `open-dbi` signature to take an option map instead
- `:validate-data?` option for `open-dbi`, `create-conn` etc., #121
- Schema update regression. #124
- `:domain` option to `new-search-engine`, so multiple search engines can coexist in the same `dir`, each with its own domain, a string. #112
- Server failure to update max-eid regression, #123
- Added an arity to `update-schema` to allow removal of attributes if they are not associated with any datoms, #99
- Search `add-doc` error when altering existing docs
- Persistent server session that survives restarts without affecting clients, #119
- More robust server error handling
- Query cache memory leak, #118 (thx @panterarocks49)
- Entity retraction not removing `:db/updated-at` datom, #113
- `datalevin.search-utils` namespace with some utility functions to customize search, #105 (thx @ngrunwald)
- Add `visit` KV function to `core` namespace
- Handle concurrent `add-doc` for the same doc ref
- Bump deps
- Handle Datalog float data type, #88
- Allow to use all classes in Babashka pods
- Dump and load Datalog option map
- Dump and load `inter-fn`
- Dump and load regex
- Pass search engine option map to Datalog store
- Make configurable analyzer available to client/server
- Dot form Java interop regression in query, #103
- Option to pass an analyzer to search engine, #102
- `:auto-entity-time?` Datalog DB creation option, so entities can optionally have `:db/created-at` and `:db/updated-at` values added and maintained automatically by the system during transaction, #86
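A minimal sketch of enabling this option, assuming `create-conn` accepts an option map as its third argument; directory and attribute names are hypothetical.

```clojure
(require '[datalevin.core :as d])

(def conn
  (d/create-conn "/tmp/timed-db"
                 {:task/title {:db/valueType :db.type/string}}
                 {:auto-entity-time? true}))

;; :db/created-at and :db/updated-at are maintained by the system.
(d/transact! conn [{:task/title "write changelog"}])
```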
- Dependency bump
- `doc-count` function returns the number of documents in the search index
- `doc-refs` function returns a seq of `doc-ref` in the search index
- `datalevin.core/copy` function can copy Datalog database directly.
- `doc-indexed?` function
- `add-doc` can update existing doc
- `open-kv` function allows LMDB flags, #100
DB Upgrade is required.
- Built-in full-text search engine, #27
- Key-value database `visit` function to do arbitrary things upon seeing a value in a range
- [breaking] `:instant` handles dates before 1970 correctly, #94. The storage format of the `:instant` type has been changed. For existing Datalog DBs containing `:db.type/instant`, dumping as a Datalog DB using the old version of dtlv, then loading the data is required; for existing key-value DBs containing the `:instant` type, specify `:instant-pre-06` instead to read the data back in, then write them out as `:instant` to upgrade to the current format.
- Improve read performance by adding a cursor pool and switching to a more lightweight transaction pool
- Dependency bump
- Create pod client side `defpodfn` so it works in non-JVM.
- `load-edn` for dtlv, useful for e.g. loading schema from a file, #101
- Serialized writes for concurrent transactions, #83
- `defpodfn` macro to define a query function that can be used in babashka pod, #85
- Update `max-aid` after schema update (thx @den1k)
- Updated dependencies, particularly fixed sci version (thx @borkdude)
- occasional LMDB crash during multi-threaded writes or abrupt exit
- Update graalvm version
- Exception handling in copy
- Handle scalar result in remote queries
- Server asks client to reconnect if the server is restarted and client reconnects automatically when doing next request
- Bump versions of all dependencies
- More robust handling of abrupt network disconnections
- Automatically maintain the required number of open connections, #68
- Options to specify the number of connections in the client connection pool and to set the time-out for server requests
- Backport the dump/load fix from 0.5.20
- Dumping/loading Datalog store handles raw bytes correctly
- Remove client immediately when `disconnect` message is received, clean up resources afterwards, so a logically correct number of clients can be obtained in the next API call on slow machines.
- Occasional server message write corruptions in busy network traffic on Linux.
- JVM uberjar release for download
- JVM library is now Java 8 compatible, #69
- Auto switch to local transaction preparation if something is wrong with remote preparation (e.g. problem with serialization)
- Do most of transaction data preparation remotely to reduce traffic
- Handle entity serialization, fix #66
- Allow a single client to have multiple open databases at the same time
- Client does not open db implicitly, user needs to open db explicitly
- New `create-conn` should override the old, fix #65
- `DTLV_LIB_EXTRACT_DIR` environment variable to allow customization of native libraries extraction location.
- Use clj-easy/graal-build-time, in anticipation of GraalVM 22.
- More robust jar layout for `org.clojars.huahaiy/datalevin-native`
- Release artifact `org.clojars.huahaiy/datalevin-native` on clojars, for depending on Datalevin while compiling GraalVM native image. User no longer needs to manually compile Datalevin C libraries.
- Only check to refresh db cache at user facing namespaces, so internal db calls work with a consistent db view
- Replace unnecessary expensive calls such as `db/-search` or `db/-datoms` with cheaper calls to improve remote store access speed.
- documentation
- More robust build
- Wrap all LMDB flags as keywords
- Don't do AOT in library, to avoid deps error due to exclusion of graal
- Expose all LMDB flags in JVM version of kv store
DB Upgrade is required.
- Transparent networked client/server mode with role based access control. #46 and #61
- `dtlv exec` takes input from stdin when no argument is given.
- When opening db, throw exception when lacking proper file permission
- Transactable entity [Thanks @den1k, #48]
- `clear` function to clear Datalog db
- Native uses the same version of LMDB as JVM, #58
- Remove GraalVM and dtlv specific deps from JVM library jar
- Update deps
- More robust dependency management
- Replacing giant values, this requires Java 11 [#56]
- Transaction of multiple instances of bytes [#52, Thanks @den1k]
- More reflection config in dtlv
- Benchmark deps
- Correct handling of rule clauses in dtlv
- Documentation clarification that we do not support "db as a value"
- Datafy/nav for entity [Thanks @den1k]
- Some datom convenience functions, e.g. `datom-eav`, `datom-e`, etc.
- Talk to Babashka pods client in transit+json
- Exposed more functions to Babashka pod
- Native Datalevin can now work as a Babashka pod
- Compile to native on Windows and handle Windows path correctly
- `close-db` convenience function to close a Datalog db
- Compile to Java 8 bytecode instead of 11 to have wider compatibility
- Use UTF-8 throughout for character encoding
- Improve dtlv REPL (doc f) display
- Provide Datalevin C source as a zip to help compiling native Datalevin dependency
- Minor improvement on the command line tool
- Native image now bundles LMDB
- Handle list form in query properly in command line shell [#42]
- Consolidated all user facing functions to `datalevin.core`, so users don't have to understand and require different namespaces in order to use all features.
DB Upgrade is required.
- [Breaking] Removed AEV index, as it is not used in query. This reduces storage and improves write speed.
- [Breaking] Change VAE index to VEA, in preparation for new query engine. Now all indices have the same order, just rotated, so merge join is more likely.
- [Breaking] Change `open-lmdb` and `close-lmdb` to `open-kv` and `close-kv`, `lmdb/transact` to `lmdb/transact-kv`, so they are consistent, easier to remember, and distinct from functions in `datalevin.core`.
- GraalVM native image specific LMDB wrapper. This wrapper allocates buffer memory in C and uses our own C comparator instead of doing this work in Java, so it is faster.
- Native command line shell, `dtlv`
- Improve Java interop call performance
- Allow Java interop calls in where clauses, e.g. `[(.getTime ?date) ?timestamp]`, `[(.after ?date1 ?date2)]`, where the date variables are `:db.type/instant`. [#32] (See the example below.)
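A minimal sketch of the interop forms above in a full query, assuming a hypothetical `:event/date` attribute of type `:db.type/instant` and a connection `conn`:

```clojure
(require '[datalevin.core :as d])

(d/q '[:find ?e ?timestamp
       :where
       [?e :event/date ?date]
       ;; call java.util.Date#getTime on the bound instant value
       [(.getTime ?date) ?timestamp]]
     (d/db conn))
```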
- Changed default LMDB write behavior to use writable memory map and asynchronous msync, significantly improved write speed for small transactions (240X improvement for writing one datom at a time).
- Read `:db.type/instant` value as `java.util.Date`, not as `long` [#30]
- Fixed error when transacting different data types for an untyped attribute [#28, thx @den1k]
- proper exception handling in `lmdb/open-lmdb`
- Fixed schema update when reloading data from disk
- Fixed `core/get-conn` schema update
- Remove unnecessary locks in read transaction
- Improved error message and documentation for managing LMDB connection
- `core/get-conn` and `core/with-conn`
- Correctly handle `init-max-eid` for large values as well.
- Fixed regression introduced by 0.3.6, where `:ignore` value was not considered [#25]
- Add headers to key-value store keys, so that range queries work in mixed data tables
- Expose all data types to key-value store API [#24]
- thaw error for large values of `:data` type. [#23]
- portable temporary directory. [#20, thx @joinr]
- Properly initialize max-eid in `core/empty-db`
- Add value type for `:db/ident` in implicit schema
- [Breaking] Change argument order of `core/create-conn`, `db/empty-db` etc., and put `dir` in front, since it is more likely to be specified than `schema` in real use, so users don't have to put `nil` for `schema`.
- correct `core/update-schema`
- correctly handle `false` value as `:data`
- always clear buffer before putting data in
- thaw exception when fetching large values
- clearer error messages for byte buffer overflow
- correct schema update
- `core/schema` and `core/update-schema`
- `core/closed?`
- `db/entid` allows 0 as eid
- fix test
- correct results when there are more than 8 clauses
- correct query result size
- automatically re-order simple where clauses according to the sizes of result sets
- change system dbi names to avoid potential collisions
- missing function keywords in cache keys
- hash-join optimization submitted PR #362 to Datascript
- caching DB query results, significant query speed improvement
- fix invalid reuse of reader lock table slot #7
- remove MDB_NOTLS flag to gain significant small writes speed
- update existing schema instead of creating new ones
- Reset transaction after getting entries
- Only use 24 reader slots
- avoid locking primitive #5
- create all parent directories if necessary
- long out of range error during native compile
- apply query/join-tuples optimization
- use array get whenever we can in query, saw significant improvement in some queries.
- use `db/-first` instead of `(first (db/-datoms ..))`, and `db/-populated?` instead of `(not-empty (db/-datoms ..))`, as they do not realize the results and hence are faster.
- storage test improvements
- use only half of the reader slots, so other processes may read
- add an arity for `bits/read-buffer` and `bits/put-buffer`
- add `lmdb/closed?`, `lmdb/clear-dbi`, and `lmdb/drop-dbi`
- code samples
- API doc
- `core/close`
- Port datascript 0.18.13