Added db macro #388
Conversation
- implemented db macro for KeyGenDb
- This reverts commit 4c7aa27.
The idea LGTM :) As for the PR itself,
This definitely should remove a lot of boilerplate though, so it's much appreciated ^_^ Thank you.
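For context, here is a minimal sketch of the kind of boilerplate-removing macro under discussion. The names and shapes here (`create_db!`, `Get`, `DbTxn`, the key layout) are illustrative assumptions, not the PR's confirmed API: each entry expands to a unit struct with typed get/set helpers over a byte-oriented key-value store, replacing the key construction and (de)serialization each hand-written `*Db` type otherwise repeats.

```rust
// Minimal stand-ins for the processor's DB abstraction (assumed shape).
pub trait Get {
  fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}
pub trait DbTxn: Get {
  fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
}

// Each `$value` must implement serde's Serialize/Deserialize for bincode.
macro_rules! create_db {
  ($db:ident { $($entry:ident: $value:ty),* $(,)? }) => {
    $(
      pub struct $entry;
      impl $entry {
        // Namespace every key by DB and entry name to avoid collisions.
        fn db_key(key: &[u8]) -> Vec<u8> {
          [stringify!($db).as_bytes(), stringify!($entry).as_bytes(), key].concat()
        }
        pub fn set(txn: &mut impl DbTxn, key: &[u8], value: &$value) {
          txn.put(Self::db_key(key), bincode::serialize(value).unwrap());
        }
        pub fn get(getter: &impl Get, key: &[u8]) -> Option<$value> {
          getter.get(&Self::db_key(key)).map(|bytes| bincode::deserialize(&bytes).unwrap())
        }
      }
    )*
  };
}
```

Namespacing keys by DB and entry name is one way such a macro can avoid key collisions without any per-type code.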
Also, what was the issue with fmt?
For some reason, more tests failed after we had formatted the code.
The two actual tests failing here aren't your fault. One is #351; the other was intermittent a week ago, as it assumes network-latency properties GH's CI likes to prove invalid. clippy and fmt are on you though. I'll look at the other failed runs from the prior commits...
I see, thank you for the heads up. econsta and I are new to OSS, so I was meaning to ask: is making a PR the proper protocol here, or is there some other means by which we should show you our progress on an issue?
The fmt commit had the same issue as #351, just with a different error 0_o I'll update the meta for it. Making a PR, and pushing updates to it, is definitely the best way to go! Feel free to chime in on Discord as well if you prefer a quicker response. Sorry the inconsistent tests threw you off. Since you're editing the DB code, every service which uses a DB has all of its tests run. The larger test suites run 10+ Docker containers at once, and I'm still working on resolving their stability, as shown here.
- used db macro for processor/db.rs
- updated use of db macro in key_gen.rs
- used db macro in main.rs
- fixed clippy errors
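Continuing the hypothetical `create_db!` sketch above, call sites along the lines of these commits might look as follows; the entry names and value types are illustrative, not the PR's actual definitions.

```rust
use std::collections::HashMap;

// Declare the KeyGenDb entries once (illustrative names/types).
create_db!(KeyGenDb {
  ParamsDb: Vec<u8>,
  CommitmentsDb: HashMap<u16, Vec<u8>>,
});

// key_gen.rs-style call sites then shrink to one-liners.
fn store_params(txn: &mut impl DbTxn, key_id: &[u8], params: &Vec<u8>) {
  ParamsDb::set(txn, key_id, params);
}
fn read_params(getter: &impl Get, key_id: &[u8]) -> Option<Vec<u8>> {
  ParamsDb::get(getter, key_id)
}
```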
What was the reason for closing this?
Here is our implementation of the db macro for review, as well as a modified key_gen.rs file that uses it. We are using bincode instead of SCALE right now, as we ran into two problems when we tried to use SCALE. First, HashMap is not compatible with SCALE, but it is used in Commitments. Second, even when we made a workaround for this, we ran into an `overflow evaluating the requirement` problem that we could not find a workaround for. Here is an issue we think might be related, as it has the exact same error message we got: paritytech/scale-info#65. Let us know what you would like us to do moving forward; if you think this is sufficient, we can start implementing it in other places in the code base.
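To make the HashMap problem concrete: parity-scale-codec provides Encode/Decode for BTreeMap but not for HashMap (whose iteration order is nondeterministic, so it has no canonical encoding), while bincode handles HashMap through serde. A minimal sketch, assuming bincode 1.x and serde as dependencies:

```rust
use std::collections::HashMap;

fn main() {
  let mut commitments: HashMap<u16, Vec<u8>> = HashMap::new();
  commitments.insert(0, vec![0xde, 0xad]);

  // Compiles and round-trips: HashMap implements serde's traits.
  let bytes = bincode::serialize(&commitments).unwrap();
  let decoded: HashMap<u16, Vec<u8>> = bincode::deserialize(&bytes).unwrap();
  assert_eq!(commitments, decoded);

  // Would not compile: HashMap lacks a parity_scale_codec::Encode impl
  // (BTreeMap is the SCALE-supported map type).
  // let scale_bytes = parity_scale_codec::Encode::encode(&commitments);
}
```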