dev: using Array<u32> as buffer in encode trait #300

Draft · wants to merge 1 commit into base: main
Conversation

PavitraAgarwal21 (Contributor)

@PavitraAgarwal21 (Contributor, Author)

Hey @maciejka, could you take a look at the changes in the encode traits? I'd like to confirm whether the core logic seems solid to you. If it looks good, I'll start updating the tests and addressing any minor errors. Thanks for your help!

encode_compact_size(self.len(), ref dest);
dest.append(self);
for i in 0..self.len() {
    dest.append(self[i].into());
}
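For reference, the snippet above starts with a CompactSize prefix. A hedged Rust sketch of Bitcoin's CompactSize (varint) encoding into a plain byte buffer; the function name mirrors the Cairo call above, but this is an illustration, not the PR's actual implementation:

```rust
// Bitcoin CompactSize: 1 byte for values <= 0xFC, otherwise a marker
// byte (0xFD/0xFE/0xFF) followed by the value in little-endian.
fn encode_compact_size(n: u64, dest: &mut Vec<u8>) {
    match n {
        0..=0xFC => dest.push(n as u8),
        0xFD..=0xFFFF => {
            dest.push(0xFD);
            dest.extend_from_slice(&(n as u16).to_le_bytes());
        }
        0x1_0000..=0xFFFF_FFFF => {
            dest.push(0xFE);
            dest.extend_from_slice(&(n as u32).to_le_bytes());
        }
        _ => {
            dest.push(0xFF);
            dest.extend_from_slice(&n.to_le_bytes());
        }
    }
}
```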
Collaborator:

You have to split the byte array into 4-byte chunks and reinterpret them as u32 values (big endian).
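An illustrative sketch (in Rust, not Cairo) of that suggestion: split a byte array into 4-byte chunks and reinterpret each chunk as a big-endian u32. It assumes the length is a multiple of 4; the remainder case is exactly the alignment problem raised in the next comment.

```rust
// Reinterpret a byte slice as big-endian u32 words.
// chunks_exact(4) silently drops any trailing 1..=3 bytes,
// which is why non-aligned lengths need separate handling.
fn bytes_to_u32_words_be(bytes: &[u8]) -> Vec<u32> {
    bytes
        .chunks_exact(4)
        .map(|c| u32::from_be_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}
```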

Collaborator:

@maciejka this will probably work only for P2TR and P2SH scripts, whose length is divisible by 4.
Other script types would introduce shifts, and we won't be able to use this encoding.

fn encode_to(self: @u32, ref dest: ByteArray) {
    dest.append_word_rev((*self).into(), 4);
}

fn encode_to(self: @u32, ref dest: Array<u32>) {
    let byte1: u32 = *self % 256;
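The `% 256` line above begins a mod/div byte decomposition of a u32. A hedged Rust sketch of where that approach leads, extracting the four bytes in little-endian order (least significant byte first, matching what `append_word_rev` produces in the ByteArray version); names are illustrative:

```rust
// Decompose a u32 into its four bytes using only % and /,
// yielding little-endian order: byte 0 is the least significant.
fn u32_to_bytes_le(x: u32) -> [u8; 4] {
    let b0 = (x % 256) as u8;
    let b1 = ((x / 256) % 256) as u8;
    let b2 = ((x / 65_536) % 256) as u8;
    let b3 = (x / 16_777_216) as u8;
    [b0, b1, b2, b3]
}
```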
Collaborator:

Here it's just dest.append(self).

@m-kus (Collaborator) commented Nov 12, 2024

@maciejka I think this would not work because of compact-size encoding and the pubkey script (and also segwit flags).
Maybe we can instead use a more efficient ByteArray alternative?

struct WordArray {
    buffer: Array<u32>,
    pending_word_len: usize
}

In case the pending word is partially filled, we apply an inefficient algorithm involving mult/div; if we are lucky and the pending word length is 0 and our item happens to be a u32, then we just append.
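A Rust sketch of that idea, under the assumption that the pending word tracks 0..=3 buffered bytes (field and method names here are illustrative, not the proposed Cairo API):

```rust
// A u32-word buffer with a partially filled pending word.
struct WordArray {
    buffer: Vec<u32>,
    pending_word: u32,       // bytes accumulated so far, big-endian
    pending_word_len: usize, // number of bytes in pending_word, 0..=3
}

impl WordArray {
    fn new() -> Self {
        WordArray { buffer: Vec::new(), pending_word: 0, pending_word_len: 0 }
    }

    // Fast path: if we are word-aligned, a u32 append is a single push.
    // Otherwise fall back to per-byte appends (the mult/div path).
    fn append_u32(&mut self, word: u32) {
        if self.pending_word_len == 0 {
            self.buffer.push(word);
        } else {
            for b in word.to_be_bytes() {
                self.append_byte(b);
            }
        }
    }

    // Slow path: shift the byte into the pending word (multiply by 256),
    // flushing to the buffer whenever a full 4-byte word accumulates.
    fn append_byte(&mut self, byte: u8) {
        self.pending_word = self.pending_word * 256 + byte as u32;
        self.pending_word_len += 1;
        if self.pending_word_len == 4 {
            self.buffer.push(self.pending_word);
            self.pending_word = 0;
            self.pending_word_len = 0;
        }
    }
}
```

The benchmark question below is then whether the fast path is hit often enough to beat ByteArray's 31-byte felt packing.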

Will need to benchmark this to make sure we are actually gaining something.

@maciejka (Collaborator)

Isn't ByteArray doing the same but with 8 bits of granularity? What we save here is that we don't need to deserialize from an array of felts to an array of u32 before hashing.

@PavitraAgarwal21 (Contributor, Author)

Hey @m-kus @maciejka, which idea should we move forward with?

@m-kus (Collaborator) commented Nov 13, 2024

> Isn't ByteArray doing the same but with 8 bits of granularity? What we save here is that we don't need to deserialize from an array of felts to an array of u32 before hashing.

Yes, ByteArray packs data into 31-byte chunks which we later convert to an array of u32; here we use 4-byte chunks from the beginning.

@maciejka (Collaborator)


@PavitraAgarwal21 we are busy at Devcon. Sorry for the delay. I will have a look today.

@maciejka (Collaborator) left a comment

You need to take into account the fact that you might need to encode data whose length in bytes is not divisible by 4. I already suggested a solution in the issue description: #298 (comment). Sorry for the delayed review. We are busy at Devcon.
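To make the review point concrete, here is a hedged Rust sketch (not the solution referenced in #298) of splitting a byte string whose length is not divisible by 4: full chunks become big-endian u32 words, and the trailing 1..=3 bytes remain a partially filled pending word, as in the WordArray struct above. Names are illustrative.

```rust
// Split bytes into (full big-endian u32 words, pending word, pending len).
// The pending word holds the trailing 0..=3 bytes that don't fill a word.
fn split_into_words(bytes: &[u8]) -> (Vec<u32>, u32, usize) {
    let full = bytes.len() / 4 * 4;
    let words = bytes[..full]
        .chunks_exact(4)
        .map(|c| u32::from_be_bytes([c[0], c[1], c[2], c[3]]))
        .collect();
    let mut pending: u32 = 0;
    for &b in &bytes[full..] {
        pending = pending * 256 + b as u32;
    }
    (words, pending, bytes.len() - full)
}
```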

Successfully merging this pull request may close these issues:

[feat] Optimize the way we serialize transactions