Max records for a batch_write call is less than server's default #425
The default server `batch-max-requests` size is 5k, but this client's max appears to be 4096, set here:
https://github.com/aerospike/aerospike-client-python/blob/master/src/include/pool.h#L12

When I attempt to construct a batch larger than 4096 keys/records, I get 'Cannot allocate as_bytes'.

Am I creating the batch incorrectly? Is there a way to get around the limitation without limiting batch_write() to 4096 records?

Comments

We are aware of this issue and have created a Jira ticket for it.

This is currently being worked on here: #548

Is there any update on the status of this PR?

Hi all, the PR is finished, and I will review it ASAP. Once the PR is merged, I can publish a pre-release build to PyPI to provide an early build with this change, but it will not be tested internally by our quality engineering team. Is this okay?

A pre-release is fine for now! Is there a ballpark for how long it would take to complete internal testing? Thank you!

@anshulkamath @DomPeliniAerospike and I are going to work on it today, but we don't have an ETA yet because there's a chance we might have to make major changes to the PR. We can update you again by tomorrow.

Ah, makes sense. Thanks!

Our internal team has been working on this issue and it is still in progress.
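Until the client-side cap is lifted, one possible interim workaround is to split the batch on the application side into slices of at most 4096 records and issue one `batch_write()` call per slice. The sketch below is a generic chunking helper, not code from this repository; the commented-out `client.batch_write(...)` usage is illustrative only and assumes the names `client` and `BatchRecords` from the caller's own setup.

```python
# Client-side chunking sketch: stay under the 4096-record limit reported
# in this issue by splitting one logical batch into several calls.
# Assumption: the records in a batch are independent, so splitting the
# batch does not change the outcome (each slice succeeds or fails on its own).

MAX_BATCH = 4096  # client-side limit observed in this issue


def chunked(items, size=MAX_BATCH):
    """Yield successive slices of `items`, each with at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


# Hypothetical usage with the Python client (names are assumptions):
# for chunk in chunked(all_batch_records):
#     client.batch_write(BatchRecords(chunk))

if __name__ == "__main__":
    sizes = [len(c) for c in chunked(list(range(10000)))]
    print(sizes)  # 10000 records split as 4096 + 4096 + 1808
```

Note that this trades one round trip for several, and error handling becomes per-slice rather than per-batch, so it is a stopgap rather than a substitute for raising the limit in the client.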