fix: parallel set state #301
Conversation
nicklasl
left a comment
Thanks 👍
Oh... actually... During runtime, would it not be good to have it sync?

But I guess then we would need to take the instance out of the rotation when updating the state?
Force-pushed 5502eee to 928b281
@nicklasl It is sync, but it's parallel: we wait for all of the parallel state sets. In each rotation we create a new wasm module, so that's the rotation. It happens in SwapWasmResolverApi.
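A rough sketch of that rotation, only to illustrate the point: SwapWasmResolverApi is the real class from this PR, but everything below (the field names, the rotate method, the parallel set) is an assumption, not the actual implementation.

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative only: a holder that swaps in freshly created wasm instances
    // once all of the parallel setState calls have completed.
    final class SwapSketch {
        interface WasmResolverApi { void setState(byte[] state); }

        private final AtomicReference<List<WasmResolverApi>> rotation =
            new AtomicReference<>(List.of());

        // Synchronous from the caller's point of view, parallel inside:
        // every fresh instance gets the new state, and only after all of them
        // finish does the new rotation become visible to resolvers.
        void rotate(byte[] state, List<WasmResolverApi> freshInstances) {
            CompletableFuture.allOf(
                    freshInstances.stream()
                            .map(api -> CompletableFuture.runAsync(() -> api.setState(state)))
                            .toArray(CompletableFuture[]::new))
                .join();
            rotation.set(List.copyOf(freshInstances));
        }
    }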
Force-pushed 15ee5de to e572e6a
    }

    public void flushLogs() {
        logResolveLock.lock();
Will this lock ever be contended? The idea is that it shouldn't be, right? Because we use different instances for different threads (although I think the thread-local thing is fishy, since we can very well have more than the number of CPU threads calling us and some will be mapped to the same instance). But if this lock can be contended, it would be good to know where that can happen. If we don't think it can be contended, maybe it's better to do a tryLock and throw, or at least log, otherwise?
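If the lock really is expected to be uncontended, the tryLock variant suggested here would look roughly like this (sketch only; doFlushLogs is a hypothetical helper standing in for the current method body):

    public void flushLogs() {
        if (!logResolveLock.tryLock()) {
            // contention is unexpected on this path, so surface it instead of silently blocking
            throw new IllegalStateException("logResolveLock unexpectedly contended in flushLogs");
        }
        try {
            doFlushLogs(); // hypothetical helper wrapping the existing flush body
        } finally {
            logResolveLock.unlock();
        }
    }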
I later saw that we are depending on this lock.
        final var request = consumeResponse(respPtr, WriteFlagLogsRequest::parseFrom);
        final var ignore = writeFlagLogs.write(request);
        isFlushed = true;
        logResolveLock.unlock();
We should have try/finally around all lock acquires.
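For the flush path quoted here, that pattern would look roughly like this (a sketch reusing the identifiers visible in the hunks, not the committed code; reqPtr is assumed to be prepared earlier in the method):

    public void flushLogs() {
        logResolveLock.lock();
        try {
            final var respPtr = (int) wasmMsgFlushLogs.apply(reqPtr)[0];
            final var request = consumeResponse(respPtr, WriteFlagLogsRequest::parseFrom);
            writeFlagLogs.write(request);
            isFlushed = true;
        } finally {
            // runs even if the wasm call or protobuf parsing throws
            logResolveLock.unlock();
        }
    }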
        final var respPtr = (int) wasmMsgFlushLogs.apply(reqPtr)[0];
        final var request = consumeResponse(respPtr, WriteFlagLogsRequest::parseFrom);
        final var ignore = writeFlagLogs.write(request);
        isFlushed = true;
It's a bit strange to me that we make this, the lowest root resolver, flush only once?
        logResolveLock.unlock();
        return consumeResponse(respPtr, ResolveWithStickyResponse::parseFrom);
Shouldn't consumeResponse be inside the lock? It will call free, right? Also add try/finally, which in this case will make it easier to return inside the try block and still unlock in the finally.
        final int reqPtr = transferRequest(request);
        final int respPtr = (int) wasmMsgGuestResolve.apply(reqPtr)[0];
        logResolveLock.unlock();
        return consumeResponse(respPtr, ResolveFlagsResponse::parseFrom);
Same here. Consume should be inside the lock.
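Concretely, for both resolve paths the suggestion is roughly the following (sketch, reusing the identifiers from the hunk above; the same shape applies to the sticky variant):

    logResolveLock.lock();
    try {
        final int reqPtr = transferRequest(request);
        final int respPtr = (int) wasmMsgGuestResolve.apply(reqPtr)[0];
        // consumeResponse presumably frees the wasm-side memory, so it has to run
        // while the lock is held; returning inside try still releases the lock via finally
        return consumeResponse(respPtr, ResolveFlagsResponse::parseFrom);
    } finally {
        logResolveLock.unlock();
    }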
        final var instance = wasmResolverApiRef.get();
        final ResolveWithStickyResponse response;
        try {
            response = instance.resolveWithSticky(request);
Ok. So I guess here we might wait for an ongoing flush and then throw to retry, so we are using the lock inside WasmResolveApi, but I guess that's fine unless we wait for a whole setState.
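The retry idea mentioned here, roughly sketched as a method-body fragment (the isClosed check and the exception type are assumptions based on the commit messages below, not the actual code):

    // Illustrative retry loop: if the instance was closed by a concurrent
    // setState/rotation, re-read the reference and resolve again.
    while (true) {
        final var instance = wasmResolverApiRef.get();
        if (instance.isClosed()) {          // assumed check, per the commits below
            continue;                       // a rotation just swapped the reference; re-read it
        }
        try {
            return instance.resolveWithSticky(request);
        } catch (IllegalStateException closedMidCall) { // assumed exception type
            // the instance was closed between the check and the call; loop and retry
        }
    }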
…ion, change to isClosed
…dException, change to isClosed
No description provided.