[Feature] Initial pass at removing qs logging in view of the PlayerEventLogging System #4542
base: master
Conversation
Add retention for etl tables
Rename tables/fields to etl nomenclature
Combine database work into one atomic load
Testing passed, though it appears the event itself has a few bugs. Will fix them in another commit.
Cleanup and Formatting
PlayerEvent::Trade and PlayerEvent::Speech (new event to mirror the functionality of qs_speech)
PlayerEvent::KilledNPC, KilledNamedNPC, and KilledRaidNPC
Add etl for PlayerEvent::AA_purchase
Amazing work!
Some things we need to follow up on
- All QS table code needs to be removed, period, both in world and in QS ingestion. We will provide an optional transformer for anyone who wants to migrate old tables to the new data.
- We will provide a data transformer from old QS tables to new. I can take this on.
- Player event processing code needs to be on its own thread with its own database connection; I can also take this on.
- We need a command in world that exposes the ETL settings in JSON format so it can feed into Spire admin. Similar to ./bin/world database:schema
- As you see in the PR comments, most of the vectors can be reserved, which helps substantially by bulk-allocating memory once instead of allocating on each event. We know the size of all of the data coming in, so we just need to reserve; even if it means doing another loop prior to fetching each event, call reserve in a switch.
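The "reserve in a switch" idea can be sketched roughly as follows. The `EventType`, `Event`, and `EtlQueues` definitions here are illustrative stand-ins, not the actual EQEmu types:

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-ins for the real event and queue types.
enum class EventType { Trade, Speech, NpcHandin };

struct Event {
    EventType type;
};

struct EtlQueues {
    std::vector<int> trade_entries;
    std::vector<int> speech_entries;
    std::vector<int> npc_handin_entries;
};

// One counting pass over the batch, then a single reserve per queue,
// so each vector allocates once instead of growing on every push_back.
void ReserveEtlQueues(const std::vector<Event>& batch, EtlQueues& queues) {
    std::size_t trades = 0, speeches = 0, handins = 0;
    for (const auto& e : batch) {
        switch (e.type) {
            case EventType::Trade:     ++trades;   break;
            case EventType::Speech:    ++speeches; break;
            case EventType::NpcHandin: ++handins;  break;
        }
    }
    queues.trade_entries.reserve(trades);
    queues.speech_entries.reserve(speeches);
    queues.npc_handin_entries.reserve(handins);
}
```

The extra counting pass is O(n) over the batch, which is cheap compared to the repeated reallocations it avoids at higher event volume.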
common/events/player_event_logs.cpp (outdated)
	out_entries.augment_5_id = augments[4];
	out_entries.augment_6_id = augments[5];
}
etl_queues.npc_handin_entries.push_back(out_entries);
Since we know the size of etl_queues.npc_handin_entries from in.handin_items, we can do a reserve before the loop to pre-allocate memory.
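A minimal sketch of that reserve-before-the-loop pattern, with hypothetical item/entry types loosely mirroring the PR's `in.handin_items` and `etl_queues.npc_handin_entries`:

```cpp
#include <vector>

// Hypothetical shapes for the handin payload; not the real structs.
struct HandinItem {
    int item_id;
};

struct NpcHandinEntry {
    int item_id;
};

// Reserve from the known input size before the fill loop so the
// vector performs one allocation instead of repeated growth.
std::vector<NpcHandinEntry> BuildHandinEntries(const std::vector<HandinItem>& handin_items) {
    std::vector<NpcHandinEntry> entries;
    entries.reserve(handin_items.size());  // pre-allocate once
    for (const auto& item : handin_items) {
        entries.push_back(NpcHandinEntry{item.item_id});
    }
    return entries;
}
```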
While we know the 'entries' sizes, there could be other events of the same type in the batch queue, so we would need to cycle through the batch_queue to total the sizes across the potentially multiple events. Would this be beneficial, compared to the dynamic size adjustments as the vectors are added to? If so, happy to do that.
Same comment for all the reserve items.
Yeah, it makes a huge difference to pre-allocate, especially with higher event volume.
Added reserve to all etl_queues.
Thank you sir
@@ -123,21 +134,466 @@ void PlayerEventLogs::ProcessBatchQueue()

	BenchTimer benchmark;

	EtlQueues etl_queues{};
Looking at all of this in ProcessBatchQueue and in our deletions, I think we should wrap these functions in thread calls so we don't tie up the main thread.
This also means we should create a separate database connection just for processing player events, because even if we ran all of this on another thread, it is going to be lock-contended with database locks from the main thread.
World needs to be kept as lightweight as possible on the main thread, and this certainly adds weight if anything starts to take more than 500ms, which it most certainly could here.
Same would go for using QS if folks use QS to process.
I may end up taking this one on.
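The shape of that suggestion, worker thread plus a dedicated connection, might look like the sketch below. `DbConnection` and `ProcessPlayerEventBatch` are made-up stand-ins for the real world/QS types, and the real code would keep a long-lived worker rather than spawning per batch:

```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical stand-in for a database connection.
struct DbConnection {
    int rows_written = 0;
};

// In the real server this would run the bulk ETL INSERTs on a
// connection owned exclusively by the worker thread, so the main
// thread's connection is never blocked on these writes.
void ProcessPlayerEventBatch(DbConnection& db, const std::vector<int>& batch) {
    db.rows_written += static_cast<int>(batch.size());
}

// Offload a batch to a worker thread and wait for it to finish.
int ProcessOnWorkerThread(const std::vector<int>& batch) {
    DbConnection worker_db;  // dedicated connection for this thread
    std::thread worker(ProcessPlayerEventBatch, std::ref(worker_db), std::cref(batch));
    worker.join();
    return worker_db.rows_written;
}
```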
Correct a failed test case: improper next id for etl tables when the table is first created.
{
	auto results = db.QueryDatabase(
		fmt::format(
			"SELECT AUTO_INCREMENT FROM information_schema.tables WHERE TABLE_NAME = '{}';",
We should assume that the server user can't directly query information_schema. Is there a reason we leaned into this versus max + 1?
Yes. I ran into a couple of test cases that failed. If I empty an etl table, max + 1 will return 1 when in fact the auto_increment may be 100. Further, once the retention period is hit, the next id via max + 1 will again differ from the auto_increment.
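A toy model (not the real schema) makes the divergence concrete: the engine's AUTO_INCREMENT counter survives deletes, while MAX(id) does not.

```cpp
#include <cstdint>
#include <set>

// Toy in-memory table demonstrating why MAX(id) + 1 diverges from
// AUTO_INCREMENT after a delete (e.g. a retention purge).
struct ToyTable {
    std::set<std::int64_t> ids;
    std::int64_t auto_increment = 1;  // next id the engine would hand out

    std::int64_t Insert() {
        std::int64_t id = auto_increment++;
        ids.insert(id);
        return id;
    }

    // Retention purge: rows go away, the counter does not reset.
    void DeleteAll() { ids.clear(); }

    std::int64_t MaxPlusOne() const {
        return ids.empty() ? 1 : *ids.rbegin() + 1;
    }
};
```

After inserting 99 rows and purging them all, `MaxPlusOne()` restarts at 1 while the engine counter is already at 100, which is exactly the mismatch described above.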
@@ -441,7 +441,7 @@ int main(int argc, char **argv)
	}

	if (player_event_process_timer.Check()) {
-		player_event_logs.Process();
+		std::jthread event_thread(&PlayerEventLogs::Process, &player_event_logs);
There's also a spot in queryserv.
Amazing work sir! One more stretch goal if you are down for it -
I'll be picking this one up in the next few days.
Description
In discussions with the devs, this is the start of migrating the qs logging system into the player event system. Given that the player event logging system uses a JSON payload, this will allow that payload to be flattened into 'qs like' structures for easier server operator queries.
At present, this enables:
- aa purchase
- killed npc, named, and raid mobs
- loot
- merchant selling and purchasing
- npc handins
- player speech (off by default)
- trades
The qs logging code for the above is still present. This can be removed (now or in the future) to allow for a migration path for operators.
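The flattening idea can be sketched as below. This is illustrative only: a plain key/value map stands in for the JSON payload, and `AaPurchaseRow` is a made-up flat row, not the actual table schema from this PR.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Made-up flat 'qs like' row for an AA purchase event.
struct AaPurchaseRow {
    std::int64_t char_id = 0;
    std::int32_t aa_id   = 0;
    std::int32_t cost    = 0;
};

// Lift known payload keys into typed columns so operators can query
// them directly instead of digging through JSON.
AaPurchaseRow FlattenAaPurchase(const std::map<std::string, std::int64_t>& payload) {
    AaPurchaseRow row;
    if (auto it = payload.find("char_id"); it != payload.end()) {
        row.char_id = it->second;
    }
    if (auto it = payload.find("aa_id"); it != payload.end()) {
        row.aa_id = static_cast<std::int32_t>(it->second);
    }
    if (auto it = payload.find("cost"); it != payload.end()) {
        row.cost = static_cast<std::int32_t>(it->second);
    }
    return row;
}
```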
Testing
Tested locally on RoF2. Requires further testing and documentation on my part. I wanted to start the review process.
Clients tested:
N/A