We currently create a test_run_id when initiating a test run, and store it as a class variable inside the application test harness.
```python
class OneHopTestHarness:
    # Caching of processes, indexed by test_run_id (timestamp identifier as string)
    _test_run_id_2_worker_task: Dict[str, Dict] = dict()
    # etc...
```
Such data (and related metadata) ought to be stored for shared access in the TestReportDatabase.
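The shared-access idea could look something like the following minimal sketch. The `TestReportDatabase` method names and the dict-backed storage here are assumptions for illustration; the real class would presumably persist to an actual database shared across containers.

```python
# Minimal sketch (assumed API): a shared store for test run worker task
# metadata, replacing the in-process class variable. A dict stands in for
# the real persistence backend here.
from typing import Dict, Optional


class TestReportDatabase:
    """Hypothetical shared store for test run (sub)process metadata."""

    def __init__(self) -> None:
        self._runs: Dict[str, Dict] = {}

    def save_worker_task(self, test_run_id: str, metadata: Dict) -> None:
        # Persist worker task metadata keyed by the timestamp identifier
        self._runs[test_run_id] = metadata

    def get_worker_task(self, test_run_id: str) -> Optional[Dict]:
        # Any process with access to the shared store can look up a run
        return self._runs.get(test_run_id)


db = TestReportDatabase()
db.save_worker_task("1650000000", {"pid": 1234, "status": "running"})
print(db.get_worker_task("1650000000"))
```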
This may actually be a red herring issue, in that the worker task running the Pytest test run pretty well has to execute within one process (Docker container), which ought to keep its (sub)process metadata internal to that process (container). A better idea might be to figure out how best to tag and route all business logic requests for a given (test run) worker task (sub)process, in a deterministic fashion, to the "owner" of the worker task.
One idea to explore is a global mapping of test_run_id's (which are timestamps) to their service (container) identifier, which may simply be the number of the network port exposing the (container) process API. That is, every clone of a 'validator' service would have a unique port number which, alongside a test run identifier, helps ensure that operations on an active test run (e.g. /status and /delete) are properly routed to their owner process.
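The port-based routing idea above could be sketched roughly as follows. All names here (`register_test_run`, `route_request`, the `validator` hostname) are hypothetical illustrations, not the project's actual API:

```python
# Illustrative sketch (hypothetical names): a global registry mapping
# test_run_id timestamps to the port of the 'validator' service clone
# that owns the run, so that /status and /delete requests can be routed
# deterministically to the owner process.
from typing import Dict, Optional

# test_run_id (timestamp string) -> owner service port
_test_run_owner: Dict[str, int] = {}


def register_test_run(test_run_id: str, port: int) -> None:
    # Called when a validator clone launches a test run it will own
    _test_run_owner[test_run_id] = port


def route_request(test_run_id: str) -> Optional[str]:
    # Resolve the base URL of the owning process, if the run is known
    port = _test_run_owner.get(test_run_id)
    return f"http://validator:{port}" if port is not None else None


register_test_run("1650000000", 8091)
print(route_request("1650000000"))
```

Since test_run_id's are timestamps, an alternative to an explicit registry would be hashing the identifier onto the set of known clone ports, which avoids shared state but breaks if clones come and go.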