This is just a proof of concept for a possible Python-based test environment, meant to allow somewhat more robust testing of the API endpoints. Feel free to close it if you think this is out of scope for the platform.
WHY?
The current Perl testing environment is OK for most scenarios, and is probably more than enough for the project's scope, but it is ultimately limited when it comes to really interacting with the Redis DB: we have to mock the actual Redis commands, which means the tests may not end up being very reliable. With this tiny Python test environment we actually bring up the API, and Python simply sends requests to it, so the server executes real Redis commands against a real Redis DB. That lets us focus on checking that the actual result is what we expect for a given input.
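To make the idea concrete, a test under this approach boils down to something like the sketch below. The endpoint, host name and expected response shape are assumptions for illustration, not the actual LRR API contract:

```python
# Hypothetical sketch: the endpoint, host and expected JSON shape are
# assumptions, not the real LRR API contract.
import requests


def test_archives_endpoint_hits_real_redis():
    # The request goes to the actual running server, which talks to a real
    # Redis DB, so the assertion checks real behaviour instead of a mock.
    response = requests.get("http://lanraragi:3000/api/archives", timeout=10)
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```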
Is it a better approach?
TBH, no. This only lets us test the API endpoints, not the inner methods, and TDD usually requires that every single piece of code is tested, so we hit that limitation here. It's more a way to create superficial tests that confirm LRR still works as expected after a body of work than a way to verify that the platform as a whole is stable.
How does it work?
I added a new docker compose file and a new Dockerfile. They are the same as the base platform, but they add a Python instance with pytest and a few other libraries. Inside the tests folder there is a python folder with all the Python files for the tests (and a requirements.txt in the root). You call pytest on the Python instance to run the tests and verify that everything passes; I usually drive that through a couple of Makefile targets.
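As a rough sketch of how the tests could locate the containerised instance, a conftest.py along these lines would work; the fixture name, default URL and LRR_BASE_URL environment variable are illustrative assumptions, not part of this PR:

```python
# Hypothetical conftest.py sketch: the fixture name, default URL and
# LRR_BASE_URL environment variable are assumptions for illustration.
import os
import time

import pytest
import requests


@pytest.fixture(scope="session")
def lrr_url():
    """Base URL of the LRR container, waiting until it answers."""
    url = os.environ.get("LRR_BASE_URL", "http://lanraragi:3000")
    # Poll until the container responds, so tests don't race its startup.
    for _ in range(30):
        try:
            requests.get(url, timeout=2)
            return url
        except requests.ConnectionError:
            time.sleep(1)
    pytest.fail(f"LRR instance at {url} never became reachable")
```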
I'm only providing a single test for this concept, since covering everything is quite a lot of work and I'm not sure whether this would be a good feature or something to be rejected, especially because testing the login-required endpoints will probably need a bit more work to find a way to generate an API key on startup. I'm mostly creating this PR to showcase the idea and find out whether it's worth working on or should be discarded.