
Python test environment #1165

Closed · wants to merge 2 commits

@EfronC (Contributor) commented on Jan 14, 2025:

This is just a proof of concept for a possible Python-based test environment, allowing somewhat more robust testing of the API endpoints. Feel free to close if you think this is out of scope for the platform.

WHY?
The current Perl testing environment is OK for most scenarios, and is probably more than enough for the project's scope, but it is very limited when it comes to actually interacting with the Redis DB: we have to mock the actual Redis commands, which means the tests may not end up being very reliable. With this small Python test environment, we actually mount the API, and Python simply sends requests to it, so the server executes real Redis commands against a real Redis DB. That lets us focus on checking that the actual result matches what we expect for a given input.
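
As a rough illustration, a test in this style just sends a real HTTP request to the mounted API and asserts on the real response; the base URL, endpoint, and assertion below are placeholders, not the exact code from this PR:

```python
# Minimal sketch: request the running API and assert on the actual response.
# The hostname/port and the endpoint are illustrative assumptions.
import requests

BASE_URL = "http://lanraragi:3000"  # hypothetical service name on the compose network

def test_server_info_is_reachable():
    resp = requests.get(f"{BASE_URL}/api/info")
    assert resp.status_code == 200
    assert isinstance(resp.json(), dict)
```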

Is it a better approach?
TBH, no: this only lets us test the API endpoints, not the inner methods. TDD usually requires that every single piece of code is tested, and here we have that limitation. So this is more a way to create superficial tests that confirm LRR still works as expected after a body of work than a way to verify that the platform as a whole is stable.

How does it work?
I added a new docker compose file and a new Dockerfile; these are the same as the base platform, but they add a Python instance with pytest and a few other libraries. Inside the tests folder there is a Python folder with all the Python files for the testing (and a requirements.txt in the root). You call pytest on the Python instance to run the tests and verify that everything passes. I usually use these two targets in a Makefile to execute them (a sketch of the compose service follows the Makefile below):

```makefile
build-test:
	docker compose -f docker-compose-testing.yml build

# The leading "-" lets make continue even if the tests fail, so the
# compose stack is still torn down afterwards.
test:
	-docker compose -f docker-compose-testing.yml run --rm python-tests python -m pytest
	docker compose -f docker-compose-testing.yml down --volumes --remove-orphans
```
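
For context, a minimal sketch of what the `python-tests` service in `docker-compose-testing.yml` could look like; the service names, build contexts, and environment variable are assumptions for illustration, not the exact file from this PR:

```yaml
# Illustrative docker-compose-testing.yml sketch; paths and names are assumptions.
services:
  redis:
    image: redis:7
  lanraragi:
    build: .
    depends_on:
      - redis
  python-tests:
    build:
      context: .
      dockerfile: tests/python/Dockerfile   # hypothetical Dockerfile location
    depends_on:
      - lanraragi
    environment:
      LRR_BASE_URL: http://lanraragi:3000   # hypothetical variable read by the tests
```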

I'm only providing a single test for this concept, since covering everything is a lot of work, and I'm not sure whether this would be a good feature or something to be rejected, especially because testing the login-required endpoints will probably need some more work to find a way to generate an API key on startup. I'm mostly creating this PR as a showcase, to find out whether this is worth working on or should be discarded.

@psilabs-dev (Contributor) commented:
Regarding how to inject API keys into LRR, I have a method you can reference here.
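
(For reference, one possible approach, assuming LRR reads its settings from a Redis hash; the key and field names below are assumptions to verify against the LANraragi source, not necessarily the method linked above.)

```python
# Illustrative sketch: seed an API key in Redis before the tests run.
# "LRR_CONFIG"/"apikey" are assumed names; check the LANraragi source.
import redis

def inject_api_key(host: str, key: str) -> None:
    r = redis.Redis(host=host, port=6379, decode_responses=True)
    r.hset("LRR_CONFIG", "apikey", key)

inject_api_key("redis", "lrr-test-api-key")  # "redis" = compose service hostname
```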

Integration/e2e tests in Python are something I'm also working on through https://github.com/psilabs-dev/lanraragi-satellite, but right now it's not mature. It provides an async API client and mock archive generation tools (manycbz), which are used mostly by my microservice but are also intended for future testing.

The issue with testing a data-oriented web server is reproducible data generation at scale. Some problems only express themselves when a large number of archives are inserted; the same goes for making many simultaneous requests to trigger race conditions and locking issues. To find these bugs, we need the infrastructure to test for them, hence the need for an async client and a mock data generator.
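
As a sketch of the kind of harness this implies (httpx is one possible async client; the endpoint, base URL, and request count are placeholders):

```python
# Fire many concurrent requests at one endpoint to try to surface race
# conditions and locking issues. Endpoint and base URL are placeholders.
import asyncio
import httpx

async def hammer(base_url: str, path: str, n: int = 200) -> list[int]:
    async with httpx.AsyncClient(base_url=base_url) as client:
        responses = await asyncio.gather(*(client.get(path) for _ in range(n)))
        return [r.status_code for r in responses]

if __name__ == "__main__":
    codes = asyncio.run(hammer("http://localhost:3000", "/api/info"))
    assert all(code == 200 for code in codes), codes
```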

@EfronC (Contributor, Author) commented on Jan 15, 2025:

Thanks, I just added a simple example that uses an API key for a test against an auth-required endpoint, for a more complete showcase.
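
A sketch of what such a test can look like (LRR expects the API key base64-encoded as a Bearer token in the Authorization header; the endpoint, base URL, and key value here are placeholders, not the PR's exact code):

```python
# Illustrative pytest sketch for an auth-required endpoint. The key is assumed
# to have been injected at startup; endpoint and base URL are placeholders.
import base64

import requests

API_KEY = "lrr-test-api-key"
BASE_URL = "http://lanraragi:3000"

def auth_headers(key: str) -> dict:
    # LRR takes the key base64-encoded as a Bearer token.
    token = base64.b64encode(key.encode()).decode()
    return {"Authorization": f"Bearer {token}"}

def test_auth_required_endpoint_accepts_key():
    resp = requests.get(f"{BASE_URL}/api/shinobu", headers=auth_headers(API_KEY))
    assert resp.status_code == 200
```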

@Difegue (Owner) commented on Jan 21, 2025:

Thanks for the work put into this, I appreciate the effort.
I do think a separate Python test environment with its own set of Dockerfiles/libraries/samples is a bit out of scope to put in the main repo...
Having that in a separate project/microservice like Satellite makes more sense to me.

I'm definitely interested in integration tests and further instrumentation of the server, as I think it'd be key to fixing some of the speed/process issues we have under scale/load, as mentioned.

As far as the core server goes, implementing better support for metrics that can be read by external services, as in #1080, would probably be helpful?
Stuff like CPU time per request, the current number of processes/workers, etc.

@EfronC (Contributor, Author) commented on Jan 21, 2025:

Ok, thanks for the answer.

Yeah, that's why I created this showcase: to find out whether it was worth working on before going further. I'll close the PR and leave the space open for another implementation of this same idea 👍

@EfronC closed this on Jan 21, 2025