Hello, I'm using scrapyrt to provide an HTTP interface to a big Scrapy project. I'm running a single scrapyrt instance in a Docker container; some spiders take ~60-120 seconds to complete, and I've noticed that requests are handled sequentially, causing substantial delays.
Is that the expected behavior? I know scrapyrt is not suited to long-running spiders, but I'm wondering whether there is a quick fix, for example running multiple workers/threads. I'm asking here because I'm not really familiar with the Twisted framework.
Another solution would be running multiple scrapyrt instances behind a load balancer, but I'd rather not go down that path.
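For completeness, here is a minimal sketch of that multi-instance route done entirely in Python, assuming scrapyrt's `-p <port>` command-line flag and its default `/crawl.json` endpoint; the port list and the round-robin `crawl()` helper are illustrative only, not an official recipe.

```python
import itertools
import subprocess

import requests

# Ports for the scrapyrt instances we spawn; chosen arbitrarily for illustration.
PORTS = [9080, 9081, 9082]

# Start one scrapyrt process per port. Assumes the `scrapyrt` CLI is on PATH,
# that `-p` selects the listening port, and that the current working directory
# is the Scrapy project root.
workers = [subprocess.Popen(["scrapyrt", "-p", str(port)]) for port in PORTS]

# Naive client-side round-robin in place of a real load balancer.
_ports = itertools.cycle(PORTS)

def crawl(spider_name: str, url: str) -> dict:
    """Send a crawl request to the next scrapyrt instance in the rotation."""
    port = next(_ports)
    resp = requests.get(
        f"http://localhost:{port}/crawl.json",
        params={"spider_name": spider_name, "url": url},
        timeout=180,  # generous timeout for spiders that run 60-120 s
    )
    resp.raise_for_status()
    return resp.json()
```

This only spreads requests across instances; it doesn't change how each instance schedules its own crawls, and the spawned processes still need to be terminated on shutdown.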