
Commit

chore(refactor): Move utility functions into utils from scrapyd.jobstorage
jpmckinney committed Jul 19, 2024
1 parent b1854fd commit d1e62dc
Showing 5 changed files with 16 additions and 16 deletions.
11 changes: 6 additions & 5 deletions docs/news.rst
@@ -60,13 +60,14 @@ CLI
Utils
^^^^^

-Move functions from ``scrapyd.utils`` into their callers:
+- Move functions from ``scrapyd.utils`` into their callers:

-- ``sorted_versions`` to ``scrapyd.eggstorage``
-- ``get_crawl_args`` to ``scrapyd.launcher``
-- ``JsonResource``, ``get_spider_list`` and ``UtilsCache`` to ``scrapyd.webservice``
+  - ``sorted_versions`` to ``scrapyd.eggstorage``
+  - ``get_crawl_args`` to ``scrapyd.launcher``
+  - ``JsonResource``, ``get_spider_list`` and ``UtilsCache`` to ``scrapyd.webservice``

-Move ``activate_egg`` from ``scrapyd.eggutils`` to ``scrapyd.runner``
+- Move ``activate_egg`` from ``scrapyd.eggutils`` to its caller, ``scrapyd.runner``.
+- Move ``job_items_url`` and ``job_log_url`` from ``scrapyd.jobstorage`` to ``scrapyd.utils``. :ref:`jobstorage` is not responsible for URLs.

Fixed
~~~~~
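For downstream code that imported these helpers from ``scrapyd.jobstorage``, the import path changes with this commit, as the ``scrapyd/webservice.py`` and ``scrapyd/website.py`` hunks below show. A minimal before/after sketch:

```python
# Before this commit:
# from scrapyd.jobstorage import job_items_url, job_log_url

# After this commit, the helpers live in scrapyd.utils:
from scrapyd.utils import job_items_url, job_log_url
```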
8 changes: 0 additions & 8 deletions scrapyd/jobstorage.py
@@ -7,14 +7,6 @@
from scrapyd.utils import sqlite_connection_string


-def job_log_url(job):
-    return f"/logs/{job.project}/{job.spider}/{job.job}.log"


-def job_items_url(job):
-    return f"/items/{job.project}/{job.spider}/{job.job}.jl"


class Job:
    def __init__(self, project, spider, job=None, start_time=None, end_time=None):
        self.project = project
8 changes: 8 additions & 0 deletions scrapyd/utils.py
@@ -4,6 +4,14 @@
from scrapy.utils.misc import load_object


+def job_log_url(job):
+    return f"/logs/{job.project}/{job.spider}/{job.job}.log"


+def job_items_url(job):
+    return f"/items/{job.project}/{job.spider}/{job.job}.jl"


def get_spider_queues(config):
"""Return a dict of Spider Queues keyed by project name"""
spiderqueue_path = config.get("spiderqueue", "scrapyd.spiderqueue.SqliteSpiderQueue")
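These helpers only format URL paths from a job's attributes. A minimal usage sketch, assuming the ``Job`` class from ``scrapyd.jobstorage`` shown above; the project, spider and job identifiers are hypothetical, for illustration only:

```python
from scrapyd.jobstorage import Job
from scrapyd.utils import job_items_url, job_log_url

# Hypothetical job identifiers, for illustration only.
job = Job(project="myproject", spider="myspider", job="abc123")

print(job_log_url(job))    # -> /logs/myproject/myspider/abc123.log
print(job_items_url(job))  # -> /items/myproject/myspider/abc123.jl
```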
3 changes: 1 addition & 2 deletions scrapyd/webservice.py
@@ -17,9 +17,8 @@

from scrapyd.config import Config
from scrapyd.exceptions import EggNotFoundError, ProjectNotFoundError, RunnerError
-from scrapyd.jobstorage import job_items_url, job_log_url
from scrapyd.sqlite import JsonSqliteDict
-from scrapyd.utils import native_stringify_dict
+from scrapyd.utils import job_items_url, job_log_url, native_stringify_dict


def param(
2 changes: 1 addition & 1 deletion scrapyd/website.py
@@ -9,7 +9,7 @@
from twisted.web import resource, static

from scrapyd.interfaces import IEggStorage, IPoller, ISpiderScheduler
-from scrapyd.jobstorage import job_items_url, job_log_url
+from scrapyd.utils import job_items_url, job_log_url


class PrefixHeaderMixin:
