docs: Copy-edit
jpmckinney committed Jul 26, 2024
1 parent 3064a28 commit 2b63d64
Showing 5 changed files with 20 additions and 12 deletions.
4 changes: 2 additions & 2 deletions docs/api.rst
@@ -57,7 +57,7 @@ Parameters
entry_points = {'scrapy': ['settings = projectname.settings']},
)
-Do this easily with the `scrapyd-deploy` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package.
+Do this easily with the ``scrapyd-deploy`` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package.

Example:

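For context, the ``entry_points`` fragment in this hunk belongs in the project's ``setup.py``. A minimal sketch (``projectname`` is a placeholder):

.. code-block:: python

   # Hypothetical setup.py sketch; replace "projectname" with your project.
   from setuptools import setup, find_packages

   setup(
       name='projectname',
       version='1.0',
       packages=find_packages(),
       entry_points={'scrapy': ['settings = projectname.settings']},
   )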
@@ -241,7 +241,7 @@ Get the pending, running and finished jobs of a project.

- Pending jobs are in :ref:`spider queues<spiderqueue>`.
- Running jobs have Scrapy processes.
-- Finished jobs are in job storage.
+- Finished jobs are in :ref:`job storage <jobstorage>`.

.. note::

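As context for the three job states above, the ``listjobs.json`` webservice reports each state in a separate array. A representative call (host and project names assumed):

.. code-block:: shell

   curl http://localhost:6800/listjobs.json?project=myproject

The response has the shape ``{"status": "ok", "pending": [...], "running": [...], "finished": [...]}``.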
12 changes: 6 additions & 6 deletions docs/config.rst
@@ -152,7 +152,7 @@ Default
``scrapyd.spiderqueue.SqliteSpiderQueue``
Options
- ``scrapyd.spiderqueue.SqliteSpiderQueue`` stores spider queues in SQLite databases named after each project, in the :ref:`dbs_dir` directory
-- Implement your own, using the ``ISpiderQueue`` interface
+- Implement your own, using the :py:interface:`~scrapyd.interfaces.ISpiderQueue` interface
Also used by
- :ref:`addversion.json` webservice, to create a queue if the project is new
- :ref:`schedule.json` webservice, to add a pending job
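For the ``ISpiderQueue`` interface referenced in this hunk, a minimal in-memory sketch might look as follows (method names follow ``scrapyd.interfaces``; exact signatures can vary between Scrapyd versions, and this is not the shipped implementation):

.. code-block:: python

   # Hypothetical in-memory spider queue; Scrapyd's shipped queue uses SQLite.
   from zope.interface import implementer

   from scrapyd.interfaces import ISpiderQueue


   @implementer(ISpiderQueue)
   class MemorySpiderQueue:
       def __init__(self, config, project):
           self.messages = []

       def add(self, name, priority=0.0, **spider_args):
           self.messages.append(({'name': name, **spider_args}, priority))
           self.messages.sort(key=lambda pair: pair[1], reverse=True)

       def pop(self):
           # Return the highest-priority message, or None if the queue is empty.
           return self.messages.pop(0)[0] if self.messages else None

       def count(self):
           return len(self.messages)

       def list(self):
           return [message for message, _ in self.messages]

       def remove(self, func):
           # Remove all messages for which func(message) is true; return the count.
           kept = [pair for pair in self.messages if not func(pair[0])]
           removed = len(self.messages) - len(kept)
           self.messages = kept
           return removed

       def clear(self):
           self.messages = []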
@@ -183,7 +183,7 @@ Options
- The launcher adds :ref:`max_proc` capacity at startup, and one capacity each time a Scrapy process ends.
- The :ref:`application` starts a timer so that, every :ref:`poll_interval` seconds, jobs start if there's capacity: that is, if the number of Scrapy processes that are running is less than the :ref:`max_proc` value.

-- Implement your own, using the ``IPoller`` interface
+- Implement your own, using the :py:interface:`~scrapyd.interfaces.IPoller` interface

.. _poll_interval:

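The capacity rule described in this hunk reduces to a simple check on each poll. An illustrative sketch (helper names are assumptions, not Scrapyd's actual code):

.. code-block:: python

   # Illustrative only: start pending jobs while there is free capacity.
   def on_poll(launcher, queues, max_proc):
       while len(launcher.processes) < max_proc:
           message = next_pending_message(queues)  # hypothetical helper
           if message is None:
               break
           launcher.spawn_process(message)         # hypothetical helper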
@@ -280,11 +280,11 @@ The directory in which to write Scrapy items.

An item feed is written to ``{items_dir}/{project}/{spider}/{job}.jl``.

-If this option is non-empty, the `FEEDS <https://docs.scrapy.org/en/latest/topics/feed-exports.html#std-setting-FEEDS>`__ Scrapy setting is set as follows, resulting in feeds being written to the specified directory as JSON lines:
+If this option is non-empty, the `FEEDS <https://docs.scrapy.org/en/latest/topics/feed-exports.html#std-setting-FEEDS>`__ Scrapy setting is set as follows, resulting in items being written to the above path as JSON lines:

.. code-block:: json
{"value from items_dir": {"format": "jsonlines"}}
{"file:///path/to/items_dir/project/spider/job.jl": {"format": "jsonlines"}}
Default
``""`` (empty), because it is recommended to instead use either:
@@ -408,7 +408,7 @@ Options
- ``scrapyd.eggstorage.FilesystemEggStorage`` writes eggs in the :ref:`eggs_dir` directory

.. note:: Eggs are named after the ``version``, replacing characters other than ``A-Za-z0-9_-`` with underscores. Therefore, if you frequently use non-word, non-hyphen characters, the eggs for different versions can collide.
-- Implement your own, using the ``IEggStorage`` interface: for example, to store eggs remotely
+- Implement your own, using the :py:interface:`~scrapyd.interfaces.IEggStorage` interface: for example, to store eggs remotely

.. _eggs_dir:

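As an illustration of implementing ``IEggStorage`` (for example, as a step toward remote storage), a minimal in-memory sketch; method names follow ``scrapyd.interfaces``, and exact signatures can vary by version:

.. code-block:: python

   # Hypothetical egg storage keyed by (project, version); not the shipped class.
   from io import BytesIO

   from zope.interface import implementer

   from scrapyd.interfaces import IEggStorage


   @implementer(IEggStorage)
   class DictEggStorage:
       def __init__(self, config):
           self.eggs = {}  # {project: {version: bytes}}

       def put(self, eggfile, project, version):
           self.eggs.setdefault(project, {})[version] = eggfile.read()

       def get(self, project, version=None):
           # Return (version, file-like object), or (None, None) if missing.
           versions = self.eggs.get(project, {})
           if not versions:
               return None, None
           if version is None:
               version = sorted(versions)[-1]  # naive "latest" choice
           return version, BytesIO(versions[version])

       def list(self, project):
           return sorted(self.eggs.get(project, {}))

       def delete(self, project, version=None):
           if version is None:
               self.eggs.pop(project, None)
           else:
               self.eggs.get(project, {}).pop(version, None)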
@@ -439,7 +439,7 @@ Default
Options
- ``scrapyd.jobstorage.MemoryJobStorage`` stores jobs in memory, such that jobs are lost when the Scrapyd process ends
- ``scrapyd.jobstorage.SqliteJobStorage`` stores jobs in a SQLite database named ``jobs.db``, in the :ref:`dbs_dir` directory
-- Implement your own, using the ``IJobStorage`` interface
+- Implement your own, using the :py:interface:`~scrapyd.interfaces.IJobStorage` interface

.. _finished_to_keep:

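For the ``IJobStorage`` interface in this hunk, a minimal in-memory sketch (the shipped ``MemoryJobStorage`` is similar in spirit; details may differ by version):

.. code-block:: python

   # Hypothetical job storage; "job" objects carry project, spider, job,
   # start_time and end_time attributes, per the documented data model.
   from zope.interface import implementer

   from scrapyd.interfaces import IJobStorage


   @implementer(IJobStorage)
   class ListJobStorage:
       def __init__(self, config):
           self.jobs = []
           self.finished_to_keep = config.getint('finished_to_keep', 100)

       def add(self, job):
           self.jobs.append(job)
           del self.jobs[:-self.finished_to_keep]  # keep only the newest N

       def list(self):
           return list(reversed(self.jobs))

       def __len__(self):
           return len(self.jobs)

       def __iter__(self):
           return iter(reversed(self.jobs))  # newest first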
4 changes: 2 additions & 2 deletions docs/contributing/index.rst
@@ -41,7 +41,7 @@ To install an editable version for development, clone the repository, change to

.. code-block:: shell
-pip install -e .
+pip install -e .[test,docs]
Developer documentation
-----------------------
@@ -99,7 +99,7 @@ A **finished job** is an object with the attributes ``project``, ``spider``, ``j
- ISpiderQueue
- IPoller
- ScrapyProcessProtocol
-- Job
+- IJobStorage
* - Project
- *not specified*
- _project
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -28,7 +28,7 @@ Upload a project

This involves building a `Python egg <https://setuptools.pypa.io/en/latest/deprecated/python_eggs.html>`__ and uploading it to Scrapyd via the `addversion.json <https://scrapyd.readthedocs.org/en/latest/api.html#addversion-json>`_ webservice.

-Do this easily with the `scrapyd-deploy` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. Once configured:
+Do this easily with the ``scrapyd-deploy`` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. Once configured:

.. code-block:: shell
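The shell block above is truncated by the page viewer. A representative invocation (target and project names assumed):

.. code-block:: shell

   scrapyd-deploy default -p myproject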
10 changes: 9 additions & 1 deletion docs/news.rst
@@ -3,6 +3,14 @@ Release notes

.. changelog
+Unreleased
+----------
+
+Added
+~~~~~
+
+- Default webservices can be disabled. See :ref:`config-services`.
+
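A hedged sketch of what disabling a default webservice could look like in ``scrapyd.conf``, assuming the ``[services]`` section accepts an empty value (see :ref:`config-services` for the authoritative syntax):

.. code-block:: ini

   ; assumption: an empty value disables the endpoint
   [services]
   daemonstatus.json =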
1.5.0b1 (2024-07-25)
--------------------

@@ -345,7 +353,7 @@ Added
Changed
~~~~~~~

-- Move scrapyd-deploy command to `scrapyd-client <https://pypi.org/project/scrapyd-client/>`__ package. (:commit:`c1358dc`, :commit:`c9d66ca`, :commit:`191353e`)
+- Move ``scrapyd-deploy`` command to `scrapyd-client <https://pypi.org/project/scrapyd-client/>`__ package. (:commit:`c1358dc`, :commit:`c9d66ca`, :commit:`191353e`)
- Allow the :ref:`items_dir` setting to be a URL. (:commit:`e261591`, :commit:`35a21db`)
- Look for a ``~/.scrapyd.conf`` file in the user's home directory. (:commit:`1fce99b`)

