Merge pull request #62 from IsabelParedes/docs
Update docs
JohanMabille authored Jun 4, 2024
2 parents f74fb0e + da1073c commit dd79890
Showing 4 changed files with 59 additions and 31 deletions.
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -28,7 +28,7 @@ copyright on their contributions.
This software is licensed under the BSD-3-Clause license. See the LICENSE file
for details.

-.. toctree::
+.. toctree::
:caption: INSTALLATION
:maxdepth: 2

14 changes: 7 additions & 7 deletions docs/source/installation.rst
@@ -14,16 +14,16 @@ With mamba or conda

To ensure that the installation works, it is preferable to install ``xeus-zmq`` in a fresh environment.
It is also needed to use a `miniforge`_ or `miniconda`_ installation because with the full `anaconda`_
-you may have a conflict with the ``zeroMQ`` library already installed in the distribution.
+you may have a conflict with the ``ZeroMQ`` library already installed in the distribution.

-The safest usage is to create an environment named ``xeus-zmq``
+The safest usage is to create an environment named ``xeus-env``

.. code:: bash
-    mamba create -n xeus-zmq
-    mamba activate xeus-zmq
+    mamba create -n xeus-env
+    mamba activate xeus-env
-Then you can install in this freshly created environment ``xeus-zmq`` and its dependencies:
+Then you can install ``xeus-zmq`` and its dependencies in this freshly created environment:

.. code:: bash
@@ -44,8 +44,8 @@ We have packaged all these dependencies on conda-forge. The simplest way to inst

.. code:: bash
-    mamba env create -f environment-dev.yml -n xeus-zmq
-    mamba activate xeus-zmq
+    mamba env create -f environment-dev.yml -n xeus-env
+    mamba activate xeus-env
You can then build and install ``xeus-zmq``:

54 changes: 41 additions & 13 deletions docs/source/server.rst
@@ -31,7 +31,7 @@ on one of these channels, the corresponding callback is invoked. Any code execut
will be executed by the main thread. If the ``publish`` method is called, the main thread sends a message
to the publisher thread.

-Having a dedicated thread for publishing messages makes this operation a non-blocking one. When the kernel
+Having a dedicated thread for publishing messages makes this operation a non-blocking one. When the kernel's
main thread needs to publish a message, it simply sends it to the publisher thread through an internal socket
and continues its execution. The publisher thread will poll its internal socket and forward the messages to
the ``publisher`` channel.
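The hand-off described here is essentially ZeroMQ's in-process messaging pattern: the main thread does a cheap send on an ``inproc`` socket and the publisher thread does the blocking work. The following is a minimal sketch of that pattern using plain cppzmq; it is an illustration only, not xeus-zmq's actual internals, and the socket types, endpoint names and port are assumptions.

.. code:: cpp

    #include <string>
    #include <thread>

    #include <zmq.hpp>

    int main()
    {
        zmq::context_t context;

        // The main thread owns the bound end of an in-process PAIR socket.
        zmq::socket_t to_publisher(context, zmq::socket_type::pair);
        to_publisher.bind("inproc://internal_publisher");

        // Publisher thread: forwards whatever arrives on the internal
        // socket to the public PUB socket (the iopub-like channel).
        std::thread publisher([&context]() {
            zmq::socket_t internal(context, zmq::socket_type::pair);
            internal.connect("inproc://internal_publisher");
            zmq::socket_t pub(context, zmq::socket_type::pub);
            pub.bind("tcp://127.0.0.1:5556");
            zmq::message_t msg;
            while (internal.recv(msg))
            {
                pub.send(msg, zmq::send_flags::none);
            }
        });

        // Publishing from the main thread is a quick send on the internal
        // socket; execution continues immediately afterwards.
        const std::string payload = "status: busy";
        to_publisher.send(zmq::buffer(payload), zmq::send_flags::none);

        publisher.join();  // never returns in this sketch; the forwarding loop runs forever
        return 0;
    }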
@@ -40,15 +40,14 @@ The last thread is the heartbeat. It is responsible for notifying the client tha
This is done by sending messages on the ``heartbeat`` channel at a regular rate.

The main thread is also connected to the publisher and the heartbeat threads through internal ``controller``
-channels. These are used to send ``stop`` messages to the subthread and allow to stop the kernel in a clean
-way.
+channels. These are used to send ``stop`` messages to the subthreads and to cleanly stop the kernel.

Extending the default implementation
------------------------------------

The default implementation performs a blocking poll of the channels, which can be a limitation in some
-use cases. For instance, you may way want to poll within an event loop, to allow asynchronous execution
-of code. ``xeus-zmq`` makes it possible ot extend the default implementation by inheriting from the
+use cases. For instance, you may want to poll within an event loop to allow asynchronous execution
+of code. ``xeus-zmq`` makes it possible to extend the default implementation by inheriting from the
`xserver_zmq class`_. It provides utility methods to poll, read and send messages, so that defining
a new server does not require a lot of code.
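For instance, an event-loop-friendly server would replace the indefinite blocking poll with a short, timed poll invoked on every loop iteration. The sketch below shows the general shape of such a timed poll with plain cppzmq; it is not the ``xserver_zmq`` API, and the sockets merely stand in for the ``shell`` and ``control`` channels (multipart handling and message dispatch are omitted).

.. code:: cpp

    #include <chrono>

    #include <zmq.hpp>

    int main()
    {
        zmq::context_t context;

        // Stand-ins for the shell and control channels.
        zmq::socket_t shell(context, zmq::socket_type::router);
        shell.bind("tcp://127.0.0.1:5555");
        zmq::socket_t control(context, zmq::socket_type::router);
        control.bind("tcp://127.0.0.1:5554");

        zmq::pollitem_t items[] = {
            { shell.handle(), 0, ZMQ_POLLIN, 0 },
            { control.handle(), 0, ZMQ_POLLIN, 0 }
        };

        bool running = true;
        while (running)
        {
            // Poll with a timeout instead of blocking forever, so the
            // surrounding event loop gets control back between polls.
            zmq::poll(items, 2, std::chrono::milliseconds(10));

            if (items[0].revents & ZMQ_POLLIN)
            {
                zmq::message_t msg;
                if (shell.recv(msg))
                {
                    // dispatch the shell message here
                }
            }
            if (items[1].revents & ZMQ_POLLIN)
            {
                zmq::message_t msg;
                if (control.recv(msg))
                {
                    // dispatch the control message; a stop request
                    // would set running = false
                }
            }

            // ... run one iteration of the host event loop here ...
        }
        return 0;
    }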

@@ -67,18 +66,18 @@ This server runs four threads that communicate through internal `ZeroMQ` sockets
responsible for polling the ``control`` channel while a dedicated thread listens on the ``shell``
channel. Having separated threads for the ``control`` and ``shell`` channel makes it possible to send
messages on a channel while the kernel is already processing a message on the other channel. For instance
-one can send on the ``control`` a request to interrupt a computation running on the ``shell``.
+one can send on the ``control`` channel a request to interrupt a computation running on the ``shell``.

The control thread is also connected to the shell, the publisher and the heartbeat threads through internal
-``controller`` channels. These are used to send ``stop`` messages to the subthread and allow to stop the
-kernel in a clean way, similarly to the ``xserver_zmq``.
+``controller`` channels. Similar to ``xserver_zmq``, these are used to send ``stop`` messages to the
+subthreads and to stop the kernel in a clean way.

-The rest of the implementation is similar to that of ``xserver_zmq``.
+The rest of the implementation is also similar to that of ``xserver_zmq``.

xserver_shell_main internals
----------------------------

-The ``xserver_shell_main`` class is very similar to the ``xserver_control_main`` class, except that
+The ``xserver_shell_main`` class is almost identical to the ``xserver_control_main`` class, except that
the main thread listens on the ``shell`` channel as illustrated in the following diagram:

.. image:: server_main.svg
@@ -89,12 +88,41 @@ Extending xserver_zmq_split

Like the default implementation, the ``xserver_control_main`` and ``xserver_shell_main`` servers
perform a blocking poll on each channel. It is possible to provide a different execution model for
-both kind of servers. However, the methods to do it slightly differs from extending the default
-implementation. Instead of inheriting from the `xserver_zmq_split class`_, one can provide independent
+both kinds of servers. However, the process to accomplish this slightly differs from the process of extending
+the default implementation. Instead of inheriting from the `xserver_zmq_split class`_, one can provide independent
execution models for the control channel and the shell channel by inheriting from the `xcontrol_runner class`_
and the `xshell_runner class`_ respectively.

-``TODO``: provide a code illustrating this
+.. code::
+
+    // xcustom_runner.hpp
+    #include "xeus-zmq/xshell_runner.hpp"
+
+    class xcustom_runner final : public xshell_runner
+    {
+    public:
+        xcustom_runner(event_loop loop);
+        ~xcustom_runner() override = default;
+    private:
+        void run_impl() override;
+        event_loop p_loop{ nullptr };
+    };
+
+.. code::
+
+    // xcustom_runner.cpp
+    #include "xcustom_runner.hpp"
+
+    void xcustom_runner::run_impl()
+    {
+        // Add custom execution model here
+        // Example:
+        p_loop->run_forever();
+    }
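By symmetry, a custom execution model for the ``control`` channel would derive from ``xcontrol_runner`` in the same way. The sketch below assumes that ``xcontrol_runner`` exposes the same kind of ``run_impl`` hook as the ``xshell_runner`` example above and that the header lives at ``xeus-zmq/xcontrol_runner.hpp``; ``event_loop`` is the same placeholder type.

.. code:: cpp

    // xcustom_control_runner.hpp
    #include "xeus-zmq/xcontrol_runner.hpp"

    class xcustom_control_runner final : public xcontrol_runner
    {
    public:

        explicit xcustom_control_runner(event_loop loop)
            : p_loop(loop)
        {
        }

        ~xcustom_control_runner() override = default;

    private:

        void run_impl() override
        {
            // Drive the control channel from the custom event loop
            // instead of the default blocking poll.
            p_loop->run_forever();
        }

        event_loop p_loop{ nullptr };
    };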
.. _xeus server API: https://xeus.readthedocs.io/en/latest/server.html#public-api
.. _xserver_zmq class: https://github.com/jupyter-xeus/xeus-zmq/blob/main/include/xeus-zmq/xserver_zmq.hpp
20 changes: 10 additions & 10 deletions docs/source/usage.rst
@@ -45,18 +45,18 @@ Instantiating a server
`xeus-zmq` provides three different implementations for the server:

-- ``xserver_zmq_default`` is the default server implementaion, it runs three thread, one for publishing,
-one for the heartbeat messages, and the main thread handles the shell, control and stdin sockets. To
+- ``xserver_zmq_default`` is the default server implementation; it runs three threads, one for publishing,
+one for the heartbeat messages, and the main thread for handling the shell, control and stdin sockets. To
instantiate this implementation, include ``xserver_zmq.hpp`` and call the ``make_xserver_default``
function.
-- ``xserver_control_main`` runs an additional thread for handling the shell and the stdin sockets. Therefore
-the main thread only listens to the control socket. This allow to easily implement interruption of code
+- ``xserver_control_main`` runs an additional thread for handling the shell and the stdin sockets. Therefore,
+the main thread only listens to the control socket. This allows us to easily implement interruption of code
execution. This server is required if you want to plug a debugger in the kernel. To instantiate this
-implementation, include ``xserver_zmq_split`` and call the ``make_xserver_control_main`` function.
+implementation, include ``xserver_zmq_split.hpp`` and call the ``make_xserver_control_main`` function.
- ``xserver_shell_main`` is similar to ``xserver_control_main`` except that the main thread handles the shell
and the stdin sockets while the additional thread listens to the control socket. This server is required if
you want to plug a debugger that does not support native threads and requires the code to be run by the main
-thread. To instantiate this implementation, include ``xserver_zmq_split`` and call the
+thread. To instantiate this implementation, include ``xserver_zmq_split.hpp`` and call the
``make_xserver_shell_main`` function.
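Whichever server you pick, the factory function is handed to the ``xeus::xkernel`` constructor in the kernel's ``main``. The sketch below shows the typical wiring, based on the usual xeus kernel skeleton; the exact ``xkernel`` constructor signature and helpers such as ``load_configuration``, ``get_user_name`` and ``make_zmq_context`` may differ across xeus/xeus-zmq versions, and ``my_interpreter`` is a placeholder for your interpreter class.

.. code:: cpp

    #include <memory>
    #include <string>
    #include <utility>

    #include "xeus/xhelper.hpp"
    #include "xeus/xkernel.hpp"
    #include "xeus/xkernel_configuration.hpp"

    #include "xeus-zmq/xserver_zmq.hpp"
    #include "xeus-zmq/xzmq_context.hpp"

    #include "my_interpreter.hpp"  // placeholder for your xinterpreter subclass

    int main(int argc, char* argv[])
    {
        // Jupyter passes the connection file as `-f <file>`.
        std::string connection_file = (argc > 2) ? argv[2] : "connection.json";
        xeus::xconfiguration config = xeus::load_configuration(connection_file);

        auto context = xeus::make_zmq_context();
        auto interpreter = std::make_unique<my_interpreter>();

        // Swap make_xserver_default for make_xserver_control_main or
        // make_xserver_shell_main to change the threading model.
        xeus::xkernel kernel(config,
                             xeus::get_user_name(),
                             std::move(context),
                             std::move(interpreter),
                             xeus::make_xserver_default);
        kernel.start();
        return 0;
    }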

Instantiating a client
@@ -67,10 +67,10 @@ have a look at our `ipc client class`_ and the `ipc client implementation file`_

`xeus-zmq` currently provides a single implementation for the client:

-- ``xclient_zmq`` is the primary client implementaion, it runs two threads, one for sending a "ping" message to the
-heartbeat each 100ms, one for polling the iopub socket and pushing the received message into a queue, and the main
-thread waits for messages either popping messages from the queue or polling the shell and the controll sockets for
-receieved messages. To instantiate this implementation, include ``xclient_zmq.hpp`` and call the
+- ``xclient_zmq`` is the primary client implementation; it runs two threads, one for sending a "ping" message to the
+heartbeat every 100ms, and one for polling the iopub socket and pushing the received message into a queue. The main
+thread waits for messages by either popping messages from the queue or polling the shell and the control sockets for
+received messages. To instantiate this implementation, include ``xclient_zmq.hpp`` and call the
``make_xclient_zmq`` function.
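The queue mentioned above is a classic producer/consumer hand-off: the iopub listener pushes each broadcast message as it arrives and the main thread pops at its own pace. The sketch below shows that pattern with the standard library only; it is a generic illustration, not the actual ``xclient_zmq`` internals.

.. code:: cpp

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <utility>

    // Minimal thread-safe queue: the iopub listener pushes, the main thread pops.
    class message_queue
    {
    public:

        void push(std::string msg)
        {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_queue.push(std::move(msg));
            }
            m_cv.notify_one();
        }

        std::string pop()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return !m_queue.empty(); });
            std::string msg = std::move(m_queue.front());
            m_queue.pop();
            return msg;
        }

    private:

        std::mutex m_mutex;
        std::condition_variable m_cv;
        std::queue<std::string> m_queue;
    };

    int main()
    {
        message_queue iopub_queue;

        // Stand-in for the iopub listener: it would normally poll the iopub
        // socket and push every message it receives.
        std::thread listener([&iopub_queue]() {
            iopub_queue.push("stream: hello from the kernel");
        });

        // The main thread consumes messages whenever it is ready for them.
        std::cout << iopub_queue.pop() << std::endl;

        listener.join();
        return 0;
    }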

.. _ipc client class: https://github.com/jupyter-xeus/xeus-zmq/blob/main/test/xipc_client.hpp
