
kotauskas/interprocess


Rust version: 1.75+

Interprocess communication toolkit for Rust programs that aims to expose as many platform-specific features as possible while maintaining a uniform interface for all platforms and encouraging portable, correct code.

Communication primitives

Interprocess provides both OS-specific IPC interfaces and cross-platform abstractions for them.

Cross-platform IPC APIs
  • Local sockets – similar to TCP sockets, but instead of ports on localhost they use filesystem or namespaced paths (depending on the OS), bypassing the network stack entirely; implemented using named pipes on Windows and Unix domain sockets on Unix (see the sketch after this list)
Platform-specific, but present on both Unix-like systems and Windows
  • Unnamed pipes – anonymous file-like objects for communicating privately in one direction, most commonly used to communicate between a child process and its parent
Unix-only
  • FIFO files – a special type of file that is similar to unnamed pipes but exists on the filesystem; often referred to as "named pipes", yet completely different from Windows named pipes
  • Unix domain sockets – Interprocess no longer provides those, as they are present in the standard library; they are, however, exposed as local sockets
Windows-only
  • Named pipes – resemble Unix domain sockets, but use a separate namespace instead of on-drive paths
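
As a quick illustration of the cross-platform local socket API, here is a minimal sketch of a synchronous echo server. It assumes the 2.x local_socket module layout; the socket name "example.sock" and the line-based protocol are made up for the example.

    use interprocess::local_socket::{prelude::*, GenericNamespaced, ListenerOptions};
    use std::io::{self, BufRead, BufReader, Write};

    fn main() -> io::Result<()> {
        // Namespaced names avoid filesystem paths where the platform allows it;
        // GenericFilePath would pin the socket to an on-disk path instead.
        let name = "example.sock".to_ns_name::<GenericNamespaced>()?;
        let listener = ListenerOptions::new().name(name).create_sync()?;

        for conn in listener.incoming() {
            let mut conn = BufReader::new(conn?);
            let mut line = String::new();
            conn.read_line(&mut line)?; // read one request line...
            conn.get_mut().write_all(line.as_bytes())?; // ...and echo it back
        }
        Ok(())
    }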

Asynchronous I/O

Currently, the only supported async runtime is Tokio. Local sockets and Windows named pipes are provided by Interprocess, while Unix domain sockets are available in Tokio itself.
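
For instance, a Tokio-based client could look like the following minimal sketch. It assumes the tokio cargo feature is enabled, the 2.x local_socket::tokio module layout, a runtime set up via #[tokio::main], and the futures crate for the async I/O extension traits; the socket name is again made up.

    use futures::{
        io::{AsyncBufReadExt, AsyncWriteExt, BufReader},
        try_join,
    };
    use interprocess::local_socket::{
        tokio::{prelude::*, Stream},
        GenericNamespaced,
    };

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let name = "example.sock".to_ns_name::<GenericNamespaced>()?;
        let conn = Stream::connect(name).await?;
        // Split into receive and send halves so both directions can run concurrently.
        let (recver, mut sender) = conn.split();
        let mut recver = BufReader::new(recver);

        let mut reply = String::with_capacity(128);
        // Send a greeting and await the reply at the same time.
        try_join!(
            sender.write_all(b"Hello from client!\n"),
            recver.read_line(&mut reply)
        )?;
        print!("Server answered: {reply}");
        Ok(())
    }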

Support for smol is planned.

Platform support

Interprocess supports Windows and all generic Unix-like systems. Additionally, platform-specific extensions are supported on select systems. The policy with those extensions is to put them behind #[cfg] gates and only expose them on the supporting platforms, producing compile errors instead of runtime errors on platforms that have no support for those features.
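
The pattern this implies for downstream code is sketched below with a deliberately hypothetical extension trait (FooExt is not part of Interprocess); the point is only that the consumer must mirror the gate, and forgetting it fails at compile time rather than at runtime.

    // Hypothetical Linux-only extension trait, shown purely to illustrate the policy.
    #[cfg(target_os = "linux")]
    pub trait FooExt {
        fn foo(&self) -> std::io::Result<()>;
    }

    // Consumers mirror the gate; on targets without the feature this item simply
    // does not exist, so misuse is a compile error instead of a runtime error.
    #[cfg(target_os = "linux")]
    fn use_foo(thing: &impl FooExt) -> std::io::Result<()> {
        thing.foo()
    }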

Four levels of support (not called tiers to prevent confusion with Rust target tiers, since those work completely differently) are provided by Interprocess. It would be a breaking change for a platform to be demoted, although promotions quite obviously can happen as minor or patch releases.

Explicit support

OSes at this level: Windows, Linux, macOS

  • Interprocess is guaranteed to compile and succeed in running all tests – it would be a critical bug for it not to
  • CI, currently provided by GitHub Actions, runs on all of those platforms and displays an ugly red badge if anything is wrong on any of those systems
  • Certain #[cfg]-gated platform-specific features are supported with stable public APIs
Explicit support with incomplete CI

OSes at this level: FreeBSD, Android

  • Interprocess is expected to compile and succeed in running all tests – it would be a bug for it not to
  • GitHub Actions only allows Clippy and Rustdoc to be run for those targets in CI (via cross-compilation) due to a lack of native VMs
  • Certain #[cfg]-gated platform-specific features are supported with stable public APIs
Explicit support without CI

OSes at this level: OpenBSD

  • Interprocess is expected to compile and succeed in running all tests – it would be a bug for it not to
  • Manual testing on local VMs is usually done before every release; no CI happens because those targets' standard library .rlibs cannot be installed via rustup target add
  • Certain #[cfg]-gated platform-specific features are supported with stable public APIs
Support by association

OSes at this level: DragonFly BSD, NetBSD, Redox, Fuchsia, iOS, tvOS, watchOS

  • Interprocess is expected to compile and succeed in running all tests – it would be a bug for it not to
  • No manual testing is performed, and CI is unavailable because GitHub Actions does not provide it
  • Certain #[cfg]-gated platform-specific features that originate from other platforms are supported with stable public APIs because they behave identically to how they do on an OS with a higher support level
Assumed support

OSes at this level: POSIX-conformant #[cfg(unix)] systems not listed above for which the libc crate compiles

  • Interprocess is expected to compile and succeed in running all tests – it would be a bug for it not to
  • Because this level encompasses a practically unbounded number of systems, no manual testing or CI can exist

Feature gates

  • tokio, off by default – enables the Tokio variants of IPC primitives (where applicable).
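
A downstream crate would typically opt in with a dependency declaration along the lines of interprocess = { version = "2", features = ["tokio"] } in its Cargo.toml; the exact version requirement shown here is an assumption.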

Anti-LLM notice

I (Goat), the author of this software, have never used any LLM (large language model) software in any context for any reason. I will continue to uphold this in the absence of violent or otherwise unweatherable external pressure compelling me to do otherwise.

Accordingly, LLMs have not been used in the development of this software, and LLM-generated code is not present in the source tree. If you desire to avoid software produced with the use of LLMs, or if you are working under restrictions that prohibit use of LLM-generated dependencies, this software can safely be added to your allowlist.

Please note that, while an effort is made to avoid introducing dependencies containing LLM-generated code and ones known to have been made with the use of LLMs by their core developers, there are insufficient resources to guarantee a total lack of such software in the transitive dependency tree. If you discover use of LLMs upstream of this software, you are urged to report this on the issue tracker. The offending dependency will be removed within the constraints of feasibility and maintainer bandwidth.

Additionally, some dependencies form part of the public API of this software. Those cannot simply be removed without impacting users and thereby stooping lower than the users of LLMs themselves. For dependencies that are large and/or part of the public API while not being authored and maintained by me, an effort is made to gate them behind off-by-default features to minimize the risk of LLM contamination. Note that this policy predates the LLM pandemic and is also conducive to proactive prevention of watering-hole attacks (please do not refer to those as "supply-chain attacks").

Why avoid "AI" and its users?

My ethical judgement of LLMs and other forms of so-called AI is the result of experiencing the "AI" bubble take the world by storm and unmistakably worsen just about every aspect of human life I care about:

  • Machine translation has lowered the standards for what is considered adequate translation into the ground. Something I have borne witness to in my own living room: it is now completely normal for an English→Russian translation of a safety manual for a construction crane to be a freelance gig with pitiful pay that requires deciphering atrocious Chinese→English machine translation and inventing ways of localizing terminology that does not exist in the industry – all because it is that much cheaper to roll the token casino and have qualified professionals take impossible responsibility for the negligence of people outside their control. Misleading marketing targeting the underinformed and those eager to line their pockets, aided by the fraudulent benchmarks with which machine learning companies misrepresent their products despite conflicts of interest that are patently obvious and have little to no precedent of comparable scale in the history of academia, has made this method of cutting costs acceptable in general business ethics, even when people's lives are on the line. That large language models have turned out to be pretty bad at languages is about as symbolic a wakeup call as it gets.

  • Superficiality, obligatoriness, and technique have long been an artificial and circularly reasoned benchmark of quality that mainstream society amply employs as a weapon of sophism to bash artists of various fields for doing things it doesn't like, but with neural network image diffusion excelling at achieving those three qualities and hardly anything else, the "AI" bubble delivers the strongest push toward conformism and soullessness in art in recent times. Utility art is now ugly, utility music is dull, and artists have fewer means than ever of convincing the economy that they deserve to exist as living beings – all while the supposedly intelligent tools replacing their work are so unoriginal that one can distill their output to one of 12 generic recurring images. Other fields of art and creative expression are experiencing a similar devaluation of authenticity as LLM-generated blog posts designed to be skimmed rather than read disgrace the eyes of readers in nerd online spaces, all while executives of AAA game companies compete with those of Hollywood in how much they can cut costs on soulless cash-grab drivel by outsourcing more and more to the slop machines.

  • The propensity of LLMs and neural network image diffusion to produce content that appears natural to users but has absolutely no relation to reality has plunged the concept of truth itself into a deepening crisis. Media outlets without standards of quality can now fabricate images of objects that do not exist and events that never happened at a higher rate than ever before, and gullible people are more eager than ever to propagate them as evidence. The "AI" boom is a gift not only to the likes of RT and Tsargrad, which are professional disinformation outlets catering to a particular audience akin to content farms, but also to social media personalities abusing context collapse to deliver harmful falsehoods to audiences that would otherwise never stumble upon them. Deepfakes in stark violation of consent, automated libel, blunt force propaganda – nothing is off the table in the intellectual and ethical race to the bottom powered by the slop machines.

  • Finally, and of most interest to this anti-LLM notice, the effect of LLMs on the software industry has been no less negative. Nearly half of all code generated by "AI" has been found to contain security flaws, and curl's maintainers have experienced first-hand the shocking spam wave of pseudo-security research mass-produced by LLMs. Social contracts fostering cooperation and reciprocity have been obliterated by a typhoon of abuse motivated by financial interest, as website administrators are now forced to make choices with no good options: either prevent users who have JavaScript disabled from accessing their website, or suffer outages that prevent everyone from accessing their website. Far from all of the kind souls carrying out volunteer work completely for free will weather this manufactured storm in the wake of "AI" companies ruthlessly robbing our temples of open information, as sifting through spam and abuse in a constant state of heightened caution is now a mandatory part of the workload.

Yet more evidence for the overwhelming amounts of harm brought upon all of humankind by the AI pandemic can be found on resources such as the AI Incident Database and the Wikipedia article on the challenges to ethics of "artificial intelligence". You might also want to take a look at the LLM-Afflicted Software registry. If you know of concise and helpful resources that could be additionally linked to by this notice, do not hesitate to inform me of them.

The bottom line of all this is that those who wish to avoid a fate of drowning in the world flood of neural network slop have to take it upon themselves to resist this tide. The manufacture of consent for things that go starkly against the interests of the people is not a magical process, and it can be interfered with. A key part of this resistance is vocal rejection – the louder we cry about the harm the "AI" boom is causing us, the more difficult to chew we become for the all-devouring worm of venture capital and stock market hysteria. Just as important if not more important is hitting the perpetrators where it materially hurts – their coffers. The more difficult it is to use "AI" tools on account of societal pushback, the less likely people are to spend their disposable income on subscriptions that finance the perpetual treadmill of training of models, thereby funding the abuse of our internet resources, the devaluation of actual intelligent life, and the destruction of our planet with ever-growing emissions, debasement of otherwise inhabitable territory by the noise pollution and resource consumption of superfluous data centers, and a fundamentally destructive trajectory of unbounded growth on a finitely-sized planet.

The limitations of this methodology are not lost on me, and I certainly do not believe that a collective boycott of so-called artificial intelligence is sufficient to steer our world away from the worst possible outcome. Still, doing what you can to create collective, decentralized, unignorable and unyielding pushback is a much better alternative to sitting idly with your hands thrown high up in the air and hopelessly watching all that which makes life worth living be cast into a planet-sized fire pit. By simply sharing this message and replicating a zero-tolerance policy against LLMs in the projects that you own, you will already be doing much more to solve this crisis than the average person.

If you are an LLM user looking to contribute to this software with the use of LLM coding assistants or to ask for support in using this software in your "AI"-powered or "AI"-promoting endeavors, you can hopefully now understand why I will refuse to cooperate in both of those scenarios.
