Thanks for the effort of writing up your perspective. Here is mine, from tinkering
for a while as part of exploring what a CI library could look like.
General critique
Missing tl;dr: how you see things and where you want to go. Saying "X is bad"
is not a good intro and may make readers question the quality of the content.
The main tradeoffs of 1. performance vs. 2. correctness vs. 3. simplicity vs. 4. throughput vs. 5. latency are unclear. This is most prevalent in clone/fork semantics, which on Unixes use copy-on-write (COW) semantics for maximum performance with complex/huge process block data. Numbers on the performance differences would be relevant here, both for reproducible simple-to-complex spawning of processes and for spawning processes from a shell.
Try to be shorter. "Why bother?" belongs in the first or second sentence.
No need to use political wording like "decarbonization" if you want to
focus on the technical core. Making things energy-efficient and less
error-prone to improve safety and security sounds better to me, ideally with
numbers.
Formal models, ideally testable or verifiable ones, reduce risk on every metric,
so your argument of "changing huge systems" does not hold. Currently, Unix has
no (formal) process model, even though the process is its core abstraction and
data structure.
Now the content
"File descriptors for everything" does not explain the idea of race-free
resource handling based on in-kernel reference counting of each process holding
the resource. Try to write this more concisely.
A definition of "global namespaces" is missing. Are you referring to Linux
namespaces, which are complex to set up and incomplete (the kernel API and resources
are other attack vectors)? Do you have example programs where you tried to do that,
so the reader can estimate the complexity vs. a regular setup?
Time/scheduling and security are global properties, and as such modifying them can
be seen as mandating a global namespace, because there must be one admin; so
these statements are unclear to me.
"Spawning processes" is way too short and does not capture 1. the memory behavior of COW, 2. the elimination of various race conditions, 3. signaling, 4. process groups, and the time tradeoffs Unixes took (and how slow things would get on their elimination).
"File System": I don't understand the specialization on files and dirs if there are already
generalized capability systems on pointers to reuse (CHERI). No use case is mentioned, so YAGNI?
"Mounting" sounds good
"Temporarily temporary files": Why should tmp files persist across process group termination? Shell hacks?
"File system transactions": Looks like an async io_uring API for file ops. The problem with transactions is that it is easy to get deadlocks, to DoS system resources, or to end up with very bad timing dependencies. The other flaw is that the underlying hardware may offer no guarantees whatsoever, so one has to either split syscalls into 1. those with hardware guarantees and 2. those without, or do what POSIX does: 3. offer an ordering-unreliable API for both.
"Asynchronous interfaces" Looks like the case for async io_uring APIs.
"Networking": Does this mean the lifetime of a socket would be bound to the process group or not?
My main complaint about the article is that it leaves out some of the most horrible places:
process group semantics and how resources should relate to process groups (including performance tradeoffs)
a scalable permission model (Unix files have 1 owner, 1 group, 1 domain),
which is fast but annoying to use, leaving security holes open through lazy configuration
pty semantics, meaning which time-critical in-band protocols live in the kernel,
and the performance tradeoffs of replacements (Arcan being the only candidate)
link https://github.com/Ericson2314/baccumulation/blob/main/reforming-unix.adoc