[kwlug-disc] On service feedback (from rant to constructive something)

Mikalai Birukou mb at 3nsoft.com
Sun Feb 8 11:41:24 EST 2026


I am writing this "simple, almost trivial" service that provides FUSE 
with a filesystem implementation, which then gets mounted by the 
kernel, the way FUSE does it. I am writing a service.

So, I go through the phases: calls go from NodeJS to the FUSE-ing 
process via NAPI-RS, and a test mount produces the usual suspects in 
the logs, as the desktop surroundings knock on the newly attached 
mount. Then I do a more deliberate test setup and get normal-looking 
logs in the console, followed by nothing there, while the OS returns 
EIO errors.

I kinda expect the kernel to nudge me, to produce an error for me, so 
that I can start suspecting the last logged call, etc. But there is 
nothing.

Eventually, after implementing two different threading models [Rust 
libraries are the best for ... match-ing, composing bigger parts from 
them, they ..._rule!], I notice that the attrs for the root directory 
node carried the "it is a file" flag. Once that is corrected, the 
kernel (FUSE) continues working with my service.
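For context on that flag: in a POSIX `stat`, the file type and the 
permissions share the one `st_mode` word, so a root node stamped 
"regular file" instead of "directory" is a one-field slip. Here is a 
minimal sketch of the distinction — the constants are the standard 
`sys/stat.h` octal values, while the buggy/fixed naming is my 
illustration, not the actual code:

```rust
// File-type bits as defined in sys/stat.h (octal values).
const S_IFMT: u32 = 0o170000; // mask selecting the file-type field
const S_IFDIR: u32 = 0o040000; // directory
const S_IFREG: u32 = 0o100000; // regular file

// True when the mode word says "this node is a directory".
fn is_dir(mode: u32) -> bool {
    mode & S_IFMT == S_IFDIR
}

fn main() {
    // What the buggy getattr effectively said about the root inode:
    let buggy_root_mode = S_IFREG | 0o755; // "it is a file"
    // What the kernel expects for the root of a mounted filesystem:
    let fixed_root_mode = S_IFDIR | 0o755;

    assert!(!is_dir(buggy_root_mode)); // kernel sees a file here
    assert!(is_dir(fixed_root_mode)); // kernel is happy, mount works
}
```

The permissions bits (0o755) ride along unchanged in both cases; only 
the type field differs, which is exactly why such a slip is easy to 
stare past in the logs.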

A day-plus of debugging begs the following simple question: why 
wouldn't FUSE, a client of my service, provide me with feedback? 
Please, please, slam the door when you stop working with me. Let me 
know why you don't come back. [... sh-h-... Do boy-girl breakups look 
like that, sometimes? ... as I eye back into my own past ... yikes ...]

In this case the implementation of the service provides callbacks. The 
client asks nicely, the server returns a reply nicely, conceptually 
the function call is done here; then the client attempts to digest the 
reply and gets other thoughts.

When there is only one book around, the OS bible, one may expect the 
implementor of a service to RTFM and figure out the implicit feedback. 
Maybe. But in the wider world, where we provide services to each 
other, it is not economically [time-economy] viable to expect others 
to know all your rules beforehand. Hence, feedback feels needed.

How would you do feedback in the aforementioned function-calling 
pattern? There seems to always be an implicit layer! For example, 
since the kernel EIO's all requests without telling my service 
implementation, the service effectively doesn't exist, and the 
filesystem is effectively unmounted. The kernel could explicitly 
unmount the service, and the implementation would then get an 
unexpected unmount/destruction feedback.
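As a toy sketch of that wished-for door slam (every name here is 
hypothetical, mine, and not libfuse's API): the "kernel" side detects 
the broken expectation, tells the service why, and only then drops it, 
instead of silently EIO-ing from that point on.

```rust
// All names here are hypothetical illustrations, not a real FUSE API.
trait FsService {
    fn root_is_dir(&self) -> bool;
    fn unmounted(&mut self, reason: &str); // the wished-for door slam
}

struct MyFs {
    root_is_dir: bool,
    last_feedback: Option<String>,
}

impl FsService for MyFs {
    fn root_is_dir(&self) -> bool {
        self.root_is_dir
    }
    fn unmounted(&mut self, reason: &str) {
        // Now the implementor has something concrete to grep for.
        self.last_feedback = Some(reason.to_string());
    }
}

// The "kernel" side: on a broken expectation, unmount explicitly,
// telling the service why, instead of silently EIO-ing everything.
fn kernel_mount(fs: &mut dyn FsService) -> Result<(), String> {
    if !fs.root_is_dir() {
        let reason = "root node is not a directory";
        fs.unmounted(reason);
        return Err(reason.to_string());
    }
    Ok(())
}

fn main() {
    let mut fs = MyFs { root_is_dir: false, last_feedback: None };
    assert!(kernel_mount(&mut fs).is_err());
    assert_eq!(
        fs.last_feedback.as_deref(),
        Some("root node is not a directory")
    );
}
```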
I'd say that this pattern generalizes: all calls to services happen 
over some connection, even if it is just a pointer to a buffer and 
some counters. Connections cross runtime boundaries, and services and 
clients usually run in different runtimes. So, door-slamming feedback 
can be implemented with connection closing.
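Rust's standard channels are a handy illustration of that 
connection-closing feedback (this is plain std behavior, nothing 
FUSE-specific): once the consumer drops its end of the connection, the 
provider's very next send returns an error instead of vanishing into 
silence.

```rust
use std::sync::mpsc::channel;

// Returns true if the provider learns that the consumer hung up.
fn provider_notices_hangup() -> bool {
    let (tx, rx) = channel::<&str>();

    // While the consumer holds its end, sends succeed.
    assert!(tx.send("reply").is_ok());
    assert_eq!(rx.recv(), Ok("reply"));

    // The consumer "slams the door": its end is dropped.
    drop(rx);

    // The next send reports the breakup, rather than silence.
    tx.send("are you still there?").is_err()
}

fn main() {
    assert!(provider_notices_hangup());
}
```

The key design point: the closing of the connection itself is the 
feedback, so the provider does not need the consumer to stay around 
long enough to explain itself.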

Of course, feedback via connection closing is not viable for TCP over 
a network. But the network switch's reaction pattern (a packet-dropping 
vibe?) doesn't have to be the best one in all scenarios.

Summarizing:
A service provider and its consumer live in different runtimes. 
Connections to pass messages will always exist in one form or another. 
Let's help each other with feedback when expectations are broken.



