
Ushering in a new age of inter-program communication, with JSON?


TLDR?

Recently I’ve been watching a lot of talks, news, and videos about the various automation systems that we use in our attempts to make ourselves better (or at the very least more consistent) programmers. One thing I have repeatedly hit my head on is the virtue (yet kludginess) of the unrestricted system call. No matter which automation system you look at, it seems that to achieve the utmost flexibility, unrestricted system calls are almost always included. Ultimately, most automation systems want to give you a way to replicate the “you” that is sitting at the command line, who knows exactly which commands to run and what order to run them in. However, in an often unpredictable environment, things go wrong, and how do we know if something has gone wrong with the 5th/6th/28th arbitrary shell command you called? A binary (in terms of success/failure) zero or non-zero exit value.

There have been a lot of efforts to advance automation in recent decades (the proliferation of VMs, Docker, Chef/Puppet/Ansible, TDD, Travis CI, to name a few), so why do we still deal with (and expect) binary indicators of success and failure? Success and failure are not, of course, binary things: multiple failures may occur, and a success may not be a complete success, yet we still represent them in such binary terms. Yes, you can take 0 to mean “every single operation we tried succeeded”, and you can assign the numbers 1 through x to error types 1 through x, but what happens when you have mixtures? I think the system is not ideal (yes, almost nothing is), but we can do better.

Core kernel developers, Assembly devs, K&R true believers, you may leave now, because I’m relatively sure, even with my limited knowledge (I actually think C is really cool; I try to read the K&R book annually), that what I’m about to suggest is absolute garbage. Sorry for taking up your time.

However, for those still naive enough to believe this might be an as-yet-unsolved problem worth solving, this is what I’m getting at:

A contract to govern inter-program communication, upheld with JSON.

A naive iteration of this contract might go something like this (for output):

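(Every field name below is just a placeholder I’m sketching out on the spot, not any kind of proposed standard.)

```json
{
  "status": "partial_success",
  "operations": [
    { "name": "create_user", "status": "success" },
    {
      "name": "restart_service",
      "status": "failure",
      "error": "service 'app' not found",
      "recoverable": true
    }
  ],
  "raw_output": "whatever the program would normally print can still live here for humans"
}
```

The point being that the caller gets structured, per-operation detail instead of a lone exit code, and can react to a partial failure instead of guessing at it.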

Yes, there will be various losses (speed, codebase size/length, etc.), but I think that what you gain is worth it: I think that we can more easily build systems that think. Systems that learn, intelligently retry, train themselves, and execute flawlessly in an imperfect world. Yeah, that’s a pretty grandiose statement, but I dunno, doesn’t it seem possible? Sit back and think about it. Still not convinced? That’s OK, skepticism is good, and there’s a really good chance I have no idea what I’m talking about, so there’s that too.

Imagine with me a trivial use case for this imaginary contract: a developer sitting in some comfy chair, trying to throw together some build scripts in their provisioning/automation tool. They’re sitting at the terminal, and the shell command that should work has been entered. Pretty sure it will work, maybe even confident (it’s worked every time before, after all). However, they provision/deploy/start automating, and encounter a perplexing error and an accompanying stack trace/dump. It seems that a shell command they inserted returned silently, without the side effects that were expected. Of course, before the shell command is exposed as the root cause, the code will be scoured, the logs will be read, the code will be scoured perhaps once more, the problematic line number will come slowly into view, and our developer will fix the error. Whether this plays out over seconds, minutes, hours, or never happens at all is anyone’s guess. I think even the naive implementation of the imaginary contract that I have proposed could have helped with that. With knowledge of the correct execution of the program (or at least a once-through of the docs), this developer could have surmised the reason for the crash much more quickly just by viewing the program’s output, and perhaps written that logic into the build script, right after that system call (increasing its flexibility).
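For instance, under the naive contract sketched above, that silently skipping command might have reported something like this (again, these fields are just made up for illustration):

```json
{
  "status": "success_with_warnings",
  "warnings": [
    {
      "operation": "copy_build_artifacts",
      "message": "destination directory missing, nothing was copied"
    }
  ],
  "errors": []
}
```

The exit code alone would still have been a happy 0; the structured output is what points at the missing side effect.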

Is this even something that should be attempted? Is this a “good-intentioned bad idea that will actually ruin everything”? Honestly, I’m not quite sure, but I thought I should at least share. After all, there’s that saying about some roads being paved with good intentions. Can’t seem to remember how it goes.

tl;dr: Why don’t we return JSON instead of 0s and non-zeros, relaying program state and execution information to the user? If we decide on a reasonable standard, maybe it wouldn’t be horrible and we could do more things with stuff.