“Packaging Kubernetes for Debian”:
This raises key questions: bundling (“vendoring”) and its implications, contemporary #FreeSoftware development practices and their impact on distro relevance, avoiding/resolving technical disputes, and more.
@civodul because of a directory called ‘vendor’ in the repository, which we need in order to make (proprietary) things work properly.
@civodul I see complexity as one of the modern barriers to practical software freedom. If a reasonably skilled person can't comprehend a system, one can't exercise the freedom to make any meaningful changes, let alone redistribute those changes.
Projects that describe themselves as open source may not begin to consider that point.
I feel a modern interpretation of software freedom requires mindfulness to complexity. Unfortunately, the backbones of modern systems are the opposite of that.
@civodul (I'm not expressing any sort of disagreement with you, or suggesting you think differently than I do; I'm merely adding more for others to consider.)
So glad you mentioned this. Even some GNU software has fallen prey to complexity. Even some of the simplest programs in coreutils look complex compared to those in, say, OpenBSD. I understand that programs are what they are in their current form for various reasons, including the need to run on various platforms. But something got lost along the way...
@vu3rdd @mikegerwitz @civodul Absolutely this. Last year I wrote "Free software is not enough" <https://www.colbyrussell.com/2019/05/15/may-integration.html#free-software-is-not-enough>.
Related: "one of the cornerstones of the FSF/GNU philosophy is that it focuses on maximizing benefit to the user. What could be more beneficial to a user of free software than ensuring that its codebase is clean and comprehensible for study and modification?" <https://www.colbyrussell.com/2019/05/15/may-integration.html#software-finishing>
A key to pushing for solutions is to make it easy for allusions to the problem they solve to live on as memes. Slogans and slogan-like turns of phrase are one form of such memes.
It would be beneficial and convenient if "Free software is not enough" (or "FOSS...") became the shorthand for referencing this problem.
But there's a careful balance to be had. We can't take "practical" to mean "anyone can modify without any training". As @civodul said, some projects have inherent complexity.
Some projects are also complex simply because they are poorly factored, planned, or authored.
To use an example: let's say I'm using a phone dialer, and I want to make the "back" button clear all digits that have been entered when held. Simple, right? So you:
Find the relevant git repository, set up a build toolchain, get deps, find the relevant part of the code, make your change, build, run...
...why can't there be a "view source" button?
The barrier to entry is much higher than it could be.
@mikegerwitz @colby @vu3rdd @civodul Maybe it's because you have scripts blocked; the page makes heavy use of autoplaying video, so if you're not seeing those, it may not be immediately clear what's going on.
There's a good blog post here that explains some of it in a bit more detail and is a bit more static: https://omar.website/posts/notes-from-dynamicland-geokit/
@jfred @colby @vu3rdd @civodul The most notable project I can think of and use that has the equivalent of "view source" for complex programs is Emacs, where you can jump to the source code of any definition, even if it's part of the C sources.
I agree that it could be more front-and-center. Many developers can't make it easy _for other developers_, let alone less familiar users.
I've found what Guix has done with `guix environment` to be really helpful for building software (and `guix edit`).
I'd say for me personally, blurring the lines between using software and developing it is a key part of what I'd consider practical user freedom.
(I didn't actually know about `guix edit`! Seems like a good step in this direction, though it also seems to open up a read-only copy of Guix at this point.)
@roptat @jfred @colby @vu3rdd @civodul Indeed, `guix environment` is the really important part there. Any decent package manager will have a means to acquire sources (e.g. `apt-get source` in Debian), but there are usually additional steps to build it (`apt-get build-dep`), and then your system configuration may still be insufficient for building.
`guix environment` Just Works. And you can build in an isolated environment with `-C`. It's not only convenient, but wonderfully empowering.
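As a sketch of the difference (using GNU Hello as the example package; exact options depend on your Debian/Guix versions, and commands obviously require the respective tools to be installed):

```shell
# Debian: acquiring sources and build dependencies are separate steps,
# and your system's installed toolchain still has to cooperate.
apt-get source hello
apt-get build-dep hello

# Guix: one command drops you into a shell with everything needed to
# build the package; adding -C additionally isolates the build in a
# container, independent of the host system's state.
guix environment hello
guix environment -C hello
```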
> The most notable project I can think of and use that has the equivalent of "view source" for complex programs is Emacs, where you can jump to the source code of any definition, even if it's part of the C sources.
I've never noticed that. Is there an emacs wiki (or similar) page explaining how to do this?
I found some interesting commands at
but not how to jump into the source code of a function.
I guess somewhere at
@boud You can figure out the command associated with any keybinding using `C-h k <keybinding>`. From there, you will see "<keybinding> runs the command (found in M), which is an [interactive] [compiled] [Lisp] function in 'file'", where 'file' is a link to the source file, and will jump to the proper location.
For functions without keybindings, use `C-h f`.
Thanks for your reply. I learnt something :), but I also see that I should have read the full thread first. :P
I mis-interpreted your toot to mean that emacs has a quick command for jumping to the source of the definition of a more-or-less arbitrary C function (in the C language) in a more-or-less arbitrary C file; whereas now it seems you were talking about emacs self-documentation that takes you easily to the emacs source, whether that source is in C or lisp.
@jfred I lament this frequently with my children. They play a number of free software games and there are a number of changes they'd love to make. Conceptually, those changes aren't difficult.
But the barrier to entry is large enough that I simply don't have the free time to get acquainted, let alone go through with the change.
@jfred But there's also the opposite side: e.g. with Minetest, the problem is that some of the mods are so poorly written by people who are clearly not experienced programmers. On one hand, this is wonderful to see---they've been empowered to manipulate this program as they please, and have done amazing things!
But on the other hand, it's an unmaintainable, bug-laden disaster that I struggle to contribute to or even fix obvious bugs in.
It's a really complex situation.
@mikegerwitz Yeah, that's fair. I think there's of course still room for code quality requirements in upstream projects; you're under no obligation to accept contributions that will be a burden to maintain. IMO it's still hugely positive that users can make those changes, though.
How many of those people will go on to *become* experienced programmers after having a taste of it through writing a Minetest mod? How many of those same people would if they hadn't been given that opportunity?
@jfred Oh absolutely, the benefit is enormous. I'm highly supportive of these authors---a number of them young children. I don't mean for my frustrations as a professional to downplay that.
And much of this problem is due to the architecture of the system they're working under.
But I meant to convey that complexity is also subjective. An elegant system to me is unfathomably complex to a beginner lacking the necessary intuition.
It's hard to convey in toots.
@mikegerwitz Yeah, understandable.
I think to some extent the best we can do is to try to make it as easy as possible to learn how a system works as a natural part of using it (without being intimidating).
E.g. if you can point at something in a game, start changing it from there, and see what happens, that's a big improvement over having to open up a text editor to a file yourself and trying to understand what you're looking at.
It'll never be perfect, but I think it's a worthwhile goal.
@mikegerwitz And yes, asking every application developer to do this independently is a tall order. I think the industry/community has lost sight of some things that were possible in older systems, where applications weren't as monolithic and isolated as they are today. This Alto demo is a good demonstration of what I mean by that: https://youtu.be/AnrlSqtpOkw?t=135
(Especially at around 9:00, where some of the more interactive aspects of the system are shown.)
GNU and the free software movement at large need to figure out whether their commitment is to enabling and advocating software freedom, or to a particular computing culture as it existed at an arbitrary point in time, plus an exercise in promoting the idiosyncrasies of that cultural snapshot.
The traditions of how most software is developed have led to a plateau in "practical software freedom".
@colby @jfred I'm not concerned about the binary as long as I can be confident that I (a) have a copy of the corresponding source code and (b) can reproduce that binary myself, bit-for-bit. If that's true, then it doesn't matter what comes out the other end of a compiler. Binaries are _not_ substitutes for sources (which is explicit in the GPL).
The problem I speak of is the inherent complexity in the source code itself and the _process_ of getting it to build.
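A toy sketch of that bit-for-bit check: the `printf` here stands in for a real build step (e.g. `make` or `guix build --check`), which is the hard part in practice.

```shell
# "Build" the same artifact twice; a reproducible build yields
# identical bytes, and therefore identical hashes.
printf 'int main(void){return 0;}\n' > build1.out
printf 'int main(void){return 0;}\n' > build2.out

h1=$(sha256sum build1.out | cut -d' ' -f1)
h2=$(sha256sum build2.out | cut -d' ' -f1)

if [ "$h1" = "$h2" ]; then
    echo "bit-for-bit reproducible"
else
    echo "builds differ"
fi
```

With a real build, timestamps, build paths, and nondeterministic compiler behavior are what typically make the hashes diverge, which is why reproducible-builds work focuses on eliminating those sources of variation.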
@mikegerwitz agreed, but as Adam Spitz originally points out in "Open source is not enough", the immediacy of going from running program -> poking at it should not be discounted (or rather, the effect of latency should not be underestimated).
I'm also acutely sensitive to crummy build processes, e.g., ones that don't produce the same program that you were running, ones that take too long to complete, and ones that don't complete at all.
It's unfortunate that the process in @JordiGH's <http://jordi.inversethought.com/blog/exercising-software-freedom-on-firefox/> and what you describe are the norm.
In GNU, we have IceCat, which has suffered from lack of maintenance; Mark Weaver was applying patches for Guix. So I decided to take up maintainership, along with both Mark and Amin Bandali.
Then some events happened that kept me busy. Then COVID-19 happened. And still, many many months later, I have not had the time to even begin to grok this monstrosity. It's incredibly demotivating. And sadly, I may never find the time.
More importantly, most of us who work on free software do so in our own time, which is hard to come by. I'm also a father, so at the end of the day, I have only a couple hours to select from a queue of things that I can't hope to complete in a lifetime, and have to sacrifice sleep to get more time.
So a large barrier to entry is effectively a DoS attack on all of us who don't have the luxury of time.
'What I've run into as well is free software, where the source is available but the process of building it into something usable would take "months" of work, and the internal toolchain (old software versions, patched packages, etc) is only vaguely documented.'
(Slightly different context, but relevant still.)
@mikegerwitz @colby @vu3rdd @civodul It is, I agree. I'm somewhat sympathetic though; while I like the idea of selling free software as a business model, it doesn't exactly incentivize making the software easy to build.
IMO it'd be nice if there were some standardized way for a package author to ask for payment, and for the distros to integrate support for it. I believe Elementary OS's AppCenter does something like this, albeit in a centralized way.
@jfred I've been thinking about a voluntary oath—let's call it the oath of the Order of the Sable as a placeholder. When a free software user takes OotS oath, they pledge that they will respect the wishes of project leaders who ask for payment when speaking on behalf of the project. (But the payment need not be in the amount suggested—any non-zero payment works.)
@colby @vu3rdd @civodul @jfred Looks like I use the term "practical user freedom" in my LP2019 talk (or at least in the sources for it, which contains an intended transcript; I didn't check the audio). But I was discussing a very different kind of practical, from the perspective of _using_ software, not writing it (though the lines are blurred).
I agree that we could benefit from new terminology in certain cases.
In some domains, complexity is hardly avoidable: compilers, video-editing applications, etc.
But in other domains, it’s mostly an “emergent phenomenon”: developers focus on one thing and build upon a pile of software regarded as a black box. All developers do that to some extent, but it has reached the point where everyone gives up. Definitely a barrier to practical user freedom.
@civodul @mikegerwitz after reading the article I mostly worry about people picking up development practices from a big team that manages dependencies and applying them in hobby projects. How will you ever update 30 dependencies if all you have is 5 hours per week? How will you ensure that your users get security updates? How can you even find out that there are security updates in any of the 30 libs?
That comes down to change management — and minimizing its cost.
As a user I certainly prefer installing only tools that are shipped in my distro. That’s why as a dev I mostly limit myself to using only the libs that are in my distro.
Also those are nicer to install :-)
This has also been a packaging headache in Guix. And for a FSDG distro, there's also the problem of trying to determine whether a program is actually free, given all of those dependencies.
Tools like NPM and Node support and encourage complexity by making it easy for developers to build gigantic dependency graphs and to ignore everything at the levels below.
It’s both an “impressive” feature and an invitation to create this incomprehensible mess.
@cwebber @civodul @mikegerwitz @ArneBab npm-the-tool + npm-the-software-collection really do need to be reworked. (And Yarn is not that thing; Facebook is one of the most egregious offenders of package bloat.)
Some Haxe folks at least have begun making an attempt to do package management differently, which can address some of the problems for the Haxe ecosystem. (The rest comes down to culture, though.)
@colby @cwebber @civodul @ArneBab @codeberg Dependencies are not typically committed to the repo itself (they're downloaded after cloning via the package manager) and so do not contribute to the repository size.
Even showing the size of the repository post-checkout isn't simple, since each package can run arbitrary scripts and perform environment/platform-specific tasks, including compilation.
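For illustration, a hypothetical `package.json` (the `build-native.js` script is made up): the `postinstall` hook runs arbitrary code at install time, so even measuring a checkout's "real" size means executing whatever each package in the graph ships.

```json
{
  "name": "example-pkg",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "^1.3.0"
  },
  "scripts": {
    "postinstall": "node ./build-native.js"
  }
}
```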
@mikegerwitz re not cleanly separating acquisition from execution/compilation:
I think accepting that this is the way things can and/or should be done is _the_ problem, though, not just part of it. Anxiety about dependency graphs, including bloat, is the result. I'm aware that this goes against modern orthodoxy.
From a capability-based systems view, it strikes me as an unnecessary capability that violates POLA.
@colby @mikegerwitz @civodul @ArneBab @codeberg Yes, this is why an ocap/POLA approach is *more* important from a security perspective, but (a) defense in depth still matters, (b) it's important for your trusted computing base, (c) we don't live in ocap systems yet, oh no, and (d) it's still critically important for *community hacking* purposes
@cwebber @mikegerwitz @civodul @ArneBab @codeberg maybe I'm misreading, but I'm not so sure about the "critically important" part. We are arguably seeing that it's actively harmful in some ways (and actively harmful towards ever reaching the "yet").
This modern strategy that's been adopted for handling dependencies feels like it was an attractively packaged bad idea, like inheritance—and has been all along. (Worth noting that both are attempts to solve the code re-use problem?)