Kelvin versioning is one of the worst ideas in Urbit.
In Kelvin versioning, the version number counts down instead of up, with the goal of eventually reaching “absolute zero”, at which point the software is frozen forever.
Kelvin versioning ignores why versioning exists in the first place. The problem isn’t that reaching “Kelvin Zero” is too difficult in practice; the problem is that the idea doesn’t make sense in principle.
Versioning: conceptual deconstruction
To explore the concept of “versioning” from a few angles, consider the following observations:
- Every piece of software that gets published is already at Kelvin Zero, as long as a copy of that program is available, or its source code is available and you have a working compiler. You can run it, forever, exactly the same as it was published.
- Note that in this ontology, a modified version of the program (what would normally be called a “new version”) is considered a different program. This is deconstructive (it’s not how the concept of “versioning” is normally applied), but it’s illustrative as a thought experiment.
- The caveat “…assuming a copy is available” might sound like a big assumption, but it’s purely an operational problem, not in principle related to the software development process or to versioning.
- In the more conventional sense (treating “a program” as a set of programs with a shared genealogy), any developer can set any project to Kelvin Zero at any moment, simply by deciding not to publish any more updates. This is often referred to as “abandoning” the project.
These are not spurious examples. In practice, both of these happen all the time. People run “outdated” versions of software, often quite happily (e.g., consider Microsoft Windows); developers abandon projects, often unhappily for the users of those projects.
Therefore, if reaching Kelvin Zero is good (the premise of Kelvin Versioning) and can in principle be achieved effortlessly, we have to ask why developers publish software updates in the first place.
Why do we have software updates at all?
There are basically two reasons for updating software (you could argue that one is a special case of the other, but it’s useful to break them out separately):
- The developers found a “better way to do it”: something about the program itself is being made better, whether through new features, bug fixes, or performance improvements.
- Something about the environment that the program operates in has changed (i.e., some of its assumptions are no longer true), and so the old version no longer works properly.
Observe that Kelvin Versioning is of no value in either of these cases!
The first case is simple: someone can decide, “I’m happy with the software as is, and I’ll willingly forgo any further updates.” As stated above, Kelvin Versioning isn’t required for people to do this; the only pertinent question is whether that version is still available for download, not whether newer versions have also been published in the meantime.
The second case is especially relevant in the modern software context, and worth breaking down further. Most programs have a huge number of environmental and runtime dependencies: they run inside interpreters (e.g., Python or JavaScript), dynamically link against many libraries, consume external network APIs and services, and so on. These are things outside the program’s control, and if any of them changes substantially, the program stops working. Hardware is in this category too: if a program is built for a chipset that is no longer made, you won’t be able to run it anymore. Note that, again, none of these issues is meaningfully addressed by Kelvin Versioning, because the problem lies outside the program being versioned.
Again, this isn’t a spurious example; a huge share of version updates in real software amount to “bump dependency from version X to version Y”.
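To make that pile of external assumptions concrete, here is a minimal, purely illustrative Python sketch (the URL is a stand-in, not any real API) in which almost every line depends on something outside the program itself:

```python
import ssl
import sys
import urllib.request

# Assumption 1: an interpreter new enough to run this file exists on the machine.
assert sys.version_info >= (3, 8), "interpreter too old"

# Assumption 2: the OS ships a TLS stack whose protocol versions the remote
# end still accepts, with a CA bundle that still validates its certificate.
context = ssl.create_default_context()

# Assumption 3: a remote service still exists at this address and still speaks
# the same protocol. (example.org is a placeholder, not a real dependency.)
with urllib.request.urlopen("https://example.org/", context=context) as resp:
    print(resp.status)
```

Freezing this program’s own version number does nothing to freeze any of those three things.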
Dependency trees
You could say, “Well, that just means that to reach Kelvin Zero, you can only rely on sub-components that are also at Kelvin Zero!” But that just pushes the problem down a level. Just as any program that’s still available is already at Kelvin Zero, so is any library; the same rule applies to hardware and, most importantly, to network services. The same logic also works in the inverse: if you make a Telegram client, and Telegram then changes its authentication system (or takes the server down, or eliminates your favorite feature, etc.), your Telegram client won’t work anymore, because an assumption it made about the world no longer holds.
At every stage of the dependency tree, you either need a copy of that thing, or the ability to construct one.
Take a hypothetical program that has successfully reached Kelvin Zero, and which you should therefore be able to run, unchanged, forever:
- First, you need either a copy of the program itself, or its source code plus a working compiler.
- For any dynamically linked libraries (or script modules in the case of JS/Python), you need a copy of them… or a link to a site you can download them from… or their source code and a working compiler.
- If it’s an interpreted program, you also need a copy of the interpreter… or the interpreter’s source code plus a compiler for it.
- If you’re compiling any of the above items, the compiler itself is a program which follows the same rules as above. Either you have a copy of it, or you have to compile one, recursively.
- You also need hardware that can execute the machine code for that program and its dependencies; or a spec for the chip, plus a factory that can manufacture it.
- If your program connects to a remote service of any kind… well, you could continue the analogy by saying, “Either it’s up, or you have to host your own version of it”, although in the case of stuff like AWS or Stripe or Google Search, this starts to degenerate to the point of silliness.
- …etc
Every branch of this dependency tree eventually ends in a question of availability. Kelvin Versioning does nothing to solve this, and publishing newer versions in the same genealogy does nothing to harm it.
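As a restatement of that argument, here is a toy Python model (not anything from Urbit’s actual tooling; all names are made up) in which “can this program ever run again?” reduces to a recursive availability check over its dependency tree, with version numbers nowhere in sight:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    have_copy: bool = False                              # a runnable/usable artifact is on hand
    buildable_from: list = field(default_factory=list)   # sources, toolchain, specs, services, ...

def available(node: Node) -> bool:
    """A node is available if we hold a copy, or can reconstruct it from
    parts that are themselves available."""
    if node.have_copy:
        return True
    return bool(node.buildable_from) and all(available(dep) for dep in node.buildable_from)

# Hypothetical example: everything is in hand except a remote service that no
# longer exists and can't realistically be self-hosted.
compiler = Node("cc", have_copy=True)
libfoo   = Node("libfoo", buildable_from=[compiler])
service  = Node("remote API", have_copy=False)
program  = Node("my_app", buildable_from=[compiler, libfoo, service])

print(available(program))   # False; no version number, however low, fixes this
```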
Urbit’s Nock is intended to circumvent this by defining a base-layer machine-code spec so simple that keeping it available forever is trivial… but a Nock interpreter is extremely non-trivial! Begin again at the top of the list, please!
Summary
The fundamental need supposedly being met by Kelvin Versioning is the user’s desire to run “the same program”, at varying levels of resolution, forever, without any changes.
What jeopardizes this, in both cases outlined above, is fundamentally an issue of availability (of either the program itself, or something external it relies on), which no versioning scheme can solve. In practice, from the user’s perspective, the only thing that meaningfully improves the longevity of a program is reducing runtime dependencies, i.e., static linking and in-sourcing.
For programs whose primary use case is acting as a client for a network service (e.g., Telegram, Discord, Claude Code), there is no way to make them “frozen forever” from the user’s perspective, and there’s no point in a charade with a countdown timer that asymptotically approaches zero without ever reaching it.
For other types of programs, more conventional techniques like static linking and vendoring are more effective in practice at reducing the need for updates.
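As one Python-flavored sketch of what vendoring buys you (the project layout here is hypothetical), the standard library’s zipapp module can bundle an application together with copies of its pure-Python dependencies into a single archive:

```python
# Minimal sketch of in-sourcing/bundling in Python terms. "myapp/" is a
# hypothetical directory containing __main__.py plus vendored copies of its
# pure-Python dependencies, rather than importing them from the environment.
import pathlib
import zipapp

project = pathlib.Path("myapp")
zipapp.create_archive(project, target="myapp.pyz",
                      interpreter="/usr/bin/env python3")
# The resulting ./myapp.pyz depends only on a Python interpreter being present
# at run time, not on whatever packages happen to be installed then.
```

Fewer moving parts at run time, not a lower version number, is what actually extends a program’s lifespan.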