Monday, February 1, 2010


Stali is a new Linux distribution from suckless.org that is based on a "hand selected collection of the best tools for each task, and each tool being statically linked (including some X clients such as xterm, surf, dwm, dmenu, mplayer)." Suckless has often delivered outstanding software, and I am eagerly awaiting an official release. So, what else is in store?

Radically different choices. For those of us who've wanted something original, Stali seems to be just that. To start things off, it doesn't use many GNU libraries. Instead of glibc, Stali uses uClibc. Prior to Stali, I had only seen uClibc in distributions targeting embedded platforms, or in distributions that are rather minimalistic and lacking in features. One advantage is that uClibc produces smaller executables, which offsets the size increase that comes with static linking. Statically linked software usually loads faster, and smaller software uses less RAM.

In Stali, the kernel sheds module support. Yup, one giant monolith. Again, when we consider that it will be statically linked, and that it will be produced using uClibc, there are some major performance gains possible. This also means that no initrd will be needed. The one major problem I can foresee is that a single driver failure could crash the whole kernel... but if Stali ends up using ONLY truly stable kernel releases this could be avoided.
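For reference, dropping modules and the initrd comes down to two switches in a mainline kernel configuration (this fragment is from standard kconfig options, not from Stali's actual config):

```
# Build everything into one monolithic image
# CONFIG_MODULES is not set

# Drop initial RAM disk / initramfs support
# CONFIG_BLK_DEV_INITRD is not set
```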

The init system also looks a bit different. Instead of having several scripts called from one initial rc.S or similarly named script (as has been the trend lately), Stali takes some lessons from Plan 9. The entire system is brought up by one rc.start and halted by rc.stop. This has benefits in the realm of simplicity and speed, but once again, should I want to stop a single service without stopping the machine, a lot of manual setup and configuration will be needed.

Beyond all of these changes, the directory hierarchy is a bit different. It looks like the following:


Some notable omissions are usr, lib, boot, and media. All mounted devices reside in mnt, the kernel lives in bin, and since libraries are statically linked, the lib directory is outmoded. usr is being done away with as well: sources can live in users' home directories, and as for binaries, bin already exists.

I see a lot of potential here, and considering that these fellows have already proven their talents, I have no reason to doubt them. I have made several posts about my dissatisfaction with the direction things have been taking in the Linux world, and I must say that this is exactly the type of system I have been wanting. I have attempted several Linux distribution projects on my own, and each time I ran into a wall; I simply was never creative enough to solve the various problems I hit. Maybe these guys will have more luck/skill?


Jens Staal said...

I am also VERY keen on trying this one out. A truly novel distribution bringing something new. The whole Debian/Fedora/etc. respin "distributions" are getting old (and probably account for 90% of all distros out there).

According to an Arch forum thread, this distro will also replace large parts of the GNU userland with BSD counterparts (ksh instead of Bash, etc.). It will be quite interesting to see where this goes (and I suppose a variant using the Glendix kernel extensions to run Plan 9 on the "bare kernel" rather than in userland will come in the future).

It would be very cool if this BSD/Linux took off.

Ford said...

Indeed. I am eagerly awaiting an official first release. The only thing I hope they keep from the GNU project is GCC. Otherwise, their ideas seem amazing.

woohoo said...

"notable emissions" .. ???
I apologize for being picky about spelling, but I am not a native English speaker, and I can't friggin stand it when someone publishes an article without proofreading it first.

Ford said...

sorry about that, fixed.

James said...

Totally awesome. I just wish we had a few more juicy clues to speculate with, like which tools are expected to be included and which hardware will be supported. Maybe some info about timing would be good too. Do these guys expect to release Stali this year/week? Can't wait!

Ford said...

Their website does not list a lot of information, but I would expect many of the tools to come from BSD, and some possibly from BusyBox. As for when, there is no way to tell, and I expect it would be something like "when it's ready"... but a year from now is probable. A lot has to be done. The current rsync repo shows a lot of BSD software, but also shows that Stali is not production-ready by any means.

Resuna said...

No lib, no include, you have to install the compiler in your home directory? And all unlinked libraries? I think having a centrally managed toolchain is still desirable, even if you don't have shared libraries... don't forget, there was a /lib before ANY UNIX had shared libraries.

Resuna said...

Looking at their web page, it looks like they will have a /lib and /include. They're keeping /var, though, for some reason... even though there's no better reason for /var than for /usr, other than decluttering root.

Perhaps they should follow the BeOS/Haiku model and create a /linux or /system and shove most of root in there.

Ford said...

on their web page, the first directory hierarchy listing is for development purposes, while the second is the end-user directory tree. Keeping var is somewhat essential for server purposes. lib will be obsoleted by using static linking as opposed to dynamic. The Haiku model is ill-advised, as it requires a lot of either symlinking or kernel hacking, since the Linux kernel looks for things like dev, sys, and proc.

Resuna said...

Yah, I caught on about the two profiles.

The *kernel* looks for specifically named files? Or standard *utilities* look for them? If the Linux kernel actually has any file names hardcoded into it, though... that's just wrong.

Ford said...

It does indeed. Technically, the Linux kernel looks for several. Among them: dev, proc, sys, lib, sbin. sys and lib can be eliminated through configuration. sbin can be eliminated by modifying init/main.c, and dev and proc are just going to be there no matter what. I am certain there are ways you could recode the kernel... but it would take some doing. A faster way of circumventing those requirements is to use symlinks, as GoboLinux does.

cob said...

The kernel uses libc?
Kernel modules keep the kernel from crashing if a driver is crappy?
An initrd isn't needed just because kernel module support is absent?

All three statements are nonsense! Do you have a clue what you are talking about?

The kernel does not use libc. Libc is a userland library that wraps system calls to the kernel (e.g. for input/output) so that you can use C functions instead of writing your own system calls in assembly language.

Kernel modules are used for dynamic loading of drivers, for example for hotpluggable USB devices. But the code runs in the same context as the rest of the kernel. It can do everything evil that ordinary kernel code can do, just like libraries can break your userland application.

If you don't have that feature, you have to reboot if you plug in a new device whose driver you have not already built in.

Initrd can do more than just store kernel modules. It is simply a full RAM image; you can put whole parts of the system there to speed up startup considerably.


Back to topic:
I like the idea of having a statically linked system as well. But I think I would build a kernel with dynamically loadable modules anyway.

cob said...

One more thing:

Memory footprint reduction does not have to result in a performance increase. The GNU stuff may be bloated, and dynamic linking may slow down application startup, but the execution of the code will not be faster. uClibc may be much slower because it is optimized for size instead of execution speed.

technosaurus said...

I previously worked on a similar build using dietlibc, the files are available here:

Memory footprint reduction, if significant enough to make a program fit in cache, will almost always increase performance.

uClibc is HIGHLY configurable - speed, size, compatibility - it's the user's choice, but you can usually only get two.

dietlibc is built for speed and size but with little regard for compatibility, and patching is difficult at times without adding external libs such as libowfat (-lowfat).

I eventually took a break from playing with dietlibc after realizing that mixing libraries under different licenses into a static binary requires much more thought about legal matters.

Has anyone worked on something similar with Red Hat's newlib, Debian's eglibc, or even Google's Bionic or BSD libc?

Ford said...

on modern systems that's relatively fast anyway. the main reason for an initrd is to include kernel modules in system startup on odd hardware that would otherwise not be able to boot Linux.

secondly, take an OS design class. the main reason to use modular device drivers is to keep the system up and running in the event that a driver should fail (or just to be nice to programmers).

last but not least, when you write something in C and do dynamic linking, the library files called by that C code must be present and available to the application you wrote. C, Ada, and most other compiled languages (AFAIK) must be compiled and linked. if you know of some fancy new way to write C that avoids this... please let me know. Until then, go back to school and take more CS courses.

Da Button Factory said...

> This has benefits in the realm of simplicity and speed, but once again, should I choose to stop a service without stopping the machine, a lot of manual setup and configuration will be needed.

Why not use daemontools? It seems to me like the most Unixian way of doing the job. See for instance ("Running svscan as process 1").

Or maybe use something like runit.

Anyway good luck with stali, it looks very interesting!

Syntropy said...

In regards to booting the system, a better choice than hand-tailored shell scripts would be kyuba-core, part of the Kyuba Project.

Check out for details.

Jens Staal said...

Apparently, discussions on the dev mailing list have resulted in a consensus that the userland will be taken from 9base > OpenBSD > GNU, based on their degree of suck(less)ness, so the GNU parts in the distro will be as few as possible.

For libc, they are looking at Bionic from Android as the primary libc, with "fallbacks" to eglibc. Attempts are being made to compile 9base etc. using Bionic. Another cool thing they are working on is an "ldwrapper", which should be able to "trick" dynamic libraries by pretending to be a dynamic linker while actually building a static library for the compilation.

As a small and stable base I think this will be a super-cool distro (legacy stuff that requires, or runs more optimally with, dynamic linking could still be run in a fakeroot+debootstrap or similar solution).

For package builders this distribution could be a dream come true - one binary that can run on every version of every Linux distribution (+ perhaps BSDs with Linux syscall compatibility), updated according to the development of the package itself rather than based on whatever libraries distributions X, Y, and Z choose to ship at any given time. A simple packaging of the binary (with 0 dependencies!) in a .deb, .rpm, or whatever, and you are good to distribute it along the normal channels.

0010112 said...

So where would I download the OS?

xtg said...

I'm greatly looking forward to this system. It has the appeal and slimness of Linux From Scratch, but without the mandatory do-it-yourself. Is the toolchain capable of preparing a bootable system yet?

I agree with Ford to a great degree. A couple of years ago, I compiled in all the device drivers for my laptop, figuring that things wouldn't change, and also decided not to compile any of the unneeded modules.

Suffice it to say, I had a lot of "oh, sorry" moments when I had to explain to people that although Linux generally *could* casually read their USB flash drive, mine couldn't (not my model). This happened in different, but essentially equivalent, scenarios as well.

Also, there are plenty of cases where you'd want to unload drivers for hardware you can't get rid of. Some wireless cards and other devices with reverse-engineered drivers simply do not have great stability, and perhaps never will in practice (stable for whatever the developers tested against in terms of remote access points and switches, but not necessarily ad hoc or anything interesting). In these cases, unloading/loading the kmod is the way to get things back to a sane state without at least soft-booting the system.

In other cases, such as with nvidia cards, accessing the better tty rendering modes requires a module that is mutually exclusive with the one for accelerated X. So you have to unload one to get the other (and building both into the kernel won't work).

There might yet be other interesting cases where device power-up/down is currently tied into the module load and unload code, in lieu of userland utils doing the same thing, which, if so, would have implications for portable machines.

In the end, I think if you're already building monolithic kernels for different targets (e.g. the Eee PC) and distributing them all with the system, you might as well provide module support: build in the things that are invariant (disk, mem, framebuffer, keyboard, etc.) and bundle modules for everything else that's sane (you don't need to bundle vendor-specific parport drivers with a netbook, but you do need general parport modules, because someone might use a USB equivalent). In general, all USB modules should be bundled. Things like an integrated webcam or wifi might be better off as modules rather than built-ins, though anything built against published device specs, rather than reverse engineered, *and* that doesn't do power management (or have other side effects) through the load process is safe to build in.

@Jens, yeah, it really would be great for package builders, because library API compatibility becomes a non-issue (assuming the Linux syscalls don't change). There are still some protocol constraints, though. The big example is X. As long as old parts of the X protocol don't get taken out, a binary you build now could still work with a new version of X ten years later.

A major benefit of this is that instead of listing the compatible version of X, package maintainers could list the compatible *range* of versions, and even predict when it will stop working if deprecations and removals are scheduled well ahead of time (so I could build xeyes now, and maybe tell you it'll work from release ABC until release XYZ).

Ford said...

@Jens Staal,
When this post was written, the discussion of which libc to use was nowhere near finished.

matti christensen said...

please hurry - this is just what I've been waiting for! I'm so frustrated with bloatware and not being able to find an ascetic enough system out there (I used to use Core Linux and now Crux, but even they are too filled)...

Jens Staal said...

The Sabotage Linux project looks promising.

It looks a lot like what stali was aiming for.

Post a Comment