Shared libraries extend the resource-utilization benefits of sharing code between processes running the same program to processes running different programs, by sharing the libraries common to them.
To be clear: this was a resource optimization. The example I've seen cited often is “why does every program need its own copy of printf()?” But in truth, printf() was not the problem. The problem was the X Window System. The Xlib and Xt libraries were huge. The limiting factor was disk space, not RAM. Installing the window system used about as much disk space as the rest of a typical POSIX-type distro of the late 1980s (e.g. SunOS 3). This in the days when a typical hard drive held a few hundred mebibytes.
The argument can be made that this was a flaw in the design of the X Window System: putting too much window system functionality on the wrong side of the X protocol. However, the Wayland approach seems to double down on this idea, removing everything from the display server except the ability to push bitmaps. This is a radical simplification of the display server, but it moves the rendering complexity back to the application side.

Another important goal expressed was:
Do not require shared libraries. Although we might make the use of shared libraries the default system behavior, we felt we could not require their use or otherwise build fundamental assumptions requiring them into other system components.
Oh how times have changed.
On modern Linux, to run a program (binary) you must have the correct constellation of shared libraries, of compatible versions, installed someplace the runtime linker can find them. Otherwise, no dice.
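A quick way to see the constellation a particular binary requires is to ask the runtime linker's own tooling. A minimal sketch (the binary chosen is just a convenient example):

```shell
# List the shared libraries a binary needs and where the runtime
# linker resolves each one; /bin/ls is just a convenient example.
ldd /bin/ls

# If any entry reports "not found", the program will refuse to start.
```

Each `=>` line in the output is one member of the constellation; remove or break any one of them and the binary no longer runs.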
This means that applications built for Linux are typically bound to a single distro and sometimes very specific versions of that distro.
Although the worst of the dependency hell issues are mitigated by advanced packaging tools, those issues still exist and sometimes cause real problems.
Containers are technologies that allow programs to be deployed as units that include all of their dependencies, including (dynamic) libraries. Those libraries are still dynamic, but they are no longer shared.
I guess in order to re-share the libraries for containerized apps we now have Flatpak. Apps packaged with this technology are built against “runtimes”, which are, essentially, distros of shared libraries. These runtimes effectively duplicate a distribution's set of shared libraries, but in a distro-neutral sort of way.
Now your Flatpak'ed app is tied to a runtime instead of a distro.
To the extent that multiple apps are built against exactly the same (version of a) runtime, libraries can be shared.
Of course, this assumes kernel APIs remain compatible. The Linux kernel people have a pretty hard “don't break userland” philosophy, but even if compatibility is perfect, it is only backward compatibility.
I have experienced issues running Red Hat kernels with Debian user space code.
Both GCC and LLVM now provide extensive link-time optimization (LTO) capabilities.
Giving the LTO mechanism access to the entire program, including library code, should allow maximum benefit from such optimizations.
…packagers are strongly encouraged not to ship static libs unless a compelling reason exists…
No explicit rationale is given for this edict.
Reading between the lines, it seems clear that rather than being used as a resource optimization, shared libraries have taken on a new role that I contend they are not suited for: minimizing the avalanche of package rebuilding when changes are made.
I contend that allowing users of the Fedora distro to create static binaries is in and of itself a compelling reason to provide static libraries.
I argue against *-static packages as a means of tracking what dependent packages need to be rebuilt when a library changes.
As is clearly acknowledged in the guidelines, header-only libraries “act just like static libraries.” But in truth any library *-devel package that includes header files (and that has to be all of them) can use the same techniques as header-only libraries, such as inline functions.
Any code or technique used in a header-only library might just as easily be found in the header files of a library package that also ships a shared object or static library.
Maintaining stable ABIs for libraries requires care and thought and (especially for FLOSS libraries) may be considered a waste of time. Why bother when dependent packages can simply rebuild against new versions?
The typical package dependency is a program (or library) depending on a library in another package. If that other library is packaged as a shared object (*.so), then installing a new version is all you need for dependent packages to start using the new code. Except when it isn't. Such as when the library's ABI changes. Library packagers are supposed to increment the major version number when incompatible ABI changes are made. This is not done automatically: it must be done by the package maintainer, at least for deb and RPM packages. Often the library author is not the package maintainer. You can see how this might not turn out too well.
Linux package managers (those that I'm familiar with) don't distinguish between API and ABI dependencies.
What's broken is this: shared library ABIs are not strongly and automatically correlated with package version numbers, yet package dependencies are expressed only in terms of package version numbers.
See Titus Winters “Live at Head” about 25:00 in or so for some practical information about SemVer.
Changing shared libraries out from under programs and other libraries that depend on them (within the same major version) usually works. In fact, it almost always works.
Usually is not always. And almost always is not always.