
Linux static linking is dead?

In fact, the -static gcc flag on Linux doesn't really work any more. Let me cite from the GNU libc FAQ:

2.22. Even statically linked programs need some shared libraries which is not acceptable for me. What can I do?

{AJ} NSS (for details just type `info libc "Name Service Switch"') won't work properly without shared libraries. NSS allows using different services (e.g. NIS, files, db, hesiod) by just changing one configuration file (/etc/nsswitch.conf) without relinking any programs. The only disadvantage is that now static libraries need to access shared libraries. This is handled transparently by the GNU C library.

A solution is to configure glibc with --enable-static-nss. In this case you can create a static binary that will use only the services dns and files (change /etc/nsswitch.conf for this). You need to link explicitly against all these services. For example:

gcc -static test-netdb.c -o test-netdb \
  -Wl,--start-group -lc -lnss_files -lnss_dns -lresolv -Wl,--end-group

The problem with this approach is that you've got to link every static program that uses NSS routines with all those libraries.

{UD} In fact, one cannot say anymore that a libc compiled with this option is using NSS. There is no switch anymore. Therefore it is highly recommended not to use --enable-static-nss since this makes the behaviour of the programs on the system inconsistent.

Given that fact, is there any reasonable way now to create a fully functioning static build on Linux, or is static linking completely dead on Linux? I mean a static build which:

Behaves exactly the same way as a dynamic build does (static NSS with inconsistent behaviour is evil!);

Works across reasonable variations of glibc environments and Linux versions;

Does no other replacement C library suit your purpose (dietlibc / uClibc / etc.)?
Do they use NSS? Most likely the behaviour will be inconsistent as well, since I doubt these libraries take NSS into account.
Do you even use any functions that ultimately end up calling out to NSS (e.g. gethostbyname/getpwnam/getgroups/etc.)?
Sure )) This is a client/server application.
Is this still true, or have things changed since 2010?

Ketil

I think this is very annoying, and I think it is arrogant to call a feature "useless" because it has problems dealing with certain use cases. The biggest problem with the glibc approach is that it hard-codes paths to system libraries (gconv as well as nss), and thus it breaks when people try to run a static binary on a Linux distribution different from the one it was built for.

Anyway, you can work around the gconv issue by setting GCONV_PATH to point to the appropriate location; this allowed me to take binaries built on Ubuntu and run them on Red Hat.
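A minimal sketch of that workaround (the directory is hypothetical; point it at wherever the target system, or a copy you ship alongside the binary, keeps the gconv modules):

```shell
# Hypothetical layout: ship glibc's gconv modules next to the binary
# and point GCONV_PATH at them before launching the app.
GCONV_DIR="${GCONV_DIR:-/opt/myapp/gconv}"
export GCONV_PATH="$GCONV_DIR"
echo "GCONV_PATH=$GCONV_PATH"
```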


nh2

Static linking is back on the rise!

Linus Torvalds is in support of static linking, and expressed concern about the amount of dynamic linking in Linux distributions (see also this discussion).

Many (most?) Go programming language executables are statically linked. The increased portability and backward compatibility is one reason for them being popular.


Other programming languages have similar efforts to make static linking really easy, for example:

Haskell (I am working on this effort)

Zig (see here for details)

Configurable Linux distributions / package sets like NixOS / nixpkgs make it possible to link a large fraction of their packages statically (for example, its pkgsStatic package set can provide all kinds of statically linked executables).

Static linking can result in better unused-code elimination at link time, making executables smaller.

libcs like musl make static linking easy and correct.

Some big software industry leaders agree on this. For example, Google is writing a new libc targeted at static linking ("support static non-PIE and static-PIE linking", "we do not intend to invest in at this point [in] dynamic loading and linking support").


Almost 10 years after the original question was asked, this is good news. I personally see the advantage, since I work in a regulated industry where the embedded targets I have are less flexible about libc updates. But I can always provide a new binary of my (fully) statically linked program to devices in the field at any time.
It would be fair to mention disadvantages of static linking (slower vulnerability fixing, bloated package updates, increased link times).
@yugr Increased link times are probably true. I don't think the others are universally true. Static vs dynamic has no impact on the speed of vulnerability fixing; that depends only on who is working on providing the updates. If a distro provides them, they are equally fast, and if a third party provides them, they may be faster or slower than your distro of choice. The size of updates depends on what you are comparing to. A normal distro like Ubuntu may have to transfer fewer bytes with shared objects, but often e.g. Docker images must be upgraded wholesale and are larger than static exes.
"I don't think the others are universally true" - probably, but the same principle may be applied to your answer) I'm looking from the position of an average Linux desktop user (who uses some popular distro and maybe a few apps from third-party developers).
"Static vs dynamic has no impact on the speed of vulnerability fixing ... if a third-party provides them, they may be faster or slower" - I'd argue that a third-party provider is mainly focused on SW functionality and/or performance; security is more of a second-class citizen. And in any case, the third-party provider would have to duplicate the distro's work on tracking security issues (which means fewer resources spent on his product).
Dummy00001

Given that fact, is there any reasonable way now to create a fully functioning static build on Linux, or is static linking completely dead on Linux?

I do not know where to find the historic references, but yes, static linking is dead on GNU systems. (I believe it died during the transition from libc4/libc5 to libc6/glibc 2.x.)

The feature was deemed useless in light of:

Security vulnerabilities. A statically linked application cannot pick up upgrades of libc. If the app was linked on a system containing a library vulnerability, that vulnerability is perpetuated inside the statically linked executable.

Code bloat. If many statically linked applications are run on the same system, standard libraries aren't reused, since every application contains its own copy of everything. (Try du -sh /usr/lib to understand the extent of the problem.)

Try digging through LKML and glibc mailing list archives from 10-15 years ago. I'm pretty sure I saw something related on LKML long ago.


Unfortunately they failed to mention the other side: statically linked binaries take 90% less time to start up and have much lower dirty-page overhead than their dynamically linked counterparts. These days, the eglibc fork of glibc has made static linking at least halfway possible again, but if you want to actually use static linking without gigantic binaries and bugs/issues like NSS, you probably need to use a different libc implementation.
@R: you wrote, "... but if you want to actually use static linking without gigantic binaries and bugs/issues like nss, you probably need to use a different libc implementation". Does anyone know of a libc implementation that provides static access to NSS? I would like static access to getgrgid_r() in an NIS environment. Theoretically, someone could implement getgrgid_r and other similar routines using static access to the underlying NIS routines. Has anyone actually done this?
I think my answer is outdated: I have statically linked moderately sized applications with no problems whatsoever on Ubuntu 14.04. But the reasoning why static linking is bad stays the same. Unless, of course, one gravely needs the bonuses listed by R.. above, and they outweigh the downsides.
@yugr: I have a test program that measures elapsed time from just before execve to first line of main after self-exec. For a trivial program with no shared libraries except libc, static linking takes about 50% less time to self-exec, and the time is in the range of tens or hundreds of microseconds depending on machine speed. Throw in a few shared libs and you can easily make it 90% less. You're comparing O(n) time to O(1) time so no matter how small the per-lib cost you can reach arbitrarily close to 100%, but in practice it only takes a few.
@yugr: Even just mmap overhead is enough to make it significant with lots of libraries. For small libraries mmap time will dominate anyway but for large (esp. C++) libraries, relocation time, including demand-paging in every data/GOT page that needs to be patched up, is the dominant factor.
Smi

Static linking doesn't seem to get much love in the Linux world. Here's my take.

People who do not see the appeal of static linking typically work in the realm of the kernel and lower-level operating system. Many *nix library developers have spent a lifetime dealing with the inevitable issues of trying to link a hundred ever-changing libraries together, a task they do every day. Take a look at autotools if you ever want to know the backflips they are comfortable performing.

But everyone else should not be expected to spend most of their time on this. Static linking will take you a long way towards being buffered from library churn. The developer can upgrade her software's dependencies according to the software's schedule, rather than being forced to do it the moment new library versions appear. This is important for user-facing applications with complex user interfaces that need to control the flux of the many lower-level libraries upon which they inevitably depend. And that's why I will always be a fan of static linking. If you can statically link cross-compiled portable C and C++ code, you have pretty much made the world your oyster, as you can more quickly deliver complex software to a wide range of the world's ever-growing devices.

There's lots to disagree with there, from other perspectives, and it's nice that open source software allows for them all.


It would be fair to mention disadvantages of static linking: immunity to library updates (and thus bug/vulnerability fixes), longer link times and larger executable sizes.
Au contraire, you still do library updates, just when you need them, e.g. when you are nearing a release and it is worthwhile to incorporate the latest security patches; that's less wasted time. Try ninja to speed up link times. It's a pretty rare situation where binary size matters much, perhaps in tiny embedded spaces, but you can strip your OS of shared libs in that case. Again, use cases vary, ymmv, all that.
"you still do library updates, just when you need" - normally the person who really needs the update is the user of your app, who now cannot simply rely on the automatic security updates provided by distro maintainers but has to track whichever library versions are used in your statically linked binary (or rely on you to relink said binary against fixed libraries in a timely fashion, duplicating the work of the distro security team).
"Try ninja to speed up link times" - I'm not sure how Ninja can speed up my linker.
The explosion of docker containers shows a great interest in capturing an entire static environment over and over again at the expense of disk size. But again, as I said, it depends; you may be dealing with a tiny executable that you have to load into a busybox environment or whatnot. At this point we are "arguing on the internet". I'll concede there are times when the points you mention are valid.
Dean Harding

Just because you have to dynamically link to the NSS service doesn't mean you can't statically link to any other library. All that FAQ is saying is that even "statically" linked programs have some dynamically-linked libraries. It's not saying that static linking is "impossible" or that it "doesn't work".


It means that it's not a completely static build. In fact, in most cases it'll require the same version of glibc to be installed to work properly. And why would I need such a static build?
@Dead: statically built executables do not have a dynamic linker and thus cannot load shared libraries. Best reference I could find: en.wikipedia.org/wiki/Static_build . As I wrote below, Linux doesn't support it intentionally.
Late to the game, but consider working on highly rigid systems where, say, the C++ compiler on one system only supports C++98 and code written to C++11 is ported: compiling it on a RHEL 6 system that supports C++0x, and then copying the binary over to the older RHEL 5 system that doesn't. If I could statically link the whole application, it would run on the RHEL 5 system. An example of why static linking is useful. But I cannot, because, for one, there are no options in g++ 4.4.7 to statically link the C++ libraries.
Community

Adding on to the other answers:

Due to the reasons given in the other answers, it's not recommended for most Linux distributions, but there are actually distributions made specifically to run statically linked binaries:

stali

morpheus

starchlinux

bifrost

From stali description:

static linux is based on a hand selected collection of the best tools for each task, each tool being statically linked (including some X clients such as st, surf, dwm, dmenu). It also targets binary size reduction through the avoidance of glibc and other bloated GNU libraries where possible (early experiments show that statically linked binaries are usually smaller than their dynamically linked glibc counterparts!!!). Note, this is pretty much contrary to what Ulrich Drepper reckons about static linking. Due to the side-benefit that statically linked binaries start faster, the distribution also targets performance gains.

Static linking also helps with dependency reduction.

You can read more about it in this question about static vs dynamic linking.