In fact, the -static gcc flag on Linux doesn't work properly nowadays. Let me quote the GNU libc FAQ:
2.22. Even statically linked programs need some shared libraries which is not acceptable for me. What can I do?

{AJ} NSS (for details just type `info libc "Name Service Switch"') won't work properly without shared libraries. NSS allows using different services (e.g. NIS, files, db, hesiod) by just changing one configuration file (/etc/nsswitch.conf) without relinking any programs. The only disadvantage is that now static libraries need to access shared libraries. This is handled transparently by the GNU C library.

A solution is to configure glibc with --enable-static-nss. In this case you can create a static binary that will use only the services dns and files (change /etc/nsswitch.conf for this). You need to link explicitly against all these services. For example:

gcc -static test-netdb.c -o test-netdb \
  -Wl,--start-group -lc -lnss_files -lnss_dns -lresolv -Wl,--end-group

The problem with this approach is that you've got to link every static program that uses NSS routines with all those libraries.

{UD} In fact, one cannot say anymore that a libc compiled with this option is using NSS. There is no switch anymore. Therefore it is highly recommended not to use --enable-static-nss since this makes the behaviour of the programs on the system inconsistent.
Given that, is there any reasonable way today to create a fully functioning static build on Linux, or is static linking completely dead on Linux? I mean a static build which:
Behaves exactly the same way as a dynamic build does (static NSS with its inconsistent behaviour is evil!);
Works on reasonable variations of the glibc environment and across Linux versions.
A minimal sketch of the kind of program that triggers the problem follows.
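For concreteness, here is a minimal sketch (lookup.c is just a placeholder name); any call into the resolver, such as getaddrinfo, pulls NSS in. Building it with gcc -static lookup.c -o lookup on a recent glibc produces a linker warning to the effect that getaddrinfo in statically linked applications still requires at runtime the shared libraries from the glibc version used for linking.

/* lookup.c - resolves a host name via getaddrinfo(), which goes through NSS. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("example.com", "80", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    /* Print the first resolved IPv4 address. */
    char ip[INET_ADDRSTRLEN];
    struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sa->sin_addr, ip, sizeof ip);
    printf("example.com -> %s\n", ip);

    freeaddrinfo(res);
    return 0;
}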
I think this is very annoying, and I think it is arrogant to call a feature "useless" because it has problems dealing with certain use cases. The biggest problem with the glibc approach is that it hard-codes paths to system libraries (gconv as well as nss), and thus it breaks when people try to run a static binary on a Linux distribution different from the one it was built for.
Anyway, you can work around the gconv issue by setting GCONV_PATH to point to the appropriate location; this allowed me to take binaries built on Ubuntu and run them on Red Hat.
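For example (the paths here are placeholders; the idea is to ship the gconv modules next to the binary, or point at wherever they live on the target system):

# Directory containing the gconv-modules file and the converter .so files.
export GCONV_PATH=/opt/myapp/gconv
/opt/myapp/myapp-static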
Static linking is back on the rise!
Linus Torvalds is in support of static linking, and expressed concern about the amount of dynamic linking in Linux distributions (see also this discussion).
Many (most?) Go programming language executables are statically linked. The increased portability and backward compatibility this brings is one reason for their popularity.
Other programming languages have similar efforts to make static linking really easy, for example:
Haskell (I am working on this effort)
Zig (see here for details)
Configurable Linux distributions / package sets like NixOS / nixpkgs make it possible to link a large fraction of their packages statically (for example, its pkgsStatic package set can provide all kinds of statically linked executables).
Static linking can result in better unused-code elimination at link time, making executables smaller.
libcs like musl make static linking easy and correct (a minimal build sketch follows at the end of this answer).
Some big software industry leaders agree on this. For example, Google is writing a new libc targeted at static linking ("support static non-PIE and static-PIE linking", "we do not intend to invest in at this point [in] dynamic loading and linking support").
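To illustrate how simple the musl route mentioned above can be (this assumes musl's musl-gcc compiler wrapper is installed; hello.c is a placeholder), combined with the link-time dead-code elimination flags:

# Fully static build against musl; --gc-sections drops unreferenced sections.
musl-gcc -static -Os -ffunction-sections -fdata-sections -Wl,--gc-sections hello.c -o hello
ldd ./hello    # reports "not a dynamic executable"

The resulting binary has no runtime dependency on the host's libc at all.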
Given that, is there any reasonable way today to create a fully functioning static build on Linux, or is static linking completely dead on Linux?
I do not know where to find the historic references, but yes, static linking is dead on GNU systems. (I believe it died during the transition from libc4/libc5 to libc6/glibc 2.x.)
The feature was deemed useless in light of:
Security vulnerabilities. A statically linked application doesn't even pick up libc upgrades. If the app was linked on a system containing a vulnerable library, that vulnerability is perpetuated inside the statically linked executable.
Code bloat. If many statically linked applications run on the same system, the standard libraries are not reused, since every application carries its own copy of everything. (Try du -sh /usr/lib to understand the extent of the problem.)
Try digging through the LKML and glibc mailing list archives from 10-15 years ago. I'm pretty sure I saw something related on LKML long ago.
eglibc, a fork of glibc, has made static linking at least halfway possible again, but if you want to actually use static linking without gigantic binaries and bugs/issues like NSS, you probably need to use a different libc implementation.
As for the performance side: measuring the time from execve to the first line of main after self-exec, a trivial program with no shared libraries except libc takes about 50% less time to self-exec when statically linked, and the time is in the range of tens or hundreds of microseconds depending on machine speed. Throw in a few shared libs and you can easily make it 90% less. You're comparing O(n) time to O(1) time, so no matter how small the per-lib cost you can get arbitrarily close to 100%, but in practice it only takes a few.

The mmap overhead alone is enough to make it significant with lots of libraries. For small libraries, mmap time will dominate anyway, but for large (esp. C++) libraries, relocation time, including demand-paging in every data/GOT page that needs to be patched up, is the dominant factor.
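A rough sketch of that kind of measurement (not the poster's actual harness, just one way to reproduce it): take a timestamp, re-exec through /proc/self/exe, and take another timestamp at the top of main in the re-exec'd copy. Build it once normally and once with -static and compare the printed numbers.

/* exectime.c - measures the time from execve to the first line of main. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Taken as early in main as possible; only used on the second pass. */
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    if (argc == 2) {
        /* Second pass: argv[1] holds the timestamp taken just before execve. */
        long long before = atoll(argv[1]);
        long long after  = (long long)now.tv_sec * 1000000000LL + now.tv_nsec;
        printf("execve -> main: %lld ns\n", after - before);
        return 0;
    }

    /* First pass: record the time and re-exec ourselves. */
    char buf[32];
    clock_gettime(CLOCK_MONOTONIC, &now);
    snprintf(buf, sizeof buf, "%lld",
             (long long)now.tv_sec * 1000000000LL + now.tv_nsec);
    execl("/proc/self/exe", argv[0], buf, (char *)NULL);
    perror("execl");
    return 1;
}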
Static linking doesn't seem to get much love in the Linux world. Here's my take.
People who do not see the appeal of static linking typically work in the realm of the kernel and the lower levels of the operating system. Many *nix library developers have spent a lifetime dealing with the inevitable issues of trying to link a hundred ever-changing libraries together, a task they do every day. Take a look at autotools if you ever want to know the backflips they are comfortable performing.
But everyone else should not be expected to spend most of their time on this. Static linking will take you a long way towards being buffered from library churn. The developer can upgrade her software's dependencies according to the software's schedule, rather than being forced to do it the moment new library versions appear. This is important for user-facing applications with complex user interfaces that need to control the flux of the many lower-level libraries upon which they inevitably depend. And that's why I will always be a fan of static linking. If you can statically link cross-compiled portable C and C++ code, you have pretty much made the world your oyster, as you can more quickly deliver complex software to a wide range of the world's ever-growing devices.
There's lots to disagree with there, from other perspectives, and it's nice that open source software allows for them all.
Just because you have to dynamically link to the NSS service doesn't mean you can't statically link to any other library. All that FAQ is saying is that even "statically" linked programs have some dynamically-linked libraries. It's not saying that static linking is "impossible" or that it "doesn't work".
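For instance, with the GNU toolchain you can mix both in one link line (libfoo and libbar are placeholders; libc and the NSS modules stay dynamic):

# -Bstatic forces archives for the following -l options; -Bdynamic switches back
# so the default libraries (libc, libgcc_s, ...) are still linked dynamically.
gcc main.o -o app -Wl,-Bstatic -lfoo -lbar -Wl,-Bdynamic

Naming the .a archive directly on the command line has the same effect for a single library.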
Adding to the other answers:
For the reasons given in the other answers it's not recommended on most Linux distributions, but there are actually distributions made specifically to run statically linked binaries:
stali
morpheus
starchlinux
bifrost
From the stali description:
static linux is based on a hand selected collection of the best tools for each task and each tool being statically linked (including some X clients such as st, surf, dwm, dmenu). It also targets binary size reduction through the avoidance of glibc and other bloated GNU libraries where possible (early experiments show that statically linked binaries are usually smaller than their dynamically linked glibc counterparts!!!). Note, this is pretty much contrary to what Ulrich Drepper reckons about static linking. Due to the side-benefit that statically linked binaries start faster, the distribution also targets performance gains.
Static linking also helps with dependency reduction.
You can read more about it in this question about static vs dynamic linking.
With dynamic linking I have no control over when the target systems receive libc updates. But I can always provide a new binary for my (fully) statically linked program to devices in the field at any time.