There was a time when this debate was bigger. It seems the world has shifted towards architectures and tooling that do not allow dynamic linking, or make it harder. This compromise makes it easier for the maintainers of the tools / languages, but does take away choice from the user / developer. But maybe that’s not important? What are your thoughts?

  • LeberechtReinhold@lemmy.world

    I have yet to find a memory-hungry program whose hunger is caused by its dependencies rather than by its data. And frankly, the disk space taken by all its libraries is minuscule compared to graphical assets.

    You know what really aggravates the issue? When the program doesn’t work because of a dependency. And this happens often, across all OSes; threads about it are a dime a dozen in forums. “Package managers should just fix all the issues”. Until they don’t: wrong versions get uploaded, issues compiling them, environment problems, etc. etc.

    So to me, the efficiency argument for dynamic linking doesn’t really cut it. A bloated program is more efficient than a program that doesn’t work.

    This is not to say that dynamic linking shouldn’t be used. For programs doing any kind of elevation or administration, it’s almost always better from a security perspective. But for general user programs? Static all the way.

    • thirdBreakfast@lemmy.world

      I read an interesting post by Ben Hoyt this morning called The small web is a beautiful thing - it touches a lot on this idea (warning: long read).

      I also always feel a bit uncomfortable having any dependencies at all (left-pad, never forget), but runtime ones? Those I really like to avoid.

      I have Clipper-compiled executables written for clients 25 years ago that I can still run in a DOS VM in an emergency. They used a couple of libraries written in C for fast indexing etc., but all statically linked.

      But the Visual Basic/Access apps from 20 years ago with their dependencies on a large number of DLLs? Creating the environment would be an overwhelming challenge.

    • uis@lemmy.world

      But for general user programs? Static all the way.

      Does it include browsers?

  • Jamie@jamie.moe

    The user never had much choice to begin with. If I write a program using version 1.2.3 of a library, then my application is going to need version 1.2.3 installed. But how the user gets 1.2.3 depends on their system, and in some cases they might be entirely unable to, unless they grab a flatpak or appimage. I suppose it limits the ability to write shims over those libraries if you want to customize something at that level, but that’s a niche use-case that many people aren’t going to need.

    With a statically linked application, you can largely just ship it and it will just work. You don’t need to fuss about the user installing all the dependencies at the system level, and your application is prone to fewer user problems as a result.

    • o11c@programming.dev

      Only if the library is completely shitty and breaks between minor versions.

      If the library is that bad, it’s a strong sign you should avoid it entirely since it can’t be relied on to do its job.

    • uis@lemmy.world

      Not to disappoint you, but when I installed an HL1 build from 2007, I had lots of library versions that did not exist back in 2007, yet it works just excellently.

  • ono@lemmy.ca

    Shared libraries save RAM.

    Dynamic linking allows working around problematic libraries, or even adding functionality, if the app developer can’t or won’t.

    Static linking makes sense sometimes, but not all the time.

    • robinm@programming.dev

      Shared libraries save RAM.

      Citation needed :) I was surprised, but I read (sorry, I can’t find the source again) that most dynamic libraries are loaded by a single process, and usually by very few. This makes the RAM gain much less obvious. In addition, static linking allows inlining, which itself allows aggressive constant propagation and dead code elimination, on top of LTO. All of this decreases binary size, sometimes in non-negligible ways.
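
      A toy sketch of that effect (the file split and numbers are invented for illustration): built statically with -flto, the compiler can see through the library call, fold it to a constant, and drop the unused helper entirely; across a shared-object boundary it must keep the call and the exported symbols, since they may be interposed at load time.

      ```c
      /* lib.c - a tiny "library" (hypothetical) */
      int square(int x) { return x * x; }
      int unused_helper(int x) { return x + 42; }   /* dead code: dropped by LTO */

      /* main.c - the application */
      int square(int x);
      int main(void) { return square(7); }          /* folds to "return 49;" */

      /* build: gcc -O2 -flto lib.c main.c -o demo */
      ```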

      • ono@lemmy.ca

        I was surprised, but I read (sorry, I can’t find the source again) that most dynamic libraries are loaded by a single process, and usually by very few.

        That is easily disproved on my system by cat /proc/*/maps.
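
        For anyone who wants to reproduce that check without shell plumbing, here is a minimal C sketch (Linux-only; “libc.so” as the target is just an example) that scans /proc/PID/maps the way the glob above does and counts how many live processes map a given shared object:

        ```c
        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            const char *needle = "libc.so";      /* shared object to look for */
            DIR *proc = opendir("/proc");
            if (!proc) { perror("/proc"); return 1; }

            int count = 0;
            struct dirent *entry;
            while ((entry = readdir(proc)) != NULL) {
                /* only the all-numeric entries in /proc are processes */
                if (!isdigit((unsigned char)entry->d_name[0]))
                    continue;
                char path[64], line[512];
                snprintf(path, sizeof path, "/proc/%s/maps", entry->d_name);
                FILE *f = fopen(path, "r");
                if (!f) continue;                /* process exited, or no permission */
                while (fgets(line, sizeof line, f)) {
                    if (strstr(line, needle)) { count++; break; }
                }
                fclose(f);
            }
            closedir(proc);
            printf("%d processes currently map %s\n", count, needle);
            return 0;
        }
        ```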

          • ono@lemmy.ca

            Ah, yes, I think I read Drew’s post a few years ago. The message I take away from it is not that dynamic linking is without benefits, but merely that static linking isn’t the end of the world (on systems like his).

      • ck_@discuss.tchncs.de

        In practical terms, often yes. It can be easier to just LD_PRELOAD something than to maintain your own patched version of an RPM / APT package, for example.
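
        As a sketch of how lightweight that workaround can be (the wrapped function and the log format are made up for illustration), here is a one-file LD_PRELOAD shim that intercepts fopen() in an unmodified, dynamically linked program:

        ```c
        /* shim.c - build: gcc -shared -fPIC shim.c -o shim.so -ldl
         * run:   LD_PRELOAD=./shim.so some_program                 */
        #define _GNU_SOURCE            /* for RTLD_NEXT */
        #include <dlfcn.h>
        #include <stdio.h>

        FILE *fopen(const char *path, const char *mode) {
            /* look up the "real" fopen in the next DSO in search order */
            FILE *(*real_fopen)(const char *, const char *) =
                (FILE *(*)(const char *, const char *))dlsym(RTLD_NEXT, "fopen");
            fprintf(stderr, "[shim] fopen(%s, %s)\n", path, mode);
            return real_fopen(path, mode);
        }
        ```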

  • colonial@lemmy.world

    Personally, I prefer static linking. There’s just something appealing about an all-in-one binary.

    It’s also important to note that applications are rarely 100% one or the other. Full static linking is really only possible in the Linux (and BSD?) worlds thanks to syscall stability - on macOS and Windows, dynamically linking the local libc is the only good way to talk to the kernel.

    (There have been some attempts made to avoid this. Most famously, Go attempted to bypass linking libc on macOS in favor of raw syscalls… only to discover that when the kernel devs say “unstable,” they mean it.)

    • ck_@discuss.tchncs.de

      There’s just something appealing about an all-in-one binary.

      Certainly agree. I remember the days when you could just copy a binary from one computer to another and it would just work™. Good times…

  • gatelike@feddit.de

    Disk is cheap, and it’s easier to test exact versions of dependencies. As a user, I’d rather not have all my non-OS stuff mixed up.

    • Cyclohexane@lemmy.mlOP

      From my understanding, unless a shared library is used by only one process at a time, static linking can increase memory usage by duplicating that library’s code segment in every process that uses it. So it is not only about disk space.

      But I suppose for an increasing number of modern applications, data and heap are much larger than that (though I am not particularly a fan …)

  • o11c@programming.dev

    Some languages don’t even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, whole binary - looking at you, Rust!) at once. Do note that “static linking” has shades of meaning: it applies to “link multiple objects into a binary”, but often that is excluded from the discussion in favor of just “use a .a instead of a .so”.

    Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of a slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don’t care about security, so I’m talking about annoyance instead. Some realistic numbers: dynamic linking might be “rebuild in 0.3 seconds” vs static linking “rebuild in 3 seconds” vs no linking “rebuild in 30 seconds”.

    Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically-linked versions of bash 3.2 anymore on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.

    Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there’s nothing wrong with RPATH if you’re not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).

    Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a “single source of truth”. If you actually read the man pages for the tools you’re using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.

    Again, keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.

    The big question these days should not be “static or dynamic linking” but “dynamic linking with or without semantic interposition?” Apple’s broken “two level namespaces” is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.
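
    For reference, a small sketch of what -fvisibility=hidden changes (the symbol names are invented for the example): everything defaults to hidden, so only symbols explicitly marked for export remain visible to other DSOs, and the rest become immune to semantic interposition and free to inline:

    ```c
    /* vis.c - build: gcc -shared -fPIC -fvisibility=hidden vis.c -o libvis.so
     * verify: nm -D --defined-only libvis.so   (only lib_entry is exported) */
    #define EXPORT __attribute__((visibility("default")))

    /* hidden: not exported from the DSO, cannot be interposed, may be inlined */
    int helper(int x) {
        return x * 2 + 1;
    }

    /* the one deliberately public entry point */
    EXPORT int lib_entry(int x) {
        return helper(x);
    }
    ```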

    • ck_@discuss.tchncs.de

      Even if you do use static linking, you should NEVER statically link to libc

      This is definitely not sound. You should never statically link against glibc, as glibc does some very unsound things under the hood, like loading NSS modules. Statically linking against a non-bloatware libc is fine in most cases, as kernel interfaces rarely break; or rather, kernel devs go to extreme lengths not to break user space, and they do a fantastic job too.

      • o11c@programming.dev

        The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had worldbreaking standard-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.

        There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.

        • ck_@discuss.tchncs.de

          The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had worldbreaking standard-violating bugs marked WONTFIX.

          Can you share a link? I’d be genuinely interested.

          • o11c@programming.dev

            DNS-over-TCP (which is required by the standard for all replies over 512 bytes) was unsupported prior to MUSL 1.2.4, released in May 2023. Work had begun in 2022 so I guess it wasn’t EWONTFIX at that point.

            Here’s a link showing the MUSL author leaning toward still rejecting the standard-mandated feature as recently as 2020: https://www.openwall.com/lists/musl/2020/04/17/7 (“not to do fallback”)

            Complaints that the differences are just about “bug-for-bug compatibility” are highly misguided when what’s missing is useful features, let alone standard-mandated ones (e.g. the whole complex math library is still missing!)

        • ck_@discuss.tchncs.de

          I would not agree with the “only serious attempt” part. The problem that most other libcs are not drop-in replacements has little to do with standards compliance and a lot to do with the fact that software is so glued to glibc behavior that you would have to be bug-for-bug compatible to achieve that goal, which imo is not only unrealistic, it’s also very undesirable.

    • colonial@lemmy.world

      NEVER statically link to libc, and probably not to libstdc++ either.

      This is really only true for glibc (because its design doesn’t play nice with static linking) and whatever macOS/Windows have (no stable kernel interface, which Go famously found out the hard way.)

      Granted, most of the time those are what you’re using, but there are plenty of cases where statically linking to MUSL libc makes your life a lot easier (Alpine containers, distributing cross-distro binaries).

  • CasualTee@beehaw.org

    The main problem is that dynamic linking is hard. It is not just that it is easier for the maintainers of the languages to ignore it; doing so removes an entire class of problems.

    Dynamic linking does not even reliably work with C++, an “old” language with decades of tooling and experience on the matter. You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library. So you have to wait for a crash to be certain you got it wrong. Unless you control the compilation of your dependencies, it’s fairly hard to be certain you won’t encounter dynamic-linking-related issues. At which point you realize that, if the license allows it, you’re better off statically linking everything, including the C++ library itself: it makes things much more predictable, you’re not forcing an additional dependency on your users, and most UB is now gone (especially the one about raising exceptions across DSO boundaries, which can happen behind your back unless you control the compilation of all your dependencies…).

    That’s especially true if you are releasing a library whose runtime you do not know: it might be dynamically loaded via dlopen by a C++ binary that will load its own C++ runtime library first, but some of your users are on a version that is stuck on C++14 while your codebase is in C++23. This can be solved by playing with LD_LIBRARY_PATH, but the application is already making use of it to load the C++ runtime it ships with instead of the one provided by the system (which only provides a C++11 runtime), and it completely ignores the initial state of the environment variable (how could it do otherwise? It would have to guess that the libstdc++ on that path is a newer version, and not the older one provided by the system). Now imagine the same issue with your own transitive dependencies on top of that: it’s a nightmare.

    So dynamic linking never really worked, except maybe for C, when you expect a single level of dependencies, all provided by the system. And even then, that’s mostly thanks to C’s simpler ABI and runtime.

    So I expect that is the main reason newer languages do not bother with dynamic linking: it introduces way too many issues. Look at your average Rust program and how many versions of the same dependency it loads, transitively. How would you solve that problem so as to be able to load different versions when it matters, while first and foremost trying to load only one when possible? How would you be able to make the right call? By using semver? If nobody ever made a mistake, why not; but in reality you would be required to provide escape hatches that, much like LD_LIBRARY_PATH and LD_PRELOAD, will be misused. And by then, you have only “solved” the simplest problem.

    Nowadays, based on how applications are delivered on Windows and OSX, and with the advent of docker, flatpak/snap and appimage, I do not see a way back to dynamic linking anytime soon. It’s just too complicated a problem, especially as the number of dependencies grows.

    • lysdexic@programming.dev

      The main problem is that dynamic linking is hard.

      That is not a problem. That is a challenge for those who develop implementations, but it is hardly a problem. Doing hard things is the job description of any engineer.

      Dynamic linking does not even reliably work with C++, an “old” language with decades of tooling and experience on the matter.

      This is not true at all. Basically all major operating systems rely on dynamic linking, and all of them support C++ extensively. If I recall correctly, macOS even supports multiple types of dynamic linking. On Windows, DLLs are used extensively by system and userland applications. There are no problems other than versioning and version conflicts, and even that is a solved problem.

      You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library.

      This statement makes no sense at all. Undefined behavior is just behavior that the C++ standard intentionally did not impose restrictions upon, leaving the behavior without a definition. Implementations can and do fill in the blanks.

      ABI compatibility is also a silly thing to bring up in terms of dynamic linking because it also breaks for static linking.

      So dynamic linking never really worked,

      This statement is patently and blatantly false. There is no major operating system in use, not a single one, where dynamic linking is not or was not used extensively. This has been the case for decades.

      • robinm@programming.dev

        I think you don’t understand what @CasualTee said. Of course dynamic linking works, but only when properly used. And in practice, dynamic linking is a few orders of magnitude more complex to use than static linking. Of course you still have ABI issues when you statically link pre-compiled libraries, but in a statically linked workflow you are usually building the libraries yourself, removing all ABI issues. Of course, if a library uses a global and you statically link it twice (with 2 different versions) you will have an issue, but at least you can easily check that a single version is linked.

        There are no problems other than versioning and version conflicts, and even that is a solved problem.

        If it were solved, “DLL hell” wouldn’t be a common expression and docker would never have been invented.

        You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library.

        This statement makes no sense at all. Undefined behavior is just behavior that the C++ standard intentionally did not impose restrictions upon, leaving the behavior without a definition. Implementations can and do fill in the blanks.

        @CasualTee was talking specifically about UB related to dynamic linking, which simply does not exist when statically linking.

        Yes, dynamic linking works in theory, but in practice it’s hell to make it work properly. And what advantage does it have compared to static linking?

        • Less RAM usage? That’s not even guaranteed, because static linking allows aggressive inlining, constant propagation, LTO and other fun optimisations.
        • Easier dependency upgrades? That’s mostly true for C, assuming you have perfect backward ABI compatibility. And nothing proves that your binary is really compatible with newer versions of its libraries. And static dependency upgrades are an issue only because most Linux distributions don’t have a workflow in which updating a dependency triggers the rebuild of all dependent binaries. If they did, it would just be a question of download speed. Given the popularity of tools like docker, which effectively transforms dynamic linking into the equivalent of static linking (since all dependencies’ versions are known), I would say that a lot of people prefer the comfort of static linking.

        To sum up, are all the complications specifically introduced by dynamic linking, compared to static linking, worth it for a non-guaranteed gain in RAM, a change in the tooling of Linux maintainers, and some extra download time?

    • colonial@lemmy.world

      Nice link - it’s good to see some hard data when most of the discussion around this is based on anecdotes and technical trivia.

    • robinm@programming.dev

      Thank you so much. I read this when it was written, and then totally forgot where I had read that information.

    • o11c@programming.dev

      That’s misleading though, since it only cares about one side, and ignores e.g. the much faster development speed that dynamic linking can provide.

      • robinm@programming.dev

        Nothing prevents you from using dynamic linking when developing and static linking with aggressive LTO for public releases.

        • o11c@programming.dev

          True, but successfully doing dynamically-linked old-distro-test-environment deployments gets rid of the real reason people use static linking.

  • Kangie@lemmy.srcfiles.zip

    Dynamically linked all the way; you only have to update one thing (mostly) to fix a vulnerability in a dependency, not rebuild every package.

  • Synthead@lemmy.world

    It seems the world has shifted towards architectures and tooling that do not allow dynamic linking, or make it harder.

    In what context? On Linux, dynamic linking has always been a steady thing.

    • ck_@discuss.tchncs.de

      We could argue semantics here (I don’t really want to), but tools like Docker / containers, Flatpak, Nix, etc. essentially use a sort of soft static link, in that the software is compiled dynamically but the shared libraries are not actually shared at all beyond the boundary of the defining scope.

      So while it’s semantically true that dynamic libraries are still used, the execution environments are becoming increasingly static, defeating much of the point of shared libraries.

      • uis@lemmy.world

        but tools like Docker / containers, Flatpak, Nix, etc. essentially use a sort of soft static link, in that the software is compiled dynamically but the shared libraries are not actually shared at all beyond the boundary of the defining scope.

        This garbage practice is imported from Windows.

    • ck_@discuss.tchncs.de

      On Linux, dynamic linking has always been a steady thing.

      Hot take: this is only still the case because the GNU libc cannot easily be statically linked.

  • 0x0@programming.dev

    Disk space and RAM availability have increased a lot in the last decade, which has allowed the rise of the lazy programmer, who’ll code not caring (or, increasingly, not knowing) about these things. Bloat is king now.

    Dynamic linking lets you save disk space and memory by ensuring all programs use the single version of a library lying around, so less testing. You’re delegating the version tracking to distro package maintainers.

    You can use the dl* family to better control what you use, and if the dependency is FLOSS, the world’s your oyster.
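
    A minimal sketch of that dl* approach (zlib is used purely as an example of a commonly installed library): load the dependency at runtime, probe for a symbol, and degrade gracefully if it is absent.

    ```c
    /* build: gcc main.c -o main -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *lib = dlopen("libz.so.1", RTLD_NOW);   /* resolve everything now */
        if (!lib) {
            fprintf(stderr, "optional dependency missing: %s\n", dlerror());
            return 1;
        }
        /* zlibVersion() returns the runtime zlib version string */
        const char *(*zlib_version)(void) =
            (const char *(*)(void))dlsym(lib, "zlibVersion");
        if (zlib_version)
            printf("loaded zlib %s at runtime\n", zlib_version());
        dlclose(lib);
        return 0;
    }
    ```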

    Static linking can make sense if you’re developing portable code for a wide variety of OSs and/or architectures, or if your dependencies are small and/or not that common or whatever.

    This, of course, is my take on the matter. YMMV.

    • unique_hemp@discuss.tchncs.de

      Except with dynamic linking there is essentially an infinite amount of integration testing to do. Libraries change behaviour even when they shouldn’t and cause bugs all the time, so testing everything packaged together once is overall much less work.

      • 0x0@programming.dev

        Which is why libraries are versioned. The same version can be compiled differently across OSes, yes, but again, unless it’s an obscure closed library, in my experience dependencies tend to be stable. Then again, all the dependencies I deal with are open source, so I can always recompile them if need be.

        More work? Maybe. Also more control and a more efficient app. Anyway, I’m paid to work.

        • unique_hemp@discuss.tchncs.de

          More control? If you’re speaking from the app developer’s perspective, dynamic linking very much gives you less control of what is actually executed in the end.

          • o11c@programming.dev

            The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don’t.

    • uis@lemmy.world

      Static linking can make sense if you’re developing portable code for a wide variety of OSs

      I doubt any other OS supports Linux syscalls.

  • uis@lemmy.world

    You can statically link half a gig of Qt5 into every single application (half a gig for the calendar, half a gig for the file manager, etc.) or keep them normal-sized. Also, when there’s a new bug in openssl, it is not your headache to monitor vuln announcements.

    This compromise makes it easier for the maintainers of the tools / languages

    What do you mean? Also, how would you implement plug-ins in a language that explicitly forbids dynamic loading, assuming such a language exists?

  • Johannes@programming.dev

    Depending on which is more convenient and whether your dependencies are security-critical, you can do both in the same program. :D

    • Cyclohexane@lemmy.mlOP

      The main issue I was targeting was how modern languages do not support dynamic linking, or at least do not support it well, hence sort of taking away the choice. The choice is still there in C, from my understanding, but it is very difficult in Rust, for example.

      • Johannes@programming.dev

        Yeah, you can dynamically link in Rust, but it’s a pain because you have to use the C ABI (since Rust’s ABI isn’t stable), and you miss out on exporting fancier types.

        • robinm@programming.dev

          Just a remark: C++ has exactly the same issues. In practice, both clang and gcc have good ABI stability, but not perfect, and not between each other. But in any case, templates (and global mutable statics, for most use cases) don’t work through FFI.