(Mis)adventures on Debian ((old)stable|testing|aghmyballs)
Posted on 19-04-15, 08:49 am
Post: #35 of 35
Since: 11-01-18

Just so we're clear, libc (or glibc) is the C standard library, where printf lives.
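A trivial example of what that means in practice (hello.c is just a made-up name for this sketch):

/* hello.c - printf is declared in stdio.h and implemented by libc */
#include <stdio.h>

int main(void) {
    printf("hello from libc\n");  /* this call gets resolved against libc */
    return 0;
}

/* Linked the normal way, the call goes through libc.so at runtime; built
   with something like gcc -static hello.c, the needed libc code gets
   copied into the binary instead. */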

I'm with Screwtape: monkey-patching isn't a viable solution if you have source access.

Heck, what started this debate was an issue with the update process, not with dynamic linking.
Posted on 19-04-15, 02:44 pm
Post: #208 of 216
Since: 01-26-19

Posted by Screwtape

If I understand your proposal, instead of fixing a problem by changing source-code and recompiling, you want problems fixed by reverse-engineering the binary and hex-editing out the problem parts. And instead of fixing the problem once, you want the process repeated for every single binary that (directly or indirectly) uses the library in question, a process that probably can't be automated if the binaries were built with an optimising compiler of any complexity. I'm not sure what you expect to happen with second- or third-party binaries; would customers have to send them in to be patched? Would customers need to have a security engineer on-staff to do the patching? Also, if a binary had *two* problems, would the second patch require the first patch to be present, or would you expect every possible combination of patches to be available?


No, no, no, absolutely not. That would indeed be completely insane.

Bugs in libc would be fixed by editing the source and recompiling the affected applications. The diffs would be generated programmatically, for instance with debdelta. That's just to cut down on download sizes, and not an integral part of the package update process or anything like that.

Bugs in the software itself would be fixed by updating the software as usual, which would also imply a libc update. The ideal would of course be that bug fixes are backported as far as possible, to avoid the situation in which a long-standing security flaw forces you to update from v1.2 to v11.7, but this would take a lot of effort and wouldn't be worth it unless you're doing commercial support. Probably, the appropriate behavior would be to either accept that you have to force updates in case of critical security flaws, or to at least provide versions of the affected old packages built with the appropriate -fsanitize flags or similar.
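To be concrete about the -fsanitize idea: building an old package with, say, -fsanitize=address makes this whole bug class abort loudly at runtime instead of being silently exploitable. A sketch with made-up code:

/* overflow.c - the classic bug class in question. Compiled with
   gcc -g -fsanitize=address overflow.c, AddressSanitizer aborts the
   program and prints a report the moment the out-of-bounds write happens. */
#include <string.h>

int main(void) {
    char buf[8];
    strcpy(buf, "definitely more than eight bytes");  /* stack buffer overflow */
    return 0;
}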

For proprietary software, it's true that if it went unmaintained and the last version included a libc with some exploitable buffer overflow somewhere, then it would be pretty bad. But that would be an equally big problem even if the rest of the system were to use dynamic linking, so not much changes there.

Statically linked distros already do exist, so I wouldn't think it's that bizarre a suggestion.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-15, 05:48 pm
Post: #33 of 34
Since: 11-24-18

Your solution brings up two ugly problems just from scratching the surface:

1. A new libc would literally require the entire system to be recompiled. Ask any seasoned Gentooer how much time that takes. The same goes for any library that is used more or less ubiquitously, like GTK3, Xorg or libssl.

I do not think you realise how much time it takes to recompile every single application on your system (hint: try downloading Chromium or Firefox and compiling that alone - and that's with mostly dynamic linking). There is a reason source-based distros never gained much traction, not even Gentoo.

2. Your solution requires access to source code, which you cannot always assume you have. After all, it's why the GPL was invented in the first place.

And let's not forget that even Windows has a few pieces dynamically linked, like the win32 and DirectX stuff.

But by all means, you do you - simply run one of these static distros for yourself if you think they are the answer. :)
Posted on 19-04-15, 07:56 pm
Post: #209 of 216
Since: 01-26-19

Posted by wertigon
Your solution brings up two ugly problems just from scratching the surface:

1. A new libc would literally require the entire system to be recompiled. Ask any seasoned Gentooer how much time that takes. The same goes for any library that is used more or less ubiquitously, like GTK3, Xorg or libssl.

I do not think you realise how much time it takes to recompile every single application on your system (hint: try downloading Chromium or Firefox and compiling that alone - and that's with mostly dynamic linking). There is a reason source-based distros never gained much traction, not even Gentoo.

2. Your solution requires access to source code, which you cannot always assume you have. After all, it's why the GPL was invented in the first place.

And let's not forget that even Windows has a few pieces dynamically linked, like the win32 and DirectX stuff.

But by all means, you do you - simply run one of these static distros for yourself if you think they are the answer. :)

No, this wouldn't be source-based. It would be done by whatever party is usually responsible for builds, i.e. distro maintainers. In such contexts, build times aren't that big of a concern, I'd presume.

A "new libc" should also be a very rare event, because it would only ever be force-updated globally in the case of a severe, Heartbleed-esque vulnerability. "They sped up strcpy by 0.2%" wouldn't be a valid reason to recompile every single application. Otherwise, regular application updates would simply carry libc updates along with them (confined to the specific application being updated), without the end user having to worry about, or even know, what version of which libc, if any, it uses behind the scenes.

For proprietary software, it's true that you run into a whole host of issues. But that's true anywhere. The easiest way is probably to handle it like Steam: each release ships with its own copy of dependencies that it's known to work with, and those dependencies update with the program, not with the OS. You could even integrate them into the binary, license permitting, to create some "effectively static" hybrid.

Windows is interfaced with through API calls into its system DLLs. On Linux, correct me if I'm wrong, the kernel ABI is sufficient. It's true that some libraries are too unwieldy to be statically linked, but presumably those would also be the libraries with stable APIs. One would hope.
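Roughly what I mean by that: on Linux the stable boundary is the kernel's syscall interface, so a statically linked binary still has everything it needs to talk to the system. A minimal sketch, assuming glibc or musl for the syscall() wrapper:

/* raw_write.c - writes to stdout through the raw syscall interface.
   Linked statically (e.g. gcc -static raw_write.c), nothing here depends
   on any shared library being present at runtime. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    static const char msg[] = "hello via the kernel ABI\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);  /* write(2) on fd 1, stdout */
    return 0;
}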

It ought to be mentioned that the size of a static executable is far less than (size of libraries) + (size of program), because it can throw away portions of libraries it doesn't need. For instance, on my machine, libsqlite3.so.0.8.6 is 1.03 MiB and the shell (compiled with gcc shell.c -lsqlite3 -Os and stripped) is 94.96 KiB. But a statically linked shell built with musl is only 499.25 KiB, less than half of expected, and this despite using presumably almost every facet of the library. The size should scale about linearly with the share of the library used, so if only a tiny smidgen of the library gets used then that should be reflected in the binary size.
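The mechanism, roughly: with a static archive the linker only pulls in the members that resolve symbols you actually reference, and with -ffunction-sections plus --gc-sections it can drop unused functions at even finer granularity. A toy sketch, all file names made up:

/* tiny.c - stand-in for a "library" with one used and one unused function */
int used(int x)   { return x * 2; }
int unused(int x) { return x * 3; }   /* never referenced by the program */

/* main.c */
int used(int x);
int main(void) { return used(21); }

/* Build sketch:
     gcc -c -ffunction-sections tiny.c
     ar rcs libtiny.a tiny.o
     gcc main.c libtiny.a -Wl,--gc-sections -o main
   The section holding unused() gets discarded, so the binary only grows
   with the parts of the library that are actually used. */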

You're right that nobody in their right mind would want to run a static distro as they are now for daily use. They're only good for some niche uses, like servers. This is just a pipe dream of how things ought to be one day in the future - it has absolutely nothing to do with reality. As much as I hate to admit it, "don't worry man, computers are cheap" is a perfectly fine approach to nearly everything.

relevant xkcd

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-16, 10:51 am
Post: #34 of 34
Since: 11-24-18

Posted by sureanem

No, this wouldn't be source-based. It would be done by whatever party is usually responsible for builds, i.e. distro maintainers. In such contexts, build times aren't that big of a concern, I'd presume.


This sounds like a great setup until you realise you still need to download an update for every single package that gets recompiled, versus downloading a single package. Bandwidth isn't exactly cheap, and I think the static approach would lose out even if that single package were 10x larger.

Posted by sureanem

The easiest way is probably to handle it like Steam: each release ships with its own copy of dependencies that it's known to work with, and those dependencies update with the program, not with the OS.


Yay for running a Firefox from 2010, because that really sounds secure!

This kind of packaging makes sense for a very narrow set of programs, namely, programs you want to run which are not actively maintained anymore. Incidentally I consider games to be the only valid use case for this, but only for non-open source ones. FOSS game engines can always be ported to a newer API; proprietary game packs for these engines would not be impacted.


Posted by sureanem

It ought to be mentioned that the size of a static executable is far less than (size of libraries) + (size of program), because it can throw away portions of libraries it doesn't need.


Which is a moot point for a general-purpose distro, which has hundreds of different programs that all share the same dynamic libraries.

Let us assume three programs take up a, b and c space, that all of them use the same library of (dynamic) size L, and that the portions of the library they use take up x, y and z space respectively.

Size of statically linked: (a + x) + (b + y) + (c + z) = a + b + c + x + y + z
Size of dynamically linked: a + b + c + L

Since x, y and z are each portions of L, there is quite a big chance that x + y + z > L, rendering any space savings moot. So statically linked programs will in general take up more space, simply because they use space less efficiently.
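Plugging in some made-up numbers to make that concrete:

/* sizes.c - purely illustrative figures: three programs that each use
   60% of a 1.0 MiB shared library */
#include <stdio.h>

int main(void) {
    double L = 1.0;                    /* MiB, size of the shared library        */
    double x = 0.6, y = 0.6, z = 0.6;  /* MiB of library code each program needs */
    printf("library overhead, static:  %.1f MiB\n", x + y + z);  /* 1.8 MiB */
    printf("library overhead, dynamic: %.1f MiB\n", L);          /* 1.0 MiB */
    return 0;
}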

So, to summarize: static linking is great for the narrow case of a single program that will never be maintained again. It is, however, pretty shoddy for everything else. :)
Posted on 19-04-16, 12:34 pm (revision 1)
Post: #210 of 216
Since: 01-26-19

Posted by wertigon
This sounds like a great setup until you realise you still need to download an update for every single package that gets recompiled, versus downloading a single package. Bandwidth isn't exactly cheap, and I think the static approach would lose out even if that single package were 10x larger.
Posted by wertigon
Which is a moot point for a general-purpose distro, which has hundreds of different programs that all share the same dynamic libraries.

Let us assume three programs take up a, b and c space, that all of them use the same library of (dynamic) size L, and that the portions of the library they use take up x, y and z space respectively.

Size of statically linked: (a + x) + (b + y) + (c + z) = a + b + c + x + y + z
Size of dynamically linked: a + b + c + L

Since x, y and z are each portions of L, there is quite a big chance that x + y + z > L, rendering any space savings moot. So statically linked programs will in general take up more space, simply because they use space less efficiently.

So, to summarize: static linking is great for the narrow case of a single program that will never be maintained again. It is, however, pretty shoddy for everything else. :)

Yeah, but compression would by and large handle this. If a libc vulnerability requires 100 packages to be updated, then the delta patches would quite likely have a fair amount of redundancy between them. Also, not everything you download with apt is a binary. On my machine, only about 8.5% of / by size is marked executable. My / is about 8.5 GiB, so that's about 700 MiB of binaries. Of these, around 13% (90 MiB) are .so files. Say statically linked binaries are 2x larger. That gives around 610 MiB of dynamic binaries, or 1220 MiB static. Binaries compress to about 25% of their original size, so that would be roughly 300 MiB downloaded in the absolute worst case, and that's assuming no delta patches at all. I don't know just how good those are, but 300 MiB really isn't that much anyway. When was the last time glibc had a severe security vulnerability that required you to update?
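The same back-of-the-envelope figures, spelled out (nothing here is measured; it's all the rough estimates from above):

/* estimate.c - rough worst-case download size for a full static rebuild */
#include <stdio.h>

int main(void) {
    double executables = 700.0;                       /* MiB marked executable on /   */
    double shared_libs = 90.0;                        /* MiB of that is .so files     */
    double dynamic_bins = executables - shared_libs;  /* ~610 MiB                     */
    double static_bins  = dynamic_bins * 2.0;         /* assume 2x size when static   */
    double worst_case   = static_bins * 0.25;         /* ~25% left after compression  */
    printf("static binaries: %.0f MiB, worst-case download: %.0f MiB\n",
           static_bins, worst_case);                  /* 1220 MiB and 305 MiB         */
    return 0;
}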

I don't think bandwidth costs are that high anyway. It would be trivial to switch to BitTorrent or something and make them effectively zero. You'd still need to seed the packages initially, and that would be, say, a few dozen times more data with per-subarchitecture builds, but I don't think that initial seed bandwidth is a very big expense.

If bandwidth were that expensive, then why are Debian binaries (and every other distro's, too) so extremely bloated?
Posted by wertigon

Yay for running a Firefox from 2010, because that really sounds secure!

This kind of packaging makes sense for a very narrow set of programs, namely, programs you want to run which are not actively maintained anymore. Incidentally I consider games to be the only valid use case for this, but only for non-open source ones. FOSS game engines can always be ported to a newer API; proprietary game packs for these engines would not be impacted.


But where do you draw the line? Ultimately, as long as there aren't any security issues, there's no point in updating. When I installed Ubuntu 10.10 on my laptop once upon a time, it ran well. If I tried to install whatever the latest version of Ubuntu is now, it would be slow as molasses. Almost every single time I update, the message ends by telling me that X more kilobytes of disk space will be used. For what, exactly? I don't notice any gains from it, and I don't think they've made great new strides in security with each update they release.

You would still be able to update, but it would be an entirely voluntary process. If you feel your 2010 Firefox (which would likely be far less bloated) still works fine, there would be nothing forcing you to change. The maintenance burden would be much lower, and there would be less unexpected breakage from updates going haywire on the system.

What does 'maintained' really mean? If we have some insane maintainer who starts bundling malware with his software, of course he won't be a maintainer for very long. But what if he starts including unpleasant features, requiring hundreds of megabytes of dependencies, and making the software altogether slower? Clearly, it should be up to us to judge whether we want to install that or not. As it is now, due to the extreme interdependency of the parts, one doesn't really have a choice but to update.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-16, 03:53 pm (revision 1)
Post: #36 of 40
Since: 10-31-18

My (semi-maintained) vgmsplit program depends on libgme (mpyne, not kode54, but I could switch it if I felt like it).

The older release of "system libgme" present in Debian/Ubuntu has a highly inaccurate YM2612 emulator (which I discovered the hard way after downloading the updated version of libgme, compiling vgmsplit with its headers, and running it only to get inaccurate audio out).

My chosen solution (which doesn't require every user to munge around and replace/override their system libraries) was to statically compile and link in libgme. (Sadly this is uncommon and difficult to achieve in Linux.)
Posted on 19-04-16, 04:04 pm
Post: #212 of 216
Since: 01-26-19

Posted by jimbo1qaz
My chosen solution (which doesn't require every user to munge around and replace/override their system libraries) was to statically compile and link in libgme. (Sadly this is uncommon and difficult to achieve in Linux.)

How do you mean it's difficult to achieve?

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-22, 08:17 pm
Post: #5 of 5
Since: 11-08-18

While it makes sense to me for a very limited number of basic packages, like libc, to be dynamically linked, I would ideally like everything else to be static.