    Main » Discussion » (Mis)adventures on Debian ((old)stable|testing|aghmyballs)
    Posted on 19-04-15, 08:49
    Post: #35 of 202
    Since: 11-01-18

    Last post: 660 days
    Last view: 15 days
    Just so we're clear: libc (or glibc) is the C standard library, where printf lives.

    I'm with Screwtape: monkey-patching isn't a viable solution if you have source access.

    Heck, what started this current debate was an issue with the update process, not with dynamic linking.
    Posted on 19-04-15, 14:44
    Stirrer of Shit
    Post: #208 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by Screwtape

    If I understand your proposal, instead of fixing a problem by changing source-code and recompiling, you want problems fixed by reverse-engineering the binary and hex-editing out the problem parts. And instead of fixing the problem once, you want the process repeated for every single binary that (directly or indirectly) uses the library in question, a process that probably can't be automated if the binaries were built with an optimising compiler of any complexity. I'm not sure what you expect to happen with second- or third-party binaries; would customers have to send them in to be patched? Would customers need to have a security engineer on-staff to do the patching? Also, if a binary had *two* problems, would the second patch require the first patch to be present, or would you expect every possible combination of patches to be available?

    No, no, no, absolutely not. That would indeed be completely insane.

    Bugs in the libc would be fixed by editing the source and recompiling the affected applications. The diffs would be generated programmatically, for instance with debdelta; that's just to cut down on download sizes, not an integral part of the package update process or anything like that.
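
    For the curious, the delta flow would look roughly like this (invocations from memory - check the debdelta/debpatch man pages; the package names are made up):

        # on the build server: generate a delta between two builds of a package
        debdelta foo_1.0-1_amd64.deb foo_1.0-2_amd64.deb foo.debdelta
        # on the client: reconstruct the new .deb from the old one plus the delta
        debpatch foo.debdelta foo_1.0-1_amd64.deb foo_1.0-2_amd64.deb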

    Bugs in the software would be fixed by updating the software as usual, which would also imply a libc update. The ideal would of course be that bug fixes are backported as far as possible, to avoid the situation in which a long-running security flaw forces you to update from v1.2 to v11.7, but this would take a lot of effort and wouldn't be worth it outside of commercial support. Probably, the appropriate behavior would be to either accept that you have to force updates in case of critical security flaws, or at least provide versions of the affected old packages rebuilt with the appropriate -fsanitize or similar.
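
    To make the -fsanitize idea concrete: a hardened rebuild of an old package would just be a recompile with instrumentation flags, something like this (illustrative flags only; ASan catches buffer overflows at runtime at some speed cost):

        gcc -O2 -fsanitize=address -fno-omit-frame-pointer -o oldapp oldapp.c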

    For proprietary software, it's true that if it went unmaintained and the last version included a libc with some exploitable buffer overflow somewhere, then it would be pretty bad. But that would be an equally big problem even if the rest of the system were to use dynamic linking, so not much changes there.

    Statically linked distros already do exist, so I wouldn't think it's that bizarre a suggestion.

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-15, 17:48
    Post: #33 of 205
    Since: 11-24-18

    Last post: 155 days
    Last view: 27 days
    Your solution brings up two ugly problems just by scratching the surface:

    1. A new libc would require the entire system to be recompiled. Ask any seasoned Gentooer how long that takes. The same goes for any library that is used more or less ubiquitously, like GTK3, Xorg or libssl.

    I do not think you realise how much time it takes to recompile every single application on your system (hint: try downloading Chromium or Firefox and compiling that alone - and that's with mostly dynamic linking). There is a reason source-based distros never gained much traction, not even Gentoo.

    2. Your solution requires access to source code, which is not guaranteed. After all, that's why the GPL was invented in the first place.

    And let's not forget that even Windows has a few pieces dynamically linked, like the Win32 and DirectX stuff.

    But by all means, you do you - simply run one of these static distros for yourself if you think they are the answer. :)
    Posted on 19-04-15, 19:56
    Stirrer of Shit
    Post: #209 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by wertigon
    Your solution brings up two ugly problems just by scratching the surface:

    1. A new libc would require the entire system to be recompiled. Ask any seasoned Gentooer how long that takes. The same goes for any library that is used more or less ubiquitously, like GTK3, Xorg or libssl.

    I do not think you realise how much time it takes to recompile every single application on your system (hint: try downloading Chromium or Firefox and compiling that alone - and that's with mostly dynamic linking). There is a reason source-based distros never gained much traction, not even Gentoo.

    2. Your solution requires access to source code, which is not guaranteed. After all, that's why the GPL was invented in the first place.

    And let's not forget that even Windows has a few pieces dynamically linked, like the Win32 and DirectX stuff.

    But by all means, you do you - simply run one of these static distros for yourself if you think they are the answer. :)

    No, this wouldn't be source-based. It would be done by whatever party is usually responsible for builds, i.e. distro maintainers. In such contexts, build times aren't that big of a concern, I'd presume.

    "New libc" should also happen very rarely, because it would only ever be force-updated globally in the case of a severe Heartbleed-esque vulnerability. "They sped up strcpy by 0.2%" wouldn't be a valid reason to recompile every single application. Otherwise, regular application updates would just confer libc updates (confined to the specific application being updated) without the end user having to worry about or even know what version of what libc, if any, it uses behind the scenes.

    It's true that with proprietary software you run into a whole host of issues, but that's true anywhere. The easiest way is probably to handle it like Steam: each release ships with its own copy of the dependencies it's known to work with, and those dependencies update with the program, not with the OS. You could even integrate them into the binary, license permitting, to create an "effectively static" hybrid.
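
    The Steam-style layout is basically "ship the libraries next to the binary and point the loader at them", which on Linux is one linker flag away (a sketch; the names are made up):

        # bundle libfoo.so in a lib/ directory next to the executable;
        # at run time the loader resolves $ORIGIN to the binary's own directory,
        # so the bundled copy wins over whatever the system has
        gcc app.c -o app -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'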

    Windows is interfaced with via API calls; for Linux, correct me if I'm wrong, the stable syscall ABI is sufficient. It's true that some libraries are too unwieldy to be statically linked, but presumably those would also be the ones with stable APIs. One would hope.

    It ought to be mentioned that the size of a static executable is far less than (size of libraries) + (size of program), because the linker can throw away the portions of the libraries it doesn't need. For instance, on my machine, libsqlite3.so.0.8.6 is 1.03 MiB and the shell (compiled with gcc shell.c -lsqlite3 -Os and stripped) is 94.96 KiB. But a statically linked shell built with musl is only 499.25 KiB, less than half the expected size, and this despite presumably using almost every facet of the library. The size should scale about linearly with the share of the library used, so if only a tiny smidgen of the library gets used, that should be reflected in the binary size.
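
    For reference, the mechanism that throws the unused portions away is the linker's section garbage collection (helped along by musl's fine-grained layout). Roughly what I ran, using the SQLite amalgamation (commands from memory; you may need -DSQLITE_THREADSAFE=0 or extra libs depending on configuration):

        # put every function/data item in its own section, then let the linker
        # discard everything the shell never references
        musl-gcc -Os -static -ffunction-sections -fdata-sections \
            shell.c sqlite3.c -Wl,--gc-sections -o sqlite3-static
        strip sqlite3-static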

    You're right that nobody in their right mind would want to run a static distro, as they exist now, for daily use. They're only good for some niche uses, like servers. This is just a pipe dream of how things ought to be one day in the future - it has absolutely nothing to do with reality. As much as I hate to admit it, "don't worry man, computers are cheap" is a perfectly fine approach to nearly everything.

    relevant xkcd

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-16, 10:51
    Post: #34 of 205
    Since: 11-24-18

    Last post: 155 days
    Last view: 27 days
    Posted by sureanem

    No, this wouldn't be source-based. It would be done by whatever party is usually responsible for builds, i.e. distro maintainers. In such contexts, build times aren't that big of a concern, I'd presume.


    This sounds like a great setup until you realise you still need to update every single package that gets recompiled, vs downloading a single package. Bandwidth isn't exactly cheap, and I think the static approach would lose out even if that single package were 10x larger.
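
    To put rough numbers on it (back-of-envelope, with invented but plausible figures): a libssl security fix today is a single ~1.5 MB package; if 200 packages linked it statically and each needed even a 500 kB re-download, that would be ~100 MB pushed to every machine for the same one-line fix.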

    Posted by sureanem

    The easiest way is probably to handle it like Steam: each release ships with its own copy of dependencies that it's known to work with, and those dependencies update with the program, not with the OS.


    Yay for running a Firefox from 2010, because that really sounds secure!

    This kind of packaging makes sense for a very narrow set of programs, namely, programs you want to run which are not actively maintained anymore. Incidentally, I consider games to be the only valid use case for this, and only non-open-source ones: FOSS game engines can always be ported to a newer API, and proprietary game packs for those engines would not be impacted.


    Posted by sureanem

    It ought to be mentioned that the size of a static executable is far less than (size of libraries) + (size of program), because the linker can throw away the portions of the libraries it doesn't need.


    Which is a moot point for a general-purpose distro, which has hundreds of different programs all using the same dynamic library.

    Let us assume three programs take up a, b and c space, all use the same library with a (dynamic) size of L, and use portions of that library of size x, y and z respectively.

    Size of statically linked: (a+x) + (b+y) + (c+z) = a+b+c + x+y+z
    Size of dynamically linked: a + b + c + L

    Since x, y and z are each subsets of L, each is at most L, but there is quite a big chance that x+y+z > L, rendering any space savings moot. So statically linked binaries will in general take up more space, simply because they use space less efficiently.
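
    A concrete instance, with made-up numbers: if L = 1000 kB and x = y = z = 400 kB (each program touching 40% of the library), static linking costs x+y+z = 1200 kB versus L = 1000 kB for the shared copy - and the gap widens with every additional program that links the library.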

    So, to summarize: static linking is great for the narrow case of a single program that will never be maintained again. It is, however, pretty shoddy for everything else. :)
    Posted on 19-04-16, 12:34 (revision 1)
    Stirrer of Shit
    Post: #210 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by wertigon
    This sounds like a great setup until you realise you still need to update every single package that gets recompiled, vs downloading a single package. Bandwidth isn't exactly cheap, and I think the static approach would lose out even if that single package were 10x larger.
    Posted by wertigon
    Which is a moot point for a general-purpose distro, which has hundreds of different programs all using the same dynamic library.

    Let us assume three programs take up a, b and c space, all use the same library with a (dynamic) size of L, and use portions of that library of size x, y and z respectively.

    Size of statically linked: (a+x) + (b+y) + (c+z) = a+b+c + x+y+z
    Size of dynamically linked: a + b + c + L

    Since x, y and z are each subsets of L, each is at most L, but there is quite a big chance that x+y+z > L, rendering any space savings moot. So statically linked binaries will in general take up more space, simply because they use space less efficiently.

    So, to summarize: static linking is great for the narrow case of a single program that will never be maintained again. It is, however, pretty shoddy for everything else. :)

    Yeah, but compression would by and large handle this. If a libc vuln requires 100 packages to be updated, the delta patches would quite likely share a good deal of redundancy. Also, not everything you download with apt is a binary: on my machine, only about 8.5% of / by size is marked executable. My / is about 8.5 GiB, so that's about 700 MiB of binaries. Of these, around 13% (90 MiB) are .so files. Say statically linked binaries are 2x larger: that gives around 610 MiB of dynamic binaries, or 1220 MiB static. Binaries compress to about 25% of their original size, so that would be ~300 MiB downloaded in the absolute worst case, assuming no delta patches at all. I don't know just how good the deltas are, but 300 MiB really isn't that much anyway. When was the last time glibc had a severe security vulnerability that forced you to update?
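
    If anyone wants to check that 8.5% figure on their own box, something along these lines does it (GNU find/du; it counts regular files with any executable bit set, so it's rough):

        find / -xdev -type f -perm /111 -print0 | du -ch --files0-from=- | tail -n1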

    I don't think bandwidth costs are that high. It would be trivial to switch to BitTorrent or something and make them effectively zero. You'd need to seed the updates initially, and that'd be, say, a few dozen times more data with subarch compilation, but I don't think that initial seed bandwidth is a very big expense.

    If it were, then why are Debian binaries (and all other distros' too) so extremely bloated?
    Posted by wertigon

    Yay for running a Firefox from 2010, because that really sounds secure!

    This kind of packaging makes sense for a very narrow set of programs, namely, programs you want to run which are not actively maintained anymore. Incidentally I consider games to be the only valid use case for this, but only for non-open source ones. FOSS game engines can always be ported to a newer API; proprietary game packs for these engines would not be impacted.

    But where do you draw the line? Ultimately, as long as there aren't any security issues, there's no point in updating. When I installed Ubuntu 10.10 on my laptop once upon a time, it ran well. If I tried to install whatever the latest version of Ubuntu is now, it would be slow as molasses. Almost every single time I update, the message ends by telling me that X more kilobytes of disk space will be used. For what, exactly? I don't notice any gains from it, and I don't think they've made great new strides in security with each release either.

    You would be able to update, but it would be an entirely voluntary process. If you feel your 2010 Firefox (which would likely be far less bloated) still works fine, nothing would say otherwise. The maintenance burden would be much lower, and there would be less unexpected breakage from updates going haywire on the system.

    What does 'maintained' really mean? If some insane maintainer starts to bundle malware with his software, of course he won't be a maintainer for very long. But what if he starts including unpleasant features, requiring hundreds of megabytes of dependencies, and making the software altogether slow? Clearly, it should be up to us to judge whether we want to install it or not. As it is now, due to the extreme interdependency of parts, one doesn't really have a choice but to update.

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-16, 15:53 (revision 1)
    Post: #36 of 77
    Since: 10-31-18

    Last post: 1189 days
    Last view: 1116 days
    My (semi-maintained) vgmsplit program depends on libgme (mpyne, not kode54, but I could switch it if I felt like it).

    The older release of the "system libgme" present in Debian/Ubuntu has a highly inaccurate YM2612 emulator, which I discovered the hard way: I downloaded the updated libgme, compiled vgmsplit against its headers, and ran it only to get inaccurate audio out of the old system library.

    My chosen solution (which doesn't require every user to munge around replacing/overriding their system libraries) was to compile libgme myself and statically link it in. (Sadly this is uncommon and difficult to achieve on Linux.)
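
    For reference, the shape of the fix, assuming libgme's CMake build and a hypothetical output path (details vary):

        # build libgme as a static archive instead of a shared object
        cmake -DBUILD_SHARED_LIBS=OFF .. && make
        # then link the .a straight into the binary
        g++ vgmsplit.cpp build/gme/libgme.a -o vgmsplit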
    Posted on 19-04-16, 16:04
    Stirrer of Shit
    Post: #212 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by jimbo1qaz
    My chosen solution (which doesn't require every user to munge around and replace/override their system libraries) was to statically compile and link in libgme. (Sadly this is uncommon and difficult to achieve in Linux.)

    How do you mean it's difficult to achieve?

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-22, 20:17
    Post: #5 of 21
    Since: 11-08-18

    Last post: 1254 days
    Last view: 1254 days
    While it makes sense to me for a very limited number of basic packages, like libc, to be dynamically linked, I would ideally like everything else to be static.
    Posted on 19-04-25, 11:51
    Post: #35 of 205
    Since: 11-24-18

    Last post: 155 days
    Last view: 27 days
    Posted by Wowfunhappy
    While it makes sense to me for a very limited number of basic packages, like libc, to be dynamically linked, I would ideally like everything else to be static.


    There is an open source OS that does this. It's called ReactOS and is a clone of Windows.

    Or you could just, you know, keep using Windows. :)
    Posted on 19-04-29, 10:28
    Stirrer of Shit
    Post: #225 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by http://beets.io/blog/sqlite-nightmare.html
    Assume SQLite Sleeps Whole Seconds

    If you use SQLite, you currently need to assume that some users will have a copy compiled without usleep support. If you’re using multiple threads, this means that, even under light contention, some transactions will take longer than five seconds. Either turn the timeout parameter up or otherwise account for this inevitability.

    I haven’t seen this particular quirk documented elsewhere, but it should be common knowledge among SQLite users.

    The mystery meat dynamic libraries strike again.
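
    For anyone wondering, "turn the timeout parameter up" is a one-liner; the 30-second value here is arbitrary:

        # in the sqlite3 shell:
        .timeout 30000
        # or from C: sqlite3_busy_timeout(db, 30000);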

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-29, 10:40
    Full mod

    Post: #231 of 443
    Since: 10-30-18

    Last post: 1101 days
    Last view: 172 days
    I mean, you can statically link against a library built with a weird configuration just as easily as you can dynamically link against one.

    The ending of the words is ALMSIVI.
    Posted on 19-04-29, 11:13
    Stirrer of Shit
    Post: #226 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Yeah, but at least it will be your weird configuration. With the system libraries, all bets are off.


    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-04-30, 15:19
    Dinosaur

    Post: #279 of 1315
    Since: 10-30-18

    Last post: 58 days
    Last view: 18 hours
    The (Sad) State of Sega 32X nVidia Optimus under Linux, 2019 edition

    tl;dr: It still sucks. AVOID. Stick to Intel IGPs or, if you're willing to endure ATi driver hell, wait for those Ryzen laptops to arrive. I guess you can also spend some serious cash on a gaming-grade or workstation-class laptop where you are guaranteed to get one and only ONE GPU, of the discrete variety. Hopefully.

    I bought this Optimus-enabled dual-GPU laptop in late 2012. Back then, nVidia was refusing to support Optimus under anything that is not Windows, and Nouveau support was simply not there, but the community still fought on and came up with a few hacky workarounds so we could at least use the hardware we paid our hard-earned money for.

    Pick your poison:

    - PRIME: This is "the way it is meant to be played™", that is, proper support for GPU offloading on hybrid graphics platforms under Linux. Except that nVidia still doesn't support PRIME for anything other than "run your whole desktop through the discrete GPU" (which may work for anyone that doesn't care about the heavy impact on battery life and excess heat on those designer laptops). Oh, and it's only available on some distros, mainly Ubuntu-based ones. Supposedly complete support is "coming soon™", but for those of us still stuck on Fermi GPUs it's too late already (our GPUs were moved to the legacy blob very recently). On the Nouveau front you can already enjoy proper GPU offloading... but then you might as well play on the Intel IGP and pretend there is no secondary GPU, because reclocking support is not there (except for Maxwell GPUs; Fermi support is "still being worked on", and thanks to the signed-firmware bullshittery by nVidia, there is little hope for Pascal and beyond)

    - Bumblebee: The most popular hack out there. It mostly works, but it comes with a rather high performance hit (still better than your Intel IGP, tho), lag is a concern for any serious player, the several display backends (VirtualGL, primus) are all terrible in their own ways, and it can be very fragile at times (from failing to turn off the discrete GPU when not in use, to outright refusing to work at all on some setups). On top of that, Bumblebee will never support Vulkan, and the project is essentially dead, as upstream has not received a single commit in 6 years (!!!)

    - nvidia-xrun: The new kid on the block, this hack comes from a guy who noticed how much performance the Bumblebee approach wastes. Supposedly this way you get full performance from your discrete GPU (making it very similar to PRIME, in principle), but it relies on running a secondary X server for all the apps you want on the discrete GPU, and it HAS to be invoked from a good ol' TTY console. Running Steam (the most popular use case) is not straightforward; you have to bring your own window manager so Steam doesn't lose its mind, and then you've effectively replicated PRIME the long way (it's nice for playing Euro Truck Simulator 2, but would you really want to run a lag-sensitive game like Super Hexagon that way!?). Might as well use it permanently and forget about power saving. Typical invocations for all three approaches are sketched below.
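
    For reference, here's what the three approaches boil down to in day-to-day commands (from memory; provider names and details vary by distro):

        # PRIME offloading with the open drivers (provider names from `xrandr --listproviders`):
        xrandr --setprovideroffloadsink nouveau Intel
        DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

        # Bumblebee:
        optirun glxgears        # VirtualGL backend
        primusrun glxgears      # primus backend

        # nvidia-xrun, from a TTY:
        nvidia-xrun openbox-session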

    Believe me: I tried my best to avoid Optimus laptops back then, but all AMD-based laptops in the local market were pure basement-tier APU garbage. Looks like things haven't improved that much since then: even nowadays I still hear tales from people having a hard time finding laptops with discrete GPUs that don't go through Optimus, since nVidia still dominates the mobile discrete GPU market.



    Licensed Pirate® since 2006, 100% Buttcoin™-free, enemy of All Things JavaScript™
    Posted on 19-06-23, 17:48
    Dinosaur

    Post: #413 of 1315
    Since: 10-30-18

    Last post: 58 days
    Last view: 18 hours
    Debian 10 "Buster" is leaving Testing to become the new ol' stable in two weeks from now on:
    https://lists.debian.org/debian-devel-announce/2019/06/msg00003.html

    Guess it's time to move yer ass and update over here, assuming my extremely flaky DSL is willing to cooperate (spoilers: it will not). Expect more broken shit because upstream hates you, as that's how software is developed nowadays.

    But on the positive side, it means I'll now be able to build Dolphin from source using only stock Debian packages and compilers.

    In other related news, the clock is running out for Jessie - the LTS phase ends one year and one week from now (that is, June 30th, 2020). If Saki still survives to that date, maybe it's time to consider something else, maybe learn Arch, or simply ignore it, as no one has hacked my shit and no one will bother, since you can't mine buttcoins on a (slightly overclocked) Pentium-MMX/200, and if you ever breach its filesystems, feel free to steal all of my porn doujinshis (which are backed up anyway).

    Licensed Pirate® since 2006, 100% Buttcoin™-free, enemy of All Things JavaScript™
    Posted on 19-06-25, 11:38 (revision 2)
    Post: #66 of 205
    Since: 11-24-18

    Last post: 155 days
    Last view: 27 days
    Posted by tomman

    In other related news, the clock is running out for Jessie - the LTS phase ends one year and one week from now (that is, June 30th, 2020). If Saki still survives to that date, maybe it's time to consider something else, maybe learn Arch, or simply ignore it, as no one has hacked my shit and no one will bother, since you can't mine buttcoins on a (slightly overclocked) Pentium-MMX/200, and if you ever breach its filesystems, feel free to steal all of my porn doujinshis (which are backed up anyway).


    Well, I hate to break it to you, but the only sane, guaranteed way forward on i386 is source-based. Either GuixSD or Lunar Linux would be my best bets. Or perhaps Gentoo.

    The good news is, you can produce a system image in a VM and simply keep Saki rsynced to it after the initial boot, if you don't have the system resources to self-compile.
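
    The rsync part would be something like this (a sketch only; it excludes the usual pseudo-filesystems and assumes the VM answers as "buildvm"):

        rsync -aHAX --delete \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
            buildvm:/ /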
    Posted on 19-06-25, 13:17
    Stirrer of Shit
    Post: #440 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    You get ELTS after that. So until 2022, possibly later.

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-07-07, 01:04
    Dinosaur

    Post: #424 of 1315
    Since: 10-30-18

    Last post: 58 days
    Last view: 18 hours
    Buster is now the new sexy hotness. Grab while it is not (too) stale now!

    This time I will read the release notes!
    https://www.debian.org/releases/stable/amd64/release-notes/ch-whats-new.en.html
    https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html

    Debian is now Secure Boot™ Compliant, whatever that implies, for good or evil.

    AppArmor is enabled by default... oh wait, it's already enabled on my systems. I don't even know what the hell this is, but as long as it does not get in my way, I'll let it slide. GNOME uses Wayland by default; too bad no one sane uses GNOME, AKA "please don't theme our Apps™".

    Also, this:
    2.2.12. Merged /usr on fresh installs

    On fresh installs, the content of /bin, /sbin and /lib will be installed into their /usr counterpart by default. /bin, /sbin and /lib will be soft-links pointing at their directory counterpart under /usr/. In graphical form:

    /bin → /usr/bin
    /sbin → /usr/sbin
    /lib → /usr/lib


    When upgrading to buster, systems are left as they are, although the usrmerge package exists to do the conversion if desired.

    This change shouldn't impact normal users that only run packages provided by Debian, but it may be something that people that use or build third party software want to be aware of. The freedesktop.org project hosts a Wiki with most of the rationale.

    I can hear the furious laments of pain of Troo UNIX® Way sysadmins as they lose another of their cherished '70s traditions. The rest of the world just says "meh".
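
    And for upgraded systems that want the merged layout anyway, the conversion the notes mention is a single package away:

        # performs the /bin -> /usr/bin (etc.) conversion in place
        apt install usrmerge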

    Default GCC versions are 7.4 and 8.3. Yes, for the first time (ever? in a long time?) you have a choice! Not that you can use the latest GCC 9.x/10, but hey, we're now up to 2017-era choices! Default PostgreSQL is 11; maybe it's time for me to move on... if only someone would revive the pgAdmin III branch with support for newer PostgreSQL versions (if you didn't know, pgAdmin 4 got infected by the phoneworms and became a shittyass Chrome-in-a-can® webapp so unreliable it actually borders on uselessness)

    Licensed Pirate® since 2006, 100% Buttcoin™-free, enemy of All Things JavaScript™
    Posted on 19-07-07, 04:46 (revision 1)
    Custom title here

    Post: #550 of 1164
    Since: 10-30-18

    Last post: 63 days
    Last view: 8 hours
    Posted by tomman

    Also, this:
    2.2.12. Merged /usr on fresh installs

    On fresh installs, the content of /bin, /sbin and /lib will be installed into their /usr counterpart by default. /bin, /sbin and /lib will be soft-links pointing at their directory counterpart under /usr/. In graphical form:

    /bin → /usr/bin
    /sbin → /usr/sbin
    /lib → /usr/lib


    When upgrading to buster, systems are left as they are, although the usrmerge package exists to do the conversion if desired.

    This change shouldn't impact normal users that only run packages provided by Debian, but it may be something that people that use or build third party software want to be aware of. The freedesktop.org project hosts a Wiki with most of the rationale.

    I can hear the furious laments of pain of Troo UNIX® Way sysadmins as they lose another of their cherished '70s traditions. The rest of the world just says "meh".
    The more important thing here is that in trying to figure out what this is about, I found out one of the great mysteries of the Lunix file system: WHY DON'T USER FILES GO IN THE DIRECTORY NAMED USR?

    The answer, sadly, is "forty years of backwards-compatibility with an ugly hack" that no one dares change (ironic, really). But at least I understand now.

    --- In UTF-16, where available. ---
    Posted on 19-07-07, 12:07
    Stirrer of Shit
    Post: #469 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    A step in the right direction. But couldn't they have done it the other way around?
    Binaries in /bin, root binaries also in /bin (chmod 754), libraries nowhere because they're statically linked in /lib.
    Evacuate /usr entirely save for symlinks, then come 10 years maybe they can delete it.

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.