Posted on 19-04-14, 01:09 in (Mis)adventures on Debian ((old)stable|testing|aghmyballs) (revision 1)
Stirrer of Shit
Post: #201 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
So I ran a sudo apt-get dist-upgrade. No big deal, I thought. Updates happen all the time, and I'm running stable. Something like ten packages, none of them sounded important.

And so, two very important features break:
1) controlling backlight via the power manager (backlight keys had already been broken) - had to make a symlink and then use xbacklight
2) shutting down the bloody computer with the "log out" button in the start menu

Maybe I should switch to systemd already, but I really don't trust that piece of software. God damn it, I just want things to work. Isn't this the whole point of stable, that things don't randomly shit themselves whenever you update?

What's even the point of having updates in a supposedly "frozen" distro?

Man, I want a distro where everything is statically linked and the only updates are to fix immediate security issues (in which case they should make the SMALLEST POSSIBLE CHANGE NEEDED, not exploit it to force you to update) or voluntary, based on wanting some new feature. Presumably, this is just a pipe dream because of the obscure properties of some arcane layer deep down that mere mortals cannot even begin to comprehend, but it would be nice. Maybe I shouldn't update unless I need something, but if I want to install packages then I can end up in a broken state, so I have to update even if I don't want whatever the update drags in.

I don't like Microsoft, but at least they understood the basic rule: people do not like it when you fuck their shit up for no reason, so don't do that. If they would just make one API and then stick with it, and do this everywhere, we would have a much more pleasant environment.

This is the downside of open-source software: nobody wants to work on a boring project for free (unless they're insane/ideologically driven), so the parts nobody wants to touch with a ten-foot pole remain completely undisturbed (see: X) and the parts people do care about (see: all the small unix tools people use every day and have a vested interest in making work as well as possible) are absolutely state of the art.

I suppose putting stuff in containers is a step forward, and it seems like people are finally starting to leave dynamic linking behind where it belongs (the 1980s). It's only good for loading in extensions via dlopen() and possibly system libraries which would be impractical to link in. For anything else it's rubbish, and should be replaced by static linking, or even better unity builds.
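
To be concrete, here's a minimal sketch of that one defensible case: loading an extension at runtime with dlopen()/dlsym(). The plugin path and the plugin_init entry point are made-up names, purely for illustration; build with -ldl.

/* plugin_host.c - load an extension at runtime instead of linking it in */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Open the (hypothetical) extension module. */
    void *handle = dlopen("./myplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up its entry point and call it, if present. */
    int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
    if (plugin_init)
        plugin_init();

    dlclose(handle);
    return 0;
}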

> B-but my precious storage and RAM! What about the compile times?

Compiling with static linking, proper compiler settings, and a good libc gives far smaller binaries than dynamically linking against glibc with -Os, stripping, and calling it a day. I haven't been able to test this, but I strongly suspect that integrating the libc would make for even smaller code, since it could for instance inline syscalls better than LTO can.

Compile times are negligible, and also a non-argument - you wouldn't compile in -O1 for release because it "compiles faster," now would you?

Fuck dynamic libraries and fuck updates. Both cause lots of pain while bringing precious little of value to society. Windows Vista was a mistake.

EDIT: Apparently, these issues solved themselves with a reboot. Updates are still annoying though, even if it's not as bad as Arch yet.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #202 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
What's wrong with static linking? The whole "it makes it hard to update" thing is a non-issue. Because it has no external dependencies, a static binary only needs to be updated when you actually need to update it - for instance, if there's a critical bug in the libc it's built against, or if you actually need to update the program. And if you're only updating to replace the libc, it's not really an update proper - it can't (shouldn't) introduce any new bugs in the software, the user doesn't get any new features, etc. Of course, the libcs should follow this pattern too. So if there's a critical bug in v11.2 and below, and you have packages compiled against v5.9, your new packages should be built against v5.9.1 and not v11.2.1. It should be up to the developer what version of libc he wants to use, not LATEST ALWAYS BECAUSE LATEST IS BEST.

And since we're doing unrealistic dreams anyway, all these packages should be reproducible and subarch compiled and distributed as delta patches via BitTorrent instead of mirrors.

for the FOSS crowd, everything has been a moving target since forever.


Static linking solves this, at least in theory. There might be issues with the GUI toolkits, though, since they usually read global config files. But if they were modified (or, alternatively, the system provided config files under namespaces of some kind), then you could have GTK2 applications that depended on nothing but X11, which isn't linked in but communicated with over the network anyway. Oh yeah, fuck GUI toolkits as well. Nuklear is pretty cool though.

it's now true for proprietary software too (hello, Windows).

Microsoft realized there was no business in being sane anymore, so we ended up with Psycho-Pass instead of SEL. Windows 10 is just the beginning. Soon you'll miss it, being forced to switch to DaaS.

You know the error Photoshop gives you when you try to open banknotes? Imagine it for every piece of copyrighted material in the world, as well as whatever someone (cough Chinese government cough) is willing to pay enough to have register as an "accidental" false positive.

Maybe it's like Psycho-Pass, the third/second world countries get to keep their freedom for a little longer. Maybe you'll get to switch over to glorious Astra Linux. Can't get ME'd if your CPU was made back when Bush was still president.

Man, do you even reboot more frequently? Every time I do a dist-upgrade that touches delicate/fragile bits (kernel, systemd, libc), things will act weird until the next reboot. You may not be able to reboot initially, except via the Magic SysRq combo (it occasionally happens to me). But after a couple reboots, everything is kosher again. Same applies to Windows, because Windows Update is a piece of complete rubbish.


Agreed. I didn't connect it at first since it was just a teeny update (man, I hate the term "upgrade" for things which aren't appreciably better) of like ten packages. If I'd rebooted first, I probably wouldn't have written the post...

Posted by creaothceann
You can just install new versions of the software.

Or the same version with a fixed libc, much more reliable.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-14, 14:47 in Dear modern UXtards...
Stirrer of Shit
Post: #203 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
>Slashdot infoboxes scrolling break if you adblock, making it impossible to participate in polls from the front page.

Works fine on my machine. What ad blocker and lists are you using?

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-14, 15:55 in I have yet to have never seen it all.
Stirrer of Shit
Post: #204 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by CaptainJistuce
One of them is ADORABLE in how hard it tries to look like a "real gun". There's a wooden foregrip on it that looks suspiciously like it is based off the magazine of a modern rifle(such as an AR15 or AK47). It implies to me that the "gunsmith" THOUGHT that the curved thing sticking down in front of the trigger on those gun pictures he saw was a foregrip and not a magazine.

I'm not sure it's that strange. Sometimes the magazine IS used as a foregrip, even in legitimate militaries.

Assuming they don't make their weapons themselves, maybe it's easier to sell if it looks like a normal weapon rather than a slamfire pipe gun?

Pretty cool though. Why can't they use real (rifle) ammunition? Still too expensive?

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #205 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by funkyass
wouldn't updating libc mean you are reinstalling the OS, essentially? Since AFAIK, modules haven't landed in the C standards.

No, "updating the libc" would be a completely alien term. You would install a new version of an application that would include whatever version of libc it pleased, or none at all if they're writing freestanding code. It would be up to the application author (and to some extent the repo maintainers) which version of what libc to use. Presumably, newer versions would use newer libraries.

If there were some Heartbleed-esque bug in whatever version of libc a program uses, that program's executable (and the executable only) should get replaced by a version which does the absolute minimum needed to fix it, provided that part of the libc is actually being used. And if the bug affected all applications, you would indeed need to do this once per application.

However, since the ideal package manager would just apply a diff, the download size would be quite small, probably about as big as the corresponding diff for the shared library. The breakage would be minimal, limited to whatever regressions the fix manages to introduce. Most importantly, security updates wouldn't force updates of anything else. With containerization, or if the application has no non-library dependencies, you should be able to run software from $NOW forever and ever with such a system, assuming of course that architectures stay the same and/or the application doesn't have portability bugs.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #206 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by Wowfunhappy
Could Windows AME be legally distributed as a BPS or some other kind of diff patch?

Yeah, sure. They have instructions on the website, so they could just make it an automatic script. Most of it consists of deleting files, so it would be a very small diff patch to apply. But it's easier to ship a modified image. They don't seem to be overly worried about copyright. I sincerely hope they start worrying and taking precautions soon, for their own good. Although I suppose a court case is good PR too, so it doesn't really matter.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-14, 20:01 in Dear modern UXtards...
Stirrer of Shit
Post: #207 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by tomman

What I meant to say: I refuse to install browser addons, period.

Ad blocking should be done at the NETWORK level, communism-style. I have several devices at home, most of them running multiple OSes, some of them that I don't manage, so maintaining an adblocking solution that requires client setup is not something I'm willing to do. I get that adblockers, script blockers and user scripts do allow more fine-grained control for those pests, but once again, life is too short to spend your time meddling with web browsers and addons.

But why? I definitely hate configuration and tinkering too (the ideal system would have / as read-only with a tmpfs overlay and only /home persistent, with any log files I didn't explicitly request wiped on shutdown), but installing uBlock and applying whatever default lists seem pertinent takes something like five minutes. Just how often do you install software anyway?

I suppose a solution might be to install a new root certificate, then MITM all traffic going to those domains and serve up empty files with 200 OK. If it's to be done without any configuration at all, a simpler approach might be to have them resolve to an IP where nothing is listening on port 80, leaving the connection hanging forever instead of returning NXDOMAIN. But neither of these approaches would work for any of the examples in this thread.
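
For the second approach, a rough sketch of what the resolver rule could look like with dnsmasq - the domains are placeholders, and the sinkhole address is just an unrouted TEST-NET address so connections time out instead of failing fast:

# dnsmasq.conf sketch: answer ad domains with an address nobody listens on,
# so clients hang instead of getting NXDOMAIN
address=/ads.example.com/198.51.100.1
address=/tracker.example.net/198.51.100.1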

Or, there is one solution, but you'd probably hate it: Firefox Sync. Log in once whenever you've installed Firefox/Seamonkey (I believe it even prompts you), then you're done.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #208 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by Screwtape

If I understand your proposal, instead of fixing a problem by changing source-code and recompiling, you want problems fixed by reverse-engineering the binary and hex-editing out the problem parts. And instead of fixing the problem once, you want the process repeated for every single binary that (directly or indirectly) uses the library in question, a process that probably can't be automated if the binaries were built with an optimising compiler of any complexity. I'm not sure what you expect to happen with second- or third-party binaries; would customers have to send them in to be patched? Would customers need to have a security engineer on-staff to do the patching? Also, if a binary had *two* problems, would the second patch require the first patch to be present, or would you expect every possible combination of patches to be available?

No, no, no, absolutely not. That would indeed be completely insane.

Bugs in the libc would be fixed by editing the source and recompiling the affected applications. The diffs would be generated programmatically, for instance with debdelta. That's just to cut down on download sizes, and not an integral part of the package update process or anything like that.

Bugs in the software would be fixed by updating the software as usual, which would also imply a libc update. The ideal would of course be that bug fixes are backported as far as possible, to avoid the situation in which a long-running security flaw forces you to update from v1.2 to v11.7, but this would take a lot of effort and not be worth it unless you're doing commercial support. Probably, the appropriate behavior would be to either accept that you have to force updates in case of critical security flaws, or to at least provide versions of the affected old packages built with the appropriate -fsanitize or similar.

For proprietary software, it's true that if it went unmaintained and the last version included a libc with some exploitable buffer overflow somewhere, then it would be pretty bad. But that would be an equally big problem even if the rest of the system were to use dynamic linking, so not much changes there.

Statically linked distros already do exist, so I wouldn't think it's that bizarre a suggestion.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #209 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by wertigon
Your solution brings two ugly problems just by scratching on the surface:

1. New libc would literally require the entire system to be recompiled. Ask any seasoned Gentooer how much time that will take. Same for any library that is more or less used ubiquitously, like GTK3, xorg or libssl.

I do not think you realise how much time it takes to recompile every single application on your system (hint; try downloading Chromium or Firefox and compile that alone - and that's with mostly dynamic linking). There is a reason source based distros never gained much traction, not even Gentoo.

2. Your solution requires access to source code, which is not a guaranteed assumption. After all, it's why the GPL was invented in the first place.

And let's not forget even Windows has a few pieces dynamically linked, like win32 and DirectX stuff.

But by all means, you do you - simply run one of these static distros for yourself if you think they are the answer. :)

No, this wouldn't be source-based. It would be done by whatever party is usually responsible for builds, i.e. distro maintainers. In such contexts, build times aren't that big of a concern, I'd presume.

"New libc" should also happen very rarely, because it would only ever be force-updated globally in the case of a severe Heartbleed-esque vulnerability. "They sped up strcpy by 0.2%" wouldn't be a valid reason to recompile every single application. Otherwise, regular application updates would just confer libc updates (confined to the specific application being updated) without the end user having to worry about or even know what version of what libc, if any, it uses behind the scenes.

With proprietary software, it's true that you run into a whole host of issues. But that's true anywhere. The easiest way is probably to handle it like Steam: each release ships with its own copy of the dependencies it's known to work with, and those dependencies update with the program, not with the OS. You could even integrate them into the binary, license permitting, to create some "effectively static" hybrid.

Windows is interfaced with via API calls into system DLLs. For Linux, correct me if I'm wrong, the syscall ABI is sufficient. It's true that some libraries are too unwieldy to be statically linked, but presumably those would be the libraries that also have stable APIs. One would hope.

It ought to be mentioned that the size of a static executable can be far less than (size of libraries) + (size of program), because the linker can throw away the portions of the libraries it doesn't need. For instance, on my machine, libsqlite3.so.0.8.6 is 1.03 MiB and the shell (compiled with gcc shell.c -lsqlite3 -Os and stripped) is 94.96 KiB. But a statically linked shell built with musl is only 499.25 KiB, less than half of what you'd expect, and this despite presumably using almost every facet of the library. The size should scale about linearly with the share of the library used, so if only a tiny smidgen of the library gets used then that should be reflected in the binary size.

You're right that nobody in their right mind would want to run a static distro as they are now for daily use. They're only good for some niche uses, like servers. This is just a pipe dream of how things ought to be one day in the future - it has absolutely nothing to do with reality. As much as I hate to admit it, "don't worry man, computers are cheap" is a perfectly fine approach to nearly everything.

relevant xkcd

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-16, 12:34 in (Mis)adventures on Debian ((old)stable|testing|aghmyballs) (revision 1)
Stirrer of Shit
Post: #210 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by wertigon
This sounds like a great setup until you realise you still need to update every single packet that gets recompiled, vs downloading a single packet. Bandwidth isn't exactly cheap, and I think the static compile would lose out even if that single packet is 10x larger in size.
Posted by wertigon
Which is a moot point for a general purpose distro, which has hundreds of different programs that interchangeably use the same dynamic library.

Let us assume three programs take up a, b and c space and all use the same library with a (dynamic) size of L, and they use a portion of that library x, y and z respectively.

Size of statically linked: (a+x) + (b+y) + (c+z) = a+b+c + x+y+z
Size of dynamically linked: a + b + c + L

Since x, y and z are subsets of L, there is quite a big chance x+y+z > L, rendering any space savings moot. So statically linked will in general take up more space simply because it uses space less efficiently.

So, to summarize; static linking is great for the narrow subset of a single program that will not be maintained ever again. It is, however, pretty shoddy for everything else. :)

Yeah, but compression would by and large handle this. If a libc vuln requires 100 packages to be updated, then the delta patches would quite likely have a fair deal of redundancy. Also, not everything you download with apt is a binary. On my machine, only about 8.5% of / by size is marked executable. My / is about 8.5 GiB, so that's about 700 MiB of binaries. Of these, around 13% (90 MiB) are .so files. Say statically linked binaries are 2x larger. That gives around 610 MiB of dynamic binaries, 1220 MiB of static. Binaries compress to about 25% of their original size, so that would be 300 MiB downloaded in the absolute worst case. That's assuming no delta patches. I don't know just how good they are, but 300 MiB really isn't that much anyway. When was the last time glibc had a severe security vulnerability that required you to update?

I don't think bandwidth costs are that high. It would be trivial to switch to BitTorrent or something and make them effectively zero. You'd need to send the packages out, and it'd be, say, a few dozen times more with subarch compilation, but I don't think that initial seed bandwidth is a very big expense.

If it were, then why are Debian binaries (and all other distros' too) so extremely bloated?
Posted by wertigon

Yay for running a Firefox from 2010, because that really sounds secure!

This kind of packaging makes sense for a very narrow set of programs, namely, programs you want to run which are not actively maintained anymore. Incidentally I consider games to be the only valid use case for this, but only for non-open source ones. FOSS game engines can always be ported to a newer API; proprietary game packs for these engines would not be impacted.

But where do you draw the line? Ultimately, as long as there aren't any security issues, there's no point in updating. When I installed Ubuntu 10.10 on my laptop once upon a time, it ran well. If I tried to install whatever the latest version of Ubuntu is now, it would be slow as molasses. Almost every single time I update, the message ends with a note that X more kilobytes of disk space will be used. For what, exactly? I don't notice any gains from it, and I don't think they've made great new strides in security each time they release an update.

You would be able to update, but it would be an entirely voluntary process. If you feel your 2010 Firefox (which, likely, would be far less bloated) still works fine, there would be nothing to say otherwise. The maintenance burden would be much lower, and there would be less unexpected breakage from updates going haywire with the system.

What does 'maintained' really mean? If we have some insane maintainer who starts to bundle malware with his software, of course he wouldn't be a maintainer for very long. But what if he starts including unpleasant features, requiring hundreds of megabytes of dependencies, and making the software altogether slow? Clearly, it should be up to us to judge whether we want to install that or not. As it is now, due to the extreme interdependency of parts, one doesn't really have a choice but to update.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-16, 16:02 in I have yet to have never seen it all.
Stirrer of Shit
Post: #211 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Paul Calder Le Roux, author of TrueCrypt

Man develops state of the art encryption software, posts harsh verbal attacks against Australia as well as racist comments on Usenet, gets involved in international crime and makes millions of dollars, then plea deals himself into twelve years for seven murders

He's like a real life Bond villain. Why hasn't anyone made a movie about this guy's life yet? Presumably, people would complain that it's too "unrealistic". Heck, even if he'd be the villain in a Bond movie people still would complain he's too unrealistic.

I know he's a criminal, but still, there's something oddly respectable about the story. Who the hell even builds a private army to invade the Maldives?

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #212 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by jimbo1qaz
My chosen solution (which doesn't require every user to munge around and replace/override their system libraries) was to statically compile and link in libgme. (Sadly this is uncommon and difficult to achieve in Linux.)

How do you mean it's difficult to achieve?

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-16, 21:30 in SNES HD mode 7
Stirrer of Shit
Post: #213 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by Broseph
So if I got this right what we're seeing in the higher res pics is effectively the original artwork/tile map or whatever before they get scaled down by the hardware somehow?

Pretty damn impressive either way.

Well, it doesn't get scaled down, just subsampled: it takes one pixel and uses it to represent a wider area, rather than taking an average, which is what usually happens when you scale down. So even if you scaled it down to native resolution in the end, it would still look "better" than naïve/fast subsampling.
(Better is subjective; in some games, like F-Zero, "proper" scaling in my opinion looks like trash, because the graphics were drawn so they'd look good with "bad" subsampling.)
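
To spell out the difference, a tiny sketch (grayscale, one row, integer shrink factor - all simplifying assumptions) of the two ways to shrink:

/* Nearest-neighbour subsampling: keep one source pixel per output pixel. */
void subsample_row(const unsigned char *src, unsigned char *dst, int dst_w, int factor)
{
    for (int x = 0; x < dst_w; x++)
        dst[x] = src[x * factor];
}

/* Box filter ("proper" scaling): each output pixel averages 'factor' source pixels. */
void average_row(const unsigned char *src, unsigned char *dst, int dst_w, int factor)
{
    for (int x = 0; x < dst_w; x++) {
        unsigned sum = 0;
        for (int i = 0; i < factor; i++)
            sum += src[x * factor + i];
        dst[x] = (unsigned char)(sum / factor);
    }
}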

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #214 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
The legacy versions with disk drives won't be very useful if no new games are released for them, or if a software update disables it for security/piracy reasons. Buying and downloading games is just an intermediate step anyway, soon you'll only be able to subscribe to an unlimited plan.

As for the Internet, no problem. They'll just ask the major US ISPs to bump up the speed. Because the console can download at off-peak hours without having to leave the carrier's net, it should be able to get far higher speeds than normal, and without counting against data caps. With good technology, you should be able to "stream" the game data to play the first few hours. Remember that games have historically been optimized to fit on an X GB disc; if they're incentivized to be small instead, then they could just compress things harder and decompress them to disk. The end user wouldn't notice anything but slightly longer first-run load screens, and would have the impression of being able to play any game he or she wanted at any given time.

The customers would of course welcome it - "no more dealing with CDs and DVDs that might get lost or damaged, just you and your infinite games." No more having to buy games for little Timmy, because gracious Microsoft already took care of the matter and gave him all of them for free.

Also think of the opportunities for developers: being able to dynamically patch the game whenever they feel like it, and to do profile-guided optimization or A/B testing with near-constant feedback. Game balance could be dynamically adjusted to maximize engagement, the only important metric. People who play more could get better "RNG" as well. Lots of interesting things you could do with that sort of data, really - just look at YouTube and how they make sure everyone gets interesting recommendations.

Clearly, it's a win-win situation for everyone involved.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-17, 21:44 in I have yet to have never seen it all.
Stirrer of Shit
Post: #215 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
The best part is probably this:
Q. And, for example, you ordered the murder of a Filipino customs agent, correct?
A. That’s not correct.
Q. What is not correct about it?
A. The individual wasn’t a customs agent.


There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-17, 23:04 in SNES HD mode 7
Stirrer of Shit
Post: #216 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by jimbo1qaz
xBR https://kayo.moe/5dG7cBao.png
xBRZ https://kayo.moe/OELJLahD.png

Those links don't appear to work.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-26, 18:59 in Something about cheese! (revision 1)
Stirrer of Shit
Post: #217 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Hey, you know that meme with the Internet getting split up into different packages?

That one.

Posted by https://www.tomshardware.com/news/cox-elite-gamer-internet-fast-lane,39176.html
Cox Introduces 'Elite Gamer' Internet Fast Lane

...

Cox Communications['] ... new Cox Elite Gamer service ... allows its customers to pay an extra $15 per month to make sure their connections to multiplayer game servers are handled as optimally as possible.

...

Cox said on the Cox Elite Gamer website that the service offers 34 percent less lag, 55 percent fewer ping spikes, and 45 percent less jitter than its existing service. That's because traffic for specific games--including Apex Legends, Overwatch, and Fortnite--will be routed through a gaming-specific network. This also means that people who pay for Cox Elite Gamer won't suddenly have faster connections to other sites and services.

...

Those benefits and hindrances were previously hard to work around. Some companies offer their own network optimization tools for specific games, but generally speaking, the options were to find a different service provider or deal with the less-than-stellar connection. Cox Elite Gamer changes that.

To make things even funnier, I can't visit their website. It's geo-blocked for anyone who doesn't have an IP from (presumably a specific region of) the US. I tried with a few different proxies, but none seemed to work. Brave new world.
Someone on reddit took some screenshots though. I mean, don't you want to "get more megs from your modem"?

(It ought to be noted that this likely has nothing to do with net neutrality, judging from the Reddit comments - my understanding is that only ISPs are obliged to treat traffic equally, and that an unrelated third party can provide private "fast lanes" while ISPs merely sell the software for using them.)

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-26, 21:00 in Board feature requests/suggestions (revision 1)
Stirrer of Shit
Post: #218 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Sometimes when you post an image, it forces the post body to swell up, giving you an ugly horizontal scroll bar, and also cutting off the image.

Could some directive like this be added to the board CSS?
/* keep posted images no wider than the post cell */
table td.post img
{
    max-width: 100%;
}

On the previous board, I recall there was a rule like "don't post obnoxiously big images", but this would fix the problem for people who post images that are big but not quite obnoxiously big.

Here's an example of what I mean:


There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Stirrer of Shit
Post: #219 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Amazon is fighting South America over the “.amazon” domain

gTLDs were a mistake.

Why does anyone need to "govern" a TLD anyway? For ccTLDs I understand it, and it's perfectly reasonable. But for .com, .org, or .biz, there's no requirement to be a company, organization, or business respectively. .mil and .int are special cases, I suppose.

What value does this add to society? Amazon gets to have a cool domain hack, and parsing URLs becomes extremely difficult?

You used to be able to type google.com and it'd get linked. Now, how are you supposed to know without also flagging a false.positive from people fat-fingering their smartphones? You could try to match it to a gTLD, but it's only a matter of time before someone registers .positive.

(Also, I love that there's a Wikipedia article named 2018 Samsung fat-finger error)

I mean, I can see the appeal of .tokyo. But couldn't they just set up .tokyo.jp without having to involve the ICANN?

You can see the full list with usage stats here - 338 have only one domain registered. Can you find a single useful one?

My suggestion is that .bananarepublic be expropriated for use as a government domain, in exchange for Amazon Inc. getting .amazon.

There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
Posted on 19-04-27, 01:00 in Something about cheese!
Stirrer of Shit
Post: #220 of 717
Since: 01-26-19

Last post: 1554 days
Last view: 1552 days
Posted by Covarr

This assessment is probably correct. That doesn't make this not-scummy, though; this very much carries the stink of snake oil with it.

Well, they're either scamming people or screwing them over. But the former doesn't cause any direct damage to the bulk of their customers, at least in theory.


There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.