byuu's message board

For discussion of projects related to www.byuu.org/


bsnes-mercury 
Author Message

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
> I hope this is enough of my fault to take it personally

Since no actual technical critiques were raised, I wouldn't worry about it. Cheap shots are meaningless.

Probably just hating on C++. That's really trendy these days.

Mon 28 Jul 2014, 03:54:29

Joined: Tue 21 Feb 2012, 05:42:15

Posts: 2564
Post Re: bsnes-mercury
I currently have a quote from Bjarne set as my signature, so it's probably not hard to guess my feelings on the situation. Here's another good Bjarne quote, this one a bit more relevant to the issue at hand:
Quote:
I have yet to see a program that can be written better in C than in C++.
C++ is ugly. Bloated. Archaic. Whatever. I don't care, and neither does any seasoned C++ coder. I've come to realize that programming languages should serve to help you, not restrict you. C++ provides you with an amazingly wide array of problem-solving tools, many of them coming at effectively no runtime cost. When I go to other languages, I find myself strangled by the intended usage. I'll take automatic storage and otherwise manual memory management over garbage collection any day, hands down. Have you ever tried doing OpenGL programming in C#? You can't do RAII because the destructor runs in another thread in an unspecified order! What would have been really simple is now mind-numbingly complicated because the language does not offer a tool to solve this particular problem. And reflective programming is really neat until you factor in how much overhead it has - sure, it won't matter for hello world, but when you're talking about a web server it's nice to know that your template meta-programming compiles down to ideal machine code.

If you are a C purist, consider for a moment that you could just write a C++ program in the subset of C features it provides. Or, you could take advantage of C++ features to solve problems, simplify things, etc. And on that note, C++ is not really an "object oriented" programming language - you don't have to make it look like some Java over-engineered piece of shit for it to compile. (In fact, my primary logic is often simply procedural, making use of classes but not being bound by them.)
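
For what it's worth, here's a minimal sketch of what RAII buys you, in plain standard C++ wrapping a C FILE* (the file name is made up, this isn't from any real project):
Code:
#include <cstdio>
#include <stdexcept>
#include <string>

class File {
public:
    File(const char* path, const char* mode) : handle(std::fopen(path, mode)) {
        if (!handle) throw std::runtime_error(std::string("cannot open ") + path);
    }
    ~File() { std::fclose(handle); }       // deterministic cleanup on every exit path
    File(const File&) = delete;            // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return handle; }
private:
    std::FILE* handle;
};

int main() {
    try {
        File f("example.txt", "r");        // made-up file name
        char line[256];
        while (std::fgets(line, sizeof line, f.get()))
            std::fputs(line, stdout);
    } catch (const std::exception& e) {
        std::fprintf(stderr, "%s\n", e.what());
    }
}                                          // if the open succeeded, ~File() already ran here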

_________________
"It's easy to win forgiveness for being wrong; being right is what gets you into real trouble." --Bjarne Stroustrup

Mon 28 Jul 2014, 05:13:19

Joined: Tue 31 May 2011, 22:39:35

Posts: 348
Post Re: bsnes-mercury
jchadwick wrote:
I currently have a quote from Bjarne set as my signature, so it's probably not hard to guess my feelings on the situation. Here's another good Bjarne quote, this one a bit more relevant to the issue at hand:
Quote:
I have yet to see a program that can be written better in C than in C++.
C++ is ugly. Bloated. Archaic. Whatever. I don't care, and neither does any seasoned C++ coder. I've come to realize that programming languages should serve to help you, not restrict you. C++ provides you with an amazingly wide array of problem-solving tools, many of them coming at effectively no runtime cost. When I go to other languages, I find myself strangled by the intended usage. I'll take automatic storage and otherwise manual memory management over garbage collection any day, hands down. Have you ever tried doing OpenGL programming in C#? You can't do RAII because the destructor runs in another thread in an unspecified order! What would have been really simple is now mind-numbingly complicated because the language does not offer a tool to solve this particular problem. And reflective programming is really neat until you factor in how much overhead it has - sure, it won't matter for hello world, but when you're talking about a web server it's nice to know that your template meta-programming compiles down to ideal machine code.

If you are a C purist, consider for a moment that you could just write a C++ program in the subset of C features it provides. Or, you could take advantage of C++ features to solve problems, simplify things, etc. And on that note, C++ is not really an "object oriented" programming language - you don't have to make it look like some Java over-engineered piece of shit for it to compile. (In fact, my primary logic is often simply procedural, making use of classes but not being bound by them.)


While I can't speak to the specific RAII problem (or C# OpenGL coding), I think the general consensus is that if you are coding in such a way that the order your finalizers run in (they're called destructors in C#, for god knows why) matters, you are "doing it wrong". As far as I know the whole thing stems from a difference in terminology in the original spec. You can still get deterministic behavior by calling Dispose(), or by sticking it in a using() block. Eric Lippert talked a little bit about it here.

When you get down to it, the whole thing stems from the design goals of C# as a language. Eric Lippert (yes, again! His blog was great for explaining the logic behind the weird decisions in C#) has a blog post where he goes into the whole idea of the "pit of despair" vs. the "pit of success" here. He actually touches on the non-deterministic finalizers in that post as well.
There are a lot of things that you can do in C#, but you then lose the benefit of the language. You can use Marshal to allocate non-managed memory in C#, and then do pointer math on it. This does of course increase performance a massive amount, but then you've lost much of the benefit of using C# in the first place. The goal of C# was (partly) to reduce undefined behavior and all the gotchas that let you shoot yourself in the foot. We have now, for better or worse, reached the point in hardware where people can legitimately argue that for most applications it's not worth the extra developer cost/time and potential bugs to use a language like C. After all, even the best programmers can accidentally create things like buffer overflows when their programs get complicated enough.

Additionally, much of your C vs C++ argument could be applied to C++ vs <insert another language here: C#, D, Python, anything>. Pick the right language and you could probably just do a regex replace on the paragraph.

I will also say that your web server example is a strange one. That's one of the places where I would say your language of choice matters least. If you are going for high concurrency, it's not going to be the web server that does most of the workload. Most of the scaling you are going to do is going to be with things like Varnish, or Redis. Yeah, those are written in C, but that still has nothing to do with your templates. Languages in the web world now generally seem to be chosen based on ease and speed of development, not razor-thin performance; and rapid development isn't exactly something C/C++ excels at.
For example, take Stack Exchange, which Alexa ranks around #50 in world traffic. They are very open about how they have their network set up. They are very largely a C# shop, and run it all off of 25 servers. Not 25 web servers. 25 servers. Granted some of their servers are ultra beefy, but still. They scaled up and not out on C#, and it's working out fine for them.
Also, as much as it pains me to admit, as far as I know if you are trying to set records for the highest concurrency you possibly can with your web sockets and/or web server, you are going to be using something on top of the JVM. Vert.x or http-kit or something. As far as I know, the problems people are running into now with concurrency are kernel-based, not language-based. The kernel chokes trying to pass that many connections to the application. I believe there are drivers that exist now solely to bypass the kernel problems, but at that point you're into the realm of theoretical ridiculousness, such as 80 million concurrent connections or something. If I need to service 80 million concurrent connections, I am probably making enough money that I could afford a whole second server.


Also, just as a side point: while reflection in C# does of course have a huge overhead, you can get around it for the most part. You can use Emit to output IL directly to create your delegate, and then cache that for reuse later. Yeah, it's not quite the inline-assembly-level stuff we're talking about here, but it's still plenty fast for most situations you are going to run into. You can most definitely tune the output of the compiler if you find the IL it's giving you is doing something strange.


(I am also not anti C++. Different tools in your toolbox etc. There are plenty of applications where a language like that is required. I wish I spent more time with the language so I was better at it.)

Tue 29 Jul 2014, 02:53:05

Joined: Tue 21 Feb 2012, 05:42:15

Posts: 2564
Post Re: bsnes-mercury
noiii wrote:
While I can't speak to the specific RAII problem (or C# OpenGL coding), I think the general consensus is that if you are coding in such a way that the order your finalizers run in (they're called destructors in C#, for god knows why) matters, you are "doing it wrong". As far as I know the whole thing stems from a difference in terminology in the original spec. You can still get deterministic behavior by calling Dispose(), or by sticking it in a using() block. Eric Lippert talked a little bit about it here.
I am aware of the Dispose pattern. I would argue it is not a very good solution to the problem. It was a significant ordeal to try to use it as I would normally use RAII.

noiii wrote:
When you get down to it, the whole thing stems from the design goals of C# as a language. Eric Lippert (yes, again! His blog was great for explaining the logic behind the weird decisions in C#) has a blog post where he goes into the whole idea of the "pit of despair" vs. the "pit of success" here. He actually touches on the non-deterministic finalizers in that post as well.
Sure. C# as a language might have those goals. Doesn't really change the fact that I merely find myself limited. My particular example is a recognized limitation of the language.

noiii wrote:
There are a lot of things that you can do in C#, but you then lose the benefit of the language. You can use Marshal to allocate non-managed memory in C#, and then do pointer math on it. This does of course increase performance a massive amount, but then you've lost much of the benefit of using C# in the first place. The goal of C# was (partly) to reduce undefined behavior and all the gotchas that let you shoot yourself in the foot. We have now, for better or worse, reached the point in hardware where people can legitimately argue that for most applications it's not worth the extra developer cost/time and potential bugs to use a language like C. After all, even the best programmers can accidentally create things like buffer overflows when their programs get complicated enough.
I can answer this one with another Bjarne quote.
Quote:
I do not think that safety should be bought at the cost of complicating the expression of good solutions to real-life problems.
I don't buy into the idea that it's worth my time to be using a language that tries to save me from myself. Sure, I'm human and make mistakes. But I make mistakes in C#. In Java. In Python. In Ruby. It doesn't matter. If it's a matter of security, there are a million other ways to secure something. Modern processors and operating systems are fucking packed with security features. And yet we often aren't even using those features on our outward-facing devices. Ironically, it seems we are more concerned about web browsers than web servers.

noiii wrote:
Additionally, much of your C vs C++ argument could be applied to C++ vs <insert another language here: C#, D, Python, anything>. Pick the right language and you could probably just do a regex replace on the paragraph.
No it doesn't. If you think that's the case, then you missed my point, and misunderstood Bjarne's quote. C++ is (nearly) a FULL SUPERSET of C. That's kind of the damn point, because it means hey, either you COULD stick to "just C," or you could use a template here to do this without duplicate code. If you're not picking your language based on politics, or for some out-of-the-picture practical matter, it's probably not a difficult decision anymore.


noiii wrote:
I will also say that your web server example is a strange one. That's one of the places where I would say your language of choice matters least. If you are going for high concurrency, it's not going to be the web server that does most of the workload. Most of the scaling you are going to do is going to be with things like Varnish, or Redis. Yeah, those are written in C, but that still has nothing to do with your templates. Languages in the web world now generally seem to be chosen based on ease and speed of development, not razor-thin performance; and rapid development isn't exactly something C/C++ excels at.
nginx, Apache, lighttpd... all written in C. And yes, it is about "razor-thin" performance. Because every single request relies on it. That's the point!

noiii wrote:
For example, take Stack Exchange, which Alexa ranks around #50 in world traffic. They are very open about how they have their network set up. They are very largely a C# shop, and run it all off of 25 servers. Not 25 web servers. 25 servers. Granted some of their servers are ultra beefy, but still. They scaled up and not out on C#, and it's working out fine for them.
...OK. I'm no expert, but I am pretty sure you are confusing "application server" with "web server." If you aren't, then it sounds like Stack Exchange has decided to merge the two.

But it's also worth considering that the effectiveness of caching is almost more important than the web server here. So it's really the caching layer that is probably taking most of the load. And their server efficiency sounds good on paper mostly in comparison to websites that throw tons of servers at the problem, which might make it look better than it really is. Note that I'm not saying it isn't impressive. But how are we supposed to know the true power of 25 beefy servers?

I do not typically use C++ for my APPLICATION servers. However, I only use web servers written in C/C++, with "razor-thin" performance optimizations. My guess is that's the same for Stack Exchange, which is probably using IIS.

noiii wrote:
Also, as much as it pains me to admit, as far as I know if you are trying to set records for the highest concurrency you possibly can with your web sockets and/or web server, you are going to be using something on top of the JVM. Vert.x or http-kit or something. As far as I know, the problems people are running into now with concurrency are kernel-based, not language-based. The kernel chokes trying to pass that many connections to the application. I believe there are drivers that exist now solely to bypass the kernel problems, but at that point you're into the realm of theoretical ridiculousness, such as 80 million concurrent connections or something. If I need to service 80 million concurrent connections, I am probably making enough money that I could afford a whole second server.
I have heard of people needing kernel mode on Windows, but not so much on Linux. Windows has the disservice of a completely broken user-mode sockets API. The author of g-wan, crazy or not, did actually write a bit about this, and I believe them.

noiii wrote:
Also, just as a side point: while reflection in C# does of course have a huge overhead, you can get around it for the most part. You can use Emit to output IL directly to create your delegate, and then cache that for reuse later. Yeah, it's not quite the inline-assembly-level stuff we're talking about here, but it's still plenty fast for most situations you are going to run into. You can most definitely tune the output of the compiler if you find the IL it's giving you is doing something strange.
When I use C#, I usually don't have a problem with the performance of reflection. But that's primarily because I use C# in places where I don't expect performance to reasonably become an issue.
However, it did ONCE become an issue, when reflection ended up in a file I/O loop. lol.

noiii wrote:
(I am also not anti C++. Different tools in your toolbox etc. There are plenty of applications where a language like that is required. I wish I spent more time with the language so I was better at it.)
It is worth it. Trust me. I am not as much of a C# nut as you, so I don't appreciate C# for all that it is. But there is a reason I wanted to do OpenGL work in C#: it's a great language for rapidly prototyping crap. So it's not like I hate C#. On the other hand, I have come to understand why C++ has such a strong backing from people who swear by it. As an example, C++ as a language does not give you safety, but if you want safety you can practically get it by wrapping all of your unsafe things. How often do I malloc/free in C++? Almost never. new/delete? Rarely. Why? Because I don't have to a lot of the time. Smart containers/pointers, throwing shit on the stack, automatically reference-counted classes (like Qt's copy-on-write "implicit sharing" framework), and so on often eat the allocation work in a controlled manner. RAII tends to work extremely well, which is, of course, the primary reason I talk about it so much. Compared to C, it is my favorite feature, as it simplifies my control flow to a great degree.
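
As a trivial sketch of the "almost never new/delete" point (just standard-library containers and smart pointers, not code from any real project):
Code:
#include <memory>
#include <string>
#include <vector>

struct Widget {
    std::string name;
};

int main() {
    std::vector<int> samples(1024, 0);             // heap buffer, freed automatically
    auto w = std::make_unique<Widget>();           // single owner, freed automatically
    w->name = "example";

    std::vector<std::unique_ptr<Widget>> widgets;  // a container of owners
    widgets.push_back(std::move(w));
    samples[0] = static_cast<int>(widgets.size());
}   // everything deallocates here, in reverse order of construction, no delete in sight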

_________________
"It's easy to win forgiveness for being wrong; being right is what gets you into real trouble." --Bjarne Stroustrup

Tue 29 Jul 2014, 08:18:03

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
My garbage collection analogy:

C++ is like a tidy person. You take out some food, eat it, and then you clean the dishes and throw away the trash when you are done. Your house always looks clean.

Java (or C#) is like a slob. You take out some food, eat it, and then leave it there to fester and rot and mold. You keep doing this and the garbage just piles up all over your house. It smells bad. Then occasionally you pay a maid to come in and clean up after you. It costs you a good amount of money, and the maid wants you gone while she cleans, so you have to stop what you're doing and go somewhere else for a while.

It's all so pointless, too. I've had one memory leak in the last ten years: I was messing with some new allocation strategies in nall::string, and I commented out the free() call in my destructor while I tested copy-on-write and SSO patterns, and forgot to re-enable it. But it was immediately apparent once I got around to running valgrind.

As jchadwick says, 98% of cases work just fine with RAII. Any objects created inside a block normally get destroyed at the end of the block. Need it beyond a block? Leave it in a class inside a longer-living class, right up to the main class that runs your entire application. Zero magic is necessary, and you have deterministic collection. For the remaining 1.999%, you have shared pointers (reference-counting GC). These work just fine as well, with very, very little red tape. And they're so rarely needed, I think I've used them about a half dozen times in my career. The final case is when you need cyclical pointers, which requires a tracing GC. But if GC proponents can say, "just use the dispose pattern" when it comes to the most common case, bar none, then I can say, "just use weak pointers" (pointers that don't increase the reference counter) when it comes to the rarest possible case.
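
To be concrete about the weak pointer case, here's a minimal sketch using the standard library's spelling of the idea (std::shared_ptr / std::weak_ptr; the Parent/Child names are purely illustrative):
Code:
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;   // owning edge: keeps the child alive
};

struct Child {
    std::weak_ptr<Parent> parent;   // back edge: does NOT bump the reference count
};

int main() {
    auto parent = std::make_shared<Parent>();
    parent->child = std::make_shared<Child>();
    parent->child->parent = parent; // the "cycle" exists, but only one strong edge per direction

    if (auto p = parent->child->parent.lock()) {
        // parent is still alive here, so lock() hands back a temporary strong reference
    }
}   // parent's count drops to zero (the back edge is weak), so Parent dies and releases Child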

The argument for things like forced GC is that it makes safer programs. Bullshit. Sure, maybe it catches my string leak. But the assumption is that by dumbing down programming, you're making things easier for programmers. But the opposite happens: you get even dumber programmers as a result. You've lowered the bar to programming, and let in even more terrible people. And now you have these Java types who don't understand basic big-O notation and constantly write O(n^2) functions for O(n) operations like string-replace. And they make new, fun errors that have much wider implications. Believe me, this is not a road you want to see to its logical conclusion of "now everybody's a programmer!" Now I know you might say, "you're just being elitist about your hobby!" Again, bullshit. Not everybody should be a programmer, just like not everybody should be an open heart surgeon, nor an electrician, nor an auto mechanic. You want a toy language for amateurs, the types that change their car's oil, install their own ceiling fans, and self-diagnose on WebMD? Fine. But keep it away from where the professionals work.

Programming should take years to master, and be expensive to hire for. Because software runs the entire world's economy. It's important. People entrust their lives and financial security to it. That's not to be trifled with. Use your Java on your hobbyist projects, but keep it off my workstation and server.

When it comes to security:

I really am sick of the state of it. It's to the point where I am terrified to install applications on Windows, on my cell phone, and even on my Roku box (which has a few paid logins and the ability to make purchases on my CC).

And it's all so fucking stupid. It's so easy to solve security. And I mean totally solve it. But nobody's ever willing to just start over with a new paradigm in design except for me, it seems. But that's what you have to do.

You start over with the OS: there's this sense that security means protecting the machine's kernel and OS files. Yes, that's certainly important. But they just have this huge "not my problem" attitude when it comes to the part of the OS that's most essential to everyone else: all of your personal files. If I install an application that claims to be a photo editor, it should not have ANY access to any files except for photos. The kernel should enforce this. Even then, the user should have some diligence in specifying which folders it can open photos in.

We don't need file extensions, and we don't need MIME types that try to guess based on file headers. We need filesystem metadata that flags something as a photo. Yeah, it won't survive a transfer to a different file system; too bad.

We don't just restrict access to the file system. We restrict access to opening ports (no more remote holes). To sending data across the network (no more stealing data). To accessing certain hardware devices (no more software watching you on your webcam). If it's something that might be sensitive, we protect it. If an application needs these things, the user has to be damn sure it's happening, and the OS must go out of its way to broadcast that the app has enabled those features. Right on the title bar, little permissions icons.

We have to prevent a situation like with Chrome extensions and Android apps. I install an extension called "backspace takes you back a page", and Chrome tells me it needs access to all of my web browsing history, full access to modify all web pages, access to all of my cookies, and remote connectivity. And it's not just that extension. It's fucking everything. Everything always asks for permissions to everything. And people just accept it. No. If a photo app says it REQUIRES network connectivity, then you reject it from your software repository/store. Full stop. You want optional updates? Fuck you, go through the normal OS update channel. Worst case, make it optional. Not required.

Next, we don't need a new programming language (Java, C#) and a new virtual machine (JVM/.NET). We don't need native and virtualized binaries to be different. All we need is two executable launchers. One runs a program right on the metal; you only use it for things that you really trust, that require a lot of CPU power (e.g. to run at full speed, or to finish quicker), and that aren't in a position to be exploitable (no open ports) ... emulators and movie encoders/decoders, for instance. The other is for everything else: web browsers, third-party applications, things you don't trust ... all of that runs inside native processor virtualization. Like VMware/VirtualBox/Parallels in "seamless mode", where the virtual window shows up right on your desktop instead of in another window.

The virtualized windows should have a special title bar color to identify them. The user can override this and run whatever they really need to natively, at their own risk. The virtualization should come with significant restrictions that reinforce the kernel restrictions.

We continue to add all of the security layers we can anyway. ASLR, NX, stack protection, etc.

But when was the last time a program you ran inside Virtualbox rooted your system? Never, right? It's theoretically possible (and in rare research instances, demonstrated), but the hoops to do it are absolutely immense. Each additional protection makes it harder and harder to beat them all together.

Tue 29 Jul 2014, 15:08:40

Joined: Tue 31 May 2011, 22:39:35

Posts: 348
Post Re: bsnes-mercury
jchadwick wrote:
It is worth it. Trust me. I am not as much of a C# nut as you, so I don't appreciate C# for all that it is.


Oh, I know it's often worth it. Obviously I am not as much of a C++ nut as you, and can't appreciate it as well as you can. Wish I were, but my career has never taken me down the path far enough where I've really had to do or debug anything complicated. I've spent more time debugging C than C++, and even then I wouldn't call myself well versed in it. As with any language, its features and designs become "better" and more familiar the more time you spend with it. I was just sharing some of my experience with it, and information you may or may not have known.

Quote:
I do not think that safety should be bought at the cost of complicating the expression of good solutions to real-life problems.

I agree with the premise, but couldn't one argue that using C++ over another, more "modern" language inherently complicates the expression of a good solution? One could theoretically express the solution much more concisely in another language than in C++. I'm sure Haskell (or other functional language) people would be all over that argument.

jchadwick wrote:
No it doesn't. If you think that's the case, then you missed my point, and misunderstood Bjarne's quote. C++ is (nearly) a FULL SUPERSET of C. That's kind of the damn point, because it means hey, either you COULD stick to "just C," or you could use a template here to do this without duplicate code. If you're not picking your language based on politics, or for some out-of-the-picture practical matter, it's probably not a difficult decision anymore.


That is why I said if you pick the right language. I've only read the language overview, and some intro-to stuff for D, but it seemed like it was kind of superset-y?

jchadwick wrote:
But it's also worth considering that the effectiveness of caching is almost more important than the webserver here. So it's really the caching layer that is probably taking the most damage. And it's worth noting that their server efficiency sounds good on paper versus other websites that throw tons of servers at things as solutions to the problem, which might mean it's not actually that impressive. Note that I'm not saying it isn't impressive. But how are we supposed to know the true power of 25 beefy servers?


That was kind of my point in your quote before this, and this one. It's the cache layer that tends to actually matter. Which was all sparked by your comment about template meta-programming on a web server compiling down to ideal machine code. Maybe in my mind, a template on a web server means something different. I don't know anyone who writes the content to be served to their users in C++. They use all sorts of things for that. From PHP, to JS, to Ruby, to Java, but not C++. You can't prototype fast enough.

Also, Apache isn't exactly known for being lightning quick. Yes, nginx is widely used because it is lightning quick, but if you are going for concurrency you are using it in combination with something else. The examples I listed are programmed in Java and Clojure (on the JVM). Vert.x is actually polyglot, so you can write in a number of languages; you don't need to use Java if you don't want to (although you lose a whole lot of performance that way).

jchadwick wrote:
I have heard of people needing the kernel mode on Windows, but not Linux so much. Windows has the disservice of a completely broken user-mode sockets API. The author of g-wan, crazy or not, did actually write a bit about this, and I believe them.


A quick Google search surfaces this link. They also might be crazy, but I've at least heard of the site before.



I am now late for work, but here's a quick response to byuu, whose post I only hastily read:
That analogy is extremely one-sided. True or not, I look at that and say something like: "Okay, so in the C++ house, I only ever use a tiny section of it, and I have to clean it all myself. What a waste of a house, and of my time. While in the Java house, I can use the whole house. Then when I make a mess of one room, I can go into a different room and some maid will come and clean it all up for me while I am doing something else! Awesome!"
There is probably something better to say about that, but gotta go!

Tue 29 Jul 2014, 15:58:35

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
Garbage collection doesn't freeze a very tiny portion of your program. It freezes the whole thing.

And sure, maybe you think a five second pause every few hours is reasonable. But what if that starts after someone just submitted their order to you? I don't want them sitting there panicking for five seconds. Maybe they hit submit again, maybe they close the window, regardless ... it looks bad on our part.

The real point isn't even the analogy. It's that it's wholly unnecessary.

Tue 29 Jul 2014, 16:07:09

Joined: Wed 26 May 2010, 19:48:00

Posts: 708
Post Re: bsnes-mercury
Not necessarily. Concurrent garbage collection algorithms exist.

Tue 29 Jul 2014, 16:09:10

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
Well that's better, at least. Still a waste of resources, but not as bad as a full stop.

Are they in JVM 1.5+? That's what we use at work, and we constantly have our web server apps deadlock on us with the CPU pegged out at 100%.

Tue 29 Jul 2014, 16:24:37

Joined: Wed 26 May 2010, 19:48:00

Posts: 708
Post Re: bsnes-mercury
JVM 1.5+ should have UseConcMarkSweepGC. I'm not terribly knowledgeable about JVM configuration though, so maybe someone else should comment on that.

Tue 29 Jul 2014, 17:11:32

Joined: Wed 09 Nov 2011, 04:15:47

Posts: 84
Post Re: bsnes-mercury
byuu wrote:
We don't need file extensions, and we don't need MIME types that try and guess based on file headers. We need filesystem metadata that flags something as a photo. Yeah it won't work if you transfer it between a different file system, too bad.


This had been my big hope back when Gates was talking up the DB filesystem (WinFS) they had initially planned for Longhorn/Vista. Too bad that got sidelined, and I don't think it has been heard of since. If folks have any more recent blogs or other articles on what became of it, I'm interested.

BTW, I really hate how Apple handles their metadata, spewing .DS_Store files everywhere.

byuu wrote:
No. If a photo app says it REQUIRES network connectivity, then you reject it from your software repository/store. Full stop. You want optional updates? Fuck you, go through the normal OS update channel. Worst case, make it optional. Not required.


Ah, if only Windows supported a proper update mechanism. Even on Windows 8, it's only the OS and other Microsoft components that are updated via the built-in update mechanism. I thought that would change with the Windows 8 app store, but I'm not clear that it has (as most of the apps I use aren't listed there yet). I dislike that I have to either track updates manually or use something like Sumo to maintain my other software.

_________________
Eyecandy: Turn your computer into an expensive lava lamp.

Tue 29 Jul 2014, 17:30:09

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
We could probably fake filesystem metadata to some degree now.

I don't think you need a direct DB filesystem to do these things. You just need a DB stored on the filesystem somewhere, and a filesystem monitor (lots of APIs for this on every OS) to tell you when files have changed. And then file manager capabilities to tag file types. And your software should write out this metadata where it can, e.g. GIMP should tag all its saved files as images.

Then you need an archive format that stores this metadata, so that you can transmit files with the information and not lose it. If people started doing that, file extensions would no longer matter.
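
A rough sketch of the monitor-plus-DB half of that, assuming Linux inotify; the "DB" here is just an in-memory map, and the tag is guessed from the file extension purely for illustration (a real version would persist the tags and let applications set them explicitly):
Code:
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <map>
#include <string>

// Guess a tag from the extension; stands in for applications tagging their own output.
static std::string guess_tag(const std::string& name) {
    auto dot = name.rfind('.');
    std::string ext = (dot == std::string::npos) ? "" : name.substr(dot + 1);
    if (ext == "png" || ext == "jpg" || ext == "jpeg") return "photo";
    if (ext == "txt" || ext == "md") return "document";
    return "unknown";
}

int main() {
    std::map<std::string, std::string> tags;   // the "DB": file name -> tag

    int fd = inotify_init1(0);
    if (fd < 0) { std::perror("inotify_init1"); return 1; }

    // Watch the current directory for files being created, written, or moved in.
    if (inotify_add_watch(fd, ".", IN_CREATE | IN_CLOSE_WRITE | IN_MOVED_TO) < 0) {
        std::perror("inotify_add_watch");
        return 1;
    }

    alignas(8) char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) break;
        for (char* p = buf; p < buf + len; ) {
            auto* ev = reinterpret_cast<struct inotify_event*>(p);
            if (ev->len > 0) {                 // len == 0 means the event carries no file name
                std::string name = ev->name;
                tags[name] = guess_tag(name);
                std::printf("tagged %s as %s\n", name.c_str(), tags[name].c_str());
            }
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
}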

Tue 29 Jul 2014, 19:40:10

Joined: Thu 22 Mar 2012, 04:37:56

Posts: 502
Post Re: bsnes-mercury
byuu wrote:
Then you need an archive format that stores this metadata

beat 2000: The world's first arbitrary-metadata capable archive format! Also capable of patching your games!

Tue 29 Jul 2014, 19:44:15

Joined: Tue 21 Feb 2012, 05:42:15

Posts: 2564
Post Re: bsnes-mercury
noiii wrote:
That is why I said if you pick the right language. I've only read the language overview, and some intro-to stuff for D, but it seemed like it was kind of superset-y?

But C++ features often compile down to ideal machine code. There are situations where overhead might matter (mostly with regard to the object system), but even there the machine code is about as optimal as you'll get for the flexibility granted. For a language this low level, it can be optimized a great deal. And compiler and operating system support for C++ is probably now better than for C, what with Microsoft supporting C++ to a much greater degree than C (C++11 versus C11, for example).
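
As a small illustration: qsort goes through a function pointer for every comparison, while std::sort takes the comparator as a template parameter that the compiler can see and inline. Not a benchmark, just a sketch of the two styles:
Code:
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// C-style comparator: always invoked through a function pointer, opaque to the optimizer.
static int compare_ints(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

int main() {
    std::vector<int> v = {5, 3, 9, 1, 7};

    // C style: element size and comparison both resolved at runtime.
    std::qsort(v.data(), v.size(), sizeof(int), compare_ints);

    // C++ style: the comparison is a template parameter, visible to the optimizer and inlinable.
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });

    for (int x : v) std::printf("%d ", x);
    std::printf("\n");
}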

What I'm trying to say is, it is pretty hard to find an argument for C over C++ that isn't merely attacking the language for being bloated or ugly.

_________________
"It's easy to win forgiveness for being wrong; being right is what gets you into real trouble." --Bjarne Stroustrup

Tue 29 Jul 2014, 19:50:28

Joined: Fri 10 Apr 2009, 15:00:08

Posts: 13668
Post Re: bsnes-mercury
wareya wrote:
byuu wrote:
Then you need an archive format that stores this metadata

beat 2000: The world's first arbitrary-metadata capable archive format! Also capable of patching your games!


You laugh, but that's what beat patch archive (BPA) does. Stores a BML header for extensible metadata.

But obviously you would want to define a strict master set of attributes, and then allow OS-specific extensions.

> What I'm trying to say is, it is pretty hard to find an argument for C over C++ that isn't merely attacking the language for being bloated or ugly.

I don't take anyone seriously once they claim C is superior to C++.

I could certainly see a lightweight version of [C with the important parts of C++], but C is just -way- too limited without so much as namespaces, basic objects, or scoped destructors. It really lives up to its reputation as being a portable assembly language. And assembly is not a very pleasant language when you try and scale it up. So with C, you get projects like GTK+ that basically reinvent C++, but all of its compile-time guarantees become run-time errors and slowdowns. And the syntax is terrifying.

Yet people act like GTK+ in C is superior to just using C++. Ridiculous.

The only part where I really feel like C++ dropped the ball is library usability. There's no official, standard way to control exported function names, and every compiler mangles them in different ways.
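
The usual workaround, for what it's worth, is to pin down the exported boundary with extern "C", which avoids mangling at the cost of giving up overloading on those names. A tiny sketch (mylib_sum is a made-up name):
Code:
#include <vector>

// Plain C++ on the inside.
static int sum_impl(const std::vector<int>& values) {
    int total = 0;
    for (int v : values) total += v;
    return total;
}

// C-friendly, unmangled symbol on the outside; any compiler or language can link against it.
extern "C" int mylib_sum(const int* data, int count) {
    return sum_impl(std::vector<int>(data, data + count));
}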

Tue 29 Jul 2014, 20:07:38
