Logo Pending


This is why you sanitize your inputs, 1983 edition

(This is heavily expanded from a few Twitter posts of mine.)

When you write an application that has to rename a file, you have your chosen language and platform’s standard library to do the heavy lifting for you. For example in C it’s usually int rename(const char* oldName, const char* newName), and a bunch of other languages follow suit. Why not, it’s a good function! But what does rename actually do?

In MS-DOS, this’d be handled by Interrupt 0x21, subfunction AH 0x56. By which I mean it’d set two specific processor registers (as mentioned in Save Early, Save How) to point to the old and new file names, set the AH register to 0x56, and execute the INT 0x21 instruction. A function installed by MS-DOS will then take over, doing the actual renaming, possibly returning an error value which the C function can immediately use as its return value. Since SCI has its own “need-to-use” library…

rename	proc	oldName:ptr byte, newName:ptr byte
	mov	dx, oldName	; ds:dx = old name
	push	ds
	pop	es
	mov	di, newName	; es:di = new name
 
	mov	ah, 56h
	int	21h
 
	.if	carry?
		xor	ax, ax
	.endif
	ret
rename	endp

(Full disclosure: the SCI code actually includes a dos macro to save the programmers some typing. I unrolled it here for illustration purposes.)

All of this pretty much matches what you can find on Ralf Brown’s list. Given a suitable function prototype in C such as the one above, SCI can now call its own rename function as it desires.

Enough about SCI though; its function as a practical example is at an end.

But what if you gave it bad inputs? Sure, if the old name doesn’t refer to an existing file it will return error code 2, “file not found”, but what if the new name isn’t quite valid? Remember, this is MS-DOS; we don’t have the luxury of long file names here. It’s 8.3 or bust. I don’t see any sanity checks in the above function, and Brown’s documentation only speaks of splats.
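
Not that a check would be hard to add on the caller’s side. Here’s a rough sketch in C of what I mean, using my own loose notion of a “plain valid” 8.3 name rather than the full list of characters DOS actually forbids:

#include <ctype.h>
#include <string.h>

/* Returns nonzero if name looks like a plain valid 8.3 file name:
   one to eight name characters, at most one dot, at most three
   extension characters, and no spaces or wildcards. */
int IsValid83(const char* name)
{
	const char* dot = strchr(name, '.');
	size_t base = dot ? (size_t)(dot - name) : strlen(name);
	size_t ext = dot ? strlen(dot + 1) : 0;
	size_t i;

	if (base < 1 || base > 8 || ext > 3) return 0;
	if (dot && strchr(dot + 1, '.')) return 0;
	for (i = 0; name[i]; i++)
	{
		if (name[i] == '.') continue;
		if (name[i] == ' ' || name[i] == '*' || name[i] == '?') return 0;
		if (!isprint((unsigned char)name[i])) return 0;
	}
	return 1;
}

A rename wrapper could simply refuse to touch INT 0x21 if that returns zero. But the one above doesn’t.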

So what happens if we have a file boop.txt and call rename("boop.txt", "hello world.txt")?

In DOSBox, you’d end up with a file hellowor.txt. You are free to further manipulate this file in any way you please. The command line won’t choke on it, file managers won’t get confused. If you wanted to manually rename it back to boop.txt from the command line, ren hellowor.txt boop.txt will work perfectly fine.

This is actually not true in real MS-DOS. If your program were to run on a real MS-DOS installation, you’d end up with hello wo.txt, an 8.3 file with a space in it. And no contemporary file manager I’ve seen can handle that. The ren command built into command.com can’t parse it — ren hello wo.txt boop.txt is three arguments where ren expects only two, and the first isn’t an existing file’s name that it can change to wo.txt.

In cmd.exe of course you can use double quotes to make it unambiguously two arguments, but this isn’t cmd.exe. What about some file managers though? I have two, Norton Commander and its big brother Norton Desktop.

In Norton Commander, the file list shows hello wo.txt, and its rename function can handle it. So can the built-in editor and viewer. Top marks for Norton Commander!

Norton Desktop on the other hand is not so sturdy. It can show the file in the list but that’s all. Trying to rename it back to boop.txt reveals the incorrectness of the source file’s name quite succinctly:

Technically, this is true. You’re not supposed to have spaces in the middle of a FAT 8.3 file name. If a file name has fewer than eight characters before the dot, it’s secretly padded with spaces, and so are the three extension characters. And the dot isn’t even stored — the true name as written in the FAT directory would be BOOP    TXT. But that’s just one way Norton Desktop trips. Its viewer seems to be passed the nonexistent hello. It shrugs and asks which existing file we want to open. Its editor is given the same argument(s?) and lets us edit a brand new file named hello. In Norton Desktop’s world, it can see the file, but it can’t do much with it.

What about a contemporary Windows? Can, let’s say, the Notepad from Windows 3.1 handle this file? Okay, so technically this is commdlg.dll talking, but we’re playing for effect here.

Of course not, what did you expect by now!? Norton Commander only worked because it didn’t care enough! Would you really think one of the companies who made the FAT file system would blithely ignore one of the cardinal rules at the time?

Pshaw!

Next time, we gettin’ hacky.

 

…Wait, hold up. Why does it say 1983 in the title? Well, if you check Ralf Brown’s site, you’ll notice the rename function was introduced in DOS 2, which was first released in 1983. And so was I.


SCI Drivers, how do they work?

Quite well, thank you.

Now, before I begin I should mention that the extended memory and mouse support in SCI is baked into the interpreter itself, and in SCI32 so are the VGA and VESA video drivers. Oddly, the keyboard is not. But anyway!

An SCI DRV file is a piece of standalone code that is loaded on startup and provides a set of standardized routines for the interpreter to call, even if a given feature isn’t technically supported, such as palette rotation on EGA or most things on the PC speaker. Their format is actually pretty straightforward.

The first four bytes of the file are a single instruction that jumps to the entry point routine. Whenever a driver function is called for, the BP register is set to the requested function and that jump is called. The entry point routine then uses a lookup table, indexed on BP, to call the requested function, and returns. Easy as pie, really. But that leaves the format.

The next four bytes are the number 0x87654321. Yeah, backwards. I know, wild times. This would be used to check if a given DRV file really is a valid driver, but SCI11 at least seems not to care.

Following is a byte that specifies what type of driver we’re talking about. The install/setup program uses this to determine which list to show it in. It’s otherwise useless, especially with my own install program. From zero to seven, these types are: video, music, voice, input, keyboard, communication, mouse, and memory.

The next bit of data can go basically anywhere: Pascal-style (length-prefixed) strings that should specify the driver’s file name and description. But some of them are named dude, and one is “ ”. Just a single space. And the descriptions? You wouldn’t be able to tell VGA320.DRV apart from VGA320BW.DRV if you went by this information ‘cos they both say they’re “VGA or IBM PS2, Models 25 & 30 – 256 Colors”. Except one of them is optimized for grayscale-only monitors. So yeah.

Next up is the EXTDRV marker, 0xFEDCBA98, directly followed by a four-byte value whose meaning depends on the type. Again, not actually used, purely for external bookkeeping.

Value  Video            Keyboard  Sound
1      MDA              IBM       Speaker
2      Hercules         Tandy     AdLib
4      CGA              NEC       Sound Blaster/DAC
8      PC Jr.                     Creative Music System
10     Tandy                      Tandy 3-Voice
20     EGA                        Tandy DAC
40     MCGA                       PS/1 3-Voice
80     VGA                        PS/1 DAC
100    CGA two-color              Sound Blaster Pro
200    CGA four-color             MPU-401
400    Explorer                   Disney Sound Source
800                               CD-Audio
1000                              ProAudio FM
2000                              ProAudio DAC
4000                              Windows Sound Source
8000                              No MIDI

These values can be combined, so MTBLAST.DRV for example reports 0x204, or “MPU-401 with Sound Blaster DAC”. Of course, this would only make sense for the sound drivers but what can you do?

After this, the actual driver code resumes with the dispatch table, a list of function pointers to each of the features the driver supports. Some may point to null functions, but none of them are themselves null. Because that’d be bad. Directly after this is all the general-purpose variable memory that the driver may need, all preset. For example VGA320.DRV has a standard palette and “wait hand” cursor of its own that it throws up initially.

There are three functions that all drivers must have: Detect, Initialize, and Terminate, in that order. The rest depends on their type. Detect simply returns a few metrics; for video it’s the number of colors, for music it’s the device ID (to decide which channels to play) and how many voices it can handle. Initialize actually sets up the device, switching video modes and setting up timers or what have you. What Terminate does ought to be obvious.
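
For illustration, here’s a rough sketch in C of how a loader could sanity-check a DRV file against the fixed part of that layout. The function name and the exact checks are my own; as noted above, SCI11 itself doesn’t seem to bother:

#include <stdint.h>
#include <string.h>

/* Bytes 0-3: the jump to the entry point. Bytes 4-7: the backwards
   magic number. Byte 8: the driver type. The strings, the EXTDRV
   block, and the dispatch table all come later. */
int LooksLikeSciDriver(const uint8_t* data, size_t size)
{
	uint32_t magic;
	if (size < 9)
		return 0;
	memcpy(&magic, data + 4, 4);   /* unaligned read, little-endian host assumed */
	if (magic != 0x87654321UL)
		return 0;
	return data[8] <= 7;           /* type 0 through 7: video through memory */
}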

And that’s how SCI drivers work.


Police Quest’s flashing siren lights

The flashing siren lights in the title screens for Police Quest 1 and 3 are sort of interesting, because they are not quite a simple matter of calling (Palette palANIMATE) once or twice. In fact it’s called eight times each frame! Here’s the final result:

And here’s the Script at the heart of it:

(instance cycleColors of Script
  (method (changeState newState)
    ; Fun fact: the switch isn't actually needed.
    ; Not in this use-case.
    (switch (= state newState)
      (0
        (Palette palANIMATE 208 213  1) ;blue in the middle
        (Palette palANIMATE 213 218  1)
        (Palette palANIMATE 218 223  1)
        (Palette palANIMATE 223 228  1) ;blue on the side
        ; Note that we're switching from 1 to -1 now.
        (Palette palANIMATE 229 234 -1) ;red in the middle
        (Palette palANIMATE 234 239 -1)
        (Palette palANIMATE 239 244 -1)
        (Palette palANIMATE 244 249 -1) ;red on the side
 
        ; Almost immediately do it all over again
        (= cycles 10000)
        (= state -1)
      )
    )
  )
)

The palette here has a very particular setup. The lowest colors, #208 to #249, are set up like this:

Each of the eight siren colors in the image has its own four-step palette, individually rotated! It looks kinda like this:

If that one black entry wasn’t in the way between blue and red, it’d line up better, but what can you do?
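
For the record, each of those (Palette palANIMATE from to step) calls conceptually just rotates the entries in its little range by one position per call, with the sign of the last argument picking the direction. A sketch in C, glossing over SCI’s exact bounds and timing:

typedef struct { unsigned char r, g, b; } Rgb;

/* Rotate the palette entries in [from, to) by one slot; a positive step
   goes one way, a negative step the other. Call it every frame and the
   colors march along. */
void RotateRange(Rgb* pal, int from, int to, int step)
{
	int i;
	Rgb held;
	if (step > 0)
	{
		held = pal[to - 1];
		for (i = to - 1; i > from; i--)
			pal[i] = pal[i - 1];
		pal[from] = held;
	}
	else
	{
		held = pal[from];
		for (i = from; i < to - 1; i++)
			pal[i] = pal[i + 1];
		pal[to - 1] = held;
	}
}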

What’s particularly funny about this is of course that no SCI interpreter with fewer than 256 colors implements this feature.

The cycleColors script is still there and is still invoked. Just like with the chronostream animation in Space Quest 4.


Save early, delete when you need

There’s one interesting tidbit missing here, which is how deletion (SCI1 and later) is implemented. Namely by manipulating the .DIR file in the script, and not – as any sane person would do – with a kernel call.

So wrote Iskovlun in a comment some time back. Let’s see exactly how insane it really is.

; First we open up the directory file.
; Confusing, I know, to call it a directory file. Perhaps
; "catalog" would be better considering a directory is
; already something else. And in SCI32, they did!
((= fd (File new:))
  name: (DeviceInfo diMAKESAVEDIRNAME @str (gGame name?))
  open: fCREATE
)
 
; The format of a save game directory is pretty straight-
; forward -- a word for the index, then the name, terminated
; with an $0A, repeat until done, end with $FFFF.
 
; (File write:) requires a pointer to the data it is to write,
; so we need to put values into variables, rather than just
; passing them immediately. Well, unless you have SCI11+ with
; the extra file kernel calls I nabbed from SCI32 and a matching
; File class, in which case you could just do (File writeByte:
; $0A) if you were so inclined!
(= ret $0A0A)
 
; Now we write the number and name of each saved game, EXCEPT
; for the one that was selected for deletion.
(for ((= i 0)) (< i numGames) ((++ i))
  (if (!= i selected)
    (fd write: @[nums i] 2)
    (fd writeString: @[names (* i COMMENTBUFF)])
    (fd write: @ret 1)
  )
)
 
; Now we write the terminating $FFFF to finish the catalog
; I mean directory off.
(= ret -1)
(fd
  write: @ret 2
  close:
  dispose:
)
 
; Now that that's done, we can safely delete the actual
; save game file.
(DeviceInfo diMAKESAVEFILENAME @str (gGame name?) [nums selected])
(FileIO fiUNLINK @str)
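
For comparison, here’s roughly the same dance in C. The names and the buffer size are stand-ins, and it assumes a compiler where an int is the same two-byte word the catalog expects:

#include <stdio.h>

#define COMMENTBUFF 36  /* stand-in size for one saved game's description */

void DeleteSave(const char* dirName, int nums[], char names[][COMMENTBUFF],
                int numGames, int selected, const char* saveFileName)
{
	int i;
	FILE* f = fopen(dirName, "wb");
	if (!f) return;
	for (i = 0; i < numGames; i++)
	{
		if (i == selected) continue;   /* skip the doomed entry */
		fwrite(&nums[i], 2, 1, f);     /* a word for the index */
		fputs(names[i], f);            /* then the name... */
		fputc(0x0A, f);                /* ...terminated with $0A */
	}
	fputc(0xFF, f);                    /* end with $FFFF */
	fputc(0xFF, f);
	fclose(f);
	remove(saveFileName);              /* now the save file itself can go */
}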

I almost feel like doing the so-called sane thing and adding a DeleteGame kernel call to SCI11+.


VGA Versus VESA – A Followup

Back in the day, if you wanted square pixels in your games you had to either switch to 640×480 with only 16 colors or use Mode X, which offered 320×240 with all 256 colors. But both of these store their pixel data in those funky non-packed, non-linear formats.

Sure. Sure. With Mode X you could write a whole bunch of pixels at once. It’s a trade-off, and one that I’d rather not have to make. Give me linear frame buffers or give me death!

But what if there was a third option? Turns out on DOSBox’s approximation of the S3, mode 151h offers 320×240 pixels in 256 colors! Besides the need to switch memory windows to access the bottom 36-something lines, that’s as friendly to work with as anything! And if, unlike me, you write DOS games in 32 bits, even that shouldn’t be a problem!
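
To put a number on that “36-something”, the arithmetic is short enough to show:

#include <stdio.h>

int main(void)
{
	long total = 320L * 240;   /* 76,800 bytes of pixels */
	long window = 65536L;      /* one 64 KB memory window */
	printf("%ld bytes of pixels; %ld lines fit in the first window,\n"
	       "the bottom %ld lines need a window switch\n",
	       total, window / 320, 240 - window / 320);   /* 76800, 204, 36 */
	return 0;
}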

Too bad whatever VirtualBox exposes on my system doesn’t include one of those — it has no 320×240 at all.

As a bonus for sticking with me for this tomfoolery, here’s DOSBox running a test of mine.


VGA Versus VESA

We’re all familiar here with the classic 320×200 pixels, 256 color screen mode popularized by the VGA video card, colloquially known as Mode 13h. Most old DOS games from before a particular point in time used it. But what if you want or need bigger? Or more colors? Enter the Super VGA cards with their extended VESA modes.

These VESA modes number 100h and higher, but which exactly are available and what their specs are depends on your exact hardware. As such you can’t rightly assume a certain mode is available and will be that particular resolution and color depth. What you’re supposed to do is ask the system what VESA video modes are available, walk the list to see if you find the one you need, and note its number.
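
What that walk looks like in practice, in 16-bit real-mode C: a sketch assuming a DOS compiler whose dos.h provides int86x, segread, and FP_SEG/FP_OFF, with byte-packed structs laid out per the VBE spec, and with error handling kept to a minimum.

#include <dos.h>

struct VbeInfo                    /* the 512-byte VbeInfoBlock */
{
	char sig[4];                  /* "VESA" on return */
	unsigned int version;
	char far* oemString;
	unsigned long capabilities;
	unsigned int far* modeList;   /* far pointer to a 0xFFFF-terminated list */
	unsigned int totalMemory;     /* in 64 KB units */
	char reserved[492];
};

struct ModeInfo                   /* the first fields of a ModeInfoBlock */
{
	unsigned int attributes;
	unsigned char winAAttr, winBAttr;
	unsigned int winGranularity, winSize;
	unsigned int winASegment, winBSegment;
	void far* winFunc;
	unsigned int bytesPerLine;
	unsigned int width, height;
	unsigned char charWidth, charHeight, planes, bpp, banks, memoryModel;
	unsigned char rest[228];
};

/* Returns the first mode number matching the requested specs, or -1. */
int FindVesaMode(unsigned int width, unsigned int height, unsigned char bpp)
{
	static struct VbeInfo vbe;
	static struct ModeInfo mode;
	union REGS r;
	struct SREGS s;
	unsigned int far* list;

	segread(&s);
	r.x.ax = 0x4F00;              /* get controller info */
	s.es = FP_SEG(&vbe);
	r.x.di = FP_OFF(&vbe);
	int86x(0x10, &r, &r, &s);
	if (r.x.ax != 0x004F)
		return -1;                /* no VESA here at all */

	for (list = vbe.modeList; *list != 0xFFFF; list++)
	{
		r.x.ax = 0x4F01;          /* get info on one mode */
		r.x.cx = *list;
		s.es = FP_SEG(&mode);
		r.x.di = FP_OFF(&mode);
		int86x(0x10, &r, &r, &s);
		if (r.x.ax != 0x004F)
			continue;
		if (mode.width == width && mode.height == height && mode.bpp == bpp)
			return *list;
	}
	return -1;
}

Note the far pointer dance: the mode list the BIOS hands back is a real-mode segment:offset pair, which is exactly why 32-bit programs need a bit more ceremony here.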

All I have is a copy of DOSBox, a copy of VirtualBox, and vesachk.exe, available here if you want to try it out yourself. This application gives you that list. Now, the two systems yield vastly different results, mirroring differences in video hardware. DOSBox for example emulates some form of S3 card.

I’ve noticed that when an SCI32 game is made for low-res it runs in plain ol’ Mode 13h, but if it’s meant for higher it’ll use Mode 101h. That’s one of the few in the list that DOSBox and VBox agree on — it has 640×480 resolution at 8 bits per pixel, packed, with its memory window starting at 0xA0000.

On the one hand, a regular old 16-bit DOS application wouldn’t be able to address all 307,200 pixels at once the way you can in mode 13h. On the other, a 32-bit application would have direct access to the full linear frame buffer no matter its size. A 16-bit application would need trickery to reach any pixel beyond a certain point, setting the window registers to basically shift the next part of video memory into that same 64 kb block.
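
That trickery boils down to one extra BIOS call before touching any pixel that lives outside the current window. A sketch, again in 16-bit C with dos.h (int86 and MK_FP), assuming a window granularity of 64 KB; real code should use the WinGranularity field from the mode info instead:

#include <dos.h>

void SetBank(unsigned int bank)
{
	union REGS r;
	r.x.ax = 0x4F05;     /* VBE window control */
	r.x.bx = 0x0000;     /* BH = 0: set position, BL = 0: window A */
	r.x.dx = bank;       /* position, in granularity units */
	int86(0x10, &r, &r);
}

void PutPixel(unsigned int x, unsigned int y, unsigned char color)
{
	unsigned long offset = (unsigned long)y * 640 + x;
	SetBank((unsigned int)(offset >> 16));   /* pick the 64 KB window the pixel lives in */
	*(unsigned char far*)MK_FP(0xA000, (unsigned int)offset) = color;
}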

This is why SCI32, when switching to mode 13h, just does so:

void Vga::Set320x200()
{
	union REGS reg;
 
	reg.w.ax = 0x0013;
	int386(0x10, &reg, &reg);
	SetVideoMode(1);	// clear all video memory
	SetVideoMode(0);	// back to Normal Mode 13
	lenx = 320;
	leny = 200;
}

But when it switches to mode 101h, it jumps through several hoops. First it checks if VESA is supported at all, then it switches to mode 101h, and then it does some more checks to see if things are as they should be, bailing out if they’re not.

And that’s all good.

But what if you were to find a VESA video mode that was 320×200 with 256 colors? Is there such a thing? A redundancy with mode 13h? As a matter of fact, there is! On the S3 emulated by DOSBox, it’s VESA mode 150h, and once you switch to it things work exactly the same as in mode 13h, except the memory access timing or whatever is different.

; VGA mode 13
mov ah, 0x00
mov al, 0x13
int 0x10
 
; VESA mode 150h
mov ax, 4F02h
mov bx, 150h
int 10h

But on whatever VBox has to offer, which is a vastly longer list with about a hundred more modes, this could be 146h, 346h, 546h, or 746h. And that’s why you really should ask the system about it!

But SCI32 basically assumes 101h is what it needs to be; presumably the folks at Sierra tested a bunch of contemporary cards and found this to be true.

Fun fact: SCI16 has all its video driver code in files like VGA320.DRV, but SCI32’s VGA.DRV is practically empty. It’s technically a valid SCI driver file but it’s basically just a header. Same with its VESA.DRV. All their code is in the interpreter itself, much like the mouse driver. It’s only there so the installer can offer it and the interpreter can determine which was chosen. And even then, the interpreter for later high-res only games like Gabriel Knight 2 will happily ignore all that.


Go home, Twitter

This is seriously something that happened earlier today on Twitter. I asked what to blog about, since my last post here was in May, and I could see I got a reply, but not what that reply was.

There was no “unavailable” placeholder or anything, no “potentially offensive” click barrier, just a non-zero counter. Logging out, switching to private mode, no difference. Then @Purrpleneko1 replied; I went to their profile to check their replies and found what was being hidden from me: a perfectly good suggestion that unfortunately fell outside of my interests and expertise — I’m more into software than hardware. Why was this being hidden from me? I could see their other replies just fine!

So yeah. Go home, Twitter. You’re drunk on algorithms.

*sips cuppasoup*


On Palettes

Recently, a friend who shall remain anonymous said to me he wanted to draw some pixel art and would set Aseprite to use the NES color palette because it didn’t have a SNES one.

I was quite amused by this because there is no such thing as “the” SNES palette. The NES, the SNES, and the IBM PC-compatible with a VGA card all have one thing in common: they have program-specifiable palettes. On the NES, you can pick some 28 colors by my count from a fixed master palette:

This isn’t even necessarily correct because of how these colors are generated, but whatever. On the SNES though? You have a 256-color palette space, split into 16 rows of 16 colors each. The first of each row is considered transparent (leaving out mode-specific stuff for simplicity) and the final background color behind all the layers is the top-left one. Also, the bottom half is reserved for sprites.

But what do you assign to them? 256 of these bad boys:

Each entry is a 15-bit RGB value, ignoring the 16th bit. In other words, you have 32,768 different possible colors to pick and choose roughly 256 from. The palettes in Aseprite, or really any graphics editor that has an “indexed” mode, only go up to 256 entries.
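
For the curious, pulling one of those entries apart is a one-liner or three. A sketch, assuming the usual layout of five bits each for red, green, and blue, with red in the low bits:

#include <stdio.h>

int main(void)
{
	unsigned int entry = 0x7C1F;   /* example entry: full red plus full blue */
	unsigned int r = entry & 31;   /* five bits each, 0 to 31 */
	unsigned int g = (entry >> 5) & 31;
	unsigned int b = (entry >> 10) & 31;

	/* Scale 0-31 up to 0-255 for a modern screen. */
	printf("R=%u G=%u B=%u\n", r * 255 / 31, g * 255 / 31, b * 255 / 31);
	return 0;
}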

On the venerable VGA graphics card so widely used in IBM PC-compatibles and still emulated by even the latest powerhouses, you get just as many palette entries, but this time you can use all of them at once. And they’re not even 15-bit but a whopping 18, for an impressive 262,144 different shades to pick from:

Still, you’d actually have more of a point speaking of “the” VGA palette. When a SNES boots up there’s no telling what’s in the palette RAM, but when a PC boots up, you know the VGA palette will default to this:

For both SNES and PC though, you’d be better off speaking of specific applications, games, or scenes in games. For example, here’s the default from DeluxePaint:

So just like how you can refer to “the” VGA palette, you can have “the” DeluxePaint palette.
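
And if you’re wondering where those 18 bits go: changing an entry of “the” VGA palette is a matter of poking two ports, an index and then three 6-bit components. A sketch, assuming a DOS compiler whose conio.h provides outp:

#include <conio.h>

/* Each of the 256 DAC entries takes a red, green, and blue value from
   0 to 63 -- six bits each, hence 64 * 64 * 64 = 262,144 shades. */
void SetDacEntry(unsigned char index, unsigned char r, unsigned char g, unsigned char b)
{
	outp(0x3C8, index);   /* select which entry to write */
	outp(0x3C9, r);       /* then feed it the three components in order */
	outp(0x3C9, g);
	outp(0x3C9, b);
}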


Nicole remarked on Twitter that I missed an opportunity to mock the Genesis. Here ya go, ma’am.

Nine bits, 512 possible colors, 64 at once in memory but only 61 effectively because the first one’s transparent like on the SNES. Here’s the SNES again for easy comparison:


Menu Slowdown Mystery

Yesterday I had the idea to make a small “game” demonstrating various SCI11+ features, and instead of a main menu screen full of buttons I thought I’d use a proper menu bar. The kind you’d see in the old SCI0 games with the text parser and the low-resolution version of Leisure Suit Larry 6. Which incidentally is the only SCI11 game to have such a menu bar. The high-res SCI2 version’s menu bar is implemented entirely in script but the other one is 100% the same as in SCI0. If you were to copy the interpreter from any other SCI11 game over to LSL6 and run it you’d get an error saying AddMenu is not supported.

If you were to build your own SCI11, you’d find there are four targets: with the built-in debugger, with menu bar support, both, or neither. Most SCI11 games come with a “neither”, except for LSL6.

But this post’s title mentions slowdown. Why?

I defined two menus, File and Topic. No Sierra logo menu here, not this time. One has About and Quit, the other lists the various features. As you do. But somehow, when opening the second menu, things would slow down just before it got to the last item, about the support for SCI32-style no-lookup font and color change control codes.

I thought maybe I’d messed something up when I cleaned up the source code, or when I added the thing where the menu bar is drawn in the same colors as the status line.

So I built an SCI11+ with all switchable hacks turned off. No dice, the menu was just black and white now but still slow. So that ruled out the extra features.

A “base” build that’s missing all new features? Nope.

Copy the terp from LSL6 and run that. Same deal so it’s not the cleanup either.

So I tried reorganizing the menu items. And I saw my mistake.

I had used a | to separate the Unicode and SCI32 control code items where it should’ve been :. For added insult, that’s the text formatting control code.

I wonder if AddMenu would work better as a variadic function 🤔


Perspective is a tricky thing

This topic was suggested, more or less, by Phil Fortier.

What do these screenshots from Doom, Leisure Suit Larry 3, and Secret of Monkey Island have in common?

Their perspective. Every single wall is a straight line. I put Doom there to show it’s not just adventure games, and Monkey Island because the arcs end in straight lines, but otherwise they all have the same perspective. Don’t believe your eyes? Here, let me spell it out for you:

This is one-point perspective, where lines converge to a single point.

Here’s a YouTube video I picked out at random from my search results while I ensured I wasn’t pulling crap out of my ass. You’ll notice a hallway like that could do well as an adventure game background.

They’re also a pain in the ass when you render your game’s backgrounds with a program that doesn’t do 1PP, like I do. I mean, I could use this copy of 3D Studio Max that I have collecting dust over here, but all my prefabs are in Daz Studio? So I gotta fake it somehow. Very carefully align the camera so the walls point straight up.

In this old version of Alhor’s Garage in The Dating Pool, the walls are not straight. So I went back and tweaked the camera along with a few other details.

I feel much better about this version. But for other scenes, to get enough floor space in view, I have to pull back the camera drastically. Normally you’d increase the floor space by angling the point of view down. I’m sure you can agree that in Chairman Kenneth’s office, the camera is pretty far up. If I tried to reproduce that image in Daz, I’d get diagonal walls. So how do you fix that?

There’s practically no floor space here! If I used this, the main character would have a line to move along, and if other characters were to try and pass there’d be almost no space to show it. Moving the camera up mostly increases the ceiling space…

And of course you could fake it by tilting the walls back to compensate.

Or you can just say fuck it and deem the perspective distortion negligible after downsampling.

*sigh*

I seriously wish I had the means to acquire some nicely painted backgrounds, even after years of demos with rendered ones.
