How I Found Myself in the Game Industry

plus my Looking Glass Studios job-application demos

Sean Barrett, 2013-05-16

Mike Abrash's recent blog post begins with a discussion of how his writings influenced the course of his career, which reminded me of how his writings influenced the course of mine. I'm not sure I ever told him that they may have been responsible for me entering the game industry, so I thought about writing him an email, but I decided to share the story publicly instead, and to wander a bit further afield.

When I was a teenager in the '80s, I[1] had an Atari 800 computer, on which I did the usual things (little games, mostly BASIC). I became interested in 3D graphics, though it was nearly impossible on the 800. I learned how to read Pascal and how 3D worked simultaneously by deciphering a wireframe-graphics program in Byte magazine, and made hackish attempts at the same effect on the 800. (Eventually I would ray-trace a single Lambertian sphere on the 800, and then a checkerboard and reflective spheres on a friend's Atari ST.)

Looking at the Infocom games, each with a sole creator, I thought I wanted to do that "when I grew up", but as the industry shifted to larger teams and Infocom started its downslide, that no longer seemed interesting or plausible to me, and by the time I entered college I had stopped thinking about the possibility.

I did stay interested in computer graphics in college. I regularly read SIGGRAPH's Computer Graphics and IEEE Computer Graphics and Applications. But I wasn't doing any real-time graphics programming anymore; I didn't even have a personal computer with graphics capability. I wasn't playing any PC games, either. To the extent I was programming, I was mostly writing things like text adventure engines and IOCCC entries, and the only gaming I was doing was on MUDs.

I graduated from college in 1992, at the age of 25, and for my first job I moved to Texas to work at a long-struggling startup that made printer software,[2] there joining a couple of friends of mine, John Price and Jon Davis. I say friends, but before moving to Texas I knew them solely through an LPMud (Darker Realms) which we co-administered.[3]

John, Jon and I, the junior staff at the company, quickly formed a little cabal. At work, after hours, we played Ultima Underworld, Ultima 7, Ultima Underworld 2, and Ultima 7 part 2 on the beefiest 386s or 486s we had in the office (peaking with a 486/50 in 1993, I think). At the same time, we discovered Mike Abrash's 1992 Graphics Programming columns in the Dr. Dobb's Journal collection at the office, and John and I started writing 3D polygon renderers on the PC, competing with each other (and Abrash) for the most speed. Mine was pretty fast, relatively speaking, until I tried texture mapping; the first texture mapper I wrote did two divides per pixel, which was impressively slow.
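Since that code is long gone, here's a guess at what that inner loop looked like, written in C rather than the fixed-point integer code it would actually have been. Perspective-correct mapping interpolates u/z, v/z, and 1/z linearly across the span and recovers u and v at each pixel, and doing that recovery naively takes two divides per pixel:

    /* Hypothetical reconstruction: the naive perspective-correct span loop.
       u/z, v/z, and 1/z interpolate linearly in screen space; u and v are
       recovered per pixel. On a 386/486, where a divide cost dozens of
       cycles, these two divides dominate everything else in the loop. */
    void draw_span(unsigned char *dest, int count,
                   float uoz, float voz, float ooz,    /* u/z, v/z, 1/z at span start */
                   float duoz, float dvoz, float dooz, /* per-pixel deltas */
                   const unsigned char *texture)       /* 256x256 texture */
    {
       while (count-- > 0) {
          int u = (int)(uoz / ooz);                    /* divide #1 */
          int v = (int)(voz / ooz);                    /* divide #2 */
          *dest++ = texture[(v & 255) * 256 + (u & 255)];
          uoz += duoz; voz += dvoz; ooz += dooz;
       }
    }

The standard fixes, which I didn't know yet, are to take one reciprocal of 1/z and multiply twice, or to divide only every N pixels and interpolate affinely in between.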

After the startup collapsed in early 1994, all three of us went into the game industry (but via different routes): John's MobyGames page; Jon's MobyGames page; my MobyGames page and this one (and actually also three of the entries here, although those are more like special-thanks credits anyway).

Since Origin and I were both in Texas, I decided to investigate working there, so I emailed someone who worked at Looking Glass (whom I knew solely through a Usenet group devoted to weird creative writing) looking for contact info at Origin, since Origin produced and published the LGS games. He invited me to interview at LGS instead, and there the story of how I got into the game industry mostly ends (save for the demos linked below).

Would I have still started doing graphics programming on the PC and ended up in the game industry if I hadn't run into those Abrash columns at the right time, and been in this little self-constructed game-design / graphics-programming crucible?[4] Quite possibly, but maybe not. (For that matter, would I have ended up at LGS if I hadn't randomly had a pre-existing contact there? Harder to say; the Underworlds were my favorite games, so maybe I'd have investigated anyway.)

In thinking about this, thinking about those Abrash-inspired 3D polygon renderers I'd written, I became curious whether I could still find them, so I went spelunking through my old archived "home" directories. I couldn't find them. I was able to find a zip of demos I sent to LGS, though, and those all still work when run in DOSBox, so I'll share a couple of them with you instead. (I couldn't find source to these, so in the discussion below there is some guesswork.)

First off, for reference, I want to show you Tim Clarke's 5K "Mars" demo, released in 1993. Sorry about the tearing; I don't know how to Fraps DOSBox and not get that.

I decided to reimplement it and put some other stuff into it, so I wrote my own version. I'm not sure exactly when; the EXE is dated 1994.

Looking at the readme, I see I forgot to mention there that this was a reimplementation of the Mars demo. Maybe I assumed LGS would be familiar with it. As it turned out, they weren't, so they were perhaps more impressed with it than they should have been! And they were in the middle of developing Terra Nova: Strike Force Centauri, so this may have influenced their interest as well.

Note that it expands on the Mars demo in several ways, with each world showing off something different. The first world has two different ground colors. The second (0:50) has a cloud-shadow-on-the-ground effect that I have never been able to make look as good in any other game/engine I've written since. The third has fog, and the fourth has a weird lighting model and turbulent skies. The fifth has a demoscene-y plasma sky lighting the terrain (just the inverse of the shadows), and the sixth tries to evoke a moonscape.

The sideways leaning is faked; I think the sky uses a hardcoded lines-of-constant-z mapper, and the terrain baseline leans while the vertical elements don't lean at all. But the sky and overall tilt are enough that it's moderately convincing.

Some of the shimmering when moving forward is because the forward motion is quantized to map squares (but the sideways motion isn't, I think). This was just a design error. Other parts of the shimmering are because I started skipping map rows as you got further away, and which rows were skipped would switch as you stepped, instead of always skipping (say) odd rows regardless of where you were currently standing. Another design error. When I ended up in charge of the Terra Nova terrain renderer (the third person to hold that job), I fixed the exact same issue in it.
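Without the source I'm reconstructing from memory, but the bug and the fix look something like this (the names and the LOD threshold are made up):

    #define LOD_START 64   /* hypothetical distance at which row skipping begins */

    extern void draw_terrain_row(int row);   /* stand-in for the real row renderer */

    void draw_terrain(int viewer_row, int view_distance)
    {
       for (int row = viewer_row; row < viewer_row + view_distance; ++row) {
          int dist = row - viewer_row;
          if (dist > LOD_START) {
             /* buggy: skip by distance from the viewer, so the skip pattern
                shifts by one row every time you step forward:
                   if (dist & 1) continue;
                fixed: skip by absolute map row, so a given row is always
                drawn or always skipped no matter where you stand: */
             if (row & 1) continue;
          }
          draw_terrain_row(row);
       }
    }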

I also included the first real texture-mapping engine I'd written. The map file and textures from this are dated February of 1993, so it was about a month after Ultima Underworld 2 came out, and ten months before DOOM would be released.

The Underworlds had sloped floors and could look up and down, so they required a perspective-correct texture mapper, but they didn't have one. UW1 did all texture mapping by drawing affine horizontal spans, so there was no perspective correction on walls, although floors and ceilings drew correctly. Although I hadn't figured it out at the time, UW2 split quads into triangles and drew affine triangles, so quads that needed perspective no longer warped and bent weirdly; instead straight lines were preserved, but there was a diagonal seam.
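The reason those affine horizontal spans are exactly right for floors and ceilings (and the same reason the sky mapper I mentioned above could be hardcoded) is that with no roll, every pixel on a given scanline sees a horizontal plane at the same depth, so you need only one divide per scanline, and u,v then step linearly with exact perspective. A sketch, with hypothetical names and viewer rotation omitted:

    /* Lines-of-constant-z floor span: z is constant across the scanline,
       so perspective-correct u,v interpolate linearly. One divide per
       scanline instead of per pixel. Texture is 256x256 and tiles. */
    void draw_floor_scanline(unsigned char *dest, int y, int x0, int x1,
                             float eye_height, float focal,
                             int screen_cx, int screen_cy,
                             float cam_x, float cam_z,
                             const unsigned char *texture)
    {
       int dy = y - screen_cy;
       if (dy <= 0) return;                    /* at or above the horizon */
       float z    = eye_height * focal / dy;   /* constant depth for this scanline */
       float step = z / focal;                 /* world units per screen pixel */
       float wx   = cam_x + (x0 - screen_cx) * step;
       float wz   = cam_z + z;
       for (int x = x0; x <= x1; ++x) {
          *dest++ = texture[((int)wz & 255) * 256 + ((int)wx & 255)];
          wx += step;                          /* linear step, yet perspective-exact */
       }
    }

Vertical walls have the mirror-image property, constant z down a screen column, which is why DOOM draws walls as columns and floors as rows.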

In the above demo, I'd independently invented the wall/floor-mapping idea that would later be demonstrated so much better by DOOM. (I assume independently; at the time I only had Abrash's columns and Usenet to go on, but maybe somebody on Usenet had discussed it.) I believe I rendered front-to-back with a span buffer to reduce fill-rate costs. As you can see when I ascend higher than the wall height, there is no backface culling, but the span buffer avoided the cost anyway. At some point I attempted to rewrite the span buffering and introduced the bug you can see when polygons sometimes drop out at the right edge, or sort wrong; the original engine hadn't had that problem.
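Again guessing at the lost source: a span buffer in this style keeps, for each scanline, the set of intervals that are still uncovered; a new span only touches the uncovered pieces, and rendering front to back means each pixel is written exactly once. Something like this, simplified and with made-up names:

    #include <string.h>

    #define SCREEN_W 320
    #define MAX_FREE 128

    typedef struct { int x0, x1; } Span;   /* inclusive interval */

    typedef struct {
       Span free[MAX_FREE];                /* sorted, disjoint uncovered intervals */
       int  count;
    } SpanBuffer;

    extern void draw_pixels(int y, int x0, int x1);   /* stand-in for the real fill */

    void span_buffer_reset(SpanBuffer *sb)
    {
       sb->free[0].x0 = 0;
       sb->free[0].x1 = SCREEN_W - 1;
       sb->count = 1;
    }

    /* Draw only the still-uncovered parts of [x0,x1] on scanline y and mark
       them covered; occluded spans and unculled back faces cost only this walk. */
    void span_buffer_draw(SpanBuffer *sb, int y, int x0, int x1)
    {
       for (int i = 0; i < sb->count; ++i) {
          Span *f = &sb->free[i];
          if (f->x1 < x0 || f->x0 > x1) continue;     /* no overlap with this gap */
          int a = f->x0 > x0 ? f->x0 : x0;            /* visible piece */
          int b = f->x1 < x1 ? f->x1 : x1;
          draw_pixels(y, a, b);
          if (f->x0 < a && b < f->x1) {               /* drawn piece splits the gap */
             if (sb->count == MAX_FREE) continue;     /* out of room; allow overdraw later */
             memmove(f + 2, f + 1, (sb->count - i - 1) * sizeof(Span));
             f[1].x0 = b + 1;
             f[1].x1 = f->x1;
             f->x1 = a - 1;
             sb->count++;
          } else if (f->x0 < a) {
             f->x1 = a - 1;                           /* keep the left piece */
          } else if (b < f->x1) {
             f->x0 = b + 1;                           /* keep the right piece */
          } else {                                    /* gap fully covered: remove it */
             memmove(f, f + 1, (sb->count - i - 1) * sizeof(Span));
             sb->count--; i--;
          }
       }
    }

The drop-out and mis-sort bugs in the rewritten version were presumably in bookkeeping like the split/remove cases above, which is exactly the fiddly part to get wrong.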

Finally, we have what might arguably be an early deferred renderer, although it's not really deferred shading or deferred lighting. So I'll give some actual technical details about this one.

In this demo, I applied a dynamic light in real-time to a static scene. I was imagining this would be for a game with pre-rendered graphics, a la Alone in the Dark or BioForge. (BioForge wasn't released until 1995, but I'd definitely seen screenshots of it before I went to LGS.)

First it renders the complex texture-mapped scene in non-real time (during startup) to an offscreen buffer, a g-buffer if you will, using perfect perspective correction. Each g-buffer pixel stores the diffuse albedo, the specular color, and the emissive color, but they are all packed into only 8 bits total per pixel (via a palette, though not the same one as the 8-bit display palette).

In real time, it re-renders the same polygons without texturing, just lighting. Diffuse and specular lighting for a white surface are computed at polygon vertices, and then (I assume) each is interpolated separately over the entire polygon. At each pixel the g-buffer value is loaded, the diffuse and specular lighting are packed into 8 bits (I assume), and the display color is looked up through a 256x256 LUT.
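Here's a sketch of what that inner loop would look like; the 4-bit/4-bit split of the packed lighting byte is a guess, as is everything else not stated above:

    unsigned char g_buffer[320 * 200];    /* 8-bit material index per pixel */
    unsigned char shade_lut[256][256];    /* [material][packed light] -> display color */

    /* Guessed packing: 4 bits of diffuse, 4 bits of specular intensity. */
    static unsigned char pack_light(int diffuse, int specular)   /* each 0..15 */
    {
       return (unsigned char)((specular << 4) | diffuse);
    }

    /* One lit span: diffuse/specular interpolated in 16.16 fixed point. */
    void light_span(unsigned char *dest, const unsigned char *gbuf, int count,
                    int diff, int ddiff, int spec, int dspec)
    {
       while (count-- > 0) {
          unsigned char light = pack_light(diff >> 16, spec >> 16);
          *dest++ = shade_lut[*gbuf++][light];
          diff += ddiff; spec += dspec;
       }
    }

The 64KB LUT is what makes the 8-bit packing pay off: all the per-pixel shading math collapses into two table lookups.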

Note that because lighting is only computed at vertices, it's not very good lighting. I think I figured we could dynamically subdivide polygons that needed better highlights if it came to that (some of those polys are pre-subdivided to improve the lighting).

At LGS I would sort of revisit this idea by creating a sort-of normal-mapped 8-bit sprite, in which every sprite pixel had an 8-bit color and an 8-bit "lighting index"; the lighting index conceptually indexed into a palette of normals, but in practice the index actually produced directional lighting, diffuse vs. specular, and self-shadowing. (That is: offline, render the lighting with those effects from N lighting directions; now treat that as a sprite with N 8-bit color channels, one color for the lighting under each direction, and compute an 8-bit palette for that "deep-colored" sprite, i.e. VQ compression. For real-time lighting, blend between the k nearest light directions. The high-res detail and 8-bit output hid a lot of the artifacts.)
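A sketch of the runtime side, with all the names and sizes made up (and glossing over exactly how the lighting value and the color combine, which I'd guess was yet another table):

    #define N_DIRS 16   /* hypothetical number of reference light directions */

    unsigned char codebook[256][N_DIRS];  /* VQ palette: lighting under each direction */
    unsigned char lit[256];               /* this frame: blended lighting per entry */
    unsigned char shade_table[256][32];   /* [color][light level] -> display color */

    /* Once per frame: blend the k nearest reference directions.
       Weights are 8-bit fixed point and sum to 256. */
    void update_lighting(const int *dir_index, const int *weight, int k)
    {
       for (int c = 0; c < 256; ++c) {
          int acc = 0;
          for (int i = 0; i < k; ++i)
             acc += codebook[c][dir_index[i]] * weight[i];
          lit[c] = (unsigned char)(acc >> 11);   /* 16-bit accumulator down to 32 levels */
       }
    }

    /* Per sprite pixel: 8-bit color plus 8-bit lighting index -> display color. */
    unsigned char shade_pixel(unsigned char color, unsigned char light_index)
    {
       return shade_table[color][lit[light_index]];
    }

The expensive part, evaluating the light against the codebook, happens 256 times per frame instead of once per pixel, which is the same LUT-heavy pattern as the deferred demo above.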

We never did another game with sprite characters after Terra Nova, though, so that was yet another of dozens of real-time graphics inventions I came up with that went unused.


[1] Technically, at least originally, my family had it. I was basically the sole user.

[2] This experience might seem to be of little value in the game industry, but in August 1993 I became (as far as I can remember) the sole programmer working on writing our new PostScript engine in C++; our previous one was written in Forth and 68000 assembly language, and the new one was designed to support PostScript Level 2. Thus I was the sole programmer on a project with a language interpreter and a 2D graphics engine targeting an Adobe specification. Fifteen years later, in 2008, I began working on Iggy, a Flash-based UI library, which is primarily a language interpreter and a 2D graphics engine targeting an Adobe specification. (I switched back to C, though.)

[3] To complete the weird knew-them-when, we had formed that LPMud after I met them while reaching the "wizard" rank on another LPMud, Wintermute. One of Wintermute's administrators whom we looked up to (and who visited our MUD a few times) was Tim Cain.

[4] John and I and our third co-admin had split off from Wintermute because we had strong opinions about how a good MUD should be player-fun-centric, which many LPMuds weren't. This group didn't arise totally spontaneously, then, so it's not that weird that we all ended up in significant programming roles in the game industry.