An Interview with Jack Crenshaw

written by Matthew Reed

Jack Crenshaw (picture courtesy Jack Crenshaw)

Jack Crenshaw has a long history with computers, and one of his first microcomputers was a TRS-80 Model I. Readers of 80 U.S. Journal might remember his comments about the Exatron Stringy Floppy. Others will remember his “Let’s Build a Compiler” tutorial series. His “Programmer’s Toolbox” column appears in Embedded Systems Design magazine, for which he is also a contributing editor.

This interview was conducted over January and February 2009.

Q: You mentioned that you wrote your first computer program in 1956. How were you first introduced to computers?

A: By reading gee-whiz articles in the 1950s. The title of our textbook was “Giant Brains.” That ought to give you an idea of the attitudes towards computers in those days.

Q: How did you get started with the TRS-80?

A: Well, I had been interested in computers since forever. I had worked with “giant brains” like the IBM 7094. I did a lot of work with dynamic simulations, doing trajectory analysis stuff for Project Apollo and similar programs.

Around 1966 I saw an article by Don Lancaster on Fairchild’s 900-series RTL logic circuits. At $0.40 per gate and $1.50 per flip-flop, these were the first digital ICs affordable by the average experimenter. I bought a bunch, experimented with turning lights on and off, and dreamed of having enough to do some computing.

Meanwhile, ICs were getting more complex and dense, so technology was catching up with my dreams. In December 1974, the January 1975 issue of Popular Electronics carried the article on the Altair 8800. I ordered a kit that very day. By that time, I was serious about getting out of the Fortran/mainframe world and into the world of microcomputers. A few months later, I bought into an existing company doing embedded systems — surely one of the first such companies around. We used the 4040 and 8080, and had big machines being controlled by little processors. The company — Comp-Sultants — also came out with one of the earliest computer kits after the Altair. See the reference in Creative Computing: The First Decade of Personal Computing.

In 1976 I attended the Personal Computer Conference in Atlantic City. I got to see the Apple I, a box-less board costing $666. There were a lot of S-100 systems, plus a couple of significant box computers: the Commodore PET and the Processor Technology Sol-20. It was not hard to see that people wanted a true personal computer, without all the cables and geekiness.

The next year, I saw my first TRS-80, and that was that.

Q: How different was working with the Model I compared to the larger computers you had been using?

A: Like night and day. With the big mainframes, you really didn’t interact with the computer at all. You interacted with some clerk, submitting a card deck through a window. Later, you came back to get the output and, with luck, your card deck back.

The thing was, those big mainframes were expensive — $600 per hour. So management was particular about who used them, and for what, and how. In the earliest days, we weren’t even allowed to write code. GE had a “closed shop” in which you just told the Data Processing department what program you needed. They decided what to write, and how, and when. When the job was done, you were lucky to get back anything useful, instead of the programmer’s own personal favorite program.

Even later, after I was “allowed” to write code, I still had to deal with a phalanx of DP managers and Systems Administrators who decided when my program got run and for how long. I could entertain you for a week on the trials of dealing with those bureaucrats. The closest I got to the machine itself was on those rare occasions when I was allowed — by special dispensation and bought with a lot of genuflections — to stand in the same room with it as my job ran.

I dreamed of a computer that would be mine alone, to use in any way I chose, including wasting its time doing something inefficient, or even sitting there doing nothing, waiting for me to walk by and press a key. That would be its job.

Q: It was often stated that the TRS-80 Model I was more powerful than the computer used by the Apollo lunar module. As someone who actually worked on the Apollo program, do you think that was true?

A: Yes, absolutely. I’ve heard it said that a modern digital watch is more powerful than the Apollo flight computer. When you think about it, the remarkable thing is that we had a flight computer at all. Consider: there was no such thing as a RAM chip, much less a CPU chip. The Apollo computer used 2k of magnetic core RAM and 36k of core rope (ROM) memory. The CPU was built from ICs, but not the kind we think of today. They were, in fact, the same Fairchild RTL chips I fell in love with. Clock speed was under 100 kHz. Compare that to the 12k ROM, 16-48k RAM, and 1.77 MHz clock of the TRS-80.

The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable.

Originally, NASA didn’t plan to use the Apollo computer for guidance at all. The intent was to use radio signals from the ground to control the spacecraft. We had worked up nomograms (graphs) for the astronauts to use in case they lost radio communications. Those plans got changed as the flight computer developed so nicely.

Q: You’ve mentioned using an Exatron Stringy Floppy for storage. How did you use it with Level I BASIC?

A: Well, I really didn’t do that, but the answer begs for some explanation and some background.

When I first got the Model I Level I, I did my best to live with Level I BASIC, writing a set of trig functions missing from the ROM. But I never liked BASIC — still don’t — and because of my background in embedded systems, I was interested from the get-go in “breaking out” of the Level I ROMs and getting to the core machine. I picked up the phone and called the Tandy software development team. I talked to their VP of software, and asked, “How can I get from BASIC to the CPU?” In a couple of days, a package arrived at my door. It was an advance copy of TBUG, their hex debugger. A few months later, I also had copies of an assembler/editor package (ported from the S-100 world) and a really nice disassembler. Armed with these tools, I was in hog heaven.

Your readers may be interested in how Tandy and the other vendors were able to break out of Level I. Level I called subroutines for its cassette I/O. As most of us know, a subroutine call pushes a return address onto the CPU’s stack, and a return pops it off again and jumps to that address. The data on Level I cassette tapes was broken up into blocks, and each block included the block length and a load address. To get out of Level I, all you had to do was to write a tape whose last block was addressed at the current top of the stack, and had two bytes of data: the start address for the program on the cassette. When Level I finished loading the program from cassette, it would look on the stack for its return address, but find instead the start address of the program it had just loaded. Cool.
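
To make the mechanism concrete, here is a sketch of the idea in C (purely illustrative: the block layout, the 0x5000 load address, and the STACK_TOP value are hypothetical stand-ins, not the actual Level I tape format):

    /* The loader copies each block to its load address. It has no
       idea that the last block is aimed at the top of its own stack. */
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint16_t load_addr;   /* where this block's bytes go       */
        uint16_t length;      /* number of data bytes in the block */
        const uint8_t *data;  /* the bytes themselves              */
    } TapeBlock;

    static uint8_t memory[65536];   /* the whole Z80 address space */

    static void load_tape(const TapeBlock *blocks, int count) {
        for (int i = 0; i < count; i++)
            memcpy(&memory[blocks[i].load_addr],
                   blocks[i].data, blocks[i].length);
    }

    /* Two blocks: the program itself, then two bytes written over
       the loader's return address. When the loader executes its
       final RET, it pops 0x5000 and "returns" into the new program. */
    #define STACK_TOP 0x41F8   /* hypothetical stack-top address */
    static const uint8_t program[] = { 0x00 /* ...machine code... */ };
    static const uint8_t entry[]   = { 0x00, 0x50 };  /* 0x5000, LSB first */

    static const TapeBlock tape[] = {
        { 0x5000,    sizeof program, program },
        { STACK_TOP, 2,              entry   },
    };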

When I finally got my wish — the ability to get to the bare machine — my first thought was, “Hey, where are my I/O functions?” It was something of a shock to realize that I really had a BARE machine there. I was on my own.

So the first thing I used my disassembler for was to disassemble the Level I ROMs. As we’ve discussed privately, Level I BASIC was based on Li Chen Wang’s Palo Alto Tiny BASIC. It was some of the finest assembly language code I’ve seen anywhere. Browsing through it was like an afternoon at the Louvre.

In a little while I located the subroutines Level I called for keyboard, CRT, and cassette I/O, so I could then use them for the rest of my programs. I did make some changes, though, just for fun. I implemented an n-key rollover for the notoriously tricky keyboard. I also extended the CRT write function to support tab, backspace, newline, etc.

At that point I had 48k of RAM, thanks to the Holmes internal expansion board. TBUG required, as I recall, 1k of RAM. The assembler was 2k, and the disassembler, the largest of the three, was 4k. So I could easily fit my entire toolset, plus my source file, plus the assembled object code, in RAM with room to spare. It was great. And, since there was no I/O, either from cassette, Stringy Floppy, or disk, it was FAST. A couple of years later, when I got my first S-100 system running CP/M, my first thought was, “Boy, is this thing SLOW!”

For a low-cost computer, the TRS-80’s power supply was pretty remarkable. It was a conventional supply, not a switcher, and it had fairly large filter capacitors. I was rather amazed to discover that the RAM chips would hold their data, even during a 5- to 10-second power outage. So I routinely left the computer on 24/7. It was great to be able to sit down at the computer in the morning, and find it exactly where it was the night before. I only used the cassette tape to back up my work each night, just in case.

Meanwhile, in 1978, Tandy came out with the Expansion Interface and floppy disk drives. I got these parts as soon as they were available. I found them, with TRSDOS Version 1.0, to be all but unusable. The whole system would crash and reset if you breathed on it hard. I sent the whole thing back to Tandy, and demanded my money back.

I didn’t really need the extra RAM — the Holmes board provided that. I had only bought the EI to support the floppies. So instead I got the Stringy Floppy to run with Level II BASIC. Again, I only used it for backups.

When I finally ripped out the Level II ROMs and went back to Level I, I had a problem: The Stringy Floppy was designed only for Level II. But by that time, I was pretty good at assembly language, so I wrote my own driver for the Stringy.

In doing so, I browsed the Exatron ROM with my disassembler. Once again, I got a chance to see Li Chen Wang’s handiwork, which was as good as ever. I wrote my own code for the Level I version, and I tried hard to match the performance Wang was getting, but was never quite able to do so. He had done meticulous cycle-counting to get square waves that were truly square (normally, Manchester encoding doesn’t care — it’s asynchronous — but if you’re trying for maximum baud rate, square is better).
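
For readers who haven’t met it: Manchester encoding puts a transition in the middle of every bit cell, so each bit carries its own clock and the decoder can tolerate some timing wobble. A minimal sketch in C (using the IEEE convention, where a 0 bit is a high-to-low transition and a 1 bit is low-to-high; the Exatron’s actual convention and bit order may have differed):

    #include <stdint.h>

    /* Encode one byte, MSB first, into 16 half-bit cells
       (0 = line low, 1 = line high). Every bit cell contains a
       mid-cell transition, which is what makes the code
       self-clocking.                                              */
    static void manchester_encode(uint8_t byte, uint8_t halves[16]) {
        for (int i = 0; i < 8; i++) {
            int bit = (byte >> (7 - i)) & 1;
            halves[2 * i]     = bit ? 0 : 1;   /* first half-cell  */
            halves[2 * i + 1] = bit ? 1 : 0;   /* second half-cell */
        }
    }

The catch is that as you push the bit rate up, the decoder’s timing margins shrink, so the closer the half-cells are to equal width, the faster you can reliably go. Hence Wang’s cycle-counting.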

Interestingly enough, while Googling for some history on the Stringy Floppy, I came across this reference:

“According to Embedded Systems magazine the Exatron Stringy Floppy used Manchester encoding, achieving 14K read-write speeds and the code controlling the device was developed by Li-Chen Wang (who also wrote a Tiny BASIC, the basis for the TRS-80 Model I Level I BASIC.)”

Guess which person at Embedded Systems said that.

Q: Why were you not impressed by Level II BASIC?

A: When I tried the same disassembly tricks on Level II, my reaction was much different from the one I’d had with Wang’s code. It was “Bleahh.”

Crenshaw’s First Law says, “There are a myriad of ways to take a simple problem and make it seem complex, but only a handful of ways to take a complex problem and make it seem simple.”

Race car builder George Miller advised, “Simplify and add lightness.”

The punch line of an old joke says, “Keep It Simple, Stupid.” KISS.

I’m a huge believer in the KISS principle. I felt that Level II BASIC had what I call “gratuitous complexity.” For starters, there seemed to be a concerted effort to thwart people like me who wanted to disassemble the ROM. It’s a practice Microsoft still follows, to this day. I can certainly understand Bill Gates’ desire not to be ripped off, or have his proprietary ideas cloned by competitors. But I wasn’t a competitor. I was the guy who paid good money to BUY the software. I resented anyone telling me that it was not my property. And the code devoted to this thwarting took up valuable ROM and RAM acreage that could have been put to better use.

Throughout the Level II code, there were many references to a block of jump vectors that were filled in during the boot process. I can’t say whether this jump vector block was part of the “thwarting” process. Perhaps it was to provide hooks for future extensions. Whatever the reason, the vectors took up precious RAM space and slowed the program down with their jumps to jumps.
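
The pattern is easy to show with C function pointers standing in for the Z80 jumps (a sketch of the general hook technique, not the actual Level II vector table):

    #include <stdio.h>

    static void rom_putc(char c) { putchar(c); }

    /* RAM vector, filled in at boot. A DOS or a user program can
       later repoint it at its own routine.                        */
    static void (*putc_vector)(char) = 0;

    static void boot(void) { putc_vector = rom_putc; }

    /* Every character the ROM prints goes through the vector:
       the extra indirection ("jump to a jump") that costs RAM
       and a few cycles on every call.                             */
    static void print(const char *s) { while (*s) putc_vector(*s++); }

    int main(void) { boot(); print("hello\n"); return 0; }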

The last straw was the subroutine issue. As I’ve said, I located the I/O functions in the Level I ROMs and used them for my own programs. In Level II BASIC, I didn’t find ANY such callable functions. The code that did these jobs was all tangled up with the Level II program itself, and jumped back into the ROM when it was done.

Of course, there was a vast difference in complexity (some of it necessary) between Level I and Level II BASIC. Level II had trig functions. It had variable names of more than one character(!). It supported integer as well as floating point arithmetic. Perhaps most important of all, Level I BASIC did all its parsing at run time. However often you executed a block of code within a loop, it got parsed all over again, each time. Level II BASIC tokenized the source code, replacing keywords with single-byte tokens. This sped up program execution quite a bit.
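
A toy tokenizer shows the idea (the keyword list and token values here are invented, not the real Level II codes):

    #include <string.h>

    static const char *keywords[] = { "PRINT", "FOR", "NEXT", "GOTO" };
    #define NKEYWORDS (sizeof keywords / sizeof keywords[0])
    #define TOKEN_BASE 0x80    /* tokens live above the ASCII range */

    /* Replace each recognized keyword with a single-byte token;
       copy everything else through unchanged. The interpreter can
       then dispatch on one byte instead of re-parsing the text on
       every pass through a loop.                                   */
    static int tokenize(const char *src, unsigned char *dst) {
        int out = 0;
        while (*src) {
            size_t k;
            for (k = 0; k < NKEYWORDS; k++) {
                size_t n = strlen(keywords[k]);
                if (strncmp(src, keywords[k], n) == 0) {
                    dst[out++] = (unsigned char)(TOKEN_BASE + k);
                    src += n;
                    break;
                }
            }
            if (k == NKEYWORDS)          /* no keyword matched */
                dst[out++] = (unsigned char)*src++;
        }
        dst[out] = '\0';
        return out;
    }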

I understand that such things are necessary to get good performance, but I still felt (and feel) that for a computer as small as the TRS-80, the KISS principle should apply.

Q: When did you move away from the TRS-80 and what did you move to?

A: In 1981. After several aborted efforts involving crooked vendors and shoddy products, I finally got a nice S-100 system running CP/M. I went online to CompuServe at a whopping 300 baud. In 1983 the S-100 system was replaced by a Kaypro 4, which I still have today. I got a LOT of work out of that Kaypro, including my “Let’s Build a Compiler” tutorial.

Q: Do you think the computer industry today could learn some lessons from the earlier days of computers?

A: You bet. In those days, we struggled to save bytes and clock cycles wherever we could. Today, with computer capacities and clock speeds improving by the day, I fear that too many vendors just count on the hardware to hide their inefficient code. I once read a quote from a Microsoft software manager who said, “We don’t try to optimize anymore. We just throw the software out there, and wait for the hardware to catch up.”

The thing is, even as fast as today’s computers are, the hardware really HASN’T caught up. It still takes my PC, with a 2 GHz AMD Athlon 64 X2, over a minute to boot. How many clock cycles is that? I once figured that, at this clock rate, I could have run every single FORTRAN simulation I ever wrote, 10,000 times over, just in the time this computer takes to boot. And as fast as this computer compiles C code, my old Kaypro did it much faster.
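
(For scale: at 2 GHz, a one-minute boot works out to 2 × 10⁹ cycles/second × 60 seconds, or about 1.2 × 10¹¹ clock cycles.)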

Q: What’s your opinion of today’s operating systems? Is their level of complexity sustainable?

A: Funny you should ask. Just last week, I commented that I wasn’t happy with either Windows or Linux. Someone else asked, “Then what DO you like?” I didn’t have a good answer. The most recent operating system that I truly loved was CP/M. And the reasons were simple enough. First, it was small enough to understand. If there was something it was doing that I didn’t like, I wasn’t afraid to go in and replace or rewrite it. There was nothing going on in that computer, either in the OS or its applications, that I didn’t understand. I don’t think there’s anyone in the world, even at Microsoft, who understands all of Windows.

Second, CP/M and its applications worked. Every time. There was no blue screen of death; no need for Disk Doctors, no forced reboots, no editors that couldn’t save their files. Someone asked me once if I’d ever had a floppy disk get corrupted under CP/M. I could honestly answer, “Only one, and that was because it had gotten so old that the oxide layer was flaking off.”

Oh, there were programs out there that crashed. They got immediately relegated to the circular file. But the ones I used every day did what I told them to do, every time, no exceptions. I miss that. If there’s anything we’ve learned recently about operating systems, it’s that you can only carry complexity so far.

Q: Your “Let’s Build a Compiler” series inspired many people (including me) to take an interest in compilers. How did it come about?

A: It’s interesting. When I first learned to program in Fortran, I asked someone “How does the computer do that? How does it know what I’m saying?” He looked at me incredulously and said, “Fortran is a compiler. It’s a computer program like any other. Only it happens to take source code as its input, and puts out machine code.”

There was a long pause, then I said, “Someone wrote that program?” I was stunned. I don’t know where I thought the compiler came from — Mount Olympus, maybe?

Anyhow, from that day to this, I’ve always been fascinated by compiler technology. I bought every book I could find on it, including the famous Aho & Ullman “Dragon book.” The problem was, I couldn’t understand a word. Right off the bat, they used symbolism and Backus-Naur form (BNF) that I didn’t understand, so the whole thing was Greek to me. I kept buying other books, with the same result.
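
For readers who haven’t seen it, BNF is a compact notation for describing a language’s grammar. A classic textbook fragment for arithmetic expressions looks something like this (a generic example, not one taken from Aho & Ullman):

    <expression> ::= <term>   | <expression> "+" <term>
    <term>       ::= <factor> | <term> "*" <factor>
    <factor>     ::= <number> | "(" <expression> ")"

Each rule says how the construct on the left can be built from the pieces on the right; the recursion is what makes the notation so dense, and so opaque on first contact.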

It’s not that I think I was particularly stupid. It’s just that, because of the pressures of job, column, etc., I rarely had time to sit down and study the books as I would have done, say, in college.

The breakthrough came when I got fed up feeling like such a dunce. I resolved to “go back to school,” studying the books as I’d study for an exam. I cleaned off a desk and set up my lamp. I sharpened a dozen pencils (remember them?), and set a stack of notebook paper in place. Then I started working my way through Aho & Ullman. I worked every problem at the end of each chapter, running the code on Turbo Pascal.

At the end of the first chapter, I thought, “Hm. This isn’t so hard, so far.” At the end of the second, I thought, “In fact, it’s quite easy.”

At the end of the third, I said, “But it’s a shame the authors have over-complicated it.” And it’s true. Several places in my notes, I have asides that go, “Sorry about the change in notation. I’m just following the authors' own changes.”

At that point, I had decided that I could explain the whole idea to a layman, better than they had. So I did. It was a fun exercise. My only regret is that I never finished the tutorial, running out of gas at episode 16 of 22.

Q: Do you think that personal computers have lived up to the potential they showed when they were first introduced?

A: Oh, certainly. Look at the things we’re using them for. Things we hadn’t even DREAMED of before, like computerized drafting, or art, or spreadsheets. Today, most people are using PCs to surf the web, download and play music, play realistic video games, exchange email, etc., etc. Look at Google alone. I almost never even open my textbooks anymore; I can go online for the information, and get it faster.

In the scientific world, we are solving huge problems, running simulations with incredible detail, simulating stress and vibration in structures, predicting thermal behavior, optimizing the control of complex systems, etc., beyond anything we’d ever hoped for. And we do it with tools such as MathWorks’ graphical tool Simulink, which takes all the work out of it.

Nobody in the 1960s, not even the most wild-eyed sci-fi writers, ever expected this.

Having said that, I do think that there’s an aspect of personal computers that has been entirely neglected. I’m talking about hobby computing.

Like its immediate predecessors, the Altair 8800 was strictly for the most geekish of hobbyists. With a base memory size of 256 bytes, it was not capable of doing much more than playing simple tunes or making the lights blink. But we got great pleasure out of tinkering with it, and trying to turn it into a useful device.

The last hobby computer, in my opinion, was the old Heathkit H-8. Its hex keypad and hex display let the user debug his software, right from the front panel. It also had enough I/O pins to let you connect hardware to it. But around 1978-9, a funny thing happened. Managers looked at the marketplace and thought, “Hm, let’s see now. I can build a hobby computer, and sell it to maybe 2000 hobbyists a year. 4000, total. Or I can build a computer that can be used for small business, and sell 100,000 a year. Decisions, decisions.”

It’s not hard to guess what their decision was. After that, all PC vendors targeted the commercial market, not the hobbyist. And it worked, and worked beyond all expectations. Today, there’s hardly a single desktop in the world of business that doesn’t have a computer sitting on it. Even my old enemies, the Systems Administrators, are back, and just as obnoxious as ever. But in the process, the interests of hobbyists have been completely ignored.

Q: Despite the incredible speeds of today’s computers, I’m always surprised when some common tasks, such as loading programs, actually take longer than they did on older computers. Have people come to expect too little from software?

A: Yes. See above concerning both boot time and reliability. I’m concerned that we are raising whole generations who don’t understand that computers are supposed to work, all the time, every time. They’ve come to accept software bugs as facts of life, like catching a cold. A decade or so ago, we’d have been outraged by such a thing. In my days dealing with CIS departments, I was always surprised at the casual way they took reported problems. They would usually say, “Well, just reboot and try again.” That, to them, was a satisfactory response. It wasn’t, to me.

Q: How much value do you think there still is for assembly language and low-level programming?

A: It all depends on what you’re doing, and why. Computers — even embedded controllers — are getting bigger and faster all the time, so a little sacrifice of speed and code size is not much of an issue anymore. But the applications are also getting bigger. Moreover, vendors keep coming out with smaller and cheaper chips, to be embedded in places that we couldn’t afford before. Today, you can build, say, a thermostat that sends RF signals to its HVAC system. Or a toy robot with an embedded GPS receiver. So I think there will always be a place for assembly language. Even if there weren’t, some of us would still do it, just for fun. I always liked the challenge of fitting a hex debugger into 1k of RAM, and I’ve done it four times now.

Q: You’ve written a great deal about embedded systems and have a lot of experience in that field. Are embedded systems one of the last places where low-level programming is still practiced?

A: Yes, indeed. See above. Not only do we require small, efficient software, it has to be absolutely bug-free. I read somewhere that the average release of OS/360, the operating system for the old IBM mainframe, had 5000 known bugs at the time it was shipped. The worst version had 10,000. I’ve further read that Windows has similar bug densities.

This would never be tolerated in the aerospace world. When you’re going into space or setting out to blow something up, you can’t tolerate errors. The only acceptable error count is zero.

For that reason alone, there’s still a place for tight and careful programming.

There was a joke going around a few years ago, based on the speculation (since realized) that we might see Windows in embedded systems. Picture a manned mission to Mars. One of the astronauts cries out,

“Help, we’re going out of control! How do we stop this thing?”

Answer: “Press Start.”

Then the computer goes blue, and says, “Sorry, an unexpected error has occurred. Please return to earth, and restart your mission.”

My thanks to Jack Crenshaw for his participation in this interview.


Comments

Mark McDougall says:

This page made Reddit today!

Mainframe Fan says:

I think Crenshaw is being a little hard on mainframes, after all the last thing he appears to have worked on was a 7090 or maybe a 1960’s era 360. Things have changed but we still write assembler on mainframes and they’re more reliable than any PC Crenshaw or anybody else has ever dreamed of. Come down off your high horse and ride a real stallion! :-) Thanks for the interview, good reading.