But, for Backus, the programming itself was as tedious as medical memory work. In the early 50’s, most of it was being done in the binary-coded numbers the computer hardware could interpret. Writing even the simplest machine instruction was a laborious process of setting down rows of “0’s” and “1’s” in precise order.

“Much of my work has come from being lazy,” he says. “I didn’t like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs for the 701. And that wound up as something called Speedcoding.”

Later, when the IBM 704 was in development up at the old Homestead lab in Poughkeepsie [N.Y.], Backus persuaded the designers to build directly into its hardware the features that Speedcoding simulated.

“From then on,” he said, “the question became, what can we do for the poor programmer now? You see, programming and debugging were the largest parts of the computer budget, and it was easy to see that, with machines like the 704 getting faster and cheaper, the situation was going to get far worse.”

Backus decided there might be a way to use mathematical notation to address the computer and have it work out its programs of “0’s” and “1’s” automatically. He wrote a letter to his boss, Dr. Cuthbert Hurd, head of the applied science department, saying that it might be possible to develop an automatic programming system for the 704, and make it practical. Hurd, who, in 1951, had the foresight to encourage the company to hire John von Neumann as a consultant, said yes.

Most people, Backus says today, “think FORTRAN’s main contribution was to enable the programmer to write programs in algebraic formulas instead of machine language. But it isn’t. What FORTRAN did primarily was to mechanize the organization of loops.” A loop, heavily used in scientific work and in computing payrolls, is a series of instructions repeated a number of times until a specific result is reached.
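The bookkeeping FORTRAN automated can be suggested in modern terms. This is a Python sketch purely for illustration, not period FORTRAN, which expressed the same thing with its DO statement:

```python
# A loop: a series of instructions repeated until a specific result
# is reached. FORTRAN's DO statement mechanized the index-keeping,
# bounds-testing, and jumping that previously had to be coded by hand
# in machine language. (A modern sketch, not Backus's actual code.)
def sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):  # the loop organization FORTRAN automated
        total += i * i
    return total
```

Here `sum_of_squares(3)` repeats the body three times, accumulating 1 + 4 + 9 to reach 14; the programmer states the bounds and the body, and the translator generates the machine instructions that drive the repetition.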

FORTRAN did greatly increase programmer productivity. What had previously taken 1,000 machine instructions could now be written in 47 statements. And, as intended, more scientists and engineers learned to do their own programming. But the language was slow, at first, in catching on. “Users,” says its creator, “just found it hard to believe that a machine could write an efficient program.”

It could. By the fall of 1958, more than half the machine instructions of the 704 were being generated by FORTRAN. It was soon being used on other machines as well. “In a way, FORTRAN was a great boon to our competitors,” says Backus, “because with their programs tied up in machine language, IBM customers weren’t about to reprogram for another computer. But if a competitor could come up with a program that would translate a FORTRAN program into the language of his machine, he had a selling point.”

The telephone rang, and he crossed the room to answer it. “I’ve had an associate for several months now,” he said, returning. “A former associate professor at Cornell. When I’m working at home, we often spend an hour a day on the phone.”

What had inspired his new work?

“I just got sick of seeing more and more new programming languages — what I call von Neumann languages,” he replied. “They’ve just become so baroque and unwieldy that few of them make programming sufficiently cheaper to justify their cost. FORTRAN started the trend. You see,” Backus continued, “all programming languages are essentially mirrors of the von Neumann computer. Each one may add a gimmick or two, to automate some of the dirty work, but it’s usually done at the price of a much more complicated language. Today’s programming manuals are that thick.” He held up a thumb and forefinger. “Some of them have 500 pages. It’s just a vicious circle, because language designers design to fit the computer, and computer designers think they must design to fit the languages.

“Von Neumann’s concept was brilliant, of course, and worked fine 30 years ago,” said Backus. “But,” he paused, making arches of his hands, “here’s my highly oversimplified analysis of the von Neumann computer. It consists of two boxes. One is the central processing unit, where the calculations take place, and the other is the store, or memory. Traffic between them takes place, figuratively speaking, through a narrow passage that I call the von Neumann bottleneck. Because it is just that. You see, the purpose of a program is to make a big change in the store. But how does it do it? By huffing and puffing words (a computer word is only 32 bits of “0’s” and “1’s”) back and forth through the tiny passage between the store and the CPU. One word at a time.”

The result, says Backus, is that the programmer is left with the enormous task of working out how to get things out of the store, combine them, and pump them back into the store so that the ultimate result is achieved. Everything that can be said in a conventional programming language has to be thought of in advance, making the language huge and inflexible. “And because it takes pages and pages of gobbledygook to describe how a programming language works, it’s hard to prove that a given program actually does what it is supposed to. Therefore, programmers must learn not only this enormously complicated language but, to prove their programs will work, they must also learn a highly technical logical system in which to reason about them.

“Now, in the kinds of systems I’m trying to build,” he explained, “you can write a program as essentially an equation, like equations in high school algebra, and the solution of that equation will be the program you want. What’s more, you can prove your programs in the language of the programs themselves. The entire language can be described in one page. But,” he raised a finger, “there’s a catch. They’re what I call applicative languages, which means that there’s no concept of a stored memory at all.”
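The flavor of such an equation-style definition can be suggested in Python. This is a loose sketch only; Backus’s own applicative notation, later published as the FP system, looks quite different. An inner product is here defined by a single composition of functions, with no sequence of commands updating memory cells:

```python
from functools import reduce

# Compose two functions into one: compose(f, g)(x) = f(g(x))
def compose(f, g):
    return lambda x: f(g(x))

# Building blocks, each a pure function of its whole input
multiply_pairs = lambda pairs: [x * y for x, y in pairs]
add_up = lambda nums: reduce(lambda a, b: a + b, nums, 0)

# The "equation": inner_product is defined entirely by composing
# the parts above, in the applicative spirit Backus describes.
inner_product = compose(add_up, multiply_pairs)
```

So `inner_product([(1, 4), (2, 5), (3, 6)])` evaluates to 32. The definition reads like algebra: the whole program is one expression built from smaller ones, which is what makes it possible to reason about programs in the language of the programs themselves.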

But surely a computer can’t do without a stored memory?

“Well, in one sense it can,” said Backus.

“What I want to do is to come up with a computing system that doesn’t depend on a memory at all, and combine that system, in a rather loose fashion, with one that has a memory but keeps the simplicity and the algebraic properties of the memoryless system. Then, hopefully, the process of algebraically speeding up programs can be mechanized so that people can write the simplest programs and not have to care whether they are efficient or not. The computer will do the hard work. And, more than that, perhaps a lot of programs can be written simply by describing the program you want with an equation.”

The sun had left the hill and, farther out, was turning the blue of the ocean to gold. “The FORTRAN language,” Backus reflected, “took about nine months to devise. I’ve been working on this project since 1970, and it’s still evolving. It’s been difficult because it requires breaking one of the traditions of spoken English.

“For example, when we write sentences, we interpret the sentence not by the words themselves, but in terms of what they refer to. When you say, ‘the cat is running,’ you don’t mean the word, ‘cat,’ you mean the animal. The same with computer programs. When you write ‘x equals y plus z,’ you are certainly not referring to adding the letters ‘y’ and ‘z.’ Yet my languages do just that. I call them anti-quote languages. When you use the word ‘cat’ in an anti-quote language, you are referring to the word, not the animal.

“Now, if you were to say that, from now on, all English sentences are to be interpreted in this new way, everybody would be terribly confused, and of course, it wouldn’t make sense in the case of English. Yet, that’s essentially the change I’ve made in programming.”

There was a taxi waiting and a plane to catch.

“I wouldn’t be surprised if I’ve boggled you,” Backus said, as he saw me down the steps. “My stuff boggles computer scientists, too, at first. It’s a terrible wrench in our accustomed way of thinking and just normal language usage. But what I can show is that if they do make the switch, then a lot of advantages flow from it.”
