Quoted comments from rhyre [aka ralphw@MIT-AI]
replies from JSL
> I wasn't planning on wire-wrapping or using the construction
> techniques of the time.
For a system of that size and complexity, you'd have to use
wire-wrap or completely change the design. If you were
planning to migrate the logic from our schematic into an FPGA,
our design rules might be a huge obstacle.
> I have 6502B CPUs, although nothing to test them in except
> the card.
You could, of course, adapt our design to any speed of the
processors you happened to have on hand. That was kind of the
point of the way we did it. However, it would be setting a
trim-pot, not replacing a crystal. ("We," in the case of the
processor board, was mostly me, with some help from Jerry at
the beginning, on the memory timing and the transition from
18-pin to 16-pin DRAMs, and from Aubrey Jaffer when I kind of
ran out of steam on the FSM for no-memory-cycle prediction.
Russell O'Connor was a huge help by pointing out errors in my
logic, generally oversights; I seem to have recalled him as
Frank before, so my memory is obviously going. Apply salt as
needed to my recollections.)
Jerry didn't want to have a clock that went from one board to
another, and the processor board was designed to optimize
access to and from the bus. To do this, we violated a great
many of the design rules most sane people used, and developed
a different set derived, I'm afraid, from some work in Tech
Square that had been discussed in an architecture course
(6.032?) on realizations of theoretical notations for
clockless systems. Transition diagrams, I think. 30 years
ago; I've gone vague.
Another way to do it would have been to have distributed a
100-MHz or faster clock from a central location, and divided
down as needed to get the times required in each case. We
decided not to do that, and the rest was falling dominoes. If
you redesigned to conventional synchronous rules, it would be
a very different design.
> I did reach out to Olin via LinkedIn...
And did he respond? If he remembers it, he'd have a huge
amount of information in his head.
> Was the AMD part something like the 2911? (bit-slice part
> for building microarchitectures),
Yes, the bit-slices were pretty cool, though they were just
the combination of a 4-bit-wide register file and an ALU on
one chip. You could make an ALU as wide as you might want in
four-bit slices and keep the speed up by using outboard
look-ahead-carry chips; the bit slices just encapsulated a
multi-port register file in the same silicon. This was before
deep pipelines required register renaming in such designs, so
the register file was too small, but it was a great tool for
getting a design out the door fast.
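The composition described above — arbitrary width from 4-bit slices — can be sketched in a few lines. This is a toy model in the spirit of the 2901-family parts, not their actual behavior: the real slice carried a register file and many more functions, and the names and two operations here are illustrative assumptions. (The model also ripples the carry from slice to slice; the outboard look-ahead-carry chips mentioned above computed the same carries faster, with identical results.)

```python
# Toy model of cascading 4-bit ALU slices, loosely in the spirit
# of the AMD 2901 bit-slice. Names and signals are illustrative,
# not the real part's; only add/subtract are modeled.

def alu_slice_4bit(a, b, carry_in, subtract=False):
    """One 4-bit slice: add (or subtract) with carry in and out."""
    if subtract:
        # invert this nibble; the +1 of two's complement comes in
        # as the carry injected at the least significant slice
        b = (~b) & 0xF
    total = a + b + carry_in
    return total & 0xF, (total >> 4) & 1   # (4-bit result, carry out)

def wide_alu(a, b, width_slices, subtract=False):
    """Chain `width_slices` slices into a 4*width_slices-bit ALU."""
    carry = 1 if subtract else 0           # seed carry for subtract
    result = 0
    for i in range(width_slices):
        nib_a = (a >> (4 * i)) & 0xF
        nib_b = (b >> (4 * i)) & 0xF
        nib_r, carry = alu_slice_4bit(nib_a, nib_b, carry, subtract)
        result |= nib_r << (4 * i)
    return result, carry

# A 16-bit ALU from four slices:
print(wide_alu(0x1234, 0x0FFF, 4))        # -> (0x2233, 0)
print(wide_alu(0x1234, 0x0234, 4, True))  # -> (0x1000, 1)
```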
But our AMD part was something much simpler. It was a
one-shot, also known as a monostable multivibrator. It's a
timing element. Most of our timing elements were open
collector outputs with a capacitor, a variable potentiometer
(trim-pot), and a Schmitt trigger, but in one place we needed
something fancier. There are variations on the multivibrator
circuit, like reset, retriggering, and so on; I forget what
we needed in that particular place. However, the AMD part
was a high-speed (Schottky) part which had an impressively
small delay from the trigger input to the output, like a
nanosecond. (In those days, that was considered very fast.)
Typically, we needed to time the processor and DRAM clocks
and setup times, so we needed delays in the 50-200 ns range,
and didn't have to worry about retriggering or reset, so an RC
time constant with LS parts was plenty fast and precise enough.
I think the processor board had a total of seven timing
elements, between the processor, memory and the external bus.
I recall that we cranked up each processor board pretty much
as fast as it could go, then backed the settings off by a
percentage to allow for temperature variation and noise
margins. So no two MicroMinds were ever quite the same speed.
It's a bit like the wires in the Cray designs that had to be
cut to length; the scary part was that they found the lengths
by measurement, not design, so each processor shipped with
different lengths.
The display board, in contrast, had a crystal oscillator,
because its product was a raster signal with precise timing.
For that, we had synchronizers, to mate the bus timing domain
with the display engine. We did exponential trials and
convinced ourselves that there would be one synchronizer
failure in 70,000 years, which was probably an adequate margin.
Reading from that memory was consequently rather expensive,
but I think the write operations were fast if they weren't too
frequent. (A 6502 in a loop, even at 4 MHz, would not have
overrun it.) It's been so long I forget the equations, but
I think sigma or tau (a standard deviation? a half-life?) on
the exponential decay of the metastable state, using the
components we had, was 3 ns, so you just had to allow enough
sigmas for any desired confidence level. And you had to wait
for a slot in the pipeline to get your transfer in while a
raster line was also being generated.
> I think one version of the Xerox Alto used [bit slices], but
> I remember my 6.111 fellow classmates using those.
You can do a lot with high-level building blocks, making more
functionality than you could starting with SSI or MSI, but the
downside of that is you don't necessarily ever understand what
is inside those blocks. So, should third-graders be permitted
calculators? Does it really matter if they can ever add and
multiply without them? Is long division really a skill they
will need in later life? Like that.
The MicroMind was an example of repurposing a processor from
an embedded controller to a general purpose computer. As a
pedagogical exercise, it was pretty cool. However, we started
with a component whose chief advantage was that it cost $25,
instead of $200 - $400, and blew it because our expectations
were too far ahead of the curve. It's an extreme example of
feature-creep. The stuff we put in should have been in our
third or fourth computer product, not the first. By then we
might have had 68000s to work with. We didn't want to hear
that at the time; having upper management that had just
graduated from MIT(ERS) was both a blessing and a curse.
Icarus R Us.
Tuesday, July 11, 2006