John Holland was an early Micromind user; he did early work on genetic algorithms with his system, which is mentioned on page 190 of
Complexity: The Emerging Science at the Edge of Order and Chaos
Posted by Ralph at 3:03 AM
I don't believe Ron (or Dick) had much to do with the Micromind design--I think that was mostly Jerry, Spencer, Roy, and one or two others whose names I can't offhand recall (Russell O'Connor? Who else? Spencer would know.).
As I understand the story, originally ECD stood for Electronic Consulting and Design--Jerry, Ron, and Dick made money consulting. Once they had their first product they decided that it looked cool as a logo, but shouldn't stand for anything.
That first product, if you aren't aware, was the award-winning C-Meter, a $289 digital autoranging capacitance meter (0.1pF to lots of mF). Its only competition was a $2000 behemoth from General Radio that was significantly less capable and much harder to use. I still have and use my C-Meter regularly, and I wish I'd bought two back in the day in case this one fails. It's just about perfect for its job, as well as being a marvel of inexpensive, compact, efficient, low-power design. A work of genius. Those guys are brilliant engineers. I felt much the same about my HP-55, which was of similar vintage but gave up the ghost a decade ago.
The C-Meter's successor was a similarly innovative digital thermometer (the T-Meter, of course). It was not as successful, perhaps because it pushed just a little past the edge of practical design and packaging. Still, I wish I'd bought a couple of those, too.
Another person who used a lot of Microminds was Ken Skier. He's still around (I think he may even be a neighbor)--I bought some shareware from him a few years ago and we corresponded briefly when I realized who it was. He had several of them for some sort of English writing project, the details of which escape me.
Posted by Ralph at 2:38 AM
Quoted comments from rhyre [aka ralphw@MIT-AI]
replies from JSL
> I wasn't planning on wire-wrapping or using the construction
> techniques of the time.
For a system of that size and complexity, you'd have to use
wire-wrap or completely change the design. If you were
planning to migrate the logic from our schematic into an FPGA,
our design rules might be a huge obstacle.
> I have 6502B CPUs, although nothing to test them in except
> the card.
You could, of course, adapt our design to any speed of the
processors you happened to have on hand. That was kind of the
point of the way we did it. However, it would be setting a
trim-pot, not replacing a crystal. (We, in the case of the
processor board, was mostly me, with some help from Jerry at
the beginning, on the memory timing and the transition from
18-pin to 16-pin DRAMs, and from Aubrey Jaffer when I kind of
ran out of steam on the FSM for no-memory-cycle prediction.
Russell O'Connor was a huge help by pointing out errors in my
logic, generally oversights; I seem to have recalled him as
Frank before, so my memory is obviously going. Apply salt as
needed to my recollections.)
Jerry didn't want to have a clock that went from one board to
another, and the processor board was designed to optimize
access to and from the bus. To do this, we violated a great
many of the design rules most sane people used, and developed
a different set derived, I'm afraid, from some work in Tech
Square that had been discussed in an architecture course
(6.032?) on realizations of theoretical notations for
clockless systems. Transition diagrams, I think. 30 years
ago; I've gone vague.
Another way to do it would have been to have distributed a
100-MHz or faster clock from a central location, and divided
down as needed to get the times required in each case. We
decided not to do that, and the rest was falling dominos. If
you redesigned to conventional synchronous rules, it would be
a very different design.
> I did reach out to Olin via LinkedIn...
And did he respond? If he remembers it, he'd have a huge
amount of information in his head.
> Was the AMD part something like the 2911? (bit-slice part
> for building microarchitectures),
Yes, the bit-slices were pretty cool, though they were just
the combination of a 4-bit-wide register file and an ALU on
one chip. You could make an ALU as wide as you might want in
four-bit slices and keep the speed up by using outboard
look-ahead-carry chips; the bit slices just encapsulated a
multi-port register file in the same silicon. This was before
deep pipelines required register renaming in such designs, so
the register file was too small, but it was a great tool for
getting a design out the door fast.
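The slice-composition idea above can be sketched in a few lines. This is an illustrative model only, not the Am2901's actual function set; the function and helper names are invented. It shows how 4-bit slices chain through the carry to make an ALU of any width (here rippling the carry for simplicity, where the real designs used outboard look-ahead-carry chips to keep the speed up):

```python
def alu_slice_add(a4, b4, carry_in):
    """One 4-bit slice: add two nibbles plus a carry, return (sum4, carry_out)."""
    total = (a4 & 0xF) + (b4 & 0xF) + carry_in
    return total & 0xF, total >> 4

def wide_add(a, b, width_bits=16):
    """Chain 4-bit slices into a wider adder by rippling the carry between them."""
    result, carry = 0, 0
    for i in range(0, width_bits, 4):
        s, carry = alu_slice_add((a >> i) & 0xF, (b >> i) & 0xF, carry)
        result |= s << i
    return result & ((1 << width_bits) - 1), carry
```

The same pattern extends to the other ALU functions and the per-slice register file; the carry chain is the only inter-slice wiring this sketch models.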
But our AMD part was something much simpler. It was a
one-shot, also known as a monostable multivibrator. It's a
timing element. Most of our timing elements were open
collector outputs with a capacitor, a variable potentiometer
(trim-pot), and a Schmitt trigger, but in one place we needed
something fancier. There are variations on the multivibrator
circuit, like reset, retriggering, and so on; I forget what
we needed in that particular place. However, the AMD part
was a high-speed (Schottky) part which had an impressively
small delay from the trigger input to the output, like a
nanosecond. (In those days, that was considered very fast.)
Typically, we needed to time the processor and DRAM clocks
and setup times, so we needed delays in the 50-200 ns range,
and didn't have to worry about retriggering or reset, so RC
time-constant using LS was plenty fast and precise enough.
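A back-of-envelope for those RC timing elements: the delay is just the standard exponential charging curve, solved for the time at which the node crosses the Schmitt trigger's threshold. The component values and threshold voltage below are assumptions for illustration, not taken from the actual boards:

```python
import math

def rc_delay_ns(r_ohms, c_farads, v_cc=5.0, v_start=0.0, v_threshold=1.7):
    """Time for an RC node charging toward Vcc to cross a threshold voltage.
    Standard charging curve: v(t) = Vcc - (Vcc - v_start) * exp(-t / RC)."""
    rc = r_ohms * c_farads
    t = rc * math.log((v_cc - v_start) / (v_cc - v_threshold))
    return t * 1e9  # in nanoseconds

# e.g. an assumed 2.2 kohm trim-pot setting with 100 pF lands around 90 ns,
# squarely in the 50-200 ns range mentioned above.
print(round(rc_delay_ns(2200, 100e-12)))
```

Turning the trim-pot changes R, which scales the delay linearly; that is why each board could be tuned individually rather than re-stuffed with a different crystal.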
I think the processor board had a total of seven timing
elements, between the processor, memory and the external bus.
I recall that we cranked up each processor board pretty much
as fast as it could go, then backed the settings off by a
percentage to allow for temperature variation and noise
margins. So no two MicroMinds were ever quite the same speed.
It's a bit like the wires in the Cray designs that had to be
cut to length, about which the scary part was that they found
the length by measurement, not design; different lengths were
in each processor shipped.
The display board, in contrast, had a crystal oscillator,
because its product was a raster signal with precise timing.
For that, we had synchronizers, to mate the bus timing domain
with the display engine. We did exponential trials and
convinced ourselves that there would be one synchronizer
failure in 70,000 years, which was probably an adequate margin.
Reading from that memory was consequently rather expensive,
but I think the write operations were fast if they weren't too
frequent. (A 6502 in a loop, even at 4 MHz, would not have
overrun it.) It's been so long I forget the equations, but
I think sigma or tau (a standard deviation? a half-life?) on
the exponential decay of the metastable state, using the
components we had, was 3 ns, so you just had to allow enough
sigmas for any desired confidence level. And you had to wait
for a slot in the pipeline to get your transfer in while a
raster line was also being generated.
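The exponential-trials reasoning above is the standard metastability estimate, and it can be sketched numerically. The tau of 3 ns is from the text; the event rate and the T0 scale factor below are assumptions, so treat the numbers as order-of-magnitude only:

```python
import math

def synchronizer_mtbf_seconds(settle_ns, tau_ns=3.0, event_rate_hz=4e6, t0_s=1.0):
    """Standard metastability estimate: the chance a flip-flop is still
    metastable after t of settling decays as exp(-t / tau), so
    MTBF = exp(t / tau) / (event_rate * T0).
    tau = 3 ns comes from the text; event rate and T0 are assumed here."""
    return math.exp(settle_ns / tau_ns) / (event_rate_hz * t0_s)

# With ~130 ns of settling and these assumed rates, the MTBF lands in the
# same tens-of-thousands-of-years ballpark as the 70,000-year figure above.
years = synchronizer_mtbf_seconds(settle_ns=130) / (3600 * 24 * 365)
```

Each extra tau of settling time multiplies the MTBF by e, which is why "allow enough sigmas for any desired confidence level" is cheap in wait states but expensive in read latency.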
> I think one version of the Xerox Alto used [bit slices], but
> I remember my 6.111 fellow classmates using those.
You can do a lot with high-level building blocks, making more
functionality than you could starting with SSI or MSI, but the
downside of that is you don't necessarily ever understand what
is inside those blocks. So, should third-graders be permitted
calculators? Does it really matter if they can ever add and
multiply without them? Is long division really a skill they
will need in later life? Like that.
The MicroMind was an example of repurposing a processor from
an embedded controller to a general purpose computer. As a
pedagogical exercise, it was pretty cool. However, we started
with a component whose chief advantage was that it cost $25,
instead of $200 - $400, and blew it because our expectations
were too far ahead of the curve. It's an extreme example of
feature-creep. The stuff we put in should have been in our
third or fourth computer product, not the first. By then we
might have had 68000s to work with. We didn't want to hear
that at the time; having upper management that had just
graduated from MIT(ERS) was both a blessing and a curse.
Icarus R Us.
Posted by Ralph at 2:34 AM
I (ralph) am considering building a replica of the ECD Micromind, a high-powered microcomputer designed in the late 1970s.
It did not establish itself in the marketplace, but the designers of the system are still around.
This e-mail exchange comes as a result of my eagerness to build one.
It is interspersed with warnings and words of wisdom from the original designer.
The Micromind relied on unusual system design rules, including asynchronous design.
>I'd love even more to have schematics so I can build a semi-clone.
No, trust me, you wouldn't.
Well, you might like to have schematics, so you could marvel at
them (not necessarily at their cleverness, mind you). But you don't
want to build one.
First, the systems were built on fairly large Augat wire-wrap
boards, which were machine-wrapped, except for two prototypes.
Olin wrapped the first; I'm not sure who did number two. Maybe
Dave A. (I forget how to spell his surname; he was a housemate in
the original ECD location at 232 Broadway, a residence in
Cambridge.) The boards alone cost hundreds each, and there were
three per system.
Each board was populated with something like 150 integrated
circuits. The complexity of the system excluding the 6502 surpassed
what was inside the nominal CPU in its 40-pin DIP.
> I have enough Apple 2 carcasses to get CPUs,
Those parts were not specified as fast enough. We used 4-MHz 6502s,
and the Apple systems ran quite a bit slower than that. The early
Apple 2 ran with a 2-MHz memory system, but ran the CPU at 1 MHz,
with half the cycles dedicated to the display subsystem. The late
models may have been faster, but I think the 6502 had been abandoned
(swapped for the 68000) before DRAM got fast enough.
The processor and the bus were totally asynchronous. That is, the
CPU ran at a variable speed. The bus resembled a DEC Unibus, but we
went through some contortions to avoid infringing the DEC patents.
It used arbiters (asynchronous circuits composed of NAND gates and
Schmitt triggers) in a way that would jam FM radios if not shielded;
it radiated a lot of power at around 100 MHz, but not at any single
frequency. Most of the logic we designed was LS (low-power
Schottky, faster than TTL), but some was S (high-power, much faster
than TTL). The processor board alone had quite a bit of what must
now be unobtainium, including two different kinds of PROMs that may
no longer be made, except maybe the military versions, the DRAMs,
and a one-shot that was (sigh) single-sourced, an AMD part, because
none of the other vendors came close to its performance. (Boy were
we up a creek when they had process problems and couldn't make them
for a while; we had to make a variant design that used a different part.)
We got 4-MHz processors. These could run at 125 ns for each clock
phase. However, the bulk of the work inside the CPU was done during
phase 1, so phase 2 could be 100 ns or less. A one-shot timed
p1, and p2 could vary from 100 ns to several microseconds, depending
on how long it took for the thing addressed to respond. Flat out, it
could run at 4.5 MHz, but that never lasted for long. The memory
subsystem was too slow. It used one-shots also; it always started
as soon as an address was available.
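The arithmetic behind the top speed follows directly from the phase timings given above:

```python
P1_NS = 125       # phase 1, fixed at 125 ns per the text
P2_MIN_NS = 100   # fastest phase 2, when the addressed device responds at once

# Flat out, one full clock is p1 + minimum p2:
max_clock_mhz = 1000.0 / (P1_NS + P2_MIN_NS)
print(round(max_clock_mhz, 2))  # about 4.44 MHz, i.e. the "4.5 MHz" flat-out figure
```

Any slower responder stretches p2 (up to several microseconds), so the effective clock rate floats with whatever the CPU happens to be talking to.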
There was no cache as such, but when the CPU fetched a one-byte
instruction, it incremented the program counter to fetch the next
byte, but then did not increment it on the second cycle, since it
was not until the cycle after that that it fetched the next instruction. In
most 6502 designs, the byte would be read twice. We detected when
a one-byte instruction had been fetched (using an external opcode
lookup, about which more below), and did the second cycle at the
maximum speed, since the memory was not needed by the CPU.
In addition, having the address of the next instruction, we started
the memory during the second cycle of the first, effectively
pre-fetching that next opcode, so its first cycle could also run at
the maximum speed. Sadly, only a small portion of any actual
program is made up of one-byte instructions. Other cycles when the
memory was not needed were also done at full speed, e.g., the
penultimate cycle of a R/M/W instruction. There were a lot of
two-byte/two-clock instructions; I think we averaged out to about 1
Interrupts were detected by a failure of the program counter to
increment following an opcode fetch. This also caused a transition
of the address space from user to supervisor mode, as appropriate
(the mode bit was implemented externally to the 6502).
The 6502 was not specified to do anything reasonable when an
undefined instruction was encountered; we were warned the CPU might
even hang. This was unacceptable, so we looked up each opcode in a
high-speed PROM at the end of each opcode fetch, and when an invalid
opcode was detected, the clock was stopped and a break instruction
substituted on the data bus to implement an invalid opcode fault.
This made the machine safe for running arbitrary code.
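The PROM trick reduces to a 256-entry lookup keyed on the fetched opcode. The sketch below is a toy model of that idea; the set of "valid" opcodes here is a tiny invented subset for illustration, not the real 6502 opcode map:

```python
# Toy model of the opcode-lookup PROM: on every opcode fetch, the byte is
# looked up; for an undefined opcode the clock would be stopped and a BRK
# (0x00) jammed onto the data bus, raising an invalid-opcode fault.

BRK = 0x00
VALID_OPCODES = {0x00, 0x4C, 0x8D, 0xA9, 0xEA}  # illustrative subset only

def fetch_with_trap(opcode):
    """Return the byte the CPU actually sees on the data bus."""
    if opcode in VALID_OPCODES:
        return opcode
    return BRK  # substitute a break -> safe, defined behavior for any code
```

In hardware the "set membership" is just the PROM's one-bit output for that address, sampled at the end of the opcode fetch.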
I think I recall we did some other opcode chicanery to implement
other instructions external to the CPU. (It was 28 years ago; I
don't recall everything equally well.) What I recall is that the
address space translation and other states had to be implemented on
the processor board, and some method of accessing those functions
was needed. Some of those functions *had* to be in the global
address space, so they could be accessed by other processors, but I
think there were some local-access provisions. Ah, one I'm fairly
sure of was extending the one-byte return-from-interrupt instruction
to a two-byte instruction where the second byte contained flags for
the memory mapping subsystem, i.e., whether to go back to user mode.
The Apple 2 used a straight bit-map in main memory for its graphics,
which were consequently low-resolution. They did a clever thing
with the frequencies so they could fake out the chrominance
subsystem in TV sets to give even-lower resolution in color. Plus
that took care of DRAM refresh.
We wanted the performance of the best displays we had seen over in
Tech Square, and color, but RAM was not cheap. The display board by
Jerry Roberts had two memory banks, one for text, and one for fonts.
The text memory was scanned to draw the screen, and used to look up
bitmaps which were then piped out to the display. We could put up
as much text and fancy fonts as the Knight consoles, but far cheaper
with less RAM. The price of RAM has been in free fall since, but in
those days it was not cheap. The width and height of the typefaces
was dynamically programmable.
Graphics were a hack; it's amazing (to me) that Space War worked,
because it relied on dynamically changing part of the character set
to contain the graphics, so only a small fraction of the screen
could actually contain non-background graphics. It took a lot of
logic to do so much with so little memory, but the display load on
the processor board was low, because that logic did so much of
the work. First, separate memory banks meant the precious main
memory bandwidth was not used, and the hardware rasterized the
characters so the feeble CPU didn't have to do that.
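The two-bank scheme can be modeled as a double lookup: scan the text bank for character codes, then use each code plus the current scanline to index the font bank for a row bitmap. The font shapes and dimensions below are invented for illustration; the Micromind's actual character sizes were programmable:

```python
# Toy model of text-bank + font-bank rasterization. 5-pixel-wide glyphs,
# stored one bitmap row per scanline; all values here are illustrative.
FONT = {
    'H': [0b10001, 0b10001, 0b11111, 0b10001, 0b10001],
    'I': [0b11111, 0b00100, 0b00100, 0b00100, 0b11111],
}

def raster_line(text_row, scanline):
    """Produce one scanline of pixels for a row of characters,
    the way the display hardware did it: no CPU pixel-pushing."""
    bits = []
    for ch in text_row:
        row = FONT[ch][scanline]
        bits.extend((row >> (4 - i)) & 1 for i in range(5))
    return bits
```

Changing a glyph in the font bank instantly changes every occurrence on screen, which is exactly the property the Space War hack exploited (and why only a small fraction of the screen could hold distinct graphics).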
The remaining board done by Roy Zito was the I/O board, which became
the catch-all for any system function not on a processor module or a
display module. There could be only one per system. If I recall
correctly, there were 14 I/O interfaces and some housekeeping
functions, including bootstrap (ROM and processor selection and
system reset) and the priority-interrupt scanner. Roy might
remember more precisely; he was unhappy about how the process made
him the garbage-man of the design.
This could probably be done quite handily all in one ASIC today,
but even if the 6502 and RAM were available macros, the rest would
be a redesign. You might get closer using FPGAs, but I don't know
enough about them. I would worry about the asynchrony and timing
elements; it might all become much simpler if redesigned with a
clock, at least if you are restricted to the vocabulary provided by
available programmable parts, or at least you'd have to use
different async design techniques.
> and that 'up to 15' 6502 multiprocessor sounds interesting.
That limit was pretty much dictated by the bus capacitance; the
drivers could not drive any more inputs (and wire) than that. The
global address space was 26 bits, or 64 megabytes, while a
processor board only contained 16 kilobytes (later this may have
been extended when bigger DRAMs became available). We spanned the
1K to 4K to 16K transitions, I think.
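The address-space numbers above check out directly (the 26-bit and 16 KB figures are from the text; "fully populated" assumes all 15 boards carry memory):

```python
GLOBAL_ADDR_BITS = 26        # from the text: 26-bit global address space
PER_BOARD_BYTES = 16 * 1024  # 16 KB of memory per processor board
MAX_BOARDS = 15              # the bus-capacitance limit mentioned above

global_space = 1 << GLOBAL_ADDR_BITS            # addressable bytes
populated = MAX_BOARDS * PER_BOARD_BYTES        # memory on a full bus
print(global_space // (1024 * 1024), populated // 1024)  # -> 64 240
```

So a maxed-out system held 240 KB against a 64 MB address space: plenty of headroom for the bigger DRAMs that arrived later.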
With an additional interface to bridge or network several such
systems, we envisioned up to 1000 processors. We could hardly
advertise such vaporware (we were already farther out on a limb
than we understood, but without even a working prototype, even we
could see the folly of announcing it).
> There are tidbits here and there, but I found my 29-year old issues
> of Kilobaud, with the full-page color ads.
Yes. With the cats. They were pretty, and very slick.
.... additional details available from the author
email response from JSL
Posted by Ralph at 2:30 AM
This blog will expand to hold documentation on the ECD Micromind.
For its time (the mid-to-late 1970s), the Micromind had very innovative
features, and benefited from the EE design experience available at MIT.
Even Allen Baum, who designed the Apple ][ along with Steve Wozniak, has
commented approvingly of the design.
Posted by Ralph at 2:23 AM