- a fun quote of M.V.Wilkes, comments by Rick Dill
- metastability and synchronizer problem by Robert Garner
- metastability & 1401 & TAU by Bob Feretich
a fun quote of M.V.Wilkes, comments by Rick Dill
A fun quote:

> " ... And M.V. Wilkes once said to me something like
> 'A digital circuit is like a tame animal,
> the analogue circuit is a wild animal.
> Every so often the tame animal reverts to the wild' :-) "

from firstname.lastname@example.org, Mon, May 23, 2011 10:49 pm
Message: 12
From: email@example.com (Tony Duell)
Subject: Re: Oscilloscope Recommendation
I grew up in the analog world. Nonlinearity was around, but weird. You could get it graphically from vacuum tube curves, and it was there for diodes and galena crystal detectors. It was used for detectors and for frequency multipliers. It took me quite a while to learn to deal with the non-linearity, which seemed more complicated but actually makes things simpler.
Non-linearity is what makes logic circuits have essentially the same waveforms for input and output. It is what defines a "1" or a "0" and the indeterminate space in between. The sharpness of the non-linearity is typically set by the thermal voltage kT/q, which is about 26 millivolts at room temperature. Having started with wanting circuits to be well above that level, it has been instructive for me to watch FET operating voltages drop to single-volt levels (still large enough to avoid thermal noise issues). Of course there are a number of good reasons for those low voltages, in terms of hot electrons, tunneling damage to oxides, etc.
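For reference, the thermal voltage kT/q mentioned above works out as follows (standard physical constants only; nothing here is specific to any particular device):

```python
# Thermal voltage kT/q at room temperature (T taken as 300 K).
k = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19   # elementary charge, C
T = 300.0          # room temperature, K

vt = k * T / q     # thermal voltage in volts, ~0.026 V
print(f"kT/q at {T:.0f} K = {vt * 1e3:.1f} mV")
```

This is the scale over which an exponential junction characteristic changes appreciably, which is why logic swings were historically kept many multiples above it.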
It is really easy to drop down to the Boolean level and think about logic as simple 1's and 0's, and I guess that is tame. When you talk about harmonic distortion in analog, you are bringing non-linearity in, and it gets pretty wild to think about.
Still, when we get off the logic chip and into memory or communications, the world is wildly different. How many electrons do you need to tell a 1 from a 0 on a memory capacitor? For optical sensors, particularly cooled ones, that number approaches unity. For memory it used to be more like 100,000 for radiation hardness, but that may have changed in the past couple of decades. For communications, it is all about how many points you can use in amplitude/phase space with some acceptable level of correctable errors. Starting with 56K modems, communications became adaptive, with the two ends testing the transmission path for noise, delay, echoes, etc., and then settling on a transmission rate, individual for each direction. For some wireless systems there may be little enough feedback from the receiver that assumptions have to be made about the transmission path, but that is done in a very non-optimal fashion.
asynchronous clocks and uncertainty, by Robert Garner
with Chaney & Molnar paper
Because our dear 1401 was designed before the synchronizer metastability problem (or the lack of synchronizers) was recognized, I assume it (and other computers of its vintage) could occasionally corrupt data due to a synchronization mismatch between a peripheral and the main CPU clock. Is our most likely candidate the tape bit stream being synchronized somewhere with the CPU memory write circuits?
Probability of occurrence is likely extremely low. There are standard equations that predict the probability of a metastable FF output of a certain duration (clock period), based on the input signal rate, clock rate, and gain of the sampling circuit (FF).
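The "standard equations" mentioned above are usually written as MTBF = e^(t_r/tau) / (T0 · f_clk · f_data). A minimal sketch of that calculation, with every parameter value made up purely for illustration (none of these are measured 1401 figures):

```python
import math

def metastability_mtbf(t_r, tau, T0, f_clk, f_data):
    """Mean time between metastability failures, in seconds.

    t_r    : resolution time allowed before the output is used (s)
    tau    : regeneration time constant of the flip-flop (s)
    T0     : metastability window of the flip-flop (s)
    f_clk  : sampling clock frequency (Hz)
    f_data : asynchronous data transition rate (Hz)
    """
    return math.exp(t_r / tau) / (T0 * f_clk * f_data)

# Illustrative (invented) parameters only:
mtbf = metastability_mtbf(t_r=50e-9, tau=2e-9, T0=1e-10,
                          f_clk=1e6, f_data=1e3)
print(f"MTBF ~ {mtbf:.3g} s")
```

The exponential dependence on t_r/tau is the key point: a little extra settling time, or a little more gain (smaller tau), makes failures astronomically rarer, which is consistent with the observation that high-gain logic families suffered less.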
The 1402 writes directly into core memory without any clocks (the only sync point would be a signal indicating that a single card had been written into memory).
I wonder if the 360 circuits had synchronizers?
p.s. Here's the famous Chaney & Molnar paper I recall from the early '70s.
(I met Charlie Molnar during my Sun Labs days under Bert & Ivan Sutherland.)
metastability & 1401 & TAU by Bob Feretich
Metastability is less of a problem when the gain through a logic gate is high. It was a very serious problem for ECL logic, which had gains close to 1: the indeterminate state would propagate through several levels of logic, and metastability in the feedback path of a trigger would take longer to resolve. So I would think that the ECL machines at the high end of the 360 family had to deal seriously with it. By the time the 303x series was designed, metastability was a well-known problem.
The 1401 TAU is nearly completely synchronous. The only async part that I am aware of is the detection of the beginning of a new tape record. All of the bits of the data-in bus from the tape drive are ORed together. The output of this OR gate sets a trigger that starts a high-frequency counter. When the count reaches a specific value (chosen to fall in the middle of the recorded bit), the data-in bus is sampled. It's not clear to me how metastability at this trigger would impact the counter. Throwing its count off by one would only cause errors if the tape was recorded under severe skew conditions. However, it is possible that a non-one/non-zero logic state in this trigger could cause only some counter bits to count.
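The record-detection scheme just described can be sketched in a few lines. The bus width, counter target, and tick granularity here are illustrative assumptions, not 1401 specifics:

```python
# Sketch of the TAU's mid-bit sampling scheme (assumed behavior).
MID_BIT_COUNT = 8  # counter value chosen to land mid-bit (illustrative)

def sample_mid_bit(data_in_stream):
    """data_in_stream: sequence of 7-bit tuples from the tape drive,
    one per high-frequency counter tick. Returns the bus value sampled
    at mid-bit, or None if no record start was seen."""
    counter = None
    for bus in data_in_stream:
        if counter is None and any(bus):
            counter = 0                  # OR of all data-in lines sets the
                                         # trigger, starting the counter
        if counter is not None:
            if counter == MID_BIT_COUNT:
                return bus               # middle of the recorded bit: sample
            counter += 1
    return None

# A record whose first bit arrives at tick 3 and stays asserted:
stream = [(0,) * 7] * 3 + [(1, 0, 1, 1, 0, 0, 1)] * 20
print(sample_mid_bit(stream))
```

A metastable trigger in the real hardware would correspond to `counter` starting a tick early or late, which, as noted above, only matters under severe skew.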
Overall, because the logic technology of the 1401 was less susceptible to metastability (high gain), because only one circuit in the TAU is sensitive to it, and because the window where metastability could impact the TAU was limited (once per record read), I believe that the probability of a metastability-induced tape read error was small compared to other possible sources of read errors. (Food particles from the computer operator's lunch on the tape surface were probably a more frequent error source. ;-) )
Also, read errors were easily detected through vertical and longitudinal parity checking. Read errors occurred frequently enough that good programming practice required coding tape-backspace and read-retry logic.
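The backspace-and-retry discipline might look like this in outline. The tape object and its methods (`read_record`, `parity_ok`, `backspace`) are hypothetical stand-ins for the actual 1401 I/O operations, and the retry limit is an arbitrary choice:

```python
# Sketch of tape-backspace and read-retry logic (hypothetical interface).
MAX_RETRIES = 10

def read_with_retry(tape):
    """Read one record, retrying after parity failures."""
    for attempt in range(MAX_RETRIES):
        record = tape.read_record()
        if tape.parity_ok():      # vertical + longitudinal parity checks
            return record
        tape.backspace()          # reposition ahead of the bad record
    raise IOError("permanent read error after retries")
```

The point is simply that transient errors (dust, skew, the occasional metastable sample) wash out after a backspace and reread, while a genuinely damaged spot on the tape surfaces as a permanent error.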
I believe that metastability probably caused some tape read errors on the 1401, but the percentage of read errors it caused was probably small enough that metastability was never identified as a problem source.