
Lynn's Blog

Welcome to Lynn's Blog.

A New Revision to the TK89C668

Tekmos is pleased to announce a revision to the popular TK89C668 microcontroller. Based on the 8051 architecture, this 5-volt microcontroller offers 64K of program storage, an I2C controller, and 8K of XRAM.

The TK89C668 was originally made at Plessey Semiconductor. Plessey made the decision to discontinue foundry services, and so the part was switched to the 0.35u fabrication line at the X-Fab Dresden facility. Plessey had been using a similar X-Fab process in their factory, resulting in minimal process differences between the two foundries.

Since we were forced to retool at X-Fab, we took advantage of the situation and corrected all known errata on the part. We also improved the RAM design by replacing the traditional RAM with one made out of latches, which has superior voltage and temperature operating ranges.

The new part has passed our qualification, and we are ramping up production. For further information, contact Tekmos.

The TK89C668 joins our TK80C186EB, TK87C751, and TK68HC711D3 families that have already been requalified at X-Fab. Our TK80C51 is currently being requalified, with an expected release this summer. That qualification will also result in the introduction of TK87C51 programmable versions of the 80C51.


Part V Design of Dual Port RAM

In this final article about RAM design, I am going to discuss the timing necessary to make the RAM work.

Between memory cycles, the RAM is in a precharge mode. All bit lines are being held at Vdd. All word lines are off. And the data latches are holding the last value that was read or written.

The cycle begins when a chip enable signal is clocked into a flop. In a synchronous RAM, the clock is the RAM clock. In an asynchronous RAM, the clock is derived from a change in the address lines. The rest of the RAM timing is derived asynchronously from the output of this flop, meaning that the end of one step triggers the next step. Here is the sequence of RAM timing (a rough code sketch follows the list):

  1. Turn off the precharge circuits and put the data out latches into a read mode.
  2. Turn on the word lines.
  3. Delay for enough time for the read data to be present on the bit lines.
  4. Enable the sense amps.
  5. Latch the data into the output latches.
  6. Turn off the sense amps.
  7. Turn off the word lines.
  8. Enable the precharge circuits.
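
To make the hand-off between steps concrete, here is a minimal Python sketch of the event chain. The class, method names, and trace strings are just illustrative stand-ins for the analog circuitry; they are not actual design code.

```python
class RamReadModel:
    """Toy model of the self-timed read sequence.

    In silicon, the completion of one step (a signal edge) asynchronously
    triggers the next. Here each step just appends to a trace so the order
    is visible. Names and the stored word are illustrative only.
    """

    def __init__(self, stored_word):
        self.stored_word = stored_word
        self.trace = []

    def read_cycle(self):
        self.trace.append("1. precharge off, output latches in read mode")
        self.trace.append("2. word line on; the selected row drives the bit lines")
        self.trace.append("3. wait for enough differential voltage on the bit lines")
        self.trace.append("4. sense amps enabled")
        data = self.stored_word
        self.trace.append("5. data latched into the output latches")
        self.trace.append("6. sense amps off")
        self.trace.append("7. word line off")
        self.trace.append("8. precharge on; back to the resting state")
        return data


ram = RamReadModel(stored_word=0x5A)
print(hex(ram.read_cycle()))
print("\n".join(ram.trace))
```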


Item 3, the read delay, is the difficult step. The time needed for the bit cells to discharge the bit lines is the major component of the memory access time. The delay must be just long enough for the bit lines to develop enough differential voltage for the sense amps to detect. If it is too long, the memory access time is unnecessarily increased. If it is too short, the data cannot be read reliably and the memory will fail. The trick is to create a delay of just the right length, plus a small amount of margin.

One way to do this is to add one additional bit line to the array, with the bit cells connected to that line hard-wired to zero. Since the extra bit line is part of the array, its timing exactly duplicates the timing of the other bit lines. This dummy bit line is fed into a ratioed inverter whose threshold is set just beyond the sensitivity of the sense amp. The buffers on the output of the inverter provide additional margin.
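
Here is a small, purely illustrative calculation of that margin. It assumes the dummy bit line discharges at the same rate as the real bit lines (since it is part of the array); the voltages, discharge rate, and buffer delay are made-up numbers, not characterized values.

```python
# All numbers here are assumed for illustration; none are measured data.
VDD = 5.0                     # volts; precharged bit line level
DISCHARGE_RATE = 0.05         # volts/ns; same for real and dummy lines
SENSE_AMP_SENSITIVITY = 0.10  # volts of differential the sense amps need
INVERTER_TRIP = VDD - 0.25    # ratioed inverter trips after a 0.25 V drop
BUFFER_DELAY = 1.0            # ns of extra margin from the output buffers

# Time for the real bit lines to develop enough differential for the sense amps.
t_data_ready = SENSE_AMP_SENSITIVITY / DISCHARGE_RATE

# Time for the dummy bit line to pull the ratioed inverter past its threshold
# and propagate through the buffers, producing the sense amp enable.
t_sense_enable = (VDD - INVERTER_TRIP) / DISCHARGE_RATE + BUFFER_DELAY

print(f"data ready after   {t_data_ready:.1f} ns")
print(f"sense enable after {t_sense_enable:.1f} ns")
print(f"timing margin      {t_sense_enable - t_data_ready:.1f} ns")
```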

Setting the inverter threshold is a classic engineering tradeoff between speed and margin. While pure RAM designers go for speed, there are other considerations for ASIC RAM designers. This RAM is going into an embedded array that will be re-used in different applications. As a result, I must assume a wider temperature range. Instead of the commercial -40C to +85C, or the military -55C to +125C, I need it to work from -65C to +150C. Analog is always the weak link in a design, and I can't have the RAM be the first thing to fail. The design must also work over a wider supply range. The ASIC may be powered off a battery, which can push the minimum voltage down to the order of a volt. The easiest way to achieve these goals is to adjust the inverter threshold to give more margin at the expense of speed.

I can also build multiple inverters and allow metal programming options to select the inverter ratio. This lets me vary the speed-versus-margin tradeoff on a customer-by-customer basis.

And that is how you build a dual-port block RAM.


Altera FPGA Conversions

Altera announced the discontinuation of many of their FPGA lines last December. This includes the MAX, Flex, and early Acex and Apex families. Customers are given the opportunity to make a last time buy of these products. And for many customers, this is the best choice.

But for some customers, it is not the best choice. Making a last time buy means that you have a reasonable understanding of your future requirements, and you are willing to tie up capital to buy a multi-year supply of these products. If your products are continuing to do well, or even growing in volume, then it is both difficult and risky to forecast the entire future demand, and an FPGA conversion should be considered.

The FPGA conversion will generally cost a lot less than the original FPGAs. But conversions also have an associated NRE. This produces a breakeven volume point where the unit savings equal the NRE charges. If this point is reached in 6 to 9 months, a conversion is economical. If it takes longer, then the conversion must be justified by other means, such as strategic availability.
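
As a back-of-the-envelope illustration, the breakeven point works out like this. The prices, NRE, and volume below are hypothetical numbers, not a quote.

```python
# Hypothetical figures for illustration only.
nre = 150_000.0                # one-time conversion charge, dollars
fpga_unit_price = 60.0         # price of the discontinued FPGA, dollars
conversion_unit_price = 20.0   # price of the converted part, dollars
monthly_volume = 1_500         # units consumed per month

unit_savings = fpga_unit_price - conversion_unit_price
breakeven_units = nre / unit_savings
months_to_breakeven = breakeven_units / monthly_volume

print(f"breakeven at {breakeven_units:,.0f} units, "
      f"about {months_to_breakeven:.1f} months")
# If the breakeven lands inside the 6 to 9 month window, the conversion pays
# for itself on unit cost alone; otherwise it needs another justification.
```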

Tekmos has developed several methods of reducing the NREs, making it possible to lower the breakeven point. The first is what we call a merge. In a merge, we combine multiple designs on a single piece of silicon. Each individual design is activated through a bond option during assembly. This allows all the designs to share the cost of a mask set and a wafer run, which are the main components of an NRE charge. Of course, each design must still be individually converted, assembled, and tested. Still, the NRE charge per part is much less than it would be if each part were converted individually.

Merges have another advantage. By combining multiple designs on a single die, the volume for that die is the combined total for all the devices in the merge. Wafer fabs will run quarter or half lots, but there is a financial penalty in doing so. A merge may allow a single order covering multiple parts to obtain better wafer pricing.

Another way to reduce costs is to use a more advanced technology for the FPGA conversion. For 180nm and up, all the NRE charges are equivalent. In many cases, the operating voltage will determine the technology. As a very rough rule of thumb, the technology node in microns times 10 gives the maximum operating voltage; a 0.35u process, for example, supports roughly 3.5 volts. The voltage requirement sets the minimum technology, which sets the minimum die size, and that sets the manufacturing costs.

Tekmos can get around this limitation by using an on-chip voltage regulator to produce the lower core voltages while allowing the I/O voltages to be the same as the original FPGA. Since the newer technologies have significant density advantages, this can result in a smaller die size, and thus a lower cost.

So, if you get an EOL notice from your FPGA vendor, remember that you do have options, and that Tekmos can help you realize those options. Contact us today for more information.


Tekmos Announces a New Release of the TK68HC711D3 and TK68HC11D0 Microcontrollers

Tekmos has announced the qualification and release of two of our microcontrollers, the TK68HC711D3 and TK68HC11D0. The micros were originally made at our foundry provider, Plessey Semiconductor. Plessey closed their 0.35u fab, and the designs were transferred to the X-Fab foundry located in Dresden, Germany. Originally, each design had its own mask set, reflecting the fact that the parts were designed at separate times. When the designs were transferred to Dresden, they were merged onto a common die. The new die was designed to support an optional Flash memory, which can be enabled through bond options.

There are no changes to the operation of either circuit, and each chip remains a drop-in replacement for the original NXP parts.

Having a drop-in replacement has proven to be a very cost-effective way to extend the life of products when the original component manufacturer discontinues a part. The availability of a drop-in replacement eliminates the tough decision of whether to redesign a printed circuit board or discontinue a product.

Tekmos continues to be the "go to" supplier when there are problems finding obsolete parts or when additional parts are needed after the date for EOL (End of Life) purchase has passed. Tekmos makes a variety of microcontrollers, microprocessors, and other miscellaneous standard products to satisfy these needs. Tekmos also continues to make custom ASIC replacement parts.

Customers are aware that buying from Tekmos ensures pin-for-pin, drop-in replacements that can be counted on to work in their applications, without worrying about the quality of parts purchased on the grey market.


Part IV Design of Dual Port RAM

The Sense Amps

This is the fourth in a series of articles on the design of a dual port RAM.

When a memory is read, the word line enables a row of bits. Each bit is connected to two bit lines and, depending on the state of the bit, will pull one or the other to ground. The bits are weak, and there is a lot of capacitance on the bit lines, so it takes a long time to drive a bit line to zero.

A sense amp is an analog circuit designed to sense very small differences between the two bit lines, determining which line is going to ground and which is remaining at the supply. The sense amp's sensitivity is key to the memory access time, since the faster the difference can be detected, the faster the data is available.

The sense amps' circuitry also provides a bit line precharge, bit line multiplexers, output multiplexers, and the write circuitry.

The Precharge

Before the memory can be read, the bit lines must be precharged to Vdd. This is accomplished by relatively large transistors that short the lines to Vdd. In addition to the two bit line transistors, there is a third transistor that shorts the bit lines together. This ensures that the bit lines start out at the same value prior to the beginning of the read cycle.

Instead of having a precharge phase at the beginning of the read cycle, precharge is designed to occur in the resting state, which the RAM enters at the end of a read cycle. Using large transistors means that the circuit is quickly precharged and ready for the next read cycle.

Bit Line Multiplexers

While every bit line needs a precharge circuit, not every line needs a sense amp. Giving each line its own sense amp would waste power, and besides, there is not enough room for that many sense amps.

Memories are organized in roughly square arrays. Our design is organized as 256 rows by 72 bits. We use a two-to-one multiplexer to connect two bit line pairs (four bit lines) to a single sense amp. This memory is unusual in that we have word widths ranging from 1 to 36 bits. Had this been a x9 memory, we would have used an eight-to-one multiplexer. To prevent errors in the read data, a precharge circuit is added to every internal node in the multiplexer.
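
The multiplexer ratio falls straight out of the array geometry. A quick sanity check, counting each bit line pair as one logical column:

```python
# 72 logical columns (bit line pairs); the multiplexer ratio is set by the
# widest word the sense amps must deliver in a single read.
COLUMNS = 72

def mux_ratio(bits_per_read):
    assert COLUMNS % bits_per_read == 0, "width must divide the array evenly"
    return COLUMNS // bits_per_read

print(mux_ratio(36))  # 2 -> the two-to-one multiplexer used in this design
print(mux_ratio(9))   # 8 -> an x9 memory would need an eight-to-one multiplexer
```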

The Sense Amp

The sense amp itself is just a cross-coupled inverter pair that is powered through a sample clock. Transmission gates connect the inverters to the outputs of the bit line multiplexers. A small difference in the bit line voltage levels is enough to cause the sense amp to settle high or low when the sample clock occurs.

My favorite analogy for a sense amp is to consider a ball that is perfectly balanced on a hump. Just the slightest difference in the bit line voltages is enough to cause the ball to fall on one side or the other.

To make the sense amp work correctly, it is very important that the layout be perfectly symmetrical between the two bit line sides. Any difference will tend to influence which way the sense amp switches. This can be compensated for by delaying the sample clock to allow more margin (i.e., more voltage difference) on the bit lines. Of course, this comes at the cost of increasing the RAM access time.
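
A toy numerical model shows both the regeneration and why symmetry matters. The gain, voltages, and offset below are invented values just to illustrate the behavior.

```python
VDD = 5.0     # volts
GAIN = 2.0    # regenerative gain per time step once the amp is enabled

def resolve(v_bit, v_bit_bar, layout_offset=0.0, steps=20):
    """Amplify the initial differential until it hits a rail.

    layout_offset models an asymmetry between the two sides: a positive
    offset pushes the amp toward reading a 1 even when the cell holds a 0.
    """
    diff = (v_bit - v_bit_bar) + layout_offset
    for _ in range(steps):
        diff = max(-VDD, min(VDD, diff * GAIN))  # feedback, clipped at the rails
    return 1 if diff > 0 else 0

# A 50 mV differential resolves to the correct value...
print(resolve(4.95, 5.00))                      # 0
# ...but a 60 mV imbalance overpowers it, which is why the layout must be
# symmetrical, or the sample clock delayed to build more bit line margin.
print(resolve(4.95, 5.00, layout_offset=0.06))  # 1
```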

Output Multiplexers and Drivers

Once the sense amps have determined the data, it is stored in output latches. We store 36 bits every read cycle. Then another set of muxes steps the data down to the programmed data width.
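
One plausible way to picture the step-down, assuming low-order address bits select which field of the 36-bit latched word is presented (the actual mux arrangement may differ):

```python
def select_field(latched_word, width, field_index):
    """Return one 'width'-bit field of the 36-bit word captured on a read."""
    assert 1 <= width <= 36 and 36 % width == 0
    assert 0 <= field_index < 36 // width
    shift = field_index * width
    return (latched_word >> shift) & ((1 << width) - 1)

word = 0x123456789                       # a 36-bit value latched in one read cycle
print(hex(select_field(word, 36, 0)))    # full width: 0x123456789
print(hex(select_field(word, 9, 0)))     # x9 configuration, first field: 0x189
print(hex(select_field(word, 9, 3)))     # x9 configuration, last field:  0x24
```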

Writing Data into the Bits

In comparison to reading, the write cycles are trivial. All that is required is to force a zero onto one of the two bit lines. When the word line is selected, the data is written into the bits. Because the width of the memory is programmable, we might have to write to a single bit. We accomplish this by simply not writing to the other bit lines. Keeping both bit lines high looks like a read cycle as far as those bits are concerned and does not disturb their data.
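
As a sketch of what the write drivers do for a single column (the polarity convention here is assumed; True stands for pulling a line low, False for leaving it at the precharged level):

```python
def drive_bit_lines(write_enabled, data_bit):
    """Return (pull_bit_low, pull_bit_bar_low) for one column.

    Writing forces a zero onto one of the two bit lines. A column that is not
    being written keeps both lines high, which the cell sees as a read and
    which leaves its stored data undisturbed.
    """
    if not write_enabled:
        return (False, False)   # both lines stay precharged: no disturb
    if data_bit:
        return (False, True)    # write a 1: pull bit_bar low (assumed polarity)
    return (True, False)        # write a 0: pull bit low

print(drive_bit_lines(True, 1))   # (False, True)
print(drive_bit_lines(True, 0))   # (True, False)
print(drive_bit_lines(False, 1))  # (False, False) -> effectively a read
```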

Next Time - Self Timing

For the RAM to work correctly, the sample clock must be positioned at exactly the right point to optimize the tradeoff between signal size and RAM access time. I will explain self-timed RAMs next month.
