Saturday, 4 June 2016

Selling my HP1652B

Edit: It sold, within minutes. Good to know it's going to a good home where it'll be appreciated.

I really can't justify hanging on to two logic analysers, so I'm selling my HP 1652B.

This is the machine that I used to do most of the debugging to date on the Microbee Compact Flash coreboard.

Some of the specs of the machine:

  • 2 channel 100 MHz, 400 Msps digitising oscilloscope.
  • 80 channel 100 MHz timing, 35 MHz state logic analyser.
  • 1024 sample acquisition memory depth per channel.

I'm selling it with:

  • A full set of five HP 01650-61607 woven probe cables (these are the bits that I couldn't find on ebay initially)
  • Two HP 01650-61608 16 channel probe tip assemblies (allowing use of 32 channels) - more are easily available on ebay.
  • One HP10430A miniature 500 MHz 10:1 oscilloscope probe, with all accessories & booklet.
  • All disks (system disk plus self-test disk, plus backups).

I've rebuilt the power supply on this machine with brand new Panasonic FC low impedance caps. It's in really excellent condition, with a bright, clear display and no burn-in.

I paid $550 for this logic analyser on ebay, plus more for the additional probe tip assemblies and woven probe cables, plus buying capacitors to rebuild the power supply.

I'm asking for $350 + postage. If there's no interest here or on MSPP, then I'll list it on ebay.

Some photos:

A front view showing the machine booting and having passed power-on self tests:

A back view showing the condition of the case:

All the accessories:

Close up of the accessories. The HP10430A by itself goes for ~$100 on ebay, as do the probe tip assemblies. The woven probe leads are really hard to get.

Running as a digitising oscilloscope. I confess I'm no fan of HP digitising oscilloscopes, but that's purely down to UI preference and not the capabilities of the instrument:

And running as a logic analyser (being used for real debugging on my bee):

Tuesday, 17 May 2016


I bought some fast 20 MHz CMOS Z-80s on ebay recently. They arrived yesterday and my hubby did an unboxing for me, as I'm out bush until the weekend.

The paint that they'd applied to them before remarking them came off when he removed the packing tape...

Something tells me these chips might not be the real deal.

Edit: So now I'm home for the weekend, some close-up photos:

Firstly, a view showing the black paint they've applied to the top of the chips:

Some of that black paint has come off on the packing (film, not tape as I originally thought). I wiped the remaining paint off a couple of them with a cotton bud soaked in isopropyl alcohol. They clearly use it to give the sanded, remarked chips a more convincing black finish, rather than the grey look that sanding leaves. The photo below shows a couple of chips where I've removed the paint, plus a couple of untouched ones. The bit in the top left corner of the second chip from the bottom isn't a spot of paint I've missed; it's a bit of the original chip surface that they missed when sanding off the markings, and it shows what the original chip would have looked like. Note the absence of a mould release divot on the right side of this chip. Looks like they sanded that feature off.

The labelling looks really very good. I presume they use the same equipment to mark their knockoff chips as Zilog does. One tell-tale is that on real Zilog chips, the "i" is lower-case. It's capitalised on the counterfeits. Here's an 8 MHz genuine chip for comparison:

Alas I don't have a computer that I don't care about to try these chips in to see what they are. I don't even know that they're Z-80s, and I'm not going to risk damaging a loved vintage computer finding out.

After putting up a negative review on ebay, the seller was very quick to send a wheedling email asking me to remove the feedback. I think rather than that I'll just amend the negative feedback so it points to here, as this sort of thing is just not on.

Meanwhile, Mogget's found a new perch to watch my messing about from. The observant will note that _both_ my logic analysers are working happily now :)

Looking around the web, it appears that fake Z-80s are really depressingly common.

Monday, 9 May 2016

On logic analysers

I confess I've got a bit of a soft spot for old HP logic analysers. As a trainee techo, before going off and getting an engineering degree and doing lots of RF & microwave silliness, I spent a couple of years doing digital stuff for defence. That involved Xilinx chips on ISA and EISA boards, plus a bit of 8751 development. I had a HP1650 on my bench which I jealously guarded. A lovely bit of kit.

For the last decade or so I haven't worked much in the digital domain, so I only have occasional need for logic analysers. Most of my needs are met by Mixed Signal Oscilloscopes (MSOs), which have a small number of digital channels (typically 16 or so). I've also used an Intronix "Logicport", which I truly hate, as I need a windows PC to use it.

So for home use I wanted a logic analyser. I thought perhaps one of the Logicport or similar units might suffice, but rapidly talked myself out of the idea. The widest they typically go is 34 channels, which simply isn't enough to properly probe even an eight-bit CPU (a Z-80 alone has 16 address lines, 8 data lines, and a dozen-odd control signals). They also need windows. I'm no fan of windows, and hate test gear that's dependent on software running on PCs to work, as PCs have a very short shelf life before they're on the junk heap, and I'd like my test gear to work for just a little longer than that thanks.

Feeling nostalgic I jumped onto ebay and looked up the HP1650. Having used these before, I had no doubt they were plenty capable. I don't particularly care that they don't connect to network or USB. I have a digital camera to take screen grabs, same as I do for my CRO. I was expecting them to be thousands of dollars. They weren't. They were typically just a few hundred, much cheaper than an equivalent CRO. I suspect people just don't have a good understanding of logic analysers, which has killed their present value. So I bought a HP1652B, which has 80 channels, plus a two channel CRO (which as it turns out is a pretty useless addition - Tektronix make much better CROs).

That arrived with a single solitary probe lead (16 probes), which was something of a pain, so I had to buy more bits before it was really useable. It appears the actual probes are common as muck on ebay (they're used even on quite new logic analysers), but the leads, not so much. I found a pair for an exorbitant sum and have been making good use of my now 48 channel logic analyser on the compact flash interface.

After using it a few weeks it started randomly rebooting - some quick google-fu told me the power supply caps are notorious for drying out and shorting. So I pulled the power supply, found about half the 27 caps needed to re-cap it in my junk box and ordered the rest from Mouser. A week or so later I rebuilt the supply and it's been running happily ever since.

You'll note it's featured prominently in the compact flash development and has really proven its worth. Bloody wonderful piece of kit.

Since then I've kept a lookout for probe leads, so I could expand my machine to the full 80 channels. Alas none have come up. I even saw a HP 1650B on ebay with ten leads, and asked the seller if they'd separate the leads, but got nowhere.

Eventually I was looking around and saw a dead HP1660A, which is a newer analyser with a whopping 136 channels, for real cheap (less than I'd paid for just a pair of leads). It had a full set of (different) leads, but no probes. The probes are common across the units. I wondered if perhaps I could buy that, get it going by stealing bits from my 1652B, and end up with a better analyser (4K memory depth per probe (8K in 72 channel mode) vs 1K for the 1652B). This would actually be nice, as the usual sector grab from CF is 512 bytes, plus half a dozen or so config/status accesses. If I trigger on the read or write sector command, then that's half way through my 1K memory, and I miss the last half dozen bytes at the end. It was cheap enough that even if I couldn't get it working I could sell the parts and profit.
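
To put some numbers on that memory-depth gripe, here's a back-of-envelope sum in Python. The centre trigger position and the count of status accesses are my assumptions, not measured figures:

# Sample budget for capturing one CF sector read on the 1652B,
# assuming one analyser sample per bus access and the trigger
# positioned at the centre of acquisition memory.
depth = 1024                  # 1652B samples per channel
post_trigger = depth // 2     # centre trigger leaves 512 samples after the command
needed = 512 + 6              # a 512 byte sector plus ~6 status/config accesses
print(needed - post_trigger)  # ~6 accesses fall off the end of the buffer

With the 1660A's 4K depth the whole transfer fits with plenty of room to spare.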

So I pulled the trigger on the dead 1660A. It arrived today, and I've been playing with it ever since. Firstly, it was really badly packed, and a couple of the rear feet broke off. That necessitated some TLC to fix things, plus a little panel-beating to fix the aluminium case where it'd bent around the feet. And it was really, really filthy. Here's a photo showing the damage under one of the feet:

And the process of repairing it. That's boatbuilding epoxy, that is:

While it had the cover off I snapped a photo of the interior. Cop that awesome 68EC020! 32 bits of raw power. The 1652B has a measly 68000.

I swapped the voltage selector to 240V and plugged it in, and got lots of bad smell and no power. Oh well. The bad smell was coming from the power supply. The power supply is part no. 0950-2261 (HP has the service manual online - how cool is that!). The part number for my 1652B supply is 0950-1879. I wondered if it was perhaps just an earlier rev of the same thing...

Sure enough, careful perusal of the service doco for both the 1652B and 1660A shows the power supply as having all the same I/O pinouts and voltages. The difference, as it turns out, is the cover. The 1652B supply cover has a mount for a fat 720K floppy drive, and the newer 1660A supply has a cover for a more modern 1.44M skinny floppy drive. So I plugged my good supply in and turned it on, and it booted happily:

As with the 1652B, I'm confident the problems with the supply are down to dead caps. So, now I have a small stack of caps to replace, then I'll have two wonderful working vintage logic analysers. Here's both my logic analysers on the bench, ready for shenanigans. At present there's no supply in the 1652, as it's in the 1660A, with the 1660A lid on it:

The 1660A is much nicer to use than the 1652B. It has a whole keyboard on it, so entering labels is much faster and requires considerably less knob twiddling. It has a page up and page down button too, which makes navigating long lists of data heaps quicker than using the knob on the 1652. The floppies are even standard DOS format 1.44MB ones, so I can read them on my mac (with a USB floppy drive). Otherwise it's a proper bit of HP kit - intuitive in use and very powerful.

Oh, and the really hilarious bit: In the pouch, along with the eight 1660 probe leads, are three 1650 probe leads. What they were doing there I have no idea, but I most definitely have a use for them :)

Sunday, 17 April 2016

A Compact Flash Interface for the Microbee

One of the issues with my Microbee, and probably an issue that plagues many owners of very old microcomputers, is that of getting programs and files on and off. Old computers use floppy disks, or cassettes, which simply aren't supported on modern systems.

I've already covered some of the ins and outs of getting cassette data on and off a bee using an ordinary soundcard on a modern computer, resulting in being able to play games on a "cassette" bee. The next logical step is disks.

Microbees are able to run CP/M, a predecessor to DOS, which allows you to read and write floppy disks (and even hard disks), and run quite a lot of off-the-shelf software, like WordStar.

Some of my bees have floppy disk controllers, and are thus able to run CP/M. I have a couple of 3.5" double density disk drives, plus a box of blank disks and a single solitary CP/M "system" disk. My system disk is a bit iffy. I've been unable to make other system disks from it as I believe the "setsys" program is broken.

As with tapes, there is a cornucopia of MicroBee CP/M software on the internet, most notably at the Microbee Software Preservation Project. The issue is getting the software off the internet and into the Microbee. Disk drives are expensive, physically large, and fragile. The disks they take are getting increasingly difficult to get. There are, however, other ways to store disk data.

The obvious one is Compact Flash. Compact Flash cards are still widely available, due to their popularity with high-end cameras. They have an "IDE compatibility" mode, whereby they pretend to be a hard disk. Late in the life of the Microbee, a hard disk model was produced, so there's a "BIOS" available for hard disks. There's been quite a bit of activity on the Bee Board (a predecessor to the MSPP) and the MSPP in getting Compact Flash cards to work, with reasonably mature software, thanks to the efforts of Kalvis. I also built some hardware about ten years ago, which never really made it to prime-time, as it was very flaky.

So early this year I figured I'd resurrect these efforts, with the goal of making an accessible "coreboard" that could replace the memory board on most-any Microbee and allow the machine to boot CP/M from Compact Flash. The Compact Flash card could then be read and written to on a PC, facilitating easy file transfer, both to and from the bee, and making a machine that's straightforward to play with and nicely self-contained.

There are a bunch of different Compact Flash interfaces already built for 8 bit systems. I've based mine on "GIDE", by Tilmann Reh. GIDE is supposed to go in the socket for your Z-80 CPU, allowing pretty-much any system with a Z-80 to have an IDE interface.

A google search on Compact Flash interfaces for 8 bit systems will show the degree of frustration that people have in getting them to work. Some cards work beautifully. Others just refuse to read, or write. I dove straight into these frustrations and I think I've worked a lot of the issues out.

Anyway, firstly, the hardware. My IDE interface uses a pair of 74HC646 registered transceivers, as per GIDE. These chips allow us to latch data going to and from the CF card, such that we can talk to a 16 bit card with an 8 bit CPU. Some (many?) cards are able to be put into an "8 bit" mode, but from reading accounts on the net, this isn't guaranteed to work from one card to another. In any case, the 16 bit interface is supported by all cards, as it's part of the IDE standard.

GIDE uses a pair of PALs to create all the enables and clocks for the registered transceivers, as well as do IO port address decoding. I rolled both these PALs into one Atmel ATF1502ASL CPLD. These chips have 44 pins, are available in a reasonably friendly PLCC package (through-hole via a socket), and run at 5V, so they play nice with the rest of the Microbee without mucking about with level shifters etc.

Here's a schematic for my IDE/CF interface:

There's just three chips involved: a pair of 74HCT646 registered transceivers, and the CPLD.

While I was laying out a board, I also added memory (up to 512K of RAM and 128K of EPROM), and a floppy disk controller:

The whole lot is laid out in a simple 2 layer board, with 12 thou tracks and clearances:

Next I had a bunch made. I used iTead, a Chinese low-volume board house. I was very impressed with the quality and price of the boards, but they took ages to arrive. Much wall climbing ensued. I then set about assembling a couple. The only challenging bit is the CF socket, which has pins on 0.635mm centres.

Getting it going started with building a simple "SRAM" memory management PLD, which makes it pretend it's a normal static RAM coreboard, with 32K of RAM and 24K of ROM. Once I had this working I got the floppy disk controller running, then went to work on the CF interface:

This was a lot harder than I anticipated. I got it working after a fashion, but it was very touchy. Probing things killed it. Touching ICs killed it. It was just really difficult. I started by porting GIDE to CUPL, and implementing it in the ATF1502ASL. Much of the touchiness with my CF card (a 64MB Sandisk one) was related to iord and iowr. These signals gate data to and from the card. On a PC, they connect pretty-much directly to pins on the 8088 CPU. GIDE uses the logical AND of rd and iorq to generate iord (all negative logic), and the logical AND of wr and iorq to generate iowr. The chip selects are generated by the logical AND of address lines and iorq.

What this means is that cs and iord/iowr happen synchronously with one another, and as I was about to learn this isn't necessarily good. After much frustration I found the Sandisk CF manual, which shows timing diagrams for "PIO mode IDE":

If you look really carefully, you'll see that cs must be asserted _before_ iord or iowr, and that it has to be held active _after_ iord or iowr are deasserted. Our simplistic method of simply gating everything with iorq just won't cut it.

Things got a whole lot more reliable once I removed iorq from the chip select logic. That ensured that chip select was asserted well before iord or iowr, and held active well after. It was still a little problematic though in that the chip select was activated for both IO and memory accesses, where it should really only be active for IO accesses. Also, further reading of the manual says that chip select should only be asserted _after_ the address lines are valid, not at the same time.

The solution to this lies in more logic. The Z80 (Microbee processor) holds iorq, rd, and wr valid for just over two clock cycles, asserting them just after the start of the T2 clock cycle, and deasserting them after the mid-point of the T3 clock cycle. Note there's always an extra "wait" clock cycle in the middle:

So if we create a cfiorq signal, asserted on the negative-going clock edge after iorq, and deassert it exactly two clocks later, using this to generate iord & iowr, we get the timing just so. This is done with a three-state state machine:

/* cfiorq state machine - generates a 2 clock pulse starting first low going clock edge after iorq */

state0.d  = iorq & !state0 & !state1 # state0 & !state1 ;
state0.ck = !clk ;
state0.ar = rst ;
state0.sp = 'b'0 ;
state1.d  = state0 & !state1 ;
state1.ck = !clk ;
state1.ar = rst ;
state1.sp = 'b'0 ;

cfiorq = state0 # state1 ;

Then iord & iowr are:

/* iord */

iord = tfradr & cfiorq & rd
# datadr & !lh & cfiorq & rd ; /* only assert iord for task file, cs1, and first data read */

/* iowr */

iowr = tfradr & cfiorq & wr
# datadr & lh & cfiorq & wr ; /* only assert iowr for task file, cs1, and second data write */

I've further restricted them such that they're only activated for specific accesses to the CF, which probably isn't strictly necessary, but hey, it works.

So here's what my timing looks like in the flesh, on my lovely HP1652B logic analyser, which is nearly as old as the bee. Firstly a single 8-bit read, of the status register on the CF:

Accesses to the 16 bit data register make use of the GIDE lh signal, which toggles between bytes. Note that there's only one chip select and iord/iowr for every two bytes, at the first byte for the read, and second byte for the write. On alternate bytes the 74HC646 registered transceivers are clocked:
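
As an aside, here's a toy Python model of that byte-lane sequencing. It's purely my own illustration of the scheme described above (the class and names are made up, not the PLD source): on a read, the first CPU access performs the single 16 bit iord and captures the high byte in the '646; the second access just empties the latch.

# Toy model of the GIDE-style 16 bit data register access from an 8 bit CPU.
# 'lh' mirrors the PLD signal that toggles between bytes; everything else
# here is illustrative only.
class CfDataPath:
    def __init__(self, card_words):
        self.card = list(card_words)  # pretend CF data register contents
        self.latch = 0                # high-byte latch in the 74HC646
        self.lh = 0                   # byte toggle

    def cpu_read(self):
        if self.lh == 0:
            word = self.card.pop(0)   # the one real 16 bit iord per byte pair
            self.latch = word >> 8    # high byte captured by the '646
            byte = word & 0xFF
        else:
            byte = self.latch         # second read comes from the latch
        self.lh ^= 1
        return byte

path = CfDataPath([0x1234, 0xABCD])
print([hex(path.cpu_read()) for _ in range(4)])  # ['0x34', '0x12', '0xcd', '0xab']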

And finally here's my little bee, covered in probes to capture these waveforms:

The result of this is an interface that works with most every Compact Flash card I can throw at it, at least on this bee. Other bees, depending I think on their random mix of LS and CMOS logic, behave differently. It's a new challenge on each machine.

In any case, more detail at the MSPP, including source code for the PLDs, firmware for the ROM, and Kalvis' wonderful CF CP/M.

Friday, 4 March 2016


Here's a very typical example of getting an old cassette based game to run on the MicroBee. In this case the original came to me already as a sampled .wav audio file, but in the past I've done the sampling myself from (often very poor quality) cassette tapes.

So I start with froger-j.wav, downloaded from the Microbee Software Preservation Project website.

It's a 22.05 kHz sampled mono wav file. This is problematic, as the MicroBee cassette standard wants 1200 and 2400 Hz tones to represent binary 0s and 1s respectively. Neither 1200 nor 2400 divides cleanly into 22050, so the resulting waveform has a lot of timing jitter, which a real MicroBee hates.
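
A quick sanity check of that divisibility claim, in Python (just the arithmetic):

# Samples per tone cycle at each sample rate:
for rate in (22050, 9600):
    for tone in (1200, 2400):
        print(rate, tone, rate / tone)
# 22050 Hz gives 18.375 and 9.1875 samples per cycle, so the zero
# crossings wander between samples; 9600 Hz gives exactly 8 and 4.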

Trying it on a physical bee using my Macbook as a tape drive confirms it's not a goer. The bee doesn't even get as far as detecting a valid header.

Next step is to have a go at decoding the file on the mac. Many years ago I wrote a couple of simple utilities, wav2dat and dat2wav, which convert .wav format files to data and vice-versa. They were subsequently picked up by other more talented programmers (Kalvis), who made real utilities out of them rather than the buggy I-is-coding! stuff that I write. They're rather more forgiving than a real bee, so generally have no dramas reading less-than-perfect waveforms. Sure enough, wav2dat converted the file easily, and dat2wav converted it back to a clean 9600 Hz sampled .wav file. The Audacity plot below illustrates this. Because the sample periods don't line up cleanly with the data, there's a lot of zero-crossing jitter on the top (22 kHz) trace. The other two traces are 9.6 kHz versions, and they're lovely and clean.
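
For the curious, the guts of a wav2dat-style decoder is just zero-crossing timing. Here's a toy sketch of the idea in Python (my own illustration, not the actual wav2dat or Kalvis' code): measure the gap between zero crossings, and call long gaps 1200 Hz ('0') and short gaps 2400 Hz ('1').

import math

def crossings(samples):
    # Indices where the waveform changes sign.
    return [i for i in range(1, len(samples))
            if (samples[i-1] < 0) != (samples[i] < 0)]

def classify(samples, rate=9600):
    # A 1200 Hz half-cycle is rate/2400 samples; a 2400 Hz one is half that.
    threshold = rate / 2400 * 0.75
    xs = crossings(samples)
    return ['0' if (b - a) > threshold else '1' for a, b in zip(xs, xs[1:])]

# A pure 1200 Hz tone decodes as a run of '0's (offset half a sample so
# no sample lands exactly on zero):
tone = [math.sin(2 * math.pi * 1200 * (t + 0.5) / 9600) for t in range(48)]
print(classify(tone))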

So trying that on the real bee gets an actual file that loads. Alas after loading we just drop back to a prompt rather than running anything. This sort of behaviour is typical for copy protected games. It was popular to mess around with bits in the header, so if someone made a copy using a monitor program, for example, it wouldn't work.

But we've got much more than a monitor. We've got the tools to make any waveform we like. So comparing the header in the cleaned up file (bottom trace) to the header in the original dirty file (top trace), the differences are obvious:

The middle trace is what happens next. I brute-force edited the waveform in Audacity to change the bits, so my cleaned up version has the same header as the original game.

So now, when I load the game, I get it to run, but still something is seriously wrong:

The key to what's happening here is in the size of the garbled character cells at the bottom. They're mostly square, rather than the tall rectangular character cells that the MicroBee normally uses. It looks like the programmer has run the screen in a 64 x 32 character mode with 8 x 8 characters rather than the more normal 64 x 16 character display with 8 x 16 characters. This is okay, as there are 2K bytes of screen RAM, enough for the 2048 characters that result. It's wasteful of PCG RAM though, as half of the PCG RAM can't be displayed.
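
The cell arithmetic, just to spell it out:

# Character cells in each display mode versus the 2K of screen RAM:
print(64 * 16)  # 1024 cells in the normal 64 x 16 mode
print(64 * 32)  # 2048 cells in the game's 64 x 32 mode - exactly fills 2K
# Each PCG character stores 16 rows, but 8 x 8 cells only display 8 of
# them - hence half the PCG RAM going to waste.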

Colour (and Premium) MicroBees expect the programmer to initialise the screen colours though. This game was clearly written before there were colour bees. The top half is initialised by BASIC, as that's part of the normal 64 x 16 screen. The bottom half's colours aren't initialised, so the colours are garbage. Not to worry. We can do this in BASIC before we load the game.

So the following code clears the whole 2K of colour RAM to green on black, so the previously uninitialised bottom half matches the top:

10 out 8,64 : REM Enable colour RAM in top 2K of memory
20 for i=63488 to 65535 : poke i,2 : next i : REM clear all colour RAM to green / black
30 out 8,0 : REM back to normal PCG

So if we run this, then load the game again, we're in business:

Tuesday, 23 February 2016

Microbee Cherry keyswitch adapter boards

Here's what happens when you're out at work for the week and a box containing 500 little tiny circuit boards arrives, so you ask your husband to take a photo. No prizes for guessing what Perry's into...

What it is is another step toward making my Microbee keyboards totally wonderful and reliable. This little PCB goes between the keyswitch and the Microbee baseboard, correcting the PCB layout.

Sunday, 14 February 2016

Designing a new Microbee

One of the neat things about the Microbee, and I guess for me the reason it has enduring appeal, is that its design is wide-open and freely available, and was right from the start in 1982. The bee was originally conceived as a kit computer, and details of the kit, including a comprehensive "how it works" section, were published in magazines at the computer's launch. Applied Technology, the makers of the Microbee, initially made their money selling electronic components to hobbyists, so it was in their financial interest to ensure that their computer was as open and well understood as possible. People would then play with it and buy parts off them to do so.

This was the heyday of electronics hacking. I remember as a teen going to "computer fairs", where hobbyists displayed their toys alongside the rapidly burgeoning industry reps, who were probably hobbyists themselves just a few years previously, playing with S100 systems and suchlike.

So Microbee was never Apple, Commodore or Atari. There was no money for custom silicon, and that's good. The problem with custom chips is that they're built for a specific task, have a lifetime of perhaps three years before they're obsoleted by the next custom silicon, and then they're out on the scrap heap. No decent documentation ever gets published for them, as those developing the silicon are frightened that their rivals will steal all their IP. An example of this is VGA. VGA is so much more than a 15 pin connector on the back of old PCs. It's a whole display hardware system that IBM developed in 1987 with the launch of the PS/2, and it extended the graphics potential of PCs considerably. It made windows possible.

But try finding documentation for VGA cards. Schematics. Google gives connector pinouts, scan frequencies, mode tables. But absolutely nothing on the inner workings of a real VGA card. Modern emulations are just a brute-force reverse engineering of the card. This data in these registers gives these results.

The bee is different. Its graphics are open, based on the Rockwell 6545 CRT controller (a very close relative of the 6845 used in early IBM graphics cards). Because Applied Tech couldn't afford custom silicon, the whole design is right there in front of us to see and play with. So let's play with it.

The objective of the exercise is to extend the Microbee's video hardware so that it's capable of playing Pacman. Not a stripped down game that looks a little like Pacman (ghostmunchers), but to actually write the game on the bee and have it look and feel identical. The bee was never able to do this due to basic hardware limitations.

So let's have a look at how the bee's video hardware evolved over its lifetime. The bee started life as a kit in '82, based roughly on a couple of S100 cards that Applied Technology were selling at the time. The DG640 was the basis of the bee's video hardware. Pre-microcomputer, people used serial terminals to talk to computers. The serial terminals had a rudimentary screen and keyboard, and the computer did all the processing. Everything was very much character based. The terminal was essentially a glass typewriter.

People didn't tend to own serial terminals in '82, so they included one in the design of the kit, that could use a modified television as the screen. This is where the 6545 comes in. The 6545 was designed as a CRT controller for terminals.

So the design of the video hardware in the original bee closely follows the standard terminal application note for the 6545. The CRT controller chip is connected to "screen RAM" which contains the ASCII value of each character position on the screen. The eight output bits of the screen RAM select a character from a character generator ROM, whose least significant 4 address bits are driven directly by the 6545. The resulting 8 bits are serialised by an 8 bit shift register, and the output data quite directly drives the intensity input of a CRT. The 6545 has a bunch of counters in it to generate the screen RAM addresses and the row addresses for the character ROM.
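
If that pipeline is hard to picture, here's a toy model of it in Python (entirely my own illustration; the tiny two-character "ROM" is made up): screen RAM supplies a character code, the ROM row selected by the 6545's row counter supplies a byte, and the shift register turns that byte into pixels.

# Toy 6545-style character pipeline: screen RAM -> char ROM -> shift register.
screen_ram = [0x48, 0x49]                 # character codes for 'H' and 'I'
char_rom = {                              # 8 x 8 glyphs, one byte per row
    0x48: [0x42, 0x42, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00],
    0x49: [0x3E, 0x08, 0x08, 0x08, 0x08, 0x08, 0x3E, 0x00],
}
for row in range(8):                      # the 6545's row address counter
    line = ''
    for code in screen_ram:               # the 6545's screen address counter
        byte = char_rom[code][row]        # character generator ROM access
        line += ''.join('#' if byte & (0x80 >> bit) else '.'
                        for bit in range(8))   # the shift register
    print(line)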

Early memories were really slow. Typical static memories would do their thing in around 250ns. The access time of 250ns for the screen RAM, then a further 250ns for the character ROM or PCG RAM, dictates the overall resolution. Each screen + PCG access yields 8 bits of data, stuck into a shift register, so we can generate a pixel every 500/8 = 62.5 ns, plus a bit for other logic. This equates to a maximum dot clock of 16 MHz. Microbee went with 12 MHz initially, and then upped the speed to a whopping 13.5 MHz for later models. At a screen redraw rate of 50 Hz, this equates to an absolute maximum of 320,000 pixels. Allowing for retrace we get quite a bit less than this. Microbee initially went with a 512 x 256 screen (131,072 pixels), and later with a 640 x 275 screen (176,000 pixels).

Applied Technology realised that ASCII only contains 128 characters, and the 8 bit output of the screen RAM could address 256 "characters". So they included a 2K x 8 RAM as well as the 2K x 8 character ROM, which could be loaded with values by the CPU. This "Programmable Character Graphics" (PCG) RAM allowed high resolution but incredibly limited graphics. The Microbee as originally sold had a graphics resolution of 512 x 256 (128K pixels), but there was only enough PCG RAM for 1/8th of this.
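
Putting numbers on that memory-speed argument (a quick Python check of the sums above):

# Dot clock budget for the original bee's 250ns memories:
t_char = 250e-9 + 250e-9   # screen RAM access, then char ROM / PCG access
t_pixel = t_char / 8       # each pair of accesses yields 8 pixels
print(t_pixel)             # 62.5 ns -> 16 MHz maximum dot clock
print(1 / t_pixel / 50)    # 320,000 pixels per 50 Hz frame, before retrace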

Soon after it went on sale, a colour mod was developed. The colour mod patched a second 2K x 8 RAM alongside the screen RAM. This gave eight further bits per character cell, which could define a foreground and background colour. Early colour bees used a rather strange 32 foreground (5 bits) and 8 background (3 bits) scheme, but with the release of the premium model they went with a more usual 4 bits (RGBI) foreground, 4 bits background scheme.

But there were still only 128 PCG characters, so there were no pixel addressable graphics. Late in the Microbee's life Applied Tech redesigned the bee mainboard and fixed this. They added a third 2K x 8 RAM in parallel with the screen RAM, for a total of 2K x 24 bits of screen RAM. Now 11 bits are used to select one of 2048 PCG characters, plus 8 bits for colour, and the remainder is essentially wasted (used for flashing characters, inverse video etc).

The mainboard was getting pretty crowded with all this logic. There's three RAM chips for screen, plus up to four chips for PCG, plus a flock of multiplexers and buffers to allow either 6545 or CPU to access screen and PCG memory.

The whole time the memory is still running at the same speed. Every character cell (1/8th of the dot clock) we access screen RAM, then we access PCG, then the data gets serialised. And the graphics are still essentially monochrome. Each pixel gets to choose either the character foreground or background colour, so if we want to render red, white, and blue in successive pixels we're out of luck.

But even in the late eighties, when Microbee went belly up due to the onslaught of cheap PC clones, memory was faster than this. There's a design in this, and it doesn't have to be monstrously complicated with thousands of memory chips.

Rules are necessary for this design. First, no SMD. Everything's gotta be through-hole, or at least fitted via a socket (ie PLCC). PALs and GALs are fine.

So we start by ditching two of the three screen RAMs. In order to get our 24 bits of screen data, we access one RAM three times each character clock, latching the data on each access in a simple 'AC574 octal flip-flop. Three isn't a binary division though, so let's do four. The last access can be used for the CPU, so we don't make the CPU wait until a retrace period to access screen RAM. At a 13.5 MHz dot clock, this isn't stressing the memory at all. Each access is 148ns, easily doable with contemporary 120 or 100ns RAM.

So each character clock, we do screen, then attribute, then colour, then CPU. We've got enough data after attribute to do a PCG read, so the PCG access starts concurrently with the colour read.
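
To put numbers on the slot timing (my own tabulation in Python, nothing more):

# Four memory slots per character clock at the 13.5 MHz dot clock:
char_clock = 13.5e6 / 8                # one 8 pixel character per cclk
slot = 1 / char_clock / 4              # four RAM accesses per cclk
for i, use in enumerate(['screen', 'attribute', 'colour', 'CPU']):
    print(f"slot {i}: {use}, {slot * 1e9:.0f} ns")   # ~148 ns each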

We can do the same thing with PCG to allow us to read four planes of PCG RAM in a given character clock (one each for R, G, B, and I). That doesn't allow us to do any CPU accesses though, so if we push the PCG RAM out to 16 bits rather than 8, we get all our data in just two reads, leaving one for CPU, and one for... Hey, let's add a blitter!

A blitter is logic or a processor that just moves things around in memory. It's really useful for graphics. One can give it data for a sprite or window, and ask it to draw it in various screen locations. In its simplest form it's just another CPU that has access to video memory.

Now having three things trying to access one address bus is a pain. The usual bee uses multiplexers (74LS157s) to select between 6545 and CPU addresses. Multiplexers don't scale cleanly, so let's use tri-state buffers instead. In the 6545's case it makes sense to use tri-state flip-flops for its address lines, to ensure they're valid for the whole cclk cycle.

We still end up with four RAM chips though, simply because 64K x 8 RAMs aren't a thing.

Here's the whole lot diagrammatically:

I know it looks complex, but thanks to the faster RAM and tri-state buffers rather than muxes, it's actually on a par with the Premium bee's chip count. The whole video memory array, including address and data steering, is 27 chips. The same circuit in the Premium bee is 26 chips.

So continuing the design exercise, let's build a new bee mainboard around this video memory array. We'll use a Z8S180 processor, because these are compatible with the Z80 but go at like 33 MHz rather than 4 MHz, plus add video game sound.

The whole lot is several sheets of schematics. First our video memory:

The CRT controller and Keyboard:

The CPU:

And finally the PIO, RTC and sound chip (a TI SN76489, which makes lovely eighties video game noises):

And to prove that the whole lot is doable, here's the layout, having hit the autoroute button. The PCB is 12 x 8.55 inches, just a smidgen bigger than the early Bee (12 x 8.4), but smaller than later Premium bees (13.4 x 8.55). I've done it in four layers (the mid layers are power planes), with 8 thou tracks and spaces, so it's a doddle to manufacture.

There's a total of nine GALs, where the premium bee used just one. The blitter isn't actually implemented - the idea is to do this on a second board, either integrated with a super-coreboard or else underneath. The coreboard sockets are a superset of the standard bee ones, at 32 pins rather than 25. This allows for the extra address lines of the Z180, as well as some more ground pins. Standard bee coreboards plug in just fine, using 25 of the 32 pins.

Monday, 25 January 2016