8 years, 10 months ago.
Exactly how fast is this system?
I basically have two questions.
At 96 MHz, can the digital I/O pins read the 8-bit address and data buses (and other system I/O status bits) of a CDP1802 microprocessor running at a 4 MHz clock rate? Basically, I want to use it as a hardware debugger interface, so it can follow what the 1802 chip is doing, read the address and data lines, decode the instruction opcodes, and display the data (registers, accumulator, carry, etc.) on various LED displays.
And, can it run fast enough to service a 4 MHz interrupt with some minimally decent amount of processing time to process those interrupts?
I want to make an 1802 emulator and I want the clock input for that "chip" to be the NMI of the LPC1768. It needs to work such that this "emulated chip" can be plugged into any 1802 system and work.
If a 96 MHz native clock is not fast enough, what speed would a microcontroller need to run at to do that? Does any microcontroller in this family run that fast?
(if I have to, I can write the needed code in "assembly" instead of C)
TIA
5 Answers
8 years, 10 months ago.
There were issues with GPIO switching rates at one time, but I think they have been sorted out. I'm not sure whether anybody has tried the same thing as you, but I think it should be doable. The best way to find out is to try, then come back with any problems you hit. Use the forum for that, since there would be many alternate methods to explore, which makes it a poor fit for the questions section.
Dave.
8 years, 10 months ago.
What is limiting you to 96 MHz? There are supported parts that run at 180 and 204 MHz (although the peripheral clocks are generally half of the CPU clock). You can always prototype it on mbed, and if you find that you lack the speed, you can port your existing code to a bare-metal IDE like LPCXpresso (NXP parts) or Atollic (ST Micro parts), both free. Just keep things modular.
8 years, 10 months ago.
You are asking two questions. You want the 1768 to read address and data lines of a running 1802 processor and have the 1768 emulate that processor? Hopefully not at the same time.
Anyway, go ahead and clock the 1768 up to 100 MHz since you only need 96 MHz for using OTG USB (which most people don't need). Now you are 25 times faster than the 1802 clock. A three-minute tour of Wikipedia told me that the 1802 takes 16 or 24 clock cycles to perform each opcode, so you will have a minimum of 400 1768 clocks per opcode. I would guess that if you are clever at taking interrupts and organizing your code, you will find that the 1768 can still "balance your checkbook" and do much more at the same time it does either of the two tasks you want to do.
If you find that you want more, you can always step up to a 1769 and clock it at 120 MHz with 100% pin compatibility. That would give you a minimum of 480 clocks per opcode, which should seem like an eternity.
8 years, 10 months ago.
In addition to Just's excellent answer:
Quote:
And, can it run fast enough to service a 4 MHz interrupt with some minimally decent amount of processing time to process those interrupts?
I want to make an 1802 emulator and I want the clock input for that "chip" to be the NMI of the LPC1768.
Why would you want to do this? What is the goal of that setup? While technically it can be done with a very simple interrupt handler, it cannot do more than something very simple. An ARM already needs something like 8 clock cycles after the interrupt occurs to enter the interrupt handler, and then you generally lose a few more cycles saving context on the stack. It all adds up.
Also, in general, what exactly do you want to do with the retrieved data? Putting it on an LCD will update way too fast for a human to follow.
I try never to ask "why do you want to do" something. Bill famously asked, "why would you ever need more than 640K?" and "why would you need more than eight characters in a file name?" Asking why usually means I didn't put enough thought into the problem.
In any event, Erik's second question is the winner. In any project, the goal should drive the design. I'm also eager to hear where the data goes, if for no other reason than curiosity. Hopefully, the OP just needs to snoop the opcode, address, and data and put it out on a serial port. I think SPI at 10 Mbit/s or above would easily keep up, but where does it go after that?
While I think that the 176x processor can do the task, I would not brute-force it on 4 MHz interrupts. The real beauty of Arm processors is the extra hardware magic that comes with them. Clocks, timers, and other peripherals are what makes these such great tools. The 1802 processor divides the 4 MHz clock by 8 for a cycle clock, so I don't see any reason to take the 4 MHz directly. Divide it by 8 like the target does, and then the overhead cycles will not really matter as much. This will work fine for an emulator without much extra effort, but the trick for the monitor will be synchronizing the divider so that the 1768 is looking at the busses on the best of the 8 possible interrupts. Even this may be as simple as watching the address lines for a few clocks and then starting the divider so that the interrupt comes one clock after they change.
I'm not even going to mention here that a possibly better solution would involve letting the 176x control the 1802 clock, which would allow single-stepping the 1802....
posted 09 Jan 2016 (8 years, 10 months ago).
Also consider some potential pitfalls in this pending design. The original CDP1802 processor could run at a nominal 5 V and higher voltages, as noted in the respective datasheet (see Intersil). Do you wish to mimic the same voltage compatibility? It sounds like you do.
Then we are talking about a number of (FET) level shifters to pull off this task, as the LPC1768 is unlikely to natively support such high I/O voltages. You must use FET shifters if the data buffers are bidirectional (rather than single-direction).
What about the size of this black-box solution? Is it a requirement to fit the original 40-pin DIP format? If yes, then the final BOM must be reviewed to fit inside the confined footprint.
Often, such emulations are performed by FPGA devices so that you are in full control of the I/O pins on a cycle-by-cycle basis. An FPGA is as close as you can get to emulating the real 1802 CPU.
The next option you may wish to consider is the XMOS line of silicon controllers. XMOS devices are the middle ground between traditional CPUs and FPGA devices. FPGA devices are relatively expensive, and often the tools are not free or are at best limited in their features. As the market heats up on FPGA devices, the tools are being tossed in for free (i.e. Lattice / Microsemi / perhaps Altera, Xilinx). XMOS tools are free.
Next, the LPC1768 is a single-CPU, single-threaded component. No matter how you use IRQs, a single CPU is performing only one task at any given time. FPGA and XMOS devices are parallel devices. For example, on XMOS you can purchase 4- or 8-core devices on a single tile, which means that 8 processors (threads) can operate independently and transfer data to/from other cores with short latency. Admittedly, XMOS devices are not simple to understand and there is a learning curve. You can get started with a $15 USD kit (the XMOS startKIT, an 8-core CPU) to get your feet wet. See xmos.com and xcore.com for more details.
XMOS has a version that features a SiLabs ARM processor in addition to the XMOS CPU cores. We have raised a request for the XMOS device with the ARM core (we believe it is an M3) to be supported on MBED, which we believe is under review.
MBED is an amazing tool, but do review your requirements to see which technology is a proper fit. We recall that someone (yzoer) ported a software-only Z80 emulation to XMOS for MAME; we're not sure if the code is public. We also recall that another developer ported an emulation of the FDC765 (we believe that is the part number), an end-of-life floppy controller - that code is public on the GitHub portion of xcore.com. Again, the XMOS tools and development are not as simple as MBED, but they may be worth a quick review. XMOS is deterministic hardware, and you will be in full control of the I/O pins on a cycle-by-cycle basis. Also, XMOS neither needs nor supports interrupts. It is the ultimate I/O bit-banging machine. There is free IP available for USB 2.0 High Speed and Gigabit Ethernet, and they are leaders in high-end digital audio solutions.
The 1802 brings back some good memories. It was my first computer: when I was 12 I hand-soldered the Netronics Cosmac ELF II, and I still have it somewhere in the lab for memory's sake, along with my oodles of Atari and Commodore hardware (a lot of our own designs were learned on the Atari hardware). I recall the Popular Electronics articles on the toggle-switch version, and the Star Trek spaceship graphic after 30 minutes of typing hex codes, using an RF modulator on my B&W TV. Somewhere I also have the 1802 processor (40-pin DIP).
BTW - are you aware of the military version of the 1802, available from Intersil at $150 USD? It is still in production.
Even with the XMOS / FPGA / MBED approach, you will need external buffers to provide the proper voltage translation between today's technology and the existing 5 volt (or higher) voltage swings of the 1802 hardware.
MBED is great, but we believe the use of XMOS may be a better fit. With XMOS, we are talking about 500 to 2000 MIPS of capability in a single device.
Crossing my fingers & toes that MBED will be supporting the XMOS targets one day (at least the version with the Cortex CPU).
Hope this helps.