Retrocomputing Stack Exchange is a question and answer site for vintage-computer hobbyists interested in restoring, preserving, and using the classic computer and gaming systems of yesteryear. It's 100% free, no registration required.

Debuggers are carefully written programs that peek and poke other programs while they run. On retrocomputers, a program could use any part of the memory it could access.

So how did debuggers insert themselves into memory so they could be executed, without overwriting - or being overwritten by - the target program or its data?

This is a bit broad as it depends on the target. On a BBC they sat in sideways memory, on HPs there was a separate plug-in ROM. On others, e.g. Intel, they used separate hardware clipped onto the processor. – Chenmunka yesterday
What makes you think they did not insert themselves into memory? In many cases, they are already there, and you can't debug unless you have prepared something, where something could be for example inserting an exception handler. The OS may already do that. This question is not answerable without giving an example of a system that acts in the way you describe. – pipe yesterday
@pipe I know there were different ways of this being done. But I'm pretty sure it's answerable. – wizzwizz4 yesterday
Except for hardware emulators, they did fit in memory – but we're talking about earlier, much more primitive, less capable debuggers, not Visual Studio or Eclipse. Here, for example, is the manual for the PDP-8's DDT debugger, where you can see, on page 2-2, that it fit in ~2350 12-bit words (PDP-8 maximum memory: 32K words). But small and limited as it was, we got a lot done with it! – davidbak yesterday
BTW, earlier you asked: how did they ensure they weren't overwritten. Answer: They didn't! That was up to you. Be careful! – davidbak yesterday

Without hardware support, there is no way for a debugger to protect itself from the program being debugged. A debugger needs to protect its code and working state, as well as any hooks it's set up for its debugging (e.g. single-stepping interrupts); but in typical "retro" systems (8-bit machines), a debugger would just insert itself into memory and hope for the best.

Debuggers hosted on the same system as the code being debugged, and implemented entirely in software, can only rely on the cooperation of the program being debugged and of the person doing the debugging. Most debugging isn't adversarial: the code being debugged isn't actively taking measures against the debugger, so the debugger doesn't need to actively protect itself. (If a bug destroys the debugger as well as the program being debugged, well...) The debugger can offer configuration options to the end-user to facilitate things, of course, but there will always be limitations that the end-user has to live with: using a debugger leaves less RAM for the program being debugged, it might use some interrupts that the program would want for itself, etc. All told, though, the gain from having a hosted debugger makes it worthwhile.

Some early debuggers used extra hardware, without special CPU support, to reduce the level of compromise:

  • hosting the debugger in a cartridge means the code can't be touched
  • a cartridge can also provide extra RAM, which means the debugger's working state doesn't reduce the amount of memory available (at least, on systems with less RAM than their address space)

but, without an MMU, all of this still happens in the same address space (even if banking is involved), so it's only reliable for non-adversarial debugging.

Adversarial debugging, or any form of complex debugging really (kernel debugging, etc.), requires hardware support. This can take a number of forms:

  • an MMU to provide memory and I/O protection
  • CPU support for some level of virtualisation (e.g. V86-mode on the 386)
  • outside hardware, using a second computer for debugging — this was very common, in a variety of forms (a small debugging shim running on the host and communicating with the debugger via a serial port, the network etc.; ICEs clipped to the CPU, or replacing the CPU...)

Hosted debuggers, especially for adversarial debugging, only really took off with the arrival of MMUs and CPU-based protection; e.g. Turbo Debugger 32, SoftICE... Multi-host debugging is still used in many cases, e.g. embedded development, development on mobile phones (using emulators or real devices), kernel debugging...

There were some systems where the whole operating environment was constantly debuggable, e.g. Lisp Machines, but again that's non-adversarial debugging in most cases.

+1 for "If a bug destroys the debugger as well as the program being debugged, well..." – Insane yesterday

A debugger that runs inside the debugged machine is a program, so it does need memory.

Sometimes the debugger is loaded as a ROM cartridge, usually with its own RAM, so it doesn't need to take any RAM from the running program. This is the case with, for example, the Action Replay modules for the Amiga.

Sometimes, it's a regular program that takes some RAM, but it can be relocated at load time, so if you know in advance that the program to examine won't use a certain block of RAM, you can load the debugger there. This is the case with, for example, the MONS 3 debugger for the ZX Spectrum.

Sometimes, the debugger cannot find any "normal" RAM to execute from, so there are debuggers that load themselves into the screen memory, like this one:

[Screenshot: a gray display with black garbage in the top third]

Yes, that's a program. You can load this image into a ZX Spectrum emulator's screen memory and do a PRINT USR 16384 to execute it :)

The only kind of debuggers that don't really need any memory at all from the debugged machine are the so-called ICEs (In-Circuit Emulators), which either take the place of the main CPU, emulating its behaviour and reporting data to an external machine, or are logic analyzers that can decode bus cycles and present a disassembly of the instructions actually executed, in real time, by the machine being debugged.


On 8-bit machines, any debugging support was just a normal program in memory, and the application could accidentally overwrite the debugger. Single-stepping through a program involved replacing some code in the application with a call or software interrupt that returned control to the debugger. It was all too easy to set a breakpoint in the wrong place and completely lose control of both the debugger and the application.

Later on (in the early PC era), the principle was still the same. MS-DOS had a DEBUG utility, which loaded into memory first and in turn loaded the application above itself, higher in memory. The 8088/8086 chips had no way to protect one segment of memory from another, so MS-DOS was not designed to manage memory to protect applications from each other. Consequently it was quite possible for the application under test to accidentally clobber the debugger and prevent it from working properly. Sometimes the machine just locked up and you had to start over from the beginning.

