Why was preemptive multitasking so slow in coming to consumer OS's?

44

3

Preemptive (rather than cooperative) multitasking was a highly touted feature for PCs in 1995, with its inclusion in Windows 95 for the first time. It was also highly touted for Macs in 2001, when it arrived with OS X. Earlier versions of both of these mainstream consumer OSes supported cooperative multitasking.

Meanwhile, preemptive multitasking was the default for more sophisticated OSes, like Unix (decades earlier) and OS/2. And the Amiga shipped with a consumer OS with preemptive multitasking in 1985.

Notwithstanding the Amiga being 15 years ahead, why did it take so long for preemptive multitasking to be supported by the mainstream consumer OSes?

Brian H


15Wild speculation: Customers wouldn't care enough about it to justify the implementation cost, but when they got around to supporting it (Windows 95 changing lots of internals, and OS X completely rebuilding the OS on top of BSD) it was a great thing to market because it had the word "preemptive" in it. – wizzwizz4 – 2018-09-26T16:26:31.517

3Windows 3 had pre-emptive multi-tasking between VMs too (the DOS VMs and the Windows VM), didn’t it? (Yeah, nitpicking, since the Windows apps multi-tasked cooperatively, until one crashed...) – Stephen Kitt – 2018-09-26T16:59:09.077

3The Apple Lisa had preemptive multitasking in 1983; you just need to redefine consumer to be equal in wealth to four or five consumers. – Tommy – 2018-09-26T17:09:39.120

3I ran DESQview with pre-emptive multitasking in the late '80's as well, and I believe IBM's Topview did the same before then. – Brian Knoblauch – 2018-09-26T17:21:19.660

2

Would you consider TSRs like Borland Sidekick to be pre-emptive multitasking? They fit many of the criteria of multitasking: time-slicing, context-switching, and even system support (in the form of the InDOS flag)!

– ErikF – 2018-09-26T17:28:51.120

@ErikF I'm trying to focus on native OS features. DOS was a pretty lightweight OS, lending itself to friendly takeover by extension products with OS-like features. But you could also just run OS/2 or Xenix on PC hardware to make a clean upgrade. – Brian H – 2018-09-26T18:17:58.130

3@wizzwizz4 I think it should be obvious that applications are created for users, and OSes are created for application developers. Can you convince me that application developers had no desire for a preemptive multitasking OS? – Brian H – 2018-09-26T18:20:46.823

1@StephenKitt Yes... Windows/386 introduced a V86 multitasker that was used to preemptively multitask a Windows V86 VM and zero or more DOS VMs. Windows 3.0's 386 Enhanced mode carried this forward. (Although I'm not sure how Windows was run in protected mode.) Windows 95 is what finally brought preemption to Windows apps, but then there was a mutex around Win16 code, which was extensively used even by Win32 apps. – mschaef – 2018-09-27T00:52:36.770

1As a side note, preemptive multitasking arrived in the PC world in 1988 with Windows 386 – edc65 – 2018-09-28T09:59:59.670

Answers

18

Speaking for the Macintosh here.

TL;DR: Even once the hardware was capable enough, it wasn't possible to do this in a manner compatible with the already existing application base.

You'll need to see this in the context of the heritage of the Macintosh Operating System itself. It was built to run in a considerably limited environment in terms of CPU power and available real memory. Add to that the fact that the initial release was certainly not production-quality code: time pressure was high to get the whole thing running well enough for the highly anticipated release. See Folklore.org for details.

(The lack of memory-remapping support in the CPU, plus the initial floppy-only machines, prevented implementing paging in any bearable manner. See the Memory Management Unit article on Wikipedia for details.)

When RAM got cheaper over time but floppies were still the common permanent storage medium, Switcher was born by chance. Applications were adapted to behave better with Switcher, and vice versa.

From there, MultiFinder was the next step, coming out in 1987. It took advantage of the fact that relatively little had to change in applications for them not only to sit in RAM waiting to be switched to the foreground, but also to run in the background. MultiFinder simply exploited the main loop every program was executing, mostly waiting for the user to do something. Applications spent most of their time stuck in a system routine which delivers an event (keyboard entry, mouse click, …) to the application program to handle. MultiFinder in effect steals control from an application stuck in this routine and passes control to the next eligible application. If an application rarely calls this GetNextEvent() routine, it behaves poorly under MultiFinder, regardless of whether it is in the background or the foreground. The best negative example is Background Printing: Print Monitor searches the network for the chosen (selected) printer, and if the printer is switched off, the whole machine hangs for nearly a minute until Print Monitor's search times out, returns with an error, displays the error indicators and finally calls GetNextEvent() again so it can handle the user's input.

Over time, developers made applications more multitasking-friendly by adding GetNextEvent() calls to internal processing routines, so lengthy processing didn't block other applications. This usually doesn't feel as fluent as preemptive multitasking in a graphical environment, though.
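
To make the mechanism concrete, here is a minimal sketch (my illustration, not Apple's or the answer author's code) of the two patterns described above. WaitNextEvent()/GetNextEvent() are the real Toolbox calls; HandleOneEvent(), DoOneChunkOfWork() and the <Events.h> header are assumed placeholders, and a real application would of course also dispatch the events returned inside the long loop.

#include <Events.h>                    /* classic Mac Toolbox, assumed available */

void HandleOneEvent(EventRecord *ev);  /* hypothetical application dispatcher */
void DoOneChunkOfWork(long i);         /* hypothetical unit of a long computation */

void MainEventLoop(void)
{
    EventRecord ev;
    for (;;) {
        /* MultiFinder switches to another application while this call waits;
           this is the only place control is ever given up "for free". */
        if (WaitNextEvent(everyEvent, &ev, 15, NULL))
            HandleOneEvent(&ev);
    }
}

void LongComputation(void)
{
    EventRecord ev;
    long i;
    for (i = 0; i < 1000000; i++) {
        DoOneChunkOfWork(i);
        /* A multitasking-friendly application also pumps events inside long
           loops, so other programs (and Print Monitor) get CPU time. */
        if (i % 1000 == 0)
            (void)WaitNextEvent(everyEvent, &ev, 0, NULL);
    }
}

An application that never reaches one of these calls - like Print Monitor during its timed-out printer search - holds the whole machine hostage, which is exactly the failure mode described above.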

(Novell NetWare 3 and later also supported cooperative multitasking between extensions of the kernel, server.exe. These NLMs were loaded at runtime and provided hardware services, like drivers for mass-storage adapters and network adapters, additional network protocols like AppleTalk and TCP/IP, and application support such as pserver.nlm, the print server, or database support from Oracle, … I never programmed NLMs, but either Novell's implementation of cooperation was very sophisticated, or in a text-based environment the possibly uneven time slices simply weren't as apparent as in a GUI environment like the Macintosh's.)

System 7 added some more code to smooth out multitasking further. I can't prove it, but I think Apple changed the way it decided which applications were eligible to get CPU control: a kind of dynamic scheduling instead of a dumb cycle through all open applications, with the foreground application getting control more often than background ones.

In the meantime there were a lot of applications for the Mac: commercial, freeware and shareware. Apple's struggle to keep up with the competition while trying to build a new OS with features we take for granted today, all while keeping compatibility with this pool of applications, can be traced in online media archives of the late 1990s. Search terms include Taligent, Copland, BeOS and finally NeXTSTEP, reborn as Mac OS X.

Switcher was a quick hack; MultiFinder extended the possibilities, and that's about it. I think Apple was too busy creating the next big thing to tinker with the nuts and bolts, adding features that would have meant changing basic OS internals in a possibly incompatible way.

By the way, the Amiga had the advantage of its coprocessors relieving the main CPU of certain tasks. I can't prove it, but I think that in such an environment, preemptive multitasking was a better way to exploit this seemingly parallel execution environment.

PoC


What were the "considerable limits" you mention in the third paragraph? – pipe – 2018-09-26T19:34:19.643

@pipe CPU power and available real memory? – immibis – 2018-09-27T01:29:28.347

IIRC there was a limited form of multitasking in early Mac OS by hooking into the video refresh interrupt. Applications could register a callback to be run every video cycle. Am I remembering correctly? – Barmar – 2018-09-28T16:29:43.367

1@Barmar: All platforms other than the Apple ][ allowed code to hook "timer tick" interrupts, but such timer-tick functions only give up control after they run to completion, which is rather different from multitasking, which allows tasks to give up control (voluntarily or involuntarily) in the middle of a function. – supercat – 2018-09-28T21:18:09.947

@supercat Of course. I remember that there was some kind of caveat saying that these callbacks should not run very long. – Barmar – 2018-09-28T21:21:53.997

@Barmar: Not only could they not run very long, but they were severely restricted with regard to what kinds of operations they could perform, with memory allocation and I/O being essentially forbidden. – supercat – 2018-10-01T17:50:48.857

30

Memory protection.

It's not that preemptive multi-tasking is expensive, or hard. It's not. It's easy. It costs (or can cost) essentially the same as cooperative multitasking. You have to save process state in both cases.
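
A minimal sketch of that point (my illustration, not the answer's code): the state you save and restore is the same either way; only the trigger differs. POSIX ucontext stands in here for the registers a real kernel would save in its interrupt frame, and task setup (stacks, makecontext) is omitted.

#include <ucontext.h>

#define MAX_TASKS 4

static ucontext_t task_state[MAX_TASKS];   /* saved CPU state, one slot per task */
static int current = 0;

/* The switch itself: save this task's registers, restore the next one's. */
static void switch_to_next(void)
{
    int prev = current;
    current = (current + 1) % MAX_TASKS;   /* dumb round-robin pick */
    swapcontext(&task_state[prev], &task_state[current]);
}

/* Cooperative: every task has to remember to call this now and then. */
void task_yield(void)  { switch_to_next(); }

/* Preemptive: a periodic timer interrupt calls the very same routine, so a
   task that never yields still loses the CPU - same work, different trigger. */
void timer_tick(void)  { switch_to_next(); }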

But what was holding the older systems back was their early reliance on hardware without inherent memory protection, and those legacies lasted long past the availability of hardware that actually supported Memory Management Units (MMUs).

Take the Mac, for example. Its legacy was the 68000, which did not have direct MMU support.

It was not long before Macs started coming out with 68020 CPUs (which could be paired with an MMU). But the OS had to run not just on the new hardware, but also on the old, and the two are quite incompatible in that respect. Add to that the impact the original design had already had on application software.

When you start from scratch (like OS/2) then, yes, it's easier.

The MacOS heritage held it back for a long, long time, until Apple was able to replace it with Mac OS X, with its Carbon compatibility layer carrying existing software into the new, Unix-ish environment.

The OS for the Apple IIGS was actually a "better" MacOS in some ways, notably in process management and memory management: a MacOS 2.0, if you will.

Windows suffered similarly. Recall early Windows ran on the 8088. Not the 80286, the 8088. That legacy also burdened it for quite some time.

Code has momentum, choices have consequences. Be amazed they worked at all.

Will Hartung


36The Amiga featured preemptive multitasking without memory protection. I don't think that the kind of multitasking (preemptive vs. cooperative) and memory protection are related. The Amiga also had a 68000 CPU, without MMU. – PoC – 2018-09-26T18:45:20.100

23The Sinclair QL had preemptive multitasking even a year earlier. It had a 68008, thus no memory protection as well. Memory protection is no precondition for preemptive multitasking. – tofro – 2018-09-26T19:22:36.783

10I have run a fully preemptive lightweight OS such as FreeRTOS even on small 8-bit AVRs that only have a few KB of internal RAM, with no memory protection, no MMU and no protected modes at all. – nbloqs – 2018-09-26T19:34:27.773

14I'd summarise the Macintosh's problem in a slightly different way: having first offered a single-tasking 128kb machine with no memory protection, it is a given that programmers would cheat and directly manipulate OS data structures. Once programmers are directly manipulating OS data structures, one cannot safely introduce preemptive multitasking because doing so would break atomicity. You'd never know when the scheduler is going to intercede with some important piece of internal structure half mutated. – Tommy – 2018-09-26T20:16:32.433

6@Tommy: The issue wasn't just OS data structures. Inside Macintosh specifically authorized applications to assume that memory associated with handles will only move when certain system calls are invoked, and that code which wasn't going to invoke any operations on that list didn't need to worry about memory blocks getting relocated out from under it. – supercat – 2018-09-26T22:10:51.877

Presumably there is a higher cost on systems with an MMU in terms of how much work a context switch needs to do ? In that sense the Amiga should be able to switch tasks faster. – PaulHK – 2018-09-27T02:19:25.583

@PaulHK A properly designed OS will not need an MMU to context switch. The context consists of register contents only, and saving these is done by the CPU anyway. – tofro – 2018-09-27T06:26:46.680

2@tofro - yes, but on systems that have an MMU, the MMU context does need switching as part of a task switch, and in many cases that's an expensive operation, often causing dozens of essentially random memory accesses as the various TLBs are refilled. Cooperative multitaskers usually use a single MMU context even on systems that can support isolation, making them more efficient. You could do the same for a preemptive design, but there are issues with that that can cause reliability problems. – Jules – 2018-09-27T07:54:05.757

Does the I&D cache need flushing on context swap (for security purposes) in a properly designed OS ? – PaulHK – 2018-09-27T08:32:04.787

@PoC - there is a relationship between them, but it isn't an obvious one. The thing is, if you want to do any advanced memory-manager things (heap compaction, demand loading, swapping, etc), you either need an MMU or a way to be sure memory isn't in use before you operate on it. You can use a lock/unlock protocol, but developers don't like working with those, and often they become ineffective due to applications locking stuff for longer than necessary. In a cooperative model, you can allow programs to access resources between context switches using a simpler interface that encourages ... – Jules – 2018-09-27T11:10:53.560

... releasing resources on context switch, which then becomes much easier for application developers to work with and results in fewer leaks, or even intentionally held locks. With an MMU this issue goes away ... Memory resources are virtualized and process local, so locking and unlocking is unnecessary. So without an MMU, OS designers had to decide which features to prioritize. Once MMUs became more common those decisions were no longer necessary, because we could in fact eat our cake and have it. – Jules – 2018-09-27T11:15:33.997

Regarding OS/2, it's interesting to note the claim on Wikipedia that OS/2 1.x targeted the 16-bit 80286's protected mode. Again per Wikipedia, the 80286 included an on-chip MMU.

– a CVn – 2018-09-27T18:23:50.700

OS/2 didn't just target the 286. The 286 was designed with input from the original OS/2 group, and various modes were designed (at least in the early days) with OS/2 specifically in mind. – jdv – 2018-09-27T19:19:31.643

@Jules: An MMU will make such things easier for independent applications that don't need to share resources (other than a pool of fungible memory). Shared resources are still easier under a cooperative multitasking model. – supercat – 2018-09-28T02:25:33.997

You don't need memory protection. Mac OS 8.6 introduced opt-in preemptive multitasking, using an API that was retained well into OS X.

– user71659 – 2018-09-28T03:00:22.180

@PaulHK have you heard of Spectre and Meltdown? – NieDzejkob – 2018-09-28T15:23:41.597

@NieDzejkob: Whether cache flushing would be required would depend upon whether hardware protects cache contents from inappropriate leakage. It's important to ensure that no task can be affected by cached information they're not supposed to access, but other approaches besides flushing could be used to achieve that. – supercat – 2018-09-28T21:23:44.547

@PaulHK It depends on whether the cache is addressed virtually or physically. Caches that (effectively) use virtual indexing/tagging must be invalidated on context switch for correct operation. – user71659 – 2018-09-29T22:21:19.853

14

To start with, the only hardware needed for preemptive multitasking is an interrupt-capable timer. Everything else can be done in software, though some memory-management hardware would be helpful. Besides custom solutions, that hardware was already readily available off the shelf for 8-bit CPUs: in addition to more generic solutions like TI's 74610 series, more advanced ones like Motorola's 6829 were available.

Notwithstanding the Amiga being 15 years ahead,

Let's skip that subjective part, OK :))

why did it take so long for preemptive multitasking to be supported by the mainstream consumer OSes?

Now, hold your horses. I hope you agree that CP/M was a mainstream consumer OS, don't you? DR already offered its preemptive multitasking brother, MP/M, in 1979. It would work with 'only' 32 KiB, but of course considerably better with several 64 KiB address spaces, one for each process.

This 'huge' memory requirement also marks the primary reason why usage was limited: the price of such a computer. MP/M was targeted at 'power users' with an urgent need to have multiple applications running in parallel. While RAM prices did drop, the need for a compelling use case stayed.

MP/M-86 was available right alongside the IBM PC and later turned into Concurrent DOS. Still, it was the missing use case that stopped broader adoption, not least due to a lack of software that would benefit from multitasking at all. Other, more limited products like DoubleDOS and DESQ had more success with the general public by offering the chance to hold more than one program in memory, handy for reducing load times. Otherwise there was little gain, as these programs ran essentially separated from one another.

More problem-centric solutions like Sidekick gained much popularity, as they offered at least some (workflow) integration. Apple, on the other hand, focused from the beginning on a more user-centric approach. The original Finder already supported clipboard exchange and desk accessories in addition to a (single) main application. With System 5's MultiFinder, more than one application could be loaded. Still, it took some time until this really got a foothold through better integration.

Bottom Line: It's about the application, stupid.

From a user's perspective there is no real difference between cooperative and preemptive multitasking. It's all about what can be done. No matter how we engineering-oriented people value these concepts, multiprogramming, multitasking, multiprocessing and all the other multi-* are only tools to enable users to do their tasks, with no real value in themselves. Separate address spaces, preemptive multitasking and so on only became standard when supporting hardware came for free. And at first this only supported programmers in creating applications (or helped handle their inability to do so), not offering any direct benefit to the user.

Raffzahn


1Preemptive multitasking among independent pieces of code is easy. If one had a PC system with two floppy drive controllers and wanted to run two sessions, each of which was limited to accessing its own drive, a "multitasking OS" would take less than 100 bytes. What's hard is handling cases where multiple programs want to make use of a shared resource. An OS that doesn't have to handle such things can be more efficient than one that does, but unless an OS includes support for such issues from the beginning, it will be difficult to retrofit later. – supercat – 2018-09-26T21:47:50.770

3@supercat And that's related in what way to the question - or my answer? – Raffzahn – 2018-09-26T21:54:04.617

2000 - 1985 = 15 is not subjective, and that's the calculation that the questioner is clearly performing. – JdeBP – 2018-09-27T10:56:55.567

@JdeBP You mean 1996-1985 - as in, the Windows mentioned is way more mainstream consumer than MacOS? But more importantly, you forgot about the Fanboi tag :)) – Raffzahn – 2018-09-27T11:03:26.063

1Re, "the application..." I think you're on to something there. Multitasking (of any kind) only became widespread when users started running multiple competing applications on the same host at the same time. But that didn't become widespread until PCs had sufficient graphics capability to run a desktop-with-overlapping-windows environment. – None – 2018-09-27T14:56:10.583

Your first statement is not entirely true. Without separate user/kernel protection modes (in particular, restriction of cli operation to kernel), "preemptive" multitasking is ultimately cooperative; an uncooperative task can just mask interrupts during operations it deems worthy of hogging the cpu. – R.. – 2018-09-28T04:21:11.597

@supercat That sounds like an answer to me. "It would've required large changes in the existing code-base." – wizzwizz4 – 2018-09-28T06:14:44.197

MP/M, I think, was aimed at VARs installing small business systems. – RonJohn – 2018-09-30T01:24:13.123

1@RonJohn: Pre-emptive multitasking doesn't mean that no task can sink the system; it merely means that tasks don't have to do anything special to keep things going. – supercat – 2018-10-01T16:18:33.577

@supercat I guess you picked the wrong target - but you're right that preemptive doesn't mean protection, and protection doesn't really stop any ill-meaning program either. One can still open files until the disks are at 100% and all tables fill up, or issue many requests taking over most of the available CPU. That's as bad for the user as any other way. Not to mention that preemptive multitasking needs to maintain more locks to avoid a task switch when it shouldn't happen. – Raffzahn – 2018-10-01T16:23:02.227

@supercat since I didn't (at least I can't find where I said it) say anything about PMT making systems bulletproof, I think you might have replied to the wrong person. – RonJohn – 2018-10-01T16:31:12.250

1@R..: Pre-emptive multitasking doesn't mean that tasks can't sabotage the system, and thus doesn't require disabling CLI. What it means is that tasks don't have to make any explicit effort to yield control of the CPU during long-running operations. In cases where all tasks are trustworthy, allowing tasks to briefly disable interrupts themselves may avoid the need to have them use more heavyweight mechanisms. – supercat – 2018-10-01T16:31:30.160

@RonJohn: Yeah, the @ auto-completed to you rather than R. Sorry 'bout that. – supercat – 2018-10-01T16:31:58.077

10

If you look at early (small) computer operating system kernels like CP/M or MS-DOS, there actually isn't that much functionality supplied by those systems. Apart from a thin layer of drivers for keyboard and console, the main function supplied to applications is the file system driver. CP/M and early MS-DOS didn't even have support for memory management. In an exaggerated sense, both CP/M and MS-DOS weren't so much "real" OSes as very small pieces of code thrown together from small drivers and some utility programs.

An OS supporting preemptive multitasking needs to supply many more functions: memory management, interprocess communication, support to allow multiple processes to use the screen concurrently. A multitasking system can't allow programs to access the hardware directly; it must mediate that access and block other tasks while one of them uses a device. So it is simply a lot more effort to build such a system.
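
As a small illustration of that extra effort (a sketch of mine, not from the answer), "blocking other tasks when one of them accesses hardware" amounts to routing device access through a kernel call that takes a per-device lock. A pthread mutex stands in for the kernel's own locking primitive, and do_disk_io() is a hypothetical driver routine.

#include <pthread.h>                      /* stand-in for the kernel's own primitives */

int do_disk_io(int sector, void *buf);    /* hypothetical low-level driver routine */

static pthread_mutex_t disk_lock = PTHREAD_MUTEX_INITIALIZER;

int kernel_read_sector(int sector, void *buf)
{
    int result;
    pthread_mutex_lock(&disk_lock);       /* a second task blocks here (and another
                                             task can run) while a request is in flight */
    result = do_disk_io(sector, buf);
    pthread_mutex_unlock(&disk_lock);
    return result;
}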

The first home computer to support preemptive multitasking was actually the Sinclair QL in 1984. It implemented all of the above and more in its OS, QDOS, and its BASIC interpreter was just one of many possible jobs that could run concurrently. All of this was implemented in only 48 kBytes of ROM.

Computers like the Macintosh or Atari ST, whose operating systems weren't built with multitasking in mind and which had a considerable user base and application suite, were much harder to move to a preemptive multitasking system - compatibility with existing applications made the move extremely difficult. MacOS and Atari ST systems that adopted preemptive multitasking in add-on developments like MultiTOS had large compatibility problems in the beginning and needed to make a lot of compromises to keep compatibility with existing applications (and thus were obviously much more complicated to develop). The same is true for later PC-DOS multitasking systems like DESQview.

Another reason might have been: it was hard to convince the normal, non-power user of what they might need preemptive multitasking for. The average user argued "I can only work with one program at a time, so what?", and OS concepts that really created a tangible benefit for the end user from multitasking (preemptive or not) were only slowly evolving. Also, keep in mind that the direct benefit an end user gets from a preemptive multitasking system over a cooperative one is zero. It's the software developer who gets all the benefits, and after all it is normally the user, not the developer, who decides which computer is bought. Thus, as an engineer, it was presumably very hard to convince your marketing department that you wanted to build such a system.

tofro


1This is a good answer, but doesn't include the backwards compatibility issue, particularly with the Mac, that other answers seem to consider as paramount. Also, I think that OS-9 for the Coco 2 was out in 1983, and the QL followed. Not that either was nearly as mainstream or well-known today as the Amiga. – Brian H – 2018-09-27T12:21:42.227

Isn't there a benefit to the user that an unresponsive co-operative application task will likely make the computer unresponsive, whereas an unresponsive pre-empted application task will likely just slow the computer slightly but leave it responsive? – TonyM – 2018-09-28T17:20:19.580

@TonyM Maybe yes, maybe no. Marketing doesn't normally argue that way: "In case some of our unfailable software crashes, our new preemptive MS system will just slow down overall...." – tofro – 2018-09-28T17:23:23.737

I don't think it's 50-50, like 'maybe yes maybe no' implies, though. There are clear technical advantages to pre-emptive, that's why it's so favoured for responsive systems. And users could see the benefits in that responsiveness. – TonyM – 2018-09-28T20:57:22.027

The average user argued "I can only work with one program at a time, so what?" Formatting a floppy disk, while transferring files over the modem while working on a document was pretty sweet -- and quite handy -- on OS/2 Warp. You could also work while a document printed. – RonJohn – 2018-09-30T01:28:15.810

@RonJohn While that's true, note that even early versions of OS/2 were pretty heavy on system resources on contemporary systems; it took a while for hardware to catch up. I distinctly remember this being discussed as a downside of OS/2 at the time. Sure, being able to format a floppy disk while transferring a file over a dial-up link while printing a document is nice, but if the OS needs so much memory that on an affordable computer you can really only run one program at a time anyway, then there's little point to it. – a CVn – 2018-10-08T12:01:33.150

@MichaelKjörling maybe my system was just really beefy. IIRC, it was 8MB RAM and a 486DX/33. – RonJohn – 2018-10-08T12:54:18.360

@RonJohn By the time of OS/2 Warp, such a configuration doesn't seem unreasonable, even though not necessarily entry-level. But that was several years after OS/2 was introduced. IIRC even OS/2 2.0 hit the market in late 1990 or maybe 1991, not long after Windows 3.0. At that time, even 2 MB RAM was a moderately high-end configuration. – a CVn – 2018-10-08T12:58:39.713

@MichaelKjörling you make a good point. Warp is, in fact, what I used. – RonJohn – 2018-10-08T13:09:19.820

5

The "classic" Macintosh OS, as well as a lot of application software, made heavy use of what Inside Macintosh refers to as "Handles". Instead of keeping a pointer to a region of memory, application software would keep a pointer to a Master Record, whose first item was a pointer to a region of memory. If code was, for example, using a region of memory to hold a 32-bit length followed by a sequence of short values, and wanted to compute the total of all such values, it could do something like:

long addHandleContents(long **h)
{
  long n = **h;                    /* 32-bit element count at the start of the block */
  long total = 0;
  short *p = (short*)((*h)+1);     /* the shorts follow immediately after the count */
  while(n--)
    total += *p++;
  return total;
}

Any time code performed any system call which might need to allocate memory, it would have to allow for the possibility that the storage associated with any handle that wasn't locked might get relocated. Typically, code would accommodate that possibility by either locking the handles before making such system calls, or by re-fetching the addresses of handles' memory blocks after making such calls. In cases where there were no such system calls (e.g. the code above), being able to access the storage associated with handles directly made things more efficient than would otherwise have been possible.
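
Here is a rough sketch (mine, not from the answer) of those two conventions, reusing addHandleContents() from the snippet above. HLock() and HUnlock() are the real Memory Manager calls; SomeToolboxCallThatMayMoveMemory() and the <Memory.h> header name are assumptions for illustration.

#include <Memory.h>                        /* classic Toolbox Memory Manager, assumed */

void SomeToolboxCallThatMayMoveMemory(void);   /* hypothetical */

long addViaLocking(long **h)
{
  long total = 0;
  long n;
  short *p;

  HLock((Handle)h);                        /* the block can no longer be relocated */
  n = **h;
  p = (short*)((*h)+1);                    /* raw pointer, stays valid while locked */
  while (n--) {
    SomeToolboxCallThatMayMoveMemory();    /* would invalidate p if the handle were unlocked */
    total += *p++;
  }
  HUnlock((Handle)h);                      /* let the Memory Manager move it again */
  return total;
}

long addViaRefetch(long **h)
{
  SomeToolboxCallThatMayMoveMemory();
  /* Not locked: *h may now point somewhere else, so derive any raw pointer
     from the master pointer only after the call has returned. */
  return addHandleContents(h);
}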

Some systems avoid such issues by allocating blocks of storage whose address will be fixed throughout their lifetime. Some (like .NET) require that programs contain metadata which would allow the memory manager to, at any time, find all the references that exist anywhere in the universe to any unpinned object that will ever be accessed. The Macintosh approach allowed better memory utilization than fixed-address allocators, and didn't need all the extra metadata required of .NET, but requires that programs know when the system might need to relocate storage. While limited preemptive multitasking might have been possible even under classic MacOS, any operation which would need to relocate any handle associated with an application would have to block until that application reached a "safe state", thus forfeiting much of the benefit of pre-emptive multitasking.

supercat


3The real issue behind handles was that the original Macs only had 128KB of RAM. If they had initially released them with more memory (perhaps 256KB), then they could have used a 32-bit flat address model, which is what the Atari ST did 3 years later with 512KB RAM minimum. The IBM PC/AT, also released in 1984, had a minimum of 256KB of RAM, and most were sold with much more than that (max was 16MB, but that was rare in 1984). Using pre-emption with a flat address space would be limited though. – rcgldr – 2018-09-27T03:47:12.507

@rcgldr: The use of handles is a major win in many regards, even when more memory is available, save only for the difficulties with supporting pre-emptive multitasking. While many of the 32,767-byte limits are a result of managers like TextEdit using 16-bit indices, that's a separate issue from multi-tasking. – supercat – 2018-09-27T03:49:00.753

2Maybe, but Atari ST didn't use handles, and I don't think Amiga did either, so they weren't needed. I worked at a tape backup company and worked on backup apps for the Mac, and the handles (and 32KB "segments") were a nuisance (along with the blind mode transfers needed to compensate for no DMA). Program loading with a flat address space can be done using a fixup table (the loader fixes all the 32 bit pointers in program based on that table), but the programs themselves would have to keep track of allocated memory and the OS would have to free memory upon program exit. – rcgldr – 2018-09-27T03:56:22.127

@rcgldr: I never did any Amiga programming, but my friend did, and he said fragmentation was a real problem. Using handles allows the Macintosh to relocate memory blocks to perform defragmentation when necessary. – supercat – 2018-09-27T04:33:04.513

Using handles for memory management in the initial MacOS was a design decision that Apple would regret for a long time. Had they shipped the Mac with maybe 256kB instead, it would have come much cheaper for them in the long run. You can always stop multitasking or lock handles, though. The Sinclair QL came with the same amount of memory; its OS didn't use handles (but rather register-relative addressing for chunks up to 32kBytes, while larger chunks were fixed) to implement floatable regions. (But, admittedly, it didn't come with a sophisticated windowing system initially.) – tofro – 2018-09-27T06:42:22.547

@tofro: Perhaps such regret comes from not having experience with the consequences of the alternative? Memory fragmentation can be a real problem for sizes a fair bit beyond 128K. – supercat – 2018-09-27T13:52:57.167

The main problem with the handles is that initially it was apparent to the programmer how to convert a handle to a pointer that was faster to use (by simply dereferencing it) - so everyone did so, and they ended up with a huge amount of software that circumvented the handles (which mostly worked, except when it didn't, and made preemption a nightmare). And I disagree with memory fragmentation being a real problem with application and data sizes of the early Mac, had they provided at least marginal headroom - after all, it wasn't a problem on other computers of the time (like the Amiga or QL) – tofro – 2018-09-27T14:20:00.447

@tofro: According to my friend Ben, fragmentation was a problem he kept running into on the Amiga. I don't know about other platforms. Apple published around 1990 a guide with IIRC seven sections titled, e.g., "The Resource Manager is not a database", "TextEdit is not a word processor", and "Don't be a faker"--the latter referring to programs that assume that they can construct "fake" handles by passing the addresses of pointers. If you look at .NET or Java, their behavioral model is much closer to that of handles than to straight pointers, with the exception that... – supercat – 2018-09-27T14:43:03.877

...handles were resizable and managed objects in those frameworks aren't. The ability to resize handles was definitely useful, but the one killer aspect of it is that nearly all ways of allowing thread-safe resizing will degrade performance in most other usage scenarios. – supercat – 2018-09-27T14:44:16.600

@supercat - fragmentation is usually an issue for applications that run for extended periods of time. In some non-virtual environments, such as embedded applications, pre-allocated memory pools are used to avoid fragmentation. For more advanced .NET applications, it's common to use multiple processes with load sharing so that while one process is going through a defragment cycle, the other processes handle the load. From what I've read, Microsoft Azure implements load sharing in a manner that a programmer doesn't have to deal with it. – rcgldr – 2018-09-28T02:33:57.530

2

In the case of the Macintosh, Apple kept promising preemptive multitasking in the "next" release, going back as far as version 5 or 6 of the Mac OS, but it never happened until OS X. A version of A/UX (which is preemptive) for later Macintosh systems was released in 1988. Another gripe about Macintosh systems was that they didn't have any DMA in the early years, although PCs and even some CP/M systems had it years before the first Macintosh in 1984, and third-party vendors made DMA (bus-mastering) SCSI cards for later Macintoshes long before Apple included that feature in Macs.

For PC compatibles, there was OS/2, a version of which could run on 80286 systems, but many consumers got the impression that OS/2 would only run on IBM's PS/2 systems (and vice versa for some consumers), and not the 386 EISA PC clones.

For Windows, NT 3.1 was released in 1993, but NT didn't become popular until NT 4.0 in 1996, and at the time it wasn't as popular as Windows 95 or 98. Windows 98 was followed by Windows ME; Windows NT 4.0 was followed by Windows 2000, XP, Vista, 7 and 10. Going back to 1993, Windows 3.11 wasn't preemptive, but did include support for the 80386's 32-bit flat address mode via the Win32s extension or WinMem32 (only the Watcom C/C++ 10.0 compiler supported WinMem32 as one of its memory models at that time).

rcgldr


Are you really referring to IBM AIX on the Apple Network Server systems (a PowerMac 9500-derived design), or are you conflating it with A/UX on the Mac II and Quadra series? – Chris Hanson – 2018-09-27T01:34:00.820

@ChrisHanson - Thanks for noticing that. It should have been A/UX - I corrected my answer. – rcgldr – 2018-09-27T03:25:53.197

A/UX itself is preemptive and runs the graphical Mac environment as a UNIX task. This environment itself runs preemptively alongside other UNIX tasks. The Macintosh applications within this container still run cooperatively (and without memory protection and other such features). – PoC – 2018-09-27T20:15:06.587

2

You underestimate the power of existing programs.

At that time everybody was using MS-DOS/PC-DOS on PCs, which for all practical purposes was a program loader for running one program at a time; that program was then in full control of the whole machine.

For multitasking to be generally interesting you would need to be able to run multiple DOS programs simultaneously, or be restricted to a few select programs. It was not until the 386 was in widespread use that this could be done well, and that users could see the utility of staying in Windows to run multiple programs at the same time.

(Until then Windows was typically just something you started to fire up an application, and quit again when you needed to do something else - this changed with Windows 95)

Thorbjørn Ravn Andersen


To what event does “until then” refer? The 386 was released in 1985, the Netscape Navigator was released in 1994. We can safely assume that within that decade, Windows was used for other purposes than “to fire up Netscape to go online and quit again”. – Holger – 2018-11-12T09:04:52.717

1

The PC platform did not get off its DOS-compatibility crutch. While the 80286 had segmented protected address modes, they were incompatible with the "real address mode" the CPU started in, and since Intel envisioned real mode as interesting only for booting, switching to protected mode was not reversible (the "triple fault" as a quick way to get back to real mode was more an accidental discovery than a planned feature). Real mode bogged the system down to somewhat more than 1 MB of memory (expanded by the chipset via EMM memory into what the processor could have addressed perfectly fine in protected mode), and running multiple applications on top of a windowing system was not much of a priority given those large constraints.

And the operating system basis, DOS, did not even do cooperative multitasking: pipes, obvious candidates for that, were implemented by writing the output of the first program to disk and then running the second program with input from the temporary file created by the first. This was actually worse than what MP/M (the successor of CP/M, the "inspiration" for DOS) could do.
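
As a sketch of what that means in practice (my illustration, not from the answer), running "prog1 | prog2" under DOS was roughly equivalent to the sequence below: no concurrency at all, just a temporary file. The file name used here is made up; under a real multitasking OS the two programs would run concurrently, connected by an actual pipe.

#include <stdio.h>
#include <stdlib.h>

int run_pipe(void)
{
    int rc = system("prog1 > PIPE0.TMP");  /* run the producer to completion first */
    if (rc == 0)
        rc = system("prog2 < PIPE0.TMP");  /* only then run the consumer */
    remove("PIPE0.TMP");                   /* delete the temporary file */
    return rc;
}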

While there were some multitasking systems built off the protected mode of the 80286 (like Xenix and Eumel/Elan), those were not mainstream.

On the Apple side, Motorola processors did not bother a lot with memory protection. While a separate MMU was supported from the start, it was optional and consequently rarely available except for machines explicitly intended for running UNIX. The 68010 was actually the first CPU that saved enough information at a page fault to restart instructions: previous versions were not suitable for running copy-on-write schemes and similar demand-paging features.

However, the 68012 with a larger physical address space and the 68020 with 32-bit data buses throughout and quite extended addressing modes still did not come with a built-in MMU. Only with the 68030 did the MMU become an on-chip feature (and with the 68040, the mathematical coprocessor, at least to a good degree). Both were considered mostly overkill for home computers.

Intel was faster with the 80386, which had a 32-bit mode (the 68000 had basically been 32-bit from the start) and was a fully virtualizable processor with a paged MMU. This was so completely overkill for the market Intel was actively pitching that I have no idea how the engineers managed to push this approach (rather than integrating a mathematical coprocessor, for example) through into silicon. Uptake was rather tepid at first, but it was what got Linux off the ground, after a few consumer-level UNIXes like "Interactive Unix" and the proprietary clone "Coherent" had come about.

Windows 95 eventually got to use the 80386 modes as well, but it required significant engineering and the definition of a Windows 3-like OS interface over the 32-bit protected mode. The consumer Windows versions were bogged down by history, and only with Windows XP did the more capable Windows NT line finally replace the consumer line completely.

Apple had in the meantime relied successfully on cooperative multitasking, relocatable code and relocatable memory allocation, but MacOS was getting long in the tooth. In fact, they were in dire straits, finally breaking free by basing Mac OS X on a BSD running atop the Mach microkernel. They managed to pull off this rather audacious move (under Jobs) while Microsoft spent decades getting its operating system through compatibility issues. Later on, the Mac also managed to switch to different CPUs (first PowerPC, later Intel and ARM) while Windows never managed to crawl away from binary compatibility with DOS systems.

So in a nutshell: preemptive multitasking needed additional hardware for a long time, multitasking at all was not really part of the DOS/Windows scheme for a long time, impressive built-in memory protection in consumer-level devices came late with the 80386 and even later with the 68030, and MacOS was rather successful with its cooperative multitasking schemes and almost left the jump too late.

user10725


Would the 68000 undo register updates if an instruction was aborted because of a bus fault, leaving the stacked program counter ready to retry the instruction? I thought that all kinds of bus faults were essentially disruptive. Was the problem only advanced forms of paging like copy-on-write? – supercat – 2018-09-29T18:58:32.350

@supercat No. The 68010 was the first processor in the family with the capability to restart instructions. That's why you couldn't do effective virtual memory with a 68000 – JeremyP – 2018-10-01T09:16:25.640

@JeremyP: I knew the 68010 was the first with the ability to interrupt an instruction and restart in the middle of it. If the 68000 can't even retry (from the beginning, as opposed to restarting in the middle) I think saying it can't support "copy-on-write" [from the answer as posted] understates the limitations. Note that if a chip handles memory aborts by resetting registers to their state before the start of the failed operation, that would suffice to handle most kinds of virtual memory without having to save any special extra context, with the only problems being... – supercat – 2018-10-01T16:11:33.537

...cases where it would matter whether writes get performed once or twice [if a write is split between pages, and the first half succeeds while the second half fails, another task takes control, notices that the first location was written and changes its value, and then the original operation is retried, the first half of the write would get performed a second time, possibly causing the second task to get confused]. – supercat – 2018-10-01T16:17:09.800

1

Mostly legacy reasons. Early affordable computers didn't have enough RAM to make multitasking worthwhile, so they used monotasking operating systems like DOS and MacOS.

In 1985 the Amiga was launched with a preemptive multitasking operating system, but it was a brand new system starting from scratch. Lack of business software meant it struggled to gain a foothold in offices; it was mostly sold for games and creative uses.

PCs and Macs had a lot of existing software written for their non-multitasking operating systems, and adding preemptive multitasking without breaking those apps was going to be very difficult. Before multitasking, many apps accessed hardware directly, didn't even consider things like shared filesystem access, and of course were not event-driven.

Microsoft took the path of running older apps in a partially virtualized DOS environment. MacOS introduced cooperative multitasking as an option and spent many years laying the groundwork for preemptive multitasking, so that when the time came the switchover was easier. MacOS also had less commercial software to support, particularly vertical apps, and was able to get most of the big ones updated.

user


0

Besides all the technical reasons, I personally think that a lot of company politics was involved.

In the 1980s, microprocessors like the 68000 and operating systems like Unix or OS-9 were there to build powerful multi-user machines, with the potential to become competitors to the established mainframe world.

What happened then?

IBM, earning lots of money from their mainframe branch, launched a personal computer based on a limited processor with a limited operating system that surely wouldn't support multi-user or pre-emptive multi-tasking. This machine would surely not compete with their mainframe world.

And all the IT world jumped onto this platform, no longer devoting resources to the more advanced processors and OSes. This prolonged the lifetime of the mainframe dinosaur world by at least a decade.

So, I think it was a very clever business decision of IBM to launch such a limited machine like the IBM PC. I never understood how they convinced the software industry to nearly exclusively support this platform and nearly forget about Unix for the next decade.

Ralf Kleberhoff


Actually, I don't think IBM was seriously concerned about any PC being a mainframe killer. But they likely were concerned with a PC being a MINI killer - e.g., System/34. I worked on a bunch of MP/M-86 -> Concurrent CP/M -> Concurrent DOS systems which could do exactly that. The 80386 was the real key to those systems as it made a huge difference in both memory (per user and total) and DOS compatibility, but even the 80286 based systems could run as credible multi-user, multi-tasking competing against IBM & other minicomputers with a much less expensive microcomputer box. – manassehkatz – 2018-09-30T02:06:52.520

1Concurrent CP/M-86 was a really nice system. The closest equivalent today in terms of usage is a non-X11 Linux system with multiple consoles on Ctrl-Alt-F1 to Ctrl-Alt-F7. – Thorbjørn Ravn Andersen – 2018-09-30T08:35:20.840

@Ralf Kleberhoff: From my somewhat blurred memories, IBM needed to get something running quite fast. They were afraid of losing future market share to competitors if they didn't jump onto the low-cost market as fast as possible, without investing too much money in case the plan failed. – PoC – 2018-09-30T10:37:46.997

At the time the IBM PC was launched, the most successful personal computers were all eight bit machines. Also, the 68000 was not suitable for multi-user systems and Unix was certainly not running on desktop PCs. – JeremyP – 2018-10-01T09:25:39.327

1

At the university in the mid-1980s we ran 68000-based machines under OS-9 in multi-user mode with up to four serial terminals. That wasn't mass-market mainstream, but it existed. The boxes were a bit smaller than the PC or AT desktop package. And the popular German computer magazine "mc" designed a 68000-based machine to run under OS-9 around 1984 (http://www.kultboy.com/magazin/2827/)

– Ralf Kleberhoff – 2018-10-01T19:30:06.243

@JeremyP - "the 68000 was not suitable for multi-user systems" -- what, you mean like this one?

– Jules – 2018-11-13T07:13:18.730

@Jules Are you really claiming the Sun 1 was a multi-user system? – JeremyP – 2018-11-13T09:43:57.483

@JeremyP - it ran V7 Unix, which is a multiuser OS; it had a 10MHz processor and supported 1MiB of RAM, which is enough to support multiuser applications. So yes, I'd say it was a multi-user system. – Jules – 2018-11-14T07:38:30.793

@Jules But are you seriously claiming it ran Unix v7 well enough for multiple users? Get two or three people on it at once and I think you'll soon come across some limitations. – JeremyP – 2018-11-14T09:44:10.637

@JeremyP Don't forget that back in the 80s, applications were a lot more compact than today. A 4.0MHz Z80 with 64 KB of RAM was enough for text processing, and accounting programs were even less demanding. – Ralf Kleberhoff – 2018-11-16T19:12:40.153

@RalfKleberhoff The absolute maximum addressable RAM on a 68000 was 4Mb which you would have to share amongst all the processes and there was no built in support for memory protection and virtual memory could not be done even externally (the 68000 could not restart an instruction after a bus error). All processes would have to share the same address space, which would quickly get fragmented. The Sun One was designed as a single user work station. – JeremyP – 2018-11-19T15:44:13.720

@JeremyP If you read the MC68000 manual, e.g. at https://www.nxp.com/docs/en/data-sheet/M68000UM.pdf, you'll see that the MC68000 had/has 24-bit external addresses, thus capable of addressing 16 MByte. But that doesn't really make a difference, as both 4 MByte as well as 16 MByte were HUGE back then.

– Ralf Kleberhoff – 2018-11-19T18:58:24.507

My mistake. Yes 16Mb. – JeremyP – 2018-11-20T14:07:13.013