So years ago I had won an eBay auction for 3 disks:
But pretty much everything I threw at it emulation-wise came up with NOTHING but green bars when trying to enter a virtual machine. I'd always thought it was a video ROM thing, but no matter which VGA-type ROM I put into Qemu it was always the same thing: green jail bars.
However, I tried it again on 86box, and YES it runs!
You can see VMs running, where they are in memory and all that other fun stuff.
And even better, you can run graphical PC programs on your advanced 80386 and seamlessly multitask them all, using the hotkey ALT+PRINTSCREEN to toggle between them. Surprisingly, creating and terminating VMs didn't really mess with overall system stability. I have to imagine that had this program had a 32-bit API, it would have killed OS/2 before it ever got a chance. Considering that version 1.2 is from 1988, there very well could have been a real chance.
It does have the ability for individual profiles to specify RAM, or even where and how to boot, and it has disk drivers for sharing files (think file locking). It also has the ability to boot from floppy, or even from ROM!
Indeed, there is a rather good review in PC Magazine, January 1988, that goes into many of its features and compares it to other contemporary multitaskers of the era.
The one big drawback is that there are no data exchange facilities, the one thing Windows/386 had to bridge the gap between MS-DOS & Windows applications.
So many products like VM/386 ended up finding their niches in attaching dumb terminals, turning 386-class machines into 'micro minis' without the power of Unix. It's even out of this environment that Citrix was born.
But there was so much potential here to be something so much larger; sadly that was not to come. Perhaps 1988 was just a little too early in the sense of GNU GCC/GAS/LD and some Xenix COFF help. The world would have been a lot stranger had Microsoft lost that second vital platform war.
With the pre-Christmas release of the Microsoft OS/2 betas 1.00, 1.01, 1.02, 1.03 & 1.05 on archive.org, and helping NCommander with an upcoming video, it seemed like a good place to start, not with OS/2 but rather with MS-DOS 4.0.
Microsoft started work on a multitasking version of MS-DOS in January 1983. At the time, it was internally called MS-DOS version 3.0. When a new version of the single-tasking MS-DOS was shipped under the name MS-DOS version 3.0, the multitasking version was renamed, internally, to MS-DOS version 4.0. A version of this product–a multitasking, real-mode only MS-DOS–was shipped as MS-DOS version 4.0. Because MS-DOS version 4.0 runs only in real mode, it can run on 8088 and 8086 machines as well as on 80286 machines. The limitations of the real mode environment make MS-DOS version 4.0 a specialized product. Although MS-DOS version 4.0 supports full preemptive multitasking, system memory is limited to the 640 KB available in real mode, with no swapping. This means that all processes have to fit into the single 640 KB memory area. Only one MS-DOS version 3.x compatible real mode application can be run; the other processes must be special MS-DOS version 4.0 processes that understand their environment and cooperate with the operating system to coexist peacefully with the single MS-DOS version 3.x real mode application.
Because of these restrictions, MS-DOS version 4.0 was not intended for general release, but as a platform for specific OEMs to support extended PC architectures. For example, a powerful telephone management system could be built into a PC by using special MS-DOS version 4.0 background processes to control the telephone equipment. The resulting machine could then be marketed as a “compatible MS-DOS 3 PC with a built-in superphone.” Although MS-DOS version 4.0 was released as a special OEM product, the project–now called MS-DOS version 5.0–continued. The goal was to take advantage of the protected mode of the 80286 to provide full general purpose multitasking without the limitations–as seen in MS-DOS version 4.0–of a real-mode only environment. Soon, Microsoft and IBM signed a Joint Development Agreement that provided for the design and development of MS-DOS version 5.0 (now called CP/DOS). The agreement is complex, but it basically provides for joint development and then subsequent joint ownership, with both companies holding full rights to the resulting product.
As the project neared completion, the marketing staffs looked at CP/DOS, nee DOS 5, nee DOS 4, nee DOS 3, and decided that it needed…you guessed it…a name change. As a result, the remainder of this book will discuss the design and function of an operating system called OS/2.
– Inside OS/2.
Although MS-DOS 4.00M disk images have been floating around for quite some time, either as a two-disk 360k set or a single 720k disk image, I don't think anyone (including me) really tore into it that much. It does have the ability to freeze DOS 3 programs, giving the illusion of running more than one. The session manager is pretty sparse, but hitting left ALT twice will pop it up, giving you the ability to toggle through programs with ease.
There are FDISK, FORMAT & SYS commands, making it straightforward to set up a hard disk and copy the files over; I didn't see any installer.
There is a PS command to show running processes. There is also DOSSIZE, to show the memory partitioning and how much is available. Although there is a SWAPPER program, I've been unable to get it to actually run.
Another interesting thing: if you run the Unix 'strings' command against all the EXEs you'll find the string:
C Library - (C)Copyright Microsoft Corp 1985
Implying that not only was DOS 4.00M a 'new' DOS, but that it was also written in C. No doubt this contributed to a larger file size than DOS 3; however, it would also give that holy grail of portability, at least to new CPU modes. Many files also have the names of their source files baked in, such as:
Okay, so far, so good. But we've all seen this before, and scratched this OS about this far, because what else can you do? It's not like there are any dev tools to do anything fun!
Well, the tool hidden in plain sight is LINK4, which in retrospect is specific to MS-DOS 4.00M.
Microsoft (R) 8086 Object Linker Version 4.01
Copyright (C) Microsoft Corp 1984, 1985. All rights reserved.
Object Modules [.OBJ]:
There is no SDK for MS-DOS 4.00M, but they were kind enough to leave the linker in place. A quick check of the Windows 1.01 SDK shows that it also includes LINK4:
Microsoft 8086 Object Linker
Version 4.00 (C) Copyright Microsoft Corp 1984, 1985
Object Modules [.OBJ]:
It appears, if the dates and versions are to be trusted, that they are of the same vintage (the Windows linker being slightly older), and that they both output an NE, or New Executable. So to start the experiment I created a simple hello world EXE from a simple:
#include <stdio.h>

void main(){
    printf("Hello from MSC 3\n");
}
To compile this I used Microsoft C 3.0 (more on why later), and used LINK4 to create an EXE:
C:\dos\msc3>cl /c hello.c
Microsoft C Compiler Version 3.00
(C)Copyright Microsoft Corp 1984 1985
hello.c
C:\dos\msc3>msdos dos4m\link4 hello.OBJ
Microsoft (R) 8086 Object Linker Version 4.01
Copyright (C) Microsoft Corp 1984, 1985. All rights reserved.
Run File [HELLO.EXE]:
List File [NUL.MAP]:
Libraries [.LIB]:
Definitions File [NUL.DEF]
Okay, everything looks fine so far. Attempting to run this under MS-DOS just results in the error:
Program too big to fit into memory
Well now that’s odd. Checking the EXE with the Linux ‘file’ command reveals:
file HELLO.EXE
HELLO.EXE: MS-DOS executable, NE (unknown OS 0) (EXE)
So obviously it’s a NE, but it is an older/unknown version to the file map database. There is no stub so I suppose that is why MS-DOS is getting confused.
Now let’s try MS-DOS 4.00M
Well now isn’t that interesting?!
Excited by the ability to create special MS-DOS 4.00M programs, I got my favorite vintage '87 Infocom interpreter, InfoTaskForce 87, and got it building with MSC 3.0. However, instead of using the MS-DOS 4.00M linker, I thought I should try the Windows 1.01 linker and libraries for the EXE:
NAME Infocom
DESCRIPTION 'Infocom 87 interpreter for Planetfall(83)'
DATA MULTIPLE
HEAPSIZE 1024 ; Must be non-zero to use Local memory manager
STACKSIZE 4096 ; Must be non-zero for SS == DS
; suggest 4k as minimum stacksize
SEGMENTS
_INIT PRELOAD MOVEABLE DISCARDABLE
One thing, to save you the horror: between MS-DOS 2 & 3 the way command line arguments are handled changed. I forget the details, but no matter what I tried I was unable to parse the command line or the environment in this setup. I suppose if I had documentation for the product there would be some hint as to what tools or setup to use. Instead, I took the easy way out and hard coded it to load Planetfall.
Unfortunately, this success would prove to be the exception to the rule. I took trek, converted it to K&R C, as Microsoft C 3.00 from 1985 is, well, old, and sadly it just won't run. Likewise, I took Hack 1.03, and although it runs on MS-DOS it will not run on MS-DOS 4.00M. I am sure there is some fundamental reason why it's not working, probably tied to creating a proper DEF file. I'm sure it was all written down somewhere, but I don't know where. And yes, I tried specifying floating point emulation either via library or inline, and it made no difference.
Looking at OS/2 1.00
Loading up the infamous $3,000 OS/2 1.00 beta, and hitting ctrl+escape you are greeted with session manager!
Notice the R for real mode, with the obvious implication that everything else is protected mode. Going one step further, on the excellent site pcjs.org there is the OS/2 beta SIZZLE, and although there are no OS/2 development bits on it, the directory DOS3TOOL reveals that the C compiler of this era, at least for MS-DOS, is MSC 3.0. Also included is our friend LINK4.
I create a simple def file containing the single word 'PROTMODE', which should give me my OS/2 binary.
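For reference, the entire hello.def is just that one line:

PROTMODE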
So let’s run that through hello world:
msdos sizzle\DOS3TOOL\link4 hello.OBJ,hello,,,hello.def;
Microsoft (R) Segmented-Executable Linker Version 5.00.21
Copyright (C) Microsoft Corp 1984, 1985, 1986. All rights reserved.
C:\dos\msc3>
However, attempting to run this, amazingly, just crashes.
No doubt it’s because the real-mode libc is using interrupt 21 calls, which OS/2 sure wouldn’t like. I’m pretty sure it requires an OS/2 libc that uses DOSCALLS.DLL to function, which I just don’t have any pre-release versions, nor any libc source code to really make it possible. And attempting to port one to OS/2 pre-releases just doesn’t seem so worth the time.
So for the heck of it I point the LIB variable to the OS/2 1.00 SDK’s libs and re-run the link:
C:\dos\msc3>msdos sizzle\DOS3TOOL\link4 hello.OBJ,hello.exe,hello.map,C:box0\x\MSC\LIB\slibc5.lib box0\x\LIB\DOSCALLS.LIB,hello.def;
Microsoft (R) Segmented-Executable Linker Version 5.00.21
Copyright (C) Microsoft Corp 1984, 1985, 1986. All rights reserved.
By default it’s trying to link in EM.LIB, SLIBFP.LIB, SLIBC.LIB. Trying to add them all in the command line link just hangs LINK4 maybe a response file is better suited. Anyways:
It does run on OS/2 1.00, which I guess isn’t surprising as the LINK4 & libraries are from/for this version.
As an interesting note, OS/2 links against the DOSCALLS library/DLL to interface with the OS, while MS-DOS 4.00M doesn't have a separate DLL; rather it's baked into IBMDOS.COM.
Noticeably absent is file I/O, no doubt leaving programs to use the standard int 21h interface to the kernel for that. This is all clearly in a primordial state, as the OS was going to evolve quite a bit more until it became OS/2. Unfortunately I have no idea how to link against or call into this; without any SDK it's impossible to say. And even then, is developing for a real mode OS worth the effort?
So what have we learned? LINK4, aka the MS-DOS 4.00M linker, probably should have been called LINKNE for the NE format. There are also references to it having its own virtual memory paging system, and to it being able to link larger EXEs than the traditional LINK command. Sadly I was unable to get any non-trivial programs running. I don't think it was a memory model thing, although the C compiler has issues with InfoTaskForce and the large memory model for some reason, while small & medium work fine. I'd like to think that DOS 4.00M could support massive EXEs much like Windows 1.01; however, despite being from the same company and using the same tools, the memory manager for DOS 4.00M & Windows is fundamentally different.
With all these exciting OS/2 betas now available, I'll have to take some more time to explore them in more detail.
But until then I thought this genesis of DOS 4.00M was worth the look.
For the longest time VOGONS was the place to get information about VDMSound, the Sound Blaster emulator for NTVDM that allowed a far richer gaming experience on NT, and about DOSBox, the ubiquitous PC/MS-DOS emulator that is simply everywhere, and of course where I was 'discovered' via 'Quake1 with WATTCP built with DJGPP on DOSBox' some 10+ years ago!
A long, long time ago, on a distant continent, I once interviewed at this small company called Citrix. It was some QA position; they didn't need programmers. I'd passed the interviews easily as I'd been programming serial TSRs, so I was hip to the 8250/16450. Citrix was an interesting but troubled company. They had incredible contacts and, more importantly, a deal from Microsoft that gave them access to OS/2. Sadly OS/2 1.0 had been a dud, and by the time OS/2 2.00 saw even a limited release, Microsoft had pulled out of OS/2. Citrix was a company that had lost twice in what should have been a big market: multi-user commodity systems.
Citrix Multiuser 1.0 was based on OS/2 1.21 and was limited to 16-bit protected mode apps. Citrix Multiuser 2.0 was based on the OS/2 2.00 Limited Availability version, which means that it cannot run 'GA' or General Availability programs. So no 32-bit programs here. It can run the same 16-bit protected mode applications; however, it can also run MS-DOS based programs. DOS/4GW programs run, so oddly enough the only real commercial stuff that can be run on it is MS-DOS software.
So here we were in 1994. Citrix had struck out twice, but this time it was going to be different, though the deal had to be re-struck again. I have no idea how they managed to secure this lucrative deal a second time, but Citrix was able to get source access to Windows NT after the 3.1 release to 3rd parties (when they got DEC involved). By now the world had gone Windows, Office 4.2 was a thing, and on the high-end side NT had SQL & SNA, and there was most definitely a market for multiuser systems, just as there had been from the old days of Unix, with the old mix of ASCII and networked graphical terminals.
The CD looks like a normal-ish NT 3.5 Server CD, although there are no MIPS or Alpha builds; as expected, everyone at Citrix would be working on and targeting the larger, established i386 market.
As you can see this is Beta build 101.
In the text mode setup it looks like a normal setup program. No doubt they had better things to do than skins, wallpapers and themes. HOWEVER there is a silent IDE bug that many people will no doubt run into:
Although it works okay in short bursts, the IDE driver will send a command 28 zero byte and then shut down the controller. From that point on it hangs. So that means we either need to generate all the floppy disk images (not going to happen!) or do the MS-DOS cross install. Yeah, I'm doing that instead.
When setting up under Qemu, use the AMD PCNET card. It’s much easier. I set it to Twisted Pair, and PCI bus. I’m not sure if those matter all that much, but it works for me!
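For reference, this is the sort of Qemu invocation I mean; the disk image name and memory size here are just placeholders, and the network backend (user, tap, etc.) is a separate choice from the card model:

qemu-system-i386 -m 64 -hda southbeach.img -nic user,model=pcnet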
If you are going to use Hyper-V, you’ll need the GF100 NIC driver, but use the Windows NT 3.1 driver, as this is technically a beta of NT 3.5 and the production 3.5 driver will blue screen.
I set the driver to autosense.
I also had both Qemu and Hyper-V bluescreen when doing DHCP. I don’t know what the issue is, and I’m too old to care as I don’t have source code to South Beach, and even if I did I’d probably regret posting fixes. So static IP address it is!
Honestly, the feeling in the office when I was there was that everyone was running around like crazy to QA the product and get ready to expand client support. While I was too much of an OS/2 fanboy at the time, they certainly knew that from then on everything was going to be about Windows NT.
Logging into Citrix, the first fun thing to do is to define some remote terminals using the WinStation app.
The first interesting thing is that async terminals are supported. Along with using either NetBIOS or Winsock protocols for connecting clients. Isn’t that great! TCP/IP built in!
Now for the crazy part. The only client that works is MS-DOS based. Yes, there is no Win16, no Win32, no Java, no protected mode DOS, no Linux, SunOS, Solaris, DG/UX, AIX, HP-UX, Xenix, UnixWare or SYSV i386 ABI client. ONLY real mode MS-DOS. And despite the connections being able to be ICA version 2 or 3, they are incompatible with the newer Windows-based clients from WinFrame.
This is the list of supported network protocols. Although I have Novell LAN WorkPlace, and have used it before for DESQview/X, I can't find it at the moment. Good luck finding FTP Software's TCP/IP; in retrospect it's a terrible name, and for all intents and purposes it has disappeared from the earth. So that leaves Microsoft TCP/IP. Now, all the LANMAN clients have it, although that isn't what it wants. It wants the MSCLIENT found in the \CLIENTS\MSCLIENT\NETSETUP path of a retail Windows NT Server 3.5 CD.
The DOS client is… very touchy. Deleting profiles can lead to a corrupted profile. Altering existing profiles can, well, also lead to a corrupted profile. I thought it was EMM386 causing issues, but it locks up on its own too.
Revenge of text mode UI
One interesting thing I found is that the text mode UI didn’t die. It’s still very much alive. As mentioned above you can connect async terminals, or even connect over the network!
Text mode does bring up a Program Manager analogue, but all my programs are graphical so it's kind of moot. Rest assured, though, text mode stuff works great.
So 32bit Fortran stuff works great, what about MS-DOS?
Here is MS-DOS / Qbasic editor. Running on Citrix South Beach! Great, what about OS/2?
And here we go running the f2c translator through Dungeon to get an OS/2 text mode app. As you can see forcedos reveals that this isn’t a bound executable, instead it only runs on the OS/2 subsystem.
And of course it looks better on the graphical client to mix and match them all.
Obviously somewhere post-South Beach the text mode stuff dropped off. I'll have to dig for more, but the idea of a real text mode NT is kind of neat. Sadly, South Beach doesn't seem to like VMware. I haven't dug too far, as I like WSLv2 so I'm stuck with Hyper-V. It may work fine on ESX; I haven't tested. Obviously you need the appropriate drivers; I'll try to update links later, if anyone cares.
No doubt Citrix was finally positioned to realize the dream of multiuser commodity hardware running commodity applications. Of course it wouldn't all be sunshine and rainbows, and no doubt there was a toll to be paid between Windows NT 4.0 and the road to Windows 2000. But back in 1994, things were looking good!
640K ought to be enough for anyone. Well, I've been poking around with an old beta that I had a long, long time ago, lost, found, lost again, recovered, then lost and found once more while looking for something entirely different. I'll spoil it later, but anyways, while messing around I needed an MS-DOS client, and it needs the MSNET TCP/IP stack, not to be confused with the LANMAN TCP/IP stack, and it doesn't work with the Windows for Workgroups stack either. So yes, I set up all 3, and of course found out that it really was the worst of the 3, the MSNET one.
Anyways, conventional memory is the memory below 1MB. Back when the PC was new, going from an Intel 8080 processor that could address a mere 64kb of RAM to the IBM PC that could address a whopping 1MB seemed unlimited. A decision was made to split that space into 640kb for user programs, reserving the upper 384kb for hardware.
And then something happened where drivers became user programs, and suddenly after loading a mouse driver, CD-ROM driver, audio driver and networking stack you didn't have enough memory available. Welcome to the living hell that was 1988-1995. In this virtual machine, although it has 64MB of RAM, the largest free space in MS-DOS with everything loaded is 366KB.
Microsoft Windows and DOS (among other products) started to include this fun tool MSD, Microsoft Diagnostics that would let you explore your memory, to see what was actually in use.
Imagine the absolute frustration here. 64MB of RAM, and yet there isn’t enough free to run a simple program. HOW ANNOYING!!!
Looking back at the MSD memory map, you may notice that some of it is marked available, and some possibly available. What does that mean? It means that no ROMs or device RAM are currently using that hardware-reserved memory. Sadly for 8088/80286 users, they either have no MMU, or one that only really works for protected mode segmentation. The 80386, however, had an MMU sophisticated enough to let you map whatever you wanted wherever you wanted, by booting MS-DOS into a protected mode environment and using V86 mode, via the included program EMM386.EXE. I'm sure plenty of others have covered this program, so I'm only going to take a quick glance at it.
If you look at a typical PC memory map you'll find that the region A000-AFFF is reserved for graphics memory. Since we are using VGA, that also means B000-B7FF (the old monochrome text area) is available. So for text mode programs we can open up all this RAM for smaller programs & drivers, along with the memory after the VGA BIOS up until the ROM BIOS of the computer; that's CC00-CFFF in my case, with D000-DFFF and E000-EFFF also being open. Obviously the fun comes in that not every PC has the same peripheral ROMs installed, so this isn't guaranteed to work in every instance.
In my case I don’t need EMS emulation at all I want to map it all to UMB or upper memory blocks for drivers and TSR’s. So I load emm386.exe into the config.sys like this:
I didn’t put in any exclusionary ranges as EMM386 figured it out all on it’s own in MSD, but you may need to specify ranges to leave alone.
This gives me 519KB of free conventional RAM. Oddly enough, a lot of the networking stack moved itself into UMBs without me having to do anything. It's probably more a function of the MSNET client I used, from a Windows NT 3.5 Server CD-ROM dated 1994, so I didn't have to play with the load high commands.
Back when the PCem forum was still up I had posted this config, keeping in mind that it was far more aggressive!
@ECHO OFF
PROMPT $p$g
PATH C:\DOS;c:\windows
SET TEMP=C:\TEMP
LH MSCDEX /D:CDROM01
LH SMARTDRV
LH IDLE
LH DOSKEY
LH SHARE
This got me a whopping 619Kb free in MS-DOS, along with 4MB of EMS, and 12MB XMS (on a 16MB config).
In the spirit of the old 'Linking the linker' (I'm not certain that this is the actual article, but it certainly reads the same way; didn't Tim have 2 blogs?), I went ahead and claimed the video memory for the heck of it.
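The trick, as far as I recall it, is just handing the unused video regions to EMM386 with an include range, something along these lines (text mode only, obviously, and your ranges may differ):

DEVICE=C:\DOS\EMM386.EXE NOEMS I=A000-B7FF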
Obviously you cannot run graphical programs, but 605kb of conventional RAM, with some 206Kb worth of network drivers loaded! Not bad. I could probably squeeze a 32kb EMS frame in there and get what would have been an incredible 1-2-3 machine for the era. But I'm not such a big Lotus 1-2-3 fan anymore.
As always it’s 2021, and normal people will glance and WTF, you have 64MB of ram how can you be fighting for kilobytes. Anyone that used MS-DOS based networking will cringe and look the other way. These were not happy times.
In other news, the client ran; sadly it's too new for the server.
One of the more interesting things about OS/2 1.x is its approach to straddling the bridge between old and new. It was a very common bridging tactic: ship a program that can simply run on both the older operating system and the new one. Naturally there are trade-offs; you can't fully take advantage of all the features on the new side, and you will be largely held back on the old side. But all is not lost: there is space for things that fit in the 'same but bigger' world, where you have an overlap between old and new.
For OS X, this was the Carbon era, for Windows this was the famous Win32s extensions, and for OS/2 it’s the Family API.
As a quick example, allocating memory under MS-DOS may be limited to 640kb, but under OS/2 you have access to so much more memory, the entire capacity of an IBM AT class machine. This also put OS/2 tools into a lot of MS-DOS developers' hands, as the early compilers and tools were built around the Family API and were able to run on so-called legacy environments. Although it was far better to run on OS/2, the advantage 30+ years later is that MS-DOS emulation is far more common and prevalent than OS/2 emulation, especially on non-x86 processors.
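For the curious, calling the Family API directly looks like any other OS/2 1.x API call; a minimal sketch (assuming the OS/2 1.x toolkit headers, and untested against these particular betas) might be:

#define INCL_DOSMEMMGR
#include <os2.h>
#include <stdio.h>

int main(void)
{
    SEL sel;

    /* Ask for a 32KB segment; under DOS the Family API shim satisfies this
       from conventional memory, under OS/2 it comes from the real kernel. */
    if (DosAllocSeg(32768, &sel, SEG_NONSHARED) == 0) {
        printf("Got segment selector %u\n", sel);
        DosFreeSeg(sel);
    }
    return 0;
}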
As an added bonus, you really don't have to mess with the API at all, as the LIBC will no doubt use it under the hood.
At any rate, using Microsoft C 6.00 (I can’t get the syntax right for 5.1 to save my life, I suspect I need to run it UNDER OS/2 to build for OS/2 properly), you can compile a typical stdio compliant program, and get an OS/2 executable.
The real fun comes from the BIND program, which converts that OS/2 executable into a full Family mode app.
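From memory, and so treat the exact switches as a sketch rather than gospel, the sequence is to compile and link for protected mode, then bind the result (BIND needs to find the Family API import libraries, via the LIB variable or on its command line):

cl /Lp hello.c
bind hello.exe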
And now on MS-DOS (Under OS/2) you can see very quickly that the OS/2 app won’t run, however the family mode one does!
So this is what let’s me run the older SDK tools as I’d simply forgotten about this great mode, letting you run programs in either environment.
Of course, the added fun is the 3rd party product, Phar Lap's 286|DOS-Extender, which provides some OS/2 services under MS-DOS, in addition to greater memory, and even DLLs! But that's for another story.
**EDIT** Oh, and another edit: here is how to make the OS/2 program 'window' compatible with a link-time definition file:
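The def file itself is tiny; the relevant bit is the WINDOWCOMPAT keyword on the NAME line (the program name here is just an example):

NAME HELLO WINDOWCOMPAT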
and then on the console:
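Something along these lines, re-linking with the def file, is what I mean; the libraries come from the LIB environment variable and your paths will differ:

link hello,,NUL,os2,hello.def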
And there we go: with some magical flags & a def file, it's now marked as being compatible with window mode. So no full-screen VIO tricks for you!
(This is a guest post by Antoni Sawicki aka Tenox)
This is a continuation of the vintage DOS/Windows hypervisors and emulators for Unix series. So far I have covered Merge, MergePro and Wabi. This time I'm taking a closer look at VP/ix. This early DOS hypervisor was developed by INTERACTIVE Systems Corporation (ISC). Initially released and included with their INTERACTIVE UNIX System V/386 operating system, it was also available for SCO Xenix 386, Sun 386i, and AT&T WGS (as Simul-Task 386). The last two versions were significantly enhanced to allow DOS/Windows graphical apps to run in windowed mode, which unfortunately is not the case with IX and Xenix, where graphical apps can only run on the console. VP/ix was released around the same time as Merge, in 1987, and was its main competitor. Both products are early hypervisors: they use Virtual 8086 mode and require a 386 or better to run on. This is in contrast to SoftPC, which is a full x86 emulator that can run on hosts with different CPUs/architectures.
VP/ix comes with ISC INTERACTIVE UNIX, which is covered in my previous article. The product was installed as part of the 50-floppy-disk set. You run it via an icon in the Looking Glass environment, or invoke it from a terminal or console via the "vpix" command.
VP/ix comes with it’s own custom version of MS-DOS 3.30. It allows a variety of cross unix/dos enhancements such as shared disks, automatic dos/unix file format conversion, listing unix attributes from dos as well as running unix commands from dos and vice versa. One of super cool features is that you can pipe output of DOS commands to Unix command, for example:
C:\> dir | wc -l
…will do a DOS dir and pipe it to the Unix wc command. You can also map Unix paths to DOS drives:
VP/ix has an interactive Menu invoked by SYSRQ + ‘m’ key:
You can load floppy disks, turn sound on/off, restart/quit, or run a Unix shell.
As for running normal text mode apps it’s business as usual:
Multiple instances of DOS can be launched and files shared between them, even if you are a different user on a different terminal or connected remotely. Remote terminals also support mapping DOS line-drawing characters to ASCII.
The same however cannot be said about graphical DOS or Windows apps. Under INTERACTIVE UNIX and Xenix you need to run them from the text mode console:
One day I will probably want to look at VP/ix on Sun 386i or AT&T WGS as they solved this problem. Newer versions of Interactive Unix (4.x) and VP/IX also need to be investigated.
According to the documentation, you can run Windows 3.x in real mode using win /r, however I did not have the patience to install this.
INTERACTIVE UNIX 3.0 with VP/ix preinstalled can be downloaded here for 86Box, or as a VBox OVA; however the latter does not have networking and the resolution is only 800×600. Log in as root/root. When importing the OVA in VBox you may need to disable import as VDI. For the 86Box version please read the readme on how to circumvent the licensing error.
(This is a guest post by Antoni Sawicki aka Tenox)
In a recent post about OpenServer and Merge I covered OpenServer 5 and Merge 5.3. Thanks to a comment from Uli I have learned about MergePro, which looks like a rebranded Win4Lin. Intrigued, I wanted to try it, especially since you can download it from the SCO ftp server as Uli pointed out.
I'm going to be using VMware Fusion on Mac, which is now free for personal use. They call it Fusion Player; however, unlike Workstation and Player, it has exactly the same features as the non-free Fusion version. For the OS I'm going to use Xinuos OpenServer 6 Definitive, however you can just as easily download OpenServer 6.0.0Ni from the ftp. I also have copies in my archive.
Installation is straightforward. You can skip licensing and use the evaluation license, however for convenience you can use the following keys:
If you are installing 6.0.0Ni you will also need the MP4 update. 6D2M1 is already patched.
To install MergePro you need to copy the package to the host OS and install it like so:
# pkgadd -d /tmp/MergePro-6.3.0-04f_pkgadd.stream
In the next step, mount a Windows 2000 or XP SP1/SP2 ISO and run:
# loadwinproCD
Once Windows is loaded you need to install it as a non-root user using:
$ installwinpro
After it’s installed, to run you type:
$ winpro
Unfortunately I failed to install Windows XP, with a variety of errors and blue screens. Windows 2000 works fine, however it feels a bit sluggish and mouse clicks don't always register. It looks like there is some sort of Windows Guest Additions being injected into the OS, so one would expect this to work just fine.
During startup I noticed that MergePro installs and uses a KQEMU kernel module. Also, this screen looks suspiciously familiar… where did I see this before?
The BIOS and VGABIOS definitely look stolen from Bochs. The HDD controllers look like Win4Lin. I'm not going to go into a deeper analysis of what MergePro is made of at this time. Looks like a topic for another article, or even better, your comments 🙂
Also, if you want to license the copy of Merge, use the following key:
MergePro 6.3.0f: SCO138318 / bhtecusg
Finally, for the lazy, here is a fully installed OVA; the password is root/root, and tenox/tenox for the regular user.
UPDATE: Thanks to reader Larbob we now know that you can install any guest OS on MergePro, not only Windows! Use installwinpro -c /dev/cdrom/cdrom1 -w winxppro to boot the CD-ROM without checking what OS is actually on it. Here is a screenshot of Solaris x86 being installed on MergePro on UnixWare:
So.. you could install UnixWare as a guest VM under OpenServer or vice versa??
The following is a guest post by NCommander of SoylentNews fame!
For those who’ve been long-time readers of SoylentNews, it’s not exactly a secret that I have a personal interest in retro computing and documenting the history and evolution of the Personal Computer. About three years ago, I ran a series of articles about restoring Xenix 2.2.3c, and I’m far overdue on writing a new one. For those who do programming work of any sort, you’ll also be familiar with “Hello World”, the first program most, if not all, programmers write in their careers.
A sample hello world program might look like the following:
#include <stdio.h>
int main() {
 printf("Hello world\n");
 return 0;
}
Recently, I was inspired to investigate the original HELLO.C for Windows 1.0, a 125 line behemoth that was talked about in hush tones. To that end, I recorded a video on YouTube that provides a look into the world of programming for Windows 1.0, and then testing the backward compatibility of Windows through to Windows 10.
Before we even get into the topic of HELLO.C though, there’s a fair bit to be said about these ancient versions of Windows. Windows 1.0, like all pre-95 versions, required DOS to be pre-installed. One quirk however with this specific version of Windows is that it blows up when run on anything later than DOS 3.3. Part of this is due to an internal version check which can be worked around with SETVER. However, even if this version check is bypassed, there are supposedly known issues with running COMMAND.COM. To reduce the number of potential headaches, I decided to simply install PC-DOS 3.3, and give Windows what it wants.
You might notice I didn’t say Microsoft DOS 3.3. The reason is that DOS didn’t exist as a standalone product at the time. Instead, system builders would license the DOS OEM Adaptation Kit and create their own DOS such as Compaq DOS 3.3. Given that PC-DOS was built for IBM’s own line of PCs, it’s generally considered the most “generic” version of the pre-DOS 5.0 versions, and this version was chosen for our base. However, due to its age, it has some quirks that would disappear with the later and more common DOS versions.
PC DOS 3.3 loaded just fine in VirtualBox and — with the single 720 KiB floppy being bootable — immediately dropped me to a command prompt. Likewise, FDISK and FORMAT were available to partition the hard drive for installation. Each individual partition is limited, however, to 32 MiB. Even at the time, this was somewhat constrained and Compaq DOS was the first (to the best of my knowledge) to remove this limitation. Running FORMAT C: /S created a bootable drive, but something oft-forgotten was that IBM actually provided an installation utility known as SELECT.
SELECT's obscurity primarily lies in its non-obvious name and usage, and in the fact that it's not actually needed to install DOS; it's sufficient to simply copy the files to the hard disk. However, SELECT does create CONFIG.SYS and AUTOEXEC.BAT, so it's handy to use. Compared to the later DOS setup, SELECT requires a relatively arcane invocation with the target installation folder, keyboard layout, and country-code entered as arguments, and it simply errors out if these are incorrect. Once the correct runes are typed, SELECT formats the target drive, copies DOS, and finishes installation.
Without much fanfare, the first hurdle was crossed, and we’re off to installing Windows.
Windows 1.0 Installation/Mouse Woes
With DOS installed, it was on to Windows. Compared to the minimalist SELECT command, Windows 1.0 comes with a dedicated installer and a simple text-based interface. This bit of polish was likely due to the fact that most users would be expected to install Windows themselves instead of having it pre-installed.
Another interesting quirk was that Windows could be installed to a second floppy disk due to the rarity of hard drives of the era, something that we would see later with Microsoft C 4.0. Installation went (mostly) smoothly, although it took me two tries to get a working install due to a typo. Typing WIN brought me to the rather spartan interface of Windows 1.0.
Although functional, what was missing was mouse support. Due to its age, Windows predates the mouse as a standard piece of equipment and predates the PS/2 mouse protocol; only serial and bus mice were supported out of the box. There are two ways to solve this problem:
The first, which is what I used, involves copying MOUSE.DRV from Windows 2.0 to the Windows 1.0 installation media, and then reinstalling, selecting the “Microsoft Mouse” option from the menu. Re-installation is required because WIN.COM is statically linked as part of installation with only the necessary drivers included; there is no option to change settings afterward. The SDK documentation details the static linking process, and how to run Windows in “slow mode” for driver development, but the end result is the same. If you want to reconfigure, you need to re-install.
The second option, which I was unaware of until after producing my video is to use the PS/2 release of Windows 1.0. Like DOS of the era, Windows was licensed to OEMs who could adapt it to their individual hardware. IBM did in fact do so for their then-new PS/2 line of computers, adding in PS/2 mouse support at the time. Despite being for the PS/2 line, this version of Windows is known to run on AT-compatible machines.
Regardless, the second hurdle had been passed, and I had a working mouse. This made exploring Windows 1.0 much easier.
The Windows 1.0 Experience
If you’re interested in trying Windows 1.0, I’d recommend heading over to PCjs.org and using their browser-based emulator to play with it as it already has working mouse support and doesn’t require acquiring 35 year old software. Likewise, there are numerous write-ups about this version, but I’d be remiss if I didn’t spend at least a little time talking about it, at least from a technical level.
Compared to even the slightly later Windows 2.0, Windows 1.0 is much closer to DOSSHELL than any other version of Windows, and is essentially a graphical bolt-on to DOS although through deep magic, it is capable of cooperative multitasking. This was done entirely with software trickery as Windows pre-dates the 80286, and ran on the original 8086. COMMAND.COM could be run as a text-based application, however, most DOS applications would launch a full-screen session and take control of the UI.
This is likely why Windows 1.0 has issues on later versions of DOS as it’s likely taking control of internal structures within DOS to perform borderline magic on a processor that had no concept of memory protection.
Another oddity is that this version of Windows doesn't actually have "windows" per se. Instead, applications are tiled, with only dialogue boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it's clear from the API that they were at least planned for at some point. Most notably, the CreateWindow() function call has arguments for x and y coordinates.
My best guess is Microsoft wished to avoid the wrath of Apple, which had gone on a legal warpath against any company that too closely copied the UI of the then-new Apple Macintosh. Compared to later versions, there are also almost no included applications. The most notable applications that were included are NOTEPAD, PAINT, WRITE, and CARDFILE.
While NOTEPAD is essentially unchanged from its modern version, Write could be best considered a stripped-down version of Word, and would remain a mainstay until Windows 95 where it was replaced with Wordpad. CARDFILE likewise was a digital Rolodex. CARDFILE remained part of the default install until Windows 3.1, and remained on the CD-ROM for 95, 98, and ME before disappearing entirely.
PAINT, on the other hand, is entirely different from the Paintbrush application that would become a mainstay. Specifically, it's limited to monochrome graphics, and files are saved in MSP format. Part of this is due to limitations of the Windows API of the era: for drawing bitmaps to the screen, Windows provided Device Independent Bitmaps, or DIBs. These had no concept of a palette and were limited to the 8 colors that Windows uses as part of the EGA palette. Color support appears to have been a late addition to Windows, and seemingly wasn't fully realized until Windows 3.0.
Paintbrush (and the later and confusingly-named Paint) was actually a third party application created by ZSoft which had DOS and Windows 1.0 versions. ZSoft Paintbrush was very similar to what shipped in Windows 3.0 and used a bit of technical trickery to take advantage of the full EGA palette.
With that quick look completed, let’s go back to actually getting to HELLO.C, and that involved getting the SDK installed.
The Windows SDK and Microsoft C 4.0
Getting the Windows SDK setup is something of an experience. Most of Microsoft’s documentation for this era has been lost, but the OS/2 Museum has scanned copies of some of the reference binders, and the second disk in the SDK has both a README file and an installation batch file that managed to have most of the necessary information needed.
Unlike later SDK versions, it was the responsibility of the programmer to provide a compiler. Officially, Microsoft supported the following tools:
Microsoft Macro Assembler (MASM) 4
Microsoft C 4.0 (not to be confused with MSC++4, or Visual C++)
Microsoft Pascal 3.3
Unofficially (and unconfirmed), there were versions of Borland C that could also be used, although this was untested, and appeared to not have been documented beyond some notes on USENET. More interestingly, all the above tools were compilers for DOS, and didn’t have any specific support for Windows. Instead, a replacement linker was shipped in the SDK that could create Windows 1.0 “NE” New Executables, an executable format that would also be used on early OS/2 before being replaced by Portable (PE) and Linear Executables (LX) respectively.
For the purposes of compiling HELLO.C, Microsoft C 4.0 was installed. Like Windows, MSC could be run from floppy disk, albeit with a lot of disk swapping. No installer is provided; instead, the surviving PDFs have several pages of COPY commands combined with edits to AUTOEXEC.BAT and CONFIG.SYS for hard drive installation. It was also at this point I installed SLED, a full-screen editor, as DOS 3.3 only shipped with EDLIN. EDIT wouldn't appear until DOS 5.0.
After much disk feeding and some troubleshooting, I managed to compile a quick and dirty Hello World program for DOS. One other interesting quirk of MSC 4.0 was it did not include a standalone assembler; MASM was a separate retail product at the time. With the compiler sorted, it was time for the SDK.
Fortunately, an installation script is provided. Like SELECT, it required listing out a bunch of folders, but otherwise was simple enough to use. For reasons that probably only made sense in 1985, both the script and the README file were on Disk 2, and not Disk 1. This was confirmed not to be a labeling error, as the script immediately asks for Disk 1 to be inserted.
The install script copies files from four of the seven disks before returning to a command line. Disk 5 contains the debug build of Windows, which is roughly equivalent to the checked builds of modern Windows. Disks 6 and 7 have sample code, including HELLO.C.
With the final hurdle passed, it wasn't too hard to get to a compiled HELLO.EXE.
Dissecting HELLO.C
I’m going to go through these at a high level, my annotated hello.c goes into much more detail on all these points.
General Notes
Now that we can build it, it's time to take a look at what actually makes up the nuts and bolts of a 16-bit Windows application. The first major difference, simply due to age, is that HELLO.C uses K&R C, on the basis of pre-dating ANSI C. It's also clear that certain conventions weren't commonplace yet: for example, windows.h lacks inclusion guards.
NEAR and FAR pointers
long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);
Oh boy, the bane of anyone coding in real mode: near and far pointers are a "feature" that many would simply like to forget. The difference is seemingly simple: a near pointer is nearly identical to a standard pointer in C, except it refers to memory within a known segment, while a far pointer is a pointer that includes the segment selector. Clear, right?
Yeah, I didn't think so. To actually understand what these are, we need to segue into the 8086's 20-bit memory map. Internally, the 8086 was a 16-bit processor, and thus could directly address 2^16 bytes of memory at a time, or 64 kilobytes in total. Various tricks were done to break the 16-bit memory barrier, such as bank switching or, in the case of the 8086, segmentation.
Instead of making all 20-bits directly accessible, memory pointers are divided into a selector which forms the base of a given pointer, and an offset from that base, allowing the full address space to be mapped. In effect, the 8086 gave four independent windows into system memory through the use of the Code Segment (CS), Data Segment (DS), Stack Segment (SS), and the Extra Segment (ES).
Near pointers thus are used in cases where data or a function call is in the same segment and only contain the offset; they’re functionally identical to normal C pointers within a given segment. Far pointers include both segment and offset, and the 8086 had special opcodes for using these. Of note is the far call, which automatically pushed and popped the code segment for jumping between locations in memory. This will be relevant later.
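As a concrete illustration (not from the original article), the 20-bit physical address is just the segment shifted left four bits plus the offset, which also means many different segment:offset pairs alias the same byte:

#include <stdio.h>

/* Real mode address translation: physical = segment * 16 + offset */
unsigned long phys(unsigned int seg, unsigned int off)
{
    return ((unsigned long)seg << 4) + (unsigned long)off;
}

int main(void)
{
    /* Both of these name the start of color text memory at 0xB8000 */
    printf("B800:0000 -> %05lX\n", phys(0xB800, 0x0000));
    printf("B000:8000 -> %05lX\n", phys(0xB000, 0x8000));
    return 0;
}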
HelloWndProc is a forward declaration for the Hello Window callback, a standard feature of Windows programming. Callback functions always had to be declared FAR as Windows would need to load the correct segment when jumping into application code from the task manager. Hence the far declaration. Windows 1.0 and 2.0, in addition, had other rules we’ll look at below.
WinMain Declaration:
int PASCAL WinMain( hInstance, hPrevInstance, lpszCmdLine, cmdShow )
HANDLE hInstance, hPrevInstance;
LPSTR lpszCmdLine;
int cmdShow;
PASCAL Calling Convention
Windows API functions are all declared with the PASCAL calling convention, also known as STDCALL on modern Windows. Under normal circumstances, the C programming language has a nominal calling convention (known as CDECL) which primarily relates to how the stack is cleaned up after a function call. In CDECL-declared functions, it's the responsibility of the calling function to clean the stack. This is necessary for variadic functions (aka functions that take a variable number of arguments) to work, as the callee won't know how many arguments were pushed onto the stack.
The downside to CDECL is that it requires additional prologue and epilogue instructions for each and every function call, thereby slowing down execution speed and increasing disk space requirements. Conversely, the PASCAL calling convention leaves cleanup to be performed by the called function, which usually only needs a single opcode to clean the stack at function end. It was likely due to execution and disk space concerns that Windows standardized on this convention (and in fact still uses it on 32-bit Windows).
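In source code this is just a keyword on the declaration; a hypothetical pair of prototypes (the printf-style function is made up) shows the difference:

/* CDECL (the C default): the caller cleans the stack, so variadic calls work */
int cdecl debug_printf(char *fmt, ...);

/* PASCAL: the callee cleans the stack with a single RET n - the Windows API style */
long FAR PASCAL HelloWndProc(HWND hWnd, unsigned message, WORD wParam, LONG lParam);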
hPrevInstance
if (!hPrevInstance) {
/* Call initialization procedure if this is the first instance */
if (!HelloInit( hInstance ))
return FALSE;
} else {
/* Copy data from previous instance */
GetInstanceData( hPrevInstance, (PSTR)szAppName, 10 );
GetInstanceData( hPrevInstance, (PSTR)szAbout, 10 );
GetInstanceData( hPrevInstance, (PSTR)szMessage, 15 );
GetInstanceData( hPrevInstance, (PSTR)&MessageLength, sizeof(int) );
}
hPrevInstance has been a vestigial organ in modern Windows for decades. It’s set to NULL on program start, and has no purpose in Win32. Of course, that doesn’t mean it was always meaningless. Applications on 16-bit Windows existed in a general soup of shared address space. Furthermore, Windows didn’t immediately reclaim memory that was marked unused. Applications thus could have pieces of themselves remain resident beyond the lifespan of the application.
hPrevInstance was a pointer to these previous instances. If an application still happened to have its resources registered to the Windows Resource Manager, it could reclaim them instead of having to load them fresh from disk. hPrevInstance was set to NULL if no previous instance was loaded, thereby instructing the application to reload everything it needs. Resources are registered with a global key so trying to register the same resource twice would lead to an initialization failure.
I’ve also gotten the impression that resources could be shared across applications although I haven’t explicitly confirmed this.
Local/Global Memory Allocations
NOTE: Mostly cribbed from Raymond Chen's blog, a great read for why Windows works the way it does.
Another concept that's essentially gone is that memory allocations were classified as either local to an application or global. Due to the segmented architecture, applications have multiple heaps: a local heap that is initialized with the program and lives in the local data segment, and a global heap that requires a far pointer to access.
Every executable and DLL got its own local heap, but global heaps could be shared across process boundaries and, as best I can tell, weren't automatically deallocated when a process ended. HEAPWALK could be used to see who allocated what and find leaks in the address space. It could also be combined with SHAKER, which rearranged blocks of memory in an attempt to shake loose bugs. This is similar to more modern-day tools like valgrind on Linux, or Microsoft's application testing tools.
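A hedged sketch of what that looked like in application code, using the 16-bit allocation calls (error handling omitted):

#include <windows.h>

void AllocDemo(void)
{
    /* Local heap: lives in the app's own data segment, hands back near pointers */
    HANDLE hLocal  = LocalAlloc(LMEM_MOVEABLE, 128);
    PSTR   pNear   = (PSTR)LocalLock(hLocal);

    /* Global heap: its own segment, only reachable through a far pointer */
    HANDLE hGlobal = GlobalAlloc(GMEM_MOVEABLE, 4096L);
    LPSTR  lpFar   = (LPSTR)GlobalLock(hGlobal);

    /* ... use pNear / lpFar ... */

    GlobalUnlock(hGlobal);
    GlobalFree(hGlobal);
    LocalUnlock(hLocal);
    LocalFree(hLocal);
}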
MakeProcInstance
Oh boy, this is a real stinker, and an entirely unnecessary one at that. MakeProcInstance didn't even make it to Windows 3.1, and its entire existence is because Microsoft forgot details of their own operating environment. To explain, we're going to need to dig a bit deeper into segmented mode programming.
MakeProcInstance’s purpose was to register a function suitable as a callback. Only functions that have been marked with MPI or declared as an EXPORT in the module file can be safely called across process boundaries. The reason for this is that Windows needs to register the Code Segment and Data Segment to a global store to make function calls safely. Remember, each application had its own local heap which lived in its own selector in DS.
In real mode, doing a CALL FAR to jump to a far pointer automatically pushed and popped the code segment as needed, but the data segment was left unchanged. As such, a mechanism was required to store the additional information needed to find the local heap. So far, this is sounding relatively reasonable.
The problem is that 16-bit Windows has this as an invariant: DS = SS …
If you're a real mode programmer, it might already be clear where I'm going with this. The Stack Segment selector is used to denote where in memory the stack lives. SS also gets pushed to the stack during a function call across process boundaries, along with the previous SP. You might begin to see why MakeProcInstance is entirely unnecessary.
Instead of needing a global registration system for function calls, an application could just look at the stack base pointer (bp) and retrieve the previous SS from there. Since SS = DS, the previous data segment was in fact saved and no registration is required, just a change to how Windows handles function epilogs and prologs. This was actually found by a third party, and a tool FixDS was released by Michael Geary that rewrote function code to do what I just described. Microsoft eventually incorporated his fix directly into Windows, and MakeProcInstance disappeared as a necessity.
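For completeness, the registration dance being described looked roughly like this in application code (the dialog template name and procedure here are illustrative, not from HELLO.C):

#include <windows.h>

BOOL FAR PASCAL AboutDlgProc(HWND, unsigned, WORD, LONG);

void ShowAbout(HANDLE hInstance, HWND hWndParent)
{
    /* Register the callback so Windows can fix up DS when calling back into us */
    FARPROC lpfnAbout = MakeProcInstance((FARPROC)AboutDlgProc, hInstance);
    DialogBox(hInstance, "AboutBox", hWndParent, lpfnAbout);
    FreeProcInstance(lpfnAbout);  /* pointless once FixDS-style prologs arrived */
}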
Other Oddities
From Raymond Chen’s blog and other sources, one interesting aspect of 16-bit Windows was it was actually designed with the possibility that applications would have their own address space, and there was talk that Windows would be ported to run on top of XENIX, Microsoft’s UNIX-based operating system. It’s unclear if OS/2’s Presentation Manager shared code with 16-bit Windows although several design aspects and API names were closely linked together.
From the design of 16-bit Windows and playing with it, what's clear is this was actually future-proofing for protected mode on the 80286, sometimes known as segmented protection mode. In the 286's protected mode, while the processor could now reach far more memory, the address space was still segmented into 64-kilobyte windows. The primary difference was that the segment selectors became logical instead of physical addresses.
Had the 80286's protected mode actually succeeded, protected mode Windows would have been essentially identical to 16-bit Windows due to how this processor worked. In truth, separate address spaces would have to wait for the 80386 and Windows NT to see the light of day, and this potential ability was never used. The 80386 both removed the 64-kilobyte limit and introduced a flat address space through paging, which brought the x86 processor more in line with other architectures.
Backwards Compatibility on Windows 3.1
While Microsoft’s backward compatibility is a thing of legend, in truth, it didn’t actually start existing until Windows 3.1 and later. Since Windows 1.0 and 2.0 applications ran in real mode, they could directly manipulate the hardware and perform operations that would crash under Protected Mode.
Microsoft originally released Windows/286 and Windows/386 to add support for the 80286 and 80386, functionality that would be merged together in Windows 3.0 as Standard Mode and 386 Enhanced Mode, along with legacy "Real Mode" support. Due to running parts of the operating system in protected mode, many of the tricks applications could perform would cause a General Protection Fault and simply fail. This wasn't seen as a problem, as early versions of Windows were not popular, and Microsoft actually dropped support for 1.x and 2.x applications in Windows 95.
Windows for Workgroups was installed in a fresh virtual machine, and HELLO.EXE, plus two more example applications, CARDFILE and FONTTEST were copied with it. Upon loading, Windows did not disappoint throwing up a compatibility warning right at the get-go.
Accepting the warning showed that all three applications ran fine, albeit with broken window placement due to 0,0 being passed into CreateWindow().
However, there's a bit more to explore here. The Windows 3.1 SDK included a utility known as MARK. MARK was used, as the name suggests, to mark legacy applications as being OK to run under Protected Mode. It could also enable the use of TrueType fonts, a feature introduced in Windows 3.1.
The effect is clear: HELLO.EXE now renders in a TrueType font. The reason TrueType fonts are not immediately enabled can be seen in FONTTEST, where the system typeface now overruns several dialog fields.
The question now was, can we go further?
35 Years Later …
As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same however was not true for Windows NT, which modern versions of Windows are based upon. However, running 16-bit applications is complicated by the fact that NTVDM is not available on 64-bit installations. As such, a fresh copy of Windows 10 32-bit was installed.
Some pain was suffered convincing Windows that I didn’t want to use a Microsoft account to sign in. Inserting the same floppy disk as used in the previous test, I double-clicked HELLO and Feature Installer popped up asking to install NTVDM. After letting NTVDM install, a second attempt shows, yes, it is possible to run Windows 1.x applications on Windows 10.
FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared. CARDFILE loaded but immediately died with an initialization error. I did try debugging the issue and found WinDbg at least has partial support for working with these ancient binaries, although the story of why CARDFILE dies will have to wait for another day.
In Closing …
I do hope you enjoyed this look at ancient Windows and HELLO.C. I’m happy to answer questions, and the next topic I’m likely going to cover is a more in-depth look at the differences between Windows 3.1 and Windows for Workgroups combined with demonstrating how networking worked in those versions.
Any feedback on either the article, or the video is welcome to help me improve my content in the future.
From the futon: I thought I'd publish "Free386", the DOS extender I had made some time ago, to GitHub.
If I was going to publish it anyway, I thought I should bundle NASM and alink along with it, so that anyone with a DOS environment could assemble it. That's where the luck ran out: I found a bug in alink when generating flat mode .exe/.com files. It's around here that I started to go crazy in a lot of ways (laughs).
Patching alink was done on Linux. I then used TOWNS-gcc to generate alink.exp, in the MP header format that TOWNS-gcc produces, and found a bug where the EXP file could not run on its own. If that wasn't corrected it would not be possible to distribute the development environment along with it, since most people don't have an EXP execution environment. When I checked, there was a bug in how memory was allocated: once memory capacity exceeded 8MB, memory space that doesn't actually exist was being handed out at the far end.
In fact, Free386 at the time had a lot of files that didn't work properly with it, and the instability that had been worrying me turned out to be this same mistake of allocating memory that wasn't there. To track it down I created a tool to dump memory maps and paging (it's included), and it was quite a hassle.
Now, with the memory allocation bug fixed, almost all generic DOS EXP files and a lot of TOWNS software work. However, Towns OS's biggest mystery is the area around the CoCo/NSD drivers, and software written in F-BASIC386 still does not start. When you get this far, you want to make it run.
So I started digging into the CoCo/NSD drivers. After a little research, I immediately found out the following:
CoCo.EXE resides in DOS memory (real memory).
NSDD resides in extended memory.
This means CoCo presumably loads the .nsd files into extended memory and manages that information. Now the question is how to get at that management information. Is there information resident in CoCo's memory, something like SYSINIT, I wondered?
For now, to investigate that area, I gave Free386 the ability to hook interrupts and dump register state before and after each int service executes. I analyzed \hcopy\deldrv.exp, which can remove a specified NSD driver, on the reasoning that it has to locate the NSD driver and its mechanism seemed simpler.
Information like this came out in turn. Looking at the changes in CoCo's resident memory and other changes in behavior, you can see that int 8Eh/AX=Cx0xh is a CoCo service. At the same time I made a resident .com file (included) that logs int 8Eh, ran the EXE under RUN386, looked at its behavior, and explored what the two had in common, working through the CoCo services from the perspective of "how would I design this mechanism if I were them?"
That traced down to a service, int 8Eh/AX=C103h, which provides driver resident information. Using this information, the NSD driver in extended memory could be correctly mapped into memory and set up on a selector. To verify, I ran deldrv.exp under Free386 and was able to uninstall an NSD driver correctly.
Great. End.
…… I wish it had really been solved that way.
Towns OS is an OS with a mysterious structure: even though there is a 32-bit native mode BIOS (TBIOS) for graphics processing, some services such as timers use the FM-R compatible 16-bit BIOS as-is. It has the incomprehensible structure of managing resources like a 16-bit timer BIOS while using them from the 32-bit program side.
In the worst cases, every time a real-mode resource such as a timer or keyboard needs processing or raises an interrupt, the CPU is switched to real mode; and if, during that real-mode BIOS processing, an interrupt handled by the BIOS such as the FM sound source or VSYNC occurs, it seems to switch back to protected mode once to service it.
An NSD driver called forRBIOS (for Real BIOS) is the intermediary for this incomprehensible structure. Just as a DOS extender acts as an intermediary between 32-bit programs and MS-DOS, forRBIOS acts as an intermediary between the real-mode BIOS and 32-bit programs.
In a RUN386 environment, when forRBIOS.NSD is loaded, interrupt vectors such as int 8Eh are rewritten so that the NSD driver gets the interrupt. But where does this information live? That was the mystery that remained. No matter how much I logged INTs up to the point the EXP runs, nothing looked like it, and looking through the memory of the resident CoCo there was no information that seemed to be it either.
Why not go after the resident NSD itself, I thought: I patched the entry point of the resident forRBIOS and tried the crude trick of having it fall into an infinite loop when its service routine was called, and that was bingo.
Finally, EXP files generated by F-BASIC386 and the like can now run. The analysis results are recorded in the docs. By the way, when you run a program that does not require forRBIOS (one written in High C, etc.), the whole thing is slower than when forRBIOS is initialized. I really think that's just how Towns OS is specced (laughs).
For the first time in more than a decade since development was suspended in 2001, a DOS extender compatible with RUN386 has been completed.