Dell Unix on 86Box

(This is a guest post by Antoni Sawicki aka Tenox)

In a few recent virtualization projects, such as QNX 1.2 (and the demo disk), Interactive Unix (also an older post) and Caldera (and an older post), I have tried the 86Box emulator. Unlike typical hypervisors, 86Box emulates a wide variety of video and network cards. Everything I tried simply worked out of the box, so I instantly fell in love. 86Box is now my daily driver for running old PC operating systems. I have decided to revisit some of my previously half-assed virtualization attempts with the awesome new emulator.

I previously virtualized Dell Unix back in 2012 using Bochs and QEMU. Even with community support, we struggled to get a decent video resolution and had to resort to SLIP for networking. Today let me reintroduce Dell Unix more properly, with 1024×768, 256-color video and proper networking using an emulated NIC.

The journey started with allsoft.img, which is an image of the OS with all packages installed from tape under Bochs. I disabled a few services in /etc/rc2.d, namely the mouse daemon (mse), sendmail, uucp, lp, etc.
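
The exact start-script names vary from install to install, but on an SVR4-style /etc/rc2.d layout a service is disabled simply by renaming its S-script so it no longer begins with S. A hedged sketch (the name patterns below are assumptions, check what your image actually has):

cd /etc/rc2.d
for f in S*mse* S*sendmail* S*uucp* S*lp*; do
    [ -f "$f" ] && mv "$f" "disabled.$f"   # rc2 only runs scripts whose names start with S
done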

For X Window I edited /usr/lib/X11/Xconfig, enabling the serial (Microsoft) mouse and the 1024×768 mode. I used the Tseng ET4000AX VGA, which is recognized by the Xmach server. This allowed X / xinit to run correctly. However, for startx to work you also need to edit /usr/lib/X11/xinit/xserverrc, as it seems to use a slightly different configuration. For a graphical login you can add something like x:3:respawn:/usr/bin/X11/xdm -nodaemon to /etc/inittab. However, I have noticed that when run from init, xdm does not seem to pick up the Dell-customized config files. Perhaps an rc startup script should be created instead, as sketched below.
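
If you go the rc-script route, a minimal sketch might look like this (the script name and the xdm pid-file path are assumptions on my part, not something taken from the Dell media):

#!/bin/sh
# hypothetical /etc/rc2.d/S99xdm -- start xdm at run level 2
case "$1" in
start)
        /usr/bin/X11/xdm
        ;;
stop)
        pid=`cat /usr/lib/X11/xdm/xdm-pid 2>/dev/null`   # pid-file location is an assumption
        [ -n "$pid" ] && kill "$pid"
        ;;
esac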

As a final note on X, the system has virtual consoles. Like on other SVR4 systems, you access them by pressing SYSRQ plus an F key: F1 is the text-mode console, F2 is the X server. This is my Dell Unix hero shot:

Dell Unix running under 86Box

Networking was even easier. Dell Unix supports WD8003 and 3C503 NICs. First I wanted to try the WD. In /etc/conf/pack.d/wdn/space.c you can find the predefined hardware probes. I picked one of the supported modes and the card was detected on the next reboot. That’s it, no kernel rebuild or any other configuration needed. I have not tried the 3C503 yet, but if you want it, the driver is named ie6. For TCP/IP configuration you set your IP address in /etc/hosts and the gateway in the /etc/inet/rc.inet file.
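
A hedged sketch of that configuration (the hostname, addresses and the exact rc.inet route syntax are assumptions; mirror whatever entries the files already contain):

# give this machine a name and address in /etc/hosts
echo "192.168.1.50   dellunix" >> /etc/hosts
# then in /etc/inet/rc.inet point the default route at your gateway, e.g. a line like:
#   /usr/sbin/route add default 192.168.1.1 1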

I was able to quickly compile Mosaic, which curiously had Makefile settings for Dell Unix. I took it for a spin on the web with the help of WRP:

One could probably compile a more recent version of Mosaic with PNG support, or maybe a more recent browser altogether.

The system comes with a bunch of open-source software in /usr/dell, however surprisingly there is no bash or even gzip. I have compiled some essentials; they are available here and as a /usr/local tarball.

For the lazy, as usual you can get a complete OS image for 86Box here. Make sure to attach pcap to your local network interface and set the IP address / gateway / DNS server accordingly.

If you port some cool software or find any interesting gems in Dell Unix please comment!

Have fun with virtualization!

Update: I have been looking at the contents of various distribution media for Dell Unix that have surfaced here and there. On a DAT tape I bought on eBay a few years back I found this file:

Whoa! Of course I want to install all of it! This is how FrameMaker 3.0 looks on Dell Unix:

I have updated the disk image for 86Box to have this included. You can run demo mode of FrameMaker by executing /usr/frame/bin/demomaker. I also imagine that this can be installed on pretty much any x86 SVR4 and above, maybe even Linux. If anyone has a license code / serial number please let me know!

Fun with OpenServer 6 and MergePro

(This is a guest post by Antoni Sawicki aka Tenox)

In a recent post about OpenServer and Merge I covered OpenServer 5 and Merge 5.3. Thanks to a comment from Uli I have learned about MergePro, which looks like a rebranded Win4Lin. Intrigued, I wanted to try it, especially since you can download it from the SCO ftp server, as Uli pointed out.

I’m going to be using VMware Fusion on Mac, which is now free for personal use. They call it Fusion Player, however unlike Workstation and Player, it has exactly the same features as the non-free Fusion version. For the OS I’m going to use Xinuos OpenServer 6 Definitive, however you can easily download OpenServer 6.0.0Ni from the ftp server. I also have copies in my archive.

Installation is straightforward. You can skip licensing and use the evaluation license, however for convenience you can use the following keys:

Xinuos OpenServer 6D2M1: SCO053269 / ejcaagmy
SCO OpenServer 6.0.0Ni: SCO398943 / ysloudwl

If you are installing 6.0.0Ni you will also need the MP4 update. 6D2M1 is already patched.

To install MergePro you need to copy this package to the host OS and install it like so:

# pkgadd -d /tmp/MergePro-6.3.0-04f_pkgadd.stream

In the next step, mount a Windows 2000 or XP SP1/SP2 ISO and run:

# loadwinproCD

Once Windows is loaded you need to install it as a non-root user using:

$ installwinpro

After it’s installed, to run you type:

$ winpro

Unfortunately I failed to install Windows XP, with a variety of errors and blue screens. Windows 2000 works fine, however it feels a bit sluggish and mouse clicks don’t always register. It looks like some sort of Windows guest additions are being injected into the OS, so one would expect this to work just fine.

During startup I noticed that MergePro installs and uses the KQEMU kernel module. Also, this screen looks suspiciously familiar… where have I seen this before?

The BIOS and VGABIOS definitely look lifted from Bochs. The HDD controllers look like Win4Lin. I’m not going to go into a deeper analysis of what MergePro is made of at this time. Looks like a topic for another article, or even better – your comments 🙂

Also, if you want to license your copy of Merge, use the following key:

MergePro 6.3.0f: SCO138318 / bhtecusg

Finally, for the lazy, here is a fully installed OVA; the password is root/root, and tenox/tenox for the regular user.

UPDATE: Thanks to reader Larbob we now know that you can install any guest OS on MergePro, not only Windows! Use installwinpro -c /dev/cdrom/cdrom1 -w winxppro to boot the CD-ROM without checking what OS is actually on it. Here is a screenshot of Solaris x86 being installed on MergePro on UnixWare:

So.. you could install UnixWare as a guest VM under OpenServer or vice versa??

Thank you!

Fun with Caldera WABI

(This is a guest post by Antoni Sawicki / Tenox)

In the previous post about SCO Merge I briefly mentioned WABI, which is a Windows ABI emulator for Unix. Initially released by Sun Microsystems, it’s believed to have come with the acquisition of Interactive Systems Corp (ISC) and Interactive UNIX. It was available for SPARC, x86 and PowerPC Solaris as well as IBM AIX. Around 1997 it was released for x86 Linux by Caldera. This article will focus on Caldera’s version specifically.

Although it’s entirely possible to install WABI on another RPM-based distribution such as Red Hat, I’m a purist and wanted to try it on Caldera Open Linux. The install is pretty straightforward: you mount the ISO file and run the install script. In the next step you need to install an update to version 2.2D. This is done by replacing /opt/wabi/bin/wabiprog with the extracted version of this file. Thanks to the readers of this blog post for sharing these.
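
A hedged sketch of those steps (the ISO file name, mount point and installer script name are assumptions; the wabiprog path is the one mentioned above):

# mount the WABI ISO and run its install script
mount -o loop -t iso9660 wabi.iso /mnt/cdrom
cd /mnt/cdrom && ./install
# 2.2D update: keep a backup, then drop in the replacement binary
cp /opt/wabi/bin/wabiprog /opt/wabi/bin/wabiprog.orig
cp /tmp/wabiprog-2.2D /opt/wabi/bin/wabiprog
chmod 755 /opt/wabi/bin/wabiprog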

When launched for the first time, you will be prompted to provide a copy of Windows 3.1. This is the main difference from WINE, which specifically does not require a copy of Windows to run apps. I have noticed that WABI is rather picky about lower- versus uppercase file names when installing software; there is a utility called wabimakelower to help there. You can also add an icon to one of the Caldera Linux / Looking Glass program groups.

Once you run it, it’s Windows 3.1 as usual:

WABI was designed for running productivity apps such as Office:

You can even run Visual Studio:

Curiously, WABI is not an MS-DOS emulator. In order to run DOS apps you need to install one separately and configure it in the WABI Control Panel:

For the lazy, a readily preinstalled version is available as an OVA and an 86Box image. The root password is “caldera”.

There is also a User Guide in PDF.

Have Fun with Virtualization!

Fun with OpenServer and Merge

(This is a guest post by Antoni Sawicki aka Tenox)

A friend and I were recently discussing the differences between WABI, WINE, WISE, Merge, VP/IX, FX!32 and SoftWindows. This article covers Merge specifically, a DOS/Windows emulator initially built for the AT&T 6300 Plus computer. It was later ported to UnixWare and OpenServer, and eventually served as the basis for Win4Lin. Later versions of Merge were built using the Microsoft WISE SDK, which allowed apps to run without a full copy of Windows, kind of like WINE. I will be running it on OpenServer 5 using VirtualBox, however one could get it going on UnixWare and under any other hypervisor just as easily.

For VBox/OSR5, when creating a VM, make it the Other/Other type and give it 256 MB to 1 GB of RAM and a 4 GB HDD. Once the VM is created, go to Settings and change the network adapter to Intel PRO/1000 MT Desktop and the attachment to Bridged mode. For some reason I could not get DHCP working out of the box. Also, under Display, change the graphics controller to VMSVGA.
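
If you prefer scripting the VM creation, roughly the same settings can be applied with VBoxManage. This is only a sketch (the VM name “OSR5” and the host interface “en0” are placeholders); I set mine up through the GUI:

VBoxManage createvm --name OSR5 --ostype Other --register
VBoxManage modifyvm OSR5 --memory 512 --nictype1 82540EM --nic1 bridged \
    --bridgeadapter1 en0 --graphicscontroller vmsvga
VBoxManage createmedium disk --filename OSR5.vdi --size 4096
VBoxManage storagectl OSR5 --name IDE --add ide
VBoxManage storageattach OSR5 --storagectl IDE --port 0 --device 0 --type hdd --medium OSR5.vdi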

One can get the last “real” OSR5 from this link. There are also newer Xinuos versions, specifically targeted at VMware, for example this one.

Boot and go through the prompts as normal. At some point you will be stopped by a lovely prompt for license number and code:

Enter SCO043568 / pnhohvqm to get past this.

Watch out for this screen:

Don’t worry about not being able to get the NIC detected at this point. Leave it as Deferred for now. You need to install the MP5 update and a driver update for this to work. This will be covered later.

Select some decent resolution for VESA SVGA:

Also select PS/2 Mouse:

The rest should just fly through on autopilot. Once the system boots, log in as root with the password you set.

First you will need to install the MP5 update. Download the ISO file from this link and attach it to your hypervisor. Open a terminal and type “custom” to install software, or double-click that fancy “Software Manager” icon on the desktop. Under Software click Install New…, select this host, then select your attached CD-ROM.

You will need to install Maintenance Pack 5 and GFX / NIC Drivers:

Make sure to hit install twice, once for each of these items, as they cannot be selected together. Once complete you will of course need to reboot.

After the reboot you should be able to add and configure the NIC. You will need to either run “scoadmin” or go to System Administration – Networks – Network Configuration Manager. Add a new LAN adapter. The Intel PRO/1000 should be detected automatically. I could never get DHCP to work and just used a static IP config there. Make sure to OK the kernel re-link and reboot.

Installation of Merge is a little bit more complicated. The latest version can be downloaded from here. If you are installing under UnixWare then this is your folder. Transfer it to your OSR5 VM via browser, ftp, samba, an iso file or however you like. Open a terminal, go to the directory with the cpio file and run:

cpio -icv -I osr5_merge5323a_vol.cpio

This will produce a bunch of VOL* files. These are installed with “custom” as well. However, instead of cdrom you select Media Images and point it to the directory with the extracted VOL files:

There should be an option to install Merge.

You will also be greeted by a lovely license code prompt. Enter SCO837369 / bhtepkxy to get through. You will need to reboot again.

After login there will be a new folder on the desktop with Merge tools:

Root is not allowed to run it, but you can pre-install Windows as root. To do so, go to Merge Setup and open System Wide Administration. You will find a button to Load Windows CD. You can just mount any bootable Windows 9x CD-ROM in your hypervisor and Merge will copy it for you.

There will be a prompt for network configuration. I opted for the WinSock option, which is user-mode emulation; it’s enough to get a web browser going. If you need to use SMB/CIFS, open network shares, etc., you will need the bridged mode with an extra IP address for the guest.

Finally, you will need to create a different user and log in as that user to get this thing running.

This is the final product, with 4 levels of inception:

For the lazy, of course, fully installed OVAs are provided: one with a VBox NIC and one with a VMware NIC. Passwords are root/root and tenox/tenox. Note that these images have a static IP address of 192.168.1.111.

Have fun with virtualization!

Update: Article about OpenServer 6 and MergePro

Virtualization Challenge IV – QNX 1.2

(This is a guest post by Antoni Sawicki aka Tenox)

This is a Virtualization Challenge: a competition to virtualize an OS inside an emulator/hypervisor. (Previously 1 / 2 / 3)

This time the object of the competition is QNX version 1.2. A demo disk is covered here. This is the set of floppy disks:

As you can see, the boot disk is copy-protected. As such I have imaged these disks using both KryoFlux and SuperCard Pro. The magnetic flux stream images are available here. For verification I have converted the raw stream of the demo disk into a sector image using the HFE tool. The converted disk boots and works correctly in an emulator. The demo disk can also help with analyzing the boot process since it’s known to work.

The contest is to virtualize the OS, install it and provide a fully working hard disk image with the OS installed. Any emulator or method of your choice is acceptable as long as anyone can download and run it. The prize is $100 via PayPal and of course the fame! 🙂 The winner will be whoever comments on the article first with a verifiable working solution.

A bonus $50 prize will be awarded if you can patch the boot floppy disk so that it can be installed as if the copy protection was never there.

Good luck!!!

UPDATE: The competition has been won: QNX 1.2 Virtualized

UPDATE 2: QNX 1.2 challenge Act II – HDD Boot

UPDATE 3: Reverse-engineering QNX 1.2 to boot from HDD

QNX 1.1 Demo Disk

(This is a guest post by Antoni Sawicki aka Tenox)

Fresh from the oven, or rather Kryoflux dump – a QNX version 1.1 Demo Disk:

QNX 1.1 Demo Disk

I managed to boot it on 86Box:

QNX 1.1 booted on 86Box Emulator

For readers with more curiosity and time on their hands: please try it on different emulators and comment on what works and what doesn’t.

For the less curious, this is how the demo looks once you log in as the demo user:

QNX 1.1 Demo Menu

As the authors ask that as many copies of this disk be made as possible, here it is. Please download and spread it!

I also managed to dump the rest of QNX 1.2, including the boot disk, utils and even the C compiler. Unfortunately, the boot disk is copy-protected:

I have a raw stream dump made with KryoFlux as well as regular disk images. If you are interested in circumventing the copy-protection check so the system can be run in an emulator, let me know in a comment. Perhaps it’s time for another Virtualization Challenge?

Previously:

Virtualizing QNX 2

QNX Windows – First Look

QNX 2.21 Arrived Today

The lost history of PReP: Windows NT 3.5x and the RS/6000 40p

The following is a guest post by PA8600/PA-RISC! Thanks for doing another great writeup on that PowerPC that was going to transform the industry!.. but didn’t.

The history of the PReP platform from IBM is quite interesting, not only because of its place in the history of Windows NT but also in the history of the PowerPC architecture in general. When the PowerPC platform was new, IBM (just like a few other vendors, notably DEC) had grand plans to replace the x86 PC clone market (which they helped create) with PowerPC. Of course, thanks to various factors such as Apple’s refusal to play along, the launch of the Pentium Pro CPU (and the later Itanium disaster), and high cost, this plan never ended up panning out. Later IBM PReP machines were designed for AIX and Linux use only, and they were sold as regular old RS/6000 computers.

Still, Microsoft, being Microsoft and willing to port their OS to literally anything, hedged their bets and made MIPS, PowerPC, and Alpha ports of Windows NT (along with a PC-98 release for Japan only). In the guest post I wrote about Solaris for PowerPC, I talked more about the history of IBM’s PReP platform, so go read that post if you want an initial rundown of PReP’s flaws and history. But I have learned a bit about the Windows NT port for PowerPC, and I discovered a rare version of it as well. By now everyone with a PReP machine (or PPC Thinkpad) has run Windows NT 4.0 on it, and if PReP machines are ever emulated it’s guaranteed this will be the second most-run OS on them, aside from AIX of course.

IBM also made a half-baked OS/2 port for PowerPC, and then there’s the previously mentioned Solaris port. All of these are rarities worth documenting. With how rare PReP machines are and their high prices on eBay when they do turn up for sale (or their tendency to be snapped up fast), I think it’s fitting to write perhaps the most in-depth look at PReP hardware that anyone has seen.

Windows NT 3.51: “The PowerPC Release”

It’s commonly accepted that Windows NT 3.51 was the first release for PowerPC hardware, and it was even called this within Microsoft. Featuring HALs for most of the early PReP machines, including the Moto PowerStack, the rare FirePower machines built for NT (which used Open Firmware), the Power Series 6050/70 (and maybe the 7248), and the unobtanium IBM 6030, it’s pretty much what you’d expect for a first release for PPC. It’s a polished, solid OS that’s arguably faster than NT4 on the same machine. Aside from the red boot screen (on my Weitek GPU), it’s pretty much Windows NT 3.51, but on the PowerPC. It’s like running NT 3.51 on MIPS or Alpha: interesting, but more software will likely run on 4 anyhow (especially on Alpha).

One interesting quirk of Windows NT for PowerPC is that it does not report the CPU type of your machine. It simply reports “PowerPC” and what machine you’re running it on. It does not tell you that you’re running it on a 601; it tells you that it’s running on an IBM-6015.

Unsurprisingly, Visual C++ 4 works on PowerPC Windows NT 3.51 as well. This is no shock: Visual C++ 4 was designed to work on 3.51 as well as NT 4.0. The same goes for many of the precompiled programs. One advantage Windows NT 3.51 offers over 4.0 is that it is simply faster on the PowerPC 601.

There’s not much else about Windows NT 3.51 for PowerPC, quirk-wise, that hasn’t been said elsewhere about NT 4. It runs in little-endian mode (one of the few PPC OSes to do so), it has 16-bit Windows emulation that’s slow, and it needs specific PReP machines to run. One interesting series of articles about the “behind the scenes” of the port worth reading is the Raymond Chen article series, which also discusses the quirks of programming a PowerPC 60x CPU in little-endian mode. It can be installed with the same ARC disks NT4 uses, and of course the same SMS and firmware disks will work. In fact, QEMU at one time was capable of booting the IBM firmware image from these disks.

Here’s something I found out from research, however. There was actually a limited release of Windows NT 3.5; it’s been dumped, and it is a real operating system. It also requires a very specific model of RS/6000 to work, one with an interesting history that gives it a unique place among the PReP machines. While I was unable to make it work in the end, I did discover and document a lot of interesting features of PReP machines.

Enter Sandalfoot: The IBM 7020/6015 (and demystifying PReP machines)

To understand the HCL and weirdness of Windows NT for PowerPC (and why it won’t run on Macs), we need to take a look at one such machine it runs on. This is my RS/6000 40p, a machine that was given several brand names by IBM and used as a development platform for PReP software and operating system ports. This is also perhaps the most historically significant RS/6000 model from the era. While it wasn’t the first PowerPC RS/6000 (that honor goes to the 250), it was the first to use the PCI and ISA busses, and it was a few months ahead of both the initial PCI PowerMacs and other PReP boxes. It’s also one of the few true bi-endian machines: just like other PReP machines, the MIPS Magnum, HP’s Integrity, and modern POWER8+ machines, it has OSes available for both endians.

In 1994 (presumably October 28, if the planned availability date is correct), IBM released the RS/6000 40p (announcement letter here, codenamed Sandalbow) and the Power Series 440 (codenamed Sandalfoot). Both are near-identical machines with different faceplates and boot screens. The RS/6000 ranged in price from around $4,000-6,000 and was designed to be an entry-level AIX workstation, bundling a copy of AIX with each machine. As an AIX machine it’s relatively slow and fits the entry-level badge quite well, but thanks to the 601’s POWER instructions it served as a transition machine over to the later 604 AIX machines. Unlike the later PowerPC 603 and 604 machines, it featured POWER instructions allowing it to run both legacy AIX POWER software and later PowerPC software. The Power Series was presumably sold to those wanting a PReP box for Windows instead.

Since IBM PReP hardware is so obscure and undocumented, I’m going to document this as best as I can, being the owner of an IBM Model 6015/7020. The machine features a 66 MHz PowerPC 601 (similar to that of the Power Mac 6100 and RS6K 250), PCI and ISA slots, and IBM’s “Dakota” PReP firmware (more on the boot process here). It uses an off-the-shelf NCR 53c810 SCSI controller, a Crystal CS4321 sound chip, an Intel 82378 PCI bridge, and a NIC can be inserted into the ISA slots (mine has the famous 3Com EtherLink III). The Super I/O chip is also off the shelf, a National PC87312VF. The clock IC is a Dallas DS1385S, a close relative of the Dallas DS1387 (with internal battery). At least some of the IBM custom ICs are the chipset ICs, and those are also documented. A Linux 2.4 dmesg can be found here.

Mine is also maxed out at 192 MB of RAM, however there are some solder pads for more, and the chipset is limited to 256 MB. This makes me wonder if the system was based on a reference design of some sort. There was an ultra-rare 604 upgrade as well, but considering that there are more 7248 and 7043 machines in the wild, I assume many customers just waited for those instead due to their superior AIX performance.

If the idea sounds familiar (off the shelf chips + RISC CPU) it’s because it was the very same idea used to create the two other non-x86 Windows NT platforms. The Microsoft Jazz MIPS platform most MIPS NT boxes were influenced by was infamously based on the same idea of a “PC with a MIPS CPU”. To a lesser extent, this was also seen on the DECpc AXP 150 and other EISA/ISA/PCI based Alpha machines designed to both run Windows NT and DEC’s own OSes. Crazy undocumented custom hardware and expansion busses were thrown out the window in favor of industry standards. In fact when I posted a photo of the motherboard to a chat full of PC nerds, they stated it looked remarkably like a normal PC motherboard. The whole industry would later adopt PCI and sometimes ISA on non-x86 machines to cut costs and reuse the same expansion cards.

The main difference between the RS/6000 40p and the Power Series variant is the boot ROM logo and chime. The RS/6000 and “OEM” systems used a boot ROM that featured the PowerPC logo and just a beep, while the Power Series machines featured a logo more closely resembling the PowerPC Thinkpads complete with the chime. One can boot firmware from a floppy as well by typing in the name of the ROM image in the prompt and pressing enter, and watching as it reboots once the firmware is loaded into RAM. Here’s a video I filmed demonstrating this, along with some other quirks including there being two SMS keys: F1 for a nice flashy GUI SMS and F4 for a text based SMS, along with F2 for netbooting (with the right NIC of course).

The Sandalfoot machines were LPX form factor machines, featuring a riser card and generic sheet-metal case popular with prebuilt machines from this era. The LPX form factor was wildly popular in the mid 90s due to its versatility, seeing use by both IBM and DEC for their RISC machines, various PC builders, and even Apple for the clone program and clone based Power Macintosh 4400. The Sandalfoot machines also drove home one of the core goals of the PReP project, which was to build a PowerPC platform using as many off the shelf and PC style components as possible instead of using lots of custom ICs like Apple did. I dug out one of my cameras to take a few high-res photos of the motherboard of this computer to illustrate this. Compare this to the motherboard of the Power Macintosh 6100 or even the 601 based 7200 and notice the bigger heatsink and use of fewer custom ICs (Apple loved those).

There were three main GPU options: the famous S3 Vision864, the Weitek Power 9100 (or P9100 for short) as a higher end option, and IBM’s own GXT150P. The S3 was the entry level GPU and the Weitek was a higher-end and faster GPU. The GXT150P is beyond the scope of this because it is unsupported on the other PReP OSes, only AIX. The other two video cards are essentially unmodified Diamond PC cards with the BIOS chips missing.

The Sandalfoot machines are perhaps the most important PReP machines due to their role in PReP OS development. Both OS/2 Beta 1 and Windows NT 3.5 were written for this machine in particular, as it was one of the first PowerPC machines to support PReP and feature PCI/ISA slots, unlike the NuBus Macs released a few months earlier or the first PPC box, the MCA-based RS/6000 Model 250. They also often shipped with the well-documented and widely emulated S3 Vision 864 video card, a common GPU family in PCs of the time, to the point where it was even included on some motherboards and is emulated in too many PC emulators/virtualization programs to count (notably 86Box/PCem). In fact its successor (the 7248) featured one soldered to the motherboard.

Windows NT 3.5: Failed Install Attempts

An oft repeated quote about Windows NT 3.5 for PowerPC is this one from Paul Thurrott’s Windows site:

Windows NT 3.51 was dubbed the Power PC release, because it was designed around the Power PC version of NT, which was originally supposed to ship in version 3.5. But IBM constantly delayed the Power PC chips, necessitating a separate NT release. “NT 3.51 was a very unrewarding release,” Thompson said, contrasting it with Daytona. “After Daytona was completed, we basically sat around for 9 months fixing bugs while we waited for IBM to finish the Power PC hardware. But because of this, NT 3.51 was a solid release, and our customers loved it.” NT 3.51 eventually shipped in May 1995.

I think a more accurate thing to write is that there simply weren’t many PReP boxes out in late 1994. Windows NT 3.51 supported the Motorola PowerStack series, the IBM 6050/6070 (and maybe the 7248, which came out in July 1995), and the rare FirePower machines. Windows NT 3.5 only features HALs for the 6015 (Sandalfoot/Power 440/RS6K 40P), the 6020 (Thinkpad 800), and the 6030 (a rare IBM machine that was likely only sent to a few developers). By 1995 there were more PReP machines on the market, and this made the NT 3.51 release logical. NT4 even supported a few servers, mainly the RS6K E20, E30, and F30.

Windows NT 3.5 was most likely a limited release for testing purposes on the Sandalfoot machine, as its HCL file declares it as “Build 807” with a date of October 18, 1994. That date seems to be around a week or two before the first 40p machines shipped. Some more files were modified later on, and the folders were created on November 9th, 1994. Hardware support is very barren, and the readme file even has a section dedicated to quirks of the 40p along with a list of supported software for the x86 emulator. This might have been considered a beta as well, as an announcement letter for the Thinkpad 800 (6020) explicitly mentions Windows NT and that this version might be a beta for developers. It also talks about a Windows SDK for it and a Motorola compiler used to build 3.5 software.

However, the real problem for me had to do with getting a video card. Windows NT 3.5 for PowerPC does not support the Weitek P9100 GPU that came with many RS/6000-branded machines, and neither does OS/2 for PowerPC. It only supports the S3 Vision 864 and 928 video cards. The Weitek is listed in the setup options, but choosing it causes a txtsetup.sif error. I’m going to assume that the development units came with the S3 video card instead. My box contained a Weitek card, which works for AIX, Solaris, and Windows NT 3.51/4. I bought an S3 card from eBay to use with NT 3.5 and the OS/2 port.

The readme also features an ominous warning about the S3 video cards: only revision B3 is supported, and 928 cards need 2 MB of VRAM for anything above 256 colors. The revision of the card I ordered was B4, so I took the risk of seeing if it worked with my system. I also removed the ROM chip, as the system initializes the video card itself and having a ROM chip can cause the system to not complete the self-test or display video. Since the IBM Weitek card lacks a BIOS, I did the same here.

Despite the scratches on the card, possibly from coming out of an e-waste pile, it worked fine both in a PC I inserted it into for testing purposes and in the IBM system. I now had a 40p with a GPU much better supported by operating systems other than AIX and Windows NT 3.51/4.

Anyhow, let’s talk about the install process in closer detail here. Windows NT for PowerPC installs in a similar manner to Solaris for PowerPC on the IBM PReP machines. First the floppy disk boots ARC, then when you choose to install, the machine copies the ARC bootloader/firmware to the hard disk so it can load it from there at each boot. The floppy disk can also be used to load ARC if the loader on the hard disk is damaged. Keep in mind that on IBM machines ARC is not stored in ROM, unlike on many other ARC-capable machines, so this has to be done. The FirePower machines do something very similar using an Open Firmware shim, and unsuccessful attempts at emulating PPC NT have exploited VENEER.EXE to attempt booting instead of using the IBM firmware. This fails because they’re not emulating the hardware, just trying to find a quick way to boot NT.

Once this is done, the installer loads up and installs just like every other NT install. It checks the HAL by reading the machine ID, looks at what video hardware the machine has, and so on to prepare the installer. You need an IBM 6015, 6020, or 6030 according to the HALs it ships with, and only the S3 video cards are on the HCL.

Or that’s what should happen. I first tried using ARC 1.51 as it worked for 3.51 and was greeted with a HAL error BSOD:

I then attempted to use older ARC boot floppies and got somewhere: the BSOD changed to the classic 07b, and then I got nothing else. Using ARC 1.48 and 1.49 gave me this; I got some I/O error with ARC 1.46 (the first 3.51 ARC floppy), and any earlier ARC floppy is most likely undumped. I’m assuming the error is due to either an ARC mismatch, a weird firmware/hardware revision mismatch, or some incorrect SCSI ID, Solaris-style. There might very well be some weird forgotten trick to making it work (maybe a Windows expert could dig through the files and find some weirdness), but I’m going to move on to another obscure PPC rarity:

OS/2 PowerPC Boot Attempts: Beta 1 and the Final

Recently the OS/2 Museum site dumped Beta 1 for PowerPC. It’s an earlier version of OS/2 for PowerPC that insists on a Sandalfoot machine with an S3 GPU. Unlike the other OS/2 PowerPC disc, it features a verbose boot that shows the kernel it uses. If you really want to see OS/2 for PPC working, try it on a 7248 or read this post about it.

This failed to boot, throwing up an error about mounting the disk or something. I did at least record it doing something, which is an improvement over the Weitek, which just does nothing at the PowerPC screen. I tried several things, including removing the external SCSI CD drive, and that didn’t fix much. It also declares 88c05333 to be an unknown PCI device.

So I decided to try the “final” build. The final build requires a 6050/70, and some people did get it working on the PPC Thinkpads. I decided to see what it’d do on my machine. Unsurprisingly, it did absolutely nothing but give me a blank white screen and sometimes a 00016000 error (for a trashed CMOS). If anything, the 6015 loves to trash its CMOS contents for absolutely no reason, especially when OS/2 is involved.

Anyhow, this was very anti-climactic, as the OSes I threw at it all found reasons not to work on it. I ruled out the GPU being at fault by testing Windows NT 4.0 and finding that it works just fine with it, although I seem to have fewer resolutions available than the Weitek card allows. It did change the boot screen font, making me wonder if the red boot screen is a GPU driver quirk.

However changing the device IDs with OS/2 PowerPC Beta 1 got me somewhere, as I now got a screen about the HDD failing to write. I formatted the HDD to FAT using the ARC diskette, then I nuked all the partitions, but not much else changed. I’m not sure what the error means, but it was a letdown.

Unless these OSes require some long-lost firmware, I’m wondering if there’s something else causing issues with installation. Either way, it was a letdown. Nothing I tried worked, and I spent hours messing with everything from SCSI IDs to using different drives.

PowerPC Solaris on the RS/6000

The following is a guest post by PA8600/PA-RISC! Thanks for doing this incredible writeup about an ultra rare Unix!

One of the weirdest times in computing was during the mid-90s, when the major RISC vendors all had their own plans to dominate the consumer market and eventually wipe out Intel. This was a time that led to overpriced non-x86 systems that intended to wipe out the PC, Windows NT being ported to non-x86 platforms, PC style hardware paired with RISC CPUs, Apple putting the processor line from IBM servers into Macs, and Silicon Graphics designing a game console for Nintendo. While their attempts worked wonders in the embedded field for MIPS and the AIM alliance, quite a few of these attempts at breaking into the mainstream were total flops.

Despite this, there were some weird products released during this period that most only assumed existed in tech magazine ads and reviews. One such product was Solaris for PowerPC. Now Solaris has existed on Intel platforms for ages and the Illumos fork has some interesting ports including a DEC Alpha port, but a forgotten official port exists for the PowerPC CPU architecture. Unlike OS/2, it’s complete and has a networking stack. It’s also perhaps one of the weirdest OSes on the PowerPC platform.

  • It’s a little-endian 32-bit PowerPC Unix and possibly the only one running in 32-bit mode. Windows NT and OS/2 (IIRC) were the other 32-bit PowerPC little-endian OSes, and Linux is a 64-bit little-endian OS.
  • It’s a limited access release, yet feels as polished as a released product.
  • It has a working networking stack.
  • Unlike AIX, it was designed to run on a variety of hardware with room to expand if more PPC hardware was sold. You can throw in a random 3com ISA NIC for example and it will in fact work with it.
  • It shares several things with Solaris for Intel including the installer.

I’m going to demonstrate perhaps the weirdest complete PowerPC OS on fitting hardware: the IBM RS/6000 7020 40p, also known as the Power Series 440 (6015) and by its codename “Sandalfoot”. The system is a PowerPC 601 based machine, featuring the PCI and ISA buses in an LPX style case. This is also one of the few machines that can run it. All screen captures are from a VGA2USB card as emulators cannot run anything but AIX.

What you need to run Solaris PPC

To run Solaris, the system requirements are just like those of Windows NT for PowerPC. You need a PReP machine (PowerPC Reference Platform, not to be confused with the HIV prevention pill PrEP, according to Wikipedia). Now, finding a PReP machine is perhaps the hardest part of setting up Solaris for PowerPC, and to understand why, you need to know a bit about the history of the PowerPC platform.

One of the biggest problems with PowerPC hardware to this day has been the sheer inconsistency of how each machine boots. While Alpha machines had SRM/ARC and SPARC machines had OpenBoot, each vendor had their own way of booting a PowerPC machine despite rolling out standards.

There were essentially two different camps building PowerPC machines, IBM and Apple. IBM’s plans for universal PowerPC machines consisted of industry standard, low cost machines built around a PowerPC CPU, chipset, and lots of supporting components lifted from the PC platform along with PCI and ISA. The CHRP and PReP standards were essentially PCs with PowerPC processors in them. IBM’s plan was that you were going to replace your PC with a PowerPC machine someday. This was cemented by the fact that Windows NT was ported to the PowerPC platform, that OS/2 had an ill-fated port, and that a handful of third party Windows NT PPC machines were sold.

Apple on the other hand wanted to build Macs with PowerPC CPUs. Older Power Macs featured no PCI slots or Open Firmware, only NuBus slots carried over from classic 68k Macs. In fact much of the boot and OS code was emulated 68k code. Later on Apple would lift bits and pieces of things they enjoyed from the PowerPC standards such as Open Firmware, PCI, and even PS/2 and VGA ports on the clones. Apple’s plan was to replace the PC with the Mac, and Mac clones featured Apple style hardware on LPX motherboards. While the PCI clones featured Open Firmware, this version was designed to load the Macintosh Toolbox from ROM while “futureproofing” them by adding in the ability to boot something like Mac OS X/Rhapsody or BeOS.

Despite these similarities Macs were their own computers and were nothing like the IBM systems internally, aside from sharing the same CPU and maybe Open Firmware later on. But even Macs with Open Firmware were incapable of booting from hard disks formatted for IBM systems and vice versa. This is a common problem with installing PowerPC Linux as many installers do not check which machine they’re run on. Furthermore unlike modern day Intel Macs, PPC Macs were designed to only boot operating systems specifically written for them. They were incapable of running any OS solely written for the IBM machines.

The confusion between PPC machines has also caused a forum question to pop up, “how can I install PowerPC Windows on my Mac?” Even today the new OpenPower/PowerNV machines use a different bootloader than IBM’s hardware and completely lack Open Firmware.

Anyhow IBM built several different generations of PowerPC UNIX machines under several brand names including RS/6000, pSeries, and Power. Nearly all of them (aside from the Linux models) will run AIX, and later ones will run IBM i as well. Not just any PowerPC IBM hardware will run the OSes designed for PReP hardware however.

To run these old PReP OSes you’re looking at a very specific set of machines from the 1994-95 period, many without the characteristic diagnostic display most RS/6000 machines have. To run PowerPC Solaris much the same applies here. You need an RS/6000 40p, or a 7248 43p (not the later 140 and 150 with the display). The rare PPC Thinkpads and Personal Computer Power Series machines will run Solaris as well. It’s also compatible with the PowerStack machines from Motorola, and one BetaArchive user had luck running it on a VME board. These machines are hard to find and unemulated as of writing, though the firmware files exist for the 40p at least and some efforts have been made in QEMU.

Mine features a PowerPC 601 CPU, 192 MB of RAM (the max), a Weitek P9100 video card (branded as the IBM S15, IIRC), and a non-IBM 3Com NIC. The 3Com NIC has issues with the system: during boot, if the NIC is connected to the network, the system will refuse to boot fully and will either freeze or BSOD (in NT). The NIC is also not supported on AIX and will eventually need to be replaced.

Curiously, not only is the IBM 40p/7020/6015 not listed in the HCL, but the NIC it uses is. It’s well known that the Sandalfoot systems were used for early PReP OS development, so it makes sense. Unlike the RS/6000 Model 250, the 40p features PCI and ISA busses along with the same 601 CPU early PowerPC machines had.

Installation

To install PowerPC Solaris, you first need to make a boot floppy. This isn’t uncommon with PReP operating systems; PowerPC Windows NT also requires a boot floppy for the ARC loader. The difference here is that there are two boot floppies: one for Motorola machines and one for IBM machines. Even on PowerPC this wasn’t terribly unusual, as both the Moto PowerStack and Apple Network Server computers required custom AIX install media as well, and Windows NT had specific HALs for each PPC machine.

On the Motorola PowerStack machines you need the same firmware used to install AIX instead of the ARC firmware for NT. On the IBM machines it’s vastly easier: you just make the floppy and shove it in. You then press the power switch and you end up dumped at an Open Firmware prompt. As these IBM machines did not ship with Open Firmware, the bootloader loads Open Firmware from the floppy or hard disk every time you boot the machine. Keep in mind that even the System Management Services are floppy-loaded on these machines.

You then run into the first big hurdle to installing the OS: “disk” and “net” are mapped to very specific devices, and if the SCSI IDs of these are different, it will not boot. If the CD drive is not at ID 3 and the HDD is not at ID 6, the commands will not work. You will need to set an environment variable and tell it to boot from these devices manually for the first install.

Booting the OS is similar to booting it on a Sun, but the installer resembles that of the Intel version. The first thing that happens is you wait for the slow 2x CD drive to load the OS as the screen turns Open Firmware white. You will need to set the terminal type, and then the video and mouse input, before X will load. The video options are limited to the S3 864/928, the Weitek P9000 and P9100, and Moto’s Cirrus Logic GD5434. Notice how the Power Series 440 (6015)/RS6K 7020 40p is referred to by its codename “Sandalfoot”.

Once you enter this in, Solaris will boot and load X just as it does on a Sun or Intel box, and the installer will be exactly the same. This phase is very uneventful as the slow CD drive copies files to the hard disk. I didn’t take a lot of screenshots of this part because you can get the same experience with QEMU or an old SPARCStation. You set the network info, you partition the HDD, you choose what you want, and you sit back as it installs.

Then you’ll be dropped at the Open Firmware bootloader and you’ll enter the right commands to make it boot if “boot disk” doesn’t automatically boot the OS.

The installation is not complete, however. The next step is to swap CDs and install the GUI. A default install will drop you at a command line; with the second disc you can install OpenWindows and CDE and get a fully working desktop. Log in, switch CDs, change to the correct directory, and run the installer.

Once this is done, simply type reboot, and once you log in you’ll be at a desktop that looks exactly like a Solaris 2.5.1 install on any other platform, with one difference: there is literally zero third-party software, and for years there was no way of making software for it. You’re stuck with the stock OS and whatever utilities Solaris 2.5.1 came with. You’ll want to use OpenWindows as well; CDE is vastly slower on the 601 CPU (but not as slow as AIX 4.3, for example). The platform directory also tells you what IBM machines it can run on, and all the RS/6000s are titled PPS. The 6015 is the 40p, the 6040 and 6042 are the ThinkPad models 830 and 850, the 6050/70 are the Personal Computer Power Series variants of the 7248 43p, and the PowerStacks are pretty self-explanatory.

The Compiler Problem (and solutions)

For the longest time Solaris for PowerPC was neglected among those who happened to own a PReP machine for one reason: it lacked a compiler. A compiler is perhaps the most important part of any operating system as it allows one to write code for it. As was the case with UNIX operating systems from the time, the compiler was sold separately. With any UNIX that was widely distributed this wasn’t too much of an issue, as GCC or other third party compilers existed for the platform. Furthermore most compilers for these commercial UNIX operating systems ended up dumped online.

Solaris for PowerPC lacked both of these for ages due to the obscurity and rarity of the port. But in 2018 Tenox dug up the official compiler, yet this remained unnoticed for a while. This led to someone else experimenting with cross compilation on Solaris, and managing to compile PowerPC Solaris software. They then released a port of GCC for Solaris 2.5.1 for PowerPC while posting instructions on how to compile it.

To use GCC for Solaris, you need to unzip the compiler, add it to the path, and then symlink a few files that GCC ends up looking for. This is discussed in the BetaArchive thread about this, but I’ll quote it here.

$ ls -l /opt/ppc-gcc/lib/gcc-lib/powerpcle-sun-solaris2/2.95/
total 13224
-rwxr-xr-x   1 bin      bin      5157747 Feb 16 10:30 cc1
-rwxr-xr-x   1 bin      bin       404074 Feb 16 10:30 collect2
-rwxr-xr-x   1 bin      bin       453525 Feb 16 10:30 cpp
-rw-r--r--   1 bin      bin         1932 Feb 16 10:30 ecrti.o
-rw-r--r--   1 bin      bin         1749 Feb 16 10:30 ecrtn.o
drwxr-xr-x   3 bin      bin         1024 Feb 16 10:29 include
-rw-r--r--   1 bin      bin       673012 Feb 16 10:30 libgcc.a
drwxr-xr-x   2 bin      bin          512 Feb 16 10:30 nof
-rw-r--r--   1 bin      bin         4212 Feb 16 10:30 scrt0.o
-rw-r--r--   1 bin      bin         1360 Feb 16 10:30 scrti.o
-rw-r--r--   1 bin      bin         1104 Feb 16 10:30 scrtn.o
-rw-r--r--   1 bin      bin         7868 Feb 16 10:30 specs
lrwxrwxrwx   1 root     other         24 Feb 22 21:35 values-Xa.o -> /usr/ccs/lib/values-Xa.o
lrwxrwxrwx   1 root     other         24 Feb 22 21:36 values-Xc.o -> /usr/ccs/lib/values-Xc.o
lrwxrwxrwx   1 root     other         24 Feb 22 21:36 values-Xs.o -> /usr/ccs/lib/values-Xs.o
lrwxrwxrwx   1 root     other         24 Feb 22 21:36 values-Xt.o -> /usr/ccs/lib/values-Xt.o
lrwxrwxrwx   1 root     other         26 Feb 22 21:37 values-xpg4.o -> /usr/ccs/lib/values-xpg4.o
$
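
Put together, the setup amounts to roughly the following; this is a hedged sketch rather than a transcript (the /opt/ppc-gcc/bin path is an assumption, while the symlink targets are the ones visible in the listing above):

PATH=/opt/ppc-gcc/bin:$PATH; export PATH
cd /opt/ppc-gcc/lib/gcc-lib/powerpcle-sun-solaris2/2.95
for f in values-Xa.o values-Xc.o values-Xs.o values-Xt.o values-xpg4.o; do
    ln -s /usr/ccs/lib/$f $f    # GCC expects these startup objects next to its own files
done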

Once you do this, you can compile C code, at least with GCC. This means that Solaris for the PowerPC platform is now a usable operating system, aside from the fact that it has no precompiled software whatsoever; even Windows NT for PowerPC has more software available for it. Software can now be compiled using GCC or the original compiler, and cross-compiled with GCC on a non-PPC box. Using the cross-compiler lets you build more of the basics needed for compiling PPC Solaris code, such as make. In this screenshot you can also see me compiling a basic “endian test” example to demonstrate the little-endianness of the PowerPC port.
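
For reference, the endian test is only a few lines of C; a sketch like the following, built with the newly installed GCC, prints “little-endian” on the PowerPC port:

cat > endian.c << 'EOF'
#include <stdio.h>

int main(void)
{
        unsigned int x = 1;
        /* on a little-endian CPU the first byte of x holds the least significant bits */
        printf("%s-endian\n", *(unsigned char *)&x ? "little" : "big");
        return 0;
}
EOF
gcc -o endian endian.c && ./endian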

The only problem is that there’s going to be little interest until someone makes a PReP machine emulator. PReP hardware is very hard to come by on the used market these days; while in the early 2000s it might have been easy to find something like a specific RS6K, judging by the eBay listings there are a lot more MCA, CHRP, and even later PReP models (like the 43p-140) in circulation than early PReP machines. QEMU can emulate the 40p somewhat, but right now its 40p emulation is less like an actual 40p and more like something built to please AIX. It definitely has the novelty of being a “little-endian PowerPC Unix” however.

Examining Windows 1.0 HELLO.C

The following is a guest post by NCommander of SoylentNews fame!

For those who’ve been long-time readers of SoylentNews, it’s not exactly a secret that I have a personal interest in retro computing and documenting the history and evolution of the Personal Computer. About three years ago, I ran a series of articles about restoring Xenix 2.2.3c, and I’m far overdue on writing a new one. For those who do programming work of any sort, you’ll also be familiar with “Hello World”, the first program most, if not all, programmers write in their careers.

A sample hello world program might look like the following:

#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}

Recently, I was inspired to investigate the original HELLO.C for Windows 1.0, a 125-line behemoth that was talked about in hushed tones. To that end, I recorded a video on YouTube that provides a look into the world of programming for Windows 1.0, and then tests the backward compatibility of Windows through to Windows 10.

For those less inclined to watch a video, my write-up of the experience is past the fold and an annotated version of the file is available on GitHub (https://github.com/NCommander/win1-hello-world-annotations)


Bring Out Your Dinosaurs – DOS 3.3

Before we even get into the topic of HELLO.C though, there’s a fair bit to be said about these ancient versions of Windows. Windows 1.0, like all pre-95 versions, required DOS to be pre-installed. One quirk however with this specific version of Windows is that it blows up when run on anything later than DOS 3.3. Part of this is due to an internal version check which can be worked around with SETVER. However, even if this version check is bypassed, there are supposedly known issues with running COMMAND.COM. To reduce the number of potential headaches, I decided to simply install PC-DOS 3.3, and give Windows what it wants.

You might notice I didn’t say Microsoft DOS 3.3. The reason is that DOS didn’t exist as a standalone product at the time. Instead, system builders would license the DOS OEM Adaptation Kit and create their own DOS such as Compaq DOS 3.3. Given that PC-DOS was built for IBM’s own line of PCs, it’s generally considered the most “generic” version of the pre-DOS 5.0 versions, and this version was chosen for our base. However, due to its age, it has some quirks that would disappear with the later and more common DOS versions.

PC DOS 3.3 loaded just fine in VirtualBox and — with the single 720 KiB floppy being bootable — immediately dropped me to a command prompt. Likewise, FDISK and FORMAT were available to partition the hard drive for installation. Each individual partition is limited, however, to 32 MiB. Even at the time, this was somewhat constrained and Compaq DOS was the first (to the best of my knowledge) to remove this limitation. Running FORMAT C: /S created a bootable drive, but something oft-forgotten was that IBM actually provided an installation utility known as SELECT.

SELECT’s obscurity primarily lies in its non-obvious name and usage, and in the fact that it’s not actually needed to install DOS; it’s sufficient to simply copy the files to the hard disk. However, SELECT does create CONFIG.SYS and AUTOEXEC.BAT, so it’s handy to use. Compared to the later DOS setup, SELECT requires a relatively arcane invocation with the target installation folder, keyboard layout, and country code entered as arguments, and it simply errors out if these are incorrect. Once the correct runes are typed, SELECT formats the target drive, copies DOS, and finishes the installation.

Without much fanfare, the first hurdle was crossed, and we’re off to installing Windows.

Windows 1.0 Installation/Mouse Woes

With DOS installed, it was on to Windows. Compared to the minimalist SELECT command, Windows 1.0 comes with a dedicated installer and a simple text-based interface. This bit of polish was likely due to the fact that most users would be expected to install Windows themselves instead of having it pre-installed.

Another interesting quirk was that Windows could be installed to a second floppy disk due to the rarity of hard drives of the era, something that we would see later with Microsoft C 4.0. Installation went (mostly) smoothly, although it took me two tries to get a working install due to a typo. Typing WIN brought me to the rather spartan interface of Windows 1.0.

Although functional, what was missing was mouse support. Due to its age, Windows predates the mouse as a standard piece of equipment and predates the PS/2 mouse protocol; only serial and bus mice were supported out of the box. There are two ways to solve this problem:

The first, which is what I used, involves copying MOUSE.DRV from Windows 2.0 to the Windows 1.0 installation media, and then reinstalling, selecting the “Microsoft Mouse” option from the menu. Re-installation is required because WIN.COM is statically linked as part of installation with only the necessary drivers included; there is no option to change settings afterward. The SDK documentation details the static linking process, and how to run Windows in “slow mode” for driver development, but the end result is the same. If you want to reconfigure, you need to re-install.

The second option, which I was unaware of until after producing my video is to use the PS/2 release of Windows 1.0. Like DOS of the era, Windows was licensed to OEMs who could adapt it to their individual hardware. IBM did in fact do so for their then-new PS/2 line of computers, adding in PS/2 mouse support at the time. Despite being for the PS/2 line, this version of Windows is known to run on AT-compatible machines.

Regardless, the second hurdle had been passed, and I had a working mouse. This made exploring Windows 1.0 much easier.

The Windows 1.0 Experience

If you’re interested in trying Windows 1.0, I’d recommend heading over to PCjs.org and using their browser-based emulator to play with it as it already has working mouse support and doesn’t require acquiring 35 year old software. Likewise, there are numerous write-ups about this version, but I’d be remiss if I didn’t spend at least a little time talking about it, at least from a technical level.

Compared to even the slightly later Windows 2.0, Windows 1.0 is much closer to DOSSHELL than any other version of Windows, and is essentially a graphical bolt-on to DOS, although through deep magic it is capable of cooperative multitasking. This was done entirely with software trickery, as Windows did not rely on the 80286 and ran on the original 8086. COMMAND.COM could be run as a text-based application; however, most DOS applications would launch a full-screen session and take control of the UI.

This is likely why Windows 1.0 has issues on later versions of DOS as it’s likely taking control of internal structures within DOS to perform borderline magic on a processor that had no concept of memory protection.

Another oddity is that this version of Windows doesn’t actually have “windows” per se. Instead applications are tiled, with only dialog boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it’s clear from the API that they were at least planned for at some point. Most notably, the CreateWindow() function call has arguments for x and y coordinates.

My best guess is Microsoft wished to avoid the wrath of Apple, which had gone on a legal warpath against any company that too closely copied the UI of the then-new Apple Macintosh. Compared to later versions, there are also almost no included applications. The most notable applications that were included are NOTEPAD, PAINT, WRITE, and CARDFILE.

While NOTEPAD is essentially unchanged from its modern version, Write could be best considered a stripped-down version of Word, and would remain a mainstay until Windows 95 where it was replaced with Wordpad. CARDFILE likewise was a digital Rolodex. CARDFILE remained part of the default install until Windows 3.1, and remained on the CD-ROM for 95, 98, and ME before disappearing entirely.

PAINT, on the other hand, is entirely different from the Paintbrush application that would become a mainstay. Specifically, it’s limited to monochrome graphics, and files are saved in MSP format. Part of this is due to limitations of the Windows API of the era: for drawing bitmaps to the screen, Windows provided Device Independent Bitmaps, or DIBs. These had no concept of a palette and were limited to the 8 colors that Windows uses as part of the EGA palette. Color support appears to have been a late addition to Windows, and seemingly wasn’t fully realized until Windows 3.0.

Paintbrush (and the later, confusingly named Paint) was actually a third-party application created by ZSoft, which had DOS and Windows 1.0 versions. ZSoft Paintbrush was very similar to what shipped in Windows 3.0 and used a bit of technical trickery to take advantage of the full EGA palette.

With that quick look complete, let’s get back to actually reaching HELLO.C, which meant getting the SDK installed.

The Windows SDK and Microsoft C 4.0

Getting the Windows SDK set up is something of an experience. Most of Microsoft’s documentation for this era has been lost, but the OS/2 Museum has scanned copies of some of the reference binders, and the second disk in the SDK has both a README file and an installation batch file that together contain most of the necessary information.

Unlike later SDK versions, it was the responsibility of the programmer to provide a compiler. Officially, Microsoft supported the following tools:

  • Microsoft Macro Assembler (MASM) 4
  • Microsoft C 4.0 (not to be confused with MSC++4, or Visual C++)
  • Microsoft Pascal 3.3

Unofficially, there were supposedly versions of Borland C that could also be used, although this was untested and appears not to have been documented beyond some notes on USENET. More interestingly, all of the above tools were DOS compilers with no specific support for Windows. Instead, the SDK shipped a replacement linker that could create Windows “NE” New Executables, an executable format that was also used by early OS/2 before being replaced by the Portable Executable (PE) and Linear Executable (LX) formats respectively.
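
As a rough illustration of how those formats relate, the sketch below, my own and not from the SDK, peeks at what kind of “new” executable hides behind a file’s DOS MZ stub: the field at offset 0x3C of the MZ header points at the new header, which begins with “NE”, “PE”, or “LX”/“LE” depending on the format.

#include <stdio.h>

/* Hedged sketch: identify the "new executable" type behind a DOS MZ stub.
   The little-endian field at offset 0x3C holds the offset of the new header. */
int main( int argc, char **argv )
{
    FILE *f;
    unsigned char off[4], sig[2];
    long hdr;

    if (argc < 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    if ((f = fopen(argv[1], "rb")) == NULL) { perror("fopen"); return 1; }

    fseek(f, 0x3C, SEEK_SET);
    fread(off, 1, 4, f);
    hdr = (long)off[0] | ((long)off[1] << 8) | ((long)off[2] << 16) | ((long)off[3] << 24);
    fseek(f, hdr, SEEK_SET);
    fread(sig, 1, 2, f);
    fclose(f);

    if (sig[0] == 'N' && sig[1] == 'E')      puts("NE: 16-bit Windows / early OS/2");
    else if (sig[0] == 'P' && sig[1] == 'E') puts("PE: Win32 Portable Executable");
    else if (sig[0] == 'L')                  puts("LX/LE: OS/2 2.x or VxD Linear Executable");
    else                                     puts("plain MZ (or unrecognized) executable");
    return 0;
}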

For the purposes of compiling HELLO.C, Microsoft C 4.0 was installed. Like Windows, MSC could be run from floppy disk, albeit with a lot of disk swapping. No installer is provided; instead, the surviving PDFs have several pages of COPY commands combined with edits to AUTOEXEC.BAT and CONFIG.SYS for hard drive installation. It was also at this point that I installed SLED, a full-screen editor, as DOS 3.3 only shipped with EDLIN; EDIT wouldn’t appear until DOS 5.0.

After much disk feeding and some troubleshooting, I managed to compile a quick and dirty Hello World program for DOS. One other interesting quirk of MSC 4.0 was that it did not include a standalone assembler; MASM was a separate retail product at the time. With the compiler sorted, it was time for the SDK.

Fortunately, an installation script is provided. Like SELECT, it required typing out a bunch of folder paths, but it was otherwise simple enough to use. For reasons that probably only made sense in 1985, both the script and the README file were on Disk 2, not Disk 1. This was confirmed not to be a labeling error, as the script immediately asks for Disk 1 to be inserted.

The install script copies files from four of the seven disks before returning to the command line. Disk 5 contains the debug build of Windows, roughly equivalent to the checked builds of modern Windows. Disks 6 and 7 hold sample code, including HELLO.C.

With the final hurdle passed, it wasn’t too hard to get a compiled HELLO.EXE.

Dissecting HELLO.C

I’m going to go through these at a high level; my annotated hello.c goes into much more detail on all of these points.

General Notes

Now that we can build it, it’s time to take a look at what actually makes up the nuts and bolts of a 16-bit Windows application. The first major difference, simply due to age, is that HELLO.C uses K&R C, on the basis that it pre-dates the ANSI C standard. It’s also clear that certain conventions weren’t commonplace yet: for example, windows.h lacks include guards.
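
To make the difference concrete, here is a small illustrative comparison, not taken from HELLO.C, of the same function written in the K&R style the SDK samples use and in the later ANSI prototype style.

/* K&R style: parameter types are declared between the parameter
   list and the opening brace, as in HELLO.C and WinMain below. */
int AddTwoKR( a, b )
int a;
int b;
{
    return a + b;
}

/* ANSI C style: the types appear directly in the parameter list. */
int AddTwoAnsi( int a, int b )
{
    return a + b;
}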

NEAR and FAR pointers

long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);

Oh boy, the bane of anyone coding in real mode: near and far pointers are a “feature” that many would simply like to forget. The difference is seemingly simple: a near pointer is nearly identical to a standard pointer in C, except that it refers to memory within a known segment, while a far pointer also includes the segment selector. Clear, right?

Yeah, I didn’t think so. To actually understand what these are, we need to segue into the 8086’s 20-bit memory map. Internally, the 8086 was a 16-bit processor, and thus could directly address only 2^16 bytes of memory at a time, or 64 kilobytes in total. Various tricks were used to break through 16-bit memory barriers, such as bank switching or, in the case of the 8086, segmentation.

Instead of making all 20 bits directly addressable, memory pointers are divided into a segment, which forms the base of a given pointer, and an offset from that base: the segment value is shifted left four bits and added to the 16-bit offset, which lets the full 20-bit address space be reached. In effect, the 8086 gave four independent windows into system memory through the Code Segment (CS), Data Segment (DS), Stack Segment (SS), and Extra Segment (ES) registers.
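
A minimal sketch of that calculation, using nothing beyond standard C; the example values are hypothetical:

/* Real-mode address translation: physical = (segment << 4) + offset.
   For example, 0xB800:0x0010 resolves to 0xB8010. Note that different
   segment:offset pairs can alias the same physical byte. */
unsigned long physical_address( unsigned short segment, unsigned short offset )
{
    return ((unsigned long)segment << 4) + offset;
}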

Near pointers are thus used when data or a function call lives in the same segment; they contain only the offset and are functionally identical to normal C pointers within that segment. Far pointers include both segment and offset, and the 8086 had special opcodes for using them. Of note is the far call, which automatically pushed and popped the code segment when jumping between locations in memory. This will be relevant later.
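
In the 16-bit compilers of the era this surfaced as near and far keywords, which windows.h wraps in the NEAR and FAR macros. The declarations below are a hedged illustration, and the variable names are mine.

/* Hypothetical declarations as a 16-bit Windows compiler would see them. */
char NEAR *pLocal;      /* 16-bit offset into the current data segment */
char FAR  *pAnywhere;   /* 32-bit segment:offset pair                  */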

HelloWndProc is a forward declaration for the Hello window callback, a standard feature of Windows programming. Callback functions always had to be declared FAR, as Windows needs to load the correct code segment when jumping into application code from the task manager. Windows 1.0 and 2.0 had additional rules on top of this, which we’ll look at below.

WinMain Declaration:

int PASCAL WinMain( hInstance, hPrevInstance, lpszCmdLine, cmdShow )
HANDLE hInstance, hPrevInstance;
LPSTR lpszCmdLine;
int cmdShow;

PASCAL Calling Convention

Windows API functions are all declared with the PASCAL calling convention, also known as STDCALL on modern Windows. Under normal circumstances, the C programming language has a nominal calling convention (known as CDECL) which primarily governs how the stack is cleaned up after a function call. In CDECL-declared functions, it’s the responsibility of the calling function to clean up the stack. This is necessary for variadic functions (that is, functions taking a variable number of arguments) to work, as the callee won’t know how many arguments were pushed onto the stack.

The downside to CDECL is that it requires additional cleanup instructions after each and every call site, slowing down execution and increasing disk space requirements. Conversely, the PASCAL calling convention leaves cleanup to the called function, which usually needs only a single opcode to clean the stack as it returns. It was likely due to execution speed and disk space concerns that Windows standardized on this convention (and in fact 32-bit Windows still uses it).
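
As a hedged illustration (the function names are mine, not from the SDK), the difference shows up only in the declaration; the compiler handles the stack bookkeeping. Depending on the compiler and header version, CDECL may need to be spelled as the bare cdecl keyword.

/* With PASCAL the callee cleans the stack with a single RET n, so
   variadic functions like printf() cannot use it and stay CDECL. */
int PASCAL AddPascal( int a, int b );   /* callee pops the arguments */
int CDECL  AddCdecl( int a, int b );    /* caller pops the arguments */
int CDECL  SumAll( int count, ... );    /* variadic: must be CDECL   */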

hPrevInstance

if (!hPrevInstance) {
/* Call initialization procedure if this is the first instance */
if (!HelloInit( hInstance ))
return FALSE;
} else {
/* Copy data from previous instance */
GetInstanceData( hPrevInstance, (PSTR)szAppName, 10 );
GetInstanceData( hPrevInstance, (PSTR)szAbout, 10 );
GetInstanceData( hPrevInstance, (PSTR)szMessage, 15 );
GetInstanceData( hPrevInstance, (PSTR)&MessageLength, sizeof(int) );
}

hPrevInstance has been a vestigial organ in modern Windows for decades. It’s set to NULL on program start and has no purpose in Win32. Of course, that doesn’t mean it was always meaningless. Applications on 16-bit Windows existed in a general soup of shared address space, and Windows didn’t immediately reclaim memory that was marked unused. Applications could therefore have pieces of themselves remain resident beyond their own lifespan.

hPrevInstance was a handle to that previous instance. If an application still happened to have its resources registered with the Windows resource manager, it could reclaim them instead of having to load them fresh from disk. hPrevInstance was set to NULL if no previous instance was loaded, instructing the application to reload everything it needs. Resources are registered under a global key, so trying to register the same resource twice would lead to an initialization failure.

I’ve also gotten the impression that resources could be shared across applications although I haven’t explicitly confirmed this.

Local/Global Memory Allocations

NOTE: Mostly cribbed from Raymond Chen’s blog, a great read on why Windows works the way it does.

pHelloClass = (PWNDCLASS)LocalAlloc( LPTR, sizeof(WNDCLASS) );
LocalFree( (HANDLE)pHelloClass );

Another concept that’s essentially gone is that memory allocations were classified as either local to an application or global. Due to the segmented architecture, applications had multiple heaps: a local heap, initialized with the program and living in the local data segment, and a global heap, which requires a far pointer to access.

Every executable and DLL got its own local heap, but global allocations could be shared across process boundaries and, as best I can tell, weren’t automatically deallocated when a process ended. HEAPWALK could be used to see who allocated what and to find leaks in the address space. It could also be combined with SHAKER, which rearranged blocks of memory in an attempt to shake loose bugs. These are similar to more modern tools like valgrind on Linux, or Microsoft’s application testing tools.
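
Here is a hedged sketch of the two allocation paths as they looked in 16-bit code; the sizes and names are illustrative rather than taken from HELLO.C.

/* Hedged sketch of both allocation paths; sizes and names are illustrative. */
void AllocationExamples( void )
{
    PSTR   pNear;
    HANDLE hGlobal;
    LPSTR  lpFar;

    /* Local heap: lives in the application's own data segment; the LPTR
       (fixed, zero-initialized) flag returns a usable near pointer. */
    pNear = (PSTR)LocalAlloc( LPTR, 128 );
    /* ... use pNear ... */
    LocalFree( (HANDLE)pNear );

    /* Global heap: allocations are handle-based and must be locked to get
       a far pointer, then unlocked and freed when done. */
    hGlobal = GlobalAlloc( GMEM_MOVEABLE | GMEM_ZEROINIT, 1024L );
    lpFar   = (LPSTR)GlobalLock( hGlobal );
    /* ... use lpFar ... */
    GlobalUnlock( hGlobal );
    GlobalFree( hGlobal );
}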

MakeProcInstance

lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );

Oh boy, this is a real stinker, and an entirely unnecessary one at that. MakeProcInstance didn’t even make it to Windows 3.1, and its entire existence stems from Microsoft forgetting details of their own operating environment. To explain, we’re going to need to dig a bit deeper into segmented-mode programming.

MakeProcInstance’s purpose was to register a function as suitable for use as a callback. Only functions that have been passed through MakeProcInstance, or declared as an EXPORT in the module definition file, can safely be called across process boundaries. The reason is that Windows needs to record the code segment and data segment in a global store to make the function call safely. Remember, each application had its own local heap, which lived in its own selector in DS.
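
The full dance, sketched from memory rather than quoted from HELLO.C, looked something like the fragment below; the ABOUTBOX resource ID is an assumption for illustration.

/* Hedged sketch: create a callable thunk for the About dialog procedure,
   use it, then release it with FreeProcInstance. */
FARPROC lpprocAbout;

lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );
DialogBox( hInstance, MAKEINTRESOURCE(ABOUTBOX), hWnd, lpprocAbout );
FreeProcInstance( lpprocAbout );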

In real mode, doing a CALL FAR to a far pointer automatically pushed and popped the code segment as needed, but the data segment was left unchanged. As such, a mechanism was needed to store the additional information required to find the local heap. So far, this sounds relatively reasonable.

The problem is that 16-bit Windows has this as an invariant: DS = SS …

If you’re a real-mode programmer, that might make it clear where I’m going with this. The Stack Segment selector denotes where in memory the stack lives, and SS, along with the previous SP, also got pushed to the stack during a function call across process boundaries. You might begin to see why MakeProcInstance is entirely unnecessary.

Instead of needing a global registration system for function calls, an application could just look at the stack base pointer (BP) and retrieve the previous SS from there. Since SS = DS, the previous data segment had in fact already been saved, and no registration is required, just a change to how Windows handles function prologs and epilogs. This was actually discovered by a third party, and a tool called FixDS was released by Michael Geary that rewrote function prologs to do exactly what I just described. Microsoft eventually incorporated his fix directly into Windows, and MakeProcInstance disappeared as a necessity.

Other Oddities

From Raymond Chen’s blog and other sources, one interesting aspect of 16-bit Windows is that it was actually designed with the possibility that applications would have their own address spaces, and there was talk that Windows would be ported to run on top of XENIX, Microsoft’s UNIX-based operating system. It’s unclear whether OS/2’s Presentation Manager shared code with 16-bit Windows, although several design aspects and API names were closely linked.

From the design of 16-bit Windows and playing with it, what’s clear is that this was actually future-proofing for protected mode on the 80286, sometimes known as segmented protected mode. In 286 protected mode, the processor could address up to 16 megabytes through its 24-bit address bus, but memory was still segmented into 64-kilobyte windows; the primary difference was that segment selectors became logical rather than physical addresses.

Had the 80286 actually succeeded, a protected-mode Windows would have been essentially identical to 16-bit Windows due to how the processor worked. In truth, separate address spaces would have to wait for the 80386 and Windows NT to see the light of day, and this potential ability was never used. The 80386 both removed the 64-kilobyte segment limit and introduced a flat address space through paging, which brought the x86 processor more in line with other architectures.

Backwards Compatibility on Windows 3.1

While Microsoft’s backwards compatibility is the stuff of legend, in truth it didn’t really begin until Windows 3.1 and later. Since Windows 1.0 and 2.0 applications ran in real mode, they could directly manipulate the hardware and perform operations that would crash under protected mode.

Microsoft originally released Windows/286 and Windows/386 to add support for the 80286 and 80386, functionality that was merged in Windows 3.0 as Standard Mode and 386 Enhanced Mode, alongside legacy “Real Mode” support. Because parts of the operating system now ran in protected mode, many of the tricks applications had relied on would cause a General Protection Fault and simply fail. This wasn’t seen as a problem, as early versions of Windows were not popular, and Microsoft eventually dropped support for 1.x and 2.x applications entirely in Windows 95.

Windows for Workgroups was installed in a fresh virtual machine, and HELLO.EXE, plus two more example applications, CARDFILE and FONTTEST, were copied over with it. Upon loading, Windows did not disappoint, throwing up a compatibility warning right at the get-go.

Accepting the warning showed that all three applications ran fine, albeit with a broken window size due to 0,0 being passed into CreateWindow().

However, there’s a bit more to explore here. The Windows 3.1 SDK included a utility known as MARK. MARK was used, as the name suggests, to mark legacy applications as being OK to run under protected mode. It could also enable the use of TrueType fonts, a feature introduced in Windows 3.1.

The effect is clear: HELLO.EXE now renders with TrueType fonts. The reason TrueType fonts are not enabled by default can be seen in FONTTEST, where the system typeface now overruns several dialog fields.

The question now was, can we go further?

35 Years Later …

As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same, however, was not true of Windows NT, which modern versions of Windows are based upon. Running 16-bit applications is complicated by the fact that NTVDM is not available on 64-bit installations, so a fresh copy of 32-bit Windows 10 was installed.

Some pain was suffered convincing Windows that I didn’t want to use a Microsoft account to sign in. After inserting the same floppy disk used in the previous test, I double-clicked HELLO, and the Feature Installer popped up asking to install NTVDM. After letting NTVDM install, a second attempt showed that, yes, it is possible to run Windows 1.x applications on Windows 10.

FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared. CARDFILE loaded but immediately died with an initialization error. I did try debugging the issue and found WinDbg at least has partial support for working with these ancient binaries, although the story of why CARDFILE dies will have to wait for another day.

In Closing …

I do hope you enjoyed this look at ancient Windows and HELLO.C. I’m happy to answer questions, and the next topic I’m likely going to cover is a more in-depth look at the differences between Windows 3.1 and Windows for Workgroups combined with demonstrating how networking worked in those versions.

Any feedback on either the article, or the video is welcome to help me improve my content in the future.

Until next time,

73 de NCommander

Web Rendering Proxy – Full Page Scrolling

(This is a guest post by Antoni Sawicki aka Tenox)

Due to popular demand I have added an option to generate a full-page-height screenshot and let the client browser do the scrolling.

This makes the browsing experience much smoother, provided you have the resources for it. Beware: a full-page screenshot can be several MB in size encoded as GIF/PNG, and much more as a decoded raw bitmap on the client. I managed to crash Mosaic and OmniWeb a few times. Fortunately a typical Wikipedia page is under 1 MB, so for the most part it should be fine. To activate it, just put 0 in the page Height field.

I have drafted a pre-release on github for testing. Please let me know of any feedback. I’m also still debating whether to enable this by default or not.