Qemu’s Macintosh Quadra in alpha usability! (runs A/UX!)

I’m being a bit unfair: as far as alphas go it’s rough to get going, but wow, it’s GREAT! For starters it’s a Quadra 800, so System 7.1 through 8.1 will work. It also has full 68040 capabilities, so yes, that means an MMU, and YES, A/UX (and NetBSD!) will run.
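If you just want a taste, the invocation ends up looking something like this (the ROM and disk file names here are placeholders; the emaculation guide has the full set of flags):

qemu-system-m68k -M q800 -m 128 -bios Quadra800.rom -drive file=quadra.img,format=raw,media=disk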

As always you can find more on emaculation, the best source for news and info on emulating the Mac.

Additionally you can find the setup guide here.

Many of my Shoebill/Cockatrice III images didn’t work at all; some were at least picked up as blank disks. I had less luck with freshly created raw/vmdk or qcow2 disks, and I’m not sure why. My minimal System 7 2GB disk worked fine as a donor, and even converting it to a vmdk was fine. Sooo, YMMV. But hey, it’s an alpha, and YES IT CAN WORK.
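Converting or creating disks is plain qemu-img work; for example (the file names are just examples):

qemu-img convert -f raw -O vmdk minimal7.img minimal7.vmdk
qemu-img create -f qcow2 blank.qcow2 2G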

Another plus is that the idle loop works fine so it won’t burn 100% of your CPU. This could possibly be a great gopher server!? Time will tell.

The Gould SEL Concept 32/87

Despite Gould’s location being a few minutes’ drive from where I first arrived in America, I never had any idea they existed, were making their own exciting machines, or were even a Unix VAR. During the Unix wars you were expected to choose either SysV or BSD, but Gould had gone another direction with UTX, taking a ‘why not both’ approach. Truly an ’80s miracle of Unix.

Well, as luck would have it, there is a SIMH emulator! James C. Bevier has a project on GitHub where he’s building his SEL32 emulation on SIMH!

Even better, he’s included tape images and working INI files, which I was able to turn into a working system (after some help with a tape bug)!
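Running it follows the usual SIMH pattern of handing the simulator an INI file, something like this (the binary and INI names are placeholders; use whatever the repository ships with):

./sel32 sel32.ini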

Boot
File is COFF format
-> section (.bss) size (177960) clearing at (0xcbc18)
-> section (.text) size (724800) loading at (0x1200)
-> section (.data) size (105176) loading at (0xb2140)

Start 0x1200

UTX/32 2.1B (exp) #0: Mon Apr 10 19:46:05 GMT 1989
    bsln@fenix:/usr.POWERNODE/src/src/sys/obj

V6 CPUIPU(P) configuration (IPU not present)
top of system = 0x400000
real mem = 8388608
End of kernel map 0x218464
avail memory = 7356416
using 256 buffers containing 262144 bytes of memory
using 256 mirror buffer headers
ioi: channel iop0 at 7e00 online
ioi: channel dc0 at 800 online
ioi: channel dc1 at c00 not present, dci cc=2
ioi: channel dc2 at 400 not present, dci cc=2
ioi: channel tc0 at 1000 online
ioi: channel en0 at e00 online
 -- CHECK AND RESET THE DATE!
swapping on the b partition
dmmax 512 mbswap 3576
dumplo 11776
Checking root filesystem
Check commented out, uncomment once you have edited /etc/fstab!
Automatic reboot in progress...
Mon Aug 30 05:35:46 CDT 2021
/etc/fsck -p /dev/rdk0d
/etc/fsck -p /dev/rdk0e
/etc/fsck -p /dev/rdk0f
File systems OK

Mon Aug 30 05:36:06 CDT 2021
Mounting file systems
/dev/dk0d mounted on /usr.POWERNODE
/dev/dk0e mounted on /home
/dev/dk0f mounted on /usr/local
Initializing loopback
Starting line printer daemon
Starting standard daemons: update cron.
Adding swap partitions
Standard setup functions
Invoke local rc file
Entering /etc/rc.local
dumpdirectory: No such file or directory
Starting Syslog Daemon
Starting local daemons: inetd.
Starting NFS/RPC daemons: portmap sund.
Mounting NFS filesystems
Leaving /etc/rc.local
Starting mail
Checking aliases file
Preserving editor files
Clearing /tmp - does not remove directories
Clearing pseudo terminals
Leaving rc
Mon Aug 30 05:36:07 CDT 2021


GOULD UTX/32 2.1B (noname) (console)

login:

It feels very BSD on boot, and in the /usr directory there are 5bin and 5lib directories.

Sadly, transferring stuff by just pasting it into the console reveals that there are some I/O issues in the simulator:

syncing disks... done

dumping to dev 101, offset 11776
ioi: channel dc0 at 800 online
dump succeeded

As a matter of fact doing anything too fast can/will panic the simulator. That goes for Ethernet and additional serial ports.

Interesting highlights of the platform:

Produced by hard-params version 4.1, CWI, Amsterdam
Compiler does not claim to be ANSI C

Char = 8 bits, signed
Short=16 int=32 long=32 float=32 double=64 bits
Char pointers = 32 bits
Int pointers = 32 bits

Alignments used for char=1 short=2 int=4 long=4
Character order:
    short: AB
    int:   ABCD
    long:  ABCD

An obvious issue with the platform is the lack of GCC. The PCC compiler, while standard for early-’80s non-PDP-11/VAX machines, became a bit lacking as the years went on. I was unable to build gzip due to the following error:

# gmake
cc  -o gzip gzip.o zip.o deflate.o trees.o bits.o unzip.o inflate.o util.o crypt.o lzw.o unlzw.o unpack.o unlzh.o getopt.o
ld: warning: near subsegments too big for static base spanning
ld: gzip.o:
        no base for reloc of memref instruction at .nbtext+0x18 relative to symbol _progname
ld:
        1221 more 'no base ' errors
gmake: *** [gzip] Error 4

Sadly I didn’t find much on AltaVista other than this & this, which only offers this terse comment:

The constraints on address space on a Gould are quite severe.

Bummer. Additionally, neither Hack 1.0/1.03 nor PDP-11 Hack will build. Surprisingly, bash-1.14.7, make-3.75 and ircii-2.5 compiled. Obviously, with no networking, IRC is kind of pointless.

It’s an interesting time capsule of life outside of AT&T/CSRG or Sun, going in a different direction. It seems like a larger lost opportunity that their ‘it runs both’ approach to software was never made available on other platforms. Granted, for a hardware company, once the software leaves, the compelling reason to buy the hardware evaporates. Hello, NeXT.

If anyone wants to try to re-create it, download and build the SEL32 emulator from GitHub; I put my vague instructions here.
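The build itself is the usual clone-and-make affair; roughly (repository URL left out, and assuming the make-driven SIMH-style build):

git clone <the SEL32 repository>
cd SEL32
make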

Or, for like-minded OS tourists, you can give it a spin here: UTX32_2.1B.7z. I included a ‘9346-UTX-blank.disk’ file which is already prepared, if you don’t want to go through the 15 questions to prep a disk. Likewise, I made a ‘9346-UTX-biga-blank.disk’ image which is just a single large ‘a’ partition, since it’s trivial to add a bunch of big disks these days.

Full 32bit Unix machines from Ft Lauderdale! Who knew?

Double Agent, WSLv2 and named pipes

So digging around an old SDK I came across an old friend, Microsoft Agent:

This was the bold new strategy of having a digital assistant on the desktop that you could interact with to help you with common tasks and common issues. Oddly enough, as popular as Alexa is these days, Microsoft’s attempts didn’t work out so well.

Perhaps it was Clippy, of Microsoft Office infamy, that left a bad taste in the world of talking animated agents. Circling back to the popular Alexa, perhaps Clarke/Kubrick had it right in that people prefer an omnipresent voice to some animated animal. Perhaps the need to animate Cortana led to its downfall as well.

Agent was at least an open-ended platform, so third parties could drive the agent. However, like so many other innovative things Microsoft made in the late 1990s, like Internet Explorer, Comic Chat, and ActiveX, Microsoft Agent is no longer supported on Windows 10 (I didn’t even try Vista or 7). Enter Double Agent, a 32-bit/64-bit ActiveX emulator of the old Microsoft Agent control. Download some characters for end users, install them as Administrator, and you are in business!

How cool. Now for the fun part: I took the sample ‘Hello’ from the Microsoft Agent Web SDK for C and added a named pipe, so it simply sits on \\.\pipe\agent1 and will speak anything you send it. Pretty simple, right?
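Once it’s running, anything that can write to that pipe can make the agent talk; from a CMD prompt, for example:

echo hello there > \\.\pipe\agent1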

Adding WSLv2

Now, one of the cool things about WSL (Windows Subsystem for Linux) is that you can run Linux commands from the CMD prompt. For example:

C:\Users\jason>wsl uname -a
Linux remlazar 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 GNU/Linux

Although there is a mechanism for sharing Unix sockets between WSL and Windows, I opted for something more casual and simple: stdio redirection and a named pipe, with this simple command:

@wsl x=$(fortune %1);echo ${x,,} > \\.\pipe\agent1

I should add that I found out the hard way that UPPERCASE words are read letter by letter by the agent, so I have to do the ‘,,’ trick to force the output to lowercase. Pipes and redirects inside the command appear to be interpreted by CMD, so I used a shell variable to capture the fortune output instead.
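Wrapped up as a tiny batch file (the name is just my pick), the whole thing is:

@echo off
rem tell.cmd - grab a fortune via WSL, lowercase it, and push it to the agent's pipe
wsl x=$(fortune %1);echo ${x,,} > \\.\pipe\agent1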

So with some pipes and a simple example, I now have one of those annoying desktop agents reading jokes to me from Linux. It’s not a terribly complicated or involved program, but sometimes it doesn’t have to be. I do like how reading from a pipe is a great lowest common denominator: anything that can open a file can send data to a named pipe, which makes it ubiquitous.

I guess if I were more invested, I’d add timers and have the agent walk around, sleep, disappear, etc. But I’m happy enough for it to act as text-to-speech. The only downside is that once kids see it, it’ll be the greatest thing ever. Perhaps Microsoft wasn’t wrong; it’s just that the magic of an animated bird reading ‘Zippy the Pinhead‘ fortunes appeals more to children than to adults.

I’m sure there are books written about user interfaces and the rise and fall (and rise again) of the PDA, but I wonder what they have to say about Microsoft Agent?

Cross compiling to AIX: or missing shr.o

I was inspired by NCommander’s MinGW-to-Solaris cross compiler, so I thought I’d dig out the one that got me started decades ago: cross-compiling to the RS/6000 from Linux, some time back in 1993. For this experiment I was able to beg/borrow a copy of /usr/lib and /usr/include from AIX 3.2.5 and wanted to use that as a base. I decided to use GCC 2.7.2.2 and Binutils 2.11.2, as these were old enough to build easily enough from MinGW/MSYS 1, and I figured they also had the best chance of being able to parse the headers without needing ‘fixinc’.

I was able to build both binutils and GCC with this simple incantation:

sh configure --target=ppc-ibm-aix325 --prefix=/aix3

One weird thing was that the binutils build completely sidestepped ld, so I had to configure that manually like this:

--target=powerpc-ibm-aix --prefix=/aix3

Also, ‘eaixppc.c’ didn’t generate properly; I had to rebuild binutils from Linux to get it to pick up and build that file, then copy it back in to get a working cross linker. Older stuff has issues with CR/LF from time to time, and sometimes it’s easier to do the build on another system and pluck files out as needed.
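For the record, the whole dance was roughly this, a sketch from memory; it assumes the borrowed AIX /usr/include and /usr/lib are already sitting under /aix3, and that ld gets its own configure pass as noted above:

(cd binutils-2.11.2 && sh configure --target=ppc-ibm-aix325 --prefix=/aix3 && make && make install)
# ld configured separately inside the binutils tree with --target=powerpc-ibm-aix --prefix=/aix3
(cd gcc-2.7.2.2 && sh configure --target=ppc-ibm-aix325 --prefix=/aix3 && make LANGUAGES=c && make install)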

Surprisingly, things built, and transferring the resulting binary to my Qemu AIX image gave me this fun error:

exec(): 0509-036 Cannot load program /cdrom/demo/hello/hello because of the following errors:
0509-150 Dependent module libc.a(shr.o) could not be loaded.
0509-022 Cannot load module libc.a(shr.o).
0509-026 System error: A file or directory in the path name does not exist.

Surprisingly IBM has a fix!

# export LIBPATH=$LIBPATH:/usr/lib
# /cdrom/demo/hello/hello
hello world, compiled by GCC 2.7.2.2!
#

Amazing.

Of course it’s not all sunshine and rainbows; bigger programs like the ’87 Infocom interpreter bomb like this:

C:\aix3\demo\infocom>gcc -v -o infocom file.o funcs.o infocom.o init.o input.o interp.o io.o jump.o object.o options.o page.o print.o property.o support.o variable.o term.o
gcc version 2.7.2.2
ld -T512 -H512 -btextro -bhalt:4 -bnodelcsect -o infocom /aix3/lib/crt0.o -L/aix3/lib file.o funcs.o infocom.o init.o input.o interp.o io.o jump.o object.o options.o page.o print.o property.o support.o variable.o term.o /aix3/lib/libgcc.a -lc /aix3/lib/libgcc.a
ld: section .data [0000000000000000 -> 00000000000007ff] overlaps section .text [0000000000000200 -> 0000000000009b0b]
ld: section .loader [0000000000000000 -> 00000000000014a8] overlaps section .text [0000000000000200 -> 0000000000009b0b]
gcc: Internal compiler error: program ld got fatal signal 1

Initially I thought this was a problem with the GCC linker, but after copying the objects to Qemu and linking from there, I found that the GNAT gcc driver calls the linker in a different manner:

ld -bpT:0x10000000 -bpD:0x20000000 -btextro -bnodelcsect -o infocom /aix3/lib/crt0.o file.o funcs.o infocom.o init.o input.o interp.o io.o jump.o object.o options.o page.o print.o property.o support.o variable.o term.o /aix3/lib/libgcc.a /aix3/lib/libc.a /aix3/lib/libgcc.a

Reformatted for my cross toolchain, this produces a running executable.

And finally, phoon, which relies heavily on floating-point math:

C:\aix3\demo\phoon>ld -bpT:0x10000000 -bpD:0x20000000 -btextro -bnodelcsect -o phoon /aix3/lib/crt0.o phoon.o date_parse.o astro.o /aix3/lib/libc.a /aix3/lib/libgcc.a /aix3/lib/libm.a
/aix3/lib/libm.a(atan2.o)(.pr+0x308):atan2.c: undefined reference to `__itrunc'
/aix3/lib/libm.a(atan2.o)(.pr+0x33c):atan2.c: undefined reference to `__itrunc'
/aix3/lib/libm.a(atan2.o)(.pr+0x3c4):atan2.c: undefined reference to `__itrunc'

At first I thought I could just tack -lm onto the end. However, remembering from years ago: linkers ARE order dependent, and on AIX libm must come before libc.

C:\aix3\demo\phoon>make
ld -bpT:0x10000000 -bpD:0x20000000 -btextro -bnodelcsect -o phoon /aix3/lib/crt0.o phoon.o date_parse.o astro.o /aix3/lib/libm.a /aix3/lib/libgcc.a /aix3/lib/libc.a

And yep it runs!

Sadly, networking is a bit goofed on 4.3.3, and I’m unable to upload more than a few hundred bytes before the console stalls, so SLIP/PPP would be a bit useless.

Speaking of useless, if anyone is crazy enough, you can follow along here: MinGW-AIX325.7z

Installing Debian Linux 5.0 on the Qemu DEC Alpha

Years ago I’d written a terribly vague and generic quickie post noting that Debian 5 will in fact boot inside of Qemu. I didn’t really want to get into it, as it’s a little complicated and a lot painful. People have cried out over the years, so I figured I’d finally help.

Linux for the DEC Alpha

For operating system tourists, I’ll save you the story: just go here and download qemu-2.2.50-DecAlphaDebian.7z. You’ll quickly find out it’s borderline useless and move on to your next thing. You’re welcome.

First things first: this was running in 2014, and newer Qemu builds seem to behave… strangely, so I stuck with a late-2014 build of Qemu. If you deviate from there, you are on your own. I did dig up quite a few other DEC Alpha AXP emulators and put them on archive.org, but that’s totally up to you; again, I’m sticking with 2014’s Qemu.

The BIOS/PALcode on Qemu’s Alpha is far from complete and cannot read disks, so no boot sectors, no boot loaders, and no on-disk kernels. All is not lost: you can inject a kernel and initrd directly, and this is where the fun is. Obviously, to install, you need to extract both from the ISO file. Also make sure the kernel is decompressed; add a .gz extension and decompress it, and it should come out to around 6MB.
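Something along these lines pulls both out of the ISO (done from WSL or any Linux box; the in-ISO paths are from memory, so treat them as placeholders, and p7zip or a loop mount both work):

7z e debian-5010-alpha-netinst.iso boot/vmlinuz boot/initrd.gz
mv vmlinuz vmlinux.gz
gzip -d vmlinux.gz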

qemu-system-alpha.exe -net nic -net user -drive file=alpha.vmdk,if=ide,media=disk -drive file=debian-5010-alpha-netinst.iso,if=ide,media=cdrom -initrd initrd.gz -kernel vmlinux

That is, vaguely, how to boot up the installer. Partition the disk, make sure hda1, hda2, hda3, etc. are created, and all is fine.

Now, the next bit of fun is that you need to extract the initrd that the installer created, since the installer initrd always launches the installer. The BusyBox in it cannot create tar files, there is no ftp or scp, and I couldn’t get it to even try to mount NFS shares.

However, on Windows 10 with WSLv2, or on a Linux machine, you can mount the disk image (keep it raw, or convert it back to raw first):

losetup /dev/loop0 myimage.disk      # attach the raw disk image
partprobe /dev/loop0                 # make the partitions show up as loop0p1, loop0p2, ...
mount /dev/loop0p2 /mnt/myimage      # loop0p2 is the root filesystem in this layout

These three steps will let you mount the disk, in this case /dev/loop0p2, which is the root filesystem. Debian didn’t have partprobe installed, so I had to apt-get install parted first.

Now that it’s mounted, you can copy out the boot/initrd.img-2.6.26-2-alpha-generic file.
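From there it’s just a copy, then tidy up the loop device:

cp /mnt/myimage/boot/initrd.img-2.6.26-2-alpha-generic .
umount /mnt/myimage
losetup -d /dev/loop0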

We do need to tell Linux where the root filesystem is, so the final Qemu invocation looks like this:

qemu-system-alpha.exe -net nic -net user -drive file=alpha.vmdk,if=ide,media=disk -initrd initrd.img-2.6.26-2-alpha-generic -kernel vmlinuz-2.6.26-2-alpha-generic -append "root=/dev/hda2"

Obviously this was a lot more time-consuming than it should have been, but now I can do useful things.

Also, sometimes Qemu just sits there with a black screen; the UI is waiting for something, I’m not sure what. It’ll either come to life on its own, or you’ve got to bang on it.

Zir Blazer’s latest QNX update

This reply from Zir was so large and so detailed that I didn’t feel it should be buried on an older post; rather, it deserved its own chance as a full page. -editor

I should have posted this a LONG time ago, back when the actual winner was announced, but procrastination is what it is. Better late than never, I guess. Actually, I did receive a half-prize for my efforts, but it went unmentioned (which at times I don’t mind; privacy is privacy).
Mihai Gaitos was a savior of sorts because he appeared at the right moment, when I was already hitting a brick wall as that was the limit of my skills. When I noticed that there was no way to complete the challenge by brute forcing, and that programming skills were required to reverse engineer it, or to do something from scratch like the winner did, I knew it was out of my league.

Programming always eluded me even though I always considered it an essential skill. Yet I’m still proud of the things I did to get there, as I’m normally a very lazy person who likes to read about how people do stuff but rarely does it myself. At the end of the day, money makes me dance and that is why I got involved, so I’m a capitalist pig at heart, heh.

When I first saw the challenge posted, I passed it to a few programmer friends to see if they were interested in giving it a try (Who doesn’t want a chance at a significant money prize? Good friends always tell others about opportunities). Among those, one had his own hobby OS project and another is a freelance programmer who did some private homebrew DOS games, so I considered them skilled enough to try. Sadly, none of them showed interest.
I noticed that after like two days of the challenge being posted there were still no comments, which I thought was odd, because the previous challenge with a 100 USD prize received a ton of comments and general interest, and with a prize 20 times bigger it should have guaranteed the participation of highly skilled people, so I found it quite weird that no one had completed it by then. Either it was too hard to be done in 48 hours, which would make me totally unqualified, or no one had noticed it in the first place, which would make me potentially lucky (and the neozeed blog not very popular, then. Shame on him!). It is also possible that other potential challengers thought that, with heavy competition, they might not make it before someone else got crowned winner. I went with the last two lines of thought.

My belief that I had enough skill to complete the challenge came from the rules of the challenge itself, as it explicitly mentioned that some things like mix-and-match Boot Loaders were allowed. As I had recently read about some people managing to get Windows XP x64 booted in UEFI Mode by using a Boot Loader from a Vista beta version, before such support was removed from later versions, I thought I had a solid idea worth trying that was actually possible for me to do.

That is precisely what I did. I thought that it worked, and hastily uploaded my result (my first comment on this Blog), THEN noticed that I had actually mixed up the QNX 2 Kernel with the 1.2 userspace, since in 1.2 the Kernel was by default out-of-filesystem and thus not a file that can easily be overwritten during a copy. I had to backtrack my claim.

By the time I did so, I had caught A LOT of people’s attention. Since I thought that I was close enough, and I had to save face after my first failure, I decided to keep pushing forward (which I don’t regret; it was both fun AND profitable). The rest is history. The single thing I’m still not entirely convinced about is the lack of other participants until Mihai Gaitos posted that he was going to get into the challenge, given the fact that for the previous ones multiple people posted (forty being one of the previous winners), so I don’t know if monopolizing the comments section with my updates as the only contestant at that point dissuaded others from participating who would have done so had I never posted in the first place.

After both self-glorifying and self-deprecating, what comes next is obviously my QNX impressions.

I had heard about QNX a few times (Ironically, I think that the first time was in an OS/2 Museum guest article made in 2013 by… guess who, Tenox), and from the comments of people that actually used it, it was held in high esteem, since a common phrase when talking about QNX was that “it was years ahead of its time”. Actually, I even found it highly surprising that the two programmers I mentioned telling about the challenge actually remembered QNX from a floppy demo that was distributed in a local computer magazine from the middle of the 90’s (As they mentioned that it had a built-in browser and fitted on a 1.44M floppy, the obvious one is the QNX 4.05 Demo floppy, qnx_demo_405_network.ISO.xz in Tenox’s repository). It seems that that demo was something special if it could generate a lasting impression and make some people remember QNX for it alone.

First, keep in mind my actual OS usage experience: I was a child playing DOS games during the golden era of the middle 90’s, and got into Windows 9x/XP like every other average consumer. My first true experience with anything non-Microsoft was Linux beginning in 2013, when I decided to try PCI Passthrough with Xen to make a Windows gaming VM as an excuse for a main Windows user to try something else (This was before doing so became common). At some point I even wrote a guide about that, but since no more than 2 or 3 people used it when it was still up to date (And due to being based on Arch Linux, I had to occasionally recheck everything to make sure that it was still updated), and then everyone and their moms began to write guides by the time that standalone QEMU with VFIO became better than Xen for passthrough, I lost interest.

The point is, I can’t really make any proper comparisons, due to the fact that my first-hand experience with OS variety is limited. Since I have almost no other direct experience, most of my knowledge about the existence of other OSes and their capabilities comes from Wiki articles, Blog posts, scans from computer magazines, comments from other people with first-hand experience, etc. Thus, since things always fit into a context, I can’t really compare a lot of aspects against other contemporaries; these are more like my own impressions set against what I know of that era.

QNX seemed perhaps far closer to a modern command line Linux distribution than I was expecting, as after I managed to understand a few command differences (Like how mount worked), the basics seemed to be mostly the same. I pondered whether this is because QNX was “years ahead of its time”, or because UNIX OSes were already quite mature even by the early 80’s, as if during the last 4 decades the basics didn’t change that much. Yet, I can’t directly compare it to other UNIXes for the reasons I already gave. The documentation, like most 80’s stuff, is quite complete and easy to understand and follow, like the manual installation chapter, which reads like a walkthrough.

Perhaps the only thing I didn’t like was the text editor, because vi-style editors actually force me to RTFM to be able to edit and save something, whereas mainstream text editors tend to be intuitive about the most basic functionality. If I recall correctly, my issue was that I couldn’t actually get it to switch back and forth between edit mode and command mode, which seems to be because the editor manual described the location of certain keys based on the layout of the 84-key Model F Keyboard, which I’m not used to, and I was also confused by old terms that no one uses today (Carriage Return is supposed to be Enter, Backspace, or maybe Escape?).

Among the things I found quite interesting about QNX is that it was based on a MicroKernel paradigm. It being something from the early 80’s, my first thought was about how it fit into the Linus Torvalds vs Tanenbaum debate:
https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate

The summary of that debate is that Torvalds favored Monolithic Kernels whereas Professor Tanenbaum was all about MicroKernels. What that debate seems to be missing is a practical comparison between actual working, fully featured OSes instead of theoretical mumbo jumbo about Kernel design. That is where QNX fits in, since, being a very early, fully featured third party UNIX-like based on a MicroKernel, it looks like a good representative of that class. I think that before hearing about QNX, I didn’t know about any other Kernel based on that design paradigm, much less a full OS, that wasn’t either squarely aimed at embedded systems or experimental or educational in nature, thus comparing a full blown GNU/Linux to QNX for this debate seems natural to me. I don’t know if there are other candidates that fit into such a comparison by the middle of the 90’s, when this topic began.

Also, what I noticed is that people seem to associate MicroKernel based OSes (Including QNX) with “Real Time OS”, and think about those as something highly specialized, usually targeting embedded systems, whereas I see no reason for them not to be usable as a generalist OS. I mean, early on in its life QNX was called QUNIX, so it seems that it was intended to be related to the *NIX family. The focus on being a RTOS seems to have put it far from its original identity. Even if from a marketing or commercial point of view that was better, I find it rather curious that it scared away certain types of users who somehow don’t think RTOSes can directly compete with a *NIX.

There are, however, side stories that make my affair with QNX far more interesting. Going back to my experiences, during my time toying around with QEMU it became obvious that mastering virtualization was quite a bit harder than it looks, since in order to learn everything QEMU can do via either virtualization or emulation, at some point you start reading about stuff so old that eventually it becomes convenient to learn things in order from the very beginning: the 1981 IBM PC. Thus I ended up with a nice hobby of digital archeology, which is the reason I usually read this Blog and OS/2 Museum, among others.

Although it is a massive amount of knowledge, it seemed easier to digest if you start piling stuff on top of it so you can follow how the whole platform evolved. The end result is that I wrote this (Still not finished, nor do I know if it ever will be), focusing on how the PC platform evolved:
https://zirblazer.github.io/htmlfiles/pc_evolution.html?ver=123

From my findings, one of the most jaw-dropping moments was perhaps when I heard that Microsoft already had its own licensed UNIX version, XENIX, from BEFORE their involvement with the IBM PC. Supposedly, anything UNIX was very hard to port to the 8088 CPU due to the lack of advanced (for the time) Processor features like a MMU for Virtual Memory/Memory Protection, making multitasking far harder to implement, albeit there were at least two other UNIXes that were ported to run on the IBM PC. Since QNX can be added to that list, too, being a UNIX-like with multitasking capabilities that could even run on the original IBM PC with its 8088, it impresses me a bit more, as that was supposed to not be easy to do.

Just by learning about the existence of XENIX, I eventually began to look down on PC DOS. I knew that early DOS versions were a rather dull and bare OS and that it was pretty much for “lights on” purposes (I even refer to PC DOS 1.0 as being only useful as a “FAT File System API”), but when you realize what was already possible even by the early 80’s, it gets far more disappointing, and it is even more surprising that it managed to last as long as it did. One has to wonder how much of a lease on life DOS got from things like Expanded Memory, the XMS API, and the amazing 386 features for backwards compatibility that ended up being used for EMMs and then 386 DOS Extenders, all of which actually made DOS quite usable until the end of the millennium. But I’m sure that other OS alternatives could also have got there, while being far less messy.

What piqued my curiosity is that Microsoft in the early 80’s appeared to want to replace DOS with XEDOS (A single user XENIX variant) a few years before the whole OS/2 affair with IBM. It seems that DOS wasn’t considered very scalable from even its earliest days, plus the 286 MMU made UNIX on x86 far more viable. Microsoft wanting to push an enterprise UNIX Kernel as a DOS replacement somehow seems similar to what it would eventually do 15 years later, when it pushed the server/enterprise NT as the Win9x replacement. Stagnation in getting a proper DOS replacement allowed those hacks that extended its useful life to proliferate, and made it even harder to get rid of it.

When reading about this, I saw a sort of power vacuum around 1985-1986 (Post IBM PC/AT and pre Compaq DeskPro 386 and OS/2) where DOS was already showing its limitations, yet there were barely any means to extend it, and no solid replacement on the horizon. After noticing that power vacuum, I thought about whether there was any chance for a “UNIX for the masses”, like XEDOS was supposed to be, to capture enough market share to snowball into a prominent DOS replacement and dethrone it, thus changing history as we know it. This is exactly what hit me the most when hearing about QNX and its capabilities, more so after using it.

Before, I had only thought about Microsoft discontinuing DOS to replace it with XEDOS to force the market that way; I never considered that a third party could potentially outdo it. QNX was an already available UNIX-like OS that ticks all the boxes, and even has FAT compatibility, so I see it as something that could have been perfect for such a role (I don’t know how good compatibility was for running DOS applications from within QNX, but there was nothing stopping you from Dual Booting, and even the QNX MBR Boot Loader was capable of that without changing the Active Partition, whereas the MBR Boot Loader from DOS wasn’t and forced you to do so).

At this point what I pondered is whether QSS ever had any idea similar to that, or if they were just comfortable enough with selling QNX to those that would typically use UNIX systems, namely enterprise customers (I don’t know when QNX began to focus on marketing itself as an embedded RTOS instead of a third party UNIX-like). Yet, these things are in the realm of business decisions more than any technical merit, which QNX seems to have plenty of.

I don’t know specifics about QNX and UNIX prices in general, but it is not hard to figure out that they fetched quite a lot, thus it may not have made much sense to pursue individual hobbyists in low value, volume markets and drag down the price of commercial UNIXes as a whole (Which incidentally is what GNU/Linux would eventually do by giving away a full UNIX-like for free). I privately asked Mr. Dodge about whether back then they ever thought about getting QNX into the mainstream, but he dodged that question (Pun intended). However, the ICON computer for education markets being powered by QNX somehow reminds me of the Apple II, which was also prominent in that market segment, but I don’t know whether QSS actually thought of it as a means to get into the average consumer mindshare, or if they never pursued mainstream or hobbyist users at all.

This is the part where I get into Bill Gates, and how Microsoft took over the world with the Jurassic DOS. What I find most notable about his business strategy was that from early on he always seemed to want to go for the bottom of the barrel, by bundling Microsoft Software everywhere. But perhaps the most interesting revelation was the way that he consciously used piracy as a second route to end users, as said by him in this famous 1998 quote:
https://www.latimes.com/archives/la-xpm-2006-apr-09-fi-micropiracy9-story.html

“Although about 3 million computers get sold every year in China, people don’t pay for the software. Someday they will, though,” Gates told an audience at the University of Washington. “And as long as they’re going to steal it, we want them to steal ours. They’ll get sort of addicted, and then we’ll somehow figure out how to collect sometime in the next decade.”

So piracy ain’t so bad if you can get a return in some way or another. It is also interesting to note that the same Bill Gates wrote the lesser known Open Letter to Hobbyists in 1976, and it is worth noticing the difference in tone:
https://en.wikipedia.org/wiki/Open_Letter_to_Hobbyists

It seems that at some point in time (Perhaps the middle of the 80’s, as Microsoft seems to have dropped most copy protections by then. The breaking point was after some false positives with menacing messages in Word that made it to the news, if I recall correctly), Bill Gates figured out that piracy was highly beneficial to get the installed user base of some of its mainstream middleware products as widespread as possible. That strategy always made total sense to me, because even if people don’t outright pay for the Software they consume, it gets vastly more exposure to the public, making its developers more known (Brand recognition) and also becoming a better target for developers wanting to port Software to other platforms with more potential customers (Thus an OS may be considered middleware). It also pushes your formats (Like Word .doc) as a de facto standard that forces even more people to use, or at least be compatible with, your products, since you can’t really miss support for the formats of the market share leaders, at least if you share documents or other stuff with any other normal human being instead of just your geek circle. It is a very wide snowball effect, and is not entirely detrimental for the ecosystem of the pirated Software, but I would say that it is worse for any possible competition, as they lose potential sales and get absolutely NOTHING in return. This was part of Bill Gates’ stratagem, for sure.

As one of the challenges involved cracking the copy protection, I wondered if QNX would have benefited in the long run from piracy in the early days, letting people get to know it more and growing its user base at the cost of potential legit sales. If getting mainstream was the goal, maybe. Just ask Autodesk; perhaps it wouldn’t be as well known if piracy of its Software like AutoCAD hadn’t been rampant (And its availability discourages people from looking around for other cheaper or even free Software), ignoring the fact that the original licenses usually ran to four digits, just like a UNIX did in those days.

This whole piracy point is where I think that Bill Gates was ahead of the curve: the race-to-the-bottom to get Microsoft Software everywhere, bundled with everything, and at any cost, seeing piracy as a means to an end. I can’t think of any other viable plan for a “UNIX for the masses” that was commercial in nature without involving piracy, given that it was going to compete with “free” Microsoft products (Unless you could bundle as much as Microsoft). Then in the 90’s we got Linux and all the free BSD variants, yet they still couldn’t take over the already well established mainstream DOS, Windows, and their massive Software ecosystem. It was too late by then.

I always found all this stuff quite interesting, since after all, all those decisions shaped our current world, and I have a fetish for “what if” scenarios. For example, what would have happened if IBM, for the IBM PC, had picked the Motorola 68000 instead of the Intel 8088, which was one of the other potential choices? Would the rest of events have unfolded in the same way? Would the IBM PC be at least as successful and influential as it was? Would the 68K have given programmers fewer headaches due to the supposedly cleaner ISA? Would we still be using 68K ISA based Processors with 40 years of backwards compatibility? Would a modern Processor based on the 68K ISA be smaller, faster, and less buggy than our current x86 Processors? Would Intel and AMD still exist as such?

Perhaps the most awe-inspiring story is how the Intel 386 came to be. What makes it one of my favorite interviews is that you get ALL of the juicy details that give you the complete context you need to understand how the decisions were made, why one option was picked over the other, and what those other choices were. What the internal situation at Intel was, what they thought about their competition, what the plans were and how they changed over time, etc. You can get a complete picture of what it was like to be part of making history. For those that are interested in stories like that, I absolutely recommend this one:

https://archive.computerhistory.org/resources/access/text/2015/06/102702019-05-01-acc.pdf
https://archive.computerhistory.org/resources/access/text/2015/06/102702022-05-01-acc.pdf

Come to think of it: Intel thought of the entire x86 lineup as a stopgap filler product, because the next generation iAPX 432 was going to be the long term, forward thinking architecture. When it was finally available, it was a catastrophe. Then, simultaneously, Intel’s DRAM business also began to take a downturn due to the dramatically increased competition from Asian semiconductor manufacturers. The only thing left for Intel was putting everything into the x86 basket thanks to the newfound success of the IBM PC, and the end result was nothing short of a miracle. I still recall seeing on retro forums ex-IBMers mentioning that Intel would be irrelevant, or not exist at all, if it wasn’t because IBM picked the 8088 in the first place, and I can’t say that they aren’t right.

There are things that in retrospect are incredible to read, like that in the initial design stages they were still discussing whether the 386 should be backwards compatible or not with the previous x86 CPUs, which would be a rather obvious choice right now (Intel would eventually do a non-Real Mode compatible, 32 Bits only 386: the 376. If you never heard about it before, you can guess how successful it was). Or how several decisions made for the 286 due to the pressure of the Zilog MMU ended up becoming totally useless baggage that 35 years later is still present in every x86 Processor.

Point is, neither x86 nor DOS was actually designed to scale in a forward looking manner. They were both stopgap products by both Microsoft and Intel, which, by accident, ended up becoming the de facto standards even though there were superior options at the time. Yes, I know that this is something that has happened a lot of times both in computing history and in other completely different technologies, and each person has their own favorite technology that should have taken over the world but lost against another, inferior one. And yes, I know that QNX ended up being successful in the embedded market and that it is probable that either directly or indirectly I have used it at some point as part of another product, yet I find it a bit depressing that it isn’t well known outside its niche, nor part of the computing pop culture.

Why would I care about all that? Because I have ambitions of Total World Domination™, of course! So learning about why certain things took over the world (Especially those that shouldn’t have) is something that I’m always fascinated by.

My first attempt was pushing passthrough based setups relying on IOMMU virtualization as a means to make a fully functional Windows gaming VM, so that you could containerize Windows and leave bare metal to host something else (Initially Xen + Linux, then Linux with QEMU-KVM-VFIO) with just a minor loss of performance and compatibility, as getting Windows out of direct Hardware access serves as a means of transitioning to something else. Albeit I scored some users, the vast majority of people were like, “why bother?”. Ironically, years later the Techtubers made these setups popular and there are plenty of users now, so I can say that the tech was successful, but I failed at making outside people notice.

My next (And current) attempt is trying to push Coreboot (Or any other means of open source Firmware) to the masses, as a means of being able to provide long term support for a Motherboard Firmware instead of being dependent on the Motherboard vendor’s will to fix issues or add Firmware features (Which they will never do, because this kills a lot of the incentive to upgrade to newer boards during the same platform lifespan). You can read my efforts at trying to educate consumers here:
https://zirblazer.github.io/htmlfiles/coreboot.html?ver=123

Ironically, I got into this after noticing how broken IOMMU support was in the early generations of the tech even when the Hardware was fully capable, simply because no consumer Motherboard vendor bothered to consistently implement it properly at the Firmware level, nor did they fix it upon request. At that point I got annoyed at having to deal with them for Firmware support and decided to look for alternatives.

Perhaps what surprised me the most is that there are fewer people interested in this than in passthrough, even though there are thriving BIOS modding communities where everyone tries to achieve what the Motherboard vendors didn’t want to openly provide. But most people seem not to care about alternatives that solve the issue from the root, since if they can get what they want via modding the proprietary BIOSes, there is less incentive for a proper solution (Reminds me of DOS Extenders and all those things that allowed Jurassic DOS to have a two decade useful life). Thus there is a lack of consumer demand even among people that should think like me.

To sum up, I’m still looking for the meaning of life, existence, and everything.

Citrix South Beach: aka the missing link from text to graphics

A long, long time ago, on a distant continent, I once interviewed at this small company called Citrix. It was for some QA position; they didn’t need programmers. I passed the interviews easily, as I’d been programming serial TSRs, so I was hip to the 8250/16450. Citrix was an interesting but troubled company. They had incredible contacts and, more importantly, a deal from Microsoft that gave them access to OS/2. Sadly OS/2 1.0 had been a dud, and by the time OS/2 2.00 saw even a limited release, Microsoft had pulled out of OS/2. Citrix was a company that had lost twice in what should have been a big market: multi-user commodity systems.

Citrix Multiuser 1.0 was based on OS/2 1.21 and was limited to 16-bit protected mode apps. Citrix Multiuser 2.0 was based on the OS/2 2.00 Limited Availability version, which means that it cannot run “GA” or General Availability programs. So no 32-bit programs here. Instead it can run the same 16-bit protected mode applications; however, it can also run MS-DOS based programs. DOS/4GW programs do run, so oddly enough the only real commercial 32-bit stuff that can be run is MS-DOS based.

So here we were in 1994. Citrix had struck out twice, but this time it was going to be different, although the deal had to be re-struck. I have no idea how they managed to secure this lucrative deal again, but Citrix was able to get source access to Windows NT after the 3.1 release to third parties (when they got DEC involved). By now the world had gone Windows, Office 4.2 was a thing, and on the high end side NT had SQL & SNA, and there was most definitely a market for multiuser systems, as there had been from the old days of Unix, with the old mix of ASCII and networked graphical terminals.

The CD looks like a normal-ish NT 3.5 Server CD, although there are no MIPS or Alpha builds; as expected, everyone at Citrix would be working on and targeting the larger established i386 market.

As you can see this is Beta build 101.

In the text mode setup it looks like a normal setup program. No doubt they had better things to do than skins, wallpapers, and themes. HOWEVER, there is a silent IDE bug that many people will no doubt run into:

Although it works okay in short bursts, the IDE driver will send a zero-byte command 28 and then shut down the controller. From that point it hangs. So that means we either need to generate all the floppy disk images (not going to happen!) or do the MS-DOS cross install. Yeah, I’m doing that instead.
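The cross install itself is the standard NT routine: boot MS-DOS with the network client, map a share holding the CD contents, and run a floppyless setup from the i386 directory. Roughly (the share name is hypothetical, and I’m assuming this beta’s winnt.exe takes /b like retail 3.5 does):

net use z: \\server\southbeach
z:
cd \i386
winnt /b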

When setting up under Qemu, use the AMD PCNET card. It’s much easier. I set it to twisted pair and the PCI bus. I’m not sure if those matter all that much, but it works for me!
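For reference, this is roughly the sort of Qemu command line I mean (the image names are mine):

qemu-system-i386 -m 64 -hda southbeach.img -cdrom southbeach.iso -net nic,model=pcnet -net user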

If you are going to use Hyper-V, you’ll need the GF100 NIC driver, but use the Windows NT 3.1 driver, as this is technically a beta of NT 3.5 and the production 3.5 driver will blue screen.

I set the driver to autosense.

I also had both Qemu and Hyper-V bluescreen when doing DHCP. I don’t know what the issue is, and I’m too old to care as I don’t have source code to South Beach, and even if I did I’d probably regret posting fixes. So static IP address it is!

Ready to login

Honestly, again, the vibe in the office when I was there was that everyone was running around like crazy to QA the product and get ready to expand client support. While I was too much of an OS/2 fanboy, they certainly knew that from then on everything was going to be about Windows NT.

Logging into Citrix, the first fun thing to do is to define some remote terminals using the WinStation app.

The first interesting thing is that async terminals are supported, along with either NetBIOS or Winsock protocols for connecting clients. Isn’t that great? TCP/IP built in!

Now for the crazy part. The only client that works is MS-DOS based. Yes, there is no Win16, no Win32, no Java, no protected-mode DOS, no Linux, SunOS, Solaris, DG/UX, AIX, HP-UX, Xenix, UnixWare, or SysV i386 ABI client. ONLY real-mode MS-DOS. Despite the connections being ICA version 2 or 3, they are incompatible with the newer Windows-based clients from WinFrame.

This is the list of supported protocols. Although I have Novell LAN WorkPlace and used it before for DESQview/X, I can’t find it at the moment. Good luck finding FTP TCP/IP (from FTP Software); in retrospect it’s a terrible name, and for all intents and purposes it’s disappeared from the earth. So that leaves Microsoft TCP/IP. Now, all the LANMAN clients have it, although that isn’t what it wants. It wants the MSCLIENT found in the “\CLIENTS\MSCLIENT\NETSETUP” path on a retail version of NT Server 3.5.

The DOS client is… very touchy. Deleting profiles can lead to a corrupted profile. Altering existing profiles, well, yeah, can also lead to a corrupted profile. I thought it was EMM386 causing issues, but it locks up on its own.

Revenge of text mode UI

One interesting thing I found is that the text mode UI didn’t die. It’s still very much alive. As mentioned above you can connect async terminals, or even connect over the network!

Text mode does bring up a Program Manager analogue, but all my programs are graphical, so it’s kind of moot. Rest assured, though, text mode stuff works great.

PowerStation Oregon Trail

So 32-bit Fortran stuff works great; what about MS-DOS?

Here is the MS-DOS QBasic editor running on Citrix South Beach! Great, what about OS/2?

OS/2 F2C Dungeon

And here we go, running Dungeon through the f2c translator to get an OS/2 text mode app. As you can see, forcedos reveals that this isn’t a bound executable; instead it only runs on the OS/2 subsystem.

As you can see, here are the os2.exe/os2srv components of the OS/2 subsystem.

And of course it looks better on the graphical client, where you can mix and match them all.

Win32/Win16/OS/2 all at once!

Indeed Word & Excel for NT work great alongside everything else.

Obviously somewhere post-South Beach the text mode stuff dropped off. I’ll have to dig for more, but the idea of a real text mode NT is kind of neat. Sadly, South Beach doesn’t seem to like VMware. I haven’t dug too far; since I like WSLv2, I’m stuck with Hyper-V. It may work fine on ESX, but I haven’t tested it. Obviously you need the appropriate drivers; I’ll try to update links later, if anyone cares.

No doubt Citrix was finally positioned to realize the dream of multiuser commodity hardware along with commodity applications. Of course it wouldn’t be all sunshine and rainbows, and no doubt there was a toll to be paid around Windows NT 4.0 and on the way to Windows 2000. But back in 1994, things were looking good!

Conventional RAM aka that old foe

640K ought to be enough for anyone. Well, I’ve been poking around with an old beta that I had a long, long time ago: lost, found, lost again, recovered, lost, and found again while looking for something entirely different. I’ll spoil it later, but anyway, while messing around I needed an MS-DOS client, and it needs the MSNET TCP/IP stack, not to be confused with the LANMAN TCP/IP stack, and it doesn’t work with the Windows for Workgroups stack either. So yes, I set up all three, and of course found out that the one it needs, the MSNET one, really is the worst of the three.

Anyway, conventional memory is the memory below 1MB. Back when the PC was new, going from an Intel 8080 processor that could address a mere 64KB of RAM to the IBM PC that could address a whopping 1MB seemed unlimited. A decision was made to split that 1MB into 640KB for user programs, reserving the upper 384KB of address space for hardware.

MSD memory map

And then something happened where drivers became user programs, and suddenly, after loading a mouse driver, CD-ROM driver, audio driver, and networking stack, you don’t have enough memory available. Welcome to the living hell that was 1988-1995. In this virtual machine, although it has 64MB of RAM, the largest free block in MS-DOS with everything loaded is 366KB.

Microsoft Windows and DOS (among other products) started to include a fun tool, MSD (Microsoft Diagnostics), that would let you explore your memory to see what was actually in use.

Imagine the absolute frustration here. 64MB of RAM, and yet there isn’t enough free to run a simple program. HOW ANNOYING!!!

Looking back at the MSD memory map, you may notice that it shows memory marked as available and possibly available. What does that mean? It means that there are no ROMs or device RAM currently using that hardware-reserved address space. Sadly, 8088/80286 users either don’t have an MMU, or have one that only really works for protected-mode segmentation. The 80386, however, has an MMU sophisticated enough to let you map whatever you want wherever you want: boot MS-DOS into a protected-mode environment and use V86 mode to remap memory, via the included program EMM386.EXE. I’m sure plenty of others have touched on this program, so I’m only going to take a quick glance at it.

Typical PC memory map

If you look at a typical PC memory map, you’ll find that the region A000-AFFF is reserved for graphics memory. Since we are using VGA, B000-B7FF (the monochrome text area) is also free, which means for text mode programs we can open up that range for smaller programs and drivers, along with the memory after the VGA BIOS up to the ROM BIOS of the computer: that’s CC00-CFFF in my case, with D000-DFFF and E000-EFFF also being open. Obviously the fun comes in that not every PC has the same peripheral ROMs installed, so this isn’t guaranteed to work in every instance.

In my case I don’t need EMS emulation at all; I want to map it all as UMBs (upper memory blocks) for drivers and TSRs. So I load EMM386.EXE in CONFIG.SYS like this:

DEVICE=C:\DOS\EMM386.EXE NOEMS I=B000-B7FF I=CC00-CFFF I=D000-DFFF I=E000-EFFF

I didn’t put in any exclusion ranges, as EMM386 figured it all out on its own (confirmed in MSD), but you may need to specify ranges to leave alone.

This gives me 519KB of free conventional RAM. Oddly enough, a lot of the networking stack moved itself into UMBs without me having to do anything. That’s probably a function of the MSNET client I used, from a Windows NT 3.5 Server CD-ROM dated 1994, so I didn’t have to play with the LOADHIGH command.

Back when the PCem forum was up I had this config, though keep in mind that it was far more aggressive!

DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE 4096 frame=d000 x=a000-afff i=b000-b7ff x=b800-bbff x=c000-c7ff i=c800-cfff i=e000-efff ram
DEVICEHIGH=C:\DOS\CD1.SYS /D:CDROM01
DEVICEHIGH=C:\DOS\SETVER.EXE
DEVICEHIGH=C:\DOS\ANSI.SYS
DOS=HIGH,UMB
FILES=40
@ECHO OFF
PROMPT $p$g
PATH C:\DOS;c:\windows
SET TEMP=C:\TEMP
LH MSCDEX /D:CDROM01
LH SMARTDRV
LH IDLE
LH DOSKEY
LH SHARE

This got me a whopping 619KB free in MS-DOS, along with 4MB of EMS and 12MB of XMS (on a 16MB config).

In the spirit of the old ‘Linking the linker‘ (I’m not certain that this is the actual article, but it certainly reads the same way; didn’t Tim have two blogs?), I went ahead and claimed the video memory for the heck of it.
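That’s just one more include range on the same EMM386 line as before:

DEVICE=C:\DOS\EMM386.EXE NOEMS I=A000-AFFF I=B000-B7FF I=CC00-CFFF I=D000-DFFF I=E000-EFFF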

Using range A000-AFFF (64KB EGA/VGA Graphics RAM)

Obviously you cannot run graphical programs, but 605KB of conventional RAM, with some 206KB worth of network drivers loaded! Not bad. I could probably squeeze a 32KB EMS frame in there and get what would be an incredible 1-2-3 machine for the era. But I’m not such a big Lotus 1-2-3 fan anymore.

As always, it’s 2021, and normal people will glance at this and go “WTF, you have 64MB of RAM, how can you be fighting for kilobytes?” Anyone who used MS-DOS based networking will cringe and look the other way. These were not happy times.

In other news, the client ran; sadly, it’s too new for the server.