How to fix rsync slowing down over time (SOLVED)

(This is a guest post by Antoni Sawicki aka Tenox)

I often make copies of large data archives, typically many TB in size. I found that rsync transfer speed slows down over time, typically after a few GB, especially when copying large files, eventually reaching crawl speeds of just a few KB/s. The internet is littered with people asking the same question, or why rsync is slow in general. There really isn’t a good answer out there, so I hope this may help.

After doing some quick profiling I found that the main culprit was rsync’s advanced delta transfer algorithm. The algorithm is super awesome for incremental updates, as it will only transfer the changed parts of a file instead of the whole thing. However, when performing an initial copy it’s not only unnecessary but actively gets in the way: the CPU spins calculating checksums on chunks that could never have changed. As such…

Initial rsync copies should be performed with -W option, for example:

$ rsync -avPW <src> <dst>

The -W or --whole-file option instructs rsync to perform full file copies and skip the delta transfer algorithm. As a result there is no checksum calculation involved, and maximum transfer speeds are easily achieved.

Long term, rsync could be patched to do a full file transfer if the file doesn’t exist at the destination.

Also, while copying jumbo archives of many TB I don’t want to see every individual file being copied. Instead I want a percentage of the total archive size and the current transfer speed in MB/s. After some experiments I arrived at this weird combo (--info=name0 suppresses the per-file names, while --no-i-r disables incremental recursion, so the whole file list is scanned up front and the progress percentage is computed against the full total):

$ rsync -aW --no-i-r --info=progress2 --info=name0 <src> <dst>
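
If you do this often, a tiny shell function (just a convenience sketch; name it whatever you like) keeps the incantation in one place:

# initial bulk copy: whole files, overall progress only
bigsync() {
    rsync -aW --no-i-r --info=progress2 --info=name0 "$@"
}

Then an initial copy is simply bigsync <src> <dst>.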

Ready to run OpenVMS VM – Student Kit from VSI

(This is a guest post by Antoni Sawicki aka Tenox)

I was recently registering a new OpenVMS Community License. In the process I learned that there is a ready to run, pre-installed and pre-configured VM with OpenVMS 8.4. Completely free for non-commercial purposes. You don’t even need to register or leave your details (WOW). Just download and run! Thank you VSI!

https://training.vmssoftware.com/student-license/

The student kit runs only on Windows, as it contains the FreeAXP emulator. However it’s super easy to download, install and run.

VSI OpenVMS Student Kit

I’m hoping that in the near future, once the x86 OpenVMS port is ready, there will be images for x64 hypervisors like VMware, VirtualBox, Hyper-V and QEMU/KVM.

Undocumented Madness – 2.9BSD on XHomer

This is a guest post by Seal331

Since I’ve been dealing with XHomer a lot lately in order to get the two dumped VENIX/PRO versions to work, I noticed that the XHomer documentation mentions a thing called “maintenance mode” and the DEC Pro port of 2.9BSD, so I was interested.

After doing a bit of searching around I found some install notes on www.frijid.net from real hardware, so I decided to adapt these notes for XHomer and install it. TL;DR – I did it, and here I explain all this stuff.

Step 1 – Acquiring XHomer

XHomer is a DEC Pro 350 emulator that can run P/OS, Venix, 2.9BSD and possibly RT-11, though I haven’t gotten around to installing the last one yet. There is a statically linked binary but, since I’m a Gentoo Linux person (though I didn’t use Gentoo for this particular install) and prefer compiling everything I can, I grabbed the source code (https://xhomer.isani.org/xhomer/xhomer-9-16-06.tgz) and quickly compiled it on my Linux box. It was pretty simple: just install a development toolchain (build-essential on Debian-based systems), the libX11 development package (libx11-dev on Debian-based systems) and the XShm extension, which is included in libxext-dev on Debian-based systems. During make it spat out a bunch of warnings, but I got a working xhomer binary. It also messes up the xterm settings a bit after being closed, so I’d recommend running it in a separate xterm window. Since there’s no install target in the Makefile I just copied the xhomer binary to /usr/bin, and that was it. From here on I will assume that the XHomer binary is called xhomer and is somewhere in your PATH; if not, just adjust the way I run XHomer.

Step 2 – Acquiring the distribution

Thanks to the people at the same www.frijid.net site I mentioned earlier, I was able to easily piece together a distribution set. Since with an emulator we don’t really care how many physical floppies there are, I grabbed the recommended root disk set and the 16-disk usr set with the source code, although we won’t be compiling the kernel in this post. Maybe next one? We’ll see.

The site with the floppies is http://www.frijid.net/download/pro350/bsd/raw/ and here’s what I used for my install:

box#0/maintenance0.img
box#1/usr+k00.img
box#1/usr+k01.img
box#1/usr+k02.img
box#1/usr+k03.img
box#1/usr+k04.img
box#1/usr+k05.img
box#1/usr+k06.img
box#1/usr+k07.img
box#1/usr+k08.img
box#1/usr+k09.img
box#1/usr+k10.img
box#2/usr+k11.img
box#2/usr+k12.img
box#2/usr+k13.img
box#2/usr+k14.img
box#2/usr+k15.img
box#2/root1.img
box#2/root2.img
box#2/root3.img
box#2/root4.img
box#2/root5.img

The 3 disk usr set in box#2/ doesn’t include the source, so I didn’t grab it.
The maintenance disks are all the same, so I just grabbed the one in box#0/.
The 6 disk root set in box#0/ does include some extra dev files and what appear to be leftovers from the development DEC Pro, but it’s missing /bin/ed and /bin/passwd, so I suggest using the 5 disk set instead.

There is also box#2/procomm.img, which was labeled as containing “PRO/COMM terminal emulation”, but when I mounted it to install, there was only an empty lost+found directory. Perhaps the original disk had gone bad over the years, or someone accidentally reformatted it? We may never know.

Step 3 – XHomer configuration & serial port preparation

Since the maintenance (install) floppy uses a serial terminal interface over the printer port, and XHomer only allows us to send its output over serial, I had to do some searching again, since most PCs nowadays don’t have a serial port to use. Thanks to cantoni over at StackOverflow I found instructions for using socat to generate a pty, which worked for me. First install socat (bruh) and then run “socat -d -d pty,raw,echo=0 pty,raw,echo=0”. Something like this will be printed on the terminal (sample from a typical run; your timestamps, PID and pts numbers will differ):
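2021/12/30 17:10:01 socat[12345] N PTY is /dev/pts/3
2021/12/30 17:10:01 socat[12345] N PTY is /dev/pts/4
2021/12/30 17:10:01 socat[12345] N starting data transfer loop with FDs [5,5] and [7,7]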

Then we do a quick test. I use putty to connect to the pty’s output, in my case it’s /dev/pts/3. Just use the default settings for serial connection with speed 9600 and the device as /dev/pts/3. If everything goes well, you will get a blank putty terminal window. Don’t panic, the fact it’s blank is normal.

Let’s test that our serial port works. Echo something into the pty’s input side, in my case /dev/pts/4, for example: echo "Test" > /dev/pts/4. If the word “Test” appears on the screen, congratulations: you have successfully set up the pty to the point where BSD will happily talk to it once we set up the connection later. !! DO NOT CLOSE THE PUTTY WINDOW AT ANY POINT DURING THE INSTALL UNTIL WE NO LONGER NEED IT (at the initial hd boot) !!

Now we configure XHomer. First, let’s make a disk image. BSD only supports the RD51 or RD50; we’ll use the RD51 as it’s slightly bigger. If you get the hard disk type wrong, BSD will silently hang at boot. Here’s the command to make a 10MB RD51 disk image for use with XHomer:

dd if=/dev/zero of=29bsd.rd bs=10027008 count=1
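
That byte count is just the RD51 geometry multiplied out: 4 heads × 306 cylinders × 16 sectors × 512 bytes per sector = 10,027,008 bytes, matching the rd0 geometry line in the config file below.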

Let’s make the XHomer config file. Note that everything after the | symbol, including the symbol itself, does not need to be entered; it’s just my notes.

screen = window | make the emulator window mode
window_position = 0, 0
window_scale = 2
full_scale = 3
screen_gamma = 10
pcm = on
framebuffers = 0
serial0 = /dev/pts/4 | change to your needs, pty input
la50 = null
la50_dpi = 300
kb = lk201
ptr = serial0 | DO NOT CHANGE, we'll replace it later when we no longer need serial
com = null
rd_dir = ./
rx_dir = ./
rd0 = 29bsd.rd, 4, 306, 16 | change if not using suggested disk
force_year = 99 | fix y2k bugs by forcing year to 1999
maint_mode = on | DO NOT CHANGE, bsd install uses maintenance mode for terminal
int_throttle = off | random workarounds for clocks or older linux systems, we don't need this on new stuff
nine_workaround = off
libc_workaround = off
lp_workaround = off

Save the file as xhomer.cfg.

Now run the xhomer binary. If everything goes right, you should have something like this on your screen:

If you didn’t run the test documented above, or echoed a different string, the “Test” string will not be in the terminal or will be some other text; this is all okay.

Step 4 – BSD install

To feed floppies to XHomer you have to use the XHomer control menu; to get to it, press Ctrl+F1 while the emulator window has focus. The two floppy drives we need are rx0: and rx1:, the equivalents of A: and B: in DOS. Insert the maintenance0.img disk in rx0. If all goes okay, the floppy disk picture should disappear from the display window, leaving just the DIGITAL logo. The putty window should then display something like this:

40Boot
:

(all following input is in the terminal unless otherwise stated)

If all is okay, congratulations, you have booted from the installation diskette. Now type the following in the putty window after the : symbol:

r5(0,0)rdfmt

Then, if you inserted the suggested RD51 10MB disk in the emulator, type 0 when asked for the drive type. If you inserted the 5MB RD50 instead, type 1. If you don’t know the exact disk sizes and types, refer to the XHomer documentation, specifically the Emulated Hard Disk part. The formatting shouldn’t take long, and it will then dump you back at the 40Boot prompt. Now you need to boot the UNIX kernel; type this in the putty window:

r5(0,0)unix

If everything goes okay, you should have something like this now:

If you get a boot hang instead (like me), restart both XHomer and putty, connect putty back to the pty, then in XHomer insert the maintenance0 floppy again and boot the UNIX kernel again. DO NOT FORMAT THE DRIVE AGAIN!!

Install time!

First, create the root filesystem by running:

/etc/mkfs /dev/rrd0a 2240

Then insert the root1 disk in the 2nd floppy drive (rx1) and restore the root filesystem dump from the 5 root set floppies:

restor rf /dev/rr51 /dev/rrd0a

When it says “Last chance before scribbling on /dev/rrd0a.” just press Enter.
When it says “Mount volume N”, just insert the right floppy and press Enter. Volume number == floppy number in this case.

After the “end of tape” message, verify the rootfs:

/etc/fsck /dev/rrd0a

If it succeeds, create the usr filesystem by running:

/etc/mkfs /dev/rrd0c 6528

Then insert the usr+k00 disk in the 2nd floppy drive (rx1) and restore the usr filesystem dump from the 16 usr set floppies:

restor rf /dev/rr51 /dev/rrd0c

When it says “Last chance before scribbling on /dev/rrd0c.” just press Enter.
When it says “Mount volume N”, just insert the right floppy and press Enter. Floppy number == volume number – 1 in this case.

After the “end of tape” message, verify the usr fs:

/etc/fsck /dev/rrd0c

(all following input is on the Pro display unless otherwise stated)

If everything is okay, run sync two times and shut down the emulator. Restart it with only the maintenance floppy in rx0, then type this in the terminal (NOT the Pro display):

rd(0,64)unix

This should boot up Berkeley UNIX (BSD). We’re not done yet, but it’s close.

Type the following to install the hard disk bootblock:

dd if=/rdboot of=/dev/rrd0h count=17

If everything goes okay, set the root password:

passwd root

Congratulations, you have successfully installed 2.9BSD. Here’s the cleanup and hdboot prep stuff:

Bring the OS to single user mode:

shutdown +1

(you can close putty now)

Then run sync two times and shut down the emulator.

Step 5 – Booting the OS

In order to boot the OS, you need to do the following:

Open the xhomer.cfg file;

Remove the serial0 = line;

Change the ptr = serial0 line to ptr = null;

Change the maint_mode = on to maint_mode = off.
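
When you’re done, the relevant part of xhomer.cfg should read:

ptr = null
maint_mode = off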

Then save; after running XHomer you should be able to just log in.

Congratulations, you have successfully installed 2.9BSD for the DEC Pro 350! Sadly it’s pretty unstable, and due to emulation issues in XHomer vi completely crashes BSD, but there’s always ed 😉

Appendix A – Transferring Files

In order to transfer files (up to 400KB per file) you will need some additional utilities. Here’s a guide on how to install them:

(the following steps are done on the Linux host side)

  1. Grab the following files:

https://xhomer.isani.org/xhomer/BSD/f2rx
https://xhomer.isani.org/xhomer/BSD/rx2f.c
https://xhomer.isani.org/xhomer/BSD/lbn2xhomer.c

  2. Apply the following patch to lbn2xhomer.c:
--- lbn2xhomer.c   2015-07-05 07:51:19.000000000 +0300
+++ lbn2xhomer.c        2021-12-30 17:13:28.539768500 +0300
@@ -25,6 +25,7 @@

 #include <string.h>
 #include <stdio.h>
+#include <stdlib.h>

 #define SECSIZE                512
 #define SECTORS                10
@@ -66,7 +67,7 @@
   if (fptr_v == NULL)
   {
     printf("Unable to open %s\n", argv[1]);
-    exit();
+    exit(1);
   }

   fptr_x = fopen(argv[2], "w");
  3. Compile lbn2xhomer:
cc -o lbn2xhomer lbn2xhomer.c
  4. Set up f2rx for operation:
chmod +x f2rx
  5. Make a floppy image containing the BSD-side utility’s source:
./f2rx rx2f.c
  6. Run XHomer and attach the generated rx2f.c.dsk to rx0.

(the following steps are done on the BSD side)

  1. Grab the file from the floppy:
dd if=/dev/r50 of=rx2f.c skip=18 bs=1 count=891
  2. Compile the utility:
cc -o rx2f rx2f.c

You’re now ready to transfer files.

Short file transfer handbook:

  1. Run f2rx FILE on the host box, FILE being the file to use;
  2. Insert FILE.dsk into rx0 on XHomer;
  3. Run rx2f on the BSD side.
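
For example, to move a hypothetical notes.txt from the host to BSD:

./f2rx notes.txt          (on the host; produces notes.txt.dsk)

Then attach notes.txt.dsk to rx0 via Ctrl+F1 and run rx2f on the BSD side, which recreates notes.txt in the current directory.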

Appendix B – Init: no more children issue workaround

On some hosts, programs from the install floppy may randomly die with the “no more children” message. A workaround is to disable RTC mode and enable IOTRACE mode in the XHomer Makefile and recompile, resulting in a much slower (due to accurate timing) but working XHomer. After the installation you can revert to the normal settings and it should work, as the programs installed on the hard drive do not appear to suffer from the same issue.

Appendix C – Sequels

Possibly coming soon to VirtuallyFun:

Undocumented Madness 2 – Big hard drives on 2.9BSD XHomer
Undocumented Madness 3 – Custom Kernel on 2.9BSD XHomer

Revisiting Windows NT 4.0 MIPS on QEMU

(This is a guest post by Antoni Sawicki aka Tenox)

This was previously well covered by Gunkies and Neozeed; however, as almost a decade has passed, some improvements could be made and annoyances fixed.

Firstly, NT MIPS now works in 1280×1024 resolution under QEMU. It previously had issues with mouse tracking, but this is now fixed, so the new image has a higher resolution.

Secondly, the old images were made with a FAT filesystem, which I didn’t like too much. The reason for that is that the infamous RISC NT osloader needs to be placed on a FAT partition. Then, if NT is installed on a second NTFS partition, the default drive will be D:\, with C:\ being just the osloader drive. This was super annoying in practice, so a common procedure was to just have one FAT partition for both osloader and winnt. I fixed it by supplying a pre-partitioned disk and specifying the second partition for the osloader and the first for NT.

Also, I previously had just a bare/vanilla image with no additional software installed. The new image includes most of the available apps, including IE3, some editors, the Reskit and Visual Studio.

Lastly, I wanted to figure out all the right settings and flags for QEMU, as there were discrepancies between different sources and nothing seemed to work smoothly. The correct flags seem to be:

qemu-system-mips64el -hda nt4.qcow2 -M magnum -global ds1225y.filename=nvram -L . -rtc "base=1995-07-08T11:12:13,clock=vm" -nic user,model=dp83932

The -rtc flag is not really needed if you are ok with having the current date in the guest.

Thanks to Neozeed for figuring out the network settings! Unfortunately the old/legacy -net nic -net user no longer works, while the new -device doesn’t like dp83932. The documentation was quite helpful.

Thanks to reader Mark for pointing out the correct NVRAM settings! See comments below.

The new image with all the apps preinstalled is here and a plain “vanilla” here.

Curiously this now works right out of the box on QEMU 6.1 and is pretty smooth and stable compared to what it was before. Good job QEMU team and thank you! Just in case I still keep the old binaries for Windows made by Neozeed here.

Update: I built Yori for NT MIPS! You can download here!

Fun with Nano Server

(This is a guest post by Antoni Sawicki aka Tenox)

While everybody is busy buzzing about Windows 11, I wanted to commemorate the finest operating system ever made by Microsoft – Nano Server.

For most people Nano Server was esoteric, distant and unapproachable. It had a rather high entry barrier, requiring you to build it on a Windows Server 2016 host using PowerShell magic spells. You couldn’t simply download and run it. Even if you managed to get it running, there wasn’t anything you could actually do with it for fun, so people didn’t bother to even check it out. My goal is to demystify this a bit, lower the entry bar and make it easy for people to hack on it.

Background info (you can skip it)

Nano Server was an interesting attempt at creating a datacenter-grade OS that’s not managed via local GUI, keyboard and mouse, but rather through full automation, remote tooling and code. It went one step further than Server Core or Windows PE by completely removing the GUI components and local shell. Hence it’s not actually called “Windows” or “Windows Nano” but simply “Nano Server”. Rumor has it, it started as MinWin. The OS has a rudimentary text mode console with functionality similar to that of the VMware ESXi console. However, Nano was much more than a bare metal hypervisor. It was a fully fledged operating system: unlike ESXi you can develop and install services/apps for it, and hypervisor wasn’t even its default role.

Ever since I first saw a demo at Microsoft Ignite (previously known as TechEd) I wanted to run aclock on the text console, much like the WinNT BSOD edition. This article started around my efforts to run (or port, if needed) aclock to this platform. At the time of writing, the technology has been dead for several years now. However, all the artifacts and documentation are still available on Microsoft’s website. Probably not for long, so it’s a good moment to do this now, before everything gets deleted into oblivion.

How to quickly deploy Nano Server and run command line apps on the console

The hard way: you need to download Windows Server 2016 (eval) and run a PowerShell command to produce a bootable VHD file.

Microsoft provides (soon to be deleted) Nano Server Quick Start. However the steps are trivial so you can totally skip that and just do this:

  • Launch PowerShell terminal window on the WS2016 host.
  • Run: Import-Module D:\NanoServer\NanoServerImageGenerator -Verbose
    (D:\ drive being where Windows Server CDROM is mounted)
  • Run: New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath d:\ -BasePath c:\nano -TargetPath c:\nano.vhdx -ComputerName nano -Development
    (c:\nano folder and c:\nano.vhdx image will be created for you)

Done! This will build a .vhdx image that can be run under Hyper-V as a “Gen-2” VM. For Gen-1, or to run it on any other hypervisor, change .vhdx to .vhd in -TargetPath when running the PowerShell command, as shown below.
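
For example, the Gen-1 variant of the exact same command would be:

New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath d:\ -BasePath c:\nano -TargetPath c:\nano.vhd -ComputerName nano -Development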

The easy way: you can just download a pre-built VM image from here. There is a VHD for Hyper-V Gen-1, a VHDX for Hyper-V Gen-2, and an OVA for everything else.

First Boot

Once you boot it up you will be greeted with a PowerShell prompt. Just like that! You can type cmd to launch the good old cmd.exe shell. MS-DOS 2016?

Keep in mind, this is developer mode (see the -Development flag). Normally you would be greeted with a login prompt and a boring menu that lets you change some networking settings and not much beyond that. In production mode you need to resort to hacks (or this) to get stuff running; fortunately nothing like that is needed here.

So what can you run on it?

Firstly, in order to get some external utilities going, you can mount an SMB share using net use in cmd or New-SmbMapping in the PS world. Nano being a server and all, you can also share out a folder via net share or use C$ (you may need to create a user first, using either net user /add in cmd or New-LocalUser in PS). Alternatively you can install Posh-SSH and use SCP to transfer files. If you don’t have working network you can just shut it down, mount the vhd image on the host, copy stuff into the image and detach the VHD.
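
For instance, pulling a tool in over SMB could look roughly like this from cmd (a sketch; HOST, the share name, user and password are all placeholders):

rem map a host share and copy a tool over
net use Z: \\HOST\tools /user:HOST\user P@ssw0rd
copy Z:\some-tool.exe C:\

rem or share a folder out of Nano itself
net user transfer P@ssw0rd /add
net share data=C:\data /grant:transfer,FULL

The PowerShell equivalent of the mapping would be New-SmbMapping -LocalPath Z: -RemotePath \\HOST\tools.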

Aclock worked on the first run, no issues, using standard win64 exe:

aclock running on Nano Server Console

Wow! So looks like Nano console does have basic terminal controls. That opens quite a lot of possibilities. But can you run more complex apps? Text editors? Web browsers? GAMES?

Well, yes… but likely not; it really depends – on dependencies (read: DLLs).

From all the editors I tried XVI is probably the best:

XVI Editor Running on Nano Server Console

Everything else has a variety of issues:

  • The font lacks line drawing characters. Fortunately, some editors like YEdit allow the use of ASCII drawing characters instead.
  • There is no reverse video. This manifests mostly in menus, etc., however it also applies to the cursor.
  • There is no cursor, or rather the cursor is an underscore and not a transparent cell. Pressing the left arrow in the CLI doesn’t actually move the cursor, it erases characters. There is no line editing.
  • Also related to reverse video, the Nano console appears to have some weird issues with colors.
  • Missing DLLs. Nano Server, not being a “Windows” OS, is missing a lot of Windows DLLs and has its own nano DLL hell. This was actually acknowledged in MinWin. As a result, a lot of apps will not launch due to missing dependencies.

For example YEdit works remarkably well except for the menus, which use reverse video:

YEdit running on Nano Server Console

Update: Malcolm has fixed this in the latest version of YEdit! Thank you!

Update: thanks to Ron Yorston you can also run BusyBox on Nano! All you need to do is get the 64-bit version and, before running it, set an environment variable to disable ANSI emulation: in CMD, set BB_SKIP_ANSI_EMULATION=0; in PS, $env:BB_SKIP_ANSI_EMULATION=0. Done!

BusyBox on Nano Server

You even get ls colors, and the vi editor works flawlessly! A Unix shell on Nano, that’s awesome!

So what about games?

As expected, initially nothing worked, either due to line drawing, colors, or the previously mentioned DLL hell. There was one game that actually worked: a PowerShell adaptation of Snake:

PowerShell Snake running on Nano Server Console

But I wanted something better. I had high hopes for ascii-patrol, which is pure text mode and has a win64 build. Unfortunately the game requires a bunch of multimedia/sound DLLs from Windows which are not present in Nano.

Thankfully Neozeed stepped in: he took the source code, amputated all the multimedia stuff, borrowed the Unix clock code and gettimeofday, and used an older Visual Studio to build it. He managed to produce a fully working and playable version!!!! Truly amazing stuff!

ASCII Patrol Running on Nano Server Console

The binary is available here. To play the game scroll down one screen to start a mission. If you enter profile customization simply press ESC to get out. Thanks again Neozeed!

I’m hoping readers can find more text mode/ascii apps and games that will work on the console. Please comment and send links!

In another dimension, having a working text editor, the Yori shell, and smb/scp, maybe with the help of mingw64, SDK tools or compilers borrowed from Visual Studio, one could have a self-hosted developer workstation with this.

For now, please just download the pre-built image, or make one yourself, run it in your favorite hypervisor and have some fun with it!

With this, goodbye Nano Server! You will always be remembered. I know the folks at Redmond tried really hard to make it such a beautiful gem.

Zir Blazer’s latest QNX update

This reply from Zir was so large and so detailed that I didn’t feel it should be buried on an older post, but rather given its own chance as a full page. -editor

I should have posted this a LONG time ago, back when the actual winner was announced, but procrastination is what it is. Better late than never, I guess. Actually, I did receive a half-prize for my efforts, but it went unmentioned (Which at times I don’t mind. Privacy is privacy).
Mihai Gaitos was a savior of sorts because he appeared at the right moment, when I was already hitting a brick wall, as that was the limit of my skills. When I noticed that there was no way to complete the challenge by brute force, and that programming skills were required to reverse engineer things, or to do something from scratch like the winner did, I knew it was out of my league.

Programming always eluded me even though I always considered it an essential skill. Yet I’m still proud of the things I did to get there, as I’m normally a very lazy person who likes to read how people do stuff but rarely does it himself. At the end of the day, money makes me dance, and that is why I got involved, so I’m a capitalist pig at heart, heh.

When I first saw the challenge posted, I passed it to a few programmer friends to see if they were interested in giving it a try (Who doesn’t want a chance at a significant money prize? Good friends always tell others about opportunities). Among those, one had his own hobby OS project and another is a freelance programmer who made some private homebrew DOS games, so I considered them skilled enough to try. Sadly, none of them showed interest.
I noticed that after about two days of the challenge being posted there were still no comments, which I thought was odd, because the previous challenge with a 100 U$D prize received a ton of comments and general interest, and a prize 20 times bigger should have guaranteed the participation of highly skilled people, so I found it quite weird that no one had completed it by then. Either it was too hard to be done in 48 hours, which would make me totally unqualified, or no one had noticed it in the first place, which would make me potentially lucky (And the neozeed Blog not very popular, then. Shame on him!). It’s also possible that other potential challengers thought that, with heavy competition, they might not make it before someone else got crowned winner. I took the last two lines of thought.

My belief that I had enough skill to complete the challenge came from the rules of the challenge itself, as they explicitly mentioned that some things, like mix-and-match Boot Loaders, were allowed. As I had recently read about some people managing to get Windows XP x64 booted in UEFI Mode by using a Boot Loader from a Vista beta version, before such support was removed in later versions, I thought that I had a solid idea worth trying that was within my abilities.

That is precisely what I did. I thought that it worked, and hastily uploaded my result (My first comment on this Blog), THEN noticed that I had actually mixed up the QNX 2 Kernel with the 1.2 userspace, since in 1.2 the Kernel was by default out-of-filesystem, and thus not a file that could easily be overwritten during a copy. I had to backtrack my claim.

By the time I did so, I had caught A LOT of people’s attention. Since I thought that I was close enough, and I had to save face after my first failure, I decided to keep pushing forward (Which I don’t regret, it was both fun AND profitable). The rest is history. The single thing I’m still not entirely convinced about is the lack of other participants until Mihai Gaitos posted that he was going to get into the challenge, given the fact that for the previous ones multiple people posted (forty being one of the previous winners), so I don’t know if monopolizing the comments section with my updates as the only contestant at that point dissuaded others from participating who would have done so had I never posted in the first place.

After both self-glorifying and self-deprecating, what comes next is obviously my QNX impressions.

I had heard about QNX a few times (Ironically, I think the first time was in an OS/2 Museum guest article made in 2013 by… guess who, Tenox), and by the comments of people who actually used it, it was held in high esteem, since a common phrase when talking about QNX was that “it was years ahead of its time”. Actually, I even found it highly surprising that the two programmers I mentioned telling about the challenge actually remembered QNX from a floppy demo that was distributed in a local computer magazine in the middle 90’s (As they mentioned that it had a built-in browser and fit on a 1.44M floppy, the obvious one is the QNX 4.05 Demo floppy, qnx_demo_405_network.ISO.xz in Tenox’s repository). It seems that demo was something special if it could generate a lasting impression and make some people remember QNX for it alone.

First, keep in mind my actual OS usage experience: I was a child playing DOS games during the golden era of the middle 90’s, and got into Windows 9x/XP like every other average consumer. My first true experience with anything non-Microsoft was Linux, beginning in 2013, when I decided to try PCI Passthrough with Xen to make a Windows gaming VM, as an excuse for a mainly-Windows user to try something else (This was before doing so became common). At some point I even wrote a guide about that, but since no more than 2 or 3 people used it while it was still up to date (And due to being based on Arch Linux, I had to occasionally recheck everything to make sure that it was still current), and then everyone and their moms began to write guides by the time that standalone QEMU with VFIO became better than Xen for passthrough, I lost interest.

The point is, I can’t really make proper comparisons, because my first-hand experience with OS variety is limited. Since I have almost no other direct experience, most of my knowledge about the existence of other OSes and their capabilities comes from Wiki articles, Blog posts, scans of computer magazines, comments from other people with first-hand experience, etc. Thus, since things always fit into contexts, I can’t really compare a lot of aspects against other contemporaries; these are more my own impressions measured against what I know of that era.

QNX seemed perhaps far closer to a modern command line Linux distribution than I was expecting, as after I managed to understand a few command differences (Like how mount worked), the basics seemed to be mostly the same. I pondered whether this is because QNX was “years ahead of its time”, or because UNIX OSes were already quite mature even by the early 80’s, as if during the last 4 decades the basics hadn’t changed that much. Yet I can’t directly compare it to other UNIXes, for the reasons I already gave. The documentation, like most 80’s stuff, is quite complete and easy to understand and follow, like the manual installation chapter, which reads like a walkthrough.

Perhaps the only thing I didn’t like was the text editor, because vi-style editors actually force me to RTFM to be able to edit and save something, whereas mainstream text editors tend to be intuitive for the most basic functionality. If I recall correctly, my issue was that I couldn’t get it to switch back and forth between edit mode and command mode, which seems to be because the editor manual described the location of certain keys based on the layout of the 84-Key Model F Keyboard, which I’m not used to, and I was also confused by old terms that no one uses today (Carriage Return is supposed to be Enter; Backspace, or maybe Escape?).

Among the things I found quite interesting about QNX is that it was based on a MicroKernel paradigm. Being something from the early 80’s, my first thought was about how it fit into the Linus Torvalds vs Tanenbaum debate:
https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate

The summary of that debate is that Torvalds favored Monolithic Kernels whereas Professor Tanenbaum was all about MicroKernels. What that debate seems to be missing is a practical comparison between actual working, fully featured OSes instead of theoretical mumbo jumbo about Kernel design. That is where QNX fits in, since, being a very early, fully featured third party UNIX-like based on a MicroKernel, it looks like a good representative of that class. I think that before hearing about QNX, I didn’t know about any other Kernel based on that design paradigm, much less a full OS, that wasn’t either squarely aimed at embedded systems, or experimental or educational in nature; thus comparing a full blown GNU/Linux to QNX for this debate seems natural to me. I don’t know if there were other candidates fit for such a comparison by the middle of the 90’s, when this topic began.

Also, I noticed that people seem to associate MicroKernel based OSes (Including QNX) with “Real Time OS”, and think of those as something highly specialized, usually targeting embedded systems, whereas I see no reason for them not to be usable as a generalist OS. I mean, early in its life QNX was called QUNIX, so it seems that it was intended to be related to the *NIX family. The focus on being an RTOS seems to have pushed it far from its original identity. Even if from a marketing or commercial point of view that was better, I find it rather curious that it scared away certain types of users who somehow don’t think RTOSes can directly compete with a *NIX.

There are, however, side stories that make my affair with QNX far more interesting. Going back to my experiences, during my time toying around with QEMU it became obvious that mastering virtualization was quite a bit harder than it looks, since in order to learn all that QEMU can do via either virtualization or emulation, at some point you start reading about stuff so old that it eventually becomes convenient to learn in order, from the very beginning: the 1981 IBM PC. Thus I ended up with a nice hobby of digital archeology, which is the reason I usually read this Blog and OS/2 Museum, among others.

Albeit a massive amount of knowledge, it seemed easier to digest if you keep piling stuff on top so you can follow how the whole platform evolved. The end result is that I wrote this (Still not finished, nor do I know if it ever will be), focusing on how the PC platform evolved:
https://zirblazer.github.io/htmlfiles/pc_evolution.html?ver=123

From my findings, one of the most jaw-dropping moments was perhaps when I heard that Microsoft already had its own licensed UNIX version, XENIX, from BEFORE their involvement with the IBM PC. Supposedly, anything UNIX was very hard to port to the 8088 CPU due to the lack of advanced (For the time) Processor features like an MMU for Virtual Memory/Memory Protection, making multitasking far harder to implement, albeit there were at least two other UNIXes that were ported to run on the IBM PC. Since QNX can be added to that list too, being a UNIX-like with multitasking capabilities that could even run on the original IBM PC with its 8088, it impresses me a bit more, as that was supposed to not be easy to do.

Just by learning about the existence of XENIX, I eventually began to look down on PC DOS. I knew that early DOS versions were a rather dull and bare OS and that it was pretty much for “lights on” purposes (I even refer to PC DOS 1.0 as being only useful as a “FAT File System API”), but when you realize what was already possible even in the early 80’s, it gets far more disappointing, and it is even more surprising that it managed to last as long as it did. One has to wonder how much of a lease on life DOS got from things like Expanded Memory, the XMS API, and the amazing 386 features for backwards compatibility that ended up being used for EMMs and then 386 DOS Extenders, all of which actually made DOS quite usable until the end of the millennium. But I’m sure that other OS alternatives could also have gotten there, while being far less messy.

What piqued my curiosity is that Microsoft in the early 80’s appeared to want to replace DOS with XEDOS (A single user XENIX variant) a few years before the whole OS/2 affair with IBM. It seems that DOS wasn’t considered very scalable from even its earliest days, plus the 286 MMU made UNIX on x86 far more viable. Microsoft wanting to push an enterprise UNIX Kernel as a DOS replacement somehow seems similar to what it would eventually do 15 years later, when it pushed the server/enterprise NT as the Win9x replacement. Stagnation in getting a proper DOS replacement allowed those hacks that extended its useful life to proliferate, and made it even harder to get rid of.

When reading about this, I saw a sort of power vacuum around 1985-1986 (Post IBM PC/AT and pre Compaq DeskPro 386 and OS/2) where DOS was already showing its limitations, yet there were barely any means to extend it, and no solid replacement on the horizon. After noticing that power vacuum, I wondered whether there was any chance for a “UNIX for the masses”, like XEDOS was supposed to be, to capture enough market share to snowball into a prominent DOS replacement and dethrone it, thus changing history as we know it. This is exactly what hit me the most when hearing about QNX and its capabilities, more so after using it.

Before, I had only thought about Microsoft discontinuing DOS to replace it with XEDOS to force the market that way; I never considered that a third party could potentially outdo it. QNX was an already available UNIX-like OS that ticked all the boxes, and even had FAT compatibility, so I see it as something that could have been perfect for such a role (I don’t know how good compatibility was for running DOS applications from within QNX, but there was nothing stopping you from Dual Booting, and even the QNX MBR Boot Loader was capable of that without changing the Active Partition, whereas the MBR Boot Loader from DOS wasn’t and forced you to do so).

At this point, what I pondered is whether QSS ever had any similar idea, or if they were just comfortable enough with selling QNX to those who would typically use UNIX systems, namely enterprise customers (I don’t know when QNX began to focus on marketing itself as an embedded RTOS instead of a third party UNIX-like). Yet these things are in the realm of business decisions more than any technical merit, which QNX seems to have plenty of.

I don’t know specifics about QNX and UNIX prices in general, but it’s not hard to figure out that they fetched quite a lot, thus it may not have made much sense to pursue individual hobbyists in low value, volume markets and drag down the price of commercial UNIXes as a whole (Which incidentally is what GNU/Linux would eventually do by giving away a full UNIX-like for free). I privately asked Mr. Dodge whether back then they ever thought about getting QNX into the mainstream, but he dodged that question (Pun intended). However, the ICON computer for the education market being powered by QNX somehow reminds me of the Apple II, which was also prominent in that market segment, but I don’t know whether QSS actually thought of it as a means to get into the average consumer mindshare, or if they never pursued mainstream or hobbyist users at all.

This is the part where I get into Bill Gates, and how Microsoft took over the world with the Jurassic DOS. What I find most notable about his business strategy is that from early on, he always seemed to go for the bottom of the barrel, bundling Microsoft Software everywhere. But perhaps the most interesting revelation was the way he consciously used piracy as a second route to end users, as said by him in this famous 1998 quote:
https://www.latimes.com/archives/la-xpm-2006-apr-09-fi-micropiracy9-story.html

“Although about 3 million computers get sold every year in China, people don’t pay for the software. Someday they will, though,” Gates told an audience at the University of Washington. “And as long as they’re going to steal it, we want them to steal ours. They’ll get sort of addicted, and then we’ll somehow figure out how to collect sometime in the next decade.”

So piracy ain’t so bad if you can get a return in some way or another. It’s also interesting to note that the same Bill Gates wrote the lesser known Open Letter to Hobbyists in 1976, and it’s worth noticing the difference in tone:
https://en.wikipedia.org/wiki/Open_Letter_to_Hobbyists

It seems that at some point in time (Perhaps the middle of the 80’s, as Microsoft seems to have dropped most copy protections by then. The breaking point was after some false positives with menacing messages in Word made it to the news, if I recall correctly), Bill Gates figured out that piracy was highly beneficial for getting the installed user base of some of his mainstream middleware products as widespread as possible. That strategy always made total sense to me, because even if people don’t outright pay for the Software they consume, it gets vastly more exposure to the public, making its developers better known (Brand recognition) and also becoming a better target for developers wanting to port Software to other platforms with more potential customers (Thus an OS may be considered middleware). It also pushes your formats (Like Word .doc) as a de facto standard that forces even more people to use, or at least be compatible with, your products, since you can’t really miss support for the formats of the market share leaders, at least if you share documents or anything else with any other normal human being instead of just your geek circle. It is a very wide snowball effect, and it is not entirely detrimental to the ecosystem of the pirated Software, but I would say that it is worse for any possible competition, as they lose potential sales and get absolutely NOTHING in return. This was part of Bill Gates’ stratagem, for sure.

As one of the challenges involved cracking the copy protection, I wondered if QNX would have benefited in the long run from piracy in the early days, if more people had gotten to know it, growing its user base at the cost of potential legit sales. If getting mainstream was the goal, maybe. Just ask AutoDesk: perhaps it wouldn’t be as well known if piracy of its Software like AutoCAD hadn’t been rampant (And its availability discourages people from looking around for other cheaper or even free Software), ignoring the fact that the original licenses were usually a four digit number, just like a UNIX was in those days.

This whole piracy point is where I think that Bill Gates was ahead of the curve: the race-to-the-bottom to get Microsoft Software everywhere, bundled with everything, and at any cost, with piracy seen as a means to an end. I can’t think of any other viable plan for a commercial “UNIX for the masses” that didn’t involve piracy, given that it was going to compete with “free” Microsoft products (Unless you could bundle as much as Microsoft). Then in the 90’s we got Linux and all the free BSD variants, yet they still couldn’t take over the already well established mainstream DOS, Windows, and their massive Software ecosystem. It was too late by then.

I always found all this stuff quite interesting, since after all, those decisions shaped our current world, and I have a fetish for “what if” scenarios. For example, what would have happened if IBM had picked the Motorola 68000 for the IBM PC instead of the Intel 8088, which was one of the other potential choices? Would the rest of events have unfolded in the same way? Would the IBM PC have been at least as successful and influential as it was? Would the 68K have given programmers fewer headaches due to the supposedly cleaner ISA? Would we still be using 68K ISA based Processors with 40 years of backwards compatibility? Would a modern Processor based on the 68K ISA be smaller, faster and less buggy than our current x86 Processors? Would Intel and AMD still exist as such?

Perhaps the most awe-inspiring story is how the Intel 386 came to be. What makes this one of my favorite interviews is that you get ALL of the juicy details, which give you the complete context needed to understand how the decisions were made, why one option was picked over another, and what those other choices were; how the internal situation at Intel was, what they thought about their competition, what the plans were and how they changed over time, etc. You get a complete picture of what it was like to be part of making history. For those interested in stories like that, I absolutely recommend this one:

https://archive.computerhistory.org/resources/access/text/2015/06/102702019-05-01-acc.pdf
https://archive.computerhistory.org/resources/access/text/2015/06/102702022-05-01-acc.pdf

Come to think of it: Intel thought of the entire x86 lineup as a stopgap filler product, because the next generation iAPX 432 was going to be the long term, forward thinking architecture. When it was finally available, it was a catastrophe. Then, simultaneously, Intel’s DRAM business began to take a downturn due to the dramatically increased competition from Asian semiconductor manufacturers. The only thing left for Intel was putting everything in the x86 basket thanks to the newfound success of the IBM PC, and the end result was nothing short of a miracle. I still recall seeing ex-IBMers on retro forums mentioning that Intel would be irrelevant, or not exist at all, had IBM not picked the 8088 in the first place, and I can’t say that they aren’t right.

There are things that in retrospect are incredible to read, like the fact that in the initial design stages they were still discussing whether the 386 should be backwards compatible with the previous x86 CPUs, which would be a rather obvious choice right now (Intel would eventually do a non-Real Mode compatible, 32 Bits only 386: the 376. If you never heard about it before, you can guess how successful it was). Or how several decisions made for the 286 due to the pressure of the Zilog MMU ended up becoming totally useless baggage that 35 years later is still present in every x86 Processor.

The point is, neither x86 nor DOS was actually designed to scale in a forward looking manner. They were both stopgap products, by Intel and Microsoft respectively, which by accident ended up becoming the de facto standards even though there were superior options at the time. Yes, I know that this has happened many times both in computing history and in other completely different technologies, and each person has their own favorite technology that should have taken over the world but lost to another, inferior one. And yes, I know that QNX ended up being successful in the embedded market, and it’s probable that either directly or indirectly I have used it at some point as part of another product; yet I find it a bit depressing that it isn’t well known outside its niche, nor part of the computing pop culture.

Why would I care about all that? Because I have ambitions of Total World Domination™, of course! So learning about why certain things took over the world (Specially those that shouldn’t have) is something I’m always fascinated by.

My first attempt was pushing passthrough based setups relying on IOMMU virtualization as a means to make a fully functional Windows gaming VM, so that you could containerize Windows and leave bare metal to host something else (Initially Xen + Linux, then Linux with QEMU-KVM-VFIO) with just a minor loss of performance and compatibility, as getting Windows away from direct Hardware access serves as a means of transitioning to something else. Albeit I scored some users, the vast majority of people were like, “why bother?”. Ironically, years later the Techtubers made these setups popular and there are plenty of users now, so I can say that the tech was successful, but I failed at making outsiders notice.

My next (And current) attempt is trying to push Coreboot (Or any other means of open source Firmware) to the masses, as a means of providing long term support for a Motherboard Firmware instead of depending on the Motherboard vendor’s will to fix issues or add Firmware features (Which they will never do, because that kills a lot of the incentive to upgrade to newer boards during the same platform lifespan). You can read my efforts trying to educate consumers here:
https://zirblazer.github.io/htmlfiles/coreboot.html?ver=123

Ironically, I got into this after noticing how broken IOMMU support was in the early generations of the tech, even when the Hardware was fully capable, simply because no consumer Motherboard vendor bothered to consistently implement it properly at the Firmware level, nor did they fix it upon request. At this point I got annoyed at having to deal with them for Firmware support and decided to look for alternatives.

Perhaps what surprised me the most is that fewer people are interested in this than in passthrough, even though there are thriving BIOS modding communities where everyone tries to achieve what the Motherboard vendors didn’t want to openly provide. But most people seem not to care about alternatives that solve the issue at the root, since if they can get what they want via modding the proprietary BIOSes, there is less incentive for a proper solution (Reminds me of DOS Extenders and all those things that allowed Jurassic DOS to have a two decade useful life). Thus there is a lack of consumer demand even among people who should think like me.

In summary, I’m still looking for the meaning of life, existence, and everything.

Installing SCO Unix 4.2, part 3: LBA disks

This is a guest post by Friggigatto

In the previous post we managed to install a Compaq-branded version of SCO Open Desktop. One of the recommendations was to use a small hard drive and avoid LBA, since SCO Unix does not recognize it.

It turns out, however, that SLS UOD429A, the bootdisk + patch we used in the first post of this series to install ODT, also adds LBA support (as found out on A.P. Lawrence’s excellent website).

Apart from enabling you to fully use larger disks (you can install on a disk larger than 2048 cylinders, as long as you set its size to 2048 cyls during installation; you are of course going to “waste” a lot of space), LBA is more convenient if you want to have a large root partition, since the root partition has to be entirely in the first 1024 cyls.

So of course I tried repeating the installation of the Compaq version by booting off UOD429A, inserting the N2 from SCO ODT, and… I quickly found out that it would not recognize the CD as valid installation media. Annoying.

Eventually I found out that the N1 disk from Compaq has a ramdrive compressed into the kernel, from which the initial installation script is run, while the rest of the files (mostly installscript) are on the CD itself.

The fix was, in the end, relatively simple. All I needed to do was mount the N2 floppy in the VM I had created before and copy “N2.Z” to my hard drive. I then uncompress-ed it, extracted it (it’s a regular tar file), and replaced the installation script with the one provided on Compaq’s CD. Then the reverse process can be done: recreate the “N2” tar file, compress it, and copy it back onto the mounted N2 disk. If you don’t want to go through the same process yourself, you can download the patched disk.
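
On a modern Linux host the whole round trip looks roughly like this (a sketch; the floppy image name, mount point and the path to the installscript taken off Compaq’s CD are all assumptions):

mount -o loop n2.img /mnt/floppy
cp /mnt/floppy/N2.Z . && uncompress N2.Z      # leaves a plain tar archive named N2
mkdir n2 && tar xf N2 -C n2
cp /path/to/compaq/installscript n2/          # the script from Compaq's CD
tar cf N2 -C n2 . && compress N2              # recreates N2.Z
cp N2.Z /mnt/floppy/ && umount /mnt/floppy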

The installation process is then simple: boot off UOD429A, insert the updated N2 disk when requested, and proceed with the rest of the installation as usual.

This way I managed to install SCO Unix on a TI TravelMate 6050, alongside DOS and Windows 95. It took a bit of trial and error (reading SCO technical support documents was, again, very helpful), but in the end this is more or less what I did:

  1. install Unix as LBA with a ~20MB DOS partition
  2. hide DOS partition & create a new one as primary active partition
  3. install Win95 on that
  4. install boot manager (I used Paragon’s Boot Magic) on first DOS partition (the hidden one)

Steps 1 and 2 were done in 86Box, after creating a disk with the same cylinders, heads and sectors found in the BIOS of my TravelMate (using the LBA setting in the BIOS of both the laptop and the emulated machine, of course). After installing DOS and SCO Unix (I’m not sure anymore in which order), I copied the Win95 installation files onto the new partition and finished the installation process on the laptop, after dd-ing the image to the CF card I’m using as a hard drive.

After configuring Boot Magic (and creating a custom background and icon), now I’m greeted with this every time I boot up the laptop:

Since Windows 95 is installed in a FAT16 partition, I can mount it or access it via dosls/doscp inside SCO Unix too, which is convenient for sharing files (I tried installing a 3Com 3C589 PCMCIA card directly in Unix, since according to the docs it’s supported; unfortunately, the provided drivers only work with IBM PCMCIA controllers).

SCO Unix software

A large collection of ports for SCO Unix can be found at ftp.celestial.com, but it’s faster to use the ISO with all the ports that I uploaded to archive.org.
To mount the CD with lowercase filenames, run (as root): mount -rf HS /dev/cd0 /mnt

It’s worth noting that, before using the CD, we need to install it with mkdev cdrom (yes, even if we did install the whole system from a CD). In the process we will be asked whether we want a CD-ROM/TAPE device, which can be used to install more components of the system (CD-ROM/TAPE is the format used by the setup CD), and whether we want to add ISO9660 support to the kernel, which of course we need. As usual, the SCO documentation has a lot of information about this.

Gzip is included in the Celestial ports, but I also managed to compile an early version of bzip2 (here is the binary). If you compile it yourself, use gcc; the code will be faster. Note that the provided Makefile undefines __STDC__; gcc sets that flag itself, and this creates problems at link time, resulting in a call to a missing “__unlink” function.

Bonus content: recovery disks

In the process of getting more familiar with the installation process of SCO Unix, I realized I could benefit from having a set of spare bootdisks that would allow me to mount the hard drive and modify files at will (including after the first part of the installation process). So I created them, using a ramdrive + compressed disk (similar to what SCO’s install process does, and also to the boot floppy of Windows 98) to pack as many utilities as possible onto a single disk. While I was at it, I did the same for Xenix 386.

Installing SCO Unix 4.2, part 2: the devsys

This is a guest post by Friggigatto

In the previous post we saw how to install SCO Unix 4.2 and SCO ODT on a virtual machine. Sadly, both distributions lack the development system, making them a very limited toy.

At some point I noticed that the ISO of SCO ODT 3.0 branded by Compaq (found again on the Internet Archive or WinWorld) is way larger than the other available distributions: could it be that it includes the Development System as well? I decided to find out.

Inside the ISO we can find an N1.IMG file, and we can start the installation by booting from that.

At the serial number request I discovered that this version is not the same as regular ODT, and thus the serials I had did not work. I tried extracting a to-be-serialized file from inside the CD.IMG file found on the ISO by opening it with a hex editor (the file is not in ISO9660 format; it’s specific to SCO and somewhat emulates a tape drive, with multiple tar files in it, and opening it with a hex editor makes it easy to see where each of these tar files starts and ends), extracting it with tar, and running it through brandy to generate a new serial.
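
Cutting one of those tar files out of CD.IMG, once you have spotted its boundaries in the hex editor, goes something like this (a sketch; the offsets are placeholders you would read off the hex dump, tar headers being easy to spot by their "ustar" magic):

offset=2048    # placeholder: first 512-byte block of the embedded tar
blocks=4096    # placeholder: its length in 512-byte blocks
dd if=CD.IMG of=piece.tar bs=512 skip=$offset count=$blocks
tar tvf piece.tar    # sanity-check the cut before extracting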

Brandy, however, generated the same serials/activation I already had, indicating that the validation mechanism used by the installer in this release is different. I was afraid it would be a Compaq-specific addition, and thus almost unrecoverable, but after searching Usenet I found this post (mirror) which suggests that different versions of ODT have different generation mechanisms; in any case, the keys provided in the “OSE” (Open Server Enterprise) column work.

Anyway, after inserting the serial the installation proceeds smoothly, and we can even select to install the Development System:

The DevSys also requires a serial, and for that I used one found in Tenox’s archive. The installation started with the incredibly slow process of badtracking the hard disk (and I had selected the “quick” check!) and proceeded smoothly, until it tried to install the “Compaq EFS for SCO Unix”:

The error interrupts the installation scripts and leaves the system in a half-baked state: we can reboot from the HD and load the kernel, but instead of getting a terminal or login prompt we are dropped into a broken installation script that won’t proceed.

To fix the issue, I opened the ISO again with a hex editor and looked at the install script (/inst5/customize). The fix is easy: search for the string “cleanup $FAIL” inside the CD (line 238 of the customize script) and replace the initial “c” with a “#” to comment out the line entirely, as sketched below (a neater solution would be to change the script so that it doesn’t install the Compaq EFS in the first place; I tried that as well, but it didn’t work).
While we are at it, we can also modify the params.stz file in the ISO to disable badtracking completely (search for badtrk_none), which speeds up the next installation considerably.
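
For reference, the same one-byte patch can be applied without a hex editor (a sketch: the ISO file name is a placeholder, grep -abo prints the byte offset of each match, and if there are several matches you want the one inside /inst5/customize):

$ grep -abo 'cleanup $FAIL' odt.iso
$ printf '#' | dd of=odt.iso bs=1 seek=OFFSET conv=notrunc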

Restarting the installation once again with the same settings will still give the error, but this time it won’t kill the installation script, and the install should now complete successfully (with some warning messages, since it’s not an EISA machine).

After the reboot, we should be finally welcomed into “SCO Open Server (From Compaq) Enterprise System Release 3.0”.

We can now remove the whole Compaq EFS using custom, or just the UPS drivers (/etc/rc0.d/*ups and /etc/rc2.d/*ups, plus /usr/bin/compaq). We can also apply the patch to the disk driver to run on faster machines, as mentioned in the previous post. Finally, we can install SCO supplements from SCO’s FTP, in particular:

  • uod426d – Y2K fix;
  • uod374a – better CD support (you can run programs from ISO-9660 CDs, for example from early SCO Skunkware releases; you can also mount CDs forcing each name to lowercase, instead of the annoying default where everything is in uppercase);
  • net382e – better TCP/IP support.

Now we have a working SCO Unix 4.2 system with the development system! The good thing about SCO Unix is that its C compiler is more modern than the one provided with SCO Xenix, but it can still target Xenix (with the -l2.3 flag). This means we can compile slightly more recent software for both systems, for example bash 1.13.5 and bzip2 0.1pl2.
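
A Xenix-targeted build should therefore look something like this (a sketch inferred purely from the flag above, with -l2.3 passed at link time; double-check against the devsys documentation):

# cc -o hello hello.c -l2.3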

Continued in Part 3!

Installing SCO Unix, part 1

This is a guest post by Friggigatto

I’ve been messing around with SCO Xenix for about 10 years now, and in the process I have been playing with OpenServer 5/6 as well (mostly as a means of copying big/many files to a Xenix VM: I’d just create an ISO file, mount it in OpenServer, then share the Xenix HD with OSR5 and copy the files over); however, I never got around to using SCO Unix.

A while ago I decided to change this, but it took many tries to get everything installed, especially the Development System; so, when I eventually managed, I decided to do a writeup of what I did (and of some of the stumbling blocks I encountered along the way). This is the “first episode”, which should give you enough info to install SCO OpenDesktop 3.0 as found on WinWorld or archive.org, and the ODT Server 3.0 version from BetaArchive. ODT is nothing more than SCO Unix 4.2 bundled with X11 and TCP/IP (while on Xenix these are separate products).

Installing SCO ODT, floppy version

The secret to installing the 4.2 floppy version is to use the updated N1 boot diskette (SLS uod429a from SCO). Once you have it, the installation process is quite straightforward and self-documenting, especially if you are used to the slightly more convoluted Xenix install. This version can even be installed in VMWare.

The serial/activation key is included in the release files; create a VM with a hard drive <2 GB, select “Floppy” as the install media during setup, pick the “quick” bad track scan type, and then simply confirm every step. You will be asked to insert all the disks in order, and the only challenge should be surviving the mind-numbing boredom of handling more than 40 floppies.
Unfortunately, the network and graphics cards are not supported on VMWare (I suggest booting the first time into single-user mode and disabling the GUI from starting automatically with “scologin disable”, as shown below), so it’s a good idea to install on 86box instead.
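
That is, from the single-user root shell:

# scologin disable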

While we are at it, we can even spare ourselves some of the boredom by using the CD version instead.

Installing SCO ODT, CD version

For the ODT CD version, I looked at which SCSI devices are supported (mostly by running ‘strings’ on the kernel inside the boot floppy image, looking at the device driver names and comparing them with those of OpenServer 6), and created a machine on the latest unstable 86box build (3.0.0.2983) like this:

  • i486-socket 2 and 3: [i420EX] ASUS-PVI-486AP4 (many other boards work as well, but faster CPUs/machines gave me issues… more on this later)
  • Intel i486SX 33 MHz + 487SX
  • 32 MB RAM
  • Serial Logitech mouse, 3 buttons
  • Video: ISA16 Orchid Fahrenheit 1280 [note for the setup: the emulated BIOS is 2.0 – supports 1024×768 @ 256 colors]
  • SCSI controller: AHA-154xA
    Address 0x330, IRQ 11, DMA 5
    Host ID 0
    BIOS C800H
  • SCSI CD-ROM
    Controller 0, ID 5
  • IDE hard drive, <2 GB, non-LBA (check the BIOS settings)
  • If you want Ethernet, use the WD8013EBT (drivers are included)
    IRQ 3, address 240

The OpenServer release I found on BetaArchive was missing the N2 disk, but the one from the floppy release works fine. The process is simple: boot from N1; the SCSI adapter should be recognized by the kernel (a line that starts with “%adapter”, followed by the IRQ settings etc.), and so should the disk drive (“%disk”):

You can use the same serial as for the floppy release, but this time indicate “SCSI CD-ROM” as the install media, and it should install fine. You should, however, deselect the DOS Services, as Unix will crash after the first reboot while trying to install them.

Once the installation is complete and the system restarted, it will greet you with this very dramatic login screen (and ironic too: SCO and Open Systems in the same logo) and its pastel-colored UI:

Running on faster machines

The 33 MHz CPU is surely no beast by today’s standards, and the emulated system feels sluggish under ODT as well; however, switching to a faster CPU would crash the system. Luckily, SCO’s former support website (I created a mirror of the tech articles on archive.org) has a solution for this: we can patch a driver to avoid kernel panics on fast systems. After booting into single-user mode, we can run

# cd /etc/conf/pack.d/pit
# cp Driver.o Driver.orig
# _fst -w Driver.o
* spinwait+2D?w F989 FEE2
* $q
# cd /etc/conf/cf.d
# ./link_unix -y
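
(For the curious: _fst appears to use adb-style syntax here — spinwait+2D?w F989 FEE2 writes the two words F989 FEE2 at offset 0x2D into the driver’s spinwait routine, and $q quits; ./link_unix -y then relinks the kernel so the patched Driver.o is actually used. The cp beforehand keeps a pristine copy of the driver in case something goes wrong.)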

Finally, we can safely reboot, this time with a better CPU. The fastest machine I could test was a Socket 5 (i430NX Gigabyte GA-586IP) with a 200 MHz Pentium MMX Overdrive. When a faster system is selected (e.g. those based on Socket 7), the mouse stops registering the vertical axis.

In the next post, we’ll see how to install ODT with the development system.

Running VMWare ESXi on Raspberry PI

(This is a guest post by Antoni Sawicki aka Tenox)

Just for fun with virtualization I wanted to try out VMWare ESXi for ARM64, specifically on the Raspberry PI. ESXi for ARM has been around for a couple of years now. Since the PI 4 packs 8 GB of RAM and has a reasonably fast CPU, it can be a worthwhile experience. Also, more OSes for the Raspberry PI are now available in UEFI boot mode.

I’m not going to go through the exact installation steps, as these are all around the web and YouTube. To summarize: you download an image from the VMWare website as well as a bunch of UEFI firmware files from GitHub, and combine it all together onto an SD card. When you boot it, you will go through a straightforward install process. You can overwrite the install media and use it as the target, so there is no need for multiple SD cards. Once it boots, you will see the familiar ESXi boot screen:

ESXi booting on Raspberry PI 4

In order to get it going you will obviously need to add some storage. You can use NFS, iSCSI or a locally attached USB drive. For the latter you need to disable the USB arbitrator:

# /etc/init.d/usbarbitrator stop
# chkconfig usbarbitrator off
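
(The first command stops the arbitrator immediately; the second, via chkconfig, keeps it from starting again on the next boot, leaving the USB drive available to the host as storage.)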

What can it run?

ESXi ARM officially supports only UEFI-booting OSes. Fortunately this is the default for Ubuntu on the PI; Free/Net/OpenBSD also work, and so does Windows. But what about OSes that use U-Boot? Since ESXi-ARM Fling 1.1 you can boot OSes in a “direct” mode with no UEFI! This is a huge step, but unfortunately, as of today, it doesn’t support UEFI-less VGA, only a serial port. Hopefully this will be fixed in the future. I would love to have a RISC OS and/or Plan 9 VM. On the other hand, Plan 9 supports EFI boot, so an image could be made.

The Windows guest install was also much easier than I expected. Thanks to UUP dump you basically roll your own bootable ISO. I think it’s actually easier to get it going on ESXi than natively on RPI hardware or QEMU.

Windows 10 Guest VM on ESXi Fling Raspberry PI

The NIC driver obviously did not work out of the box, but there is a VMXNET3 ARM64 driver in the wild:

VMXNET3 for Windows 10 ARM64 on ESXi Fling on Raspberry PI

What is it good for?

Right now, probably just for fun. But I can easily see datacenters filled with ARM servers running ESXi. The future is bright and free of Intel! Personally, I will keep it around for development purposes, in case I need to make builds for ARM on various OSes.

Interestingly enough you can even run VMWare ESXi ARM on QEMU with nested virtualization!
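
For the curious, such a VM has roughly the following shape (an untested sketch, not a verified recipe: the key ingredient is the virt machine’s virtualization=on switch, which exposes EL2 to the guest, while the NVMe disk and USB devices are my assumptions, since ESXi has no virtio drivers):

$ qemu-system-aarch64 -M virt,virtualization=on -cpu max -smp 4 -m 8G \
    -bios QEMU_EFI.fd \
    -device ramfb -device qemu-xhci -device usb-kbd -device usb-tablet \
    -drive file=esxi-disk.img,if=none,id=d0,format=raw -device nvme,drive=d0,serial=esx0 \
    -drive file=esxi-arm-installer.iso,media=cdrom,if=none,id=cd0 -device usb-storage,drive=cd0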

Also, the official VMWare ESXi ARM Blog is worth checking for future updates.