So retrohun is doing their blog thing on GitHub of all things, and the latest entry is, of course, Xenix tales. As mentioned in comments on this blog and elsewhere, they’ve found another driver for Xenix TCP/IP!
Going back years, the tiny NIC driver support for the elderly Microsoft/SCO Xenix 386 v2 included the 3Com A/B/C cards and SLIP. However, it’s recently been unearthed that D-Link had drivers for their DE-100 & DE-200 models, and as it happens the DE-200 is an NE-2000 compatible card!
That means that Qemu can install/run Xenix, and it can get onto the internet* (there is a catch, there is always a catch).
You can download the driver either from github or my password protected mirror. Simply untar the floppy under Xenix (tar -xvf /dev/fd0) and run the installer via ‘mkdev dlnk’.
Setting up the driver is… tedious. Much like the system itself.
I found Qemu 0.90 works great and is crazy fast (thanks in part to GCC 3), even though Qemu 0.90’s floppy emulation isn’t good enough to install from or read disks. For that, use Qemu 3.1 with all its updates: it’ll read the disks and allow for networking.
To give some idea of speed, I ran the age-old Dhrystone test, compiled with GCC 1.37.1, and scored the following:
Dhrystone(1.1) time for 5000000 passes = 8
This machine benchmarks at 625000 dhrystones/second
When compared to the SGI Indy’s 133MHz R4600SC score of 194,000 @ 50000 loops, that makes my Xeon W3565 about 3.2 times faster, under Qemu 0.90! And that’s under Windows!
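For the curious, the ratio works out as follows (a quick back-of-the-envelope check; the Indy figure is just the score quoted above):

```python
# Dhrystone throughput is simply passes / elapsed seconds
xenix_qemu = 5_000_000 / 8   # 625,000 dhrystones/s under Qemu 0.90
indy_r4600 = 194_000         # SGI Indy, 133MHz R4600SC, as quoted

print(f"speedup: {xenix_qemu / indy_r4600:.2f}x")  # → speedup: 3.22x
```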
Setting up the command line/launching is pretty much this:
qemu.exe -L pc-bios -m 16 -net nic,model=ne2k_isa -net user -redir tcp:42323::23 -hda ..\xenix.vmdk
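As a reference for those flags: Qemu’s user-mode networking always presents the guest with the same slirp addresses, and -redir tcp:42323::23 maps host port 42323 onto the guest’s telnet port. A small sketch of the layout (the constants are Qemu’s documented slirp defaults):

```python
import ipaddress

# Qemu user-mode (slirp) networking defaults
net     = ipaddress.ip_network("10.0.2.0/24")
gateway = ipaddress.ip_address("10.0.2.2")   # host-side default router
dns     = ipaddress.ip_address("10.0.2.3")   # built-in DNS forwarder
guest   = ipaddress.ip_address("10.0.2.15")  # address the guest should use

assert all(a in net for a in (gateway, dns, guest))

# -redir tcp:42323::23 means: connections to host port 42323
# are forwarded to port 23 (telnet) on the guest.
host_port, guest_port = 42323, 23
```

So from the host, telnet localhost 42323 lands on the Xenix telnetd.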
adding a [GenuineIntelC] family 5 model 4 stepping 3 CPU
added 16 megabytes of RAM
trying to load video rom pc-bios/vgabios-cirrus.bin
added parallel port 0x378 7
added NE2000(isa) 0x320 10
pci_piix3_ide_init PIIX3 IDE
ide_init2  s->cylinders 203 s->heads 16 s->sectors 63
ide_init2  s->cylinders 0 s->heads 0 s->sectors 0
ide_init2  s->cylinders 2 s->heads 16 s->sectors 63
ide_init2  s->cylinders 0 s->heads 0 s->sectors 0
added PS/2 keyboard
added PS/2 mouse
added Floppy Controller 0x3f0 irq 6 dma 2
Bus 0, device 0, function 0:
Host bridge: PCI device 8086:1237
Bus 0, device 1, function 0:
ISA bridge: PCI device 8086:7000
Bus 0, device 1, function 1:
IDE controller: PCI device 8086:7010
BAR4: I/O at 0xffffffff [0x000e].
Bus 0, device 1, function 3:
Class 0680: PCI device 8086:7113
Bus 0, device 2, function 0:
VGA controller: PCI device 1013:00b8
BAR0: 32 bit memory at 0xffffffff [0x01fffffe].
BAR1: 32 bit memory at 0xffffffff [0x00000ffe].
The default installation does a terrible job of setting up the NIC in /etc/tcp. I changed the ifconfig line to this:
ifconfig dlink0 10.0.2.15 -trailers broadcast 10.0.2.255 netmask 255.255.255.0
Which at least brings it up a bit better. I also added a gratuitous ping to the startup script to build the ARP entry for the gateway.
ping 10.0.2.2 32 1
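Those ifconfig values hang together; here’s a quick check of the broadcast address against the netmask (plain arithmetic, nothing Xenix-specific):

```python
import ipaddress

# the address/netmask pair from the ifconfig line above
iface = ipaddress.ip_interface("10.0.2.15/255.255.255.0")

print(iface.network.broadcast_address)  # → 10.0.2.255
print(iface.network)                    # → 10.0.2.0/24
```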
Which brings us to the next point: the route command is broken after loading the D-Link driver. I tried all the available TCP/IP drivers for Xenix (1.1.3.f, 1.2.0e).
# route add default 10.0.2.2 1
add net default: gateway 10.0.2.2 flags 0x3: No such device or address
So no dice there. And yes, with SLIP or no interfaces the route command works as expected, just not with the D-Link driver.
However local connections from the host machine do work, so yes, you can telnet into the VM!
This makes Xenix far more usable, say for managing files, remote control, compiling, etc.
For you die hard IRC fans, all is not lost, you can simply run a local proxy (See: Teaching an old IRC dog some new tricks) on your host machine, and point the irc client to 10.0.2.2
So there you go, all 20 Xenix fans out there! Not only a way to get back online, but to do it in SPEED!
Thanks to Mark for pointing out that there has been tremendous progress with version 3.1 of Qemu: its TCG user speed is back up to 0.90 levels (at least with Dhrystone on Xenix), and it just takes a little (lot) of massaging to get Xenix up and running with the right flags:
qemu-system-i386.exe -net none -L . -m 16 -hda xenix.vmdk -device ne2k_isa,mac=00:2e:3c:92:11:01,netdev=lan,irq=10,iobase=0x320 -netdev user,id=lan,hostfwd=tcp::42323-:23
This is based on my old post, Running Netware 3.12 on Qemu / KVM 2.8.0, although with a few more flags to set up the user-mode TCP redirect.
Can you set the route prior to loading the interface driver?
Being from 1990, there are no dynamic kernel modules; the driver is statically linked into the kernel.
Xenix 386 2.3.4 GT may be too new.
Where did you get the image/install media for this? I would really like to try it on real hardware. Even though I have an Intel card, it might even have drivers for Xenix..
The D-Link drivers still exist on their FTP site: ftp://ftp.d-link.co.za/D-LinkFTP/products/NIC/de200/Driver/uncompressed/SCOXENIX/
Does version 2.3.4 support X11?
There is a port, but I haven’t bothered with it, as X11 in 16MB of RAM with no shared libraries is going to be miserable.
There is a binary distribution for 2.3.4 but it requires a socket emulation library that has been lost to the sands of time.
There is a set of patches for the original X11R4 sources, but I’ve never managed to compile it. I probably still have a VM disk somewhere with the sources, if you want to give it a shot.
There are several TCP/IP kits on archive.org now, so the emulation thing isn’t needed.
Yeah, someone managed to upload them.
Wow, Qemu 0.9 is positively ancient! Is this because something broke which means that you can’t use the latest version?
The CPU emulation in the ancient versions, built with GCC 3, is far superior to the newer versions.
New version numbers and ‘newer’ software don’t necessarily mean progress.
Not to mention the system/disk emulation is far more accurate in 0.9, which lets me run NeXTSTEP and other fringe OSes, and at great speed.
Oh certainly I understand that version numbers are just an arbitrary label, but when you mention superior CPU emulation in the older versions, are you talking about performance or something else? I’m mostly curious since over time improvements to TCG have noticeably improved the performance of non-X86 architectures (and 0.9 would have been back in the dyngen days).
I’d be very interested to see some performance numbers of 0.9 vs 3.1 if you have them since real-world use cases can certainly help inform further discussions on how QEMU can be improved.
And yes, the NeXTSTEP disk emulation issues get me on SPARC too 😉 As previously mentioned it’s related to the asynchronous IO in later versions of QEMU and the NeXTSTEP SCSI driver seems to depend upon certain timing constraints. One day I’d love to get to the bottom of this but it’s one of those things much lower down my list…
Wow version 3.1 certainly improved a LOT since the disaster that was 1.x/2.x when it came to TCG performance!
I added in another 0 to stress it harder..

Qemu 0.90:

Dhrystone(1.1) time for 50000000 passes = 160
This machine benchmarks at 312500 dhrystones/second
And this completed in 2:40.6

Qemu 3.1 (x64 binary/i386 emulation):
Dhrystone(1.1) time for 50000000 passes = 158
This machine benchmarks at 316455 dhrystones/second
And that completed in 2:38.4
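Putting the two runs side by side (just re-deriving the printed figures, assuming Dhrystone’s whole-second timing and integer division):

```python
runs = {
    "Qemu 0.90": (50_000_000, 160),  # passes, reported seconds
    "Qemu 3.1":  (50_000_000, 158),
}
for name, (passes, secs) in runs.items():
    print(f"{name}: {passes // secs} dhrystones/second")
# → Qemu 0.90: 312500 dhrystones/second
# → Qemu 3.1: 316455 dhrystones/second
```

So 3.1 edges out 0.90 by a little over 1% on this particular test.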
I’m using my ancient (home) machine, a 2006 Mac Pro (Xeon 5130 @ 2GHz), so not exactly modern or all that fast, but it’ll run Windows 10 just fine.
I tried running MS-DOS 5.0 + DooM v1.1 as my standard ‘how does it feel’ performance test. Oddly enough, enabling the Sound Blaster 16 prevents the VGA card from switching into graphics mode. Even without sound, though, the performance is very choppy compared to 0.90.
I’m using these flags:
qemu-system-i386.exe -L . -m 16 -device adlib -device sb16,iobase=0x220,irq=5 -drive file=doom11.vmdk,if=ide,bus=0,unit=0,media=disk -parallel none -monitor tcp::4400,server,nowait -vga std -M isapc
And the disk is.. SLOW. I think I didn’t notice on Xenix as it’s so damned tiny, but I guess I’m missing more, or it’s the usual Qemu thing where nobody cares how poorly it runs old stuff. 🙁
The slow disk issue appears in Qemu versions after 0.15.1 and still has no fix.
This is all really useful information. neozeed, can you confirm that the slowdown appears between 0.14 and 0.15 in your tests? If it does, and you can make your Doom test image available, I can run a bisect to see which commit has introduced the delay and take it from there.
This is on a windows host, isn’t it? At least for neozeed.
I’m idly curious and wonder if, once KVM acceleration became stable, the Qemu devs simply optimized for KVM; in this case, meaning they may have made up for any slowdown by offloading it to KVM.
Since Windows doesn’t have that acceleration, performance would naturally turn into a dog.
Unfortunately, I’m not sure I could get older versions to compile on Ubuntu 18.04, and it’s doubtful how meaningful the results would be anyway (since this would be inside VMware).
There is also some Intel acceleration available for Qemu; it’s not 100% tied to KVM. Although, considering the CPU doesn’t seem to be the lagging component, it feels more like a video performance issue.
Overall I know that Qemu isn’t focused at all on being ‘correct’ or running anything legacy, rather it’s just an IO subsystem for KVM and running Linux.
PCem offers a better solution with more vintage hardware support, enough to run legacy BIOS & chipset stuff for a more authentic (painful) experience.
I remember an old Xenix system at work running TCP/IP via a ‘smart’ Excelan branded network card. The card had its own processor (80186? – I forget) on which the TCP/IP stack ran. The TCP/IP software came bundled with the card.
The Excelan network card wasn’t the only ‘smart’ communications device we used – there was also a 6-port serial card that ran in the servers we sold and supported. This was equipped with an 80286, and I’m sure that on one customer site the 80286 in the comms card was running at a higher clock speed than the one in their 80286-equipped server.
I didn’t think they had intelligent peripherals going back that far, network offload ones to boot!
Then again, by the time I was old enough to get into the enterprise-grade stuff, it was all those Intel i860/i960s that nobody wanted on so many SCSI and host adapters. Although I don’t recall protocol offload engines until the ’00s.
I guess it makes sense with the high-speed serial adapters, as 16550s just weren’t enough; you needed/wanted something that could basically DMA serial access and handle all those IRQs.
Fun things indeed!
The 86Box emulator (an unofficial PCem fork) has a 3Com 3C503 network adaptor (ported from MAME). The 3C503 is built around the DP8390 and shouldn’t be very complicated to emulate…