This has been a rush of excitement! Rairii published their port of the ARC firmware & drivers needed to get NT 4.0 working on commodity PowerMac hardware over on GitHub. And what about running it under emulation? Once again Rairii provided a custom fork of dingusppc, also over on GitHub!
A custom CD-ROM worked best (for me?!) for installation, combining the ARC & drivers along with a copy of Windows NT Workstation onto a single disc. Rairii provided the magical recipe for creating the ISO:
# ext. xlate creator type comment
.hqx Ascii 'BnHx' 'TEXT' "BinHex file"
.sit Raw 'SIT!' 'SITD' "StuffIT Expander"
.mov Raw 'TVOD' 'MooV' "QuickTime Movie"
.deb Raw 'Debn' 'bina' "Debian package"
.bin Raw 'ddsk' 'DDim' "Floppy or ramdisk image"
.img Raw 'ddsk' 'DDim' "Floppy or ramdisk image"
.b Raw 'UNIX' 'tbxi' "bootstrap"
BootX Raw 'UNIX' 'tbxi' "bootstrap"
yaboot Raw 'UNIX' 'boot' "bootstrap"
vmlinux Raw 'UNIX' 'boot' "bootstrap"
.conf Raw 'UNIX' 'conf' "bootstrap"
* Ascii '????' '????' "Text file"
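For reference, a minimal sketch of the mkisofs/mkhybrid invocation that uses such a map file; the map filename, volume name and staging directory are my own placeholders, not Rairii’s exact command:

# build an ISO9660/HFS hybrid, letting the map file assign creator/type codes
mkisofs -hfs -part -map hfs.map -hfs-volid NT4_PPC -o nt4ppc.iso ./cdroot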
I went ahead and made the image, and added Service Pack 2, Internet Explorer 3 and IIS 3 onto the same CD-ROM to make things easier for me to deal with. It’s on archive.org.
On Discord an impromptu porting session broke out, and we got NP21 up and running!
Unfortunately, it is very slow. I have no idea how it performs on real hardware; it’s entirely possible that it really is unplayable. It’s still pretty amazing that the OS booted up and I could actually compile something!
Even the usual fun text-mode stuff (Phoon, Infocom ’87, F2C) compiled!
But will it run DooM?
Of course it runs! I’m using the 32-bit C code from Sydney (ChatGPT), which runs just great.
Into 3D space
I was able to compile GLUT on the way to trying to build ssystem, but there are two textured OpenGL calls missing, meaning that the more fun OpenGL stuff simply will not work.
Setting expectations
As a matter of fact, lots of weird stuff doesn’t work, and the install is very touchy, so don’t expect a rock-solid experience. But it was incredibly fun to try to get a bunch of stuff up and running.
Thanks again to @Rairii for all their hard work! This is beyond amazing!
It’s 3am and I’m exhausted, but I had to share this somehow, some way!
(this is a guest post by Antoni Sawicki aka Tenox)
Previously I wrote a boring, lengthy article about the need for a “simple html mode” in WRP. Today I want to introduce the addition of images to this contraption! You can now browse the modern web like it was 1994!
They say an image is worth a thousand words, so here we go:
You can adjust the image size and make them however big you want, in PNG, GIF and JPEG of course:
The simple html mode is still quite buggy and needs a lot of fixes. I see some 400 errors here and there, CAPTCHA problems, etc. I think these can all be fixed in time.
I had originally planned on doing this for the 4th of July, but something happened along the way. I had forgotten that this is 1995, not 2024, and things were a little bit different back then.
Back in the early days of the internet, when Al Gore himself had single-handedly created it out of the dirt, the idea of address space exhaustion didn’t loom overhead as it did in the late 00s. In those days getting public addresses was a formality. It was a given that not only would the servers all have public TCP/IP addresses, but so would the clients. Protocols like FTP would open ports not only from client to server, but also from server to client. This was also the case for RealAudio. Life was good.
The problem with trying to build anything with this amazing technology is that while I do have a public address for the server, it’s almost a given that YOU are not directly connected to the internet. Almost everyone these days uses some kind of router that implements Network Address Translation (NAT), allowing countless machines to sit behind a single registered address, mapping their connections in and out behind that one address. For protocols like FTP, the NAT has to be built to watch the control channel and dynamically map these ports. FTP is popular; RealAudio is not. So the likelihood of anyone actually being able to connect to a RealAudio 1.0 server is pretty much nil.
The software is pretty easy to find on archive.org (mirrored). Since it’s very audio centric, I decided to install the server onto a Citrix 1.8 server running under Qemu 0.9. I went with this as the software is a hybrid 16-bit/32-bit application and I needed a working sound card, and I figured the Citrix virtual stuff is good enough.
First things first, you need some audio to convert. Thankfully, in modern terms, ripping or converting is trivial, unlike the bad old days. I needed a copy of the Enclave radio, and I found that too on archive.org. The files are all in mp3 format, but the RealAudio encoder wants to work with wav files. The quickest way I could think of was to use ffmpeg.
ffmpeg -i "Enclave Radio - Battle Hymn of the Republic.mp3" -ar 11025 -ab 8k -ac 1 enc01.wav
This converts the mp3 into an 11kHz mono wav file, something the encoder can work with. Another nice thing about Citrix is how easily it can map your local drives, cutting out the whole business of moving data in & out of the VM.
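With eight tracks to do, a simple loop handles the lot (a sketch; I numbered the outputs enc01 and so on by hand, so the naming below is just illustrative):

# batch-convert every mp3 in the directory to an 11kHz mono wav
for f in *.mp3; do
  ffmpeg -i "$f" -ar 11025 -ab 8k -ac 1 "$(basename "$f" .mp3).wav"
done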
One thing about how RealAudio works is that first you load up a .ram playlist file. In this case, I took the ‘enclave playlist’ from Fallout 3 and made a simple playlist as enclave.ram:
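The .ram file is just plain text, one pnm:// URL per line; the hostname and filenames here are stand-ins for mine:

pnm://realaudio.example.com/enc01.ra
pnm://realaudio.example.com/enc02.ra
pnm://realaudio.example.com/enc03.ra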
The encoder allows for some metadata to be set. Nothing too big.
And then it thankfully takes my i7 mere seconds to convert this, even under emulation, using a shared drive. An important option to deselect is ‘enable playback in real-time’, as it’ll never work: the encoder cannot imagine a world in which the processor is substantially faster than the stream.
Converting the 8 files took a few minutes, and then I had my RealAudio 1.0 data.
The playlist should be served via HTTP, and I elected to use an old hacked-up Apache running on NT 3.1, as it only has to serve some simple files.
The scene is all set: the RealAudio player pulls the playlist from Apache, then it connects to TCP port 7070 of the RealAudio server to identify itself and get the file metadata. The RealAudio server then opens a random UDP port to the client and sends the stream, while the client updates the server via UDP on how the stream is doing. And this is where it all breaks down, as there is no nice way to handle this UDP connection from the server back to the client.
Well, this was disappointing.
In a fit of rage, I then tried to see if ffmpeg could convert the RealAudio into FLAC so you could hear the incredible drop in quality, and as luck would have it, YES it can! To concatenate them, I used a simple list file:
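The list file is just ffmpeg’s concat demuxer format, one file per line, fed back into ffmpeg; the filenames are my stand-ins:

# list.txt: one entry per RealAudio file, in playback order
file 'enc01.ra'
file 'enc02.ra'

ffmpeg -f concat -safe 0 -i list.txt enclave.flac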
And thanks to ‘modern’ web standards, you can now listen to this monstrosity!
This took about 10MB of WAV audio, derived from 8MB of MP3s, and converted it down to 472kb worth of RealAudio. Converting that back out gave a 4.4MB FLAC file.
For reference, the network ports needed at a minimum are the following:
TCP 1494 * Citrix
TCP 7070 * RealAudio
UDP 7070 * RealAudio (statistics?)
TCP 80 * Apache
And of course, the RealAudio server seems to keep its UDP ports to the client in the 7000-7999 range, but that is just my limited observation. This works fine at home on a LAN where the server is using SLiRP, as the host TCP/UDP ports appear accessible from 10.0.2.2; giving the server a free-standing IP also works better, but again it needs that 1:1 conversation, greatly limiting it in today’s world.
Also, as pointless as it sounds, you can play the real audio files from the Citrix server for extra audio loss.
Personally, things could have gone a lot better on the 3rd of July; I thought I’d escaped, but got notified on the 5th that they forgot about me. Oh well, Happy 4th to everyone else.
(this is a guest post by Antoni Sawicki aka Tenox)
I often need to install a specific/older version of QEMU on a Mac using Homebrew. If you search for how to do it, typical answers are: create a local tap, extract some files, and other nonsense. Building from sources is equally painful because configure can’t easily find the includes and libraries installed by Homebrew.
This is how to do it in the simplest possible way. Find the QEMU Homebrew formula file on GitHub. Then click “History” in the top right corner. Browse for the desired version. Then, to the right of the version, click the little icon saying “View code at this point”. It should show you an older version of the same formula. You can click to download the raw file, or copy the URL and use curl to fetch it. Then simply run brew install ~/Downloads/qemu.rb or wherever you saved it. Magic! Hope it helps!
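Condensed into commands it looks something like this, where the commit hash is a placeholder for whichever revision you picked in the history view:

# fetch the old revision of the formula, then install straight from the file
curl -o ~/Downloads/qemu.rb https://raw.githubusercontent.com/Homebrew/homebrew-core/<commit>/Formula/qemu.rb
brew install ~/Downloads/qemu.rb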
(this is a guest post by Antoni Sawicki aka Tenox)
TL;DR: WRP now allows rendering web pages into simplified HTML, compatible with old browsers (in addition to Image Map mode).
Long version: WRP, or “Web Rendering Proxy”, is a proxy server that lets you use vintage web browsers on the modern web. It was originally inspired by the Opera Mini/Turbo rendering proxy for mobile devices. I wanted a similar service that would translate modern web pages into older HTML. This not only proved very difficult, but I realized that the web is advancing in a way that would not be very future proof; I’m talking about dynamic pages, JavaScript-generated content and WASM. Instead, I took a different approach: generating a screenshot of a page with a clickable Image Map. This faithfully represents a fully rendered web page on a vintage machine and lets you click anywhere on it to perform actions, at a cost in performance. Rendering a GIF or JPEG and transferring it over the network feels rather slow and clunky.
I have been using WRP for some 10 years now. I began to realize that this approach, while pretty awesome for show and bragging, is not very practical for day-to-day use. In fact, my use of web browsers on vintage workstations typically revolves around reading documentation, blogs, wikis and other “mostly text” websites. It would be much better if these were not clunky screenshots but rather some form of text output.
I again started poking around the original idea of simplified HTML. I looked at various reader modes, print-to-PDF, etc. In particular, I noticed recent advancements in so-called “web scraping”, extraction and HTML-to-Markdown conversion services, likely fueled by the recent AI/LLM craze, as robots scrape the web to learn about humans. What caught my attention were the various “html to markdown” services: they can fully render dynamic JS pages and extract the contents as they appeared in a browser. Also, Markdown, if you think about it, is in fact a simplified HTML.
After doing some research, in a couple of evenings and less than 100 lines of code, I got a basic version going. The principle is as follows: first capture the page HTML, convert it to Markdown, and do some manipulation like adding link prefixes and removing images (we’ll come back to that later). Then render the Markdown back to HTML, wrap it in a vintage HTML header, and off we go. The results are amazing!!
For the “mostly text” pages this is way better than the screenshot mode. Not only is it way faster and more responsive, and you can select and copy text, but you also get to use the old web browser more like it was originally intended. At any time, if you want the screenshot mode, you can simply switch back to PNG/GIF/JPG with a couple of clicks.
Another interesting aspect of this is extensibility and potential for improvement. For the screenshot mode there just isn’t that much stuff you could add. It’s just a screenshot. For Markdown and simple HTML there’s a million things one could add. Both down and up converters offer a wide variety of plugins and filters. We can improve formatting, layout, processing, add translation and other features. Perhaps also different features based on client browser version. Maybe even input forms and …images.
Let’s talk about images. Right now they are completely deleted from the markdown. This is for several reasons: compatibility, performance, load time, size, formatting, etc. I’m thinking that perhaps images could be added in some converted form, for example downsized to a small JPG, or maybe converted into ASCII art. Suggestions more than welcome!
Wasting time doing more “research” on old GCC, and thanks to suggestions, I thought that in addition to the old 1.x stuff I should include my old favorite 2.5.8, the stalled 2.7.2.3, and EGCS, the Pentium-improved GCC fork. I figured re-treading old ground with the xMach/OSKit build on x86_64 should be safe/quick & easy.
My cross chain fails when trying to build libgcc.a. How annoying, but I already have one, so I bypass it, and GCC then tries to build the crt (C runtime library startup code), and that fails too!
I’m using GCC 12.2.0 on Debian 12. OK, maybe I’ve finally hit drift, so let me try some other binutils: binutils-2.10.1, binutils-2.14. I had originally been lying, saying I’m a DEC Alpha running either OSF or Linux, as it matches the size & endian alignment, but no dice. I found out about the ‘linux32’ command that’ll fake its environment as an i686 processor to fake out a lot of builds. But the same result, over and over. So, I break down and fire up GDB.
(gdb) r
Starting program: /root/src/xmach/binutils-2.14-bulid/gas/as-new crtstuff.S -o crtstuff.o
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x0000555555592ef0 in md_estimate_size_before_relax (fragP=fragP@entry=0x555555668fa8, segment=segment@entry=0x555555668730) at ../../binutils-2.14/gas/config/tc-i386.c:4441
4441 return md_relax_table[fragP->fr_subtype]->rlx_length;
(gdb) bt
#0 0x0000555555592ef0 in md_estimate_size_before_relax (fragP=fragP@entry=0x555555668fa8, segment=segment@entry=0x555555668730) at ../../binutils-2.14/gas/config/tc-i386.c:4441
#1 0x000055555558bce2 in relax_segment (segment_frag_root=0x555555668f30, segment=segment@entry=0x555555668730) at ../../binutils-2.14/gas/write.c:2266
#2 0x000055555558c39c in relax_seg (abfd=<optimized out>, sec=0x555555668730, xxx=0x7fffffffe960) at ../../binutils-2.14/gas/write.c:659
#3 0x000055555559b01f in bfd_map_over_sections (abfd=0x55555565e030, operation=operation@entry=0x55555558c370 <relax_seg>, user_storage=user_storage@entry=0x7fffffffe960)
at ../../binutils-2.14/bfd/section.c:1101
#4 0x000055555558b501 in write_object_file () at ../../binutils-2.14/gas/write.c:1565
#5 0x000055555556e288 in main (argc=2, argv=0x5555556302d0) at ../../binutils-2.14/gas/as.c:924
(gdb) quit
The whole issue revolves around md_relax_table! I’d seen a ‘fix’ where you add in a pointer, and it’ll satisfy GCC, and sure, it’ll compile. Years ago, I had #ifdef’d it out until I needed it, but the real answer is to embrace 1989 and set the compiler flags to “-std=gnu89”.
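In practice that means forcing the dialect through the whole build, something like this (the target triplet is my guess from the logs below, not a transcript):

# build binutils 2.14 with the 1989 dialect that GCC 12 no longer defaults to
CC="gcc -std=gnu89" ../binutils-2.14/configure --target=i586-linux
make CC="gcc -std=gnu89"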
I can’t help but think that at some point soon 1989 will be removed, as it’s only weirdos like me building this stuff.
Just as the old Unix error status sys_nerr has been removed for ‘reasons’, so I may as well amputate all the old code:
- if (e > 0 && e < sys_nerr)
- return sys_errlist[e];
Nothing much you can do about it; Linux isn’t trying to be Unix anymore.
64/32
In the end it doesn’t seem to matter; OSKit fails to build.
And surprisingly mig does build, but Mach does not.
i586-linux-gcc -c -MD -DLINUX_DEV=1 -DHAVE_VPRINTF=1 -DHAVE_STRERROR=1 -Di386 -DMACH -DCMU -I- -I. -I../../../kernel/libmach/standalone -I../../../kernel/libmach/c -I../../../kernel/libmach -I/root/src/xmach/xMach/object-kern/libmach -I/root/src/xmach/xMach/object-kern/../kernel/generic/libmach/standalone -I/root/src/xmach/xMach/object-kern/../kernel/generic/libmach/c -I/root/src/xmach/xMach/object-kern/../kernel/generic/libmach -I../../../kernel/include/mach/sa -I../../../kernel/include -I/root/src/xmach/xMach/object-kern/../kernel/generic/include -I/root/src/xmach/xMach/object-kern/include -I/root/src/xmach/xMach/object-kern/../kernel/generic/include/mach/sa -nostdinc -O1 /root/src/xmach/xMach/object-kern/libmach/bootstrap_server.c
/root/src/xmach/xMach/object-kern/libmach/bootstrap_server.c: In function `_Xbootstrap_privileged_ports':
/root/src/xmach/xMach/object-kern/libmach/bootstrap_server.c:90: `null' undeclared (first use this function)
/root/src/xmach/xMach/object-kern/libmach/bootstrap_server.c:90: (Each undeclared identifier is reported only once
/root/src/xmach/xMach/object-kern/libmach/bootstrap_server.c:90: for each function it appears in.)
Needless to say, this is why I don’t use OS X anymore. Not having a 32bit userland basically killed it for me.
I guess the next step is to go ahead with qemu-user mode wrappers to fake it.
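Something along these lines, assuming an i386 build of the tools exists to wrap (the as-new path comes from the GDB session above; the rest is a sketch):

# run the 32-bit cross assembler under user-mode emulation on x86_64
qemu-i386 ./gas/as-new crtstuff.S -o crtstuff.o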
Sorry if you were hoping for some great conclusion.
First off, I got a new VPS to house this on; size-wise, I’d just plain outgrown the old one, even with SquashFS. Over on LowEndBox, I had spotted this one: LuxVPS
It’s not an ad, I just thought the pricing seemed pretty good for 5€. One of the nice things about converting so much of my data to SquashFS is that moving single files is WAY easier to deal with!
Mice in my 1970s teletype text editor?!
But editing text files had me facing off against some feature of VIM I’d somehow never dealt with, which Debian 11 sets by default, and that is mouse integration!
CAN YOU BELIEVE IT?
Somewhere out there are people who use a mouse with a VI clone.
It bears repeating
SOMEONE THINKS YOU NEED A MOUSE TO USE VI.
So much so, it’s the system default.
Good lord.
The fix is to edit /etc/vim/vimrc:
set mouse=
set ttymouse=
Problem solved. Obviously, I’m not going to remember this, but now I can right click/paste the way G’d intended it!
Stale encryption
The next source of annoyance is the ancient stunnel 4.17 that I use for altavista.superglobalmegacorp.com. I’m kind of trapped with this setup, as it needs to be a 32-bit ‘workstation’ OS, and I don’t want to run something as heavy as XP or Vista when NT 4.0 is more than enough. Anyways, OpenSSL won’t talk to this ancient encryption, throwing this error when trying to connect with “openssl s_client -connect 192.168.23.6:443”:
error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
Unable to establish SSL connection.
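Forcing the old protocol and dropping OpenSSL’s security level gets a connection through; the exact incantation below is my assumption of the minimum needed, not a transcript:

# permit TLSv1 and the old ciphers modern OpenSSL refuses by default
openssl s_client -connect 192.168.23.6:443 -tls1 -cipher 'DEFAULT@SECLEVEL=0'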
Now when I connect to stunnel, I can verify that I am indeed using ancient crap level security:
New, SSLv3, Cipher is AES256-SHA
Server public key is 1024 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : AES256-SHA
Session-ID: 19D20D30E0026E8417E00402DE939E90770D4658C3A9CFE4DB4E5F2A5454DE9D
Session-ID-ctx:
Master-Key: 498C648E77E9B9C944A8B1D16242240A161A05A087881C6AD300718DD9B8C443EA12FB76440B666B7C6634A7E7DBE9D5
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1718352960
Timeout : 7200 (sec)
Verify return code: 10 (certificate has expired)
Extended master secret: no
---
DONE
I don’t care about the encryption; I could, as a matter of fact, just run without it, as I only need the reverse proxy aspect of it to make the AltaVista web server accessible over the LAN/WAN/INTERNET. It’s all fronted with CloudFlare, so from the end user’s POV it’s all encrypted anyways.
A rainbow of happiness
Another nice side benefit of this SquashFS setup is that I can forever rebase the disks as the content never changes.
One thing is for sure, it makes hosting AltaVista a bit easier to deal with. And for the sake of archiving, I uploaded a pre-loaded & indexed dataset: AltaVista Pre-Loaded (squashfs). I found that you can just copy the databases into a new VM, as long as you keep the drive letters the same as your source. Luckily, I had kept the OS on C:, installed AltaVista on D:, with all the usenet posts on U:. Even better, for those strapped for space, you don’t technically need the U: drive if you just want to search. Of course, you probably do want to look at them, but we’ve gone down this road before. And we know where it leads.
Time goes on, and things are lost, and the topic of actually building Linux from Windows had come up somewhere, so I thought I’d show it off.
The one thing is that modern machines are just so fast that it’s almost hard to believe a 386DX-16 with 4MB of RAM would struggle for seemingly hours over what an i7 can churn out in mere seconds.
Time sure flies!
It’s my usual ‘DO IT LIVE’ style; I tried to clean up the audio, but I lost the steps… One day I’ll try to script & build a PowerPoint so it’s more cohesive.
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 29G 27G 2.1G 93% /
It’s a problem that we will all face sooner or later in shared environments, running out of disk space. Back in the old days we would just run stacker and be done with it, but what on earth can we do in this modern age?
Well, there is squashfs, which is great at creating ultra-compressed read-only filesystems! That is great, but it is READ-ONLY after all, so that is going to suck, right? Well, thanks to the magic of filesystem overlays, we can compress our website and get the much-needed COW (Copy on Write) to another directory, giving us the best of both worlds. It’s a common thing in many live CDs, or any seemingly appliance-based OS where you have a hardened read-only OS core that a user cannot delete/infect, but which gives the appearance of allowing you to update files. Well, that’s all nice, but how do you do it manually?
The first thing I did was shut down Apache so I could get a clean compress of my web document root. mksquashfs is pretty easy to use, and in a few minutes of downtime I was able to create a read-only version of my blog’s filesystem. (NOTE that this doesn’t include the database! So anyone wanting to quickly & easily archive WordPress, remember there is always more than just the files!)
root@ukweb:/srv/www/blog# mksquashfs . /usr/local/blog.sqshfs
Parallel mksquashfs: Using 1 processor
Creating 4.0 filesystem on /usr/local/blog.sqshfs, block size 131072.
[===================================================================================================-] 67497/67497 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments,
compressed xattrs, compressed ids
duplicates are removed
Filesystem size 4604333.36 Kbytes (4496.42 Mbytes)
82.78% of uncompressed filesystem size (5562424.58 Kbytes)
Inode table size 480413 bytes (469.15 Kbytes)
33.86% of uncompressed inode table size (1418977 bytes)
Directory table size 430607 bytes (420.51 Kbytes)
32.31% of uncompressed directory table size (1332573 bytes)
Number of duplicate files found 519
Number of inodes 38856
Number of files 32640
Number of fragments 7872
Number of symbolic links 0
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 6216
Number of ids (unique uids + gids) 2
Number of uids 2
www-data (33)
root (0)
Number of gids 2
www-data (33)
root (0)
Before compression the blog sat at 5.6GB worth of space. After compressing, it now sits at 4.4GB. Not that awesome, but not that bad either! The blog.sqshfs file can be easily mounted on the command line like this:
mount -o loop /usr/local/blog.sqshfs /srv/www/blog
And it mounted up just fine, and astonishingly the blog worked. Although, it being a read-only filesystem means that I cannot upload new content, so all the media would be frozen in time, just as I would no longer be able to make any updates to the plugins or the software.
Enter the overlayfs, which lets you specify an ‘upper’ and ‘lower’ level for your filesystem where you can have a read-only lower level, and a read-write upper level. Perfect!
I moved the blog read-only mount to /srv/www/blog-ro, created blog-tmp & blog-rw directories as well, and mounted up in overlay mode like this:
mount -t overlay -o lowerdir=/srv/www/blog-ro,upperdir=/srv/www/blog-rw,workdir=/srv/www/blog-tmp overlay /srv/www/blog
You’ll notice that despite all the documentation (and all the posts) mentioning overlayfs, as of Linux 5.15 the module is now called overlay.
root@ukweb:/lib/modules/5.15.0-101-generic/kernel/fs/overlayfs# ls overlay.ko
At least that was easy enough to find.
But you might say, THAT’S ALL MANUAL! How on earth are you going to deal with a reboot? rc.local?!
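One answer is a pair of /etc/fstab entries using the paths above; a sketch, so I won’t swear it’s character-for-character what’s on my box:

# the read-only squashfs lower layer, loop-mounted
/usr/local/blog.sqshfs /srv/www/blog-ro squashfs loop,ro 0 0
# the overlay gluing the read-write upper layer on top of it
overlay /srv/www/blog overlay lowerdir=/srv/www/blog-ro,upperdir=/srv/www/blog-rw,workdir=/srv/www/blog-tmp 0 0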
And just like that, I now have a read-only version of the blog data, in a single easy to backup file, along with writes going to a much more manageable directory for updates.
I guess I should add that, for sites that use caching, you’ll want to purge the wp-content/cache directory, as it’ll become stale, and there really is no point having a read-only version of the cache.
If you can see this, then clearly the site is working!
UPDATE:
So I do have a qemu image piggy-backing on my VPS that runs the Apache on NT 3.1 (superglobalmegacorp.com) site. It’s not very complicated, just NT 3.1 with my terrible apache site. Content doesn’t change, it’s a “just because I can” thing.
So you can happily shut down the VM; in this case I’m using VMDKs, but it really doesn’t matter, I just like having a more neutral container if I want to move stuff around. Just squash the VMDK by itself into a new SquashFS file:
# mksquashfs nt31as.vmdk /usr/local/vmdk/NT31_AdvancedServer.vmdk.squashfs
Parallel mksquashfs: Using 1 processor
Creating 4.0 filesystem on /usr/local/vmdk/NT31_AdvancedServer.vmdk.squashfs, block size 131072.
[=====================================================================================================-] 1390/1390 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments,
compressed xattrs, compressed ids
duplicates are removed
Filesystem size 72383.38 Kbytes (70.69 Mbytes)
40.68% of uncompressed filesystem size (177925.66 Kbytes)
Inode table size 3918 bytes (3.83 Kbytes)
69.64% of uncompressed inode table size (5626 bytes)
Directory table size 31 bytes (0.03 Kbytes)
93.94% of uncompressed directory table size (33 bytes)
Number of duplicate files found 0
Number of inodes 2
Number of files 1
Number of fragments 0
Number of symbolic links 0
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 1
Number of ids (unique uids + gids) 1
Number of uids 1
root (0)
Number of gids 1
root (0)
Now we create the backing file to point to the original VMDK where all write operations will take place. And of course this means that the site can be reverted very quickly if something goes wrong.
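A sketch of those two steps, with a hypothetical mount point for the squashed base image:

# loop-mount the read-only squashfs that holds the base VMDK
mount -o loop /usr/local/vmdk/NT31_AdvancedServer.vmdk.squashfs /mnt/nt31
# create a qcow2 that backs onto the untouchable base; all writes land here
qemu-img create -f qcow2 -F vmdk -b /mnt/nt31/nt31as.vmdk nt31as-writes.qcow2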
Now that we’ve moved beyond the initial shockwave of the MS-DOS 4.00 source code dump, I thought it was time to try to pull off the ultimate trick of the time: building under OS/2 and using the exciting contemporary feature, “DOS from Drive A:”. Long before VMware / Virtual PC for the PC, OS/2 took the Intel 80386’s virtual 8086 mode (“v86 mode”) to its logical conclusion, allowing you to boot native MS-DOS under OS/2. Sadly, the old 1989-1991 OS/2 betas do not include this feature, although I have to wonder if it did exist and just wasn’t publicly available.
Many of the programs used to build MS-DOS are off the shelf: the MASM assembler, Microsoft C 5.1, and its associated tools are just retail versions. To change things up, I did use the 386MASM assembler just to see if it maintained MASM 5.1 compatibility, and it does. The only gotcha is that all the tools are *NOT* marked Presentation Manager compatible, so launching them from a window opens a full-screen session. Very annoying!
I’m guessing the fix is in a toolkit? Either way, in Microsoft C 6.0 the utility exehdr lets us modify an OS/2 executable so it’ll now be WINDOWCOMPAT. So at least it ‘feels’ better now.
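From memory it’s a one-liner per tool, so treat the exact switch as an assumption on my part:

rem mark the tool as windowable (VIO), i.e. WINDOWCOMPAT
exehdr /pmtype:vio masm.exe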
One thing is for sure: building DOS under OS/2 is a lot more enjoyable than doing a native build, as you can minimize the build task, although the MS-DOS only programs do pop up when it generates text indexes & tables. But you do retain some control of your machine during the build, which is great! Although E is a terrible editor for source code, and the one in 6.78 has a nasty bug where it’ll truncate large files. Were people really using text-mode editors for everything back then? I guess I like the fonts of the GUI, despite having used machines of the era.
Otherwise, the end result is the same, you get a build of DOS 4.
I went ahead and tried to build using 6.78, and no doubt about it, compiling DOS is an absolute torture test. So far, the DOS Box has locked OS/2 once, and PM Shell has crashed once as well.
I altered the Makefiles to use ‘rm’ instead of the built-in ‘del’ command, because if you try to delete a file that doesn’t exist, del returns an error, which then triggers an end to the NMAKE process. Very annoying! However, the ‘rm’ included in Microsoft C 5.1 doesn’t suffer the same defect. Using 86Box with an 83MHz Pentium OverDrive, it took about 18 minutes to build DOS 4.
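The edit itself is about as small as they come; a representative cleanup rule (the target name and file list are made up for illustration):

# was: del *.obj -- del aborts NMAKE when nothing matches
clean:
	rm *.obj *.lst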
I did capture the video and converted it to a GIF so you can quickly see the reboot & the UI crashing. FUN!
And it even boots!
For anyone interested, I’ve put zips on archive.org that can be extracted under OS/2. I also made a pkzip disk set in case loading a 6MB zip file is an issue.