While trying to see if STAC ever went into the disk compression business again, I ran across this interesting bit of software, DMSDOS, a kernel module for versions 2.0.36 and 2.2.3 of the Linux kernel that supports read/write access to Stacker version 3 and 4 containers.  It also supports MS-DOS’s DriveSpace and DoubleSpace!

The latest version, dmsdos-, even runs on modern x86_64 and i386 boxes, although only in library mode!

# ./mcdmsdos list /tmp/STACVOL.000
mcdmsdos version 0.2.0 (for libdmsdos 0.9.x and newer)
libdmsdos: DMSDOS library version 0.9.2pl3-pre2(alpha test) compiled Jun 12 2014 17:14:06 with options: read-write, doublespace/drivespace(<3), drivespace 3, stacker 3, stacker 4
libdmsdos: mounting CVF on device 0x3 read-write…
libdmsdos: CVF end padding 1 sectors.
libdmsdos: CVF is in stacker 4 format.
-rwxr-xr-x 1 0 0 10396254 Dec 16 1993 13:35 doom.wad
-rwxr-xr-x 1 0 0 68923 Dec 15 1993 01:01 setup.exe
-rwxr-xr-x 1 0 0 20850 Dec 15 1993 01:01 readme.exe
-rwxr-xr-x 1 0 0 627155 Dec 16 1993 13:53 u1_94.exe
-rwxr-xr-x 1 0 0 579187 Dec 16 1993 13:47 doom.exe
-rwxr-xr-x 1 0 0 439 Dec 15 1993 01:01 file_id.diz
-rwxr-xr-x 1 0 0 190 Dec 15 1993 01:01 use1_95.bat
-rwxr-xr-x 1 0 0 218 Dec 15 1993 01:01 use1_94.bat
-rwxr-xr-x 1 0 0 7527 Dec 15 1993 01:01 license.doc

How cool is that?

Even better, it can extract files from the Stacker volume!

# ./mcdmsdos copyout /tmp/STACVOL.000 /doom.wad doom.wad
mcdmsdos version 0.2.0 (for libdmsdos 0.9.x and newer)
libdmsdos: DMSDOS library version 0.9.2pl3-pre2(alpha test) compiled Jun 12 2014 17:14:06 with options: read-write, doublespace/drivespace(<3), drivespace 3, stacker 3, stacker 4
libdmsdos: mounting CVF on device 0x3 read-write…
libdmsdos: CVF end padding 1 sectors.
libdmsdos: CVF is in stacker 4 format.
# md5sum doom.wad
981b03e6d1dc033301aa3095acc437ce  doom.wad

And if you google ‘981b03e6d1dc033301aa3095acc437ce’ you’ll know it’s the registered DOOM version 1.1 data file!

The author tried to get it to work with Microsoft Visual C, and it does not.  It also doesn’t work with MinGW or Cygwin, and the reason is once again the way GCC handles its structures on Linux vs Windows.  Sadly there is no silver-bullet fix for this; oddly enough, the structures are too small on Windows, and too big for what they should be on Linux.

But at any rate, I thought it was cool.  For anyone interested, I’ve put all the versions I found online on my cvs server.

I forget what I was looking for when I stumbled onto Haruhiko Okumura’s lzss.c, but I was really intrigued.  Every time I’ve seen anything to do with compression, it’s insanely massive.

Except for this.

Including the ‘main’ portion of the source, it’s 180 lines long, 4.3KB.  That’s microscopic by today’s standards.  On OS X I get a 13KB executable, compared to 76KB for gzip, or 1.6MB for 7za.
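To give a feel for why it can be so small: here is a minimal sketch of just the decode side (my own code, not Okumura’s, but following the same stream layout his lzss.c uses: a 4096-byte ring buffer pre-filled with spaces, flag bytes, and 12-bit offset / 4-bit length pairs).

```c
#include <stdio.h>
#include <string.h>

/* Stream layout constants, matching Okumura's lzss.c */
#define N         4096   /* ring buffer size */
#define F         18     /* longest match length */
#define THRESHOLD 2      /* matches this short are stored as literals */

/* Decode an LZSS stream from src (slen bytes) into dst.
   Returns the number of bytes written.  This mirrors the Decode()
   loop in lzss.c, but works on memory buffers instead of stdio. */
size_t lzss_decode(const unsigned char *src, size_t slen, unsigned char *dst)
{
    unsigned char ring[N];
    memset(ring, ' ', N);            /* the ring buffer starts as spaces */
    size_t si = 0, di = 0;
    int r = N - F;                   /* current write position in the ring */
    unsigned flags = 0;

    while (si < slen) {
        flags >>= 1;
        if (!(flags & 0x100))        /* out of flag bits: fetch a new byte */
            flags = src[si++] | 0xFF00;
        if (si >= slen) break;
        if (flags & 1) {             /* flag bit set: literal byte */
            unsigned char c = src[si++];
            dst[di++] = c;
            ring[r++] = c; r &= N - 1;
        } else {                     /* flag bit clear: (offset,length) pair */
            if (si + 1 >= slen) break;
            int i = src[si++];
            int j = src[si++];
            i |= (j & 0xF0) << 4;            /* 12-bit ring offset */
            j  = (j & 0x0F) + THRESHOLD;     /* 4-bit length, biased */
            for (int k = 0; k <= j; k++) {   /* copy j+1 bytes */
                unsigned char c = ring[(i + k) & (N - 1)];
                dst[di++] = c;
                ring[r++] = c; r &= N - 1;
            }
        }
    }
    return di;
}
```

The encoder is the bigger half of the 180 lines, since it has to search the ring buffer for the longest match (Okumura uses a binary tree for that); the decoder is just this loop.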

And googling around I found a few other variations.  So I figured it would be slightly fun to have a ‘bakeoff’ with the ‘traditional’ Calgary Corpus, which includes various types of data.

Unsurprisingly, 7zip is the best of the bunch.

$ ./
cleaning up
running…….
The winner (smallest) is:
261057 6 Jun 20:18 book1.7z
53167 6 Jun 20:18 geo.7z
9472 6 Jun 20:18 obj1.7z
17322 6 Jun 20:18 paper1.7z
43596 6 Jun 20:18 pic.7z
15060 6 Jun 20:18 progl.7z
16748 6 Jun 20:18 trans.7z
30716 6 Jun 20:18 bib.7z
169894 6 Jun 20:18 book2.7z
119399 6 Jun 20:18 news.7z
61758 6 Jun 20:18 obj2.7z
27310 6 Jun 20:18 paper2.7z
12605 6 Jun 20:18 progc.7z
10428 6 Jun 20:18 progp.7z

But the source to 7zip is unwieldy at best.  So how did the small lzss and variants stuff do?

Compression percentage

Honestly I’m surprised gzip put up such a good fight.  Bzip2 & 7zip really fought for the top.  The surprise to me was lzhuf leading the old stuff, which has its roots back in 1988/1989.  So let’s look at the data without anything modern in the way.

Old Compression only

So from the numbers, we can see that lzs2 and lzs3 run almost identically, with lzs & lzs4 at the bottom.  Now when we look at time, we get something different.

Compression duration

Both lzs & lzs4 take eight or more seconds!  So they are both out, as I’m shopping for something good and fast, and taking this long is out of the question!  So it comes down to how complicated lzhuf2, lzs2 and lzs3 are.

source    size (bytes)
lzs.c     4360
lzs4.c    4623
lzs2.c    8308
lzs3.c    12844
lzhuf.c   18323
lzhuf2.c  22556

While lzs.c is still pretty impressive for the size, for what I’m going to try though, I’m going to use lzs2.c, as it’s only 8KB and seems to fit the bill.

For anyone who’s interested in running this on their own, here is the package.  I’ve only tested it on OS X; it may run on other UNIX systems, it may not.  Extract it and run ‘’.  And it may even work!  The only thing I had to add on OS X was ‘-Wno-return-type’ when compiling, as clang doesn’t like ancient source like this…

Some follow-up on Stacker

From my OS/2 experiments before I roll it out… 
  • It only supports FAT.
  • Maximum of 2GB ‘compressed’ drives
  • Swapping drive letters assumes 1 disk 1 volume
  • You create the compressed disks in DOS

So, since I’m thinking of BBS space, I can leave part of the disk uncompressed for zips, then use the compressed partition for databases & doors.

I guess the thing to keep in mind is that in 1991-1994, 2GB disks were not in the hands of your average user, and the idea of using that much storage seemed crazy.  It’s a shame they did that deal with Microsoft and basically got pushed out of the market.  It’s also a shame that the OS/2 product doesn’t actually run on OS/2, nor support HPFS.  The idea of bigger disks and long file names would have been nice.
Oh well, I guess that is how the older stuff dies out.

Stac Electronics Stacker for OS/2

Once upon a time, hard disks were expensive.  A device that could hold a terabyte would have cost hundreds of thousands of dollars in the late 1990’s!  I remember Windows NT 3.51 taking 3 days to format 890GB!!!

Even when I got my first 20MB hard disk, it, along with the controller, cost several hundred dollars.  I had upgraded to a 286 from a Commodore 64, and even a 720KB floppy felt massive!  I figured it’d take a long while to fill 20MB, so I was set.  Well, needless to say, I was so wrong!  Not to mention there was no way I could afford a 30MB disk, I wasn’t looking for a questionable used 10MB disk, and a 40MB disk was just out of the question!

Until I saw this:

Stacker 2.0

Stacker changed everything for me: via software compression, not only could I fit 40+MB worth of crap on my 20MB disk, but I could even get more data on my floppies!  The best part about Stacker, unlike pack/zoo/pkzip and friends, is that the compression was transparent.  Meaning you load the driver, pick up a new drive letter for the compressed volume, and from that point onward everything you do to that disk is compressed.  All MS-DOS programs work with it.  Yes, really, from Windows 3.0 to dBase, BBS packages, even pkzip (although it’ll get a 1:1 compression ratio).

Stacker for OS/2

So for a long while life was good with Stacker, although like everything else, things changed.  I wasn’t going to run MS-DOS forever, and when I switched to OS/2 I was saddened to find out that there was no support for OS/2.  Linux support was not going to happen either.  Although they did eventually bring out a version for OS/2, it did not support HPFS volumes, only FAT.

And as you can see, while Warp was the main target, it could function with OS/2 2.0 and 2.1 as long as you had updated them to the latest fixpacks, and preferably installed the OS onto a FAT partition, as the setup process hinges on being able to modify the config.sys & boot directories from a DR-DOS boot floppy included in the package.

One of those funny things about disk compression is that for very heavy disk access programs working with data that tends to compress well (say, databases with lots of text records), compression can greatly speed them up.  If you are getting 16:1 compression (i.e. you have LOTS of spaces…) you only have to read the hard disk 1/16th as much as you would on an uncompressed volume.  It’s still true to this day: although people tend to think disk compression added a significant amount of overhead to your computer (I never noticed), it can make some things faster.

Another thing that STAC was involved in was selling compression coprocessors.

Stac co-processor ad

While I’m not aware of these cards being that big of a seller, it is interesting to note that these co-processors were also available for other platforms, namely the Cisco router platform.  Since people were using 56kb or less links, the idea of taking STAC’s LZS compression and applying it to the WAN was incredible!  Imagine if you were printing remotely and got even a 4:1 boost in speed: suddenly things are usable!  The best part is that while there were co-processors, Cisco also supplied a software engine in IOS.  Enabling it was incredibly easy too:

interface serial0/0
 compress stac
And you were good to go.  The real shame today is that hardly anyone uses serial interfaces, so the compression has basically ‘gone away’.  Cisco did not enable it for Ethernet interfaces as those are ‘high speed’… Clearly they didn’t foresee that people would one day get these infuriatingly slow “high speed” internet connections handed off on Ethernet, and how compressing them would make things all the better.

I think the general thrust has been toward a ‘black box’ approach that can cache files in the data stream, provide compression and QoS all in one.

So until I re-install OS/2 on a FAT disk, let’s run this through the Windows 3.0 Synchronet BBS I was playing with.


Stacker starts up with some ASCII art of a growing hard disk.  Maybe one day I’ll figure out how to dump QEMU’s video into something to make animated GIFs with.  Until then… well.

Setup is pretty straightforward: pick a disk to compress, then decide if you’ll do the whole disk and incorporate the drive-swapping fun, or just do the free space to create a sub-volume and manage it manually.  I’m not in the mood to reconfigure anything, so I’ll do what most people did and compress the whole drive.

What is kind of fun to note is that in modern implementations of compression & encryption you have to select one or the other; you cannot do both.  However, using emulators that can support encrypted disks, you *could* then compress from within.  I don’t know why the new stuff doesn’t let you; maybe the layered containers get to be too much.  But I do bet there is something out there for Windows that’ll mount a file as a disk image (like Windows 7 can mount VHDs) on an encrypted volume, then turn on compression…….

Anyways, with the target selected, it will then copy the files, then create the volume…

Which took a minute; then it’ll defrag the disk using Norton Speed Disk (from that point onward it seems that Speed Disk disappears..?)

And once that is done, we are basically good to go.  So how did it do on my 80MB ‘test’ disk?

I’d have to say pretty good; 2.5:1 is pretty snazzy!  And my BBS is working.  I didn’t have to change a single line; it’s 100% transparent.

Eventually IDE hard disks took over the world, getting larger, faster and cheaper.  Not to mention the world was switching operating systems from MS-DOS to Windows 95 or Windows NT, and STAC just got out of the market after the big lawsuit.

But it’s funny looking at old disk ads… and what a catastrophic thing it was to fill them up.

But before then, we had stacker…