Legally Buying Duke Nukem 3D in 2021

Well, back in 2017, it turns out that Jordan Freeman was working on a deal with 3D Realms for some new series that didn’t pan out…

Jordan Freeman Group and ZOOM Platform Announce 3D Realms Partnership for Shadow Stalkers Episodic Computer and Video Game Series

However, it appears that in addition to trying to get a game out, they also secured rights to resell the catalogue in some perpetual fashion. Neat!

So for all the zoomers who’ve somehow never played the greatest boomer shooter of all time, you can score Duke Nukem 3D on the aptly named zoom-platform.com.

The holy trinity!

It’s currently $4.99 USD, and it comes with all the DLC/Episodes!

Now let’s look at the infamous Megaton Edition on Steam (yes, I bought it years ago; no, I didn’t hoard keys, because it never felt like the game would just up and disappear). As you can see it has the addons Duke It Out in D.C., Nuclear Winter, and Caribbean: Life’s A Beach.

Megaton Edition Launcher
Steam Megaton / Atomic

And launching from Steam, you can see it’s the ATOMIC edition. So there you go: not only can you still buy it on Zoom Platform, it has MORE content, and no, it’s not $100+++ on the open market of keys; it’s a normal retail sale.

Let’s look at the GRP files:

From Steam:

D:\Program Files (x86)\Steam\steamapps\common\Duke Nukem 3D\gameroot\duke3d.grp
D:\Program Files (x86)\Steam\steamapps\common\Duke Nukem 3D\gameroot\addons\dc\dukedc.grp
D:\Program Files (x86)\Steam\steamapps\common\Duke Nukem 3D\gameroot\addons\nw\nwinter.grp
D:\Program Files (x86)\Steam\steamapps\common\Duke Nukem 3D\gameroot\addons\vacation\vacation.grp
D:\Program Files (x86)\Steam\steamapps\common\Duke Nukem 3D\gameroot\classic\DUKE3D.GRP

And the MD5s:

22b6938fe767e5cc57d1fe13080cd522 duke3d.grp
8ab2e7328db4153e4158c850de82d7c0 addons\dc\dukedc.grp
1250f83dcc3588293f0ce5c6fc701b43 addons\nw\nwinter.grp
1c105ced73b776c172593764e9d0d93e addons\vacation\vacation.grp
22b6938fe767e5cc57d1fe13080cd522 classic\DUKE3D.GRP

And from Zoom Platform:

C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\DUKE3D.GRP
C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\AddOns\DUKE!ZON.GRP
C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\AddOns\DUKEDC.GRP
C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\AddOns\NWINTER.GRP
C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\AddOns\PENTHOUS.GRP
C:\ZOOM PLATFORM\3D Realms\Duke Nukem 3D - Atomic Edition\AddOns\VACATION.GRP

And the MD5s:

22b6938fe767e5cc57d1fe13080cd522 DUKE3D.GRP
4cd0e3d3f170107b97aeafe4ade09ecf AddOns\DUKE!ZON.GRP
ddb7149855b0a0d0f073c3f5bedf7161 AddOns\DUKEDC.GRP
1250f83dcc3588293f0ce5c6fc701b43 AddOns\NWINTER.GRP
6a5a1505e38070ac0ab75eb60835988d AddOns\PENTHOUS.GRP
860367326139361078d0fc8cb623b548 AddOns\VACATION.GRP

Interestingly, the main game is the same, but the add-on levels are different. I’ll have to dig more to see what is going on. At least Nuclear Winter is the same.
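If you want to dig into those differences yourself, the GRP container is simple enough to poke at with a few lines of C. Here’s a minimal directory lister I sketched up, assuming the standard Build-engine GRP layout (a 12-byte "KenSilverman" signature, a 32-bit little-endian file count, then 16-byte entries of name + size); nothing here comes from either distribution, it’s just a quick tool sketch.

/* grpls.c - minimal GRP directory lister, assuming the standard
   Build-engine layout: "KenSilverman" (12 bytes), a uint32 file count,
   then per file a 12-byte name and a uint32 size, all little-endian.
   Build: cc -o grpls grpls.c     Usage: ./grpls DUKEDC.GRP */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.grp\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    char sig[12];
    unsigned char n[4];
    if (fread(sig, 1, 12, f) != 12 || memcmp(sig, "KenSilverman", 12) != 0) {
        fprintf(stderr, "%s: not a GRP file?\n", argv[1]);
        fclose(f);
        return 1;
    }
    if (fread(n, 1, 4, f) != 4) { fclose(f); return 1; }
    uint32_t count = n[0] | (n[1] << 8) | ((uint32_t)n[2] << 16) | ((uint32_t)n[3] << 24);

    for (uint32_t i = 0; i < count; i++) {
        char name[13] = { 0 };      /* 12-byte name, kept NUL-terminated */
        unsigned char sz[4];
        if (fread(name, 1, 12, f) != 12 || fread(sz, 1, 4, f) != 4) break;
        printf("%-12s %10u\n", name,
               sz[0] | (sz[1] << 8) | ((uint32_t)sz[2] << 16) | ((uint32_t)sz[3] << 24));
    }
    fclose(f);
    return 0;
}

Running it against both copies of DUKEDC.GRP and diffing the output should show whether the Zoom versions add files, rename them, or are simply rebuilds of the same maps.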

Granted, I’ve owned them all in some fashion over the years, so I can’t even keep track of how many times I’ve bought Duke Nukem 3D, but here we go.

Apparently Randy can’t do anything about it, as he’s pulled the Duke from everything else.

A mildly annoying 32bit adventure, also happy 30th PGP!

It’s been 30 years since the initial launch of PGP! Hard to believe what a firestorm it ignited in the 1990s, and the real pity is that the crypto field is just as baffling and confusing to people today as it was back then.

It’s crazy how crypto went from being an obtuse tool to suddenly being in the hands of normal people, with a public web of trust and widely available source. And of course it was that widely available source that led to the first real attempts at geofencing the internet, which were naturally impossible to enforce; even in the era before VPNs people were able to circumvent any and all “protections” and download away. Strong cryptography went from being considered ‘weapons grade’, and thus requiring a munitions license to produce and distribute, to suddenly being available to the world at large.

Investigations were launched, agencies contacted, and in spite of it all people held key-signing parties to exchange public keys and sign each other’s, building the web of trust. Try as some might to demand ‘back door access’ or black-box crypto chips, the cat was out of the bag, and all you needed was a C compiler and a zip file small enough to easily fit on a low-density 5 1/4″ diskette. It was 1991, after all, and there was still a sizable number of XT/AT class machines out there, along with the 68000-based Amiga/Atari/Macintosh (upgraded QLs? 128KB really isn’t enough).

PGP 1.0 is from another era: originally written in the late ’80s, cleaned up and released in 1991, when mass-produced 64-bit machines were still a way off, and thus PGP 1.0 really only supports 16-bit & 32-bit OSes. For the purpose of this ‘revival’ I went with the Unix port, the aptly named unix_pgp10.tar.gz, and from the MS-DOS version I extracted the test data into the file pgp10-test-data.tar.gz to make sure it works.

$ file pgp
 pgp: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=cd9ecbf51fab24abbb7153a2cc04bb01bbf2ae91, not stripped
$ ./pgp testfile.ctx
 Pretty Good Privacy 1.0 - RSA public key cryptography for the masses.
 (c) Copyright 1990 Philip Zimmermann, Phil's Pretty Good Software.  5 Jun 91
 File is encrypted.  Secret key is required to read it.
 Key for user ID: Bond, James (007)
 288-bit key, Key ID A27A1F, created Sat Oct 19 23:56:24 3006391
 You need a pass phrase to unlock your RSA secret key.
 Enter pass phrase:

While it was simple enough to build, sadly on an x64 WSL instance it doesn’t work: there is no pass phrase for the test data, yet it demands one (and note the bogus key size and creation date above).

Normally I have the usual two choices: a) try to fix PGP to be 64-bit friendly, or b) run it under a 32-bit environment. Normally I would do b, but I went digging into some porting strategies for choice a and ran into this totally underused tech: x32.

Long story short, you keep your 32-bit integers and pointers and the program behaves like a 32-bit process, but it runs as native x86-64 code. Even better, -static works!

On Debian 10 the environment can be installed with the following:

apt-get install gcc-7 lib32gcc-7-dev libgcc-7-dev libx32gcc-7-dev gcc-7-multilib

Then to invoke it, use gcc-7 -mx32. It’s that easy.
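To see what -mx32 actually buys you, here’s a trivial check of my own (not from the PGP source): ints, longs, and pointers all stay at 4 bytes, while the code itself is still generated for x86-64.

/* sizes.c - quick look at the data model.  My own illustration, not PGP code.
   gcc-7 -mx32 -static sizes.c -o sizes   ->  int/long/void* = 4/4/4
   gcc-7 -m64          sizes.c -o sizes   ->  int/long/void* = 4/8/8
   PGP 1.0's multiprecision code presumably assumes the former, which
   is why a plain 64-bit build misbehaves. */
#include <stdio.h>

int main(void)
{
    printf("int: %zu  long: %zu  void*: %zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    return 0;
}

Which lines up with what file(1) reports below: a 32-bit executable, but built for x86-64.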

WSLv1 vs WSLv2

$ ./pgp
 -bash: ./pgp: cannot execute binary file: Exec format error
$ file pgp
 pgp: ELF 32-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=2aa5f030603018ca1dc6c5c10aa979751b006aca, for GNU/Linux 3.4.0, not stripped

Notice it is now a 32-bit LSB executable, but built for x86-64! However, under the WSLv1 environment it won’t run. Time to upgrade to v2.

   wsl --set-version Ubuntu-20.04 2
   Conversion in progress, this may take a few minutes…
   For information on key differences with WSL 2 please visit https://aka.ms/wsl2
   WSL 2 requires an update to its kernel component. For information please visit https://aka.ms/wsl2kernel 

And now with the instance converted:

$ ./pgp
 Pretty Good Privacy 1.0 - RSA public key cryptography for the masses.
 (c) Copyright 1990 Philip Zimmermann, Phil's Pretty Good Software.  5 Jun 91
 For details on free licensing and distribution, see the PGP User's Guide.
 For other cryptography products and custom development services, contact:
 Philip Zimmermann, 3021 11th St, Boulder CO 80304 USA, phone (303)444-4541
 Usage summary:
 To encrypt a plaintext file with recipent's public key, type:
    pgp -e textfile her_userid      (produces textfile.ctx)
 To sign a plaintext file with your secret key, type:
    pgp -s textfile your_userid     (produces textfile.ctx)
 To sign a plaintext file with your secret key, and then encrypt it
    with recipent's public key, producing a .ctx file:
    pgp -es textfile her_userid your_userid
 To encrypt with conventional encryption only:  pgp -c textfile
 To decrypt or check a signature for a ciphertext (.ctx) file:
    pgp ciphertextfile [plaintextfile]
 To generate your own unique public/secret key pair, type:  pgp -k
 To add a public or secret key file's contents to your public
    or secret key ring:   pgp -a keyfile [keyring]
 To remove a key from your public key ring:     pgp -r userid [keyring]
 To view the contents of your public key ring:  pgp -v [userid] [keyring]
$

And we are in business! Now we can run the example crypto test:

$ ./pgp testfile.ctx
 Pretty Good Privacy 1.0 - RSA public key cryptography for the masses.
 (c) Copyright 1990 Philip Zimmermann, Phil's Pretty Good Software.  5 Jun 91
 File is encrypted.  Secret key is required to read it.
 Key for user ID: Bond, James (007)
 286-bit key, Key ID A27A1F, created (null)
 Advisory warning: This RSA secret key is not protected by a passphrase.
 Just a moment-- .
 File has signature.  Public key is required to check signature. .
 Good signature from user "Smart, Maxwell (86)".
 Signature made Thu Jun  6 05:28:52 1991
 Plaintext filename: testfile

And there we are!

PGP 1.0 suffers from two real defects of its era: the first being the home-brew Bass-O-Matic cipher, which is apparently full of all kinds of flaws, and the second lurking in rsalib.c:

The RSA public key cryptosystem is patented by the Massachusetts Institute of Technology (U.S. patent #4,405,829). Public Key Partners (PKP) holds the exclusive commercial license to sell and sub-license the RSA public key cryptosystem. The author of this software implementation of the RSA algorithm is providing this implementation for educational use only. Licensing this algorithm from PKP is the responsibility of you, the user, not Philip Zimmermann, the author of this implementation. The author assumes no liability for any breach of patent law resulting from the unlicensed use of this software by the user. These routines implement all of the multiprecision arithmetic necessary for Rivest-Shamir-Adleman (RSA) public key cryptography.

And it ignited quite a war over licensing the RSA cryptography base. It wasn’t until 1992/1993 that RSA released their own aptly named RSAREF, which at least clarified and addressed their licensing restrictions. As we found out later it wasn’t the DOJ shutting down encryption, nor wild acts of Congress; instead it was US Patent 4,405,829, which finally expired on September 21, 2000, along with US Patent 4,200,770 (Hellman, Diffie, Merkle: public-key cryptography), which expired in September of 1997. So in the end it was the lawyers who were to be feared, not the US Government.

Another source of annoyance is that the public/private key files are stored in a binary format (hence the 16/32/64-bit issues, I’m sure!).
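To illustrate why that hurts, here’s a made-up record (this is NOT PGP’s actual key file layout, just an illustration) written straight to disk the way code of that era tended to do it:

/* keyrec.c - hypothetical record dumped straight to disk.  NOT PGP's
   actual key file format; it just shows why raw binary structures
   don't travel between 16/32/64-bit builds: field sizes and padding
   change with the data model. */
#include <stdio.h>
#include <time.h>

struct keyrec {
    long  timestamp;    /* 4 bytes on 16/32-bit and x32, 8 bytes on LP64 */
    short bits;         /* padding after this also differs per ABI       */
    char  userid[32];
};

int main(void)
{
    struct keyrec r = { (long)time(NULL), 1024, "Bond, James (007)" };
    printf("sizeof(struct keyrec) = %zu\n", sizeof r);  /* differs per target */

    FILE *f = fopen("keyrec.bin", "wb");
    if (!f) { perror("keyrec.bin"); return 1; }
    fwrite(&r, sizeof r, 1, f);  /* a 64-bit build writes a different, incompatible file */
    fclose(f);
    return 0;
}

A key ring written by the 16-bit MS-DOS build simply isn’t byte-compatible with what a naive 64-bit rebuild expects, which is exactly the sort of thing the x32 build sidesteps.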

C:\temp>pgp -v jason.pub
 Pretty Good Privacy 1.0 - RSA public key cryptography for the masses.
 (c) Copyright 1990 Philip Zimmermann, Phil's Pretty Good Software.  5 Jun 91
 Key ring: 'jason.pub'
 Type bits/keyID   Date     User ID
 pub  990/F7CAD5 12-Jun-21  Jason Stevens
 1 key(s) examined.
 C:\temp>type jason.pub
 °ü½╟╓iº½t↕Hï╜Æ(↑ªα&E☼lKL$*⌠=└¥╒[… (binary gibberish)
 C:\temp>

So naturally you have to use uuencode, which led to MIME collisions and other fun stuff down the road. Yay!

begin 666 jason.pub
MF9,`$!C$8`U*87-O;B!3=&5V96YSW@/5RO>TFV)_9@%49RW3NYGD<8*H`3X1
MZ>D'/F/D7$)OKD9&K+>A<@4<,$RV.+M?9VR;17)M;Q^1W#OQ()>,#?B!J\?6
M::>K=!)(B[V2*!BFX"9%#VQ+3"0J]#W`!YW56]>*<RS):X9R?MY-1D)V\O#7
/1<''5)BZYJ+_T#8L!0`1
`
end

Even though today we have widespread SSL and all kinds of apps that encrypt by default, Operation Trojan Shield shows that an app is simply not enough, and you cannot trust anything.

Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed and “turned the tide” in the Allies’ favour.[15][16]

-Wikipedia

And just like in the spy movies, good crypto is tedious, bulky, and rarely used properly*.

Yes, please don’t seriously rely on PGP 1.0!

xMach

Back when I started blogging, I started with a few quick things off the top of my head: NetWare, SIMH, CP/M Zork, Xenix, and of course Mach. Back in the shiny new world of 1999 we may have survived the Y2K apocalypse, but the valley was firmly gripped by the start-up fever that would lead to the looming .com crash, though the mentality of IPO & dump remains to this very day.

The Utah projects around Mach were wound down, with the OSKit being the next wave for Pistachio, but that set the scene: more than just average users now had ‘fast’ machines at their disposal, along with the ability to rifle through published source and roll their own. It was in this environment, in the full force of post-‘year of the desktop’ Linux, that xMach appeared, attracted some interest, and quickly died. For the longest time I had thought all evidence of it had been eradicated, as the domain had been excluded from the Wayback Machine and searching went nowhere. But one bored night I tried exploring SourceForge, a veteran of the pre-.com-bubble-crash era, to see if it had anything, and yes it did!

https://sourceforge.net/projects/xmach

Oh sure, in retrospect it’s pretty obvious: all the cool kids had public repositories on the internet. And then after the .com implosion so many took their CVS trees and went home, and so many of those just died in self-hosted, self-imposed silence.

From what I remember, one of the first goals was to trim the source to make it easier to cross-compile from either NetBSD or Linux, and to utilize what was the double-edged sword of Lites: the ability to run either 386BSD/NetBSD binaries or Linux binaries. And the Linux side had already incorporated shared libraries, something that was desperately needed in 1993 for X11. But this was 2000! And of course the downfall of running Linux binaries under xMach/Lites is that you will inevitably compare the performance, and Linux won hands down. Not to mention that interest in improving and adding to Linux had full mainstream support by 2000, which made xMach feel all the more ancient.

But we didn’t come here for practicality!

Getting the source & tools

First off, I’m going to use the cross toolchain for building OSKit, i586-linux2.tar.gz (password and 404 apply), as I don’t feel like fighting over why MIG doesn’t work in 64-bit, and it’s just easier to enable 32-bit user mode. Speaking of which, for Ubuntu 20.04.2:

dpkg --add-architecture i386
apt-get update
apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386 libc6-dev:i386 libfl-dev:i386
apt-get install sharutils gcc-multilib build-essential

And for people like me running on Windows 10, you absolutely MUST enable WSL v2. Without an actual kernel the multiarch simply doesn’t work. No doubt it has something to do with actually running Linux vs emulating it, probably mucking about with the LDT/GDT, which you certainly cannot do as a process on Windows.

Next just add the old GCC 2.7.2.3 chain into your path:

PATH=/usr/local/i586-linux2/bin:/usr/local/i586-linux2/lib/gcc-lib/i586-linux/2.7.2.3:$PATH
export PATH

Next up you’ll need the source, and being all new and trendy I put it over on GitHub. Clone the repo, and you are almost ready to start compiling!

Update permissions

As it turns out, git didn’t preserve the executable bit on the scripts, so you have to manually run this:

chmod +x kernel/c*
chmod +x kernel/*sh
chmod +x lites/conf*
chmod +x lites/conf/*

You may have to pay attention to the build to make sure nothing complains about ‘permission denied’:

bash: ../kernel/configure: Permission denied

Basically if you see this, you didn’t do it right.

Building the xMach kernel

Make an object directory and run configure like this:

../kernel/configure --host=i586-linux --target=i586-linux --build=i586-linux --enable-elf --enable-libmach --enable-linuxdev --prefix=/usr/local/xmach;cp ../updated-conf/kernel/Makeconf .

For some reason configure never populates the compiler tools right, and I can either mess with autoconf (no thanks) or just include a known good file. It’s not hard to figure out (you can diff it if you want to), but it’s just set up for a native 32-bit build of MIG, and then uses the cross tools to build the kernel.

Type make, and wait 13 seconds, give or take…

i586-linux-ld -r  -L/home/jsteve/src/xMach/kernel-build/lib -nostdlib \
         -o bsdboot.o crt0.o about_to_die.o anno.o boot_info_dump.o boot_start.o cpu.o cpu_init.o cpu_tables_init.o cpu_tables_load.o crtn.o die.o do_boot.o exit.o gdt.o idt.o idt_inittab.o idt_irq_init.o main.o malloc.o panic.o phys_mem_add.o pic.o putchar.o puts.o real_tss.o real_tss_def.o rv86_real_int.o rv86_real_int_asm.o rv86_reflect_irq.o rv86_trap_handler.o serial.o trap_dump.o trap_dump_die.o trap_handler.o trap_return.o tss.o tss_dump.o -lmach_exec -lmach_c
 make[1]: Leaving directory '/home/jsteve/src/xMach/kernel-build/boot/bsd'
 MACHBOOTDIR=cd /home/jsteve/src/xMach/kernel-build/boot/bsd; pwd \
 /home/jsteve/src/xMach/kernel-build/boot/bsd/mkbsdimage -o /home/jsteve/src/xMach/kernel-build/Mach /home/jsteve/src/xMach/kernel-build/kernel/kernel /home/jsteve/src/xMach/kernel-build/bootstrap/bootstrap
 real    0m13.135s
 user    0m12.060s
 sys     0m1.383s

I cannot stress just how incredibly fast this is. I’m pretty sure it literally took hours to run back in the day, assuming nothing went wrong, which it almost always did. As part of the fun, next do a ‘make install’.

Building Lites

This is somewhat similar to the kernel build, however it is far more touchy. You absolutely MUST copy/fix the Makeconf file, otherwise the build will be corrupted and the only way to fix it is to remove the build directory and try again.

../lites/configure --host=i586-linux --target=i586-linux --build=i586-linux --enable-mach4 --prefix=/usr/local/xmach --with-mach4=../kernel;cp ../updated-conf/lites/conf/Makeconf conf

And once again, make the project and just wait a few seconds. BSD without the hardware support is pretty tiny.

i586-linux-ld -o emulator.Lites.1.1.u3.out ./emulator_base  -L/home/jsteve/src/xMach/lites-build/liblites -L/usr/local/xmach/lib -e __start ecrt0.o  e_mig_support.o emul_generic.o error_codes.o emul_init.o emul_mapped.o emul_misc_asm.o e_bsd.o e_43ux.o bsd_1_user.o emul_mach_interface_user.o e_mach_msg_server.o e_stat.o e_bnr.o emul_exec.o signal_server.o e_uname.o e_mapped_time.o e_sysvipc.o e_linux.o e_sysv.o e_exception.o e_signal.o e_machinedep.o e_linux_trampoline.o e_linux_sysent.o e_isc4_sysent.o e_cmu_43ux_sysent.o e_lite_sysent.o emul_vector.o e_trampoline.o e_linux_getcwd.o    -llites -lthreads -lmach_sa  /usr/local/i586-linux2/lib/gcc-lib/i586-linux/2.7.2.3/libgcc.a && \
 i586-linux-size emulator.Lites.1.1.u3.out && \
 mv emulator.Lites.1.1.u3.out emulator.Lites.1.1.u3
    text    data     bss     dec     hex filename
  130195   16904     160  147259   23f3b emulator.Lites.1.1.u3.out
 make[1]: Leaving directory '/home/jsteve/src/xMach/lites-build/emulator'
 make[1]: Entering directory '/home/jsteve/src/xMach/lites-build/bin'
 make[1]: Nothing to be done for 'all'.
 make[1]: Leaving directory '/home/jsteve/src/xMach/lites-build/bin'
 real    0m6.423s
 user    0m6.006s
 sys     0m0.551s

Yes, that’s six seconds. Do a ‘make install’ and now your /usr/local/xmach directory is fully populated and ready to transport somewhere to do the ‘install’ and boot.

Installation

One thing to take note of with xMach versus the last standard Mach4 is that the Linux boot options have been removed. There are all kinds of changes in the 16-bit assembly handling that would need to be rewritten or fixed. I’m sure someone at some point has done it, but apparently it’s easier to go with bsdboot and use either machboot or GRUB (v1!) to get it going. Although Linux knows how to mount loop ‘disk files’, I don’t remember how to deal with partitions and BSD slices, so I did my typical trick of booting up an existing image and using telnet/uucp to transfer xMach onto an existing disk.

Booting, however, was a bit more fun, as the kernel is now compiled as a BSD ELF image and GRUB 1.x won’t boot it. I used a ‘Super Grub2 disk hybrid‘ ISO image, and was able to just hit escape and type in:

kfreebsd (hd0,msdos2,bsd1)/Mach
boot
Booting xMach kernel

Because of the massive drift between Multiboot and ‘BSD ELF’, the boot drive isn’t passed correctly, so I have to manually tell it ‘/dev/hd0a/mach_servers’ and it’ll pick up Lites and boot up!

And there we go: my old Mach4/Lites image from 2009 running with a ‘fresh’ cross-compiled build from Windows 10 / Ubuntu 20.04.

Where to go from here?

Realistically, nowhere. Seven years ago there was OpenMach, which seems to have done a few things, but even moving to a GAS from 2015 didn’t help me at all in trying to build the 16-bit stuff, so the answer must lie in the past. Otherwise the fundamental problem, as always, is that at best you will have a 4.4BSD system, which in 1992 would have been awesome, but in 2021? Yeah, it’s a curiosity at best.

Running VMWare ESXi on Raspberry PI

(This is a guest post by Antoni Sawicki aka Tenox)

Just for fun with virtualization I wanted to try out VMware ESXi for ARM64, specifically on the Raspberry Pi. ESXi for ARM has been around for a couple of years now. Since the Pi 4 packs 8GB of RAM and has a reasonably fast CPU, it can be a worthwhile experience. Also, more OSes for the Raspberry Pi are now available in UEFI boot mode.

I’m not going to go through the exact installation steps, as these are all around the web and YouTube. To summarize, you will need to download an image from the VMware website as well as a bunch of UEFI firmware files from GitHub and combine it all together onto an SD card. When you boot it you will go through an install process, which is straightforward. You can overwrite the install media and use it as the target, so there’s no need for multiple SD cards. Once it boots you will see the familiar ESXi boot screen:

ESXi booting on Raspberry PI 4

In order to get it going you will obviously need to add some storage. You can use NFS, iSCSI, or a locally attached USB drive. For the latter you need to disable the USB arbitrator:

# /etc/init.d/usbarbitrator stop
# chkconfig usbarbitrator off

What can it run?

ESXi ARM officially supports only UEFI-booting OSes. Fortunately this is the default for Ubuntu on the Pi; Free/Net/OpenBSD also work, and so does Windows. But what about OSes that use U-Boot? Since ESXi-ARM Fling 1.1 you can boot OSes in a “direct” mode with no UEFI! This is a huge step, but unfortunately as of today it doesn’t support UEFI-less VGA, only a serial port. Hopefully this can be fixed in the future; I would love to have a RISC OS and/or Plan 9 VM. On the other hand, Plan 9 supports EFI boot, so an image could be made.

Windows guest install was also much easier than I expected. Thanks to UUP dump you basically roll your own bootable ISO. I think it’s actually easier to get it going on ESXi than natively on RPI hardware or QEMU.

Windows 10 Guest VM on ESXi Fling Raspberry PI

NIC driver obviously did not work by default, but there is a VMXNET3 ARM64 driver in the wild:

VMXNET3 for Windows 10 ARM64 on ESXi Fling on Raspberry PI

What is it good for?

Right now, probably just for fun. But I can easily see datacenters filled with ARM servers running ESXi. The future is bright and free of Intel! Personally I will keep it around for development purposes, in case I need to make builds for ARM on various OSes.

Interestingly enough you can even run VMWare ESXi ARM on QEMU with nested virtualization!

Also, the official VMware ESXi ARM blog is worth checking for future updates.

Is reddit finally dying?

A king has his reign, and then he dies.

Granted, I’m not Netcraft so I really have no way to confirm it, but I found something kind of interesting over the last week or so while fighting various link386 issues.

not quite netcraft but it’ll do

Granted, with redirects I’m a large referrer to myself. But it’s no surprise that in the top ten three of them are Google, that DuckDuckGo is becoming a real force to be reckoned with, and BING?! It must be no rumor that Bing has always been incredibly profitable, to the point where Microsoft had been giving away Windows 8/8.1 licenses for free on the condition that the machines were basically Bing machines. It’s too bad the UWP thing, the constant rebrands, and the failure of the phone killed it all, as I liked the idea of sub-$100 personal computers.

But the real news here is ycombinator, aka Hacker News. It’s the new Slashdot, and it has unsurprisingly eclipsed the more insular lobste.rs, but both are also ahead of the once-mighty juggernaut reddit!

Looking at local graphs & cloudflare

local blog stats

Ever since I had to use Cloudflare the stats never report anywhere near correctly, but you can pretty much see when I post, and the uptick, except when some stuff gets crazy popular years later for seemingly no reason, like processing NASA images of Saturn.

Cloudflare freebie graphing.

Now compare and contrast with Cloudflare and you can see how nothing aligns. I do have the Cloudflare plugins and stuff on WordPress, but it never seems to do the right job. Oh well, I guess I can’t complain too much.

So what’s the big deal?

It’s always about user engagement. The other issue is that despite reddit’s horrible reputation for censorship and groupthink, anyone can sign up, so when engagement happens over there anyone is free to join in. Hacker News also allows user account creation, but lobste.rs is different, requiring an invite. And for new upstarts, or anyone just getting a start, it really sucks to see an audience behind a gate when you aren’t invited to that private club.

Looking at Ycombinator’s Hacker News, you can see far more engagement and crossposting. Neat! Over at lobste.rs there is a bit of posting but far less engagement. Rounding it out, of course, is reddit. Or maybe reddit is just fresh ground to crosspost to. I know it’s bad taste to post your own stuff over and over, and I’m not going to publish on a 3rd-party site, not ever since the massive Blogger outages of 2011.

Is reddit really in decline?

Has reddit finally lost its appeal? Or do people just post directly there, hoping their posts and images don’t get erased? I can’t imagine putting so much time and effort into something only to potentially lose it all because of someone else. I know the US political scene certainly turned a LOT of people off of American sites as it became wall-to-wall USA. It was so crazy I had people calling me in Hong Kong wanting either donations or votes, despite the fact that I’m neither there nor American. Otherwise it’s just a meme fest over on RLM, 80’s design, retro-cgi, unixporn, and of course The Stop Girl.

Takeaways

I occasionally see people asking about blogging, writing, or even the video thing. I’d love to do videos, but being in Hong Kong there is no space: I’m always getting people walking into my office, I have young kids screaming and crying, and of course thanks to the RIAA and tall buildings I get everyone’s music overlapping.

Years ago it was the Slashdot effect, then getting dug at Digg, then being reddited. Now is it getting YC’d? Are lobste.rs boiled? One thing is for sure: find your audience and engage them everywhere. That’s my advice to anyone crazy enough to get started, and it’s never too late. And own your data, own your platform; even if the audience is bigger elsewhere, you cannot depend on a 3rd party to ever care as much as you do.

Elijah Miller’s NEC v30 on a Pi hat

v30 on a board

While talking about home-brew 8080 and 8086 systems on Discord, an eBay search brought me to Elijah’s store page, where this small little curiosity was up for sale. It’s literally just a NEC v30 on a Raspberry Pi hat, for a mere $15 USD! Interestingly enough the v30 can operate at 3.3v, meaning no special hardware is required to interface to the GPIO bus on a Pi. This reminds me so much of the CP/M cartridge for the Commodore 64, and with the price being so right I quickly ordered one and eagerly awaited the 2 weeks of shipping to Asia.

While I have Pi 4’s that I run Windows 10 on to drive some displays & PowerPoint, I wanted to use the slightly faster Pi400 for this. The Pi400 has a compatible GPIO expansion port, so just like a cartridge it’s a simple matter of slotting the card, powering up, and building the software. While there is an included binary, it’s a 32-bit one, and I’m running Manjaro on the Pi400 for a similar look/feel to the PineBook Pro. Anyway, the dependencies are SDL2 and the oddly named ‘wiringPi’ library that allows C programs to interface to the GPIO.

You can download the emulator over on homebrew8088, specifically the Raspberry Pi Second Project. The last ‘ver 2’ download has the project configured for a v30, which is an 8086 analogue, unlike the v20, which is an 8088 analogue. When physically interfacing to the processor, things like this really matter!

With the emulator built it was pretty simple to fire it up, and boot into MS-DOS:

first boot!

I have to admit I was a little startled at first, as I really had no idea if this was going to work at all. I’d spoken to an engineer friend, and he was saying that plugging a CPU directly into the GPIO bus and toggling connections to emulate the rest of the board was crazy, and that without any electrical buffers it’d most likely fry the processor and maybe the Pi as well. I suspect the low voltage may be sparing both, although I have no EE background so I’m not going to pretend to know.

Loading up Norton SI confirms what Elijah had posted on eBay: it runs very slowly, at about 1/3rd the speed of an XT. Now, I may not know anything about hardware, but this seemed like something a profiler could at least tell me about, and maybe someone like me, helicoptering in on the shoulders of giants, could see something.

gcc -I/usr/include/SDL2 -pg -O2 *.cpp -o pi -lSDL2 -lwiringPi -lpthread -lstdc++

This will build a profiled version of the emulator that’ll let us know which functions are being called, both how many times and how much time is spent in them. Not knowing anything, but having profiled other emulators, the usual pattern is that you spend most of the time fetching and possibly translating memory, both in feeding instructions and pushing/popping data for the stack and pointers. Waiting is usually for initialisation and for IO.

Once you’ve run your profiled executable, it’ll dump a binary file, gmon.out, which you can then format into a text report with gprof like this:

gprof pi gmon.out > report.txt

And then looking at the report you can see where the most time goes, along with the top call counts. Some things just take a while to complete, and others, well, they get called far too often.

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 39.91      0.71      0.71   286883     0.00     0.00  Print_Char_9x16(SDL_Renderer*, int, int, unsigned char)
 16.30      1.00      0.29        1     0.29     1.02  Start_System_Bus(int)
 12.37      1.22      0.22  1100374     0.00     0.00  Data_Bus_Direction_8086_OUT()
  7.87      1.36      0.14  5954106     0.00     0.00  CLK()

As expected, Start_System_Bus takes 1 second, followed by 1,100,374 calls to set Data_Bus_Direction_8086_OUT (no doubt the Pi needs to alternate between reading and writing to the CPU), followed by 5,954,106 ticks of the CLK function. Of course the real culprit is Print_Char_9x16, which was called 286,883 times and is responsible for nearly 40% of the runtime!

Obviously, for a simple MS-DOS boot the screen should not be calling any print-char routine anywhere near this many times. Clearly something is amiss. Not knowing anything, I added a simple counter at the top of the Print_Char_9x16 function to let it only execute 1 in every 1,000 calls, and I got this:

Obviously it’s not right, which means that the culprit really isn’t Print_Char_9x16 but rather whatever is calling it. It was a simple change to each of the Mode functions to only render a fraction of the time, and I turned it into a define so I can make it fire more often. This is a simple diff, assuming WordPress doesn’t screw it up. It’s not pretty, but it gets the job done.

$ diff -ruN ver2/vga.cpp ver2-j/vga.cpp 
--- ver2/vga.cpp	2020-07-29 10:36:51.000000000 +0800
+++ ver2-j/vga.cpp	2021-06-04 01:51:33.546124473 +0800
@@ -1,5 +1,9 @@
 #include "vga.h"
 
+static int do9x16 = 0;
+#define VIDU 5000
+
+
 void Print_Char_18x16(SDL_Renderer *Renderer, int x, int y, unsigned char Ascii_value)
 {
 	for (int i = 0; i < 9; i++)
@@ -23,6 +27,12 @@
 
 void Mode_0_40x25(SDL_Renderer *Renderer, char* Video_Memory, char* Cursor_Position)
 {
+do9x16++;
+if(do9x16>VIDU)
+        {do9x16=0;}
+else
+        {return;}
+
 	int index = 0; 
 	for (int j = 0; j < 25; j++)
 	{
@@ -36,6 +46,7 @@
 	Print_Char_18x16(Renderer, (Cursor_Position[0] * 18), (Cursor_Position[1] * 16), 0xDB);
 	SDL_RenderPresent(Renderer);	
 }
+
 void Print_Char_9x16(SDL_Renderer *Renderer, int x, int y, unsigned char Ascii_value)
 {
 	for (int i = 0; i < 9; i++)
@@ -57,6 +68,12 @@
 }
 void Mode_2_80x25(SDL_Renderer *Renderer, char* Video_Memory, char* Cursor_Position)
 {
+do9x16++;
+if(do9x16>VIDU)
+        {do9x16=0;}
+else
+        {return;}
+
 	int index = 0; 
 	for (int j = 0; j < 25; j++)
 	{
@@ -102,6 +119,12 @@
 
 void Graphics_Mode_320_200_Palette_0(SDL_Renderer *Renderer, char* Video_Memory)
 {
+do9x16++;
+if(do9x16>VIDU)
+        {do9x16=0;}
+else
+        {return;}
+
 	SDL_RenderClear(Renderer);
 			int index = 0; 				
 			for (int j = 0; j < 100; j++)
@@ -156,6 +179,12 @@
 }
 void Graphics_Mode_320_200_Palette_1(SDL_Renderer *Renderer, char* Video_Memory)
 {
+do9x16++;
+if(do9x16>VIDU)
+        {do9x16=0;}
+else
+        {return;}
+
 	SDL_RenderClear(Renderer);
 			int index = 0; 
 			for (int j = 0; j < 100; j++)

While it feels more responsive on the console, it’s still incredibly slow. SI was returning the same speed, which means that although we aren’t hitting the screen anywhere near as often, it’s still doing far too much. Is it really a GPIO bus limitation? Again, I have no idea. But the next function of interest, of course, is the clock.

First I tried halving the usleep, thinking that maybe it wasn’t getting called enough. Running SI revealed that I’d gone from a 0.3 to a 0.1! Obviously this is not the desired effect! So instead of dividing, I multiplied it by four:

diff -ruN ver2/timer.cpp ver2-j/timer.cpp 
--- ver2/timer.cpp	2020-08-12 00:32:13.000000000 +0800
+++ ver2-j/timer.cpp	2021-06-04 02:06:25.505904407 +0800
@@ -7,7 +7,7 @@
 {
    while(Stop_Flag != true)
    {
-      usleep(54926); 
+      usleep(54926*4); 
       IRQ0();
    }
 }

Now re-running SI I get this:

Norton SI with clock multiplied by four

Now it’s scoring a 1.5! Obviously these are all ‘magic numbers’ tied to the Pi400, and more importantly I haven’t studied the code at all; I’m not trying to disparage anything, if anything it’s just a quick example of why profiling your code can be so important! At the same time, trying to run games is so incredibly slow that I don’t even know if my changes had any actual impact on speed, as benchmarking emulation can be such a finicky thing.

My go-to game, Battletech 3025: Crescent Hawks Inception, loads to the first splash screen but then seems to hang. I could be impatient, or there could be further issues, but I’m just some impatient tourist with a C compiler…

With my changes and re-running the profiler I now see this:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  us/call  us/call  name    
 95.41    129.23   129.23 22696621     5.69     5.69  Read_Memory_Array(unsigned long long, char*, int)
  2.90    133.15     3.92                             Start_System_Bus(int)
  0.88    134.34     1.19 64369074     0.02     0.02  CLK()
  0.30    134.74     0.40                             keyboard()
  0.16    134.96     0.22   412873     0.53     0.53  Print_Char_9x16(SDL_Renderer*, int, int, unsigned char)
  0.08    135.07     0.11 11273939     0.01     0.01  Data_Bus_Direction_8086_OUT()

Which is now more like what I expected, with the bulk of the emulation time going to Read_Memory_Array, the clock following that, and of course our tamed screen renderer (although it’s still called far too much!), with Data_Bus_Direction further down the list. No doubt some double buffering and checking what changed between calls would go a LONG way toward optimising it, as would actually studying the source code.
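For the curious, here’s roughly what such a dirty-check could look like; the names and the 80×25 buffer size are my assumptions, not anything from the homebrew8088 source:

/* Sketch of an "only redraw when video memory changed" check.  The
   function and variable names are mine, and I'm assuming a plain
   80x25 text buffer; the idea is simply to keep a shadow copy of the
   last frame and skip the expensive SDL render pass when nothing
   has changed. */
#include <string.h>

#define TEXT_CELLS (80 * 25)

static char shadow_vram[TEXT_CELLS];
static char shadow_cursor[2];

/* Returns 1 if the screen needs re-rendering, 0 if the pass can be skipped. */
static int screen_dirty(const char *video_memory, const char *cursor_position)
{
    if (memcmp(shadow_vram, video_memory, TEXT_CELLS) == 0 &&
        memcmp(shadow_cursor, cursor_position, 2) == 0)
        return 0;                                   /* nothing changed since last frame */

    memcpy(shadow_vram, video_memory, TEXT_CELLS);  /* remember this frame */
    memcpy(shadow_cursor, cursor_position, 2);
    return 1;
}

Calling something like this at the top of Mode_2_80x25() instead of my every-N-calls counter would keep the console responsive without blindly throwing frames away, and a finer-grained version could compare cell by cell and repaint only the characters that actually changed.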

The one cool thing about this is that if I wanted to write a PC emulator, this approach gives me the confidence that the CPU is not only 100% cycle accurate, but 100% bug-for-bug accurate, since we are using a physical processor.

And again for $15 USD + Shipping I cannot recommend this enough!

Enable Hyper-V on Windows 10 Home

So you are in a hurry and need to build a network in a box. It was a bit of a surprise, and you have no time. On site there is ONE computer: it’s a NUC. A tiny one. And you cannot replace the base OS for “reasons”… No problem, you say, just add in Hyper-V and you can build an ‘older’ but useful domain controller, Exchange server, VPN & utility servers, and then, yeah, you find out the killer:

Ugh

Windows 10 Home.

Well, it turns out that you actually *CAN* install Hyper-V on Windows 10 Home with a little command-line shake and bake:

rem Run from the script's own directory
pushd "%~dp0"
rem List the Hyper-V package manifests already sitting in the servicing store
dir /b %SystemRoot%\servicing\Packages\*Hyper-V*.mum >hyper-v.txt
rem Add each of those packages with DISM, then clean up the list
for /f %%i in ('findstr /i . hyper-v.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"
del hyper-v.txt
rem Finally enable the Hyper-V feature itself
Dism /online /enable-feature /featurename:Microsoft-Hyper-V -All /LimitAccess /ALL
pause

I picked up this tip from TheWindowsClub. And yep, it works!

Super cool!

Of course this also means you can turn your unsuspecting parent’s home machine into a remote server….

oneAPI Base Toolkit from Intel [free as in beer]

I’ve been informed that the toolkit includes some fancy memory tools to detect incorrect accesses, like when you use void pointers for fun and profit but accidentally copy in too much (or too little) and really mess stuff up. Just because the alignment works out and it ‘fits’ doesn’t mean you are doing what you think you are doing!
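The classic shape of that bug looks something like this; a made-up example of my own, not from any particular codebase:

/* overcopy.c - the destination is properly aligned and a void* happily
   accepts it, but the copy is larger than the object behind the pointer,
   so whatever sits next to it on the stack gets silently trashed. */
#include <stdio.h>
#include <string.h>

struct small { int a; };        /* 4 bytes  */
struct big   { int a, b, c; };  /* 12 bytes */

static void blit(void *dst, const void *src, size_t n)
{
    memcpy(dst, src, n);        /* no type or bounds checking here */
}

int main(void)
{
    struct big   src = { 1, 2, 3 };
    struct small dst;

    /* Oops: copying sizeof(src) instead of sizeof(dst). */
    blit(&dst, &src, sizeof src);

    printf("dst.a = %d\n", dst.a);
    return 0;
}

Memory checkers like the ones in the toolkit, or ASan (gcc/clang -fsanitize=address, or /fsanitize=address in VS 2019 16.9+), should flag the stack buffer overflow in blit() immediately, even though the program will usually appear to run just fine.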

Anyways, link is here!

The Intel toolkit is expected to integrate with Visual Studio 2017 or 2019. I have the Community edition and it picked it up fine. In addition, VS 2019 has ASAN, which also helps combat the infamous memory issues of C/C++.

<need quote from [HCI]Mara’akate…>

The big wins are the profiling tools and the memory-leak tool. I just haven’t had time lately; I’ve been busy IRL and wanting to wrap up some a.out to OMF adventures.

Epyx games on Steam!

Wow so many!

So I’d been sleeping under a rock and missed that Epyx had put a bunch of games onto Steam late last year! I’d just been talking to someone about Impossible Mission, as I had no idea what was going on in the first one, but as a kid I’d actually beaten Impossible Mission II! Oddly enough Summer Games & Jump Man Jr are absent, perhaps due to some long-standing deal elsewhere. But at prices between $12 & $28 HKD ($1.55 & $3.61 USD) I was hyped and bought a bunch.

I was curious which system they would be for. Commodore 64? Atari 400/800? Commodore Amiga or ST?

Impossible Mission II for the PC

Turns out there was an IBM PC version. I never knew, but back in ’88 I was still using my Commodore 64, and everyone else I knew had either an Amiga or an Atari ST. School had those QNX machines, but that was just a big missed opportunity for everyone and everything.

The Steam/PC version… is certainly not right out of the box. As you can see from the box it supports CGA/EGA/Tandy/VGA-MCGA, and it’s configured for EGA by default. Strange, as MCGA/VGA is the best of all the options with its 256 colours. However, where it falls flat very quickly is that back in ’88 they didn’t code for that big thing from the year before, the 1987 AdLib. Ooff! Even worse, no IBM PCjr audio (same as the Tandy) either.

So yes it’s glorious PC SPEAKER. Yuck.

Turns out they are all PC versions. Well, except for Impossible Mission, which is ‘remastered’… and honestly it’s not that good. It’s super laggy. The JavaScript version is MUCH superior.

What went wrong?

The Cinemaware Anthology: 1986-1991 managed somehow to get an Amiga emulator that boots up from ROM disks and is so transparent it’s easy to forget there is any emulation.

So let’s use 7zip and rip it apart!

10/11/2014 10:29 pm 402,960 .bind
10/11/2014 10:29 pm 19,962,368 .data
10/11/2014 10:29 pm 27,136 .rdata
10/11/2014 10:29 pm 87,040 .reloc
24/05/2021 09:36 pm .rsrc
10/11/2014 10:29 pm 174 .rsrc_1
10/11/2014 10:29 pm 402,944 .text
28/05/2019 08:12 pm 20,901,904 Anthology.exe

So inside the executable there is 20MB of data. Let’s rip further:

28/10/2014 09:29 am floppy
24/10/2014 11:21 am img
22/10/2014 12:37 am maps
28/10/2014 09:30 am os
28/10/2014 09:30 am rlst
28/10/2014 09:29 am romdisk

Floppies, ROM lists, ROM disks, and an OS?

28/10/2014 09:30 am .
28/10/2014 09:30 am ..
03/07/2013 12:02 am 262,144 Kick12.rom
27/10/2014 09:45 am 1,794 rdd.rom
2 File(s) 263,938 bytes

Really? Did they get some kind of sweetheart deal?

I copied the ROM over to UAE, fired it up, and yep, it’s Kickstart v1.2.

Pretty cool, and sad too, as Pixel Games UK obviously couldn’t secure such a cool deal, and all we have are the sadly inferior PC versions. And that is the crux of it: on the one hand I want to support them, but the PC versions frankly are unplayable. Jump Man can barely register 2 keys at the same time, so it’s almost impossible to run and jump. Maybe it’s a DOSBox thing, maybe it’s an ancient DOSBox, I don’t know. On the other hand, the only one that really doesn’t suffer keyboard issues, by virtue of its design, is Rogue.

So I’m really mixed on this. And 33 years ago someone should have told them at Epyx to get one of those ‘peanut’ machines and get the PCjr sound effects going, or at the very least get one of those newfangled AdLibs that Sierra On-Line kept harping about.

Speaking of Sierra, check out Not All Fairy Tales Have Happy Endings by Ken Williams. It’s not so much about the games, but more about the business wheeling and dealing, and yes, the AdLib and the ‘peanut’ are important.

Ultra rare IBM OS/2 ad featuring OS/2!

I honestly didn’t think anything like this existed

Stay 2'ned. Can't help but think of 2ine.