1.1.1.1

So Cloudflare decided to launch their own DNS, on 1.1.1.1 and 1.0.0.1, apparently in a bid to fix global censorship. I’m on the road, out of China right now, so I can’t test at the moment, but later in the week I’ll be back and can check out how the Great Firewall handles it.

I’m guessing this is another bid to bolster their case for being a content-neutral safe harbour, although their CEO personally screwed that up last year by showing that they can and will police content when it suits them…  Talk about oops.

As always, that is the consequence of speech: some people are really secret assholes. Although by trying to go all Cultural Revolution on them, you end up not only making them martyrs, but also proving that they cannot be countered with words, only through censorship.

This to me is the scary consequence of everything being commercial, combined with the right of free association. Even some moron who thinks the moon is made of cheese can still get mail delivery, but will they be able to work, open a bank account, get internet, or even get food?

In other news, dumping Facebook drops cortisol levels after 5 days. Turns out that hippy paradise of everyone being able to instantly communicate and share is actually a living hell.

Dump Facebook, hit the gym, get a life.

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” — Ferris

Happy April Fools’ Day.

Update: turns out the DNS works from China. Naturally, none of the sites load.

 
$ nslookup
> www.google.com
Server: 1.1.1.1
Address: 1.1.1.1#53

Non-authoritative answer:
Name: www.google.com
Address: 69.63.180.173
> youtube.com
Server: 1.1.1.1
Address: 1.1.1.1#53

Non-authoritative answer:
Name: youtube.com
Address: 203.98.7.65
>

Re-building old GCC distribution tars AKA patching backwards

AKA for everyone but me, who never read the readme.  For some reason I got pointed back to my old GCC 1.27 on MS-DOS article, and wanted to see when 386 support really did first appear. After a bunch of messing around, it turns out it was first shipped in GCC 1.25:

Sat Jul 16 14:18:00 1988 Richard Stallman (rms at sugar-bombs.ai.mit.edu)

* *386*: New files.

So there we are, July 16th 1988!

But looking at the ‘old-releases/gcc-1‘ directory, there is no gcc-1.25.tar file! So how to get there from here?  Well, the simple answer is to take gcc-1.27 and reverse-patch it down using the patches in the patches directory.  The only catch, of course, is to read the patches before reverse-applying them, to see if any files need to be renamed first, otherwise there will be failures… specifically in the file gcc.diff-1.25-1.26.

So normally I’ve always patched going up, but with the magical -R flag you can go backwards!  So taking 1.27, you can go down to 1.26 by running:

patch -p1 -R < ../gcc.diff-1.26-1.27

And this will take 1.27 and downgrade it to 1.26.  As mentioned above, the renames for going from 1.26 to 1.25 need to be done in reverse:

Before installing these diffs, rename files as follows:

mv typecheck.c c-typeck.c
mv decl.c c-decl.c
mv parse.y c-parse.y
mv parse.h c-parse.h

so do the opposite, and then you can reverse-patch.
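
Putting it all together, the whole descent from 1.27 to 1.25 looks something like this (a sketch, assuming the diff files sit one directory up, as in the commands above):

# start from a pristine gcc-1.27 tree
cd gcc-1.27

# 1.27 -> 1.26: reverse-apply the 1.26-to-1.27 diff
patch -p1 -R < ../gcc.diff-1.26-1.27

# undo the renames documented in gcc.diff-1.25-1.26
# (the instructions are written for patching up, so do the opposite)
mv c-typeck.c typecheck.c
mv c-decl.c decl.c
mv c-parse.y parse.y
mv c-parse.h parse.h

# 1.26 -> 1.25: reverse-apply the 1.25-to-1.26 diff
patch -p1 -R < ../gcc.diff-1.25-1.26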

For anyone who cares, I put up the tar files on SourceForge here.

Fun with Empire EFI & OS X 10.6 on Intel

Who needs one, when you can have two?

So I wanted to get 10.6.3 running after I somehow ended up with not just one but two retail copies on my last trip to America… So I’m using the positively ancient Chameleon boot loader, 2.0-RC5.  I used to use the trendy Empire EFI boot loader, but it’s not working for me anymore with modern CPU setups.

I set up VMware to use a Windows 10 x64 profile, but removed the hard disk and re-added it as a SATA drive.  The default SCSI hard disk won’t work at all, but the available SATA works just fine.
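
For reference, the disk change boils down to a few lines in the VM’s .vmx file; a sketch, with the file names being whatever you picked when creating the VM:

# append SATA disk config to the VM's .vmx (names are illustrative)
cat >> osx106.vmx <<'EOF'
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "osx106.vmdk"
EOF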

Chameleon v2.0-RC5pre7

Boot up the Chameleon boot loader, then drop to the text prompt (F5/tab) and enter the following string at the boot prompt:

platform=x86pc cpus=1 busratio=7 -v

After a minute or so it’ll boot up and prompt for a language; afterwards the Apple menu will appear, letting us select the disk tool, where we can partition & format the disk.

After that it’s just as simple as choosing your options, accepting the license, and then you are off to the install part.

And just like that you are teleported to the magical world of OS X on VMWare.

Personally I like 10.6 as it’s the last version that supported Rosetta, although I guess if you want to run old stuff, you may as well just run 10.4.x in a VM now.  With a copy of Darwin 8.0.1 and three disks you can even boot up the deadmoo image, make an image of another deadmoo disk onto yet another one, then install Darwin onto a much larger disk, boot back into deadmoo, restore your 10.4.1 onto the larger disk, fix permissions, and boot into the larger disk.

phew.

One thing is for sure: it’s a lot of work to get some kind of development machine to mess with WebObjects.  It’s probably easier than buying a G5, but I found yet another one in the States (hence the physical copies of 10.6) and lugged it onto the airplane.  Sigh, the suitcase I bought for the trip broke, with one of the wheels coming off, and as my G5 was over the 50 lb weight limit, I had to pay a $100 USD fee to American Airlines to get my G5 home to Hong Kong.  I also packed my “new” Studio Display incorrectly, so the third ‘resting’ leg snapped. Sigh.

Computer Show – Printers

Oh sure, it's shameless marketing at its finest, but wow, it's hitting the '80s vibe pretty hard.

RIP Gary.

Fun with Docker

Well it’s not really all that fun.

SO… at the start of the year I decided I didn’t want to play site admin all day, and went to a hosted platform.  Things went well for a few months, then things didn’t go well, with constant database issues.

Then we went down hard for over 24 hours.  I was going to move back, but then everything started to work again.  Lately, though, things had been spiraling back down to unusability.

So instead of just making a big VM like I had done before, I thought I’d try using Docker to host my website with a few containers, namely each tier separate.

And oh boy, everyone loves writing about edge-case Docker stuff, but when it comes to actually moving something *INTO* Docker, you’re basically on your own.

So yes, the http-to-https redirect is broken, my categories are all missing, and lots of other stuff is busted.  The superglobalmegacorp.com redirect stuff is missing too; I’ll have to re-create that one after I get more stuff sorted out.

I haven’t given up yet…

Half of the fun was setting up the haproxy container, which in itself wasn’t so bad, although sometimes it wouldn’t pick up any config file changes, so I had to destroy it a few times.  Naturally, once I asked someone to look, it was working fine.
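
One thing that would have saved a few container rebuilds: haproxy can syntax-check a config before anything gets restarted. A sketch, assuming the container is simply named haproxy and the config lives at the official image’s default path:

# validate the config from inside the container before restarting
docker exec haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# if it checks out, restart to force the new config to load
docker restart haproxy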

So for the hell of it, here is my haproxy.cfg


global
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/haproxy.pem
    http-request set-header Host virtuallyfun.com if { hdr(host) -i virtuallyfun.superglobalmegacorp.com }
    http-request set-header Host virtuallyfun.com if { hdr(host) -i superglobalmegacorp.com }
    redirect scheme https code 301 if !{ ssl_fc }
    mode http
    acl host_virtuallyfun hdr(host) -i virtuallyfun.com
    acl host_virtuallyfun hdr(host) -i virtuallyfun.superglobalmegacorp.com
    acl host_virtuallyfun hdr(host) -i superglobalmegacorp.com
    use_backend virtuallyfun if host_virtuallyfun

backend virtuallyfun
    balance leastconn
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    server node1 172.17.0.3:80
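
A quick way to sanity-check the Host rewriting and the https redirect from the outside (the hostnames are just the ones from the config above):

# expect a 301 with a Location: https://... header for the rewritten host
curl -sI -H "Host: superglobalmegacorp.com" http://virtuallyfun.com/ | head -n 5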

I wanted to use Let’s Encrypt to ‘secure’ access to the domains I have, and running certbot manually in a ‘dry run’, I always got this fun and informative error:

NewIdentifier : ACMESharp.AcmeClient+AcmeWebException: Unexpected error
+Response from server:
+ Code: BadRequest
+ Content: {
"type": "urn:acme:error:malformed",
"detail": "Error creating new authz :: DNS name does not have enough labels",
"status": 400
}

Which of course got me absolutely nowhere searching.  I thought it might be Docker screwing things up, so I shut it down, fired up an old-fashioned standalone copy of Apache, and ran the following:

certbot certonly --dry-run --non-interactive --register-unsafely-without-email --agree-tos --expand --webroot --webroot-path /docker/wordpress/html --domain virtuallyfun.com --domain virtuallyfun.superglobalmegacorp.com --domain superglobalmegacorp.com

And got the same result.

I got to the point of absolute frustration, and just decided to forget the dry run altogether; I knew I could run the real thing at least 5 times a day before getting temporarily banned, and maybe I’d get something more useful.

# certbot certonly --non-interactive --register-unsafely-without-email --agree-tos --expand --webroot --webroot-path /var/www/html --domain virtuallyfun.com --domain virtuallyfun.superglobalmegacorp.com --domain superglobalmegacorp.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for virtuallyfun.com
http-01 challenge for virtuallyfun.superglobalmegacorp.com
http-01 challenge for superglobalmegacorp.com
Using the webroot path /var/www/html for all unmatched domains.
Waiting for verification…
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0000_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0000_csr-certbot.pem

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/virtuallyfun.com/fullchain.pem. Your cert
   will expire on 2018-06-26. To obtain a new or tweaked version of
   this certificate in the future, simply run certbot again. To
   non-interactively renew *all* of your certificates, run "certbot
   renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le

Except it actually worked.

Creating the needed haproxy.pem is as simple as:

cd /etc/letsencrypt/live/virtuallyfun.com/
cat fullchain.pem privkey.pem > /docker/haproxy.pem

That puts the needed key along with the certs.  Naturally, when this expires I’ll have to scramble to figure out how I did this.
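
To save the future scramble, the re-assembly can be wired up as a renewal hook; a sketch, assuming the container is named haproxy and using certbot’s deploy-hooks directory, which recent versions run after each successful renewal:

#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh
# re-build the combined cert+key bundle and bounce haproxy
cat /etc/letsencrypt/live/virtuallyfun.com/fullchain.pem \
    /etc/letsencrypt/live/virtuallyfun.com/privkey.pem > /docker/haproxy.pem
docker restart haproxy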

Managing Docker is fun as well. I went ahead and tried out portainer.io, which naturally deploys as a container.  And it can manage remote servers, which I thought was a plus, as that means I could deploy it in my office and simply connect to my server.  But that is where I found out that the config files on Debian are hard-coded to always listen on a local socket, which conflicts with setting the proper JSON file to tell it to listen on both a socket and TCP/IP.  So just edit /etc/systemd/system/docker.service.d/docker.conf and either hard-code it all there, or remove it from there and place it in /etc/docker/daemon.json.
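
For anyone hitting the same wall, this is roughly what it boils down to; a sketch, with the TCP port and listen-everywhere address being illustrative (and you really want a firewall in front of an unauthenticated Docker socket):

# stop systemd from passing -H on the command line, or dockerd
# will refuse to start once daemon.json also declares hosts
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/docker.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
EOF

# declare both listeners in daemon.json instead
cat > /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
EOF

systemctl daemon-reload && systemctl restart docker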

As always documentation is conflicting and all over the place.

My current feelings about docker…

Windows 10 ARM64 on QEMU

(This is a guest post by Antoni Sawicki aka Tenox)

Microsoft is releasing Windows 10 for ARM64 CPUs, and this time, unlike the Windows RT fiasco, there will be full desktop app support, including a dynamic binary translator to allow running existing x86 apps on ARM CPUs, much like FX!32 on Alpha NT or Rosetta on Mac OS X.

The latest Visual Studio updates now bring official ARM/ARM64 support for desktop apps. It’s a little hidden, but here is how to enable it.

Being able to compile Windows ARM apps, I wanted to try to actually run them, but… on what exactly? There are some developer evaluation boards, and apparently someone managed to run it on a Raspberry Pi. Most importantly, however, you can run Windows 10 ARM64 on QEMU. This is some serious Fun With Virtualization!

Windows 10 ARM64 running on QEMU

I’m not claiming to be the first. Clever people have already done it. I just wanted to make it a little easier for the lazier of us. Here is how.

Follow the link above, but skip the shady UUP business in step #3 and download a ready-made ISO instead. You can google the ISO image from windows.cmd and it will take you to this link. You need the rest of the files, like the UEFI firmware and the virtio drivers.

For the even more impatient, here is a ready-to-run image with Windows pre-installed. Because QEMU now comes with DLL HELL, I’m not including it in the archive. You will have to install it separately.

If you want to transfer files in/out of the image, here’s a tip from Pete Batard of Rufus:

Create a folder named, say, “transfer” and add the following option to the launch script:
-hdb fat:rw:transfer

This will create a second FAT32-formatted disk that maps your transfer\
directory to the QEMU virtual machine. In our case, Windows 10 will see it
and make it automatically accessible as “QEMU VVFAT (D:)”. You can even
use this to write files from the VM to the host (though, depending on how
fast Windows flushes its disk cache, they may take a while to appear).
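
For reference, a minimal launch script ends up looking something like this; a sketch only, since the file names, device list, and memory size are assumptions based on common qemu-system-aarch64 usage, not the exact script shipped in the image above:

#!/bin/sh
# boot Windows 10 ARM64 under QEMU's generic "virt" machine
qemu-system-aarch64 \
    -M virt -cpu cortex-a57 -smp 4 -m 4096 \
    -bios QEMU_EFI.fd \
    -device qemu-xhci -device usb-kbd -device usb-tablet \
    -drive file=win10arm64.qcow2,format=qcow2 \
    -hdb fat:rw:transfer \
    -nic user,model=virtio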

Go have fun and port some apps to ARM64 with the free Community Edition of Visual Studio. I’m going to start with Aclock 🙂

Microsoft Editor

(This is a guest post by Antoni Sawicki aka Tenox)

In a recent blog post, Wanted: Console Text Editor for Windows, I lamented the lack of a good console/cmd/PowerShell text editor for Windows. While researching that article I made a rather interesting discovery: there has in fact been a native Windows, 32-bit, console-based text editor, available since the earliest days of NT or even before. But let’s start from…

…in the beginning there was the Z editor, developed by Steve Wood for the TOPS-20 operating system in 1981. Some time after that, Steve sold the source code to Microsoft, where it was ported to MS-DOS by Mark Zbikowski (aka the MZ guy) to become the M editor.

M editor

The DOS-based M editor was included and sold as part of Microsoft C 5.1 (March 1988), together with the OS/2 variant, the MEP editor (perhaps “M Editor Protected-mode”). The official name of M/MEP was simply Microsoft Editor. The same editor was also available earlier (mid-1987) as part of the MS OS/2 SDK under a different name: SDKED. Note that normally SDKED insists on operating in full-screen mode. Michal Necasek generously spent his time and patched it up so that it can run in windowed mode for your viewing pleasure.

SDKED on OS/2

However, my primary interest lies with Windows NT. The NT Design Workbook mentions that in the early days a self-hosting developer workstation included a compiler, some command line tools, and a text editor: MEP. Leaked Windows NT builds sometimes include IDW, the Internal Development Workstation, which also seems to contain a variant of the same editor. In fact some of these tools, including MEP.EXE, can be found on Windows NT pre-release CD-ROMs (late 1991) under MSTOOLS. It was available for both MIPS and 386 as a Win32 native console-based application.

MEP on Windows NT Pre-Release

The editor was later also available for the Alpha, i386, MIPS, and PowerPC processors in various official Windows NT SDKs from 3.1 to 4.0. It survived up to July 2000, being last included in the Windows 2000 Platform SDK.

M editor
M editor from NT SDK 3.1 running on Alpha AXP
MEP from NT SDK on Windows NT 4.0

The Win32 version of MEP also comes with an icon and a file description, which calls it “M Editor” in the NT 3.1 SDK and later “Microsoft Extensible Editor”.

From a historical perspective, it was rather unfortunate that this gem was buried in the SDK rather than being included on the Windows NT release media. However, I can understand that the editor wasn’t very user friendly or intuitive compared to, say, edit.com from MS-DOS. It came from a different era and, similar to vi or Emacs, didn’t have “PC user friendly” key bindings or menus.

But that’s not the end of the story. The editor of many names survives to this day, at least unofficially. If you dig hard enough, you can find it in the OpenNT 4.5 build. For convenience, this and other builds, including the DOS M, OS/2 MEP and SDKED, and NT SDK MEP, can be downloaded here.

Digging through the archive, I found not one but two copies of the editor’s code lurking in the source tree: one under the name MEP inside the \private\utils\mep\ folder, and a second copy under the name Z (the original editor’s name on TOPS-20) in the \private\sdktools\z folder. MEP was included in the Platform SDK, while Z was only available as part of IDW.

Doing a few diffs, I was able to get some insight into the differences. It looks like MEP was initially ported from OS/2 to NT and bears some signs of being an OS/2 app. The Z editor, on the other hand, is a few years newer and has many improvements and bug fixes over MEP. It also uses some NT-only features.

Sadly, the internal Z editor was never released anywhere outside of Redmond. All the versions outlined so far have copyrights only up to 1990, while Z clearly has a copyright from 1995. Being a few years newer and more native to NT, I wanted to see if a build could be made. With some effort I was able to separate it from the original source tree and compile it standalone. The source being pretty clean, I was able to compile it for all NT hardware platforms, including x64, where it runs comfortably on Windows 10. You can download the Z editor for Windows here.

Z editor on flashy Windows 7 x64

But how do you even use this thing?

The Platform SDK contains pretty solid documentation in tools.chm.

Here is a handy cheat sheet:

Last but not least, there is a modern open-source re-implementation of the Z editor, named the K editor. It’s written from scratch in C++ and Lua and has nothing to do with the original MEP source code. K is built only for x64 using MinGW. There are no ready-to-run binaries, so I made a fork and a build.

K editor on Windows 10 x64

The author Kevin Goodwin has kindly included copies of the original documentation if you actually want to learn how to use this editor.

Building MAME 0.1 for MS-DOS / DJGPP

So as promised, a while back I built a GCC 2.7.2.3 / Binutils 2.8.1 cross-compiler toolchain suitable for building old Allegro-based programs, such as MAME.  Of course, the #1 reason why I’d want such a thing is that being able to do native builds on modern machines means that things compile in seconds, rather than an hour-plus compiling inside of DOSBox.

Why not use a more up-to-date version of GCC/Binutils?  Well, the problem is that the pre-EGCS tools relied on macro and inline-assembly directives that were dumped along the way, so later versions simply will not assemble any of the later video code in Allegro, and a lot of the C needs updating too.  It was easier to just get the older toolchain working.

It took a bit of messing around, building certain portions within each step of the tools, but after a while I had a satisfactory chain capable of building what I needed.

So for our fun, we will need my cross DJGPP v2 toolchain for Win32, MAME 0.1, Allegro 3.12, and the Synthetic Audio Library (SEAL) Development Kit 1.0.7.

Allegro is already pre-built in my cross-compiler toolchain; all I needed to add was SEAL, with only one change: 1.0.7 expects an EGCS compiler, which this is not, so the -mpentium flag won’t work; however, -m486 works fine.
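
The swap itself is a one-liner; a sketch, assuming the flag lives in SEAL’s makefile (the exact file name in the SEAL source may differ):

# replace the EGCS-only CPU flag with one GCC 2.7.2.3 understands
sed -i 's/-mpentium/-m486/' Makefile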

Otherwise, in MAME all I did was alter some include paths to pick up both Allegro and SEAL, and in no time I had an executable.  And the best part is that, checking via DOSBox, it runs, with sound!

MAME 0.1 on DOSBox PACMAN hiding

Thankfully, MAME has been really good about preserving prior releases, along with their source trees, and it’s pretty cool to be able to rebuild this using era-correct vintage tools.  I can’t stress enough how much more tolerable it is to build on faster equipment.

fread and fwrite demystified: stdio on UNIX V7

(This is a guest post by xorhash.)

1. Introduction

Did I say I’m done with UNIX Seventh Edition (V7)? How silly of me; of course I’m not. V7 is easy to study, after all.

Something that’s always bothered me about the stdio.h primitives fread() and fwrite() is their weak guarantees about what they actually do. Is a short read or write “normal” in the sense that I should normally expect it? While this gives no answer about modern-day operating systems, a look at V7 may enlighten me about the historical precedent.

As an aside: it’s worth noting that the stdio.h functions are some of the few that require a header. It was common historical practice not to declare functions in headers; just see crypt(3) as an example.

I will first display the man page, then ask the questions I want to answer, then look at the implementation and finally use that gained knowledge to answer the questions.

2. Into the Man Page

The man page for fread() and fwrite() is rather terse. Modern-day man pages for those functions are equally terse, though, so this is not exactly a novelty of age. Here’s what it reads:

NAME

fread, fwrite – buffered binary input/output

SYNOPSIS

#include <stdio.h>

fread(ptr, sizeof(*ptr), nitems, stream)
FILE *stream;

fwrite(ptr, sizeof(*ptr), nitems, stream)
FILE *stream;

DESCRIPTION

Fread reads, into a block beginning at ptr, nitems of data of the type of *ptr from the named input stream. It returns the number of items actually read.

Fwrite appends at most nitems of data of the type of *ptr beginning at ptr to the named output stream. It returns the number of items actually written.

SEE ALSO

read(2), write(2), fopen(3), getc(3), putc(3), gets(3), puts(3), printf(3), scanf(3)

DIAGNOSTICS

Fread and fwrite return 0 upon end of file or error.

So there are the following edge cases that are interesting:

  • In fread(): If sizeof(*ptr) is greater than the entire file, what happens?
  • If sizeof(*ptr) * nitems overflows, what happens?
  • Is the “number of items actually read/written” guaranteed to be the number of items that can be read/written (until either EOF or I/O error)?
  • Is the “number of items actually written” guaranteed to have written every item in its entirety?
  • What qualifies as error?

3. A Look at fread()

Note: All file paths for source code are relative to /usr/src/libc/stdio/ unless noted otherwise. You can read along at the TUHS website.

rdwr.c implements fread(). fread() is simple enough; it’s just a nested loop. The outer loop runs nitems times; it sets the number of bytes to read (sizeof(*ptr)) and runs the inner loop. The inner loop calls getc() on the input FILE *stream and writes each byte to *ptr until either getc() returns a value less than 0 or all bytes have been read.

/usr/include/stdio.h implements getc(FILE *p) as a C preprocessor macro. If there is still data in the buffer, it returns the next character and advances the buffer by one. Interestingly, *(p)->_ptr++&0377 is used to return the character, despite _ptr being a char *. I’m not sure why that &0377 (&0xFF) is there; presumably it guards against sign extension, so that a 0377 byte isn’t returned as -1 and confused with EOF. If there is no data in the buffer, it instead returns _filbuf(p).

filbuf.c implements _filbuf(). This function is a lot more complex than the others so far. It begins with a check for the _IORW flag and, if set, sets the _IOREAD flag as well. It then checks if _IOREAD is not set or if _IOSTRG is set, and returns EOF (defined as -1 in stdio.h) if so. These all seem rather inconsequential to me. I can’t make heads or tails of _IOSTRG, however, but it seems irrelevant; _IOSTRG is only ever set internally in sprintf and sscanf for temporary internal FILE objects. After those two flag checks, _filbuf() allocates a buffer into iop->_base, which seems to be the base pointer of the buffer. If the flag _IONBF is set, which happens when setbuf() is used to switch to unbuffered I/O, a temporary, static buffer is used instead. Then read() is called, requesting either 1 byte if unbuffered I/O is requested, or BUFSIZ bytes. If read() returned 0, the FILE is flagged as end-of-file and EOF is returned by _filbuf(). If read() returned a value below 0, the FILE is flagged as error and EOF is returned by _filbuf(). Otherwise, the first character that has been read is returned by _filbuf() and the buffer pointer incremented by one.

According to its man page, read() only returns 0 on end-of-file. It can also return -1 under “many conditions”, namely “physical I/O errors, bad buffer address, preposterous nbytes, file descriptor not that of an input file”.

As an aside, BUFSIZ still exists today. ISO C11 § 7.21.2 no. 9 dictates that BUFSIZ must be at least 256. V7 defines it as 512 in stdio.h. One is inclined to note that on V7, a filesystem block was understood to be 512 bytes in length, so this was presumably chosen for efficient I/O buffering.

4. A Look at fwrite()

rdwr.c also implements fwrite(). fwrite() is effectively the same as fread(), except the inner loop uses putc(). After every inner loop, a call to ferror() is made. If there was indeed an error, the outer loop is stopped.

/usr/include/stdio.h implements putc(int x, FILE *p) as a C preprocessor macro. If there is still room in the buffer, the write happens into the buffer. Otherwise, _flsbuf() is called.

flsbuf.c implements _flsbuf(int c, FILE *iop). This function, too, is more complex than the preceding ones, but becomes more obvious after reading _filbuf(). It starts with a check if _IORW is set, and if so, it’ll set _IOWRT and clear the EOF flag. Then it branches into two major branches: the _IONBF branch without buffering, which is a straight call to write(), and the other branch, which allocates a buffer if none exists already, or otherwise calls write() if the buffer is full. If write() returned less than expected, the error flag is set and EOF returned. Otherwise, it returns the character that was written.

According to its man page, write() returns the number of characters (bytes) actually written; a return value not equal to the number requested “should be regarded as an error”. With only a cursory glance over the code, this appears to happen for similar reasons as with read(), which is either physical I/O error or bad parameters.

5. Conclusions

In fread(): If sizeof(*ptr) is greater than the entire file, what happens?
On this under-read, fread() will end up reading the entire file into the memory at ptr and still return 0. The I/O happens byte-wise via getc(), filling up the buffer until getc() returns EOF. However, it will not return EOF until a read() returns 0 on EOF or -1 on error. This result may be meaningful to the caller.

If sizeof(*ptr) * nitems overflows, what happens?
No overflow can happen because there is no multiplication. Instead, two loops are used, which avoids the overflow issue entirely. (If there are strict filesystem constraints, however, it may be de-facto impossible to read enough bytes that sizeof(*ptr) * nitems overflows. And of course, there’s no way you could have enough RAM on a PDP-11 for the result to actually fit into memory.)

Is the “number of items actually read/written” guaranteed to be the number of items that can be read/written (until either EOF or I/O error)?
Partially: Both fread() and fwrite() short-circuit on error. This causes the number of items that have actually been read or written successfully to be returned. The only relevant error condition is filesystem I/O error. Due to the byte-wise I/O, it’s possible that there was a partial read or write for the last element, however. Therefore, it would be more accurate to say that the “number of items actually read/written” is guaranteed to be the number of non-partial items that can be read/written. A short read or short write is an abnormal condition.

Is the “number of items actually written” guaranteed to have written every item in its entirety?
No, it isn’t. A partial write is possible. If a series of structs is written and then to be read out again, however, this is not a problem: fread() and fwrite() only return the count of full items read or written. Therefore, the partial write will not cause a partial read issue. If a set of bytes is written, this is an issue: there will be incomplete data, possibly to be parsed by the program. It is therefore preferable to write (and especially read) arrays of structs rather than arrays of bytes. (From a modern-day perspective, this is horrendous design because it means data files are not portable across platforms.)

What qualifies as error?
Effectively, only a physical I/O error or a kernel bug. Short fread() or fwrite() return values are abnormal conditions. I’m not sure if there is the possibility that the process got a signal and the current read() or write() ends up writing nothing before the EINTR; this seems to be more of a modern-day problem than something V7 concerned itself with.

2ine the OS/2 emulator

So this is really super cool! Ryan C. Gordon has written a Wine-like program to run OS/2 programs!

Using 32-bit Linux and some native libraries, 2ine can load up an LX (32-bit) executable and try to run it under Linux, much in the same way that Wine can run Windows programs.  And yes, it’ll run EMX-built stuff.  Although keep in mind that the original Microsoft-based languages, programs, and tools are all 16-bit.  After the whole Windows 3.0 thing and the split of Microsoft from the OS/2 project, all their tools are either 16-bit or in the 32-bit LE format, which IBM had dumped in favour of the LX format once OS/2 2.0 shipped.

You can read about his incredible progress, and all the trials and tribulations of running OS/2 programs, over on his Patreon page here, along with the craziness that is thunking back and forth to 16-bit space for the old VIO calls, which were never updated to 32-bit in that transition phase where a good chunk of OS/2 was never moved off 16-bit.

Attempting to run anything 16-bit or LE will give you:

./lx_loader CL.EXE
not an OS/2 LX module

But let’s try my crazy Win32-hosted EMX 0.8h cross-compiler!

C:\emx\demo\dhry>gcc -v dhyrstone.c -o dhyrstone.exe
gcc version 2.5.8
cpp -lang-c -v -undef -D__GNUC__=2 -D__GNUC_MINOR__=5 -D__32BIT__ -D__EMX__ -Di386 -D__32BIT__ -D__EMX__ -D__i386__ -D__32BIT__ -D__EMX__ -D__i386 -Asystem(unix) -Asystem(emx) -Acpu(i386) -Amachine(i386) dhyrstone.c C:\Temp/cca13032
GNU CPP version 2.5.8 (80386, BSD syntax)
#include "..." search starts here:
#include <...> search starts here:
/usr/local/include
D:/pcem/building/MinGW/msys/1.0/local/emx/include
/emx/include
/usr/include
End of search list.
cc1 C:\Temp/cca13032 -quiet -dumpbase dhyrstone.c -version -o C:\Temp/ccb13032
GNU C version 2.5.8 (80386, BSD syntax) compiled by GNU C version 5.1.0.
as -o C:\Temp/ccc13032 C:\Temp/ccb13032
ld -o dhyrstone.exe /emx/lib/crt0.o -L/emx/lib C:\Temp/ccc13032 -lgcc -lc -lgcc -lemxst -los2 -lemx2

And now running that on Linux…

root@alpharacks:/usr/src/2ine-4a8318f4056f# file dhyrstone.exe
dhyrstone.exe: MS-DOS executable, LX for OS/2 (console) i80386, emx 0.8c
root@alpharacks:/usr/src/2ine-4a8318f4056f# ./lx_loader dhyrstone.exe
Dhrystone(1.1) time for 5000000 passes = 3
This machine benchmarks at 1666666 dhrystones/second

You’d never know that this was an OS/2 program, if I didn’t tell you.

I tried the old ’87 Infocom interpreter, and it’ll run great too!

root@alpharacks:/usr/src/2ine-4a8318f4056f# file infocom.exe
infocom.exe: MS-DOS executable, LX for OS/2 (console) i80386, emx 0.8c

root@alpharacks:/usr/src/2ine-4a8318f4056f# ./lx_loader infocom.exe advent.z3

At End Of Road Score: 36/0
Welcome to Adventure! Do you need instructions? (y/n) >n

ADVENTURE
A Modern Classic
Based on Adventure by Willie Crowther and Don Woods (1977)
And prior adaptations by David M. Baggett (1993), Graham Nelson (1994), and
others
Adapted once more by Jesse McGrew (2015)
Release 1 / Serial number 151001 / ZILF 0.7 lib J3

At End Of Road
You are standing at the end of a road before a small brick building. Around you
is a forest. A small stream flows out of the building and down a gully.

At End Of Road Score: 36/0
>

Again, it works so well it’s amazing!

You can find the 2ine source over on icculus.org here.  I had to tweak the heck out of the CMakeLists.txt to get it to build, and since I was interested in the command line, I ended up disabling all the SDL / PM stuff, and made sure I had the ‘wide/unicode’ version of ncurses installed.
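
For what it’s worth, once the CMakeLists.txt has been beaten into shape, the build itself is a stock CMake affair; a sketch, with the ncurses package name being the Debian one and the options assumed to be edited directly in the file rather than passed as -D flags:

# the wide-character ncurses headers (Debian/Ubuntu package name)
apt-get install libncursesw5-dev

# standard out-of-tree CMake build
mkdir build && cd build
cmake ..
make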

I don’t think there really were any killer 32-bit OS/2 applications, but with clean-room versions of:

  • doscalls.dll
  • kbdcalls.dll
  • msg.dll
  • nls.dll
  • quecalls.dll
  • sesmgr.dll
  • tcpip32.dll
  • viocalls.dll

Not to mention, being able to call into Linux DLLs while using ‘clean’ OS/2 DLLs would let you embrace and extend OS/2… or maybe even let you build the proverbial fantasy of both a RISC and a 64-bit OS/2…