I’ve been using Sandboxie for a long while to run all those questionable downloads on Windows. It’s a great lightweight sandbox (as the name implies) for running random downloads or even visiting questionable websites, as Sandboxie does a great job of isolating processes and their filesystem access.
It’s never been all that cheap, but I felt it was worth it. I went to check how much it costs these days, as the conversation had come up on Discord, and it turns out that Sophos had bought Sandboxie and opened up the source code!
The downside is that there will be no further official development of the product. So I guess at some point I’ll have to break down and get a signing cert and re-build it if I want to keep using it.
Last version is locally mirrored here, as I understand it’ll be deleted soon enough.
The following is a guest post by NCommander of SoylentNews fame!
For those who’ve been long-time readers of SoylentNews, it’s not exactly a secret that I have a personal interest in retro computing and documenting the history and evolution of the Personal Computer. About three years ago, I ran a series of articles about restoring Xenix 2.2.3c, and I’m far overdue on writing a new one. For those who do programming work of any sort, you’ll also be familiar with “Hello World”, the first program most, if not all, programmers write in their careers.
A sample hello world program might look like the following:
#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}
Recently, I was inspired to investigate the original HELLO.C for Windows 1.0, a 125-line behemoth that was talked about in hushed tones. To that end, I recorded a video on YouTube that provides a look into the world of programming for Windows 1.0, and then tests the backward compatibility of Windows through to Windows 10.
Before we even get into the topic of HELLO.C though, there’s a fair bit to be said about these ancient versions of Windows. Windows 1.0, like all pre-95 versions, required DOS to be pre-installed. One quirk however with this specific version of Windows is that it blows up when run on anything later than DOS 3.3. Part of this is due to an internal version check which can be worked around with SETVER. However, even if this version check is bypassed, there are supposedly known issues with running COMMAND.COM. To reduce the number of potential headaches, I decided to simply install PC-DOS 3.3, and give Windows what it wants.
You might notice I didn’t say Microsoft DOS 3.3. The reason is that DOS didn’t exist as a standalone product at the time. Instead, system builders would license the DOS OEM Adaptation Kit and create their own DOS such as Compaq DOS 3.3. Given that PC-DOS was built for IBM’s own line of PCs, it’s generally considered the most “generic” version of the pre-DOS 5.0 versions, and this version was chosen for our base. However, due to its age, it has some quirks that would disappear with the later and more common DOS versions.
PC DOS 3.3 loaded just fine in VirtualBox and — with the single 720 KiB floppy being bootable — immediately dropped me to a command prompt. Likewise, FDISK and FORMAT were available to partition the hard drive for installation. Each individual partition is limited, however, to 32 MiB. Even at the time, this was somewhat constrained and Compaq DOS was the first (to the best of my knowledge) to remove this limitation. Running FORMAT C: /S created a bootable drive, but something oft-forgotten was that IBM actually provided an installation utility known as SELECT.
SELECT’s obscurity primarily lies in its non-obvious name and usage, and the fact that it isn’t actually needed to install DOS; it’s sufficient to simply copy the files to the hard disk. However, SELECT does create CONFIG.SYS and AUTOEXEC.BAT, so it’s handy to use. Compared to later DOS setup programs, SELECT requires a relatively arcane invocation, with the target installation folder, keyboard layout, and country code entered as arguments, and it simply errors out if these are incorrect. Once the correct runes are typed, SELECT formats the target drive, copies DOS, and finishes the installation.
Without much fanfare, the first hurdle was crossed, and we’re off to installing Windows.
Windows 1.0 Installation/Mouse Woes
With DOS installed, it was on to Windows. Compared to the minimalist SELECT command, Windows 1.0 comes with a dedicated installer and a simple text-based interface. This bit of polish was likely due to the fact that most users would be expected to install Windows themselves instead of having it pre-installed.
Another interesting quirk was that Windows could be installed to a second floppy disk due to the rarity of hard drives of the era, something that we would see later with Microsoft C 4.0. Installation went (mostly) smoothly, although it took me two tries to get a working install due to a typo. Typing WIN brought me to the rather spartan interface of Windows 1.0.
Although functional, what was missing was mouse support. Due to its age, Windows predates the mouse as a standard piece of equipment and predates the PS/2 mouse protocol; only serial and bus mice were supported out of the box. There are two ways to solve this problem:
The first, which is what I used, involves copying MOUSE.DRV from Windows 2.0 to the Windows 1.0 installation media, and then reinstalling, selecting the “Microsoft Mouse” option from the menu. Re-installation is required because WIN.COM is statically linked as part of installation with only the necessary drivers included; there is no option to change settings afterward. The SDK documentation details the static linking process, and how to run Windows in “slow mode” for driver development, but the end result is the same. If you want to reconfigure, you need to re-install.
The second option, which I was unaware of until after producing my video, is to use the PS/2 release of Windows 1.0. Like DOS of the era, Windows was licensed to OEMs who could adapt it to their individual hardware. IBM did in fact do so for their then-new PS/2 line of computers, adding PS/2 mouse support in the process. Despite being built for the PS/2 line, this version of Windows is known to run on AT-compatible machines.
Regardless, the second hurdle had been passed, and I had a working mouse. This made exploring Windows 1.0 much easier.
The Windows 1.0 Experience
If you’re interested in trying Windows 1.0, I’d recommend heading over to PCjs.org and using their browser-based emulator to play with it, as it already has working mouse support and doesn’t require acquiring 35-year-old software. Likewise, there are numerous write-ups about this version, but I’d be remiss if I didn’t spend at least a little time talking about it, at least at a technical level.
Compared to even the slightly later Windows 2.0, Windows 1.0 is much closer to DOSSHELL than any other version of Windows, and is essentially a graphical bolt-on to DOS, although through deep magic it is capable of cooperative multitasking. This was done entirely with software trickery, as Windows was written to run on the original 8086, a processor with no hardware support for multitasking or memory protection. COMMAND.COM could be run as a text-based application; however, most DOS applications would launch a full-screen session and take control of the UI.
This is likely why Windows 1.0 has issues on later versions of DOS as it’s likely taking control of internal structures within DOS to perform borderline magic on a processor that had no concept of memory protection.
Another oddity is that this version of Windows doesn’t actually have “windows” per se. Instead, applications are tiled, with only dialogue boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it’s clear from the API that they were at least planned for at some point: most notably, the CreateWindow() function call has arguments for x and y coordinates.
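For illustration, here’s roughly what that call looks like in the SDK sample (paraphrased from memory rather than quoted verbatim); the position and size arguments are all present, even though the tiling in 1.0 effectively ignores them:

hWnd = CreateWindow((LPSTR)szAppName,       /* registered class name       */
                    (LPSTR)szMessage,       /* window caption              */
                    WS_TILEDWINDOW,         /* tiled, the only real option */
                    0,                      /* x  - ignored when tiling    */
                    0,                      /* y  - ignored when tiling    */
                    0,                      /* cx - ignored when tiling    */
                    0,                      /* cy - ignored when tiling    */
                    (HWND)NULL,             /* no parent window            */
                    (HMENU)NULL,            /* use the class menu          */
                    (HANDLE)hInstance,      /* owning instance             */
                    (LPSTR)NULL);           /* no extra parameters         */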
My best guess is that Microsoft wished to avoid the wrath of Apple, who had gone on a legal warpath against any company that too closely copied the UI of the then-new Apple Macintosh. Compared to later versions, there are also almost no included applications. The most notable ones that were included are NOTEPAD, PAINT, WRITE, and CARDFILE.
While NOTEPAD is essentially unchanged from its modern version, WRITE could best be considered a stripped-down version of Word, and would remain a mainstay until Windows 95, where it was replaced with WordPad. CARDFILE, likewise, was a digital Rolodex. It remained part of the default install until Windows 3.1, and remained on the CD-ROM for 95, 98, and ME before disappearing entirely.
PAINT, on the other hand, is entirely different from the Paintbrush application that would become a mainstay. Specifically, it’s limited to monochrome graphics, and files are saved in MSP format. Part of this is due to limitations of the Windows API of the era: for drawing bitmaps to the screen, Windows provided Device Independent Bitmaps, or DIBs. These had no concept of a palette and were limited to the 8 colors that Windows uses as part of the EGA palette. Color support appears to have been a late addition to Windows, and seemingly wasn’t fully realized until Windows 3.0.
Paintbrush (and the later and confusingly-named Paint) was actually a third party application created by ZSoft which had DOS and Windows 1.0 versions. ZSoft Paintbrush was very similar to what shipped in Windows 3.0 and used a bit of technical trickery to take advantage of the full EGA palette.
With that quick look completed, let’s get back to actually building HELLO.C, and that involved getting the SDK installed.
The Windows SDK and Microsoft C 4.0
Getting the Windows SDK set up is something of an experience. Most of Microsoft’s documentation for this era has been lost, but the OS/2 Museum has scanned copies of some of the reference binders, and the second disk in the SDK has both a README file and an installation batch file which, between them, contain most of the information needed.
Unlike later SDK versions, it was the responsibility of the programmer to provide a compiler. Officially, Microsoft supported the following tools:
Microsoft Macro Assembler (MASM) 4
Microsoft C 4.0 (not to be confused with MSC++4, or Visual C++)
Microsoft Pascal 3.3
Unofficially (and unconfirmed), there were versions of Borland C that could supposedly also be used, although this was untested and appears not to have been documented beyond some notes on USENET. More interestingly, all of the above tools were compilers for DOS, and didn’t have any specific support for Windows. Instead, a replacement linker was shipped in the SDK that could create Windows 1.0 “NE” New Executables, an executable format that would also be used by early OS/2 before being replaced by the Portable Executable (PE) format on Windows and the Linear Executable (LX) format on OS/2.
For the purposes of compiling HELLO.C, Microsoft C 4.0 was installed. Like Windows, MSC could be run from floppy disk, albeit with a lot of disk swapping. No installer is provided; instead, the surviving PDFs have several pages of COPY commands combined with edits to AUTOEXEC.BAT and CONFIG.SYS for hard drive installation. It was also at this point that I installed SLED, a full-screen editor, as DOS 3.3 only shipped with EDLIN; EDIT wouldn’t appear until DOS 5.0.
After much disk feeding and some troubleshooting, I managed to compile a quick and dirty Hello World program for DOS. One other interesting quirk of MSC 4.0 was that it did not include a standalone assembler; MASM was a separate retail product at the time. With the compiler sorted, it was time for the SDK.
Fortunately, an installation script is provided. Like SELECT, it required listing out a bunch of folders, but otherwise was simple enough to use. For reasons that probably only made sense in 1985, both the script and the README file were on Disk 2, and not Disk 1. This was confirmed not to be a labeling error, as the script immediately asks for Disk 1 to be inserted.
The install script copies files from four of the seven disks before returning to a command line. Disk 5 contains the debug build of Windows, which is roughly equivalent to a checked build of modern Windows. Disks 6 and 7 have sample code, including HELLO.C.
With the final hurdle passed, it wasn’t too hard to get a compiled HELLO.EXE.
Dissecting HELLO.C
I’m going to go through these at a high level; my annotated hello.c goes into much more detail on all of these points.
General Notes
Now that we can build it, it’s time to take a look at what actually makes up the nuts and bolts of a 16-bit Windows application. The first major difference, simply due to age, is that HELLO.C is written in K&R C, as it pre-dates the ANSI C standard. It’s also clear that certain conventions weren’t commonplace yet: for example, windows.h lacks inclusion guards.
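As a quick illustration of the dialect (a generic sketch, not code taken from HELLO.C), a K&R-style definition declares its parameter types between the argument list and the body, and a pre-ANSI header like windows.h had nothing resembling a modern inclusion guard:

/* K&R-style function definition: parameter types follow the declarator. */
int add(a, b)
int a;
int b;
{
    return a + b;
}

/* The kind of inclusion guard windows.h of the era lacked, meaning
   including it twice was an error: */
#ifndef WINDOWS_H
#define WINDOWS_H
/* ...declarations... */
#endif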
NEAR and FAR pointers
long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);
Oh boy, the bane of anyone coding in real mode, near and far pointers are a “feature” that many would simply like to forget. The difference is seemingly simple: a near pointer is nearly identical to a standard pointer in C, except it refers to memory within a known segment, while a far pointer is a pointer that includes the segment selector. Clear, right?
Yeah, I didn’t think so. To actually understand what these are, we need to segue into the 8086’s 20-bit memory map. Internally, the 8086 was a 16-bit processor, and thus could directly address 2^16 bytes of memory at a time, or 64 kilobytes per segment. Various tricks were used to break the 16-bit memory barrier, such as bank switching or, in the case of the 8086, segmentation.
Instead of making all 20 bits directly accessible, memory addresses are divided into a segment, which forms the base of a given pointer, and an offset from that base; the physical address is the segment shifted left four bits plus the offset, which allows the full 1 MiB address space to be reached. In effect, the 8086 gave four independent windows into system memory through the use of the Code Segment (CS), Data Segment (DS), Stack Segment (SS), and the Extra Segment (ES).
Near pointers thus are used in cases where data or a function call is in the same segment and only contain the offset; they’re functionally identical to normal C pointers within a given segment. Far pointers include both segment and offset, and the 8086 had special opcodes for using these. Of note is the far call, which automatically pushed and popped the code segment for jumping between locations in memory. This will be relevant later.
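To make the arithmetic concrete, here’s a small modern-C sketch (illustrative, not era code) of how a segment:offset pair maps onto a 20-bit physical address; a far pointer carries both halves, while a near pointer carries only the offset:

#include <stdio.h>

/* physical = (segment << 4) + offset, yielding a 20-bit address */
unsigned long physical_address(unsigned int segment, unsigned int offset)
{
    return ((unsigned long)segment << 4) + (unsigned long)offset;
}

int main(void)
{
    /* 0xB800:0x0000 is the classic CGA text-mode buffer */
    printf("B800:0000 -> %05lX\n", physical_address(0xB800, 0x0000));
    /* a different segment:offset pair can alias the same physical byte */
    printf("B000:8000 -> %05lX\n", physical_address(0xB000, 0x8000));
    return 0;
}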
HelloWndProc is a forward declaration for the Hello Window callback, a standard feature of Windows programming. Callback functions always had to be declared FAR as Windows would need to load the correct segment when jumping into application code from the task manager. Hence the far declaration. Windows 1.0 and 2.0, in addition, had other rules we’ll look at below.
WinMain Declaration:
int PASCAL WinMain( hInstance, hPrevInstance, lpszCmdLine, cmdShow )
HANDLE hInstance, hPrevInstance;
LPSTR lpszCmdLine;
int cmdShow;
PASCAL Calling Convention
Windows API functions are all declared with the PASCAL calling convention, also known as STDCALL on modern Windows. Under normal circumstances, the C programming language has a default calling convention (known as CDECL) which primarily relates to how the stack is cleaned up after a function call. In CDECL-declared functions, it’s the responsibility of the calling function to clean the stack. This is necessary for variadic functions (that is, functions that take a variable number of arguments) to work, as the callee won’t know how many arguments were pushed onto the stack.
The downside to CDECL is that it requires additional prologue and epilogue instructions for each and every function call, thereby slowing down execution and increasing disk space requirements. Conversely, the PASCAL calling convention leaves cleanup to the called function, which usually only needs a single opcode to clean the stack at function end. It was likely due to execution speed and disk space concerns that Windows standardized on this convention (and in fact still uses it on 32-bit Windows).
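As a sketch of how the two conventions look in source of the era, using the cdecl and pascal keywords Microsoft’s compilers provided (declarations only; it won’t build with a modern compiler, and the CDECL side is what makes variadic functions like printf possible at all):

int cdecl  AddThree(int a, int b, int c);                  /* caller pops the arguments */
long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);  /* callee pops with RET n    */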
hPrevInstance
if (!hPrevInstance) {
    /* Call initialization procedure if this is the first instance */
    if (!HelloInit( hInstance ))
        return FALSE;
} else {
    /* Copy data from previous instance */
    GetInstanceData( hPrevInstance, (PSTR)szAppName, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szAbout, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szMessage, 15 );
    GetInstanceData( hPrevInstance, (PSTR)&MessageLength, sizeof(int) );
}
hPrevInstance has been a vestigial organ in modern Windows for decades. It’s set to NULL on program start, and has no purpose in Win32. Of course, that doesn’t mean it was always meaningless. Applications on 16-bit Windows existed in a general soup of shared address space. Furthermore, Windows didn’t immediately reclaim memory that was marked unused. Applications thus could have pieces of themselves remain resident beyond the lifespan of the application.
hPrevInstance was a pointer to these previous instances. If an application still happened to have its resources registered to the Windows Resource Manager, it could reclaim them instead of having to load them fresh from disk. hPrevInstance was set to NULL if no previous instance was loaded, thereby instructing the application to reload everything it needs. Resources are registered with a global key so trying to register the same resource twice would lead to an initialization failure.
I’ve also gotten the impression that resources could be shared across applications although I haven’t explicitly confirmed this.
Local/Global Memory Allocations
NOTE: Mostly cribbed from Raymond Chen’s blog, a great read for why Windows works the way it does.
Another concept that’s essentially gone is that memory allocations were classified as either local to an application or global. Due to the segmented architecture, applications have multiple heaps: a local heap that is initialized with the program and lives in the local data segment, and a global heap, which requires a far pointer to access.
Every executable and DLL got its own local heap, but global heaps could be shared across process boundaries, and as best I can tell, weren’t automatically deallocated when a process ended. HEAPWALK could be used to see who allocated what and find leaks in the address space. It could also be combined with SHAKER, which rearranged blocks of memory in an attempt to shake loose bugs. This is similar to more modern-day tools like valgrind on Linux, or Microsoft’s application testing tools.
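As a sketch of how the two heaps were used through the 16-bit API (the function names survive into Win32, but the near/far distinction is the point here), an allocate-and-lock pair looked something like this:

#include <windows.h>

void HeapDemo( void )
{
    /* Local heap: lives in the application's own data segment, so
       locking the handle yields a near pointer (PSTR). */
    HANDLE hLocal  = LocalAlloc( LMEM_MOVEABLE, 128 );
    PSTR   pszNear = LocalLock( hLocal );

    /* Global heap: the allocation gets its own segment, so locking
       yields a far pointer (LPSTR) usable from any data segment. */
    HANDLE hGlobal = GlobalAlloc( GMEM_MOVEABLE, 4096 );
    LPSTR  lpszFar = GlobalLock( hGlobal );

    /* ... use pszNear and lpszFar ... */

    LocalUnlock( hLocal );
    LocalFree( hLocal );
    GlobalUnlock( hGlobal );
    GlobalFree( hGlobal );
}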
MakeProcInstance
Oh boy, this is a real stinker, and an entirely unnecessary one at that. MakeProcInstance didn’t even make it to Windows 3.1, and its entire existence is because Microsoft forgot details of their own operating environment. To explain, we’re going to need to dig a bit deeper into segmented mode programming.
MakeProcInstance’s purpose was to register a function as suitable for use as a callback. Only functions that have been passed through MakeProcInstance, or declared as an EXPORT in the module definition file, can be safely called across process boundaries. The reason for this is that Windows needs to register the code segment and data segment in a global store to make the function call safely. Remember, each application had its own local heap, which lived in its own selector in DS.
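As a sketch of the pattern (paraphrased, with About, ABOUTBOX, and lpprocAbout standing in for the sample’s names rather than quoting HELLO.C verbatim), a dialog callback would be wrapped before being handed to Windows, and freed afterwards:

FARPROC lpprocAbout;

lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );
DialogBox( hInstance, MAKEINTRESOURCE(ABOUTBOX), hWnd, lpprocAbout );
FreeProcInstance( lpprocAbout );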
In real mode, doing a CALL FAR to jump to a far pointer automatically pushed and popped the code segment as needed, but the data segment was left unchanged. As such, a mechanism was required to store the additional information needed to find the local heap. So far, this all sounds relatively reasonable.
The problem is that 16-bit Windows has this as an invariant: DS = SS …
If you’re a real mode programmer, that might make it clear where I’m going with this. The Stack Segment selector is used to denote where in memory the stack is living. SS also got pushed to the stack during a function call across process boundaries along with the previous SP. You might begin to see why MakeProcInstance becomes entirely unnecessary.
Instead of needing a global registration system for function calls, an application could just look at the stack base pointer (BP) and retrieve the previous SS from there. Since SS = DS, the previous data segment was in fact already saved, and no registration is required, just a change to how Windows handles function prologues and epilogues. This was actually figured out by a third party, and a tool called FixDS was released by Michael Geary that rewrote function prologues to do exactly what I just described. Microsoft eventually incorporated his fix directly into Windows, and MakeProcInstance disappeared as a necessity.
Other Oddities
From Raymond Chen’s blog and other sources, one interesting aspect of 16-bit Windows was it was actually designed with the possibility that applications would have their own address space, and there was talk that Windows would be ported to run on top of XENIX, Microsoft’s UNIX-based operating system. It’s unclear if OS/2’s Presentation Manager shared code with 16-bit Windows although several design aspects and API names were closely linked together.
From the design of 16-bit Windows and playing with it, what’s clear is that this was actually future-proofing for Protected Mode on the 80286, sometimes known as segmented protection mode. In the 286’s Protected Mode, the processor could address more memory (up to 16 MiB), but the address space was still segmented into 64-kilobyte windows. The primary difference was that segment selectors became logical rather than physical addresses.
Had the 80286 actually succeeded, protected-mode Windows would have been essentially identical to 16-bit Windows due to how this processor worked. In truth, separate address spaces would have to wait for the 80386 and Windows NT to see the light of day, and this potential ability was never used. The 80386 both removed the 64-kilobyte limit and introduced a flat address space through paging, which brought the x86 processor more in line with other architectures.
Backwards Compatibility on Windows 3.1
While Microsoft’s backward compatibility is a thing of legend, in truth it didn’t really begin until Windows 3.1 and later. Since Windows 1.0 and 2.0 applications ran in real mode, they could directly manipulate the hardware and perform operations that would crash under Protected Mode.
Microsoft originally released Windows/286 and Windows/386 to add support for the 80286 and 80386, functionality that would be merged together in Windows 3.0 as Standard Mode and 386 Enhanced Mode, alongside legacy “Real Mode” support. Due to running parts of the operating system in Protected Mode, many of the tricks applications could perform would cause a General Protection Fault and simply fail. This wasn’t seen as a problem, as early versions of Windows were not popular, and Microsoft eventually dropped support for 1.x and 2.x applications entirely in Windows 95.
Windows for Workgroups was installed in a fresh virtual machine, and HELLO.EXE, plus two more example applications, CARDFILE and FONTTEST, were copied over. Upon loading, Windows did not disappoint, throwing up a compatibility warning right at the get-go.
Accepting the warning showed that all three applications ran fine, albeit with broken window sizing due to 0,0 being passed into CreateWindow().
However, there’s a bit more to explore here. The Windows 3.1 SDK included a utility known as MARK. MARK was used, as the name suggests, to mark legacy applications as being OK to run under Protected Mode. It could also enable the use of TrueType fonts, a feature introduced in Windows 3.1.
The effect is clear: HELLO.EXE now renders with TrueType fonts. The reason TrueType fonts are not enabled by default can be seen in FONTTEST, where the system typeface now overruns several dialog fields.
The question now was, can we go further?
35 Years Later …
As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same however was not true for Windows NT, which modern versions of Windows are based upon. However, running 16-bit applications is complicated by the fact that NTVDM is not available on 64-bit installations. As such, a fresh copy of Windows 10 32-bit was installed.
Some pain was suffered convincing Windows that I didn’t want to use a Microsoft account to sign in. Inserting the same floppy disk used in the previous test, I double-clicked HELLO, and the Feature Installer popped up asking to install NTVDM. After letting NTVDM install, a second attempt showed that, yes, it is possible to run Windows 1.x applications on Windows 10.
FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared. CARDFILE loaded but immediately died with an initialization error. I did try debugging the issue and found WinDbg at least has partial support for working with these ancient binaries, although the story of why CARDFILE dies will have to wait for another day.
In Closing …
I do hope you enjoyed this look at ancient Windows and HELLO.C. I’m happy to answer questions, and the next topic I’m likely going to cover is a more in-depth look at the differences between Windows 3.1 and Windows for Workgroups combined with demonstrating how networking worked in those versions.
Any feedback on either the article, or the video is welcome to help me improve my content in the future.
Well, it’s not really air-gapped, but there are so many policies and so much selective firewalling that Windows 10 is rendered nearly useless.
I was given a ‘new machine’ in some deep data centre, but it’s pretty barren. No Microsoft Office (LOL USE GOOGLE stuff? NO ODBC?!!! WTF!!!!?), and worse, no Linux Subsystem, no ‘Windows Store’ and no Microsoft.Net
Well, to add .NET it turns out that it’s on the installation media, which I was oddly able to download using the ‘Windows Media Creation Tool‘ and have the tool create an ISO. Then simply mount the ISO as my ‘D’ drive and run:
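The exact command isn’t preserved here, but enabling .NET 3.5 from mounted installation media is typically a DISM invocation along these lines (assuming the ISO really is mounted as D:):

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs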
If you can’t get the store running (it also can fail for various services not running like Storage Service), you can download the Linux Userland directly, after enabling the Linux Subsystem.
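For reference, a sketch of that manual route from an elevated PowerShell prompt (the aka.ms link is one of the distro download URLs Microsoft documents; the output file name is just an example):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1804 -OutFile Ubuntu.appx -UseBasicParsing
Add-AppxPackage .\Ubuntu.appx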
Ugh, nothing like having to uncripple something deliberately crippled to waste your time.
On one of my later trips I picked up this fun title, Lemmings!
And looking at the back of the box, what fun it contains!
One interesting thing about 1995 is that the rise of Windows 95 marked the end of the specialized PC market in Japan. Just as WinG/DirectX basically killed off the DIY driver/extender environment on MS-DOS, by abstracting the hardware it removed any meaningful difference between an EPSON PC, a PC-98, an FM Towns, or even the lowly IBM AT/386.
Being a Win32 title, this includes both WinG & Win32s. It’s a perfect snapshot of an early Win32 commercial game circa 1995: you needed to cater to that massive Windows 3.1 install base, even though so many were rushing to Windows 95. Naturally this also means that the setup program is a Win16 app, once again to bridge the Windows 3.1 and Windows 95 worlds.
Well, the obvious thing to do is just install it on a legacy 32-bit OS, and what better than Windows XP?
Now, to run it on something like Windows 10, it’s just a matter of copying WINLEMM.INI into %SystemRoot%, along with placing a copy of WING32.DLL into the %SystemRoot%\SysWOW64 directory, and you are good to go!
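In other words, from an elevated command prompt, something along these lines (assuming a default install location):

copy WINLEMM.INI %SystemRoot%
copy WING32.DLL %SystemRoot%\SysWOW64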
Sadly, the character encoding support in Windows is still really lacking, and it doesn’t render all that well. However, that had me thinking: almost a decade ago I did find a demo of Lemmings for Windows. Could it be possible to just overlay the executables & DLLs to produce an English commercial version?
Surprisingly the answer is yes. I wasn’t sure what to expect, but it’s as simple as that!
The game is mostly playable, though some parts are just coded to run as fast as possible, as no doubt nobody was imagining 1+ GHz machines. So the intro, warp & suicide are almost instant.
It’s something to keep the kids entertained for a day in recent events. It’s been a LONG CNY.
So after the crazed purchase I made a few weeks ago, I returned from Japan, and was able to unbox and use the machine I’d been wanting for a while, a non x86 Windows laptop!
The NovaGo has a Snapdragon 835, and my phone, the ASUS ROG Phone, has the 845. Yes, for this week my cellphone actually has a stronger processor than my computer. Honestly, this is an almost unthinkable situation! Not that I’ve been using my phone as a desktop substitute this week; it’s amazing how MS screwed up Windows 10 on phones, and Continuum.
By default it comes crippled with this ‘S’ mode Windows, which hearkens back to the Windows RT launch, with the difference that it’s a quick trip to the application store to unlock Windows 10 Professional. It’s a free download as it should be, and it doesn’t even require a reboot!
Build quality isn’t so bad; the screen folds all the way back to turn the machine into a ‘tablet’, although I don’t like that mode so much, as it just feels wrong to wrap a keyboard around a monitor. However, if you have rambunctious young kids it’s great: when someone went running by me, flailing their arms around like a wild animal, and struck the laptop, the screen could simply fold back 180 degrees. Yay.
My first thing to do after setting up Office and VMware VDI was to install the Linux subsystem and Ubuntu. It’s exactly the same as it is on x86_64, which is great, and this lets me have the best of both worlds, just like on x86_64. As much as I dislike stumbling around with that aborted child of Pascal & Fortran (Python), at least I can run it under (mostly) Linux to get something close to the production environment.
The C/C++ compiler is actually all cross tools. I wanted CLI only stuff because I like torturing myself, and it required a few GB of downloads. The good news is that the latest Windows 10 SDK does support GDI/CLI apps, so no crazy SDK hacking required, unlike back in the Windows RT days. Oddly enough the Taskforce 87 interpreter runs fine, but nothing else does.
I did a horrible job of hacking up SDL 1.2 to at least run (kind of: the audio doesn’t work, and it’s all WinDB… *EDIT* I got it fixed!!!). I got a few things up and running, including DOSBox and FrontVM. One thing that greatly helps is that i386 binaries ‘just work’; honestly, you wouldn’t even know you are running them when you are, which made the ARM64 version of Chromium Edge kind of difficult to hunt down. There really needs to be a more apparent way to tell them apart, if only for battery efficiency.
Again, the audio in my crap SDL build doesn’t work, so DOSBox is silent, and without DirectX the text mode is tiny. Oh, also, there is no OpenGL in this version of the Windows dev kit for some reason. Running ssystem is ungodly slow. The default optimization also seems to be /Os (optimize for space), and on this ASUS I have to say /Ox is way, way faster. DooM is quite playable in DOSBox when built with /Ox, unlike /Os.
For me, I spend most of my day to day in Office and VMware VDI, connecting to secure networks, so I’m just one step above a terminal. Which I guess is kind of sad, but this machine more than fills that role for me. The 120GB of storage is tight. This isn’t a development machine per se, nor is it something to download tonnes of data to; it’s a lightweight machine whose strengths are the built-in 4G modem and, when running ARM software, the longer battery life. To me the biggest drawback is that the keyboard isn’t backlit. Even though I touch type, I didn’t realize how much I’d grown used to backlighting for casual use.
I guess it’s a hard toss-up between this and a Pinebook Pro. I think most readers here would prefer the Pinebook for all its openness, although I still like the idea of being able to copy over the Win32s version of Lemmings and have it just run. For me, I kind of like this thing, although once I switch back to an x86_64 machine with more memory and better GPU and disk options, maybe this will just feel like some kid’s toy.
I don’t know how I didn’t think of this, but I also ported Neko98! Although the STL is having an issue with the ‘control panel’ so Neko is on autopilot.
As for the emulation, it is 32bit only, so expect to see this stupid message quite a bit. The neckbeard is a nice touch though.
Also built into the thing is a cell modem. I guess it’s really not a surprise, as the 835 really is a cellphone SoC. I have a ‘wifi egg’, as they are called here, a WiFi hotspot with unlimited internet from CLS, which is on the old 4G network. I popped the SIM in, it picked up the APN settings on its own, and I was connected in under a minute. I have to say that it’s about time SIM cards have this stuff programmed into them for a plug & play experience. And thankfully the ASUS is unlocked, although from what I understand these were sold in the USA bundled with some cell service plans.
For anyone with one of these rare machines that cares to play along you can find my built stuff on my ‘vpsland’ archive:
I’ve been a hidden long-time fan of non-x86 NT; I’ve owned Alphas and PowerPCs (still, sadly, no MIPS), and when it came to the ARM platform I’ve since picked up the Surface RT and the Surface 2 RT. YouTube works fine on both, although the 2 is far faster and an overall nicer user experience. I use the 1st gen as a Winamp player, as it’s easier to jailbreak, cross-compile for, and mess with. But locked-down Windows 8.0 for ARM is insanely limited.
Enter Windows 10 and another botched shot at Windows on ARM for the general consumer. These ship with an ‘S mode’ limited version of Windows, which apparently can be easily unlocked to full 10 Pro. I chose the ASUS as it’s a laptop and has more RAM than the HP. Both, however, should be enough for casual day-to-day usage of Office and Edge Chromium. I’ll have to see how it goes for either cross or native compiling.
Although the ARM chip in these machines is 64-bit, is there a 32-bit userland at all? Is it still possible to maintain a 32-bit userland of gcc 1/2 and binutils for legacy compiles? How terrible is x86 emulation under QEMU on ARM? DOSBox native? I guess SDL should be a simple rebuild, like on NT MIPS?
I’m also curious about WineVDM and MS-DOS player.
Oh well, I’m just waiting for a flight in the airport, going slowly insane.
My conclusion is really that the biggest problems with the physical machine are the lack of a backlit keyboard and the tiny storage. Windows on ARM feels like a solution looking for a problem, but the obtuse problem is non-x86 diversity, and in that regard it’s pretty much working fine.
But I’m looking forward to a usable non-x86 machine. I even have an unlimited chip for Hong Kong. It’ll be interesting to see if it can keep up with me, and if I’ve finally hit Ted Smith FORTRAN Maximum usage. Although this has no floppy drives.
So while I was in Japan, I bought this tiny and borderline useless Fujitsu Esprimo B532, powered by an i5 and not very much else. I upped it to 8GB of RAM, put in an SSD, and upgraded to Windows 10 to make it slightly tolerable.
The i5-3470T is ancient! So old, in fact, that newer versions of VMware and Hyper-V won’t run on it. The old solution was simply to use an older version of VMware; in my case the highest version that’ll run is 12.5.9. However, when trying to launch it I got this fun message:
Well wasn’t that a big bust.
I guess there is something hidden somewhere, but I just renamed the executable, and set it to Windows 8.0 compatibility mode, and wow it works!
And there we go! Now the latest version of NT can run the first public pre-release of Windows NT. YAY.
So I bit the bullet and updated to Windows 10 build 1903. And then the fun started on my glorious 2006 MacPro. It finished the update, and on reboot I get the login screen, and then almost immediately a blue screen.
Naturally the QR code is useless as it doesn’t specify any stop codes, and the minidump… Well that requires gigabytes and gigabytes of crap to download to get a tool to read it. (I still haven’t finished that rabbit hole, like COME ON! why isn’t it included?!).
However, after hitting F8 a million times, I found that safe mode with networking works just great. Searching online was basically useless, as there was no specific stop code to go with this WDF_VIOLATION. Looking around further, I did notice one thing: it was all Macintosh machines that were crashing out with this WDF_VIOLATION error. It must be something specific to the Apple hardware running Windows 10!?
Armed with this (dis)information, I went ahead and disabled all the Apple specific drivers & startup items.
From MSCONFIG.EXE I disabled the following services:
Apple OS Switch Manager
Apple Time Service
And in the task manager, I disabled the following startup items:
Realtek HD Audio Manager
Boot Camp Manager
I had the other VMWare serial & USB hook previously disabled, as I just don’t want them at all on my setup. The big upshot is that after rebooting out of safe mode, I’m now up and running on Windows build 1903.
Considering the BootCamp stuff was so woefully out of date, don’t expect Apple to fix this anytime soon. And since I’m on a MacPro 2006, I certainly won’t be getting any updates from Apple. But at least I can struggle to keep this thing up to date otherwise.
Now I can enjoy that ‘new command prompt’ everyone keeps telling me about.
***UPDATE***
I went through this on another Boot Camp Mac, and what I had to do was uninstall the “Boot Camp Services”. Its startup component triggers the bluescreen, as it does some nonsensical inventory, banging around on the drivers in an unfriendly way. I had version 4.0.4033 of the Boot Camp Services installed.
Removing this kept all the old drivers, which continue to work just fine.
Moved up from the old pair of E5-2620 v2’s to a pair of E5-2667 v2’s. What a big difference, going from a base clock of 2.1GHz to 3.3GHz. And yes, more cores!
And I can build DOSBox in 3 seconds using Visual C++ 6.0 Ultimate. I guess eventually I’ll get a modern machine, but for now this is pretty damned good. Which reminds me the newer processors for my 2006 Mac Pro should be arriving soon enough.
I couldn’t quite justify more than double the price for the E5-2697 v2 processor; although it has 50% more cores, its max clock is only 3.5GHz.
Oh well it’s as good as any update to the Huananzhi X79.
So I picked up this board on AliExpress for about $200 USD. Naturally, the X79 chipset is NOT a dual-CPU chipset, so yeah, it’s one of those ‘not exactly 100% legit’ Chinese motherboards.
One thing about Chinese companies is that many don’t sell directly to consumers; instead they sell on Taobao, Alibaba, or, to foreigners, AliExpress. The company’s site is http://www.huananzhi.com, as they had written on the box. Yes, you need the www. portion of the name, as again, many things are… well, dated on the Chinese internet.
High-speed USB3.0, SATA3.0 interface transmission speed is increased
PCI-E expansion slot*4
RJ45 Gigabit LAN interface
North Korean heat sink with HUANAN logo
Yes, I don’t get the whole Korean heat sink thing either. Anyways I thought it’d be fun to try so I ordered the thing. It took 3 days to get to my office in China, and an additional week to get from China to Hong Kong. I hear these things can take upwards of a month to arrive in North America.
Also worth noting is that they will not ship with a CMOS battery, so you need to supply your own CR-2032 battery, otherwise the board will not operate correctly.
The contents of the box are VERY minimal, but they did include 2 SATA cables, some CPU thermal paste, a very bare and … well not very good manual, a CD which I haven’t even tried to read, along with an IO shield.
I decided to pair this with a pair of E5-2620 v2‘s that I got for $40 USD shipped, as I didn’t want to initially spend a lot of money in case all of this just exploded or something. These were the ‘widest’ and cheapest processors I could find; I wanted a v2 E5 as they are faster than the first generation.
Also worth noting is that the board is only capable of driving v1 & v2 E5’s. And they need to be the E5-2xxx type, which support operating in pairs, unlike the E5-1xxx parts. I have no idea if the E5-4xxx (i.e. 4-way) parts would work in a pair, although it may be an interesting experiment to try.
The board apparently doesn’t support overclocking or anything that fancy.
Although it reports itself as an X79-based motherboard, it is in reality based on the Intel C602 chipset. I don’t know if they are harvesting them off of recycled servers, or if they have located a giant cache of repair parts that have aged past their 5-year warranties and so are prime candidates for being re-purposed as end-user motherboards. Nice things about these boards versus standard server boards are the inclusion of a Realtek HD Audio chip, a VIA USB 3.0 controller, and even the nice spacing of the slots, so you could really use all of them.
Since this is a dual-processor board you really want a PSU with dual 8-pin CPU power connectors; however, as mentioned in the poorly translated manual, you can take a PCI-E 6-pin connector and place it into the 8-pin socket, just positioned backwards so that the +12V pins face inwards.
It may look strange (well more so as I’m using an extension cable that is sadly more focused on aesthetics than function, but heh it was cheap), but rest assured it works!
Another thing to keep in mind is that since this board uses a server chipset, not a consumer one, just as it uses server processors, you will need server-grade memory: in this case, registered ECC DDR3. I went with 1866MHz parts, the fastest DDR3 speed made. The processors I chose only support memory up to 1600MHz, but the memory works fine when underclocked.
Another gotcha is the CPU coolers. These need to fit socket LGA 2011, but with mounting support for 2011 server motherboards, which, unlike the consumer versions, don’t have a separate plate to bolt to the underside; rather, they screw in entirely from the top. I had purchased a pair of cheap heatsinks that were about the right size, but they didn’t include any of the mounting hardware for a 2011 board. I picked up these GELID Phantom Black coolers for about $80 for the pair.
They are quite big, and include a pair of fans for each processor which will make the end build look a little crazy.
I didn’t want to spend a lot, and went with the cheapest PSU I could find that output more than 450 watts. Although the machine did turn on and run with that lower-end PSU, it shut off overnight for no apparent reason. I’ve since been okay with the larger and still cheap Antec NX650 PSU.
Although, this is the older style ‘bundle o cables’ type of PSU which I’m not such a fan of.
If I had charged up a cordless screwdriver this would have taken a few minutes, but screwing in the heatsinks by hand was a chore, and they really do dominate the board’s real estate.
I thought I had a case, but it turns out that it was for normal ATX sized boards, and this is an E-ATX board so it simply will not fit.
Another nice server like feature is that the board has an LED readout for early post codes, as booting this board will take some time. I think with 32GB of RAM it’s almost a minute.
I took the SSD & Hard disk out of my MacPro 2010 and put them into the new machine, and it booted up right away. Once connected to the internet Windows 10 picked up the new hardware and downloaded and installed the board drivers as needed. Interestingly enough Windows 10 also wanted a new activation code as the CPU/Motherboard was changed, although it didn’t complain about it.
When it comes to jobs that can run in parallel, this is an incredible build. Obviously single-core performance at 2GHz is, well, terrible. I know going to a 4GHz-max E5-2667 v2 won’t exactly be magic either, but there is something nice about having 32 threads. Running stuff like parallel compiles, compression, and video encoding is a dream on these massively parallel machines.
Games are ‘okay’. I get 60fps in Fallout 76 on this current 2GHz build at medium settings with the 1050 video card.
I do plan on getting faster CPUs after the Chinese New Year, as right now basically everything is shut down (it sucks being the only person in the office building, literally), and shipments won’t resume for at least another week.