WRP 4.8.0 – Simple HTML Mode with Image Support!

(this is a guest post by Antoni Sawicki aka Tenox)

Previously I wrote a boring, lengthy article about the need for a "simple HTML mode" in WRP. Today I want to introduce the addition of images to this contraption! You can now browse the modern web like it was 1994!

They say that an image is worth a thousand words, so here we go:

WRP 4.8.0 via Netscape 4.8 on SGI IRIX
WRP 4.8.0 via Mosaic 2.7 on HPUX 9.07

You can regulate the image size and make the images however big you want; PNG, GIF and JPEG are of course all supported:

WRP 4.8.0 on HPUX 9.07
WRP 4.8.0 via Netscape 1.0 on SunOS 4.1.4 on QEMU

The simple HTML mode is still quite buggy and needs a lot of fixes. I see some 400 errors here and there, CAPTCHA problems, etc. I think these can all be fixed in time.

You can download the latest WRP from GitHub!

Please report bugs and issues!

Installing an older version of QEMU on macOS using Homebrew

(this is a guest post by Antoni Sawicki aka Tenox)

I often need to install a specific, older version of QEMU on a Mac using Homebrew. If you search for how to do it, the typical answers are to create a local tap, extract some files, and other nonsense. Building from source is equally painful, because configure can't easily find the includes and libraries installed by Homebrew.

Here is how to do it in the simplest possible way. Find the QEMU Homebrew formula file on GitHub. Then click "History" in the top right corner and browse for the desired version. To the right of that version, click the little icon labeled "View code at this point". It should show you the older revision of the same formula. You can click "Download raw file", or copy the URL and fetch it with curl. Then simply run brew install ~/Downloads/qemu.rb (or wherever you saved it). Magic! Hope it helps!

WRP – Simple HTML / Reader Mode

(this is a guest post by Antoni Sawicki aka Tenox)

TL;DR
WRP can now render web pages into simplified HTML compatible with old browsers (in addition to the Image Map mode).

Netscape 4.x on IRIX 5.3 using WRP 4.7

Long version
WRP, or "Web Rendering Proxy", is a proxy server that lets you use vintage web browsers on the modern web. It was originally inspired by the Opera Mini/Turbo rendering proxy for mobile devices. I wanted a similar service that would translate modern web pages into older HTML. This not only proved very difficult, but I also realized that the web is advancing in a way that would make it not very future proof. I'm talking about dynamic pages, JavaScript-generated content and WASM. Instead, I took a different approach: generating a screenshot of a page with a clickable Image Map. This faithfully represents a fully rendered web page on a vintage machine and lets you click anywhere on it and perform actions, at a cost in performance. Rendering a GIF or JPEG and transferring it over the network feels rather slow and clunky.

I have been using WRP for some 10 years now, and I began to realize that this approach, while pretty awesome for show and bragging, is not very practical for day-to-day use. In fact, my use of web browsers on vintage workstations typically revolves around reading documentation, blogs, wikis and other "mostly text" websites. It would be much better if these were not clunky screenshots but rather some form of text output.

I again started poking around the original idea of simplified HTML. I looked at various reader modes, print-to-PDF, etc. In particular, I noticed recent advancements in so-called "web scraping", extraction and HTML-to-Markdown conversion services, likely fueled by the recent AI/LLM craze, as robots scrape the web to learn about humans. What caught my attention were the various "HTML to Markdown" services. They can fully render dynamic JS pages and extract the contents as they appear in a browser. Also, Markdown, if you think about it, is in fact a simplified HTML.

After doing some research, in a couple of evenings and with less than 100 lines of code I got a basic version going. The principle is as follows: first capture the page HTML and convert it to Markdown, doing some manipulation along the way, like adding link prefixes and removing images (we'll come back to that later). Then render the Markdown back to HTML, wrap it in a vintage HTML header, and off we go. The results are amazing!
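To make that pipeline concrete, here is a minimal Python sketch of the round trip. WRP itself is written in Go and renders pages in a real browser engine; the requests, BeautifulSoup, markdownify and markdown libraries below are my own illustrative choices rather than WRP's actual dependencies, and the link-prefixing step is omitted:

import markdown                          # Markdown -> HTML
import requests                          # static fetch only; no JS rendering here
from bs4 import BeautifulSoup
from markdownify import markdownify      # HTML -> Markdown

def simplify(url: str) -> str:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):       # drop things Markdown has no use for
        tag.decompose()
    md = markdownify(str(soup), strip=["img"])  # down-convert, removing images for now
    body = markdown.markdown(md)                # up-convert back to plain HTML
    # wrap in a deliberately ancient header so vintage browsers are happy
    return "<html><head><title>%s</title></head><body>%s</body></html>" % (url, body)

print(simplify("http://example.com"))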


For "mostly text" pages this is way better than the screenshot mode. Not only is it way faster and more responsive, and lets you select and copy text, you also use the old web browser more like it was originally intended. At any time, if you want the screenshot mode back, you can simply switch to PNG/GIF/JPG with a couple of clicks.

Another interesting aspect of this is extensibility and the potential for improvement. For the screenshot mode there just isn't that much you could add. It's just a screenshot. For Markdown and simple HTML there are a million things one could add. Both the down- and up-converters offer a wide variety of plugins and filters. We can improve formatting, layout and processing, add translation and other features. Perhaps also different behavior based on the client browser version. Maybe even input forms and …images.

Let's talk about images. Right now they are completely removed from the Markdown, for several reasons: compatibility, performance, load time, size, formatting, etc. I'm thinking that perhaps images could be added back in some converted form, for example downsized to a small JPG or maybe converted into ASCII art. Suggestions are more than welcome!

Netscape 3.x on OpenVMS 8.x using WRP 4.7 looking at VSI VMS Documentation!

Download from here: https://github.com/tenox7/wrp/releases/tag/4.7.0

To switch to the Reader / Simple HTML mode, simply change the image type to "TXT". This can also be done with the -t txt flag.

Happy browsing!

In Defense of the Mac Pro 2023

guest post by neozeed‘s nephew

There are a few reasons to get an M2 Mac Pro, and although many will say the Studio is a better value, that's only true if you're not after these important considerations:

  • The ability to install your own *bootable* SSD: nearly every major Mac reviewer ignored this insanely important feature.
  • The ability to install internal storage (and go beyond 8 TB), period: do we really want a cocktail of external HDDs attached? I don’t!
  • The ability to install an internal USB A licensing dongle: unless you're sharing your dongle over the network with 3rd party software from an RPi hiding in a closet (you should try VirtualHere if you want cross-platform dongle sharing; it's great), you don't want to accidentally shear it off, costing thousands of dollars in lost licensing.
  • The Magic Keyboard and (black) Magic Mouse are bundled (this is not the case with the Mac Studio or the MacBook Pro, adding a substantial cost). However, since AppleCare+ is more expensive for the Mac Pro than for the Mac Studio, you could argue these costs cancel each other out… unless you're Icarus with a wax wallet instead of wax wings and never purchase AppleCare+.

Recently 'GoFetch' made the headlines, but in my opinion it's irrelevant for a variety of reasons: 1) you won't see WAN-exploitable instances of GoFetch in the real world, and 2) it does indeed affect some Intel processors and probably others. With the way all processors are now designed around speculative execution, CVE after CVE is unavoidable, so the sensationalism has worn out its appeal. Even the once-ironclad AMD processors are afflicted with a bunch of nasty CVEs now too. /rant

Mac Pro vs PS/2 Model 95

After an eye-watering $8000 (refurbished base model with AppleCare+ 🤮💸💸), we're greeted with our new friend. The Cheese Grater (2023 Mac Pro) has befriended the Ardent Tool of Capitalism (PS/2 Model 95)! It's odd how both share silly nicknames and a very similar height sans handles. Both systems symbolize the same sentiment that Louis Ohland shared many years ago: "Think of a business computer being used for purely personal reasons. Fist pump at the man! Isn't using a corporate tool because you can an expression of free will?"*

*Louis Ohland is the guy who nicknamed the PS/2 Model 95, the Ardent Tool of Capitalism.

Q&A

Q: will I grate cheese on both of them?

A: only if you clean up the cheese residue for me. Are Personal System/2s even food-safe???

Storage

Sonnet M.2 4×4 NVMe PCI-e

The first thing we'll need to do is install an NVMe PCI-e card. I'm going with the overpriced "Sonnet M.2 4×4", because the 2×4 card is nearly the same price (making it a horribly valued product) and we may as well fill this thing with four NVMes to get our money's worth. It's not really clear whether the Sonnet M.2 4×4's controller outperforms the Sonnet M.2 2×4's (they don't use the same one), but both operate at Gen 3 and the NVMes themselves are Gen 3, so none of it really matters. There are much cheaper NVMe PCI-e cards, but most are not compatible with Macs; you're paying the tax for the fancy firmware, so buy a much cheaper card if you're on Windows or Linux. The card only came in a pink 'static suppressant bag' instead of a true antistatic bag, which is laughable given how much Sonnet is charging, and Amazon appears to have taken a bite out of the box.

For the primary boot NVMe we're going with a 2TB 970 EVO Plus. I know Louis Rossmann decried them as unreliable after he torched a bunch in some custom gaming rigs with sketchy PSUs, but they're good drives if you don't kill them with dirty PSU voltage rails. Always use quality PSUs, folks. This is why so many Maxtors failed: the ST SMOOTH chips were receiving power from PSUs outputting higher than 12V, it wasn't the drives themselves… and the same thing applies today when you exceed 12V on your power rails. I've also been running one in a ThinkPad for more than a year and it's been fine.

For the remaining tertiary storage we're going with some WD Green SN350s, solely because they're compatible with macOS (macOS NVMe compatibility is unfortunately very specific). Otherwise I would have gone with more TeamGroup 4TB drives, as they're some of the best value for money (particularly the TM8FP4004T0C101; it uses better NANDs than the more expensive and inferior 4TB offerings from Crucial and WD). Yeah… the cost of NVMe disks isn't absolute; sometimes cheaper ones use better NANDs, and you can be fleeced by brand recognition and misleading Gen4 spec numbers, which I imagine is what Crucial capitalizes on.

[If you don't know what I'm talking about: the Crucial and Western Digital NVMe drives always cheap out and use QLC NAND instead of proper TLC NAND as TeamGroup and Samsung do; and obviously they're not going to advertise that they're cheating you, and will price their products the same as the competition. Very similar to the whole SMR/CMR debacle: why would Western Digital tell you you're buying something cheaper at a premium cost? Caching is an entirely different thing, separate from this, and usually only the Samsung drives have 'true' dedicated cache logic, which is why I'm using the 970 EVO Plus as the boot/OS NVMe.]

Reinstalling macOS

Fortunately we don't need a second Mac to perform the OS reinstallation, so Apple Configurator is not needed. The procedure is as simple as this: press and hold the power button until the recovery menu pops up, choose 'Continue', choose to reinstall the OS, and choose the new drive (in this case the Samsung drive I just formatted as "OS"). I know a lot of people raise an eyebrow at requiring a second Mac for when the system actually does need to be completely restored because it can't boot into the internal recovery mode; just when you haven't paid Apple enough, you also need a second Mac to perform recovery and restoration. Even neozeed himself encountered this problem and, with a heavy sigh (a very heavy sigh) and mild disbelief, set up a macOS VM for restoration since he only owns one. 😂

Once this is completed we'll no longer be using the proprietary SSD that came with the Mac Pro. It DOES still need to be present in the system for the computer to POST (Apple marries it to the security IC, so it's intrinsic and serialized to the computer based on the configured storage), but presumably, as it won't be written to anymore, it'll never become exhausted from write cycles… and even if it did fail over time, since I ordered the bottom-of-the-barrel 1TB model I could just buy another 'cheap' 1TB card, which would allow the system to resume POSTing once again. If the soldered-in RAM or CPU fails then it's game over; as much micro-soldering as I do, I refuse to purchase even more tools to swap out underfilled BGA ICs… and then of course you have to hope employees at Foxconn actually managed to sneak out unused genuine ones to be resold on AliExpress or eBay. *sigh*

Now that we can use our own bootable SSD, the primary failure point and annoyance of ARM-based Macs is mitigated. For the Mac Studio you could buy backup replacement SSDs to keep swapping in as they wear out (they would have to match the storage size the system was preconfigured with), but keep in mind I can add 8TB cheaply and have my own bootable SSD. And in the event you need to do data recovery or read the drive on another system, anything — even your grandma's phonograph — can read NVMes, so it's much less of a hassle. As much as I hate to say it, I think the Mac Studio makes less sense than the Mac Pro BECAUSE of the storage… you're already buying an overpriced computer, so you may as well go the full distance for proper storage. Everyone's living in the honeymoon phase right now while all of the NANDs are under warranty and still functioning… but once they start failing it'll be a nasty money pit at best, or unfixable at worst. And do you know how many people make one computer their whole life and allow it to spontaneously fail with no backups?

ARE YOU NOT ENTERTAINED?

An ARM-based Mac using internal NVMes, is that not a nice thing? ARE YOU NOT ENTERTAINED? And no need to pay ~$2000 for 8TB. I did have to shell out $400 for the stupid SoNNeT card and $400 for the SSDs… buuut if I spent $2000 on SSDs I would far eclipse 8TB. In this screenshot you can also see the 'OS' Samsung SSD is now the primary 'Startup disk'. Shockingly enough, Apple's utility switched it over automatically after I reinstalled the OS to this drive, so nothing more was needed even there.

Internal USB, perfect for Dongles:

Installing the iLok dongle

The iLok licensing dongle installs nicely in the internal USB A port. It kind of reminds me of those internal VMware USB A ports meant for the ESX installation… and then you know they'd eventually go bad or corrupt themselves, and the internal IT of that company never makes a backup, so then you need to reconfigure ESX from scratch… good times. What? I'm not salty, not salty at all. The Sonnet NVMe card being installed in the first (bottom) slot does seem to draw more attention to the fact that there are so many unpopulated PCI-e slots.

What should be used as the display option?

1. The Dell UltraSharp U3224KB 6K actually has a few potential compatibility problems with macOS or the hardware (the cause isn't really known, as Dell support gave up troubleshooting it), so you'll get various screen distortions. It's also possibly one of the most UGLY products I've ever seen in my life… the web camera looks like a malignancy, and I absolutely can't stand silver-painted plastic. Complain about Apple's prices all you want, at least they use nice materials.

2. The Pro Display XDR is just a little bit too much for my taste and sometimes temperamental as it’s such a complicated display (contrary to popular mythology it does not use OLED technology so it shouldn’t burn out over time). I honestly don’t think I would encounter any problems if I bought a Pro Display XDR but the cost is too much.

It’s Free Real Estate – Tim & Eric

3. That basically leaves us with the Studio Display. A lot of the 3rd party Samsung/ASUS/LG 4K or 5K offerings have dramatically inferior colour or a larger pixel size… and there's still the potential compatibility aspect, since non-Apple hardware sometimes doesn't play nicely. While the Studio Display is much-maligned for its high cost and strangely attached power plug, its DPI is the same as the Pro Display XDR's; you just get less screen real estate and inferior contrast, which I don't care too much about. It will still look much better than your garden-variety LG 27" 4K UHD UltraFine because the colour is calibrated very well and it gets decently bright… again, I wish YouTube reviewers would point some of these things out instead of assuming that every display is equivalent to Apple's offerings when they're not. And in the event you do find 5K 27″ displays from other manufacturers, they're still at 60Hz. The refurbished Studio Display I had my eye on from Apple is no longer available, so I'll be waiting a bit until they stock another one… or maybe they'll get a heavily marked-down Pro Display XDR… In the meantime, I'm stuck using one of my gaming monitors, which has 240Hz and strobing to reduce ghosting, which does work on macOS!.. and makes macOS look so different, since I'm used to how it looks with all of the ghosting all the time.

Another little something that's rarely discussed: the nano-texture glass option causes a slight 'frosting' which is especially noticeable on text… it's only meant as a compromise if you're working in a literal sun room; sometimes more expensive does not mean better. This is exemplified by the M2 MacBook Air situation: if you opted for the superior GPU, it ended up running more slowly because of thermal throttling, so the lower-end GPU option is more performant, lol. Of course Apple doesn't always disclose these caveats or finer details, but their divisions responsible for publishing the products may not be privy to them.

Peripherals:

Onto the peripherals: I will indeed be using the Magic Mouse… before your jaw drops and you grab some tomatoes while calling me a heretic, let me explain. The Magic Mouse is one of the few peripherals with velocity-sensitive 360º scrolling AND full integration into the UI of the operating system. This is extraordinarily similar to IBM's ScrollPoint, which also offered dynamic 360º scrolling, and to a lesser extent to TrackPoint scrolling, which only offers vertical and horizontal. Needless to say, 360º and horizontal scrolling are things I use all the time, and I cannot fathom why we still even have (notched!) mouse wheels. It's bizarrely a mouse Apple seemingly designed specifically for me and nobody else; I imagine average or larger hands would find it extremely uncomfortable, and Apple really should offer a larger version to cover a wider demographic.

Men & Mice

The Magic Mouse and ScrollPoint Pro share very similar design philosophies in the way we scroll. I also made another strange discovery while looking for some flatter, slicker mousepads (since the Magic Mice don't work well on cloth ones at all), and that discovery is these 3M 'Precise' mouse pads: AMAZON LINK.

Apparently the 3M mouse pads have a reflective material which lets the laser run at lower power and thus supposedly saves 50% of the battery life; some Magic Mouse users have affirmed this, so we'll see how it goes. It's kind of surprising I've never heard any tech reviewers mention these, because saving 50% of the battery life on a wireless mouse is huge.

Keyboards..

There are a lot of good reasons NOT to use Bluetooth keyboards due to wireless keylogging, but there's not going to be anyone with that talent in rural Canada, so I'm in the clear. You could buy a Matias keyboard, but they're actually worse in many respects than the first-party Apple keyboards: the legend printing is of dramatically worse quality, the surface of the keycaps doesn't have that special velvety texture, and the snappiness of the scissor switches is probably worse. While I have many mechanical keyboards, I don't care so much about them anymore. The Apple Magic Keyboard is just a little bit too flat for my taste these days, so I ordered an "ESC Flip PRO Computer Keyboard Stand", which sticks on the back and gives you different height adjustments if needed.

onboard LEDs everywhere.

Both the iLok and Sonnet NVMe card have so many LEDs on them you can see the lightshow through the rear of the ‘grater’ now.

Now my plan is to use this thing for at least a decade to get my money's worth: will 64GB of RAM be enough? To that I say: 64GB ought to be enough for anybody. The only major hindrance will be the forced software obsolescence when the Apple overlords declare it will not be receiving any more updates… and then you know things like Roland Cloud and other major vendor software will cease to get updates and to function. It's appalling how all software is heavily DRMed and requires a live account to work against. At the very least, when WWIII breaks out I'll have plenty of premium aluminum to donate to the state, forged by Tim Apple himself!

For the record I was never really an 'Apple person', but they've finally fixed all of the problems (mice have two buttons and the keyboard layout is restored to be more IBM-like) and made a product that fulfills everything I've ever wanted… AND forced developers to program for ARM: so now my Stallman-not-approved, absolutely proprietary audio software runs incredibly well on a non-x86 platform. Astounding. Yeah, there were some 3rd party mice with two buttons for Macs 'back in the day', but a good portion of the *software* and games weren't programmed for a real right click, rendering it useless. I remember watching a 'making-of' video of the Myst developers pushing down Ctrl with the mouse to right-click EVERY SINGLE TIME in their 3D modelling software and nearly fell off my chair… it's quite jarring when you need to press a key on the keyboard at the same time as clicking the mouse, so I've no idea how they tolerated that. Maybe they loved doing it? Who knows.

It’s crazy how much changes, and how much is the same

[PC-98] How an obscure fighting game for Japanese PCs can ruin your day

This is a guest post from spaztron64

Sometime in June 2023, I came across a Twitter post by kuma_neko24 where he detailed his struggles in getting Policenauts to work on his PC-9821 V166. As I own the exact same machine, I figured I should give it a shot myself and report the results.
Unfortunately, while I was experimenting, I hit a strange situation where certain executable files would randomly corrupt themselves and become unusable:

I figured the game might've been the culprit, but after restoring the most recent backup, the same thing happened not long after, without even trying to play the game. I ran Memtest on the SDRAM, as well as block-level diagnostics on the CF card, and both came back fine, so I attributed it to a corrupt filesystem.

I then restored a much older backup and moved over all the known-good files from the recent backup. Things seemed fine for a while, but then the exact same situation happened again. I decided to look into the issue in more detail, and I noticed that affected files like WIN.COM and KRNL386.EXE were 3KB larger… As we'll see later, this should've been an immediate red flag, but I yet again brushed it off as a bad filesystem or CF card.

Fast forward to March 2024, and I got myself a set of new CompactFlash cards. I had once again restored the last known good backup, and for about a day everything seemed alright, until….

Needless to say, all my previous hypotheses turned out to be wrong. As such, I investigated the corrupted files yet again and used windiff to take a closer look at what actually changed. Let's take a look at WIN.COM, SCANREG.EXE and KRNL386.EXE:

We can observe the following pattern:

  1. The files always grow in size
  2. windiff shows that the header is modified and that a bunch of garbage is added at EOF
  3. It’s always the same exact garbage
  4. Additional examinations show that MEM.EXE also has these modifications
  5. MZ headers are present, so it’s certainly executable code

By every metric, this isn’t a set of accidental corruptions, these are deliberate infections.

I then proceeded to take a sample of the suspiciously added code, did a byte scan of every file on the card (a rough sketch of such a scan follows the list below), and isolated the following programs as infected:

  • A:\WINDOWS\WIN.COM
  • A:\WINDOWS\COMMAND\MEM.EXE
  • A:\WINDOWS\COMMAND\SCANREG.EXE
  • A:\WINDOWS\SYSTEM\KRNL386.EXE
  • B:\RECYCLED\DB51.EXE
  • B:\WIN31\WIN.COM
  • B:\WIN31\SYSTEM\DOSX.EXE
  • B:\SBVGM\VGMPlay.exe
  • B:\GAMES\FIGHTING\DGA\DGP.EXE
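For reference, the byte scan itself can be as simple as the following Python sketch (my reconstruction, not the exact tool used; the signature bytes and mount point are placeholders):

import os

SIGNATURE = bytes.fromhex("deadbeef")   # replace with a real sample of the appended code
ROOT = "/mnt/cfcard"                    # hypothetical mount point of the CF card

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as f:
                if SIGNATURE in f.read():
                    print("infected:", path)
        except OSError:
            pass                        # unreadable file, skip it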

Most of these are self-explanatory, apart from the last two.
VGMPlay is a program by Scali that allows playback of OPN(A) and OPL3 VGM files even on PC-98s without a SoundBlaster 16/98. I know this program is not the culprit, since the original program I got is clean.
DGP, on the other hand, is a different story, and it needs a bit of a foreword.


Duelists and queens!

DGP is a shorthand for “Duelist Gaiden Plus”, which refers to the game “Queen of Duelist Gaiden Alpha Plus”.

The original Queen of Duelist is a rubbish game not worth anybody’s time or any further mentions.
Queen of Duelist Gaiden Alpha is the 1994 sequel, which is a significantly better game, and arguably one of the best fighters for any personal computer at the time, featuring:

  • 10 fully voiced characters
  • Adjustable game speed
  • Dual PWM sampling over the integrated PC beeper speaker
  • Decent performance on a 286, and more

At the end of 1994, Agumix released an upgrade for the game called "Queen of Duelist Gaiden Alpha+", which introduces some quality-of-life improvements. This update requires the original Gaiden Alpha to already be installed, although it doesn't do any differential patching; it just replaces existing files with newer variants and adds the DGP.EXE executable to be used instead of DGA.EXE. Shockingly, all of my older backups containing the patch, as well as the dump of the disk itself (available on Neo Kobe PC-9801), contain the suspicious code.

I didn’t expect it to return any results at first, but I uploaded DGP.EXE to VirusTotal for scanning, and well…

Who would’ve thought, the update was distributed with a copy of the Yankee Doodle virus embedded within!

Yankee Doodle is a very simple COM and EXE self-injecting virus from 1989. When executed, it stays resident in memory and infects COM and EXE files that are touched through certain INT 21h DOS API calls, which covers the files in the list above. Its payload is normally supposed to play the Yankee Doodle tune through the PC speaker at 17:00 every day, but on PC-98 what it does instead is catastrophically fail and crash the entire system. Its infection routines, however, do work.

A thing worth pointing out is that the infection routine is not automatic: it only engages when other programs are manually executed from DOS. This should never be possible after starting the game, since there's no way to return to the DOS command line without a full system reboot…

… unless you tried to start the game without enough free memory, after which the game dumps you back to DOS. Pretty much every DOS user would, in a situation like this, run MEM (preferably with the /C and /P flags) to check what is using so much memory and how little remains free.

Sadly, for us, Yankee Doodle remains memory resident in this case, and it infects MEM as soon as it’s run, which explains how it got infected. Now every time you check your memory, the virus will be waiting to spread further.


Well, now what?

No clean copy of the Plus version is known to be in circulation, and the only available source currently is the compromised disk image on Neo Kobe.
Additionally, nobody knows if the game was originally distributed with the infection, or if the person sharing a dump of the game around through P2P back in the 1990s had infected his copy.
As such, unless an original physical copy is found on auction, dumped, and confirmed clean, the only solution is to patch out the infectious payload from the game. Until then, do not play Queen of Duelist Gaiden Alpha Plus on your PC-98 machine!

EDIT 2024-03-31:
Fortunately it appears that the “Alt 1” dump available on Neo Kobe has a clean copy of DGP.EXE.
DrNyquist has also confirmed that this torrent has a clean dump.

Conclusion: steer clear of the primary dump on Neo Kobe, use the others mentioned above.

Using an AIM-VPN EPII Plus in a Cisco 3845 or how to convert an EPII to HPII

this is a guest post by night3719

I got a 3845 for cheap with an AIM-VPN/EPII-PLUS salvaged from a 2851. At the seller’s place I installed the card and tested the 3845 there. When IOS was booting up I noticed a message along the lines of “AIM type unsupported by this platform”. I didn’t think much of it and just thought maybe I was using a version of IOS that didn’t support the module.

When I got home I threw a bunch of different IOS versions at it and nothing worked. Something was off.
As it turns out, there are 3 variants of that card: one for the 1800 (BPII), one for the 2800 (EPII) and one for the 3800 (HPII) (note: the cards support other routers too, but compatibility is a mess).

When I looked up the other variants, I immediately noticed something interesting: the modules seem identical. After looking at questions posted on the Cisco community forums by people who had similar issues, something caught my eye: the output of show diag. That command spits out an EEPROM dump. As it turns out, the only difference between the cards is the contents of that EEPROM.

my card
card I found online

There are two 93C46A EEPROMs on the board: one connected to the crypto chip, which probably holds its configuration data, and the other connected to the connector that goes to the main board. Okay, so all I have to do is flash that chip with the dump I found online, right?

circled red is the EEPROM for the card, circled blue is the crypto EEPROM

Wrong.

dump from my card

The contents are obfuscated. Luckily, however, two friends far smarter than I am (special thanks to Rachel Mant <[email protected]> and nyanpasu64!) figured out the obfuscation scheme, and one of them wrote code that encodes/decodes it:

def bitSwap(value: int) -> int:
    # reverse the bit order within a byte (bit 7 <-> bit 0, bit 6 <-> bit 1, ...)
    result = 0
    for bit in range(8):
        result |= ((value >> (7 - bit)) & 1) << bit
    return result

def addrSwap(value: int) -> int:
    # swap address bits 1..6 end-for-end while keeping bit 0 in place
    # (bit 7 is always zero for addresses 0x00-0x7F)
    result = value & 1
    for bit in range(1, 6 + 1):
        result |= ((value >> (7 - bit)) & 1) << bit
    return result

data = [
    # data to be encoded or decoded goes here (0x80 bytes)
]

# The transform is its own inverse, so the same loop both encodes and decodes:
# read each byte from its swapped address and print it with its bits reversed.
for i in range(0x80):
    byte = data[addrSwap(i)]
    print(f'{bitSwap(byte):02x}', end='\n' if (i % 16) == 15 else ' ')
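To try it yourself, the data list has to be filled with the 0x80 bytes printed by show diag. A hypothetical helper (not part of the original script; the sample bytes below are placeholders) can take the dump pasted straight in:

dump_text = """
0x00: 04 FF 40 03 E1 41 01 08
"""                                       # paste the full 128-byte dump here

data = []
for line in dump_text.splitlines():
    line = line.split(":", 1)[-1]         # drop any "0xNN:" address prefix
    data += [int(tok, 16) for tok in line.split()]

assert len(data) == 0x80, "expected the complete 128-byte EEPROM dump"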

I used their code to encode the output from show diag, then flashed the EEPROM with my TL866. To my surprise it worked, and the card works even on the latest IOS for this router (15.1(4)M12a, as far as I know)!

it works!

And just like that, it works!

Win32Emu / DIY WOW

This is a guest post by CaptainWillStarblazer

When the AXP64 build tools for Windows 2000 were discovered back in May 2023, there was a crucial problem. Not only was it difficult to test the compiled applications since you needed an exotic and rare DEC Alpha machine running a leaked version of Windows, it was also difficult to even compile the programs, since you needed the same DEC Alpha machine to run the compiler; there was no cross-compiler.

As a result, I began writing a program conceptually similar to WOW64 on Itanium (or WX86, or FX!32), only in reverse, to allow RISC Win32 programs to run on x86.

The PE/COFF file format is surprisingly simple once you get the hang of it, so loading a basic Win32 EXE that I assembled with NASM was pretty easy: just map the appropriate sections to the appropriate areas, fix up the import tables, and start executing.
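For anyone curious what those steps look like, here is a rough Python illustration using the pefile library (win32emu's loader is native code; the file name is just an example):

import pefile

pe = pefile.PE("reversi.exe")                 # any 32-bit Win32 PE
base = pe.OPTIONAL_HEADER.ImageBase
entry = base + pe.OPTIONAL_HEADER.AddressOfEntryPoint

# 1. map each section at its virtual address
for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode()
    print("map %-8s at 0x%08X (%d raw bytes)"
          % (name, base + section.VirtualAddress, section.SizeOfRawData))

# 2. fix up the import table: each entry gets pointed at a thunk in the emulated DLLs
for dll in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
    for imp in dll.imports:
        print("import %s!%s" % (dll.dll.decode(),
                                imp.name.decode() if imp.name else imp.ordinal))

# 3. start executing at the entry point
print("entry point: 0x%08X" % entry)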

To start, I wrote a basic 386 emulator core. To complement it, I wrote my own set of Windows NT system DLLs (USER32, KERNEL32, GDI32) that execute inside the emulator and use an interrupt to signal a system call, which is trapped by the emulator and thunked up to execute the API call on the host.

For example, up above, you can see that the emulated app calls MessageBoxA inside of the emulated USER32, which puts 0 in EAX (the API call number for MessageBoxA) and then does the syscall interrupt (int 0x80 in my case), which causes the emulator to grab the arguments off of the stack and call MessageBoxA.
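As a toy illustration of that thunking idea (not win32emu's real code, which is native; the cpu helper methods here are hypothetical), the host-side interrupt handler boils down to a dispatch table keyed by the value left in EAX:

import ctypes

user32 = ctypes.windll.user32                 # host-side USER32 (Windows only)

def thunk_MessageBoxA(cpu):
    # stack after the emulated stub: [return address][hWnd][lpText][lpCaption][uType]
    hwnd, text, caption, utype = (cpu.read_stack_dword(4 + 4 * i) for i in range(4))
    return user32.MessageBoxA(hwnd, cpu.read_c_string(text),
                              cpu.read_c_string(caption), utype)

API_TABLE = {0: thunk_MessageBoxA}            # 0 = MessageBoxA in the emulated USER32

def on_int80(cpu):
    cpu.eax = API_TABLE[cpu.eax](cpu)         # result travels back to the guest in EAX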

To ease communication between the host's Win32 environment and the emulated one, I ran the emulated CPU inside the host's memory space. This means that to run applications written for a 32-bit version of Windows NT, you need a 32-bit build of win32emu (or a 64-bit build with /LARGEADDRESSAWARE:NO passed to the linker) to avoid pointer truncation issues and to prevent Windows from mapping memory at addresses the emulated CPU cannot reach.

To get “real” apps working, a lot of single-stepping through the CRT was required, but eventually I did get Reversi – one of the basic Win32 SDK samples – to work, albeit with some bugs at first. Calling a window procedure essentially requires a thunk in reverse, so I inserted a thunk window procedure on the host side that calls the emulated window procedure and returns the result.

It’s amazing, it’s reversi!

After this, I got to work on getting more complicated applications to work. Several failed due to lack of floating-point support, some failed due to unsupported DLLs, but I was able to get FreeCell and WinMine to work (with some bugs) after adding SHELL32. I was able to run the real SHELL32.DLL from Windows NT 3.51 under this environment.

Freecell
Minesweeper

One might wonder why I put all this work into running x86 programs on x86, but the reason is that there's the most information about, and I'm most proficient with, Windows on the 386. Windows on other CPUs not only uses different processors, it also uses different calling conventions and a lot of other stuff I didn't want to mess with at first. But this was at least a proof of concept to build a framework where I could swap the CPU core for a MIPS, PPC, or Alpha emulator, or whatever I wanted, and get stuff running.

Astute readers might be wondering why I didn't take the approach taken by WOW64. For those who don't know, most system DLLs on WOW64 are the same as those in 32-bit Windows; the only ones that differ are those with system call stubs that call down to the kernel (NTDLL, GDI32, and USER32; the first calls into NTOSKRNL and the latter two into WIN32K.SYS). WOW64 instead calls a function with a system call dispatch number, which does essentially the same thing. The reason for this is that the system call numbers are undocumented and change between versions of Windows; WOW64, being an integrated component of Windows, can stay up to date. If I took this approach, I'd either have to stay locked to one emulated set of DLLs (i.e. from NT 4.0) and use their system call numbers on the emulated side, or write my own emulated DLLs and stick to a fixed set of numbers, but either way I'd somehow have to map them to whatever syscall numbers are being used on the host.

As I went on, I realized that what I said earlier about loading Win32 apps being easy was wrong. Loading a PE image is pretty straightforward, but once you get into populating the TEB and PEB (many of whose fields are undocumented), it quickly gets gnarly, and my PEB emulation is incomplete.

Adding MIPS support wasn't too much of a hassle, since the MIPS ISA (ignoring delay slots, which gave me no shortage of trouble) is pretty clean and writing an emulator wasn't difficult. The VirtuallyFun Discord pointed me to Embedded Visual C++ 4.0, which was invaluable during development, since it includes a MIPS assembler and disassembler, which I haven't seen elsewhere. After writing a set of MIPS thunk DLLs and doing some more debugging, I finally got Reversi working.

There’s still some DLL relocation/rebasing issues, but Reversi is finally working in this homebrewed WOW!

I'd encourage someone to write a CPU module for the DEC Alpha AXP (or even PowerPC, if anyone for some reason wants that). The API isn't too complicated, and the i386 emulator is available for reference to see how the CPU emulator interfaces with the Win32 thunking side. An Alpha backend for the thunk compiler can definitely be written without too much trouble. Obviously, the AXP presents the challenge that fewer people are familiar with its instruction set than with MIPS or the 386, but this approach frees you from having to emulate all of the intricate hardware of an actual Alpha machine while still running applications designed for it, and I've heard the Alpha is actually quite nice and clean. MAME's Digital Alpha core could be a good place to start, but it'll need some adaptation to work in this codebase. Remember that while the Alpha is a 64-bit CPU with 64-bit registers and operations, it still runs Windows with 32-bit pointers, so it should run in a 32-bit address space (i.e. pass /LARGEADDRESSAWARE:NO to the linker).

Theoretically, recompiling the application to support the full address space should enable emulation of AXP64 applications, since the Alpha’s 64-bit pointers will allow it to address the host’s 64-bit address space, but I’m not sure if my emulator is totally 64-bit clean, or if the AXP64’s calling convention is materially different from that on the AXP32 in such a way that would require substantial changes. In either case, most of the code should still be transferable.

I also want to get more “useful” applications running, like development tools (i.e. the MSVC command line utilities – CL, MAKE, LINK, etc.) and CMD. Most of that probably involves implementing more thunks and potentially fixing CPU bugs.

This project is obviously still in a quite early stage, but I’m hoping to see it grow and become something useful for those in the hobby.

For those who want to play along at home, you can download the binary snapshot here: w32emu.zip

A more complete version of the writeup is available here: https://bhty.github.io/og/win32emu_VirtuallyFun_Post.htm and you can find the project here https://github.com/BHTY/Win32Emu/.

Sun Ray adventures pt1

this is a guest post by night3719

A while back I was looking for a 19in 5:4 screen, so I messaged a guy I know who would normally have something like that. When I asked him about it, he said he didn't have any 19in screens; however, he did have this "14in Sun LCD". I was intrigued, so I asked him to send pics of it. Lo and behold, this is what he sent me the next day:

Unfortunately, bad news came. He powered it on and told me it was flickering. Okay, fine. These are hard to come by in my country (Vietnam), so I decided to get it anyway. He also cut the price in half, so it was reasonable-ish.

When I got home and powered it on… yeah. It was flickering. I opened up the LCD's menu and quickly noticed something peculiar: the image was flickering but the LCD menu was not. When I opened the unit up, I made yet another interesting discovery: the whole thing is practically a Sun Ray duct-taped to a normal LCD. The Sun Ray board is not driving the LCD directly; there's a separate controller board (similar to what you would find in a normal standalone display without a Sun Ray-shaped tumor on the back).

As it turns out, the flickering was caused by a single capacitor that had gone bad. I replaced it and the image looks good.


I've read there is a GUI that allows you to configure various parameters of the Sun Ray, so I tried to bring it up. No matter what key combo I pressed, it didn't show up. Once again, bad news: my Sun Ray has the non-GUI firmware. The only way to enable the GUI is to flash a GUI firmware, or a firmware with the GUI enabled (SRSS 5.1 and below ship separate firmware files for GUI and non-GUI, while in SRSS 5.2 and later both are a single file and GUI on/off is specified with a flag during flashing).

Okay then. No big deal, all I have to do is just flash the firmware, right? Well, yes but no. I would very quickly find out that I don't have the firmware. I had SRSS 5.4 installed, and it turns out 5.3 and later stopped including the firmware; that is something you need MOS (My Oracle Support) for. Great job, Larry!

Okay then. No big deal, all I have to do is just download SRSS 5.2, right? Once again, for the second time, yes but no.


*cough*

2 days later I got access to edelivery again and downloaded SRSS 5.2. I uninstalled SRSS 5.4 and installed 5.2; all I have to do now is just flash the firmware, right? Riiiight??? Once again, for the THIRD time, yes but no. For some reason I was able to flash the firmware with "utload" (which has the GUI disabled), but I couldn't flash it with "utadm", despite the Sun Ray being able to connect to my T5220 and start a session just fine. As I would find out after one whole day wasted, I was supposed to use a separate network served by the T5220, and this is what I did:
Set up the NET1 port as a dedicated interface for the Sun Ray


-bash-3.2$ sudo utadm -a e1000g1
### Warning: DHCP Service is in the maintenance mode
             There could be a problem with the DHCP configuration

### It is strongly recommended to fix the problem and then use:
### "/usr/sbin/svcadm clear svc:/network/dhcp-server:default"
### to get DHCP service out of the maintenance mode before running utadm

Do you want to Continue?  (Y/[N]): y
### Configuring /etc/nsswitch.conf
### Configuring Service information for Sun Ray
### configuring e1000g1 interface at subnet 192.168.128.0
  Selected values for interface "e1000g1"
    host address:       192.168.128.1
    net mask:           255.255.255.0
    net address:        192.168.128.0
    host name:          t5220-e1000g1
    net name:           SunRay-e1000g1
    first unit address: 192.168.128.16
    last unit address:  192.168.128.240
    auth server list:   192.168.128.1
    firmware server:    192.168.128.1
    router:             192.168.128.1
  Accept as is? ([Y]/N):
### successfully setup "/etc/hostname.e1000g1" file
### successfully setup "/etc/inet/hosts" file
### successfully setup "/etc/inet/netmasks" file
### successfully setup "/etc/inet/networks" file
### Disabling Route Advertisement
### finished install of "e1000g1" interface

### Configuring firmware version for Sun Ray
        All the units served by "t5220" on the 192.168.128.0
        network interface, running firmware other than version
        "4.3_146928-01_2011.06.03.14.41" will be upgraded at their next power-on.      
       
### Configuring Sun Ray Logging Functions


DHCP is not currently running, should I start it? ([Y]/N): ### Error: unable to start dhcp services.
    Please restart dhcp manually after utadm has completed.

Well… oops. Shouldn't have ignored that. One "svcadm clear dhcp-server" and one "svcadm restart dhcp-server" later… let's try to flash the firmware.

-bash-3.2$ sudo utfwadm -A -e 00144F6F69CA -n e1000g1 -G force
-n interface option ignored.  It is no longer required with -e option.
        Unit "00144F6F69CA" will be upgraded at its next power-on
        if it is served by host "t5220" and is connected to
        the  network and is not already running firmware
        version "4.3_146928-01_2011.06.03.14.41".

### stopped DHCP daemon
### started DHCP daemon
### reinitialized DHCP daemon

For those who are wondering what the flags do:

Options:
        -A            # add the specified unit(s) to the upgrade list
        -D            # delete the specified unit(s) from the upgrade list
        -P            # print version information
        -R            # remove firmware modules from boot directory
        -a            # apply to all units connected to the specific interface
                      #  or subnet
        -e enetAddr   # apply to the unit given by the six hex bytes
                      #  of its ethernet address
        -n intf       # name of a dedicated network interface to enable upgrades on
                      #  (e.g., hme0, vge1, etc. "all" = all interfaces)
        -G option     # control enabling of configuration GUI on Sun Rays
        -g option     # control disabling of configuration GUI on Sun Rays
        -i filename   # append contents of filename to config files
        -N subnetwork # shared subnetwork address to enable upgrades on
        -d            # actively disable firmware download (useful with "-e")
        -V            # only generate version files, do not configure DHCP
        -F            # force firmware load even if downgrading
        -u            # use frame buffer to do download and decompression
        -f firmware   # use the firmware described by the path "firmware"
                      #  for upgrades on the given network interface(s)

Power cycle with CTRL+Pause+A and…

…success!

Fun fact: the firmware is temporarily stored in the framebuffer (IIRC, at least). The GUI can now be accessed:

SoftWindows on OpenVMS

(This is a guest post by Antoni Sawicki aka Tenox)

I like exploring vintage hypervisors and emulators. In the past I did a whole series on Merge, VP/IX and others. This time I wanted to take a look at something a little more exotic: SoftWindows on Alpha OpenVMS. I had in fact installed it a while back, but I could never get it properly licensed. I looked everywhere and asked everyone, and of course no one had a license pack for this. Fortunately there are two license generators for OpenVMS, pakgen and lmfgen. But how do you find out the exact product code and vendor? VMS provides a license debug facility:

$ reply/enable=license
$ define/sys/exec lmf$display_opcom_message true

Then, when starting the app, you will get an OPCOM log message with all the required details: product name, vendor, etc. The rest is easy. For the lazy, here is a complete license pack for SoftWindows:

$ LICENSE REGISTER SOFTPC -
        /ISSUER=DEC -
        /PRODUCER=DEC -
        /UNITS=0 -
        /OPTIONS=(MOD_UNITS,ALPHA) -
        /CHECKSUM=2-GNHM-DAFO-CPGG-AICI
$ LICENSE LOAD SOFTPC

Here is a screenshot for your viewing pleasure!

SoftWindows on OpenVMS Alpha

The install comes with its own version of Windows 3.1 plus some additional tools and apps, typical for Insignia products. You can map drives to folders, map COM and LPT ports, etc. There is a variety of video modes: Hercules, CGA, EGA and VGA, even 256 colors. The performance is quite decent; however, the CPU is pegged at 100%, as you can see in the system monitor. There is a CPU idle detection tool, but it doesn't seem to work very well. I suspect this may have to do with running on a much newer OpenVMS version than the software was designed for. SoftWindows was released in 1994 and has not been updated since.

How do you install and run this thing? There is a full installation guide; however, since this is just a PCSI file, you can simply use product install:

$ unzip softwin-v0100.zip
$ product install *

To start it you cast these magic spells:

$ @sys$sysroot:[sysmgr]softwin$startup.com
$ softwin

Since SoftWindows is essentially SoftPC, you can also run it in pure DOS mode. I will do a follow-up on this and explore some DOS games.

You can find all the files on osarchive.org.

In hindsight, it's ironic how the roles have reversed in 30 years. Back then, MS-DOS / Windows was a toy OS running on a toy "personal" computer, emulated in a little window on a "real" computer like a DEC Alpha. These days you run OpenVMS as a guest VM on a Windows PC.

Have fun with virtualization!

Patching Touhou 6 (Embodiment of Scarlet Devil) to run on a 3dfx Voodoo 2

This is a guest post from spaztron64

One thing that's been bugging me for years at this point is not being able to run Touhou 6 on my PC-9821 V166. For a good few years I was stuck with nothing more than a Matrox Mystique graphics card in that thing, which can't create a D3D6 HAL context for rendering the game's 3D elements. In 2021 I snatched up a 12MB 3dfx Voodoo 2, in hopes of being able to play more 3D games on that machine. There were two major problems…

1) The USBHID.SYS driver for PC-98 Windows 9x conflicts haaaard with Voodoo drivers. Moving the cursor around corrupts memory and makes the system unstable or kills the driver in mere seconds of use

2) None of the Touhou games support secondary Direct3D devices

For those not in the know regarding the second issue, DirectX allows you to use multiple DDraw- and D3D-capable GPUs in one system. By default it sets the video card outputting a signal to the primary monitor as the primary DirectX device, the secondary output as secondary, and so on. Most people only used one monitor on their Win9x PC back in the day, hooked up to their 2D-capable card. The Voodoo 1 and 2 aren't meant to act as 2D video cards, yet they had to support D3D initialization somehow, so they presented themselves as non-primary DirectX devices, usually the secondary one, in hopes that game developers would let the end user select their 3D accelerator of choice.

This was standard practice at the tail end of the 1990s, but it fell out of use at the turn of the millennium with the demise of 3dfx and the general lack of need for multiple graphics cards in one system for 3D gaming. This presented a problem, as games that technically could be played on a Voodoo 2… weren't, as they could never be told to use it through normal means. Hacky solutions existed, like 3dfx's unfinished, buggy attempt at a Voodoo 2 driver for Windows 2000 that allowed it to behave like a primary display adapter for general 2D and 3D use, but it's notoriously unstable and can't be used on 9x. I've used this method before to play Touhou 6, Max Payne 1, GZDoom, GTA 3 and Vice City on the Voodoo 2 through Windows XP, with mixed results.

Once I got an NEC bus mouse for my PC-98, I could finally use the Voodoo 2 in it without constant crashing. This got me interested in trying to get Touhou 6 to work on it, which led me down a path of pure pain.

For starters, Touhou 6 is one of those games that only use the primary DirectX device (here, the unsupported Mystique), so I had to somehow coax it into initializing the secondary device instead. My first approach was direct binary patching. I didn't know where to look for the init routines, so I asked 32th System for a heads-up, and he pointed me to a rough location in process memory where the appropriate CreateDevice calls reside:

I then searched for the appropriate opcodes in the game binary, and patched all 6A 00 (push 0, A.K.A D3DADAPTER_DEFAULT) opcodes to be 6A 01 (push 1), forcing the game to init the secondary D3D device.
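In script form the patch amounts to something like this (illustrative only: the offsets and file names are placeholders, and the real locations were found by hand and differ per game version):

PATCH_OFFSETS = [0x12345, 0x12389]       # hypothetical file offsets of "6A 00" before each CreateDevice call

with open("th06.exe", "rb") as f:        # hypothetical binary name
    image = bytearray(f.read())

for off in PATCH_OFFSETS:
    assert image[off:off + 2] == b"\x6a\x00", "expected 'push 0' at 0x%X" % off
    image[off + 1] = 0x01                # push 0 (D3DADAPTER_DEFAULT) -> push 1

with open("th06_voodoo.exe", "wb") as f:
    f.write(image)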

While this initially did in fact work, the approach ultimately sucked for two reasons.

1) Static binary patching only works for that specific binary, and doesn’t carry across different versions.

2) This requires manually patching every CreateDevice call, of which there are many in Touhou 6

It is at this point that I started sharing my progress with friends. jbit was quick to hop in and say "Why the fuck are you doing it this way? Just make a d3d8.dll wrapper DLL". This was absolutely the smarter approach; I just didn't know how to do it, since I don't know jack about DirectX programming. Fortunately, he handed me a little VS project he had worked on called d3dcutter that, among other things, wrapped the CreateDevice function, which I promptly modified to always push 1 instead of 0 onto the stack for the device selection parameter.

This solved the two patching problems, and I had something to show for starters:

Now, I’m sure you can tell that the performance is absolutely atrocious. This came as no surprise to me, as while the Voodoo can absolutely render the game at a full 60FPS most of the time, the dinky little Pentium MMX 166 struggles hard at doing triangle setup for the backgrounds every 16 milliseconds. Remember, 3dfx cards had no Hardware T&L, so the game has to fall back to Software T&L. I think the following wireframe screenshot will help illustrate the amount of work the CPU has to do every frame:

“Well, I’m in luck!”, one might think as they remember that older Touhou games support framerate division by 1/2 (30FPS) and 1/3 (20FPS) as options in custom.exe… There’s just one problem.

The game just… fails to initialize the D3D HAL on the Voodoo 2 in the frame divided modes. Why? Beats me, I still haven’t figured it out and likely never will. Just more ZUNcode bullshit I suppose.

I then figured out that framerate division is handled by a variable (that can be set even lower than 1/3, by the way) which could be set at runtime, even when the normal 60FPS mode is used. I suspected that the game uses a different initialization path for the two modes, so I once again tracked down opcodes that expect the variable to be set to 0 in process memory, this time with Cheat Engine, and patched them in the binary. Well guess fucking what, the game fails to initialize even when the regular init routines are modified to expect 30FPS or 20FPS frame division to be set.

This approach simply wasn't going to work, so I went with trying to set the variable at runtime. Unfortunately, I had to go back to version-specific patching for this, since there's no way to wrap this functionality through DLL means. Additionally, while this wasn't hard to do on a modern system with the game running in windowed mode and Cheat Engine on the side, it was basically impossible on the Voodoo 2-equipped machine, as the game ran fullscreen and it wasn't possible to restore the window after an Alt+Tab.

My final solution was to generate a trainer in Cheat Engine for version 1.02d of the game (the last one with a working logic speed limiter) that forcibly sets the frame divider variable at runtime with a hotkey:

This finally allowed me to play the game in 1/3 framerate mode on the Voodoo 2, and it runs at full logic speed most of the time, as the CPU now has 40 milliseconds per frame for triangle setup. However, there's something wrong with how the card handles buffer swaps in this mode of operation, leading to a very back-and-forth, stuttery image that's unpleasant to look at.

Can we do better? Well, yes! The game uses so-called STD scripts for certain stage-specific data setup, but also for handling camera movement and geometry generation. Using Touhou Toolkit, I was able to unpack the appropriate DAT file, decompile all the STD scripts, remove all geometry commands, and recompile them for in-game use. As there are no more backgrounds to draw, a trail effect is left behind every frame, but thankfully custom.exe has an option to forcibly clear the back buffer every frame.

The end result? A nearly tripled framerate in 60FPS mode, recovered just by not drawing any 3D backgrounds. The game still lags when lots of bullets are on screen, but this doesn’t really come as a surprise.