One of the great things about Windows 10 was the inclusion of the Windows Subsystem for Linux, or WSL. It wasn't available at launch, but it started with v1: a simple ELF loader and a re-implementation of the Linux kernel interface on top of the NTOS kernel. Being a re-implementation, it was great for what did work, but many things did not. Compared to the other Unix subsystems for NT over the years, however, WSLv1 was without question the best.
That not being enough, Microsoft took a page out of the old Win-OS/2 playbook and put a real Linux kernel under Hyper-V for v2, allowing far more applications to run; for me that meant applications that alter their memory space, and the ability to run i386/x32 applications. You could happily export your X11 display to a Windows-based X server and get applications that way, but this isn't 1993, so it was very limiting.
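For the record, the usual pre-WSLg trick was to run an X server on the Windows side and point DISPLAY at the host, whose address WSL2 conveniently drops into /etc/resolv.conf as the nameserver. A rough sketch, assuming the Windows X server is set to accept network connections:

export DISPLAY=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0
xterm &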
Enter WSLg
The big change is using RDP to hook both Wayland and PulseAudio, bringing Linux 'desktop' X11 applications to the Windows desktop. Also added is a virtual GPU, allowing accelerated 3D, along with CUDA applications (although with a performance penalty).
The downside for me is that my existing Debian 10 install was not picking any of this up, and was instead somehow picking up a VMware 3D device. I have no idea how or why. While I did have VMware Player installed, the newer versions run through Hyper-V anyway.
I did find this article, which gave me a path to get where I wanted, although the transition of an existing v2 instance didn’t work for me. Maybe Debian 10 is too weird. I don’t know.
Not sure how to proceed, I backed up my home directory, uninstalled VMware Player, and purged my existing Debian 10. I then installed the Ubuntu Community Preview, which promises to include all the new and exciting features.
$ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Microsoft Corporation (0xffffffff)
Device: D3D12 (NVIDIA GeForce RTX 2070) (0xffffffff)
Version: 21.0.3
Accelerated: yes
Video memory: 40710MB
Unified memory: no
Preferred profile: core (0x1)
Max core profile version: 3.3
Max compat profile version: 3.1
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.0
OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (NVIDIA GeForce RTX 2070)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 21.0.3
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 3.1 Mesa 21.0.3
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 21.0.3
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
Now this is looking MUCH better.
Now compare this to a native run of FurMark:
So it’s a 50% haircut. Ouch.
Gaming, however, using Steam (yes, Steam runs!) reveals some deeper issues. The mouse tracking is WAY off, so for FPS games you'll spend a lot of time staring at the ceiling or the floor. The next issue is that there is no mouse lock, so anything that relies on being able to move the mouse far beyond a screen length is impossible to play. To be fair it's a preview, and so far I have to admit Windows 11 itself feels more like a technical preview. Also, I don't know what the deal is, and I haven't profiled it, but KOTOR2 is insanely slow, although at least with 3D acceleration running it's no longer 1 frame every 5 seconds.
(Back on the Debian 10 install I had installed Steam, because why not? But that Steam for Linux was picking up VMware 3D accel... I don't think that's right. KOTOR2 loaded, but at 100% CPU and 1 frame every 5 seconds.)
On the other hand, I'd installed Micropolis ages ago, and it added itself to the Windows 11 launcher:
With all the controversy over 64-bit Pinball (where and how things appeared and then disappeared, the discovery that the x64 version really existed but was left off the install manifest despite shipping on the CD, and my simple script to just extract it), the remaining problem was that ARM32/ARM64 users were left out in the cold.
Don't get me wrong, the original 32-bit exe runs fine under emulation, but who wants emulation when you can have NATIVE CODE?! You'd have to try to find the source code (lol, good luck!) or reverse engineer the program. And that's what happened; enter:
I'm using Visual Studio 2019 to build this, and it was great: it *just worked*. Hurray!
There is also an SDL rebuild in progress to bring Space Cadet Pinball to Linux and beyond. The only downside is that it uses a number of 'new C++ features', locking out older platforms. I've done some work to dumb it down, although there is a bit of this newfangled C++ where I'm unsure what is going on. So unfortunately Itanium users are left in the dark, as Visual Studio 2010 is too old.
It sure may not look like much but it was an adventure getting here.
First, what is it? Well, it's the very simple NS32016 emulator from here, with a few minor changes. I expanded the RAM from 256KB to a whopping 8MB. Then I added simple character I/O, allowing me to print messages to the screen. Next, looking at the toolchain page, I used my old Linux-to-Windows GCC 4 cross compiler to build the appropriate Canadian cross compiler targeting the NS32016.
Building the tools
A while back, I had built a cross compiler from Linux to Windows using GCC 4.1 as the basis, as it was the last version that didn't have massive external dependencies. NS32016 support was dropped some time in the late 3.x or early 4.x GCC releases, so we need to go old anyway. I arbitrarily picked GCC 2.8.1 for this build, while using the recommended Binutils 2.27.
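Binutils goes first, and it configures much the same way as GCC does below, with the same target and the Windows host for the Canadian cross. A sketch of the usual dance:

./configure --prefix=/cross --target=ns32k-pc532-netbsd --host=i686-mingw32
make
make install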
GCC 2.8.1 doesn't quite know what we are doing, so there are some flags we need to turn off in auto-config.h, namely:
#define HAVE_BCMP 1
#define HAVE_BCOPY 1
#define HAVE_BZERO 1
#define HAVE_INDEX 1
#define HAVE_KILL 1
#define HAVE_RINDEX 1
#define HAVE_SYS_RESOURCE_H 1
#define HAVE_SYS_TIMES_H 1
You can just comment them out, or remove those lines altogether.
When it came to building GCC, I did run into issues with GCC 7/8 trying to build GCC 2.8.1. I found it much easier to either have that GCC 4.1 Linux compiler around, or, if you have access to Wine or WSL, to just run the Win32 binaries for the gen phases.
./configure --prefix=/cross --target=ns32k-pc532-netbsd --host=i686-mingw32
make CC=i686-mingw32-gcc xgcc cccp cc1 cc1obj
If you can run your own Win32 exes on Linux, it'll work just fine using the Linux-to-Windows GCC 4 cross. Otherwise you will need to either patch GCC or make your own GCC 4-based Linux-to-Linux cross compiler, like this:
make CC=i686-mingw32-gcc HOST_CC=i586-linux-gcc xgcc cccp cc1 cc1obj
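If you'd rather take the Wine route for the gen-phase programs, registering Wine with binfmt_misc lets Linux run the Win32 tools transparently during the build. A sketch, assuming Wine lives at /usr/bin/wine (run as root, one time):

echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register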
Hopefully that worked enough, and now you have your cross compiler. Now it’s time to build libgcc1.a
Again, you really want to be able to run the resulting programs on Linux, but I guess you could script around it. Naturally, if you only wanted to use Linux, it'd be easier to make that cross compiler directly, although I'm not sure how much of GCC 2.8.1 I want to fight; the alternative is to just get GCC 4 running on Linux and use that to do the port.
crt0, somewhere for C to start
As mentioned, a crt0.s is missing, but there was enough inspiration around to come up with this:
#NO_APP
gcc_compiled.:
.text
.align 1
.globl _start
_start:
enter [],0
#APP
# setting the stack 256k under 8MB
lprd sp,0x7c0000
jsr _main
#NO_APP
L1:
exit []
# setting the stack 256k under 8MB
lprd sp,0x7c0000
bpt
.align 1
#does nothing
.globl ___main
___main:
ret 0
.globl _exit
_exit:
bpt
ret 0
I used a bit of the C example, and added some hooks that GCC was expecting: namely a __main call that is made from main before it does anything (a place to init memory, perhaps?), a place to catch an explicit exit call, along with setting the stack, of course.
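For reference, here's a minimal C program that this crt0 can start. The character output address is purely hypothetical, standing in for whatever the emulator's character I/O hook actually is; the point is just that GCC 2.x emits a call to __main at the top of main, which the stub above satisfies, and returning from main falls back into crt0's bpt trap.

/* hypothetical memory-mapped console port, just above the 8MB of RAM */
#define CONSOLE ((volatile char *)0x800000)

static void putstr(const char *s)
{
    while (*s)
        *CONSOLE = *s++;    /* one byte at a time out the console port */
}

int main(void)
{
    putstr("hello from the NS32016\r\n");
    return 0;               /* falls back into crt0, which hits bpt */
}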
Patching InfoTaskForce without malloc / disk access
It's not going to win any awards, but it was really great to get it to run a simple program built with GCC. Looking for something more fun, I took the old InfoTaskForce interpreter from '87, dug up my modification to run it on cisco routers, and cooked up this version, which adds enough of printf from Linux, a bogus malloc that just allocates from a fixed memory array (otherwise you have to actually know something about your platform), and a fun trick with later binutils where you can import a binary file directly as an object!
Neat!
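The bogus malloc is really nothing more than a bump allocator over a static array (and the binary-import trick is binutils' binary input target, which turns the story file into a linkable object). A minimal sketch of the idea, with illustrative names and sizes rather than the exact code in the port:

#include <stddef.h>

#define POOL_SIZE (2 * 1024 * 1024)   /* a 2MB arena out of the 8MB of RAM */

static char pool[POOL_SIZE];
static size_t pool_used;

void *malloc(size_t size)
{
    void *p;

    size = (size + 3) & ~(size_t)3;   /* keep allocations word aligned */
    if (pool_used + size > POOL_SIZE)
        return NULL;                  /* arena exhausted */
    p = pool + pool_used;
    pool_used += size;
    return p;
}

void free(void *ptr)
{
    /* never reclaimed; fine for a run-once interpreter */
    (void)ptr;
}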
Since I don't have any file I/O, being able to have the game data in RAM is crucial. I tried to tweak it so you could build the same thing on Windows (maybe others?).
So for anyone who wants to look at the standalone adventure, the Win32-hosted tools are here, although the emulator should be somewhat portable.
(This is a guest post by Antoni Sawicki aka Tenox)
Pleased to announce that the lsblk utility for Windows is finally released. This is not entirely new; the original code has been on GitHub for a few years now, but it was lacking major features such as printing drive letters, mount points, and filesystem types.
Why would anyone even want lsblk for Windows? There are many other "native" ways of displaying disks and volumes, for example Get-Disk or Get-PhysicalDisk in PowerShell, or wmic diskdrive list brief in cmd, not to mention diskpart or the Disk Management UI. There are a few answers to this.
Firstly, the output format of lsblk on Linux is rather intuitive and useful, which can't be said about the previously mentioned utilities. People with a Linux background will feel right at home.
Secondly, PowerShell and wmic lack the ability to combine disk and volume information unless you write a larger script. This is now part of lsblk.
Lastly, lsblk uses low-level functions to list objects directly from the kernel (think WinObj), rather than going through various high-level services, management interfaces, and VDS (Virtual Disk Service). As such it's super fast, and you can use it even with VDS stopped or inoperable. And it's yet another of these native Linux tools now also available on Windows.
Finally, some of the column names may sound cryptic, so here is an explanation:
ST – Status (1=healthy, 0=unhealthy)
TR – Trim / Unmap / Discard capability
RM – Removable media
MD – Media changed (for removable media)
RO – Read only
In my "C:\Program Files (x86)\Windows Kits\10\Emulation\Mobile\10.0.14393.0" directory I have a modest 2GB file called flash.vhd which contains the phone image. I copy it to where I run my VMs and run it with the XDE emulator:
And I'm running in no time. I log in, load some apps, then I notice the storage:
What?! The disk image is a paltry 10GB. I guess the idea is that you wouldn't actually try to load up the emulator like it's your daily driver; rather, you load YOUR app and only YOUR app, and just pretend that this isn't some weird offshoot nostalgia machine.
Well needless to say something needs to be done about this storage situation.
I look and find this package, vhdutils. I had to go to some sketchy site, but it did include source. I should put this somewhere more legit to take away from all those weird squatters.
So with stuff installed onto my phone, I'm almost at 7GB physical / 7.6GB virtual space. I could go all crazy with 128 or 256GB, but it'd largely be stuff I bought... which of course, thanks to the magical world of DRM, won't play.
Yeah, I guess you're welcome that I bought all those movies and stuff, but sure, I wasn't going to watch them on this phone... emulator. Thanks. Thanks again.
So the VHD resize is quick. Brutal. And efficient. I go with 64GB, because, why not? I could probably just grow it again if I needed to.
Now for the fun part. We need to attach the VHD and resize the volume. I hope you like diskpart.
In the MMC I attach the disk image... it'll pop open a few folder windows, as it's got a bunch of drive letters. I've never explored a phone image before; I don't know if the ARM images are just as weird.
Even more strange, it’s MBR!
So if you were thinking lots of partitions would mean a clear win for GPT, sadly this isn't it.
Sadly there are no free partitions (although one hidden one could be deleted...?), and the UI doesn't support expanding a logical drive (the green container). But diskpart does.
As indicated above, the emulator's VHD is disk 3. You can see it's the 64GB disk. Select it.
Next, list the volumes. The Data volume (J:) is what we want, so select volume 10.
Literally just ‘select volume 10’ and ‘extend’. Don’t tell me this is difficult.
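For reference, the entire session is just a handful of diskpart commands; the disk and volume numbers here are from my setup and will differ on yours:

DISKPART> select disk 3
DISKPART> list volume
DISKPART> select volume 10
DISKPART> extend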
Listing the volumes again will show a 59GB Data partition. Congratulations, we did it!
Back in the MMC, you'll see how the Data partition, along with the green extended partition, is now taking up the entire disk. So we can now detach the VHD and run the emulator again!
And just like that we now have plenty of free space on the emulator.
I downloaded some games, and some music. It’s nice to be back home.
It's not an extensive list, as I didn't game much on my phone, but here is what I know works:
Final Fantasy 1
Heroes of Larkwood
Skulls of the Shogun
Sonic CD
FL Studio
Candy Crush Saga
Pixel Dungeon +
Halo: Spartan Assault just closes, and Asphalt 8: Airborne doesn't get the screen size right, so it's impossible to click enough buttons.
It's nice that Sonic runs (haha), although using a mouse makes it impossible to control.
Now, one fun thing is that the emulator is x86, not ARM based, so I converted the VHD to a VMDK, ran it under VMware, and YES, it RUNS... sort of.
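There are a few ways to do the conversion; one sketch is qemu-img, which reads VHDs as its 'vpc' format and writes VMDK (not necessarily the exact tool I reached for at the time):

qemu-img convert -f vpc -O vmdk flash.vhd flash.vmdk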
Perhaps it's a form factor that never was to be, the phone/tablet, but it boots quickly and is so responsive: Windows without most of the... Windows bits. I guess the real experiment will have to be whether it'll run on a Surface.
It's all 32-bit anyway, and such an evolutionary dead end. Pity.
Claunia‘s Aaru project is hitting a 5.3 milestone, and having a launch party!
Aaru is a fully featured media dump management solution. You usually know media dumps as disc images, disk images, tape images, etc.
With Aaru you can identify a media dump, extract files from it (for supported filesystems), compare two of them, create them from real media using the appropriate drive, create a sidecar metadata with information about the media dump, and a lot of other features that commonly would require you to use separate applications.
The year is 1983, and several Apple employees visit Brown University to get some idea of what universities want in a computer for the coming years. The big buzz of the era was the so-called 3M machine:
1 Megabyte of Memory
1 Megapixel display
1 MIPS of processing power (a million instructions per second)
Naturally the Macintosh didn't fill this void, instead leaving it to the new SUN-2 workstation. However, seeing the opportunity, in 1984 the seeds were planted for the 'Big Mac' project. The hardware design was headed by Rich Page and included new things like ADB and dedicated video RAM, along with a 68020 processor and 68881 maths co-processor. Additionally, Big Mac was intended to run a UniPlus port of SYSV Unix, with the MacOS Toolbox ported to run directly on top of Unix.
All that I can find of the Big Mac project is this insanely low resolution image, along with the codename ‘Milwaukee‘.
However, all this came to an end in 1985 with the ouster of Steve Jobs, who in turn took various people with him, including Bud Tribble, George Crow, Susan Barnes, Susan Kare, Dan'l Lewin, and Rich Page. Apple followed up with a $5MM USD lawsuit alleging that Jobs had done research for a next-generation product and taken the key staff, namely Page, from Apple to make it a reality. The suit was eventually dismissed.
From there the race was on to build a 3M machine. NeXT would take the Big Mac concept further with the NeXT Cube, which included ADB, NuBus, and a 68030/68882 + SCSI + Ethernet setup. And for the OS: 4.3BSD Tahoe + Mach 2.5, along with the Objective-C language and new OO frameworks.
Genesis
Back at Apple, however, the 'Big Mac' project seemed to have stagnated; it was slimmed down and eventually shipped as the Macintosh II in 1987. No doubt there was a re-awoken sense of urgency around the academic 3M market: now that NeXT was making a 3M machine, Apple of course didn't want to be pushed out of the new space. Apple released a real 1.0 product (1.1.1 survives, although you have to run ( /etc/toolboxdaemon & ; term ) to get anything fun out of Shoebill with the ISO), which can barely be called more than a bare-bones SYSV port with overlapping terminals at best.
Overwhelming, and interesting this is not.
This of course was more like a tech demo, running a single 'Unix toolbox app' at a time. Pricing, according to Usenet, was around $500 for the software, keeping in mind of course that a Macintosh II would be far more expensive. Version 1 also started to add BSD features, namely curses in 1.0, allowing you to port simple terminal 'graphics' to the OS. The trend of adding BSD features was only going to continue from here! But all of this is a large step up from the earliest known version, simply labeled 0.7, which despite its 'Oreo' appearance is strictly text-mode only.
Dawning of a new era
The real magic is in 2.0:
Think of it more like the OS X of the 1980s. The Finder has been ported over to a Toolbox-on-Unix API, allowing A/UX 2.0 to run off-the-shelf MacOS applications. Under the hood, however, is the same UniSoft SYSVr2. Running MacOS on top of Unix gives it far faster disk I/O, and of course the much-vaunted memory protection, although with the massive catch that it's only for Unix applications. You can still crash applications, and even the Finder; however, you can telnet into the box and restart services, or perform a graceful reboot.
For Unix fans this was the first time you could get 'off the shelf applications' that didn't cost a fortune, along with the standard Unix fare. Amazingly, both the C compiler and the Fortran 77 compiler are included in the box; by 1990 many a company was making these available only as a separate purchase. Version 2.0 also brought along some BSD features, the big one being UFS support, for longer filenames and faster disk performance than the aging SYSV filesystem.
Of course it wouldn't be all sunshine and rainbows, as around this time Apple launched its 'look and feel' lawsuit against Microsoft over the visual iconography of MacOS (oddly enough, GEM on the Atari ST was ignored). The lawsuit led to a boycott of the fledgling Unix by the FSF, which in turn hurt the easy availability of things like binutils/gcc/gdb for A/UX users.
So what went wrong?
Without even looking at the follow-up version 3, and the product's demise in the transition from 68000 to PowerPC, the writing was on the wall.
Price
The damned thing was just too expensive! From Wikipedia: "When introduced, a basic system with monitor and 20 MB hard drive cost US$5,498." Version 1 was available on tape, and later CD-ROM; I think there was a floppy version, but without a doubt a 20MB disk is far too small, just as anything under 4MB of RAM is not going to be realistic. Adding in these components, you are going to be into the low end of Sun's catalogue. And why would you take a chance on Apple when you could go to an established Unix vendor?
The other issue is that, Unix being Unix, you really needed an MMU, and Motorola MMU chips were expensive. Also, A/UX had drivers for SCSI only. This prevented a 'low-end revolution', as the low-end machines like the 605 didn't have SCSI or full 68040s. Even the end-of-the-line Quadra 800 sold for an eye-watering $4,679!
Direction
What was the heart of A/UX? It was a Unix with a one-button mouse and optional X11... with A ONE BUTTON MOUSE?! It was a SYSV Unix, not a BSD, but it did include BSD TCP/IP, NFS, and the UFS filesystem. It was shunned by the FSF as a first-tier platform, so people had to fiddle with code to get it to compile. It was GSA C2-certifiable, but did anyone actually use it in that role?
It was also a Unix with a version of Outlook, and Excel, AfterDark, Fortran 77, and a dead simple UI.
Even after all this time, answering what A/UX was seems to be an identity crisis.
Where did it go right?
One of the big deciding factors in getting workstations into government was the so-called C2 compliance. This meant things like enforced passwords, auditing, and POSIX. It's everything the POSIX subsystem for NT was built for, to check just enough boxes, while for Apple, A/UX just gave them an instant win. I have no idea if it ever happened, but I'm sure somewhere someone was using a Quadra with WordPerfect and A/UX to be a super expensive and certified Mac. Obviously the MAE project dovetails into this, giving commercial MacOS applications to Unix users, but so many others have covered that, and the short version is that it's incredibly fragile and not very robust at all.
I'm sure someone used it as a file server; heck, even in the PowerPC generation there was a straight port of AIX to a server, along with AppleTalk modules.
The demise
It's easy to point to using UniSoft SYSVr2 as being a cost factor, but it really was the hardware requirements. Without any A/UX for the LC, it was doomed. This wasn't going to be the Unix for Grandma. Transitioning to the PowerPC removed the brain-dead CPU problems of lacking an MMU or FPU, but I suspect that the tricks of the 68000 translator would not have carried over, and certainly wouldn't pull off things like device drivers. Worse still, people had just gotten used to System 7, and had hopes that the fabled Copland / System 8 would bring about something strong enough, like a Unix without any of the complexities.
Timelines, however, slipped; Apple had flirted with MkLinux but didn't fully commit. Indeed these were dark days; it's like they were so dead set on going forward that they couldn't see a seemingly obvious solution to the OS problem sitting in their past.
Looking at Carbon and Toolbox32, it's hard not to imagine a world where Apple pushed ISVs to write for a protected MacOS, but then they'd never have bought NeXT. As a matter of fact, I would argue that without Steve's media connections from Pixar, Apple would have slid away into irrelevance, as media outsells PC tech anyway. Even in 2010 Jobs was calling Apple clearly in the 'post-PC era'.
So this started out as a weird thing that killed a day for me. I thought it was a little fun to look at, but ultimately I proved that I could extract files, just not from the requested image.
So let's get into some more details, my failure, and, well, how it's been turned into another chance for some lucky/fast/knowledgeable hacker to get a payout for extracting a single file.
As mentioned above, the computer is a Texas Instruments S1500; the disk image was dumped onto bitsavers years ago as s1505_cp3540/s1505_cp3540.dd.gz. As you may guess, it's a raw 'dd' of a disk.
Now, looking at a few sources, namely unix-ag, the OS in question is TI System V, an AT&T SVR3.2 derivative. Running strings does reveal 'SysVr3TCPID', and this appears to be the Unix version banner:
(c)Copyright 1993 Hewlett-Packard Company, All Rights Reserved.
(c)Copyright 1986-1992 Texas Instruments Incorporated, All Rights Reserved.
(c)Copyright 1984-1988 AT&T, All Rights Reserved.
(c)Copyright 1979, 1980, 1983, 1985-1990 The Regents of the Univ. of California
(c)Copyright 1980, 1984, 1986 Unix System Laboratories, Inc.
(c)Copyright 1990 Motorola, Inc.
(c)Copyright 1989-1990 The Santa Cruz Operation. All Rights Reserved.
RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the U.S. Government is subject to
restrictions as set forth in sub-paragraph (c)(1)(ii) of the Rights in
Technical Data and Computer Software clause in DFARS 252.227-7013.
Hewlett-Packard Company
3000 Hanover Street
Palo Alto, CA 94304 U.S.A.
Rights for non-DOD U.S. Government Departments and Agencies are as set
forth in FAR 52.227-19(c)(1,2).
Along with further extraneous info like:
TI Sys V
V/68-1.0
3.3.2
MC680X0
Hewlett-Packard 9000 Series 1500
Fantastic. Well, digging around, you'll eventually find that SYSV filesystems have a magic number, and it's 0xfd187320.
So a simple search through the raw filesystem reveals some:
And this fits the bill, as the next 32-bit 'word' is the type/version, in this case 2 to indicate 1024-byte blocks, an improvement added in SYSVr2. One wrinkle is that the superblock struct you read is 512 bytes (or is it always?), and the magic number is near the end of it, so from the above offsets, subtract 496 (decimal!) and you can get the start and sizes of each filesystem. Fantastic!
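Here's a rough sketch of that hunt as a standalone C program (separate from the Shoebill-based code): scan the raw dd image for the magic, stored big-endian since this is a 68k machine, then report where the 512-byte superblock would start (the magic offset minus 496) and the type word that follows the magic (2 meaning 1024-byte blocks):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static uint32_t be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(int argc, char **argv)
{
    FILE *f;
    unsigned char *buf;
    long len, off;

    if (argc != 2) {
        fprintf(stderr, "usage: %s disk.dd\n", argv[0]);
        return 1;
    }
    if ((f = fopen(argv[1], "rb")) == NULL) {
        perror(argv[1]);
        return 1;
    }
    fseek(f, 0, SEEK_END);
    len = ftell(f);
    rewind(f);
    buf = malloc(len);                 /* the whole dump fits in RAM today */
    if (buf == NULL || fread(buf, 1, len, f) != (size_t)len) {
        fprintf(stderr, "read failed\n");
        return 1;
    }
    fclose(f);

    for (off = 496; off + 8 <= len; off += 4) {   /* word-aligned scan */
        if (be32(buf + off) == 0xfd187320)
            printf("magic at 0x%08lx  superblock at 0x%08lx  type %lu\n",
                   off, off - 496, (unsigned long)be32(buf + off + 4));
    }
    free(buf);
    return 0;
}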
Speaking of SYSVr2, do you know what else is a SYSVr2? A/UX.
Shoebill was panned for not emulating the full Macintosh; rather, it reads the kernel directly from the filesystem and boots into it. That means Shoebill can read UFS/SYSV. Great start?
So I took the filesystem code from Shoebill, hacked it enough to let me build on Visual Studio, and point it to a raw filesystem and take a look. I put it here: filesystem.c
Now, I'm impatient, so it still needs a legit Apple A/UX virtual disk. Granted we don't really need it, but it made it easier to let the existing code fiddle with Apple partitions; when it comes time to read SYSV blocks, I close the file handle and swap things around. And that led to this:
As you can see, there are a LOT of zeros. However, the magic & type align.
Meanwhile, here is what an A/UX SYSV filesystem looks like. Notice far fewer zeros.
Additionally, I was able to get another 68k-based SYSV Unix disk, and yeah, not all zeros. Also yes, using the Shoebill code it extracted files just fine.
However, using my approach on the TI filesystem, I only ever get a directory with two entries, '.' and '..'. I modified the source to just count inodes and write them to disk, and inode 2 is just a tiny file. No doubt, with all the zeros, the disk is either very corrupted (backup superblocks?! where?! how?!), or the kernel implicitly knows these things, or finds them somewhere else.
I've been authorized to offer a bounty of $200 USD for being able to extract arbitrary files from the 1505 disk image. I thought I'd give it a shot, but I don't get how the superblock aligns while the data doesn't. Unless there is some other insane padding thing for a 1KB superblock? The more I think about it, the more likely it seems, as I know at some point I was skipping 3 blocks from an offset to get to a superblock, and 3 is just a weird number; a 1-block header plus a 2-block superblock makes more sense.
Additionally, this table may prove useful, especially regarding the 'skip 3' or pad-to-1KB question:
Tape and disk utility is in progress...
26 partitions, 12-longword descriptors:
Name Start Length User Comments
1 * LABL vl 0 2 FFFF
2 * PTBL pt 2 3 FFFF
3 SAVE sb 5 3 FFFF
4 FMT fp 8 9 FFFF
5 TZON tz 17 296 FFFF
6 * unx1 lb 313 1024 0002 TI Sys V 3.3.2
7 * unx1 lb 313 1024 000A TI Sys V 3.3.2
8 * unx1 lb 313 1024 0013 TI Sys V 3.3.2
9 * unx1 lb 313 1024 0014 TI Sys V 3.3.2
10 unx2 lb 1337 1024 0002 TI Sys V 3.3.2
11 unx2 lb 1337 1024 000A TI Sys V 3.3.2
12 unx2 lb 1337 1024 0013 TI Sys V 3.3.2
13 unx2 lb 1337 1024 0014 TI Sys V 3.3.2
14 unx3 lb 2361 1024 0002 TI Sys V 3.3.2
15 unx3 lb 2361 1024 000A TI Sys V 3.3.2
16 unx3 lb 2361 1024 0013 TI Sys V 3.3.2
17 unx3 lb 2361 1024 0014 TI Sys V 3.3.2
18 * cfg1 cb 3385 17 FFFF TI Sys V 3.3.2
19 cfg2 cb 3402 17 FFFF TI Sys V 3.3.2
20 cfg3 cb 3419 17 FFFF TI Sys V 3.3.2
21 * root fb 3436 12288 FC02 TI Sys V 3.3.2
22 usr fb 15724 32768 FC02 TI Sys V 3.3.2
23 jdis an 48492 2 FFFF multi-volume file system anchor
24 pipe fb 48494 1024 FC02 pipe file system partition
25 * swap pb 49518 32768 0002
26 prt1 fb 82286 448972 FC02 part of jdis multi-volume
Did you know there is almost nothing left to document that this poor machine even existed?