Being the unfair person I am, I thought I’d try NeXTSTEP to see how far it gets.
Processor Speed State Coprocessor State Cache Size
--------- -------- --------------------- ----------------- ----------
0 250 MHz Active Functional 0 KB
Available memory: 512 MB
Good memory required: 16 MB
Primary boot path: FWSCSI.6.0
Alternate boot path: LAN.0.0.0.0.0.0
Console path: SERIAL_1.9600.8.none
Keyboard path: PS2
Available boot devices:
1. DVD/CD [lsi 00:00.0 2:0 Drive QEMU QEMU CD-ROM 2.5+]
2. lsi 00:00.0 0:0 Drive QEMU QEMU HARDDISK 2.5+
Booting from lsi 00:00.0 0:0 Drive QEMU QEMU HARDDISK 2.5+
Boot IO Dependent Code (IODC) revision 153
Can't determine I/O subsystem type
NEXTSTEP boot v22.214.171.124
NEXTSTEP will start up in 10 seconds, or you can:
Type -v and press Return to start up NEXTSTEP with diagnostic messages
Type ? and press Return to learn about advanced startup options
Type any other character to stop NEXTSTEP from starting up automatically
And amazingly the bootloader works, although that is about it. Trying to boot up OpenBSD gets about this far:
I found on Windows, though, that the Debian 8 CDs work the best: the earlier ones lock up after loading a kernel, and the later ones don't fully initialize. I've been using this one: debian-8.0-hppa-NETINST-1.iso. Serial console interaction is the way to go, so I ran Qemu like this:
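The exact command line didn't survive in my notes, but a minimal serial-console invocation would look something along these lines (the machine type, disk image name, and memory size are assumptions, not the original flags):

```shell
# Sketch of a serial-console HPPA boot; -M B160L, hppa-disk.img and -m 512 are assumed
qemu-system-hppa -M B160L -m 512 -nographic \
  -drive file=hppa-disk.img,format=raw \
  -cdrom debian-8.0-hppa-NETINST-1.iso \
  -boot d
```

With `-nographic', the emulated serial port is wired to your terminal, which matches the `Console path: SERIAL_1.9600.8.none' firmware setting above.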
There’s a reason I didn’t have anything to link for (a). That’s because to the best of my knowledge and research, no version has survived these past decades.
As for (b), it seems only the linked version 3.99 from the year 2003 was saved.
The CVS repository and thus commit history has been lost.
If anyone has either actual code for the old Perl Q or the CVS repo for the old
Q written in C, please reach out to me via `xorhash ++at++ protonmail.com’.
I’m most interested in looking through it.
However, not all hope was lost with the old Perl Q. As it turns out, most likely, the old Perl Q was actually based on an off-the-shelf product called “CServe”. What makes me think so?
Let’s take a look at the QuakeNet Q command listing from 1998.
I picked the command “WHOIS” and googled its help text: “Will calculate a nick!user@host mask for you from the whois information of this nick.” This led me to a help file for StarLink IRC. At the top, it reads:
CStar3.x User Command Help File **** 09/10/99
Information extracted from CServe Channel Service
Version 2.0 and up Copyright (c)1997 by Michael Dabrowski
Help Text (c)1997 (c)1997 StarLink-IRC (with permission)
Wait a second, “CServe Channel Service”? I know that from somewhere.
So the commands between that help file and the QuakeNet Q command listing match up, and so does Q’s host today. Most likely, I’m on the right track with this. What’s left is to track down a copy of CServe.
Note: I’ve been looking for the old Perl Q for a while, and this strategy didn’t use to work. It seems Google newly indexed these pages. For once I can sincerely say: Thank you, Google.
The only surviving versions are 3.0 and 5.1. CServe got renamed to “CS” starting with 5.0 and was rewritten in C by someone other than the original CServe author, going by the comments in the file header of CS5.1 `src/show_access.c’. CS was actually sold as a commercial product. I wonder how many people bought it.
QuakeNet most likely took a version between 2.0 and 4.0, inclusive, as the basis for the old Perl Q. Which one in particular it was, we may never know. If you have any details, please reach out to me at the e-mail address above.
I can’t make any clever guesses anyway since the only versions that the web archive has are 3.0 and 5.1. The latter is written in C, so it quite obviously can’t be the old Perl Q.
Making It Run
So now that I have CServe 3.0, I wanted to actually see it running.
There are three ways to reasonably accomplish this:
a. port CServe to a modern IRCd’s server-to-server protocol,
b. port an old IRCd to a modern platform,
c. emulate an old platform and run both IRCd and CServe there.
I chose option (b).
Once upon a time, I did option (a) for the old UnderNet X bot. It was a very painful exercise to port a bot that predates the concept of UIDs (or numeric nicks/numnicks, as ircu’s P10 server-to-server protocol calls them). There’s nothing too exciting about doing (c), either: emulating a 486 or so running FreeBSD just sounds like a boring roundtrip of emulation and network bridging.
Fortunately, the author was a nice person and wrote on the CServe website that version 3.0 requires “ircu2.9.32 and above”.
It seems the ircu2.10 series followed right after ircu2.9.32. While I’m sure there’s some backwards compatibility for linking, determining which ircu in the ircu2.10 series still spoke enough P09 to link with CServe sounded like an exercise in boring, excruciating pain. Modern-day ircu most certainly no longer speaks P09. Besides, what’s the fun in just doing the manual equivalent of `git bisect’?
So after grabbing ircu2.9.32, I tried to just compile and run it as-is.
There’s a `Config’ script that’s supposed to be kind of like autoconf `configure’, but I’ve found it extremely non-deterministic. It generates `include/setup.h’. I’ve made a diff for your convenience. It targets Debian stable, and should work with any reasonably modern Linux. There are special `#ifdef’ branches for FreeBSD/NetBSD in the code. This patchset may break for BSDs in general.
Do not touch `Config’; meddle with `include/setup.h’ manually instead. Remember this is an ancient IRCd: there are actual tunables in `include/config.h’.
The included example configuration file is correct for the most part, but the documentation on U:lines is wrong. U:lines do what modern-day U:lines do, i.e., designate services servers with uber privileges.
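For illustration, a U:line in the classic colon-delimited ircd.conf style might look like the following (the server name is made up, and the exact meaning of the trailing fields varies between ircd versions, so treat this as a sketch rather than the ircu2.9 syntax):

```
U:services.example.org:*:*
```

The important part is just that the named server is exempt from the usual restrictions, which is what lets CServe do its thing.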
Excuse Me, But What The Fuck?
Of course, I’m dealing with old code. It wouldn’t be old code if I didn’t have some things that just make me go “Excuse me, but what the fuck?”
Looping at the speed of light
for (acptr = client, (void)collapse(mask); acptr; acptr = acptr->next)
  if (!IsServer(acptr) && !IsMe(acptr))
    if (!match(mask, acptr->name))
continue;
See that `continue’ way on the left? What is it doing there? Telling the compiler to loop faster?
Carol of the Old Varargs
So apparently some of this code predates C89. Which means it uses old-style declarations, but that’s okay. It also uses old-style varargs, which is adorable.
The hacks around not even that being there are adorable, too. These functions were declared like this (the example chosen above actually has no declaration, because why not):

extern void sendto_ops();
There are `mycmp’ and `myncmp’ for doing RFC1459 casemapping string comparisons. `strcasecmp’ got `#define’d to `mycmp’, but in one case `mycmp’ got `#define’d back to `strcasecmp’. It seemed easier to just remove `mycmp’, replacing it with `strcasecmp’ and forgo RFC1459 casemapping. This is doubly useful because CServe doesn’t actually honor RFC1459 casemapping.
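The difference matters for nicknames: per RFC1459, `{’, `}’ and `|’ are defined as the lowercase forms of `[’, `]’ and `\’. A quick sketch of what `mycmp’-style casemapping means, as a hypothetical shell helper (not from the ircu source):

```shell
# RFC1459 casemapping: {, } and | are the lowercase forms of [, ] and \
# (hypothetical helper for illustration, not from the ircu source)
rfc1459_lc() { printf '%s' "$1" | tr 'A-Z[]\\' 'a-z{}|'; }

rfc1459_lc 'Nick[away]'   # -> nick{away}
```

Under RFC1459 rules, `Nick[away]’ and `nick{away}’ are the same nickname; under plain `strcasecmp’ they are not, which is the behavior change I accepted by dropping `mycmp’.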
Waiting for the Cookie
ircu uses PING cookies. I was rather confused when I didn’t get one immediately after sending `NICK’ and `USER’. In fact, it took so long that I thought the IRCd had gotten stuck in a loop somewhere. That would’ve been a disaster, since the last thing I wanted to do was get up close and personal with the networking stack.
As it turns out, it can’t send the cookie:
* Nasty. Cant allow any other reads from client fd while we're
* waiting on the authfd to return a full valid string. Use the
* client's input buffer to buffer the authd reply.
* Oh. this is needed because an authd reply may come back in more
* than 1 read! -avalon
I lowered `CONNECTTIMEOUT’ to 10 in the diff linked above. This makes the wait noticeably shorter when you aren’t running an identd.
CServe Isn’t Much Better
Not that CServe is much better. I have to hand it to Perl: I only needed to undo the triple-`undef’ on line 450 of `cserve.pl’ and it worked with no further modifications. God bless the backwards compatibility of Perl 5.
That said, it has its own interesting ideas of code. This is the main command execution:
Yep, it opens, reads into an array, closes and then evals. For every command it recognizes. Of course, this means code hot swapping, but it also means terrible performance with any non-trivial number of users.
Oh, and all passwords are hashed. But they’re hashed with `crypt()’. And a never-changing salt of `ZZ’.
Was it worth it?
No, not really.
Would I do it again? Absolutely.
You probably do not want to expose this to the outside world.
The IRCd code is scary in all the wrong ways.
Some other things if you’re into ancient IRC stuff:
It doesn’t hit the breakpoints for some reason. I’m most likely doing something wrong.
GNU gdb 4.17
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type “show copying” to see the conditions.
There is absolutely no warranty for GDB. Type “show warranty” for details.
This GDB was configured as “--host=i486-pc-netbsd --target=i386-linux-gnuaout”…
Setting up the environment for debugging gdb.
Breakpoint 1 at 0x94a4: file panic.c, line 18.
Breakpoint 2 at 0x667b: file init/main.c, line 110.
tcp_open ! localhost:1234
0xfff0 in sys_unlink () at memory.c:430
Program received signal SIGINT, Interrupt.
panic (s=0x1dd6c “”) at panic.c:23
#0 panic (s=0x1dd6c “”) at panic.c:23
#1 0xd9b3 in mount_root () at memory.c:430
#2 0x12f39 in sys_setup (BIOS=0x1ab38) at hd.c:157
#3 0x7477 in system_call () at sched.c:412
#4 0x1000000 in ?? ()
But after a LOT of struggling, it certainly looks about right.
I then went ahead and built GDB 8.0.1. It’s incredibly slow, and I guess not too compatible with Qemu 0.9 (I had issues with newer Qemu builds, though), but it does break.
(top-gdb)target remote localhost:1234
Remote debugging using localhost:1234
0x0000fff0 in sys_unlink () at memory.c:83
Num Type Disp Enb Address What
1 breakpoint keep y 0x000094a4 in panic at panic.c:18
2 breakpoint keep y 0x000094a4 in panic at panic.c:18
Breakpoint 1, panic (s=0xd8cf <sys_mount+227>) at panic.c:18
18 printk("Kernel panic: %s\n\r",s);
Reply contains invalid hex digit 79
Reply contains invalid hex digit 79
#0 0x0000d9b3 in mount_root () at memory.c:83
#1 0x0001ab60 in hd_info ()
#2 0x00000000 in ?? ()
eax 0x0 0
ecx 0x51 81
edx 0x1fb4c 129868
ebx 0x0 0
esp 0xffff94 0xffff94
ebp 0xffffa8 0xffffa8
esi 0x1ab60 109408
edi 0x0 0
eip 0xd9b3 0xd9b3 <mount_root+139>
eflags 0x246 [ PF ZF IF ]
cs 0x8 8
ss 0x10 16
ds 0x10 16
es 0x10 16
fs 0x17 23
gs 0x17 23
And while it more or less runs, there are some issues using the GDB stub from Qemu 0.9.0, although I had a world of pain with newer versions. And I’ve never done the kernel debug thing before, so this is all new to me.
And I guess it comes as no surprise that with GDB 8, a.out Linux is tagged as obsolete and to be removed. I guess I need to try a GDB that was current to Qemu 0.90, so it may not have so many packet overruns and unexpected results…
So it’s always a fun time for me to push my old project, Ancient Linux on Windows. And what makes this so special? Well, it’s a cross compiler for the ancient Linux kernels, along with source to the kernels, so you can easily edit, compile and run early Linux from Windows!
As always the kernels I have built and done super basic testing on are:
All of these are a.out kernels, like things were back in the old days. You can edit stuff in Notepad if you so wish, or any other editor. An MSYS environment is included, so you can just type in ‘make’ and a kernel can be built; it can also be tested in the included Qemu. I’ve updated a few things, first with better environment variables, though I’ve only tested on Windows 10. Although building a standalone Linux EXE still requires a bit of work, that isn’t my goal here, as this whole thing is instead geared around building kernels from source. I included bison in this build, so more of GCC is generated on the host. Not that I think it matters too much, although it ended up being an issue doing DooM on GCC 1.39.
So for people who want to relive the good old bad days of Linux, and want to do so from the comfort of Windows, this is your chance!
The line that says “Running inside a VM; adjusting spinout timeout to 180 seconds” would suggest that KVM implements enough of our backdoor interface to make it look like we’re running under a VMware hypervisor. When we’re running in this environment, we use the backdoor to get the host TSC frequency. I suspect that KVM doesn’t implement the “GETMHZ” backdoor call, so we are confused about the TSC frequency. The 30ms delay turns into … 30 hours? 30 years?
So they had a source code change for QEMU 1.7.0; however, it obviously doesn’t work in 2.x. It was rolled upstream, and then made into a switch that can be disabled with a simple flag added to the command line.
So with VIRL in hand, the next thing I wanted to do was play with some LACP and VMware ESXi. Of course, the best way to do this is under KVM, as you can use UDP to bounce packets around between virtual machines, like the VIRL L2 switch. I went ahead and fired up 5.5 and got this nice purple screen of death.
So naturally I needed to force the processor type. Also, after reading a few sites, I needed to turn on the nested & ignore_msrs settings:
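On an AMD host the settings would look something along these lines (the file name is arbitrary, and on an Intel box you’d use `kvm_intel’ instead of `kvm_amd’):

```shell
# Enable nested virtualization and ignore unimplemented MSR accesses
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
echo "options kvm_amd nested=1"  >> /etc/modprobe.d/kvm.conf
# Reload the modules so the options take effect
modprobe -r kvm_amd && modprobe kvm_amd
```

`ignore_msrs’ keeps KVM from injecting faults when ESXi pokes at MSRs QEMU/KVM doesn’t implement, and `nested’ is what lets the ESXi guest run its own VMs.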
So it’s basically the same, just with no mounted CD-ROM image. Now this is all fun, but what about networking? As I mentioned before, I bought a VIRL license, which includes an L2 Catalyst image, so why not use that instead of a ‘traditional’ Linux bridge? Sure! In this example I’m going to connect the 4 Ethernet ports from the ESXi into the first 4 ports on the Cisco switch, with the last port connecting to a Linux bridge that I then route to, as I wanted all my lab crap on a separate network. To start the switch I use this script:
Now as you can see, the UDP sockets are inverses of each other, meaning that the ESX listens on 10000 and sends to 127.0.0.1 on port 20000, while the switch listens on 20000 and sends packets to 10000, for the first Ethernet interface pair.
By default VMware only assigns the first NIC to the first virtual switch, so after enabling CDP, we can see we have basic connectivity:
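In Qemu terms, each such interface pair is just a UDP socket netdev. A sketch of the ESX side of the first pair (the netdev id and MAC address are placeholders, and the rest of the command line is omitted):

```shell
# ESX side of port pair 1: listen on 10000, send to the switch on 20000
qemu-system-x86_64 ... \
  -netdev socket,id=net0,udp=127.0.0.1:20000,localaddr=127.0.0.1:10000 \
  -device e1000,netdev=net0,mac=52:54:00:12:34:56
```

The switch side simply swaps the two port numbers, which is the “inverse” pairing described above.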
AMD-kvm#sho run int gig0/1
Current configuration : 99 bytes
no negotiation auto
AMD-kvm#show cdp neigh
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone,
                  D - Remote, C - CVTA, M - Two-port Mac Relay
Device ID Local Intrfce Holdtme Capability Platform Port ID
KVMESX-1 Gig 0/0 155 S VMware ES vmnic0
Total cdp entries displayed : 1
And of course the networking actually does work… I created a quick VM, and yep, it’s online!
AMD-kvm#show mac address-table
Mac Address Table
Vlan Mac Address Type Ports
----    -----------       --------    -----
1 000c.2962.09e5 DYNAMIC Gi0/0
1 002e.3c92.2600 DYNAMIC Gi0/0
1 76b0.3336.34b3 DYNAMIC Gi2/3
Total Mac Addresses for this criterion: 3
And of course some obligatory pictures:
With ip forwarding turned on my Ubuntu server, and an ip address assigned to my bridge interface, I can then access the NT 4.0 VM from my laptop directly.
Next time, to make the L2 more complicated and add in some L3 insanity…
Ugh, nothing like ancient crypto, major security vulnerabilities, and ancient crap. So first I’m going to use Juniper’s SDK (get it while you can, if you care). Note that the product is long since EOL’d, and all support is GONE. I’m using Debian 7 to perform this query, although I probably should be using something like 4 or 5. Anyway, first off, the Python examples require “Ft.Xml.Domlette”, and there doesn’t seem to be a 4Suite-XML package for it. So let’s build it the old fashioned way:
apt-get install build-essential python-dev
tar -xvvf 4Suite-XML-1.0.2.tar.bz2
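From there it’s the usual distutils dance (assuming the tarball unpacked into this directory):

```shell
cd 4Suite-XML-1.0.2
python setup.py install
```

This installs the `Ft.Xml’ package system-wide for the old Python 2 interpreter the examples expect.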
Well (for now) and in my case I could reconfigure tomcat to be slightly more secure. Otherwise running the examples gives this fun filled error:
ssl.SSLError: [Errno 1] _ssl.c:504: error:14082174:SSL routines:SSL3_CHECK_CERT_AND_ALGORITHM:dh key too small
Naturally, as time goes on this will not work anymore, and I’ll need a stale machine to query this stale service. Using SSL Shopper’s Tomcat guide, I made changes to the server.xml file on the vGW SD VM. (Don’t forget to enable SSH in the settings WebUI, then login as admin/<whatever password you gave> and do a ‘sudo bash’ to run as root; screw being nice!)
And you get the idea. Certainly on the one hand it’s nice to get some data out of the vGW without using screen captures or anything else equally useless, and it sure beats trying to read stuff like this:
What on earth was Altor/Juniper thinking? Who thought making the screen damned near impossible to read was a “good thing”™
Naturally someone here is going to say: upgrade to the last version, it’ll fix these errors. And sure, it may, but are you going to bet a production environment that is already running obsolete software on changing versions? Or migrate to a new platform? Surely the first step I’d want, of course, is a machine-formatted rule export of the existing rules. And here we are.
Of course the real fun comes from trying to find the devices. Having to dig around, I found that the device major is 98 for the UBDs and that the minors increment by 16, so the first 3 devices are as follows:
mknod /dev/ubda b 98 0
mknod /dev/ubdb b 98 16
mknod /dev/ubdc b 98 32
Adding to that, you can partition them, and then they break out like this:
mknod /dev/ubda1 b 98 1
mknod /dev/ubda2 b 98 2
mknod /dev/ubda3 b 98 3
mknod /dev/ubdb1 b 98 17
mknod /dev/ubdb2 b 98 18
You get the idea.
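The pattern above boils down to one formula: minor = 16 × unit + partition, where partition 0 is the whole disk. As a throwaway shell helper:

```shell
# UML ubd minor numbering: major is 98, minor = 16 * unit + partition
# (unit 0 = ubda, 1 = ubdb, ...; partition 0 means the whole disk)
ubd_minor() { echo $((16 * $1 + $2)); }

ubd_minor 0 0   # /dev/ubda  -> 0
ubd_minor 1 2   # /dev/ubdb2 -> 18
```

So `mknod /dev/ubdc3 b 98 35’ would be the third partition of the third disk, and so on.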
With the disk added, you can partition the ubd like a normal disk:
node1:~# fdisk /dev/ubdb
Command (m for help): p
Disk /dev/ubdb: 1073 MB, 1073741824 bytes
128 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 4096 * 512 = 2097152 bytes
Device Boot Start End Blocks Id System
/dev/ubdb1 1 245 501744 83 Linux
/dev/ubdb2 246 512 546816 82 Linux swap / Solaris
etc etc. And yes, you can then format, mount and all that.
First let’s setup the swap:
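Assuming the layout from the fdisk listing above, with the swap partition on /dev/ubdb2, that would be:

```shell
mkswap /dev/ubdb2   # write the swap signature
swapon /dev/ubdb2   # and start using it
```

`free’ should then show the extra half gigabyte or so of swap.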
Now let’s format the additional /tmp partition
node1:~# mke2fs /dev/ubdb1
mke2fs 1.40-WIP (14-Nov-2006)
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
125488 inodes, 501744 blocks
25087 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
62 block groups
8192 blocks per group, 8192 fragments per group
2024 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Now adding the following to the /etc/fstab so it’ll automatically mount the /tmp directory and add the swap:
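Something along these lines (ext2 and the mount options are assumptions based on the mke2fs run above):

```
/dev/ubdb1  /tmp  ext2  defaults  0  2
/dev/ubdb2  none  swap  sw        0  0
```

After that, a `mount -a’ and `swapon -a’ (or a reboot) picks both up automatically.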