Why BSD/OS is the best candidate for being the only tested legally open UNIX.

This is an anonymous guest post. Disclaimer: Nothing in this post constitutes legal advice. The author is not a lawyer. Consult a legal professional for legal advice.

Introduction

The UNIX® system is an old operating system, possibly older than many of the readers of this post. However, despite its age, it still has not been open sourced completely. In this post, I will try to detail which parts of which UNIX systems have not yet been open sourced. I will focus on the legal situation in Germany in particular, taking it as representative of European law in general – albeit that is a stretch, knowing the diversity of European jurisdictions. Please note that familiarity with basic terms of copyright law is assumed.

Ancient UNIX

The term “Ancient UNIX” refers to the versions of UNIX up to and including Seventh Edition UNIX (1979), including the 32V port to the VAX. Ancient UNIX was created at Bell Laboratories, at the time a subsidiary of AT&T. It was later transferred to the AT&T UNIX Support Group, then AT&T Information Systems, and finally to the AT&T subsidiary UNIX System Laboratories, Inc. (USL). The legal situation differs between the United States of America and Germany.

In a ruling as part of the UNIX System Laboratories, Inc. v. Berkeley Software Design, Inc. (USL v. BSDi) case, a U.S. court found that USL had no copyright to the Seventh Edition UNIX system and 32V – and arguably, by extension, to all earlier versions of Ancient UNIX as well – because USL/AT&T had failed to affix copyright notices and could not demonstrate a trade secret. Due to the obsessive tendency of U.S. courts to consider themselves bound by precedent (cf. the infamous Pierson v. Post case), it can reasonably be expected that this ruling would be honored and applied in subsequent cases. Thus, under U.S. law, Ancient UNIX can safely be assumed to be in the public domain.

The situation differs in Germany. Unlike in the U.S., copyright in Germany never needed registration in order to exist. Computer programs are works in the sense of the German 1965 Act on Copyright and Related Rights (Copyright Act, henceforth CopyA) as per CopyA § 2(1) no. 1. Even prior to the amendment of CopyA § 2(1) to include computer programs, computer programs had been recognized as copyrightable works by the German Supreme Court (BGHZ 112, 264 Betriebssystem, no. 19); CopyA § 137d(1) rightly clarifies that. The copyright holder as of 1979 would still have been USL via Bell Labs and AT&T, since copyright to computer programs is transferred to the employer upon creation as per CopyA § 69b(1).

Note that this does not affect expiry (Daniel Kaboth/Benjamin Spies, commentary on CopyA §§ 69a‒69g, in: Hartwig Ahlberg/Horst-Peter Götting (eds.), Urheberrecht: UrhG, KUG, VerlG, VGG, Kommentar, 4th ed., C. H. Beck, 2018, no. 16 ad CopyA § 69b; cf. Bundestag-Drucksache [BT-Drs.] 12/4022, p. 10). Expiry occurs 70 years after the death of the (co-)author who died last, as per CopyA §§ 64 and 65(1); this has been the rule since at least the 1960s (old version, as per Bundesgesetzblatt Part I No. 51 of September 16, 1965, pp. 1273‒1294), meaning there is no way the copyright could have expired already.

In Germany, private international law applies the so-called “Territorialitätsprinzip” for intellectual property rights. This means that the effect of an intellectual property right is limited to the territory of a state (Anne Lauber-Rönsberg, KollisionsR, in: Hartwig Ahlberg/Horst-Peter Götting (eds.), ibid., pp. 2241 et seqq., no. 4). Additionally, the “Schutzlandprinzip” applies; this means that protection of intellectual property follows the lex loci protectionis, i.e. the law of the country for which protection is sought (BGH GRUR 2015, 264 HiHotel II, no. 25; BGH GRUR 2003, 328 Sender Felsberg, no. 24), albeit this is criticized in parts of doctrine (Lauber-Rönsberg, ibid., no. 10). The “Schutzlandprinzip” requires that the existence of an intellectual property right be verified as well (BGH ZUM 2016, 522 Wagenfeld-Leuchte II, no. 19).

Thus, in Germany, copyright on Ancient UNIX is still alive and well. Who holds it, though? A ruling by the U.S. Court of Appeals, Tenth Circuit, in the case of The SCO Group, Inc. v. Novell, Inc. (SCO v. Novell) made clear that Novell owns the rights to System V – thus presumably UNIX System III as well – and Ancient UNIX, while SCO acquired only enough rights to develop UnixWare/OpenServer (Ruling 10-4122 [D.C. No. 2:04-CV-00139-TS], pp. 19 et seq.). Novell itself was purchased by the Attachmate Group, which was in turn acquired by the COBOL vendor Micro Focus. Therefore, the rights to SVRX and – outside the U.S. – Ancient UNIX lie with Micro Focus right now. If all you care about is the U.S., you can stop reading about Ancient UNIX here.

So how does the Caldera license factor into all of this? For some context: the license was issued on January 23, 2002 and covers Ancient UNIX (V1 through V7 including 32V), specifically excluding System III and System V. Caldera, Inc. was founded in 1994. The Santa Cruz Operation, Inc. sold its rights to UNIX to Caldera in 2001 and renamed itself Tarantella Inc., while Caldera renamed itself The SCO Group. Nemo plus iuris ad alium transferre potest quam ipse habet – no one can transfer more rights than he has. The question now becomes whether Caldera had the rights to issue the Caldera license at all.

I’ve noted it above but it needs restating: Foreign decisions are not necessarily accepted in Germany due to the “Territorialitätsprinzip” and “Schutzlandprinzip” – however, I will be citing a U.S. ruling for its assessment of the facts for the sake of simplicity. As per ruling 10-4122, “The district court found the parties intended for SCO to serve as Novell’s agent with respect to the old SVRX licenses and the only portion of the UNIX business transferred outright under the APA [asset purchase agreement] was the ability to exploit and further develop the newer UnixWare system. SCO was able to protect that business because it was able to copyright its own improvements to the system. The only reason to protect the earlier UNIX code would be to protect the existing SVRX licenses, and the court concluded Novell retained ultimate control over that portion of the business under the APA.” The relevant agreements consist of multiple pieces:

The APA is dated September 19, 1995, i.e. it predates the Caldera license. Caldera cannot possibly have acquired rights that The Santa Cruz Operation, Inc. itself never had. Furthermore, I’ve failed to find any mention of Ancient UNIX in it; all that is transferred are rights to SVRX. Overall, I believe that the U.S. courts’ assessment of the facts represents the situation accurately. Thus, for all intents and purposes, UNIX up to and including System V remained with Novell/Attachmate/Micro Focus. Caldera therefore never had any rights to Ancient UNIX, which means it never had the rights to issue the Caldera license. The Caldera license is null and void – in the U.S. because the copyright had been lost due to formalities, everywhere else because Caldera never had the rights to issue it.

The first step to truly freeing UNIX would thus be to get Micro Focus to re-issue the Caldera license for Ancient UNIX – ideally one that also covers System III and System V.

BSD/OS

Another operating system close to UNIX is of interest. The USL v. BSDi lawsuit involved two parties: USL, which we have seen above, and Berkeley Software Design, Inc. (BSDi). BSDi sold BSD/386 (later BSD/OS), a derivative of 4.4BSD. The software parts of the BSDi company were acquired by Wind River Systems, whereas the hardware parts went to iXsystems. Copyright is not disputed there, though Wind River Systems ceased selling BSD/OS products 15 years ago, in 2003. In addition, Wind River Systems let their trademark on BSD expire, though this is without consequence for copyright.

BSD/OS is notable in the sense that it powered much of early internet infrastructure. Traces of its legacy can still be found on Richard Stevens’ FAQ.

To truly make UNIX history free, BSD/OS would arguably also need to see a source code release. BSD/OS, at least in its earliest releases under BSDi, shipped with source code, though under a non-free license – far from BSD or even GPL licensing.

System V

The fate of System V as a whole is more difficult to determine. Various licenses have been granted to a number of vendors (Dell UNIX comes to mind; HP for HP-UX, IBM for AIX, SGI UNIX, etc.). Sun released OpenSolaris – notoriously, Oracle closed the source to Solaris again after its release – which is a System V Release 4 descendant. However, this means nothing for the copyright or licensing status of System V itself. Presumably, the rights to System V still remain with Novell (now Micro Focus): SCO managed to sublicense the rights to develop and sell UnixWare/OpenServer, themselves System V/III descendants, to unXis, Inc. (now known as Xinuos, Inc.), which implies that Xinuos is not the copyright holder of System V.

Obviously, to free UNIX, System V and its entire family of descendants would also need to be open sourced. However, I expect tremendous resistance on the part of all the companies mentioned. As noted in the “Ancient UNIX” section, Micro Focus alone could probably release System V itself, though this would mean nothing for the other commercial System V derivatives.

Newer Research UNIX

The fate of Bell Labs was a different one: it was spun off with Lucent, which is now part of Nokia. After commercial UNIX was separated out to USL, Research UNIX continued to exist inside Bell Labs. Research UNIX V8, V9 and V10 were, as it were, “not quite” released by Alcatel-Lucent USA Inc. and Nokia in 2017.

That announcement, however, is merely a notice that the companies involved will not assert their copyrights – and only with respect to non-commercial usage of the code. It is still not possible, over 30 years later, to freely use the V8 code.

Conclusion

In the U.S., Ancient UNIX is freely available. People located everywhere else, however, are unable to legally obtain UNIX code for any of the systems mentioned above – the exception being BSD/OS, assuming the purchase of a legitimate copy of the source code CD. This is deeply unsatisfying, and I implore all involved companies to consider open sourcing their code older than a decade (preferably under a BSD-style license), if nothing else then at least for historical purposes. I would like to encourage everybody reading this to consider reaching out to Micro Focus and Wind River Systems about System V and BSD/OS, respectively. Perhaps the masses can change their minds.

A small note about patents: some technologies used in newer iterations of the UNIX system (in particular the System V derivatives) may be encumbered with software patents, and an open source license will not help against patent infringement claims. However, the patents on anything used in the historical operating systems will certainly have expired by now. In addition, European readers can ignore this entirely – software patents just aren’t a thing there.


UNIX® is a registered trademark of The Open Group.

Fallout 76 on trajectory to be free to play by Christmas?

$34.99 on Amazon

Well, I’ve been able to put a few hours into the game, and I can already say that other players are super scarce, and when you do find them, they are basically too busy doing quests and stuff at such a higher level that they avoid someone like me at level 5. With no NPCs or real ‘vibe’ to the world other than isolation and loneliness, people tend to be, well, isolated.

I never really played WoW that much; I found it was drowning in so many people that you couldn’t get 5 minutes alone without someone bugging you. And oddly enough, the absolute isolation here is just as unreal.

Considering that we are now going into the second week of retail release, the fact that the game is already discounted from $60 to $35 doesn’t bode well at all. The reviews came in over the weekend, and it was universally panned as an ‘avoid’, which means death for something that relies on the multiplayer part.

Really?

And then people are pointing out that not only does 76 re-use all the assets from Fallout 4 (which really doesn’t bug me that much), but that Fallout really is a twist on Oblivion.

I found this tidbit on ‘It’s a Gundam’.

Although I did try to do this with NV onto Oblivion and really had no luck, re-purposing engines to do different things isn’t all that unheard of. There is that DooM mod where you can mow the lawn, or even adding in QuakeWorld multiplayer, after all.

One thing is for sure: Fallout 76 is in major trouble. I’ve read all too many times that after 40-60 hours there is basically nothing left to do, and people who are enjoying it are leaving as they are ‘done’, which again does not bode well for an online game.

If Bethesda isn’t in crisis mode trying to make ’76 more WoW-like, then they will have burnt through a lot of community goodwill as they slide into Oblivion. While so many people are decrying the engine, I think the real faults lie in the lack of engagement, which then lets people stare at the assets, and then the whole dated look of Fallout 4 really becomes apparent.

Looking back, Fallout 3 was a breakout game, and I thought it was an excellent transformation of the isometric world into an engaging 3D game. The story was… well, not the best: I accidentally stumbled onto ‘dad’ pretty quickly and ended it far too early, which initially made it feel cheap and boring. It wasn’t until I saw the strategy guide that I was amazed there was so much in there, so I set about exploring and finding more enjoyment in the environment, lore and interactions.

New Vegas had so much in common with the original Fallout, both development-team-wise and in atmosphere, that it was an incredible follow-up to Fallout 3. However, too many people were too critical of the tech & timeline that Obsidian had been given, and focused on defects that were frankly out of Obsidian’s hands. It’s a shame that the best one had the worst reviews, and that those reviews destroyed the people making it.

Fallout 4 returned to the Fallout 3 story, except this time it was the parent seeking their child, and the twist that the child was now elderly really wasn’t all that surprising. Cutting down NPC interaction was a major problem, as it felt so much on rails. There was nothing you could do to step outside of yes/no trees with groups; you couldn’t ‘sort of side’ with someone, or disagree. And then there were the Minutemen and their constant nagging, which was the worst. Even cheating and putting 100 turrets into a settlement did nothing to save it; I saw the super mutants fall from the sky in the middle and proceed to attack. What good are perimeter defenses when your opponents are apparently airborne?

I was so bored by Fallout 4 that I can’t even remember if I finished the story. It really wasn’t all that engaging.

And now we enter ’76, which again I knew was going to be strange with no NPCs, meaning no connection with the world at all. But as I’d mentioned, the number of people playing this online is going to sharply crash, so if you wanted to experience this aspect, you had better be quick. And after more game play, I can safely say it doesn’t matter.

It’s now $35, and this won’t save it. I expect more $5 discounts per week, if not steeper; then, before the holidays, some kind of rebalance to encourage microtransactions, with ’76 becoming a freemium game and eventually being shuttered some time mid-’19 unless something amazing happens content-wise between here and there.

They never should have launched at $60, that’s for sure, and looking at the assets, this really ought to have been a DLC/add-on for Fallout 4 for perhaps $10-20; I doubt it’d have had anywhere near the massive backlash.

The real shame is that once the servers go dark, that will be the end. I don’t think Bethesda ‘gets’ that the ability to self-host is why Minecraft/Quake etc. were so incredibly popular in their heyday – and, more importantly, why they will continue to be played for years (decades) to come.

Fallout 76

Downloading…

I’m torn on this one. Unless you have been living in a cave, you’d have heard that the launch of Fallout 76 has been… well, a spectacular disaster.

Launching at a full AAA price of $60 to what is apparently an empty world didn’t help things at all.

That said, I’ve liked Fallout for a long while, and yes, I really did like the Bethesda treatment of Fallout 3. And then we got New Vegas, which was nothing short of amazing. Sadly Obsidian, the team behind New Vegas comprising many of the original Fallout developers, was punished in reviews for faults rooted in the aging Gamebryo engine that Bethesda loves so much – which is sad, as their bonus payment & future were tied to the Metacritic score, so Bethesda had effectively tied a rock around their neck.

Fallout 4 was disappointing, as it removed so many of the RPG elements, making the game boring; it just lacked depth. Which really is an unfortunate direction. And the overall story/twist was so utterly predictable that it was disappointing you as the player were not expected to ‘get it’ right away.

And now here we are: Fallout 76, where they decided to remove all the NPCs altogether. Which leads me to the following problem.

Fallout 76 is going to crash and burn, and as soon as the ‘next big thing’ launches, nobody is going to play it. So this is basically my only opportunity to play it with other people online. The full price was certainly too much, and with the incredibly poor reception it’s had over the last week, it is already reduced in price by 33%.

Fallout 76 on the Bethesda store

Obviously this doesn’t bode well for the future of the game, but out of morbid curiosity I’m going to give it a try.

I’m pretty sure it’s going to be full of disappointment and failure. I wouldn’t really recommend anyone try it.

I wouldn’t be surprised if in another week the price was further reduced to $30 USD, with some time before Christmas for a further reduction to $20.  But by then will there be anyone left to play the game with?

The size of the game is overwhelming. I’m currently living in a small village in Hong Kong (yes, it’s not all big city), where the only internet options were 6MB DSL or a 4G cellphone connection. The 4G is much faster; however, the WiFi bridge adapter I have is only 802.11a compatible, which is why I’m getting such a poor download speed. It’s been downloading all night, and I’m too impatient not to at least write down my thoughts at the moment.

I should probably just break down, get a capture card, some tripods and lights, and just make crappy YouTube videos. I’ve been looking at numbers, and I’m almost thinking that videos get further reach. Not that I care too much, otherwise I’d have done it ages ago; it’s probably just me being lazy, as video work is a lot of work, while quickly banging this out on a keyboard only takes a few minutes. And I’d have to get graphics, license music and use something to make snazzy effects and stuff. Ugh, sounds suspiciously like a lot of hard work.

After about 12 hours of downloading, my junky machine would just launch Fallout 76 to a blank screen and then exit with no error code or hint of a message. Going onto the forums, it turns out that an ancient video card with 512MB of graphics RAM is just not enough: Fallout 76 requires 2GB of video memory. Unlike prior versions of Fallout, there is no pre-game tuning; instead you are thrown into the game with whatever settings are pre-defined.

I for one am not too amused with the black screen and sudden close, without even a hint to the user. But the whole thing is apparently a rush job, so I suppose I shouldn’t be surprised.

I copied the downloaded game to a better machine via USB drive, and for most of the day I had this fun error:

NoRegionPing

The “No Region Ping” error either means that there are no servers up, or perhaps that there is a firewall issue. I later found out that UPnP was disabled on the Huawei E5785, which may have also been the source of issues (although turning it on and immediately rebooting didn’t change a thing, so I don’t know for certain).

After a bit of fighting, I finally got the game to launch using my ‘pro’ laptop that has a GPU.

I have to say that the engine does look a lot better than the Fallout 4 one, especially going outside into the forest.  Maybe it’s the vibrant colours, a nice change from the bleak/dreary games of old.

The texture pop-in is quite noticeable; it feels like a Borderlands-type flat-shaded style until the correct texture pops in. I’m playing from SSD, which I had thought would help alleviate such issues, but seeing that it made no difference, I just moved it to the hard disk instead.

Combat is atrocious. Stepping outside of the vault, you encounter these small robots, the “Liberator MK 0”, that are almost impossible to hit. When combat isn’t happening, I can swing my fists of rage around like there’s no tomorrow; once I encounter the robots, I’m lucky to get a single swing for every 50+ mouse clicks. The enemies move so fast that running away isn’t an option either. I had better luck herding one group of robots into fighting a group of ticks once I had ventured out a little further.

And speaking of opponents, it seems that they just spawn on top of you. I made it to some lighthouse only to have 3 feral ghouls spawn in front of me, mauling me in seconds. Not that frantically clicking the mouse button to attack would have done anything.

The game world does feel incredibly vast, and also seemingly incredibly lonely. After going through the online menu, there were only 3 other people on the server I’m on. It’s probably already far too late to run into users, as I think the downfall of Fallout 76 is pretty much complete.

So yeah, after an hour of playing, the game feels incredibly lonely and isolated. But I have to admit that after a nuclear war, I’d imagine things would be lonely and isolated. Where I think Bethesda has really made a critical error is that the NPCs they put so much love into in the past gave people an emotional attachment to the game: the prior Fallouts and all the endless Elder Scrolls games are teeming with life, while ’76 instead presents a vast wasteland. Maybe it’s just too true to the source material.

Running VMware ESX Server 2.5

One of my favorite things about VMware is that it can run itself. This allows me to test & stage new setups and test API stuff on my desktop, letting me build a “micro data center” that I don’t need to ask & beg for permission to take down; if I do something stupid, I’m just a quick revert away from putting it back and, more importantly, not making other people mad.

This also lets me step back in time – in this case to the dark & ancient world of 2005, when I’d first deployed VMware ESX 2.5.2 along with vCenter 1.3.1. I figured I could use my ancient Dell P490: I’d been using it as a desktop at home for casual use, but this seemed like a good workload to stress the system with. Also handy to have is the installation guide, which VMware still has online.

I installed Windows 10 Pro and VMware Player 12.5.9. The box has a single dual-core physical processor, 8GB of RAM and a 1TB disk. Not exactly a high-end machine, but it’ll suffice.

The first thing to do was install ESX 2.5.2. I set it up as a Linux VM with 1 CPU, 2GB of RAM, and 3 disks: one for the OS, another for swap, and one for a datastore / data disk.

After the nice GUI setup is completed we are dumped to a console on reboot.  ESX is meant to be managed remotely.

Once the OS is installed, edit the VMX file and make the following changes to allow VMware to set up the passthrough capabilities so the VM can run other VMs:

guestOS = "vmkernel"
monitor_control.vt32 = "TRUE"
monitor_control.restrict_backdoor = "TRUE"

Now the version will report that it’s VMware ESX. The other thing you’ll find out quickly is that you need a browser to manage the server (funny how things eventually went back in this direction – intermediate versions relied entirely on the ‘fat’ .NET client), and I found that Firefox 1.5 works the best.

The .NET client requires .NET 1.1 to operate correctly. It will not install on Windows 10 by default, as .NET 3.5 (which includes the .NET 2.0 runtime) is not acceptable; it has to be the 1.1 runtime, along with the J# runtime, which it’ll install if needed. I went through the installation steps in the aptly named ‘Installing .NET 1.1 on Windows 10‘ post.

Of course you’ll need a place to run the vCenter server; I just set up a Windows 2000 server, installed SQL 2000, .NET v1.0 & v1.1, and then the VirtualCenter component. VirtualCenter relies on a database backend, and I thought it’d be interesting to look at the tables through MSSQL, although Oracle, Access and some generic ODBC are also options for this ancient version of VirtualCenter.
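For example, to see what tables VirtualCenter created, a generic SQL Server 2000 catalog query does the trick – a sketch using the osql tool that ships with SQL 2000; the database name VCDB is just a placeholder for whatever you named it during setup:

osql -E -d VCDB -Q "SELECT name FROM sysobjects WHERE xtype = 'U' ORDER BY name"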

For those who don’t know, VirtualCenter is the application that lets you build a ‘virtual datacenter’: it joins multiple ESX servers together and, more importantly, orchestrates them into a cluster, allowing you to vMotion VMs between servers – which of course is the ‘killer feature’ of VMware ESX. If you don’t have vCenter / VirtualCenter, then you are missing out on so much of the product’s capabilities, which is sadly hidden away.

I set up a tiny Windows NT 4.0 domain, with a domain controller and a terminal server. My host machine is a bit too weak to set up more ESX hosts; there just isn’t enough punch in the box. Although any modern machine will probably exhaust RAM before CPU running a mid-90s workload.

Back in the day, I had moved our entire data center onto 4 ‘killer’ machines with Fibre Channel storage, consolidating everything into a single cabinet. It was incredible that we were initially able to almost match existing performance. Of course the killer feature again is vMotion, so a year later I only needed 4 new servers – an easy budget ask – and in the middle of the day I vMotioned from the old servers onto the new ones, and things across the board were now faster. Finally the bean counters saw the light: we didn’t have to buy faster gear for a single group, and we no longer had the issue of systems ‘important enough’ to be in the data center but with no hardware maintenance or proper backups. Now everyone was on equal footing, and all the boats rose with the tide, so to speak.

In this quick re-visitation it would be fun to set up shared storage, multiple hosts and vMotion, but back in the days of ESX 2.5 there was no support for putting datastores on NFS or iSCSI. As much as I’d love to use the Dr. Dobb’s iSCSI Target Emulator, it just sadly isn’t an option. The ability to move beyond Fibre Channel shared storage (or other supported dedicated host bus adapters) was added in version 3, greatly expanding the capabilities of ESX.

Obviously the career mistake here was to be a happy admin and concentrate on other things, as the infrastructure now ‘just worked’ and freed up an extraordinary amount of time. The smarter people were either turning these kinds of experiences into a consulting gig (low effort), or taking the lessons learned in VMware space and focusing them on QEMU/KVM, building libre infrastructure (high effort).

Such is life; be careful riding those trendy waves – eventually you have to lead, follow, or just get out of the way.

Microsoft Bookshelf (1991)

I found this online a while ago, although it’s taken about half a year to pick it up, but here we are.

What is kind of cool about this is that, being from 1991, it is not for Windows: this reference library instead targets MS-DOS using the MSL/Microsoft Library from the Programmer’s Library. So the same advantage holds true: the content can be scraped from the text-mode video RAM.

Factbook: Hong Kong

So yeah, back in the day this was some really amazing stuff: the ability to search a few books was incredibly fast and convenient, although, as always, lacking real depth.

Back then, online services were crazy expensive, charging by the minute, and of course, just like the stock MS-DOS client, they prevented you from easily copying the text. Outside of anything beyond grade school, I couldn’t imagine the ‘encyclopedia’ being of all that much worth, but the dictionary/thesaurus & quotations are okay enough, although in 2018 it really is showing its age.

Having your own private reference back then was a big deal; something like this would have been more apt in a library, though no doubt you’d have to wait in line, as the ability to look things up just by typing would have been great. And using the online equivalents would have cost quite a bit, quickly justifying the cost of a CD-ROM drive along with the program.

Common carriage and the lower costs of delivering content over the internet have really made something like this an oddity of its time, but for anyone who needs to work 100% offline, these are a real gem.

Another great use of extracting the books from the CD-ROM is that you can take, say, the “American Heritage Dictionary“, a 30MB file, and compress it with 7-Zip, yielding a file just under 4MB – an 87% reduction, or a 7.76:1 compression ratio. So unlike other ‘dictionary’ compression test sets, this is using an actual dictionary.
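If you want to reproduce that, it’s a one-liner with the 7-Zip command line tool (the file name here is just a placeholder for however the extracted book ends up named):

7z a ahd.7z american-heritage-dictionary.txt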

For anyone wanting to take a dive, I put it on archive.org

Soon

Microsoft Game Studio

I’m on an extended work trip, so I’ve been unable to do much of anything blog-like the last few weeks. But as a bonus, I have about 6 months’ worth of random crap from eBay packed up to take back to Hong Kong to review.

And yes, it’s been nearly a year in the making for this one, but rest assured, unless the disks are unreadable, this will happen!

Otherwise, the Diablo thing, with the release being mobile-only, went over like the proverbial lead balloon. Kind of gutsy to host a convention for die-hard PC fans and try to keynote on a re-skinned rip-off phone app that’s just been accepted as now ‘official’. Talk about outsourcing gone wrong.

UPDATE: Sad news – I have a piece of luggage MIA. This may not be happening now.

UPDATE2: Bag showed up, loaded the disks, and after 7% of disk one, my 5 1/4″ drive just died.

:'(

Installing Classic (MacOS 9.2.2) from OS X 10.4

I just got another PowerBook; the disk had been wiped by the prior user, and all it did was boot up to the blinking Mac face – so, not very useful. Luckily, I did buy some CDs from a user on reddit a few months ago, so I had a 10.4 install DVD and an install of 9.2.2 for the eMac.

Now, the OS 9 disc is an install disc, not one of the recovery discs, and naturally the aluminum PowerBooks don’t boot OS 9, so I was kind of out of luck for getting Classic working – or so I had thought. I copied the System Folder from the CD onto the hard disk and told the Classic applet to boot it; it updated some system files and then gave me this fine message:


The system software on the startup disk only functions on the original media, not if copied to another drive.

So this got me thinking: back in the SheepShaver days, when trying to boot from an ISO as a disk file, it fails the same way because the image is read/write; if it’s read-only, it does boot. So I used Disk Utility to make a new read-only disk image from a directory, pointing it at a directory into which I’d moved the CD’s System Folder and Desktop. After mounting the read-only image, it booted!
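For reference, the same thing can be done from Terminal with hdiutil – a sketch with placeholder folder/image names; UDRO is the read-only image format, and Disk Utility’s “New Image from Folder” is the GUI equivalent:

hdiutil create -srcfolder ~/os9system -format UDRO os9boot.dmg
hdiutil attach os9boot.dmg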

Now for the best part: I then kicked off the installer from the CD and had it install a copy of OS 9 onto the OS X disk.

OS 9 Installer running under OS X

It’s worth noting that just about every optional install fails. It’ll come back with an error, and you can skip the component. It’s probably just easier to install the minimal OS image.

But rest assured it really does install.

After the install, you can eject the CD, unmount the read-only copy, tell Classic to stop, and then boot from the newly installed copy of OS 9 on the OS X disk. It didn’t interfere with my OS X booting, although the ‘sane person’ would probably have Disk Utility make a small (1GB) read/write virtual disk and have the installer install to that.

So, to recap: copy the System Folder from the CD onto read/write media and let Classic update it; get it to the point where it’s not happy about being mounted read/write; move it to a read-only disk image and have Classic boot from that; then run the OS 9 installer to install itself to whatever target disk you need or want.

SimCity 2000 on Classic / OS X

I’ve run Netscape 4, IE 3 & 4, QuickTime 4, and The Sims version 1 (the OS 8/9 Carbon version), using 10.4.0 on an aluminum PowerBook.

I don’t know if anyone else has done this; I couldn’t find any real concrete guides for installing OS 9 from OS X. So here we go.

Microsoft XENIX 286 BASIC Compiler

(This is a guest post by Antoni Sawicki aka Tenox)

I have recently acquired this artifact:

It’s the Microsoft BASIC compiler for the XENIX 286 operating system – a compiler, as opposed to just a BASIC interpreter, so it can produce executable a.out files, much like a C compiler, for example.

I carefully removed the shrink wrap. Inside were a couple of 5.25″ floppies, a registration card and a manual:

Interestingly, the 32-year-old disks read just fine on the first attempt. I need to start backing up important files to 5.25″ floppy disks, as they seem to outlast everything else.

Thanks to the efforts of Michal Necasek from the OS/2 Museum, you can now run Microsoft XENIX 286 in VirtualBox.

The disks can be installed into XENIX running on VirtualBox following a few simple steps:

tar xvf /dev/fd0        # extract the installer files from the first floppy
./msinstall /dev/fd0    # run the Microsoft installer against the floppy device

Upon installation you invoke the compiler like this:

bascom demo.bas
./a.out

And it produced an a.out executable which worked perfectly fine.
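For reference, demo.bas need not be anything fancy; a classic line-numbered hello world (hypothetical contents) compiles just the same:

10 PRINT "HELLO FROM XENIX 286"
20 END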

It’s fun to write BASIC code in the vi editor, which I just realized I had never done before.

Curiously, the compiler also worked on the brand spanking new Xenix 2018 – or rather, I should call it Open Server 6 – which you can download here.

The BASIC compiler is available for download from my archive, along with the manual in PDF.

ttyplot – a real time plotting utility for the terminal

(This is a guest post from Antoni Sawicki aka Tenox)

I spend most of my day staring at a terminal window, often running various performance monitoring tools and reading metrics.

Inspired by tools like gtop, vtop and gotop, I wished for a more generic terminal-based tool that would visualize data coming from a unix pipeline directly on the terminal – for example, graphing some column or field from sar, iostat, vmstat, snmpget, etc. continuously in real time.

Yes, gnuplot and several other utilities can already plot on the terminal, but none of them easily read data from stdin and plot continuously in real time.

In just a couple of evenings, ttyplot was born. The utility reads data from stdin and plots it on the terminal with curses. Simple as that. Here is a most trivial example:

To make it happen, you take the ping command and pipe the output via sed to extract the right column and remove unwanted characters:

ping 8.8.8.8 | sed -u 's/^.*time=//g; s/ ms//g' | ttyplot 
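The same pattern works for the other tools mentioned earlier. For example, a sketch plotting user CPU time from vmstat – assuming Linux procps vmstat, where the us column is field 13 (adjust for your platform); fflush() keeps gawk from buffering the pipeline:

vmstat -n 1 | gawk 'NR>2 { print $13; fflush() }' | ttyplot -u "%"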

Ttyplot can also read two inputs and plot two lines, the second one in reverse video. This is useful when you want to plot in/out or read/write at the same time.

A lot of performance metrics are presented as a “counter” type, which needs to be converted into a “rate”. Prometheus and Grafana have the rate() or irate() functions for that; I have added a simple -r option, and the time difference is calculated automatically. This is an example using snmpget, which is shown in the screenshot above:

{ while true; do snmpget -v 2c -c public 10.23.73.254 1.3.6.1.2.1.2.2.1.{10,16}.9 | gawk '{ print $NF/1000/1000 }'; sleep 10; done } | ttyplot -2 -r -u "MB/s"

I now find myself plotting all sorts of useful stuff which would otherwise be cumbersome, including a lot of metrics from Prometheus, for which you normally need a web browser. And how do you plot metrics from Prometheus? With curl:

{ while true; do curl -s http://10.4.7.180:9100/metrics | grep "^node_load1 " | cut -d" " -f2; sleep 1; done } | ttyplot

If you need to plot a lot of different metrics, ttyplot fits nicely into panes in tmux, which also allows the graphs to run for longer time periods.

Of course, in text mode the graphs are not very precise, but this is not the intent; I just want to be able to easily spot spikes here and there, plus see some trends like up/down – which works exactly as intended. I do dig fancy braille line graphs and colors, but they are not my priority at the moment. They may get added later; most importantly, I want the utility to work reliably on most operating systems and terminals.

You can find compiled binaries here, and source code and examples to get you started here.

If you get to plot something cool that deserves to be listed as an example, please send it in!

Gopher kills the LC

Macintosh LC

The LC isn’t a strong Macintosh; it is, after all, a low-cost model. And what I’m doing isn’t even slightly fair to it.

Since it has a mere 68020 running at a blazing 16MHz, with no 68881 nor any MMU, running something like A/UX is simply out of the question. However, MMU-less Macs can run MachTen.

I did make a backup of the disk, finding out that this thing had been at Harvard of all places, apparently once belonging to Mark Saroyan.

There was nothing even slightly academic or useful on the disk, though. I wonder if the software was even pirated, as the last owner seems to have enjoyed the various SIM games (city/earth/life/ant) more than anything else.

I formatted the massive 50MB SCSI disk and put on a fresh copy of MacOS 7.0.1, along with the network driver and MachTen 2.2.

System 7.0.1

And as far as LCs go, this one isn’t too bad: it’s loaded up with the maximum 10MB of RAM, although the VRAM is pretty sparse, as it’ll only go up to 16 colours. But since we are playing UNIX here, I didn’t see any need for that and set it to mono.

I thought it’d be fun to install a gopherd server onto this machine, and that is where the fun started.

Granted, it’s been a long time since I used a machine with no real L2 cache, let alone one running at a whopping 16MHz, and using a compiler like GCC on it is just incredibly slow.

So I thought I could just ‘cheat’ the system by taking the source code to GCC 1.42 and tweaking the SUN3-Mach configuration into a SUN2-Mach configuration, keeping it targeting a BSD-like OS and setting it to compile for a 68020 without a 68881. Oddly enough, getting a cross compiler built wasn’t so difficult, but the assembler on the LC, a modified GAS, wouldn’t assemble the files. So I went ahead and built a68 from GAS 1.38, and now I can cross-assemble from Windows. However, I couldn’t get the linker, ld from binutils-1.9, working. I guess it was an endian issue somewhere, but my attempt at byte-swapping the files it was reading just led to further confusion. I figured linking on the target host wouldn’t be the end of the world, as compiling sure feels like it is.

I can’t see anyone caring, but here it is:
MachTen-crossgcc-1.42-nolinker.7z

So, fighting the source, in a matter of 30 minutes of on/off work I had it compiled. All I needed to do then was FTP the objects to the machine, link, and run. Surprisingly, this proved to be pretty simple.

gopherd running!
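If you want to smoke-test a gopher server by hand, the protocol is trivial: connect to port 70 and send an empty selector terminated by CRLF, and the root menu comes back. A sketch using netcat from another machine (the hostname is just whatever your LC answers to):

printf '\r\n' | nc lc-mac 70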

I managed to get a few pages out of it, and then suddenly my telnet sessions dropped. Looking over at the console, MacOS was busy being MacOS.

error of type 3

And that was that.

I tried another program to cross compile and upload: phoon!

phoon cross compiled, natively linked.

It took a while to set the clock to the right year, as my minimal System 7 install doesn’t have the time control panel, and advancing one year at a time from 1999 takes time: I advanced the date to New Year’s Eve every minute, 19 times over, to get us to 2018 with the old date syntax (MMDDhhmm – month, day, hour, minute):

date 12312359

Lessons learned?

Obviously, if I want to do something like this, I’m going to need a better Macintosh. Or just not do things like this…

I’m kind of on the fence as to whether 68k UNIX is really all that useful in the age of GHz x86.