So while I currently have no tape drive for my ZX Spectrum, loading totally legit tap files got me into this fun thing: Lenslok copy protection.
But thanks to Simon Owen, there is this great emulator of the old physical dongle that’ll let you unlock the magical codes! LensKey doesn’t seem to scale to DPI that well, but it does work. And I was able to unlock Elite!
As you can see, the weird pattern is revealed to be ‘j4’. You only get 3 chances, otherwise it’ll reset the Spectrum, and you HAVE TO LOAD FROM TAPE AGAIN. I can barely take it today, even with a dedicated MAXDuino tape emulator running at 3,850 baud; it’s just absolutely insane!
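For a sense of scale (my back-of-the-envelope arithmetic, not timings from the post), here is roughly how long a full 48K program takes at the Spectrum’s standard tape rate versus a 3,850 baud turbo loader:

```python
# Rough best-case load times for a 48K Spectrum program.
# ~1,500 baud approximates the standard ROM loader; pilot tones and
# framing are ignored, so real loads take noticeably longer.
program_bytes = 48 * 1024
bits_per_byte = 8

for baud in (1500, 3850):
    seconds = program_bytes * bits_per_byte / baud
    print(f"{baud} baud: ~{seconds:.0f} seconds (~{seconds / 60:.1f} minutes)")
```

Even the turbo loader is well over a minute and a half of screeching per load, which is why three failed Lenslok attempts hurt so much.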
I need to write something sensible about tap files and loading them onto a physical machine, as it’s a bit more involved than I had first imagined. But it does work!!.. kinda.
Back nearly a decade ago, Apple was going to release a new Mac Pro. And it was going to be unlike all the other computers: it was going to be compact and stylish, a jet engine for the mind.
However instead, we got what everyone would know as the trash can.
big brain idea
So at the time I had this idea that I wanted a Xeon workstation in a nice portable form factor, and this little cylinder seemed to fit the bill. But things changed in my life; I was okay being tied down, and a regular Xeon desktop became my goto machine. A desktop would do just fine.
Then years later, an artist I’d commissioned to do some stuff was selling their Mac Pro, as they’d gone all in on Hackintosh, and this was my chance to get one on the cheap. As I’m on a business trip at the moment, I thought this would be a good time to test out what I had envisioned as the future of a personal server in a can.
A long while ago, I’d bought newer/faster/larger flash for the Mac Pro, and it was a simple matter of hitting the Windows key + R so the machine boots up into internet recovery mode and installs OS X Mavericks over the wire. Which sounds great, but this is where the fun begins. Since I ordered an NVMe M.2 module, it of course is too new for the 2013 machine, so I had to use a shim bridging the Mac’s proprietary SSD slot to M.2 for my modern flash. It never fit exactly right, and I kind of screwed it in incorrectly, but it held in place. Obviously flying bumped things around, as I had kind of figured, but I’m getting ahead of myself.
I didn’t take any big peripherals with me, as I figured I’d just get some new stuff and didn’t worry about it at all. I picked up a ViewSonic VX2770 for £45, got this RED5 Gaming keyboard for £13, and I already had this Mad Catz 43714 mouse NIB with me. I think I paid $200 HKD or so a year ago, but I like the feel of this style of mouse, and was happy to bring it with me. Little did I know…
So after setting up a desk and the system, it performed like crap. Worse, it was locking up at random times. I was already using Macs Fan Control to set the fan to 100%, and still it was locking up. I guessed it’d taken a jostle too many, and I reseated the storage. And then on booting it back up I only got the blinking folder. Great: either it was dying, or I’d just killed it.
A quick jump on Amazon, and I found the “Timetec 512GB MAC SSD NVMe PCIe Gen3x4 3D NAND TLC”, which at a whopping £68 seemed like a good idea. And since it was an NVMe SSD, it’d just slot into the Mac Pro, and life would be good. Or so I thought.
The first problem I ran into is that I couldn’t boot the Mac into either diagnostics or recovery mode. There is something really weird with a UK keyboard on a non-UK machine. I think the 2013 (and probably many more) powers up as American, and this is some kind of common issue with non-American keyboards. Seriously, why are the pipe and backslash on the lower row? Quotes over the 2? It’s a mess. And since I got my Mac Pro in Asia, maybe it defaults to Chinese? Japanese? Who knows?!
Lucky for me, I had this ugly little thing with me for another project. And yeah, holding down the ‘Win’+R buttons got me to recovery mode with zero issues.
I still have to say, this is pretty cool. However, what wasn’t cool was loading Disk Utility and seeing NO FLASH detected. I have VMware ESXi 7.0 on USB, so booting that up, it totally sees the drive:
And of course, like an idiot, I installed VMware to at least make sure it’s working.
Yeah it’s booting fine.
By default the Mac Pro seems to be picking up bootable USB devices, so I pop in a Windows 10 MBR USB, and instead I get this:
Bad memory on the GPU? Bad cable? Bad monitor? I have no idea. At this point I’m thinking I’ve totally killed the machine, but a power cycle, and I’m back in ESX in no time. Something is up.
I pull the flash, and I can boot Windows 10 to the installer, but obviously there is no storage to install to. I try adding a 16GB USB thumb drive, and… it won’t let you install to it. It appears there is a way to prepare a USB drive for Windows 10 to install to, but it’s not exactly easy to do. Mac OS X, however, doesn’t suffer this limitation and will let you install to whatever you want, so I install Mavericks to the 16GB drive, and yeah, it boots. And is SUPER slow. The flash still doesn’t show up, so I read the Amazon page some more and find this tidbit:
“My Macbook came with Mac OS Capitan as the operating system for recovery, and therefore did not detect the SSD. I had to create a High Sierra installer on a USB using another Mac and an app (DiskMaker) in order to reinstall the operating system from High Sierra. Once this was done, the SSD appeared available and I was able to install the operating system and upgrade without problem.” –Gilberto R. Rojina
Oh, now isn’t that interesting? So of course I’ve got to update my thumb drive, and of course 16GB isn’t enough space. Great. So I order an Elecife M.2 NVMe enclosure for £23, thinking I should be able to figure out once and for all if I can see the old drive, or maybe boot from it. I get the enclosure, plug in the storage, and Disk Utility sees a drive, but will not mount it, nor is it selectable to boot from. The issue of course is that it’s APFS, which I guess cannot be booted from external media? I have no idea, but I don’t have anything that critical on there, as I keep my stuff backed up on some cloud thing. I do have a 128GB thumb drive on me, so I format the 1TB as HFS+, back up the drive, and once more reboot into recovery mode, using the crap keyboard, to install Mavericks onto the 128GB flash. Thinking everything is going to be fine, I find this Apple support page, with the needed links to get ‘old’ versions of MacOS.
These versions can be directly downloaded and installed without the store.
Another weird thing is that Mavericks won’t let me log in to the Apple store. It notifies me on my phone, I approve it, but it never prompts for the verification code. Maybe it’s too old? Anyways, I install macOS Sierra and do the upgrade.
Now running Sierra, I can use the store, and try to take the leap on my USB to Mojave. And of course disappointment strikes again:
What the hell?! So now I’m trying to find out how to create a bootable USB installer from the download. That leads me to this fun page at Apple. Apparently an ‘install installer to USB drive’ button would be too complicated for Apple, so it’s hidden in a terminal command. Fantastic. Since I’m using that 128GB as my system, I grab that 16GB flash drive and install the installer to that.
sudo /Applications/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/SanDisk\ Fit
What an insane path to get this far. The tool will partition and format the drive, and now I can shut down, pop out the 128GB Sierra drive, and boot into the Mojave installer.
I didn’t take pictures, but by default the Mojave installer & Disk Utility only show existing partitions. You have to right click on the drive to expose the entire drive. This was an issue as I’d installed ESXi onto the new storage. I clear the drive, and now I can finally install Mojave.
Thinking it’s all over, I reboot into the Mac Pro, figuring everything should be fine: I have a properly fitting drive that is super fast, and it’s already on 10.14.6, the latest and last version that lets me run 32bit stuff. Except that it’s slow. And unstable. No progress was seemingly made.
Trying to search ‘why is my Macintosh slow’ is, well, a total waste of time. And it periodically locks up hard, making it extremely annoying.
I have a quad-core CPU Mac Pro late 2013 (Model Identifier: MacPro6,1). MacOS X 10.9.5. I have had all sorts of USB devices hooked up to it. At any one time, I usually have all 4 ports filled. I have a 3TB USB 3.0 disk that stores my large files, a USB mouse and keyboard (logitech with a usb mini dongle), a cable to charge my logitech USB cordless mouse, Lightning cable to my iPhone 5, and other things that I rotate in and out, like CF card reader, Audio Box USB audio interface from PreSonus, Sony Webcam, etc. About 3 months into having the Mac Pro, I noticed that my keyboard went dead in the middle of using it. The mouse was dead too. I blamed the RF dongle that they both share, because the Apple Magic Trackpad (bluetooth) I have still functioned. Try as I might, I couldn't get the keyboard or mouse to work again, so I used the Magic Trackpad to restart the machine, and then my keyboard and mouse worked again. It wasn't until later that I realized that all the USB busses on the machine had frozen or "died" temporarily. I realized it later because my USB hard drive complained about being "ejected improperly." Now I have had the USB die on the Mac Pro at least 15 times over the last month and a half. Usually once every two days or so. I have tried (almost one by one) using some of the USB devices on the mac, and removing others to ascertain if it's a certain USB device that is causing this. But the odd thing is that I never get a message from the OS like "xxx USB device is drawing too much power." I'm going a little nuts here because I cannot see any rhyme or reason to the USB interface lock ups. And each time it happens, all the USB devices go dead until I restart. Sometimes, I'm able to SSH into the machine from my iPhone and issue a "shutdown -h now" and even though I see the Mac OS X UI shutdown, it never fully halts. I often have to hold the power button to get the machine to turn off. I really can't say if it's software related, hardware related or what. 
I've tried to watch my workflow carefully to see if anything seems to make a pattern, but nothing yet. Any suggestions? Is anyone else seeing behavior like this? Do we think it's a USB device... or is my Mac Pro flakey? -- Cheule
"When I plugged in the same config on my new machine USB 3.0 directly it was very weird, devices would not remount and only show up if they were then when present at startup, and thruput was sluggish. So I stopped using the in built USB 3.0 and grabbe the old belkin thunderbolt USB hub, and BAM it all works perfectly. Better than that after testing the throuput , the belkin gave me 30-50% better performance that the inbuilt USB, that is without any hubs just direct." -- symonty Gresham
And sure enough, another search about the USB setup seems to confirm it, from AnandTech:
Here we really get to see how much of a mess Intel’s workstation chipset lineup is: the C600/X79 PCH doesn’t natively support USB 3.0. That’s right, it’s nearly 2014 and Intel is shipping a flagship platform without USB 3.0 support. The 8th PCIe lane off of the PCH is used by a Fresco Logic USB 3.0 controller. I believe it’s the FL1100, which is a PCIe 2.0 to 4-port USB 3.0 controller.
Unreal. I notice as I try to use the machine more that occasionally the mouse turns itself off. Replugging the mouse shows it powering up and immediately powering off. I turn on the annoying backlight of the keyboard, and yeah, it powers down too; however, reinserting it brings it back to life. Luckily I still have this A1296 Apple Wireless Magic Mouse with me, so I pair that and unplug the mouse, and everything else USB.
It was the mouse. I can’t believe it either. I am simply blown away how this could possibly be a thing. I haven’t ordered the Thunderbolt to USB dock yet, as I really didn’t want to spend any money on this thing; it was a grab and go solution that has proven itself not so much grab and go.
Finally getting somewhere
After 6 hours of working yesterday, I shut it down to give it a break for a few hours, and it’s been up some 12 hours so far, pain free. In 2022, the Xeon E5 v2 processor just really isn’t worth lugging around, but I already had it, so when it comes to transport it actually works out pretty well. I wonder if this would have been a good traveling solution from 2013 onward, but the fact a mouse could basically bring the machine down makes me think I’d have gone totally insane trying this on the road. Just as the USB Win/Alt/Alt Gr/FN keys not being able to trigger recovery mode was also crazy.
I don’t know why Apple insists on such fragile machines, but maybe the new Arm stuff is better? I can’t justify one at the moment.
Updates in the field
I’m working on getting some local retro kit, and I’ll have more fun coming up. But this fun experience ate 4 days of my life, and the least I could do is document it. I don’t know if it’ll help anyone in the future; maybe once these become iconic collectables, like the Mac Cube. Although as a former Cube owner, those at least didn’t freak out when you used a 3rd party mouse.
(This is a guest post by Antoni Sawicki aka Tenox)
I often find myself replicating and making copies of large data archives, typically many TB in size. I found that rsync transfers slow down over time, typically after a few hundred MB, especially when copying large files, eventually reaching crawl speeds of just a few KB/s. The internet is littered with people asking the same question, or why rsync is slow in general. There really isn’t a good answer out there, so I hope this may help.
I decided to get to the bottom of it. After doing some quick profiling, I found that the main culprit was rsync's delta transfer algorithm. The algorithm is super awesome for incremental updates, as it will only transfer the changed parts of a file instead of the whole thing. However, when performing an initial copy it’s not only unnecessary, but it gets in the way, and the CPU spins calculating checksums on chunks that could never have changed. As such…
Initial rsync copies should be performed with -W option, for example:
$ rsync -avPW src dst
The -W or --whole-file option instructs rsync to perform full file copies and not use the delta transfer algorithm. As a result there is no checksum calculation involved, and maximum transfer speeds can easily be achieved.
Long term, rsync could be patched to do a full file transfer if the file doesn’t exist in destination.
While copying jumbo archives of many TB, I don’t want to see every individual file being copied. Instead I want a percentage of the total archive size and the current transfer speed in MB/s. After some experiments I arrived at this weird combo:
Well at first that looks weird. It pings and all, so I jump to incognito mode, and…
Content Lock on EE helps to keep you and your children safe online by blocking 18-rated content. We have three settings – Strict, Moderate and Off so you can choose exactly what level of security you’d like. Please note: All new and existing accounts with Content Lock enabled have the “Moderate” setting applied by default. Content Lock is only activated when you’re using our network – not when you’re using WiFi.
And this is EE censoring archive.org. UNREAL!
Going through the SIM registration, and login….
You need a credit card to get it unlocked. Luckily my Hong Kong business card worked; as always, set the zip code to ‘0000’.
Thanks, overreaching corporations (at the behest of whom?), for blocking me from the past.
Although I only got this for Fallout 76, back after the discounts started after launch:
So it doesn’t mean a heck of a lot to me. And they did a Fallout 76 migration a while back, and the rest was just freebies given out for whatever reason. Oh well. Steam, love them or hate them, kills another single-vendor pointless storefront.
(This is a guest post by Antoni Sawicki aka Tenox)
I was recently registering a new OpenVMS Community License. In the process I learned that there is a ready to run, pre-installed and pre-configured VM with OpenVMS 8.4. Completely free for non-commercial purposes. You don’t even need to register or leave your details (WOW). Just download and run! Thank you VSI!
I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood,
and I ---
I took the one less traveled by,
And that has made all the difference.
"The Road Not Taken"  -- Robert Frost
I didn’t want to make my last post exclusively about 386BSD 0.0, but I thought the least I could do to honor Bill’s passing was to re-install 0.0 in 2022. As I mentioned, his liberating Net/2 and giving it away for free to lowly 386/486 users ushered in a massive shift in computer software, where so-called minicomputer software was now available to microcomputer users. Granted, 32bit microcomputers, even in 1992, were very expensive, but they were not out of the reach of mere mortals. No longer did you have to share a VAX; you could run Emacs all by yourself! As with every great leap, 0.0 is a bit rough around the edges, but with a bit of work it can be brought up to a running state, even in 2022.
But talking with my muse about legacies and the impact of this release, I thought I should at least go through the motions and re-do an installation, a documented one at that!
Stealing fire from the gods:
Although I had done this years ago, I was insanely light on details. From what I remember I did this on VMware, and I think it was Fusion on OS X, then switching over to Bochs. To be fair, it was over 11 years ago.
Anyways, I’m going to use VMware Player (because I’m cheap), and just create a simple VM for MS-DOS that has 16MB of RAM and a 100MB disk. Also, because of weird issues, I added 2 floppy drives, and a serial & parallel port opened up to named pipe servers so I can move data in & out during the install. This was really needed as the installation guide is ON the floppy, and not provided externally.
One of the things about 386BSD 0.0 is that it’s more a VAX OS than a PC OS, so it doesn’t use partition tables. This also means geometry matters. Hitting F2 when the VM tries to boot, I found that VMware has given me the interesting geometry of 207 cylinders, 16 heads, and a density of 63 sectors/track. If you multiply 207*16*63 you get 208,656 usable sectors, which will be important. Multiply that by 512 bytes per sector and you get a capacity of 106,831,872 bytes. Isn’t formatting disks like it’s the 1970s fun? If you attempt to follow along, obviously yours could be different.
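The arithmetic above can be double-checked in a couple of lines:

```python
# VMware's reported CHS geometry for the 100MB virtual disk
cylinders, heads, sectors_per_track = 207, 16, 63
bytes_per_sector = 512

total_sectors = cylinders * heads * sectors_per_track
capacity = total_sectors * bytes_per_sector

print(total_sectors)  # 208656
print(capacity)       # 106831872
```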
Throwing the install disk in the VM will boot it up to the prompt very quickly. So that’s nice. The bootloader is either not interactive at all, or modern machines are so fast, any timeout mechanism just doesn’t work.
As we are unceremoniously dumped to a root prompt, it’s time to start the install! From the guide, we first remount the floppy drive as read-write with the following:
mount -u /dev/fd0a /
Now for the fun part: we need to create an entry in /etc/disktab to describe our disk so we can label it. You can either type all this in, use the serial port, or just edit the Conner 3100 entry and turn it into this:
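The listing itself didn’t make it into this post, so here is a reconstruction from the numbers discussed below; the a and b partition sizes (15884 and 8404 sectors, totalling 24288) are the stock BSD defaults rather than anything I can verify from the original screenshot, so treat them as an assumption:

```
vmware100|VMware 100MB virtual disk:\
	:dt=ST506:ty=winchester:se#512:nt#16:ns#63:nc#207: \
	:pa#15884:oa#0:ta=4.2BSD:ba#4096:fa#512: \
	:pb#8404:ob#15884:tb=swap: \
	:pc#208656:oc#0: \
	:ph#184368:oh#24288:th=4.2BSD:bh#4096:fh#512:
```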
As you can see, the big changes are the ‘dt’ or disk type line, and nt, ns and nc, which describe heads, sectors per track (density) and cylinders; this is how 16, 63 and 207 come from the disk geometry above. The ‘pa’,’pb’… entries describe the partitions, and since they are at the start of the disk, nothing changes there, as partitions are described in sectors. Partition C references the entire disk, so it’s set to the calculated 208,656 sectors. Partitions A+B total 24,288, so 208,656-24,288 is 184,368, which then gives us the size of partition H. I can’t imagine what a stumbling block this would have been in 1992, as you really have to know your disk’s geometry. And of course you cannot share your disk with anything else, just like the VAX BSD installs.
With the disklabel defined, it’s now time to write it to the disk:
disklabel -r -w wd0 vmware100
And as suggested you should read it back to make sure it’s correct:
disklabel -r wd0
Now we can format the partitions, and get ready to transfer the floppy disk to the hard disk. Basically it boils down to this:
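The exact commands didn’t survive here; under the partition scheme above, formatting the root and /usr partitions would look roughly like this (the disktab entry name is the one we created earlier, so adjust to taste):

```
newfs wd0a vmware100
newfs wd0h vmware100
```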
Oddly enough, the restore set also has files for the root; *however* it’s not complete, so you need to make sure to get files from the floppy, and again from the restore set.
One of the annoying things about this install is that VMware crashes trying to boot from the hard disk, which is why we added 2 floppy drives to the install, so we can transfer the install to the disk. Also, it appears that there is some bug or other weird thing, as the restore program wants to put everything into the ‘bin’ directory, adding all kinds of confusion, along with it not picking up end of volume correctly. So we have to do some creative workarounds.
So we mount the root partition, and then the ‘h’ partition, as it’s the largest one and will have enough scratch space for our use:
mount /dev/wd0a /mnt/bin
mount /dev/wd0h /mnt/bin/usr
Now we insert the 1st binary disk into the second floppy drive, and we are going to dump it into a file called binset:
cat /dev/fd1 > binset
Once it’s done, you can insert the second disk, and now we are going to append the second disk to binset:
cat /dev/fd1 >> binset
You need to do this with disks 2-6.
I ran the ‘sync’ command a few times to make sure that binset is fully written out to the hard disk. Now we are going to use the temperamental ‘mr’ program to extract the binary install:
mr 1440 /mnt/bin/usr/binset | tar -zxvf -
This will only take a few seconds, but I’d imagine even on a 486 with an IDE disk back then, this would take forever.
The system is now extracted! I just ran the following ‘house cleaning’ to make sure everything is fine:
Now for actually booting up and using this, as I mentioned above, VMware will crash attempting to boot 386BSD. Maybe it’s the bootloader? Maybe it’s BIOS? I don’t know. However old versions of Qemu (I tested 0.9 & 0.10.5) will work.
With the system booted you should run the following to mount up all the disks:
fsck -p
mount -a
update
/etc/netstart
I just put this in a file called /start so I don’t have to type all that much over and over and over:
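Assuming it really is just those commands wrapped in a script (the file itself isn’t shown in the post), /start would look something like:

```
#!/bin/sh
# check, mount, start the sync daemon, then bring up networking
fsck -p
mount -a
update
sh /etc/netstart
```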
On first boot there seems to be a lot of missing and broken stuff. The ‘which’ command doesn’t work, and I noticed all the accounting stuff is missing as well:
The source code is extracted in a similar fashion; it expects everything to be under a ‘src’ directory, so it’s the same thing as the binary extract: just change ‘bin’ to ‘src’, and it’s pretty much done.
I think this wraps up the goal of getting this installed and booting. I wanted to update or change as little as possible, to have that authentic 1992 experience, limitations and all. It’s not a perfect BSD distribution, but it had the impact of being not only free, but available to the common person: no SPARC/MIPS workstations or other obscure or specialized 68000 based machines, just the massively copied and commodity AT 386. For a while, when Linux was considered immature, the BSDs led the networking charge, and I don’t doubt that many got to that position because of that initial push made by Bill & Lynne with 386BSD.
Compressed with 7zip, along with my altered boot floppy with my VMware disk entry, it’s 8.5MB. Talk about tiny! For anyone interested, here is my boot floppy and vmdk, which I run on early Qemu.
As mentioned on the TUHS mailing list, many will remember his work as a literal Prometheus, liberating his work from the cathedral and delivering Net/2 to the lowly i386 users. He and his wife Lynne were instrumental in kicking off the surviving legacy of University research Unix.
I don’t need to bemoan the opportunities lost, the pivotal moments of 1991, or the way the Internet arranged itself around the needs of the PC i386, portability, and then, after a schism, security.
Details are sparse overall; I believe he is survived by his wife and a daughter.
Even when I’m trying to live under my rock, I still am somehow flooded with news that there was a slap fight.
No not this Will Smith Chris Rock thing, I’m talking of course about Clive Sinclair slapping Chris Curry at the Baron of Beef pub in Cambridge.
Where’s the beef?
As the legend goes, Curry worked under Clive, but he ran into Hermann Hauser, who had encouraged Curry to go his own way and make that computer of his dreams. Incensed about this, Clive was able to put together and rush out the ZX80 before Acorn had anything ready to ship.
And more importantly it was CHEAP. You’d have thought that the ZX80 would have found a larger worldwide market, but Commodore and Apple reigned supreme in North America.
Later that year Acorn would ship the Acorn Atom, priced around £129 in kit form and £179 assembled. It was a lot more expensive, but granted, it did have a lot more ‘computer’ in there.
The following year, Sinclair released the ZX81, which at an even lower price point also included a lot more: a larger ROM, a better display, and of course it was ready to ignite the coming war.
As the legend goes, a TV show of all things, ‘The Mighty Micro’, had ignited such a storm in parliament that the Department of Industry & the BBC decided that they were going to produce programming to go along with a selected microcomputer. And that machine was the Newbury NewBrain… until it was obvious that this wasn’t going to be the machine of choice, and the selection was pushed back from the fall of ’81 to the spring of ’82. With the BBC being forced to open up selection to other UK computer manufacturers, both worked hard on a machine; however, Curry swooped in with his new ‘BBC Micro’ (which had started working the day of the inspection) and won the contract.
1982 of course would give us the ZX Spectrum as Sinclair’s answer to what the people needed.
Oddly enough, things in the long term didn’t work out for either of them, as they both made so many missteps that they ended up ultimately shelving both of the units, with Acorn barely surviving. Although their ARM processor does live on, mostly because it ended up free of any hardware platform to go along with it.
There was no ZX 83 model; instead there was of course the QL for 1984. Taking on the design of the QL, the Spectrum+ was launched, and despite the name, it was just a 48K with a reset button and nicer keyboard. Very NON plussed. The only upgrade to the ZX would have to come from Spain, in the form of the 128.
The QL was 100% incompatible with the ZX. Apparently doing something like the SEGA Mega Drive, by including both a 68000 and a Z80, was just out of the question. Instead it was so focused on price that it made the machine not serious enough for the serious business market Clive had craved so much. No socket for a 68881, and the drives were so incredibly tiny; IBM had quickly followed up the PC with the XT, which allowed for a hard disk, while the QL, with a single slot, in no way could fit a then 5 1/4″ full-height disk.
Although many fault the QL for having relied on the 68008 processor, remember even IBM was using the 8088, with the same 8bit bus constraints; it’s not that it was impossible, it’s that the sleek stylized deck of the QL was just far too ahead of itself. It’d be fine for today; just look at the Pi 400! I’d prefer to have one with SD cards up front, but I guess I need to learn how to 3d print and make my own.
Another fault of the QL was not having the space on the motherboard to go to the full 1MB of addressable RAM like the PC, and loading the OS from disk. Having the OS in ROM was such an 8bit holdover; loading it from tape would have been useless, but the PC way of loading the OS from disk was the way to go, and it also made updating far easier. I know the ST & Amiga also went with OS in ROM thinking it saved money, but in the long term all the wedges of the era just limited themselves.
The real slap: in the market
The real slap that was heard was the stagnation of both machines, and the decline of the UK computer makers. Acorn had apparently manufactured a tonne of Electrons for Christmas, but the order wasn’t actually put through because of some ‘pull back of a video game crash’ in Europe. I guess it’s the continuation of the video game crash in the USA, but as you can see, the stockpile of machines to be blown out was just incredible.
And it was in 1984 that Acorn apparently ran an ad showing that Sinclair computers had a high defect rate, something that has always plagued Sinclair’s quest for low cost machines, and something that had been hand waved away with a 1 year replacement policy, with many teenagers abusing the machines. That led to the confrontation in the Baron of Beef, along with the whooping Sinclair unleashed on Curry. Although much of this has passed into more legend than fact; even Ruth Bramley didn’t recall anything about the event.
It’s an amazing flash in the pan, with so many games and so much early computer culture that was partitioned to a tiny island, and for the most part totally unknown in the rest of the world. I hope to get a real Spectrum 128 one day; it sounds like a fascinating machine. Although they made a million? of them, they are quite expensive in any marketplace. I wonder sometimes if there is demand for a super cheap, almost ‘disposable’ 8bit computer. Obviously it’d have to be under £20.
Since all this UK microcomputer stuff never really left the island, it’s all new to me. And maybe to many people outside of the UK, or surprisingly the Iron Curtain, where ZX Spectrums were abundant.
footnote: I know people will say that there was some attempt at selling Sinclair micros out of Texas with one OEM, but honestly I’ve never heard or seen of any such thing; it’s only recently come up as a curiosity on YouTube. And they were incompatible anyways, so whatever.
Also, holy crap, so an actor slapped another actor in a show where they backslap each other. Who cares?! Bring back Beavis and Butt-Head, and prime time boxing! People obviously have a thirst for this; why did the WWF’s kayfabe fade? The paywalls?