A while ago I chased FrontVM back to moretom.net and found two links: one from 2003, which is dead, and a 2004 version that was archived by the Wayback Machine!
It was an interesting build, as it still used the 68000 emulation from Hatari/UAE; this pre-dates the 68000-to-C and i386 ASM conversions. However, since it ran (mostly) the original code, it was more ‘feature complete’, although loading save games was broken for some reason (I thought the decryption had not been disassembled correctly). It was actually a stupid file-mode setting. I just updated the source and put out a new binary, testing save games between Linux and Windows.
Anyways, it originally built on Cygwin, so I filled in the missing bits, and now have it building on both MinGW and Visual C++.
So yeah, it’s Frontier for the Atari ST with the OS and hardware calls abstracted, still running the 68000 code under emulation. I think it’s an interesting thing, but that’s me.
This repository is the official home of the source code for the PC/GEOS graphical user interface and its sophisticated applications. It is the source used to build both the SDK and release versions of PC/GEOS, and it is the place to collaborate on further development.
Watcom C/C++ v2 is being used to build this source tree.
(This is a guest post from Antoni Sawicki aka Tenox)
I spend most of my day staring at a terminal window, often running various performance-monitoring tools and reading metrics.
Inspired by tools like gtop, vtop and gotop, I wished for a more generic terminal-based tool that would visualize data coming from a unix pipeline directly on the terminal. For example, graphing some column or field from sar, iostat, vmstat, snmpget, etc. continuously in real time.
Yes, gnuplot and several other utilities can already plot on the terminal, but none of them easily reads data from stdin and plots it continuously in real time.
In just a couple of evenings ttyplot was born. The utility reads data from stdin and plots it on the terminal with curses. Simple as that. Here is the most trivial example:
To make it happen, you take the ping command and pipe the output through sed to extract the right column and remove unwanted characters:
ping 8.8.8.8 | sed -u 's/^.*time=//g; s/ ms//g' | ttyplot
Ttyplot can also read two inputs and plot them as two lines, the second in reverse video. This is useful when you want to plot in/out or read/write rates at the same time.
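As a sketch of the two-input mode, here is a hypothetical pipeline graphing disk read and write rates from iostat; the device name and the awk column numbers are assumptions, so check your local iostat output format first:

```shell
# Plot read and write kB/s for one disk as two ttyplot lines.
# The device name (sda) and the awk column positions are assumptions;
# verify them against your iostat output before relying on this.
iostat -d sda 1 | awk '/^sda/ { print $3, $4; fflush(); }' | ttyplot -2 -u "kB/s"
```

The fflush() keeps awk from buffering, for the same reason the ping example uses sed -u.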
A lot of performance metrics are presented as a “counter” type, which needs to be converted into a “rate”. Prometheus and Grafana have rate() and irate() functions for that; I have added a simple -r option, and the time difference is calculated automatically. This is an example using snmpget, which is shown in the screenshot above:
{ while true; do snmpget -v 2c -c public 10.23.73.254 1.3.6.1.2.1.2.2.1.{10,16}.9 | gawk '{ print $NF/1000/1000 }'; sleep 10; done } | ttyplot -2 -r -u "MB/s"
I now find myself plotting all sorts of useful stuff that would otherwise be cumbersome, including a lot of metrics from Prometheus, for which you normally need a web browser. And how do you plot metrics from Prometheus? With curl:
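The curl command itself did not survive in this excerpt, so here is a minimal sketch against the Prometheus instant-query API; the server address (localhost:9090) and metric name (node_load1) are assumptions, and jq pulls the numeric value out of the JSON reply:

```shell
# Poll the Prometheus instant-query API and stream the values to ttyplot.
# Server address and metric name are assumptions; adjust for your setup.
while true; do
  curl -s 'http://localhost:9090/api/v1/query?query=node_load1' \
    | jq -r '.data.result[0].value[1]'
  sleep 5
done | ttyplot -t "node_load1"
```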
If you need to plot a lot of different metrics, ttyplot fits nicely into panels in tmux, which also allows the graphs to run over longer time periods.
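One way to arrange several plots is a tmux session with one ttyplot per pane. This is a sketch only, reusing the ping pipeline from above plus a hypothetical vmstat one; the vmstat column number is an assumption for a typical Linux layout:

```shell
# Two stacked ttyplot panes in one detached tmux session.
# The pipelines are illustrative; the vmstat column ($15, idle cpu on
# many Linux systems) is an assumption you should verify locally.
tmux new-session -d -s plots "ping 8.8.8.8 | sed -u 's/^.*time=//g; s/ ms//g' | ttyplot -t ping"
tmux split-window -v -t plots "vmstat 1 | awk 'NR>2 { print \$15; fflush(); }' | ttyplot -t idle"
tmux attach -t plots
```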
Of course, in text mode the graphs are not very precise, but that is not the intent; I just want to be able to easily spot spikes here and there, plus see trends like up/down, and that works exactly as intended. I do dig fancy braille line graphs and colors, but they are not my priority at the moment. They may get added later; most importantly, I want the utility to work reliably on most operating systems and terminals.
You can find compiled binaries here, and the source code with examples to get you started here.
If you get to plot something cool that deserves to be listed as an example please send it on!
; LOGO Language Interpreter for the Apple-II-Plus Personal Microcomputer
; Written and developed by Stephen L. Hain, Patrick G. Sobalvarro,
; and the M.I.T. LOGO Group, at the Massachusetts Institute of
; Technology.
This repo contains the original source code and compiled binaries for MS-DOS 1.25 and MS-DOS 2.0.
These are the same files originally shared at the Computer History Museum on March 25th, 2014, and they are being (re)published in this repo to make them easier to find, to reference in external writings and works, and to allow exploration and experimentation for those interested in early PC operating systems.
License
All files within this repo are released under the MIT (OSI) License as per the LICENSE file stored in the root of this repo.
At first I thought it was simply another mirror of the original source release, which had some incredible restrictions.
Original license: To access this material, you must agree to the terms of the license displayed here, which permits only non-commercial use and does not give you the right to license it to third parties by posting copies elsewhere on the web.
However, the restrictions have been lifted, and MS-DOS 1.25 & 2.0 are now available under an MIT license.
It’s been a few years since Tenox mentioned OpenNT 4.5, and in that time the project pages, repositories, and just about everything else have vanished. It seems that the hardest thing to do with OpenNT had become finding it.
Then I found this over on vetusware, and with my curiosity piqued, I thought I’d check it out.
As mentioned, the first thing to do is combine the parts to create the single 7zip file, and then extract that.
Extracting that will give a simple ISO file, weighing in at 1.7GB!
While not obvious at first, there is a readme in the ISO that provides instructions on how to compile it. Basically it boils down to a few main points:
xcopy the CD onto a ‘W’ drive, or subst any other drive to ‘W’ as apparently the build process requires it to be on W. Did I mention that the CD needs to be copied onto the W drive?
Run the ‘setup.cmd’ file to configure the environment and get the build process ready
Run zTESTBUILD and do a clean build. It will run and eventually fail.
Run zTESTBUILD again, but do not do the clean build, and it should finish
Run \cdimg\genall to create the ISO image
So with those points basically figured out after the fact, let’s go!
The first thing to do is either create a VM to compile this in, or just xcopy and go. The big requirement is that it must be a 32-bit version of Windows, as part of the build process requires NTVDM. For simplicity’s sake, I chose Windows 2000 Server, so I could allocate 2GB of RAM and 4 CPU cores. During the build it doesn’t use that much memory; cores matter more, as various phases of the build can seemingly use any and all cores, while other parts only run on a single core.
I chose to use separate ‘C’ and ‘W’ drives for the 2000 VM. With no idea how much space to give it, I set up a 32GB W drive, which after the build takes up just under 4GB of space.
With the VM installed, and the W drive formatted, and the contents of the CD copied over, it’s time to start the build.
So basically you just answer ‘Y’ to zTESTBUILD.cmd and it’ll do its thing. For me, it took about 42 minutes to run until it failed, as expected.
Looking at the \binaries\nt directory, there were now 1,274 files built. Naturally, with the failure, this is not a complete build.
After this failure you re-run zTESTBUILD.cmd, but this time answer ‘No’ to the clean build.
This step took about 15 minutes to complete.
Checking the binaries\nt directory, there are now 1,293 files; the entire directory holds 2,624 files taking up about 120MB of space.
With the OS compiled, all that remains is to create an install CD and boot it up. Running \cdimg\genall.cmd will create the ISO image.
This compresses almost all the files, and took another 15 minutes to create the CD. After this is all done, it’s just a matter of setting up a VM to run the NT45Wks.iso file.
The first thing you notice is the extra banners on OpenNT 4.5, when compared to a retail copy of NT 4.0.
And of course the different branding during setup. One of the nice things about OpenNT is that it can format filesystems directly as NTFS, instead of the old way of first creating a FAT partition, and converting it to NTFS. This ought to bypass all the limitations of disk/partition sizes for the older NT.
Running OpenNT 4.5 on VMWare seemed to run the Win32 stuff okay, although Win16/WOW stuff immediately crashed, and MS-DOS was incredibly slow with screen redraw issues. I know that NT 4.0 builds prior to SP 6 have issues with many newer emulation/hypervisors even when CPU levels are set to regular Pentium.
MS-DOS DPMI stuff like DooM is incredibly slow, and seemingly locks up when launching.
People were excited, but then kind of dismayed, as they couldn’t really do much with it. Oddly enough, the source code release didn’t include any notes on how to build it, although everything needed is there. I went looking for information on how to build Word, to see why it keeps doing weird things on WineVDM, and I came across this thread on betaarchive:
Special props to yksoft1 for getting it to build in the first place, and to Ringding for noticing that the OS/2-supplied compiler binaries can be re-bound to run under MS-DOS using an MS-DOS extender.
So I went ahead and fired up Qemu and within an hour I had done it!
Well, this is great fun and all, but there isn’t a heck of a lot of people with Windows 2.x around anymore. And of course Word 1.1a really wants Windows 2.11 or higher. It has some hooks for what would become Windows 3.0, although I suspect there was more to it. It certainly doesn’t want to run (unmodified) under debug release 1.14.
So now that the world has gone beyond Win16 OS’s what can you do?
Well the tip of WineVDM will run it!
So now there is some new life for this old word processor.
Another fun thing in Word 1.1a is its early implementation of MDI, letting you view and work with several documents at once. Naturally you would need a massive monitor, which we all have today, although people tend to just launch more than one copy of Word to accomplish this.
So now on my 64bit machine I can not only play with the source to Word, but I can run it at unimaginable resolutions on my modern machine!
So yeah, HOURS of fun. Even though the database is only a few gigabytes, it took a while to rebuild everything, as the ‘cvs pserver’ package for Debian runs in a chroot of /var/lib/cvsd, which doesn’t play nice when your archives were all created in /var/lib/cvs. The cvs-web VM doesn’t seem to care, but the logon process for anonymous users sure does.
Anyways the following archives are online:
32v
binutils
cblood
Corridor8
CSRG
darwin0
darwinstep
djgppv1
dmsdos
doom
dynamips
frontvm
gas
gcc1x
gcc2x
gcc130-x68000
gdb
linux001
linux
lites
mach
MacMiNT
MiNT
net2
nextstep33examples
pgp
plan9next
qemu
quake1
quake2
research
rsaref
sbbs
simh
TekWar
tme
truecrypt
uae
winnt
WitchavenII
Witchaven
xinu
xnu
Say you are interested in Research Unix v6; you log on to the CVS server:
In this case I tried to keep it somewhat sane, tagging each found distro with some initials when there was more than one. As always, it’s easier (for me) to look through the web interface and then decide which one to check out.
Ok, that’s great, but how about something that has all kinds of source overlaid in various branches, like my doom repository? First log in, and then check out the default repository:
You will get errors about not having write permission in the CVS repository to set your current tag level, but that is fine, because you don’t have permission. And now if you check the directory, it’s at the Jaguar port level, as the 68000-based assembly is now in the directory:
I don’t think many will care, but well for those who do, here you go. Anyways the web browsing from unix.superglobalmegacorp.com should be working just fine. Although I did move a bunch of stuff around, so people who like to deep link, I guess you are kinda screwed.
Yes, that WinFile. Microsoft apparently went through their Windows NT 4.0 source code tree from 2007, decided to pull this tool out, and sent it out into the world. It’s available in an ‘original’ version and a ‘v10’ version, which includes the following enhancements:
OLE drag/drop support
control characters (e.g., ctrl+C) map to the current shortcut (e.g., ctrl+C -> copy) instead of changing drives
cut (ctrl+X) followed by paste (ctrl+V) translates into a file move as one would expect
left and right arrows in the tree view expand and collapse folders like in the Explorer
added context menus in both panes
improved the means by which icons are displayed for files
F12 runs notepad or notepad++ on the selected file
moved the ini file location to %AppData%\Roaming\Microsoft\WinFile
File.Search can include a date which limits the files returned to those after the date provided; the output is also sorted by the date instead of by the name
File.Search includes an option as to whether to include sub-directories
ctrl+K starts a command shell (ConEmu if installed) in the current directory; shift+ctrl+K starts an elevated command shell (cmd.exe only)
File.Goto (ctrl+G) enables one to type a few words of a path and get a list of directories; selecting one changes to that directory. Only drive c: is indexed.
UI shows reparse points (e.g., Junction points) as such
added simple forward / back navigation (probably needs to be improved)
View command has a new option to sort by date forward (oldest on top); normal date sorting is newest on top
Which is quite the list of things to add to the old WinFile.
It’s the ‘classic’ MacOS. And it requires CodeWarrior 10 to build. Apparently it’s for the PowerPC only, although I haven’t tried to compile it yet, as I foolishly just upgraded my PowerPC to 10.5, which of course has no Classic support.
It’s a nice present from Night Dive studios. I know that many people are mad at their reboot being consumed by feature bloat, but at least they aren’t going down into obscurity.
As always, enjoy!