In the ‘why do I even bother’ department, I just got placed on an RBL for spam.

yay.

$20 USD ransom

And this is what the internet has become: gone are the days of an open, connected system; instead there are tolls to be paid to trolls, as idiots believe their services are legit.

I always thought things would fall apart through censorship (which sure happens), lawyers, and idiotic patents, but I never thought of arbitrary tolls from no-name, fly-by-night companies like this “lashback.com”.

What really amazes me is that they actually demand a $20 ransom before I’m able to send email, and I foolishly gave them (and verified) my email address, so without a doubt I’ll see my SPAM volume increase drastically.  The joke is that if you try to move away from Google, you are unable to do so, as these NSA-friendly companies will no doubt do their best to keep you stuck.

Obviously I got into the wrong business, as people are scared of the big bad internet, and there is money to be made by ‘allowing’ open protocols to function.

Fun with Windows Timeout command…

(this is a guest post by Tenox)

I’m pretty good at finding bugs in Windows and I get a new one every couple of weeks or so. Today I found out this unbelievable gem:

So there is this (cmd.exe) command called timeout. It works roughly like sleep(1) under Unix: it is supposed to pause execution of a batch script for a given number of seconds. Example:
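For instance, something along these lines should hold a batch file up for thirty seconds before carrying on:

@echo off
echo starting...
rem wait 30 seconds before continuing
timeout /t 30
echo ...and thirty seconds later we carry on.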

In reality that’s just wishful thinking, because apparently this is not always the case. Sometimes it waits, and sometimes… it doesn’t.

Wait… what?

Sounds unbelievable, but it appears the timeout command uses the real-time clock for its sleep function. If you change the clock while timeout is running…

LOL 🙂

I found this because my batch scripts were getting stuck for a rather long time whenever a machine had its time changed by NTP. If the change was negative, the timeout command would wait thousands of extra seconds. When the change was positive, the integer rolled over and timeout returned immediately, causing an avalanche of problems.

So beware of timeout eating your batch scripts…

My ‘vpsland’ thing is back online

I’m not too happy with my solution, but it’ll suffice for now.

So every day I have cron fetch me a new password from makeagoodpassword.com, update the htpasswd entry, and use PHP for a simple redirect.
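A rough sketch of what that nightly job might look like; the password URL, the htpasswd path, and where the PHP page reads the password from are all assumptions on my part, not the actual setup:

#!/bin/sh
# hypothetical nightly cron job
PASS=$(curl -s https://makeagoodpassword.com/password)   # endpoint is made up for illustration
htpasswd -b /var/www/.htpasswd user "$PASS"              # update the protected directory's entry
echo "$PASS" > /var/www/html/currentpass.txt             # where the PHP redirect page can pick it up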

So let’s say I want to get neko for the i386; the link was http://vpsland.superglobalmegacorp.com/install/WindowsNT4.0-i386/games/neko98-i386.7z  Now when you click on the link you get a 404-looking page that has a link to the new directory structure, and includes the username & password (I’m not currently evil enough to generate a random user, but I may have to do that in the future…).

New password procedure

Click the click here link!

So the new path lives in the /old directory making the new location of neko98 http://vpsland.superglobalmegacorp.com/old/install/WindowsNT4.0-i386/games/neko98-i386.7z

So use the username/password combo on the page, and you’ll be good to go.

Enter the username / password

For example user/rapidred92

Sorry about all this.

George R R Martin Writes with WordStar 4!

So apparently it’s all over the news: the best-selling series Game of Thrones was written using WordStar 4.  On a dedicated MS-DOS computer, of all things.

Game of thrones..

What George probably sees..

Well isn’t that kinda cool.

As he says, he likes WordStar because it doesn’t try to think for him, and he likes MS-DOS because there are no distractions.

“I actually like it, it does everything I want a word processing program to do and it doesn’t do anything else,” Martin told Conan O’Brien. “I don’t want any help. I hate some of these modern systems where you type a lower case letter and it becomes a capital letter. I don’t want a capital. If I wanted a capital, I would have typed a capital. I know how to work the shift key.”

He said it best back in 2007:

I do my writing on a completely different computer than the one I use for email and the internet, in part to guard against viruses, worms, and nightmares like this. My work machine does not even use Windows (which I loathe). I write with WordStar 4.0 on a pure DOS-based machine. Mock if you must… but WordStar and DOS are both stable as rocks, and never give me the sort of headaches I get from Windows. (I won’t even talk about Microsoft Word, about which I have nothing printable to say).

For anyone chasing Wordstar nostalgia, you can leaf through the manual.

Using expect with Cisco IOS

Following up my JunOS post, here is a handy script I cooked up to pull the configuration from a Cisco IOS device.  The one trip-up with this stuff is that sometimes you log on to a Cisco device and land already in the enabled state, sometimes you have to enable, and depending on how it’s configured you may have to use an enable password, which may be your password (again) or a different password entirely.

So yeah, with a bunch of testing this seems to work well enough for me.

#!/usr/local/bin/expect --
set MYUSER "my_user_name"
set MYPASS "my_password"
set ENPASS "my_enable_password"

set HOST [lindex $argv 0];
set timeout 90
if {$argc!=1} {
puts "Usage is scriptname <ip address>\r"
exit 1
}

#
#
puts "Connecting to $HOST\r"

spawn ssh $HOST -l $MYUSER

# Deal with hosts we've never talked to before
# or just logon
#
expect {
"*yes/no*" {send "yes\r" ; exp_continue }
"*assword:" {send "${MYPASS}\r" }
}
# Did we land at a user prompt (>) or are we already enabled (#)?
set ALREADY 0
expect {
"\r*>" {}
"\r*#" { set ALREADY 1}
"*enied" {exit 1}
"*assword" {exit 1}
}

if { $ALREADY < 1 } {

# Not enabled yet: try enable with the login password first,
# then fall back to the separate enable password
send "enable\r"
expect "*assword:" {
send "${MYPASS}\r"
expect {
"*enied" {
send "enable\r"
expect "*assword:"
send "${ENPASS}\r"
expect {
"*enied" {
exit 1}
"\r*#" {}
}
}
"\r*#" {}
}
}
}

send "show run\r"

# Keep feeding spaces to the --More-- pager until the prompt comes back
expect {
"ore" {send " "; exp_continue}
"\r*#" {}
}

#Let's get out of here
send "q\r"
expect eof
exit 0
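If you save it as, say, get-config.exp (the name is just an example) and make it executable, running it is simply:

./get-config.exp 192.168.1.1 > 192.168.1.1-running.txt

and the whole session, running config included, lands in the redirected file.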

This is a little cleaner than the prior JunOS one, and I’ll keep on improving it.

It works with ASAs (tested 8.2) and IOS (tested 12.2).

Using expect with a JunOS device.

I’ll add more as I go along, but the first annoying thing was that there was no ‘central’ repository of configs.  Now call me old-fashioned, but I liked the old days when telnet was scriptable and I could go and talk to my Cisco stuff… but here we are in 2014, and I suppose I should break down and use that ‘expect’ package I’ve heard so much about.

So I have this Linux host that I want to use to talk to all these devices.  The first problem is that, being a new host, it hasn’t talked to anything, so it doesn’t have any of the SSH host keys cached.  Annoying.  The other thing is that some commands like to start a pager, which means sitting there slapping the space bar.  It’s much better to have the computer do it.

#!/usr/local/bin/expect --
set MYUSER "my_user_id"
set MYPASS "my_password"
set HOST [lindex $argv 0];
if {$argc!=1} {
puts "Usage is scriptname <ip address>\r"
exit 1
}

puts "Connecting to $HOST\r"

spawn ssh $HOST -l $MYUSER
# Deal with hosts we've never talked to before
# or just login
#
expect {
"continue connecting (yes/no)?"
{send "yes\r"
expect "password:"
send "$MYPASS\r"
}
# We've been here before
"password:"
{send "$MYPASS\r"}
}
# Some commands run from configure, some don't.
# It may be easier to just enter configure mode
expect "> "
send "configure\r"
expect "# "
#
# Pick a command to run
send "run show arp no-resolve\r"
#send "save terminal\r"
#send "run show lldp neighbors\r"
#
# Deal with paging. I don't want to make any
# changes at *ALL* to the device, so instead
# I deal with the pager
#
expect {
"more" {send " "; exp_continue}
"# " {send "exit\r"}
}
# We are done, get out of here!
#
expect ">"
send "exit\r"

So in this example I’ve set it up to recognize hosts it has never connected to before.  I know it’s messy that it has the password in there twice; I guess I could do variable substitution if I were more scripty, but right now I just want to get some basic things in and out of the routers all the time, such as port status and MACs, and I want it like yesterday.

The important part of the ‘more’ bypass is the exp_continue keyword, which took a lot of googling around because everyone “expects more”.  It’s kind of annoying when your keywords are common English words.

And as you can see, this is a good enough base for doing some more complicated things.  Of course I wouldn’t roll changes out automatically, but for the adventurous there you go.  It wouldn’t take much to adapt this for Cisco stuff, as the CLI operates more or less the same.

The real fun begins with parsing all this stuff.
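As a small taste of that, something like this would pull address, MAC, and interface out of the arp output above (the script name is hypothetical, and the column order is from memory, so check it against what your JunOS version actually prints):

./get-arp.exp 192.168.1.1 | grep -Ei '([0-9a-f]{2}:){5}[0-9a-f]{2}' | awk '{print $2, $1, $3}'

From there it’s easy to diff against yesterday’s output or shove it into a database.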

It never rains, but it pours.

fragready’s ticketing system.

So yeah, I’m still without my “dedicated” server,  and now even fragready’s portal is broken.  I just want to get on the box, and do a secure wipe myself.

So at least I have this super-discount VM in Germany to keep my blog running.  Before, I was hosting Exchange on KVM on the dedicated server.  Now, however, I’m going to pull all my crap back home: I set up an OpenVPN connection from my home to the VPS, and from there got some static routing working well enough that I can host an Exchange server at home and use postfix to store & forward.  A pretty simple & standard setup.
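The postfix side of the store & forward is just the usual relay arrangement; roughly something like this, where the domain and the tunnel IP are placeholders rather than my actual values:

# /etc/postfix/main.cf on the VPS
relay_domains = yourmaildomain.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- hand the domain's mail to the Exchange box over the VPN
yourmaildomain.com    smtp:[10.8.0.2]

followed by a postmap /etc/postfix/transport and a postfix reload.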

Well, I go to update my MX records, and what do I get?

websitespot

Now the people I bought my domain names from, websitespot.com, are down.  Even “Down for everyone or just me” has them down.

I swear, I can’t catch a break on this one.

My fragready server has been taken offline because of a ‘virus’.

And let this be a warning to all.

The Data center has null routed because of virus complaints originating from 216.231.130.102.

Sadly I haven’t heard back as to exactly what this virus is or was, and what is going on.  Just that a ‘complaint’ had been logged against my IP address.

So googling my IP address + virus turns up more automation gone awry.

Virus Total...

So as you can see, this “Virus Total” is listing a bunch of my files as being infected.  The first thing I noticed is that it’s NetHack, and not even normal i386 Win32 builds: a Windows CE build for the i386 (which is not a normal Win32 exe), and NetHack for MIPS.

And looking at how they score me, 2/52: these are the sites that now scour around looking for “viruses”, and false positives like this will get your server blacklisted.

URL: http://vpsland.superglobalmegacorp.com/install/WindowsCE/nethack/nethack3.4.3-WinCE-2.11-x86.zip
Detection ratio: 2 / 52
Analysis date: 2014-04-13 05:37:54 UTC ( 1 day, 17 hours ago )
    URL Scanner Result
    CLEAN MX Malicious site
    Websense ThreatSeeker Malicious site
    ADMINUSLabs Clean site
    AegisLab WebGuard Clean site
    AlienVault Clean site
    Antiy-AVL Clean site
    AutoShun Unrated site
    Avira Clean site
    BitDefender Clean site
    C-SIRT Clean site
    CRDF Clean site
    Comodo Site Inspector Clean site
    CyberCrime Clean site
    Dr.Web Clean site
    ESET Clean site
    Emsisoft Clean site
    Fortinet Unrated site
    G-Data Clean site
    Google Safebrowsing Clean site
    K7AntiVirus Clean site
    Kaspersky Unrated site
    Malc0de Database Clean site
    Malekal Clean site
    Malware Domain Blocklist Clean site
    MalwareDomainList Clean site
    MalwarePatrol Clean site
    Malwarebytes hpHosts Clean site
    Malwared Clean site
    Netcraft Unrated site
    Opera Clean site
    PalevoTracker Clean site
    ParetoLogic Clean site
    Phishtank Clean site
    Quttera Clean site
    SCUMWARE.org Clean site
    SecureBrain Clean site
    Sophos Unrated site
    SpyEyeTracker Clean site
    StopBadware Unrated site
    Sucuri SiteCheck Clean site
    ThreatHive Clean site
    URLQuery Unrated site
    VX Vault Clean site
    WOT Clean site
    Webutation Clean site
    Wepawet Unrated site
    Yandex Safebrowsing Clean site
    ZCloudsec Clean site
    ZDB Zeus Clean site
    ZeusTracker Clean site
    malwares.com URL checker Clean site
    zvelo Clean site

Which means that hosting any kind of files now leaves you exposed: some random people with zero accountability can screw up your hosting.

Worse for me is that my automated backup hadn’t been running frequently enough.  I’m now suffering through low bandwidth, and replicating all the crap I’ve acquired through the years on vpsland.superglobalmegacorp.com is just too much.  And the possibility of being shut down “just because” is now too much as well.  I kind of liked having a dumping ground for old stuff, but that is no longer permissible.

    So where to go from here?

    I can password lock the site, and require people to contact me for access.  What a pain.  I’m sure I could automate it, but I don’t want these arbitrary systems to remove me again so that is out of the question.

    I could use some kind of certificate based encryption on everything, and provide a link to the certificate and give instructions on how to use it.  But obviously this will discourage people who are unfamiliar with the command line, and with OpenSSL (and all the great news it’s had the last week!).

Another option is to use OpenVPN and permit people to access vpsland from within that.  This removes it from public search, but does allow people to connect in a somewhat easier way.  And it doesn’t involve something tedious like downloading OpenSSL, getting my server’s key, downloading the wanted file, decrypting the file, and then decompressing it.
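To be concrete, that tedious flow would look something like this for the end user, with the key distribution and file names being purely hypothetical since I haven’t actually built it:

# one-time: fetch the shared key I'd publish, then per file:
openssl enc -d -aes-256-cbc -pass file:vpsland.key -in neko98-i386.7z.enc -out neko98-i386.7z
7z x neko98-i386.7z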

    I’ve pulled the latest posts out from google’s cache.  I’ll try to put up the comments but I can’t promise much there.  As it stands right now, I haven’t heard back from fragready in over 22 hours, and at this point I want to just get my blog back in operation.

    Sorry for the hassle.

    –update:

    Finally got a response, but not the one I was hoping for.

    In situations such as this, where a server has been compromised, we require the server to be reinstalled with a fresh OS installation. Please let us know how you would like to proceed

    So basically a false positive on the internet will get your data destroyed.  Well this sucks.

    Mirroring Wikipedia

So I had an internet outage, and was thinking: if I were trapped on my proverbial desert island, what would I want with me?

    Well wikipedia would be nice!

So I started with this ExtremeTech article by Sebastian Anthony, although it has since drifted out of date on a few things.

    But it is enough to get you started.

    I downloaded my XML dump from Brazil like he mentions.  The files I got were:

• enwiki-20140304-pages-articles.xml.bz2 10GB
• enwiki-20140304-all-titles-in-ns0.gz 58MB
• enwiki-20140304-interwiki.sql.gz 728KB
• enwiki-20140304-redirect.sql.gz 91MB
• enwiki-20140304-protected_titles.sql.gz 887KB

The pages-articles.xml is required.  I added in the others in the hope of fixing some formatting issues.  I re-compressed it from the 10GB bzip2 down to 8.4GB with 7-Zip.  It’s still massive, but when you are on a ‘slow’ connection every saved GB matters.

    Since I already have apache/php/mysql running on my Debian box, I can’t help you with a virgin install.  I would say it’s pretty much like every other LAMP install.

    Although I did *NOT* install phpmyadmin.  I’ve seen too many holes in it, and I prefer the command line anyways.
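For what it’s worth, on Debian 7 the stack amounts to roughly the following; package names are from memory, so double-check them:

apt-get install apache2 php5 libapache2-mod-php5 php5-mysql mysql-server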

    First I connect to my database instance:

    mysql -uroot -pMYBADPASSWORD

    And then execute the following:

create database wikimirror;
create user 'wikimirror'@'localhost' IDENTIFIED BY 'MYOTHERPASSWORD';
GRANT ALL PRIVILEGES ON wikimirror.* TO 'wikimirror'@'localhost' WITH GRANT OPTION;
show grants for 'wikimirror'@'localhost';

    This creates the database, adds the user and grants them permission.

Downloading and setting up MediaWiki 1.22.5 is pretty straightforward.  There is one big caveat I found, though: InnoDB is incredibly slow for loading the database.  I spent a good 30 minutes trying to find a good solution before going back to MyISAM with utf8 support.
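The installer lets you pick the storage engine and character set when you set up the database; as far as I can tell the equivalent knob in LocalSettings.php, used for any tables created later, is:

$wgDBTableOptions = "ENGINE=MyISAM, DEFAULT CHARSET=utf8";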

With the empty site created, I do a quick backup in case I want to purge what I have.

    /usr/bin/mysqldump -uwikimirror -pw1k1p3d1a wikimirror > /usr/local/wikipedia/wikimedia-1.22.5-empty.sql

This way I can quickly revert, as constantly re-installing MediaWiki is… a pain.  And it gets repetitive, which is a great way to introduce errors, so it’s far easier to drop the database and user, re-create them, and reload the empty database.
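Reloading the empty copy is then just a matter of piping the dump back in:

mysql -uwikimirror -pw1k1p3d1a wikimirror < /usr/local/wikipedia/wikimedia-1.22.5-empty.sql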

    When I was using InnoDB, I was getting a mere 163 inserts a second. That means it would take about 24 hours to import the entire database!!  Which simply is not good enough for someone as impatient as me.  As of this latest dump there are 14,313,024 records that need to be inserted, which would take the better part of forever to do.

So let’s make some changes to the MySQL server config.  Naturally, back up your existing /etc/mysql/my.cnf to something else first; then I added the following bits:

key_buffer = 1024M
max_allowed_packet = 384M
query_cache_limit = 18M
query_cache_size = 128M

I should add that I have a lot of system RAM available, and that my box is running Debian 7.1 x86_64.

Next you’ll want a slightly modified import program.  I used the one from Michael Tsikerdekis’s site, but I modified it to run the ‘precommit’ portion on its own.  I did this because I didn’t want to decompress the massive XML file onto the filesystem.  I may have the space, but it just seems silly.

    With the script ready we can import!  Remember to restart the mysql server, and make sure it’s running correctly.  Then you can run:

bzcat enwiki-20140304-pages-articles.xml.bz2 | perl ./mwimport2 | mysql -f -u wikimirror -pMYOTHERBADPASSWORD --default-character-set=utf8 wikimirror

And then you’ll see the progress flying by.  While it is loading you should be able to hit a random page and get back some Wikipedia-looking data.  If you get an error, well, obviously something is wrong…

With my slight modifications I was getting about 1,000 inserts a second, which gave me…

     14313024 pages (1041.174/s),  14313024 revisions (1041.174/s) in 13747 seconds

    Which ran in just under four hours.  Not too bad!

    With the load all done, I shut down mysql, and then copy back the first config.  For the fun of it I did add in the following for day to day usage:

key_buffer = 512M
max_allowed_packet = 128M
query_cache_limit = 18M
query_cache_size = 128M

I should add that the ‘default’ small config was enough for me to withstand over 16,000 hits a day when I got listed on reddit.  So it’s not bad for small-ish databases (my WordPress is about 250MB) that see a lot of action, but Wikipedia is about 41GB.

Now for the weird stuff.  There are numerous weird errors that’ll appear on the pages.  I’ve tracked the majority down to Lua scripting now being enabled on the template pages of Wikipedia.  So you need to enable Lua on your server, and set up the Lua extensions.

The two extensions that just had to be enabled to get things looking half right are Lua and Scribunto.
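Getting them onto the box is roughly this; the lua5.1 package name is the Debian one, and the paths and tarball names are just illustrative, so use whatever matches your wiki and MediaWiki release (1.22 in my case):

apt-get install lua5.1
# fetch the Lua and Scribunto extensions for your MediaWiki release from mediawiki.org,
# then unpack them under the wiki's extensions/ directory, e.g.:
cd /var/www/wikimirror/extensions
tar xzf ~/Lua-REL1_22.tar.gz
tar xzf ~/Scribunto-REL1_22.tar.gz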

    With this done right, you’ll see Lua as part of installed software on the version page:

mediawiki installed software

And under installed extensions:

    wikimedia installed extensions

    I did need to put the following in the LocalSettings.php file, but it’s in the installation bits for the extensions:

$wgLuaExternalInterpreter = "/usr/bin/lua5.1";
require_once("$IP/extensions/Lua/Lua.php");
$wgScribuntoEngineConf['luastandalone']['luaPath'] = '/usr/bin/lua5.1';
require_once( "$IP/extensions/Scribunto/Scribunto.php" );

    Now when I load a page it still has some missing bits, but it’s looking much better.

    The Amiga page...

Now I know the XOWA people have a torrent set up with about 75GB worth of images.  I just have to figure out how to get those and parse them into my Wikipedia mirror.

I hope this will prove useful for someone in the future.  But if it looks too daunting, just use XOWA.  Another solution is WP-MIRROR, although it can apparently take several days to load.