Categories
PHP Programming

Fixing the Apache “unable to include…” localised errors

Apache includes a nice feature that displays localised error pages. This makes use of Server Side Includes (SSI).
When you use PHP and also have it parse .html documents, your localised errors break, because the PHP handler interferes with the server-side includes used by the error pages. In fact, because the initial error generates new errors, it becomes a mess. You typically see this in your error log:

unable to include "include/top.html" in parsed file /usr/share/apache2/error/HTTP_NOT_FOUND.html.var
unable to include "include/bottom.html" in parsed file /usr/share/apache2/error/HTTP_NOT_FOUND.html.var

I fixed this by using a new extension .err for the error documents.
Snippet from my apache2.conf:

    # These directives presumably sit inside the <Directory "/usr/share/apache2/error"> block
    AllowOverride None
    Options FollowSymLinks Includes

    # serve .err files as HTML and run them through the SSI filter,
    # instead of the PHP handler that is bound to .html
    AddType text/html err
    AddOutputFilter Includes err

    AddHandler type-map var
    Order allow,deny
    Allow from all
    LanguagePriority en cs de es fr it nl sv pt-br ro
    ForceLanguagePriority Prefer Fallback

Then go into /usr/share/apache2/error and rename all files to have the .err.var extension (instead of .html.var):

rename 's/\.html\.var/\.err\.var/g' *

Rename the files in the includes dir as well:

rename 's/\.html/\.err/g' includes/*

Finally, replace the strings inside the type-map files too:

perl -pi -e 's/\.html/\.err/g' *
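
If your Apache configuration references the error documents by name, those ErrorDocument directives presumably need the new extension too (a hedged example; your distribution may keep these in an included conf file):

# point the error documents at the renamed type maps, one line per status code
ErrorDocument 404 /error/HTTP_NOT_FOUND.err.var
ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.err.var

After these changes, reload Apache (e.g. /etc/init.d/apache2 reload on Debian).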

Your /usr/share/apache2/error tree now looks something like this:

contact.err.var
HTTP_BAD_GATEWAY.err.var
HTTP_BAD_REQUEST.err.var
HTTP_FORBIDDEN.err.var
HTTP_GONE.err.var
HTTP_INTERNAL_SERVER_ERROR.err.var
HTTP_LENGTH_REQUIRED.err.var
HTTP_METHOD_NOT_ALLOWED.err.var
HTTP_NOT_FOUND.err.var
HTTP_NOT_IMPLEMENTED.err.var
HTTP_PRECONDITION_FAILED.err.var
HTTP_REQUEST_ENTITY_TOO_LARGE.err.var
HTTP_REQUEST_TIME_OUT.err.var
HTTP_REQUEST_URI_TOO_LARGE.err.var
HTTP_SERVICE_UNAVAILABLE.err.var
HTTP_UNAUTHORIZED.err.var
HTTP_UNSUPPORTED_MEDIA_TYPE.err.var
HTTP_VARIANT_ALSO_VARIES.err.var
include
README

Now your localised errors should be back, and your logs will not be flooded anymore.

Categories
Security

Brute-forcing PKCS12 passphrases with OpenSSL

For a project I needed to recover the password that was used to encrypt a PKCS12 key.

I found a nice patch for openssl by Aion, but it did not compile on any of my machines.

After some trial and error, I was able to compile it under Debian Woody. For your convenience, I have put the openssl binary online. It runs on i386 Linux systems.

Usage:

./openssl pkcs12 -in mycert.p12 -aion /usr/share/dict/words

Remember: it is ancient OpenSSL 0.9.7c, so it is full of security bugs; only use it to recover lost passwords.
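
If the patched binary does not run on your system, a slower but portable alternative is a plain shell loop over the wordlist with a stock openssl (a sketch; mycert.p12 and the dictionary path as in the example above):

#!/bin/sh
# Try each candidate passphrase; openssl pkcs12 exits non-zero when the
# MAC verification fails, and zero when the passphrase is correct.
while IFS= read -r word; do
    if openssl pkcs12 -in mycert.p12 -noout -passin "pass:$word" 2>/dev/null; then
        echo "Passphrase found: $word"
        exit 0
    fi
done < /usr/share/dict/words
echo "No match" >&2
exit 1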

Categories
VMware

Rename VMware Virtual machines on ESX

Ever wanted to rename a virtual machine, only to find out that the “Rename” option merely renames the “friendly name” in VirtualCenter? You could clone the VM to a new one with the proper name, but that usually requires a lot of downtime.
There is a quicker way:

  1. Shut down the VM
  2. Choose “Remove from inventory”
  3. Log into the ESX console and cd to the place where your VM is
  4. Rename the directory
  5. Rename all the files in the directory
  6. Change the names in the vmdk, vmsd, vmx, and vmxf files
  7. Browse the datastore and add the new vmx to the inventory

Or:

cd /vmfs/volumes/vmfs-data6               # go to the datastore that holds the VM
mv OldVM NewVM                            # rename the directory
cd NewVM
rename OldVM NewVM *                      # rename all files (util-linux rename on the ESX console)
perl -pi -e 's/OldVM/NewVM/g' NewVM.vm*   # fix the references inside the vmdk/vmsd/vmx/vmxf files
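
If you prefer to stay on the console for step 7 as well, ESX 3.x can register the renamed VM directly (a sketch, using the paths from the example above):

# register the renamed VM with the host instead of using the datastore browser
vmware-cmd -s register /vmfs/volumes/vmfs-data6/NewVM/NewVM.vmx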

Categories
Windows

XP Service Pack 3 breaks FAST

After slipstreaming Service Pack 3 into my Windows XP Professional CD image, I found that you cannot use FAST (the Files and Settings Transfer Wizard) to import a backup made with FAST on an XP SP2 machine. Attempting to do so will yield this error:

Your migration store was created with a previous version of File and Settings Transfer Wizard.

This is of course not very nice, especially if you have already reinstalled the machine in question.

The fix is to apply hotfix 896344 on the old XP SP2 machine, and then make a backup using FAST.
This backup will be suitable for restoration on an SP3 machine.

The Microsoft page on this hotfix mentions the issue as applying only to the x64 Edition; however, it also applies to pre-SP3 versions of FAST.

Categories
Windows

Mozilla Marktplaats prison bars bug

For years there has been a bug in the code of Marktplaats.nl that, when you delete an ad, leaves the site stuck behind a kind of prison of grey bars. It does not matter what you do on the site (log out, log in, search): the screen keeps showing a pattern of bars. The only thing that helps is closing the window/tab.

This bug was also filed years ago, but to date nothing has been done with it.
I happened to run into it again today. This happens with Firefox 3.0; I had hoped the bug would no longer occur with this version, but alas.

Categories
Computers Windows

Windows XP for power users

Windows Vista was released over 18 months ago, and my initial reaction was that this operating system is the most bloated, sluggish crap ever released by Microsoft. Everyone was hoping that Service Pack 1 would relieve some of the pain, but unfortunately Microsoft failed to put in any significant performance improvements. I have come to the conclusion that Vista remains crap and should not be used by any self-respecting computer user. It might be an option for the average clueless user who has no notion of security, but anyone beyond that experience level, especially power users like system administrators, should not use any flavour of Vista, but Windows XP Professional.

XP has its limitations too, but with the right kind of measures it can be a very good and safe computer experience. Some of these measures and guidelines:

Always do a clean install

Whatever computer you want to start using, always reinstall it before putting it into service. It might sound strange, but this holds especially true for new machines that come with XP preinstalled. Vendors like Dell and HP are known for putting huge amounts of crap on their machines. Software vendors want to sell their stuff to customers, and make deals with PC manufacturers to put trial versions on new PCs. This means that a new PC is in fact partly sponsored by the software companies. It is easy to see that this strategy is not in the best interest of the actual user of the PC.
It is not uncommon for new PCs to come with 3 different (incompatible) virus scanners installed, 2 different CD/DVD burning programs, a couple of firewall programs, and loads of other crap.
The way to get rid of this is not to uninstall everything, but to wipe everything and reinstall. It is recommended to slipstream the latest Service Pack into your installation CD-ROM (SP3 at the moment), to avoid the trouble of installing it afterwards. A possible pitfall is the drivers; these usually reside somewhere on disk, so take care to save them onto a USB stick first.
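
The slipstream step itself is a one-liner if you have the standalone SP3 installer and a local copy of the CD contents (a sketch; the target directory C:\XPCD is just an example):

WindowsXP-KB936929-SP3-x86-ENU.exe /integrate:C:\XPCD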

Don’t install driver packs, but manually point to INF files

Another trap users tend to fall into is clicking the binary installers of the various drivers. While this is not a huge problem, things can potentially get screwed up during this step:

  • Wireless drivers that disable the Windows Wireless Zero Configuration (WZC) service. This is known to happen with Intel cards, and some of the Sitecom cards. Having a custom wireless configuration tool bloats your system and makes debugging very hard.
  • Drivers that install all kinds of management applications. Examples of this are vendor-specific control panels for video, audio, etc. The standard Windows control panels are perfectly capable of controlling everything. Only install these if you absolutely need their functionality.

A convenient way to circumvent this is to extract the actual drivers by opening the installer binary with 7-Zip, and then point Windows at those driver files.
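
For example, with the 7-Zip command line version (a sketch; the installer name R155986.exe is hypothetical):

7z x R155986.exe -oC:\drivers\wireless

Then point the Found New Hardware wizard at the INF files in C:\drivers\wireless.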

Don’t run as Administrator

The University of Michigan has a good paper on how to do this. The introduction speaks for itself:

You’ve heard it a thousand times: “Don’t run as admin”. Yet you continue to tempt fate. You log in with admin credentials and surf the wild wild web through whatever minefield it takes you. You open email and attachments with abandon, confident in the fact that you’ve never been hacked before. Yet every once in a while, your heart starts to beat a little faster. Perhaps it happens when you land on some web site you didn’t expect, or when you double click on that unsolicited email or launch some video clip that your friend sent you. Your heart accelerates because you know, deep down, it’s just a matter of time before you do get hacked. And then, because you’re logged in with administrative credentials, you know the price could be big. If you’re lucky, only your ego will be bruised. Worse, the integrity of your system will be compromised and personal as well as private University information will belong to someone else. In fact, it’s entirely feasible that your system has already been compromised and you’re not even aware of it. How do you know that it hasn’t?

If you’re pushing your luck by logging in with administrative credentials, then read this paper. We’ll illuminate the “tips and tricks” necessary to start running as user. You’ll feel better running in a less privileged context, and you’ll be making a critical contribution to the security posture of your unit and the University.

I have been non-admin for half a year now and I have no problems whatsoever using my computers. However, right after installing a system you typically spend some time configuring it:

  • installing applications
  • installing printers
  • installing backup scripts
  • customising system options
  • configuring network settings and VPN connections
  • configuring power options

A practical recommendation is to leave yourself Admin until you have installed and configured your system to the extent that you do not need admin rights during daily use. This is usually a few weeks after installation. At that point, make yourself a regular user, and switch to Admin only if needed. There is a small list of issues that require manual intervention, but it can be done, and it is recommended to spend some time figuring out how to fix them, instead of becoming admin again. The Michigan University PDF already contains practical tips for some of these issues, but I ran into some additional problems that weren’t covered there.

Usually you can right-click and select Run as… to run stuff as admin. You can also use the poor man’s sudo for Windows: runas. However, you have to type an awkward string each time:

runas /user:administrator regedit

To make things more convenient and appeal more to the power user, place a text file with this content in your WINDOWS directory and name it sudo.bat 😉 :


@ECHO OFF
REM poor man's sudo: run the supplied command line as Administrator
RUNAS /USER:Administrator "%*"
EXIT

Now you press Windows-R, type “sudo regedt32”, and off you go!
When you are admin, you can directly run MMC files, but when using sudo or runas you need to supply the host application as well. For instance, to run the Group Policy Editor, you would run sudo mmc gpedit.msc

Here is an overview of common admin tasks and how to conveniently run them. Note that sometimes there is no option to right-click and select “Run as…”, so you have to run commands from a shell (you are not afraid of that anyway, are you?).

Formatting removable media        Can be fixed with GPO
Configure printers                Shift-click on printer -> Run as… -> Configure
Configure networking              Add yourself to the Network Configuration Operators group
Group Policy Editor               sudo mmc gpedit.msc
Add/Remove Programs               sudo control appwiz.cpl
Teletubby user control panel      sudo control userpasswords
Normal user control panel         sudo control userpasswords2
Complete Control Panel            sudo control

More examples of how to run specific Control Panel items are listed on http://support.microsoft.com/kb/192806/.

Probably the last option is the best compromise between usability and amount of typing.

Try to stick with default options

Just because it is possible to customise about every aspect of the operating system and the user interface doesn’t mean that you should do so. Some of these customisations lead to poor performance. A good example in this respect is installing 12 Mb desktop wallpaper images. The default theme, however (teletubby style), is eligible for replacement. For best results, choose the “Windows Classic” style, and after that choose “Adjust for best performance” in the Visual Effects tab of the Performance Options.

Categories
Computers VMware

Areca releases driver for VMware

Areca has just released a beta driver for use with VMware ESX 3.5 🙂

This means that finally all the advantages of the Areca hardware can be used to build VMware systems.

I consider the Arecas one of the best (if not the best) professional SATA RAID controllers out there.

I have used Dell servers a lot, because they offer more bang for the buck. However, Dell keeps on using crappy RAID controllers that are full of bugs. Over the last few years, it happened several times that Dell servers went down because of RAID controller problems, such as bugs in firmware.
I got really depressed looking at the firmware history of their shitty PERC controllers: they started out the naming scheme with letters, but they had to switch to another scheme as soon as they passed the 27th firmware update. How’s that for mature code. Oh, and almost every update is labeled critical by Dell.
The PERC controllers that ship with Dell servers perform OK-ish, but they are hard to manage, they don’t have cool features like online RAID level migration, and at the time did not offer SATA RAID.
Luckily we have an IBM Fibre Channel box to store our data on, so if one of the Dells goes down again (you know it will, once you’ve seen the driver and firmware history) we don’t risk losing too much data.

It was very frustrating to be forced to buy servers that contain sub-optimal hardware when you know there is much better kit out there. But now, with the Areca drivers available, I can create a multi-terabyte 1U VMware server for our disaster recovery plan.

When I get my hands on an Areca controller I will see how VMware behaves with that – to be continued.

Categories
Computers Ubuntu

Custom Ubuntu software repository

Sometimes I need to recompile software packages on Ubuntu, for instance because of a special feature. I use the resulting packages on a number of servers, but manually installing with dpkg quickly becomes a pain once the number of machines becomes significant. Therefore I have created a custom software repository that can be used with apt-get. Add the following line to your /etc/apt/sources.list:

deb http://www.tienhuis.nl/ubuntu feisty main restricted universe multiverse

Run apt-get update, and you’re ready to install the packages. The repository is not entirely finished, but I plan to have binary packages for dapper, edgy, feisty, and gutsy, on i386 and powerpc. The version numbers have been bumped up so they will overwrite the original packages, so be careful.
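
For example, to pull in the rebuilt netatalk package:

sudo apt-get update
sudo apt-get install netatalk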

During installation you will get this warning:

WARNING: The following packages cannot be authenticated!
  netatalk
Install these packages without verification [y/N]

This is because the packages are not signed. Even if I did sign them, you would still get this warning, because a default Ubuntu system does not have my keys. So just choose y to install.

As mentioned before, use these packages at your own risk.

Categories
Ubuntu

Music with Ubuntu

Play it on remote HiFi speakers

After several Linux trials and errors over the last couple of years, I finally switched to Ubuntu Gutsy as my main OS on my laptop. The user experience of current Linux distros is good enough, and Linux does not (yet) suffer from the massive amount of malware and viruses that plague the Windows operating system. Another welcome benefit is that I can now do all my perl/python/ruby programming work for school on my home machine, without any fuss. Also very nice is the ability to encrypt your partitions, so if your laptop gets stolen or lost (as they do), your data is reasonably safe.

[Image: Lex Light 533 MHz mini PC]
What was a big problem, though, was playing music with my shiny new Ubuntu laptop. In my situation I have a nice stereo set with good speakers. Sitting next to that is my home server, running Ubuntu. I deliberately picked a very small server that does not make noise or suck up much power: the Lex Light. This is a completely silent (no moving parts) booksize PC with everything onboard:

  • 533MHz VIA Centaur CPU
  • 256Mb RAM
  • 10/100 Mbit ethernet
  • 1Gb Compact Flash (/dev/hdd)
  • audio
  • VGA
  • 2 x USB

I have done some tests with a power meter and it uses about 10 Watts 🙂 The audio output jack is connected to my amplifier, the ethernet is hooked up to my home network.

My laptop running Windows XP had WinAmp installed, with an obscure Russian plugin that sends the raw audio frames to a small daemon on my home server. This way I was able to play music on my big amplified speakers. The whole setup was quite buggy, so the daemon would crash sometimes, and most of the time minimizing the Winamp window would make it disappear from Explorer. Only running the Winamp binary again would make it show up 😉

After installing Ubuntu 7.10 a few weeks ago, I really missed this great way of playing music, so I went on to find a Linux alternative for my remote sound system. It turned out to be quite easy 😉 I installed Esound (or ESD, the Enlightened Sound Daemon). ESD can be configured to run in daemon mode and accept connections via TCP/IP. On my Ubuntu laptop I installed XMMS and the XMMS-ESD plugin, configured the right IP address and hey presto, remote sound system 🙂
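
For reference, a minimal sketch of the server side (flags as documented in esd(1); 16001 is ESD's default TCP port):

# on the home server: listen on TCP and accept connections from other hosts
esd -tcp -public &

On the client, the XMMS-ESD plugin only needs the server's address, or you can set the ESPEAKER environment variable to host:16001.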

Seeking in HTTP streams with XMMS

There was one other annoyance though. All my MP3 music sits on a server that streams via HTTP. In Windows XP, I was a big fan of Winamp, which had great support for seeking (skipping inside a song) in these HTTP streams. However, it seems that XMMS (both the upstream source and the Ubuntu package) does not support this. Not very nice if you listen to 2-hour mixes 🙁 As you can see in the original bug report for this issue, this was recognised back in 2001. It is still not solved, but somebody did write a patch to implement it in 2004. Luckily this patch applied cleanly to the Ubuntu sources, and after installing it XMMS now does have a slider to seek in the stream 🙂 (It is not as good as Winamp’s seek support, however: if you skip to a position near the end you sometimes get a nice 416 Requested Range Not Satisfiable error. I think this is because the patch makes only basic assumptions about byte ranges, which is inaccurate for VBR streams.)

You can download this recompiled Ubuntu XMMS package from my Ubuntu software repository.

Categories
Computers VMware

Multiple full VM backups using VCB, rsync, OpenSSH and VSS

The problem

Our shiny new VI3 setup works really well, but the backup chapter still needs work. I P2V-ed all our Linux boxes to VMs, so the existing rsnapshot file-level backups still run. So far so good.
But in addition to file-level backups, I also want full VM backups, each day, both on-site and off-site. As a matter of fact, I also want some sort of versioning system, to have multiple full, off-site VM backups. I don’t want to install some mega-expensive disk array that contains X times the ~900 Gb of space all my raw VMs suck up.
What I want is a very simple, efficient and elegant setup, without all kinds of fancy stuff and graphical bells and whistles. I’m running UNIX systems for a living so I’m not afraid of console utilities.

After doing some research I was unable to find any existing solutions, and the ones that come close are commercial and expensive, or require too much complicated crap to be installed.

The solution

Our VMware license includes a license for VMware Consolidated Backup (VCB). Being a great company, VMware has plugins and manuals for all the major closed source, expensive, buggy black-box enterprise backup suites, but documentation about their command line tools is pretty lame and comes down to one lousy console screen of help text.
Luckily, it seems that in order to make full VM backups you actually need just one command (vcbMounter.exe).
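
A sketch of such a full VM export (the hostname, credentials, VM name and target directory below are placeholders, not our actual setup):

REM sketch only: export a full VM backup via VirtualCenter
vcbMounter.exe -h virtualcenter.example.com -u backupuser -p secret ^
    -a ipaddr:myvm.example.com -r D:\vcb-backups\myvm -t fullvm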

Since the open source program rsync has served me really well in the past, I decided to use it again for our VMware backups.
My setup uses two machines (Windows 2003 Server, as VCB runs only on Windows): one hooked up to our SAN, running the VCB software, and one off-site machine housing the archive. Both machines are modest 1U Supermicro boxes, with 4 x 1 Tb SATA in RAID5, on Areca controllers. They are connected via our dedicated WAN link at 100 Mbit/s.

It basically comes down to:

  1. Full VM backups are created locally with VCB; old backups are first deleted (because VCB refuses to overwrite old backups)
  2. The new backups are transferred to the remote site efficiently and securely using rsync and OpenSSH
  3. The off-site server uses Volume Shadow Copy to create a history of full VM backups

Steps 1 and 2 are done using this batchfile (rename to .bat/cmd).
By using the --inplace option, we actually update the old backup files on the remote server. This is an important detail, because without it each file would be deleted and recreated, thereby killing the efficiency of the VSS part later.
The rsync algorithm causes only the diffs to go over the line. The backup of all our VMs together is about 500 Gb (VCB strips out redundant unused space, saving about 400 Gb already at this stage).
The link to our remote site is 100 Mbit/s, so in the most optimistic theoretical case it can transport 36 Gb/hour, which would make a full synchronization take at least 13-14 hours. In practice it would take even longer, and thus be impractical. With rsync, only the daily diffs have to be sent.
In our situation, with 10 VMs running websites, e-mail, databases, file servers, applications, etc., the first results show that the daily diffs are somewhere between 20 and 30 Gb. This would theoretically take less than an hour to transport.
The practical situation is a lot different: although the actual amount of changed data is reasonably small, running rsync with its sliding checksums over half a terabyte of binary chunks also takes hours.
My real-world numbers show that the VCB backups themselves take about 1.5 hours to execute, yielding a directory with ~500 Gb of backups. This then gets rsync-ed to the remote site, which takes 5-8 hours (as seen during the last week). This is a workable solution for a daily schedule.
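
For reference, a sketch of the rsync invocation in that batch file (hypothetical paths and hostname; with cwRsync a Windows drive shows up as /cygdrive/<letter>):

rsync -av --inplace --delete -e ssh /cygdrive/d/vcb-backups/ backupuser@offsite.example.com:/cygdrive/d/vcb-backups/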

The partition that houses the data on the remote server has Volume Shadow Copy enabled, and creates Shadow Copies daily at the appropriate time (30 minutes before the other site initiates the rsync step).
The following picture shows that we now have 5 full copies available of our 500 Gb directory, but instead of an extra 5 x 500 Gb = 2.5 Tb, it merely takes up an extra 120 Gb:
[Screenshot: Shadow Copies dialog box on Windows 2003]
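
The schedule was set through the GUI, but on Windows 2003 a shadow copy can also be created from a scheduled task (a sketch; the volume letter is an assumption):

REM create a shadow copy of the backup volume
vssadmin create shadow /for=D: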

At this stage we’ve got:

  • Daily full backups of our VM’s on-site
  • Multiple full backups of our VM’s off-site

Caveats

  • To prepare everything, I needed a full copy of the 500 Gb tree on both machines. Initially I planned on using rsync and OpenSSH, but it turned out that the OpenSSH daemon on Windows is very slow. With my systems (dual Xeon 3 GHz etc.) connected via gigabit, the throughput maxed out at about 6-7 Mb/sec (Linux to Linux: > 30 Mb/sec).
    Instead of using rsync/OpenSSH, I simply mounted the disk with CIFS and copied over the whole tree.
    Subsequent transfers are limited to about 12 Mb/sec anyway because of our uplink speed, but that is not a problem in the real-world scenario.
  • I have used the cwRsync package to install rsync and OpenSSH on Windows. OpenSSH with public key authentication between two Windows systems is possible and runs perfectly fine, but setting things up can be a bit hairy, especially if you’re used to UNIX systems…
  • To secure things, you should restrict access to the OpenSSH daemon; I have used the built-in Windows Firewall to accomplish this, and it works fine.
  • This article describes only half of the story. The other half is called restore. The restore utility that comes with VCB (vcbRestore.exe) is pretty buggy and inflexible. It is hard to restore VMs to a different place or with a different name. As some people have found out, it is possible to use VMware Converter to restore VCB backups to a different system (VMware ESX, Server, Workstation, etc.), but despite VMware claiming Converter can do it, this step used to require manual fiddling with vmx and vmdk files.
    I have recently updated VMware Converter to 3.0.2u1 build 62456, and now everything works like a charm 🙂
    It is installed on the same machine as VCB, so Converter has direct access to the backups. The restoration process is very straightforward and easy to understand. The software allows you to change the disk size of the restored VM, the datastore where the VM will be put, and its name. This name is then reflected at low level, so not only the ‘friendly name’ but also the VMDK files get the new name. I have restored several machines and it worked without a glitch. The only downside is that the restoration process takes place over the network, which is a bit slower than the backup process, which is done over Fibre Channel. But with gigabit ethernet, restoring a small VM of 4 Gb only took a few minutes.
    This way of restoring also allows you to restore a VM onto a totally different system. This might come in handy for Disaster Recovery, where you might be forced to revive a VM onto VMware Server or even VMware Workstation.