What a n00b!

Website Redirects with WP-SuperCache

On my WordPress install, I use the WP-SuperCache plugin to be nice to my web server. I was shocked when trying to show a coworker something on my site (he typed in whatanoob.com - what a n00b! :) ) to see that my homepage content was very old and that the redirect was no longer working (it worked some time ago when I set up WordPress). It appears that sometime after installing the WP-SuperCache plugin, my website no longer redirects the aliased site to the "real" domain, but instead just serves up the same content (this is very, very bad for SEO). The redirect removing the 'www.' still worked.

The fix is simple.. I just added the lines:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?whatanoob\.com [NC]
RewriteRule ^(.*)$ http://whatan00b.com/$1 [R=301,L]

.. to my .htaccess file.
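
If you want to sanity-check a fix like this, a quick curl against the aliased domain should show the redirect (the hostnames here are just my two domains from above):

# expect a 301 status plus a Location: header pointing at whatan00b.com
curl -sI http://whatanoob.com/ | grep -i -e '^HTTP' -e '^location'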

Two lessons (re-)learned:

  1. Test everything - WP-SuperCache is definitely not something that you can install, activate, and expect to "just work" with your setup. When you assume.. well, you know the rest.
  2. Monitor, monitor, monitor

FYI: still not sure why the cache was so stale, but going into the plugin settings and deleting the cache cleared it up. I also changed it to clear the cache whenever I publish new posts.

Velocity 2010 Wrapup

I got the opportunity to attend the Velocity 2010 Web Performance and Operations Conference last week with most of the rest of the guys from the Ops team at SugarCRM. It took place right around the same time as some other stuff I had going on, so I wasn't able to hang out "after hours" as much as I would have liked, but it was great to listen in on some great sessions and talk to people far smarter than I!

Some of the sessions to highlight:

There were also a number of sessions on different performance tools. Hopefully posts on some of them, tested out for real, are soon to come.

On the slightly less technical side, there were some great talks about culture, including quite a few talks on DevOps (more on that in a later post):

Another great set of videos from the conference is the "Choose Your Own Adventure" talks with Adam Jacob from Opscode. You can head over to one of the videos and see them all listed in the related videos. I didn't get to that session, but watched all the videos and wish I had now :).

It was a great few days and was followed by DevOpsDay USA over at the LinkedIn campus, more on that to come!

Enabling VNC to Ubuntu Desktop via SSH

Over time my desktop has become a box that just sits in the corner that I boot up using wakeonlan and SSH to when needed. Tonight I wanted to jump onto the console to test a few things, but really didn't want to go through the trouble of hooking up the monitor that now is connected to my laptop (waay too much work, I know :) ). Anyway, it turns out enabling VNC isn't too bad over SSH.

First, enable it for your user:

gconftool-2 -s -t bool /desktop/gnome/remote_access/enabled true

If you're like me, you probably enabled it at one point, set the password, disabled it, and forgot what you set the password to. To set it, we use the base64 utility to encode it and set it using gconftool:

gconftool-2 --type string --set /desktop/gnome/remote_access/vnc_password $(echo -n 'dontstealmysupersecretpassword!'| base64)

I then just connected with my VNC client (I used Chicken of the VNC from my MacBook) by connecting to the IP of my desktop on display 0 and the password I had set.
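
If you'd rather test from a Linux box with a command-line client, the equivalent is something like this (made-up IP; display 0 is just TCP port 5900):

vncviewer 192.168.1.50:0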

Troubleshooting

If you're like me even more, you probably ran into trouble getting connected. In that case, you can see the additional settings for GNOME remote_access using the gconftool-2 utility:

gconftool-2 -a /desktop/gnome/remote_access

There are a few key settings including "local_only", "enabled" (of course), "prompt_enabled" (when this is off, VNC won't prompt on the desktop to allow the connection - prompting is obviously a problem if you don't have access to the console to begin with!), and "use_alternative_port".
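
For reference, here's how I'd flip the two that usually bite me, using the same gconftool-2 approach as above (adjust to taste):

# allow connections from hosts other than localhost
gconftool-2 -s -t bool /desktop/gnome/remote_access/local_only false
# skip the on-desktop "allow this connection?" prompt
gconftool-2 -s -t bool /desktop/gnome/remote_access/prompt_enabled false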

Install Killall on Ubuntu JeOS

I am playing with a new install of Ubuntu 10.04 in a minimal virtual machine and noticed there was no killall utility installed. In case you run into this, don't fret: sudo apt-get install psmisc

The package to install it from wasn't overly obvious, but it was not difficult to install :)
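
If you hit the same thing with some other missing command, apt-file is one way to find the owning package (it's an extra install itself, so this is just a sketch):

sudo apt-get install apt-file
sudo apt-file update
apt-file search bin/killall    # psmisc should show up in the results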

How to Treat Painfully Slow DNS Lookups in OS X

I'm not really sure what made this suddenly appear, but I've noticed over the past few days that my initial connections to websites have started taking longer and longer. Today, it became painful as the browser would say it was "Looking up example.com" for a good 3-4 seconds (if not longer) before loading the page. Once the lookup completed, the site usually loaded pretty quickly. Thanks to a (not so quick) Google search, I ran across this forum thread. It turns out, disabling IPv6 in OS X speeds things up quite a bit. To disable it, go to System Preferences -> select the network interface you're using (probably Airport) -> click Advanced.

Then, in the TCP/IP tab, change the drop-down next to "Configure IPv6" to "Off" instead of "Automatic".
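
If you prefer the terminal, networksetup can do the same thing; the service name below is a guess (mine shows up as "AirPort"), so list yours first:

networksetup -listallnetworkservices
sudo networksetup -setv6off "AirPort"
# to turn it back on later: sudo networksetup -setv6automatic "AirPort"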

This worked for me. My browser and other various apps no longer take several seconds before loading pages on the web!

Zimbra on Minimal Hardware

I really like Zimbra, but it tends to use a ridiculous amount of CPU while just sitting there, which makes it a bad choice for someone like me who wants to run it with a few users at home as a virtual machine. As I started growing the number of virtual machines on my physical host at home, things started to get a little cramped. Zimbra just plain uses far more CPU "out of the box" than the other virtual machines (I've got enough RAM), and it was starting to become my bottleneck.

After installing Zimbra and just leaving it running, it used the better part of a processor core most of the time. That's not good if you've got a limited amount of hardware like I do. However, it wasn't too difficult of a process to get my Zimbra server to use almost no CPU most of the time. As a great side-effect to this project, I will be trying to bump down the amount of memory allocated to my Zimbra VM, but that wasn't the highest priority. I am running on the latest version of Zimbra (6.0.6 at the time of writing), but the tricks should apply to almost any version.

First, I started with disabling services that I really wasn't using. I'm not monitoring my Zimbra server using snmp, so snmp was a pretty easy one. My server isn't for an IT department or hosting service, so stats and logging history aren't overly important; I chose to disable logger and stats as well. To disable those, run:

zmprov ms mail.whatan00b.com -zimbraServiceEnabled snmp
zmprov ms mail.whatan00b.com -zimbraServiceEnabled logger
zmprov ms mail.whatan00b.com -zimbraServiceEnabled stats

Now, let's do a restart:

zmcontrol stop; zmcontrol start

This really only gave me gains in memory usage, but since I didn't need them turned on, that was ok. Other good candidates to disable would be antispam and antivirus, but I didn't want to turn off spam filtering on my system.
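
If you want to double-check what's still enabled after a change like this, zmprov can read the attribute back (the hostname is mine, of course):

zmprov gs mail.whatan00b.com zimbraServiceEnabled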

After disabling those extra services, I still was having CPU spikes every minute (which ultimately was what I was after). After doing a little digging, it turns out that Zimbra was calling zmmtaconfigctl, which makes several zmprov calls. If you have been around Zimbra for any amount of time, you know that zmprov calls are expensive and time-consuming. It turns out that this script just scans for updated config to apply to the MTA. I really can't think of a reason that I would need this every minute. A quick Google search led to a forum post on how to increase the interval at which this script is called. It's defined in zmlocalconfig, and 60 seconds is assumed if the value is not set. I chose to have it run every 2 hours (a fairly arbitrary decision):

zmlocalconfig -e zmmtaconfig_interval=7200
zmmtactl restart
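
You can confirm the new value stuck by asking zmlocalconfig for the key:

zmlocalconfig zmmtaconfig_interval
# zmmtaconfig_interval = 7200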

That got my spikes down quite a bit, but I was still getting spikes of around 20% every couple of minutes or so. While this wasn't all that detrimental, it would be good for my overall CPU usage to get rid of it. A quick look at the crontab for the zimbra user showed that the script /opt/zimbra/libexec/zmstatuslog was being run every two minutes. Apparently, this script checks the status of the Zimbra server and displays the status in the Admin Console. Since I rarely ever log into the admin console, I really don't need this to run very often. While there's really no use for me to have it running every two minutes, I did leave it set to run every hour:

0 * * * * /opt/zimbra/libexec/zmstatuslog
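
The crontab belongs to the zimbra user, so the edit itself is just (as root):

crontab -u zimbra -e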

Now it's time to look at the good we've done.

This is what we started with:

Obviously, quite a bit of CPU usage. You can see why I needed to do something to fit more VMs on this host.

And now:

Looks great now!

There are a few extra cron jobs left in the zimbra user's crontab that really don't need to run for me such as the Dspam cron jobs, but those only run once a day. If you're really zealous, you can disable those as well, assuming you have Dspam disabled (the default).

Update: For anyone who is interested, I did the 6.0.6 -> 6.0.7 upgrade a few weekends ago and had my cron jobs reset. All the other changes stuck.

Hands on with Opendedup

After reading about Opendedup on Slashdot this weekend, I decided to try it out to see how well it all really worked. My test server was an install of Ubuntu 9.10 x64. If you happen to be using that stack, the installation isn't too difficult:

Download required files (adding links to the most recent versions of each, check for newer versions as necessary):

cd /usr/local/src
wget http://download.java.net/jdk7/binaries/
wget http://opendedup.googlecode.com/files/debian-fuse.tar.gz
wget http://opendedup.googlecode.com/files/sdfs-latest.tar.gz

And install:

chmod +x jdk-7-ea-bin-b87-linux-x64-25_mar_2010.bin
./jdk-7-ea-bin-b87-linux-x64-25_mar_2010.bin
(follow the instructions - afterwards, be sure to set the JAVA_HOME variable)
export JAVA_HOME=/usr/local/src/jdk1.7.0

tar zxf debian-fuse.tar.gz

cd debian-fuse

dpkg --install *.deb
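
Before moving on, it's worth a quick sanity check that the JDK is actually usable from that JAVA_HOME:

$JAVA_HOME/bin/java -version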

Next, just extract the SDFS package and cd into it:

tar zxf sdfs-latest.tar.gz
cd sdfs-bin

Now, we make our filesystem and mount it:

./mkfs.sdfs --volume-name=deduped --volume-capacity=5000MB
./mount.sdfs -m /srv -v deduped

Assuming all goes well, you should have a newly mounted deduplicated filesystem at /srv.
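
A quick way to confirm it's really there is to check the mount table and free space:

mount | grep /srv
df -h /srv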

Great results from testing in the small

As a test, I copied over a sample song from my music collection (what nerd doesn't enjoy a little Weird Al?). Copied to /root, the file size was 2.9MB. Once I copied it to my deduped /srv directory, the file took just 46K on disk! Not too shabby. Just as a sanity check, I copied the file back off the deduped filesystem and the file size grew back to normal.
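
If you want to reproduce that comparison, the trick is to look at on-disk usage rather than the logical size (the filename here is just a placeholder):

ls -lh /srv/sample.mp3    # logical size - still looks like ~2.9M
du -h /srv/sample.mp3     # blocks actually used on the deduped volume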

Things not all rosy in Opendedup-land

I decided to try throwing a little more data at it as a test and copied over the Documents directory from my desktop. The folder that I copied was slightly over 600MB of docs, text files, images, and a few other file types. During the file copy, Opendedup took a significant amount of memory (it hung around the 90% mark). My test machine was a small virtual machine (1 CPU, 2GB of RAM) and the file transfer slowed it down significantly. Eventually, I got curious as to how much had been transferred. I cd'd to the test dir and did an 'ls', which never completed, and I could no longer open a new shell via SSH to the VM either. I'm sure this would be much better if I had the resources to throw a little more RAM and CPU at it (since I'm running the minimum), but I don't have the resources to try at the moment.

Conclusion

Overall, the technology seems really promising and pretty straightforward to use. If my compression rates hold true, this could dramatically cut down on the amount of disk space needed to store my backups and virtual machine templates. Judging by the performance I've seen thus far, I don't think I'd want to run this in production, but it looks promising, nonetheless.

Enabling Flash in Chrome on Ubuntu 10.04

Update: The same steps seem to work on Ubuntu 11.04 as well.

Another update (10/24/2011): The same seems to work as well for 11.10. "Virtue" hints that the path may have changed to /usr/lib/adobe-flashplugin/libflashplayer.so, but that doesn't seem to be the case for me.

Hello again there, world. I've been away from my computer for a little while now as I relocated to Silicon Valley, but I got a chance to play around with one of the Alphas of Ubuntu 10.04 this weekend. The new version has some vast improvements in looks over the last one, and it now includes Google Chrome in the default repository. When I wanted to set up Flash for Chrome, I followed a handy how-to, but it didn't account for the fact that Chrome was installed via the regular repositories and wasn't installed to /opt.

To install, I simply had to follow the step-by-step with a few modifications:

  1. Install Chrome and Flash (with the Ubuntu Software Center or with apt-get):
     sudo apt-get install chromium-browser flashplugin-nonfree
  2. Add the Flash plugin to the Chrome plugins directory:
     sudo cp /usr/lib/flashplugin-installer/libflashplayer.so /usr/lib/chromium-browser/plugins/
  3. Restart Chrome

That's it. It's a bit annoying that one has to install Flash for Chrome this way (especially considering that YouTube - another Google product - relies on Flash), but it's not too painful.

If you still run into problems, you can double-check the location of the needed file (using locate libflashplayer.so) and the location where Chrome is installed (using whereis chromium-browser).
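
In other words, something along these lines (the paths on your install may differ, which is the whole point of checking):

locate libflashplayer.so
whereis chromium-browser
ls /usr/lib/chromium-browser/plugins/    # should now contain libflashplayer.so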

If you've just installed Ubuntu 10.04 and came across this, you may also want to install the browser Java plugin as well.

No Dig on Ubuntu 9.10 Minimum Virtual Machine

Well, I guess Canonical has taken the idea of "minimum virtual machine" to the extreme. The 9.10 version of Ubuntu Server JeOS (F4 + select "Minimal Virtual Machine" at install time) apparently doesn't include dig in the default installed packages.

I was shocked when my new virtual machine was having problems with connecting to the Ubuntu repositories and I couldn't do a dig as a test:

-bash: dig: command not found

I've never seen a Linux distro without dig installed by default, but apparently it's not as necessary to others as I would have thought..

Anyway, dig comes with the dnsutils package:

sudo apt-get install dnsutils
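
Once that's installed, a quick lookup confirms everything works (any hostname will do):

dig archive.ubuntu.com +short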

Insecurity by Non-Obscurity

I was a bit shocked and disheartened tonight to discover that my WordPress version was being broadcast to the world without my knowing it. It's something that I hadn't ever really given much thought to, mostly because I always assumed that a piece of information like that wasn't being given out. What was even more disheartening to me was what I discovered as the method for disabling this broadcasting of my version number. The easiest way, by far, was to just install the Secure WordPress extension (or I could dive into a bit of their PHP code and have to make the change with each upgrade, not so much fun). Not so long ago, there was a huge ordeal about a vulnerability in WordPress 2.8.3 that allowed an attacker to reset an admin password very easily. No wonder they urged us to upgrade so quickly - your vulnerable version was being broadcast.

The sad part is, broadcasting this version number isn't something that can be disabled using the built-in settings. I don't know what the rationale is, but one either has to edit the functions.php file in WordPress directly, or install the plugin mentioned above.

Anyway, this got me thinking about plenty of other open source software that I've had to disguise over the years.. For instance, perform a fresh install of Ubuntu 8.04 with the LAMP stack and you'll see the version listed in the headers in as much detail as this:

Server: Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.9 with Suhosin-Patch

Yup, there it is, script kiddies. Bust out Metasploit and eat your hearts out. In this case, if one leaves the defaults enabled, the server major version, minor version, PHP version, OS, and WordPress version are all exposed. That leaves a pretty nice little attack vector.
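
For the Apache and PHP pieces of that header, at least, there are stock settings that trim the broadcast down. A minimal sketch - the exact config file location varies by distro (on Ubuntu the Apache bits usually live under /etc/apache2), so treat the paths as assumptions:

# Apache: report only "Apache" and drop the signature from error pages
ServerTokens Prod
ServerSignature Off

Setting expose_php = Off in php.ini similarly drops the X-Powered-By header that advertises the PHP version.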

Of course, hiding these things doesn't mean that anything is secure. On the contrary, one must go far deeper than that. I'm just disappointed that so many open source projects, by default, cut down the time needed for script kiddies to start playing with my public services.