What a n00b!

Umounting NFS Share After Deletion on the 'Server Side'

Since I like to be an example to others of what not to do, I thought I'd share my fun today. I was doing a little cleanup on our systems and removed an empty directory that had been mounted by another system via NFS. (HINT: if you're stuck in this position and need help fast, scroll to the end; otherwise I'm going to tell you the story.) Not realizing what I had done, I went on with my day as normal. Later in the day, I was asked why certain operations on that system had suddenly started timing out. The first thing I checked was disk space, and 'df -h' printed some of the mount points, but not all, and then just sat there doing nothing. After a little while I closed the session and opened up a new one. Running 'mount' returned all the mount points that allegedly existed, but the list from my 'df -h' stopped at the mount right before the NFS mount point in question. Doing a 'umount -f /mnt/point' simply returned a "device is busy" error.
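For anyone diagnosing a similar hang, here's a rough sketch of the kind of checks I was running, with /mnt/point standing in as a placeholder for the stale mount. Wrapping the commands in 'timeout' keeps a dead NFS mount from freezing your whole session, and reading /proc/mounts tells you what the kernel thinks is mounted without touching the mounts themselves:

```shell
# /mnt/point is a placeholder for the stale NFS mount.
# timeout keeps a dead NFS mount from hanging the shell session.
timeout 5 df -h /mnt/point || echo "df timed out or failed - likely a stale mount"

# /proc/mounts lists what the kernel thinks is mounted without
# stat()ing the mount points themselves, so it won't hang.
grep nfs /proc/mounts || echo "no NFS mounts listed"
```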

The particular directory I had deleted was an old one that Virtualmin had been backing up to before we moved its backups elsewhere. Since the mount was complaining about being busy, I tried a quick bounce of the Webmin/Virtualmin services to see if that helped, since Virtualmin should've been the only application with files open in that directory. But, alas, it didn't make a difference. Then I had the bright idea of trying to find out what processes might still have files open in that directory. I ran a 'lsof | grep /mnt/point' and it sat there doing nothing again.
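In hindsight, a bare 'lsof' stats every open file on the system, including the ones sitting on the dead mount, which is why it hung. A safer sketch (again with /mnt/point as a placeholder) asks only about that path and bounds the whole thing with 'timeout' so it can't hang indefinitely:

```shell
# /mnt/point is a placeholder for the stale mount.
# +D restricts lsof to files under that directory; timeout bounds
# the run in case lsof itself blocks on the dead NFS mount.
timeout 10 lsof +D /mnt/point 2>/dev/null || echo "lsof timed out or found no open files"
```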

After my failed attempts, I decided to try recreating the directory on the other end and re-exporting the share, hoping my system would pick back up where it left off. Unfortunately, unbeknownst to me, the export had already been removed from the /etc/exports file as well, so recreating the directory and re-exporting the NFS shares did me no good.
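For completeness, restoring the export on the server side would look something like this. The path and client hostname here are placeholders, not the ones from my setup:

```shell
# Hypothetical /etc/exports entry on the NFS server
# (/srv/backups and client.example.com are placeholders):
#
#   /srv/backups  client.example.com(rw,sync,no_subtree_check)
#
# After restoring the entry, re-export everything in /etc/exports:
sudo exportfs -ra
```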

A quick Google search led me to a post in the Ubuntu forums where a user said they had to reboot. As this server is in production, a reboot was not an option at this point (or at the very least a last resort). Sure, if the system were rebooted the mount point wouldn't be busy anymore, but that's no good. Later in the thread (after that user's unnecessary reboot) I found that umount can do a "lazy" unmount. This detaches the mount point from the filesystem hierarchy immediately, while the OS maintains any references to files in that mount point until the processes holding them let go. Anyway, if you're still with me, the command is:

sudo umount -l /mnt/point

Voila! No more timeouts when checking disk space or running lsof, and the mount point is no longer mounted.
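One caveat worth knowing: a lazy unmount detaches the mount point immediately but only fully cleans up once nothing holds files open there, so it's worth confirming the kernel has actually let go. A quick check, with /mnt/point once more as a placeholder:

```shell
# /mnt/point is a placeholder for the mount in question.
# /proc/mounts is safe to read even when an NFS server is unreachable.
if grep -q " /mnt/point " /proc/mounts; then
    echo "still mounted"
else
    echo "unmounted"
fi
```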

