This article is geared toward eRacks customers who have a desktop or laptop system, i.e. a personal workstation. It is not intended to serve as a guide for customers wishing to upgrade a server.
With the above in mind, for those who use Linux on such a machine, your choice of distributions that cater to this niche is growing nicely. You have the “Big Boys” such as Ubuntu, Fedora, Mandriva or OpenSUSE, as well as a host of more specialized distributions, most of which focus on user-friendliness and staying up to date. This usually leads to a faster upgrade cycle than what you would typically find on a server-oriented distro such as Debian (stable), RedHat Enterprise, SuSE Enterprise or CentOS.
I myself have been tracking RedHat (including Fedora) since version 5.0, doing a mix of upgrades and fresh installs. I have also kept up with Ubuntu since 6.06, and have had similar experiences with it. I have found that one way of making regular upgrades easier is to keep a separate /home partition. This way, you have a choice of an upgrade or a fresh install, without losing valuable data.
My experience, and that of many other seasoned Linux gurus, is that upgrading from a previous version tends to be messier and usually takes longer than a fresh install. This is especially true if you use third-party repositories, install software not maintained by your distro's package manager (DEB or RPM), or do a lot of tweaking. Any of these may leave you looking at a broken system when the upgrade finishes. For this reason, it is usually more desirable to do a clean installation and install your third-party applications afterward.
How then to keep from losing your data? Many system admins would suggest the multiple partition method, which has long been common on servers, yet not so much on the desktop. The multiple partition method has its advantages and disadvantages, but since hard drives are so big these days, many of the disadvantages no longer apply.
While most modern desktop distros have a default partitioning scheme that gives you just a swap partition (usually about 2x the amount of RAM, or physical memory) and a large root partition for everything else, most server configurations have multiple partitions for directories like /usr or /var, which can have many advantages. For example, you might want /usr mounted read-only to prevent unauthorized system-wide software installs, /boot kept separate for a RAID array, or /var and /tmp kept separate so that runaway logs or temporary files cannot fill up and corrupt the core system files. In this case, however, the partitioning must be very carefully planned according to the intended use of the server, what programs need to be installed, how many users will be logging in, etc.
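To make that concrete, here is a minimal sketch of what the relevant /etc/fstab entries might look like on such a server; the device names and filesystem types are hypothetical and will differ on a real system:

/dev/sda1  /boot  ext3  defaults        1 2
/dev/sda2  /usr   ext3  ro,nodev        1 2
/dev/sda3  /var   ext3  defaults        1 2
/dev/sda5  /tmp   ext3  noexec,nosuid   1 2

With /usr mounted read-only, installing software system-wide requires an administrator to remount it read-write first.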
Luckily, there is a happy medium that works well for desktops: a swap partition of 2x the amount of RAM, a root partition for your operating system and a very large /home partition for all your data. When you do a fresh install, all you have to do is make sure you don't format /home, and your data will be safe across installations. If you want to save any system-wide tweaks, you will, of course, also have to back up important configuration files and check them against their replacements, making changes where necessary.
In my case, I have a 120GB hard drive for Linux, which makes use of the following partition scheme:
20GB /
75GB /home
1GB swap
14GB “other” (at times it has a Gentoo install, other times it has FreeBSD, depends on my mood…)
I have found through experience that this setup works well.
When I do an OS update, such as my recent one to Fedora 9, I usually back up important configuration files to /home, do a fresh install and finally install any third party programs I need.
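As a rough sketch, that config backup step can be as simple as archiving /etc into your home directory before wiping the root partition (the archive name and path here are just examples):

# tar -jcvpf /home/yourname/etc-backup.tar.bz2 /etc

After the fresh install, unpack the archive somewhere temporary and diff the old files against the new defaults before copying anything back.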
In the past, when upgrading systems without doing a fresh install, things have tended to get rather wonky for me. However, I recently tried upgrading Ubuntu, and I must say that the recently improved Update Manager, a graphical front end to the apt-get dist-upgrade functionality, is a nice touch. It lets you upgrade to the next version of Ubuntu while your system keeps running, so you can go about your business as it downloads and installs all the packages. When it's done, you simply reboot, and voila, new version! Upgrades on Fedora, by contrast, are still usually done by the tried and true method of booting the install disk and running the upgrade procedure. Fedora can also do upgrades using the yum package manager, but that functionality isn't as mature as apt-get dist-upgrade, and thus is not for the faint of heart.
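For those who prefer the command line, the same release upgrade can be started without the GUI via do-release-upgrade, which ships in the update-manager-core package (assuming a reasonably recent Ubuntu release):

# apt-get install update-manager-core
# do-release-upgrade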
So now, what if you have an existing Linux installation utilizing only a single partition and you want to do a fresh install while keeping your data safe?
Of course, you could just back your data up to a large external hard drive, but not everyone has one at their disposal. In that case, what you could try is resizing your root partition, creating a new partition for /home and copying your personal data to it before starting the upgrade. Then, just run through the installation as usual. This only works if you have enough free space to shrink the root partition; if not, you may still need an external drive, at least temporarily, to copy your data to before starting the installer.
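As a very rough sketch of that shuffle, assuming you boot a live CD, the old root filesystem is ext3 on /dev/sda1, and the new /home partition ends up as /dev/sda2 (all of these device names and sizes are hypothetical):

# e2fsck -f /dev/sda1
# resize2fs /dev/sda1 20G
(shrink the sda1 partition itself and create the new partition with fdisk, parted or GParted)
# mkfs.ext3 /dev/sda2
# mkdir -p /mnt/root /mnt/home
# mount /dev/sda1 /mnt/root
# mount /dev/sda2 /mnt/home
# cp -a /mnt/root/home/. /mnt/home/

A graphical tool such as GParted will handle the filesystem check, resize and partition-table edit in one step, which is far less error-prone; either way, don't attempt this without a backup of anything you can't afford to lose.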
If you want to make use of multiple partitions on a new eRacks system purchase, just ask for it during your order. This way, your system will be ready when the next OS update rolls around!
Matt
Matt June 27th, 2008
Posted In: How-To, Laptop cookbooks, Upgrades
Have you ever needed to back up the contents of one or more filesystems to another machine, but the machine being backed up had only a single hard drive, and you lacked the temporary disk space necessary to create the backup before shuttling it across the network to its final destination?
As an example, when I back up my laptop, I have too many gigabytes of data to realistically store on DVD-Rs, so my only option is to create a tarball of the root filesystem and store it on another machine on my network. The problem is that the resulting tarball is too large to fit on the laptop's hard drive along with all the data.
One solution that I've found to this problem is to avoid storing the backup on the source machine altogether. Through stdin and stdout, along with the magic of *NIX pipes, we can stream the data in real time over to its destination, and only write it to disk once it arrives.
Before we begin, it is very important to note that in most situations, you’ll have to boot into another environment and manually mount your partition before proceeding, particularly when dealing with an operating system’s root filesystem. Otherwise, not only will tar choke on certain directories like /proc and /dev, the contents of the disk will also continue to change as the backup is being made, leading to inconsistencies between the data on your filesystem and the data in the backup.
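For example, booting a live CD and mounting the filesystem to be backed up might look like this (the device name is hypothetical; mounting read-only guarantees nothing changes mid-backup):

# mkdir -p /mnt/source
# mount -o ro /dev/sda1 /mnt/source

The /path/to/your/mounted/filesystem in the commands below would then be /mnt/source.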
With that in mind, assuming that you have ssh installed and configured correctly on both the source and destination computers, you can create a backup with the following commands (as root):
# cd /path/to/your/mounted/filesystem
# tar -jcvp . | ssh username@destination "cat > /path/to/backup.tar.bz2"
If you prefer to use gzip as opposed to bzip2, replace the above tar command with the following:
# tar -zcvp . | ssh username@destination "cat > /path/to/backup.tar.gz"
Now, let’s say that you’ve created a new partition and want to restore a previous backup. Again, assuming that ssh is configured properly on the source and the destination machines, and assuming that you’ve mounted your partition, you would recover your backup with the following commands (again, as root):
# cd /path/to/your/mounted/filesystem
# ssh username@destination "cat /path/to/backup.tar.bz2" | tar -jvxp
If the backup is a gzipped archive, then replace the above tar command with the following:
# ssh username@destination "cat /path/to/backup.tar.gz" | tar -zvxp
Note that for this procedure to work, the user specified by ‘username’ above should have write permission on the directory where the backup is to be stored when creating it, and read permission on the backup file when restoring.
The astute reader will probably notice the missing -f option, which one usually passes to tar to tell it which file to write the archive to, or read it from. By omitting it, we let tar fall back to its default archive, which on typical GNU/Linux builds is standard output when creating an archive and standard input when extracting one, and that is what allows us to make use of pipes. It's situations like these where the power of *NIX really shines!
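If you prefer to be explicit, the same thing can be spelled out with -f - (shown here for the bzip2 variant; the gzip version is analogous):

# tar -jcvpf - . | ssh username@destination "cat > /path/to/backup.tar.bz2"
# ssh username@destination "cat /path/to/backup.tar.bz2" | tar -jvxpf -

This form also works on tar implementations whose compiled-in default archive is a tape device rather than stdin/stdout.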
james May 28th, 2008
Posted In: Backups
Tags: backup, bzip, bzip2, filesystem, gzip, partition, ssh, zip