from The Open Source Newsletter – July 2008
Aside from all the usual green advice, what can a conscientious SysAdmin do to save money during this time of rising energy prices and a challenging economic situation?
Here is eRacks’ top-ten list of recession-proofing strategies:
Remember, a recession isn't permanent, but it can be long. Playing it smart now will help, and could quite possibly make all the difference.
britta August 8th, 2008
Posted In: News
Tags: firewall, recession-proof, security
A secure environment is absolutely crucial for a virtualization server connected to the Internet. If the host is compromised, all of its virtual machines are at risk and their services will be affected.
eRacks virtualization experts have put together a useful list of security considerations for virtualization migration planners. TIP #1. Use an open source virtualizer if possible. Open source software vulnerabilities are documented clearly, are well-known, and are fixed quickly.
Proprietary-software bugs usually take longer to get fixed, and are even sold on black markets for illicit hacking. In fact, there are documented cases of closed source software companies purchasing information about security holes in their own applications. Open source software vulnerabilities have less value on the black market because of their shorter shelf life.
TIP #2. Use open source guests wherever possible. New drivers for open source applications improve security as well as performance. Open source guests are more cooperative with the host, leaving less room for attack. Windows is inherently less secure, since it (a) is closed source and updated less frequently, (b) is widely used and thus a big target, and (c) statistically has more severe vulnerabilities than open source OSes, and they take longer to fix.
TIP #3. Minimize the host footprint, making less surface area available for hackers. A small target is harder to hit than a large one. eRacks typically recommends KVM because of its small footprint, simple design, and ease of use. The virtualization host provides services in the form of ports and packages, which should only include those required by the VMs. An effective security plan should minimize the number of open ports, narrowing the possibilities of illicit entry.
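Minimizing open ports starts with knowing what is actually listening. As a rough illustration (this is a sketch, not an eRacks-supplied tool; the helper name is made up), you can audit listening TCP ports straight from /proc/net/tcp, which works even on a host stripped down far enough that net-tools and nmap aren't installed:

```shell
#!/bin/sh
# Minimal port audit for a slimmed-down virtualization host: list every TCP
# port in the LISTEN state (kernel state 0A) by reading /proc/net/tcp
# directly. Illustrative sketch only; for a real audit, scan from a second
# machine as well, since a rootkit can lie about the local view.
list_listen_ports() {
    # Skip the header line; the fields are: sl local_address rem_address st ...
    tail -n +2 /proc/net/tcp | while read -r _ local _ state _; do
        [ "$state" = "0A" ] || continue
        # local_address is hexIP:hexPORT; printf converts the hex port to decimal
        printf '%d\n' "0x${local##*:}"
    done | sort -un
}
list_listen_ports
```

Every port this prints should map to a service the VMs actually require; anything else is surface area you can close.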
TIP #6. Assess your security level, including regular port scans (Nmap) and OS fingerprinting, keeping track of any changes. A hardened system will not give out the versions of running services; otherwise it would be too easy to know exactly where the vulnerabilities lie. eRacks can give you a head start by building, installing, and configuring your system for you. Your physical host server can be configured with your choice of a virtualization host, including the freely available version of VMware or Linux-native KVM (Kernel-based Virtual Machine), as well as a large number of possible virtual operating systems and applications, including web, DNS, email, proxy and other infrastructure services.
virtualizer | description | complexity | level of open source
KVM | built into the kernel; uses the standard Linux scheduler, memory management and other services | simple, non-intrusive, very stable, easy to administer; the KVM hypervisor is only about 10-12K lines of code (2007) | released under the GNU GPL; free
Xen | external hypervisor; supports both paravirtualization and full virtualization; has its own scheduler, memory manager, timer handling, and machine initialization | requires a specially modified kernel; roughly 10x the lines of code of KVM, which raises the vulnerability level | released under the GNU GPL; free
VMware | fully virtualizes using software techniques only; very good performance and stability | very large and complex; more than 10x the lines of code of Xen | proprietary (Player is free teaser-ware); fees
britta July 9th, 2008
Posted In: News, security, virtualization
Tags: firewall, News, redundant firewall, security, twinguard, virtualization
This article is geared toward eRacks customers who have a desktop or laptop system, i.e. a personal workstation. It is not intended to serve as a guide for customers wishing to upgrade a server.
With the above in mind, for those who use Linux on such a machine, your choice of distributions that cater to this niche is growing nicely. You have the “Big Boys” such as Ubuntu, Fedora, Mandriva or OpenSUSE, as well as a host of more specialized distributions, the main focus of most being on user friendliness and “up-to-dateness.” What this usually leads to is a faster upgrade cycle than what you would typically find on a server-oriented distro such as Debian (stable), RedHat Enterprise, SuSE Enterprise or CentOS.
I myself have been tracking RedHat (including Fedora) since version 5.0, doing a mix of upgrades and fresh installs. I have also kept up with Ubuntu since 6.06, and have had similar experiences with it. I have found that one way of making regular upgrades easier is to keep a separate /home partition. This way, you have a choice of an upgrade or a fresh install, without losing valuable data.
My experience, and that of many other salty, seasoned Linux gurus, is that upgrading from a previous version tends to be a bit messier and usually takes longer than a fresh install. This is especially true if you use third-party repositories, install software not maintained by your distro's package manager (DEB or RPM), or do a lot of tweaking; doing so may leave you looking at a broken system when the upgrade finishes. For this reason, it is usually more desirable to do a clean installation and install your third-party applications afterward.
How then to keep from losing your data? Many system admins would suggest the multiple partition method, which has been used on servers a lot, yet not so much on the desktop. The multiple partition method can have advantages and disadvantages, but since hard drives are so big these days, many of the disadvantages are no longer prevalent.
While most modern desktop distros have a default partitioning scheme that gives you just a swap partition (usually about 2x the amount of RAM, or physical memory) and a large root partition for everything else, most server configurations have multiple partitions for directories like /usr or /var, which can have many advantages. For example: if you wanted to have /usr mounted as read-only to prevent unauthorized system-wide software installs, if you wanted to keep /boot separate for a RAID array or if you wanted to keep /var and /tmp separate to avoid corrupting the core system files; these are all examples of why one might want to make use of multiple partitions. In this case, however, the partitioning must be very carefully planned according to the intended use of the server, what programs need to be installed, how many users will be logging in, etc.
Luckily, there is a happy medium that works well for desktops, and that is to use a swap partition with 2x the amount of RAM, a root partition for your operating system and a very large /home partition for all your data. When you do a fresh install, all you have to do is make sure you don’t format /home, and your data will be safe across installations. If you want to save any system-wide tweaks, you will, of course, also have to backup important configuration files and check them against their replacements, making changes where necessary.
In my case, I have a 120GB hard drive for Linux, which makes use of the following partition scheme:
20GB /
75GB /home
1GB swap
14GB “other” (at times it has a Gentoo install, other times it has FreeBSD, depends on my mood…)
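In /etc/fstab terms, a scheme like this might look as follows. The device names and filesystem type here are illustrative assumptions (check yours with `fdisk -l`); note that the swap partition gets no real mount point:

```
# illustrative /etc/fstab entries for a root + /home + swap layout
/dev/sda1   /       ext3    defaults    1 1
/dev/sda2   /home   ext3    defaults    1 2
/dev/sda3   swap    swap    defaults    0 0
```

During a fresh install, you would point the installer at the existing /home partition and simply decline to format it.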
I have found through experience that this setup works well.
When I do an OS update, such as my recent one to Fedora 9, I usually backup important configuration files to /home, do a fresh install and finally install any third party programs I need.
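That config-stashing step can be sketched like this. The helper name and file list are made up for illustration; pass whichever files you have actually tweaked:

```shell
#!/bin/sh
# Sketch of the pre-reinstall ritual: tuck the config files you've changed
# into a tarball under /home so they survive the fresh install, ready to be
# diffed against their replacements afterward.
backup_configs() {
    archive=$1
    shift
    # -czf: create a gzipped tarball of the listed files
    tar -czf "$archive" "$@"
}
# e.g.: backup_configs /home/me/pre-fedora9-configs.tar.gz /etc/fstab /etc/hosts
```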
In the past, when upgrading systems without doing a fresh install, things for me have tended to get rather wonky. However, I have recently tried upgrading Ubuntu, and I must say that the recently improved Upgrade Manager, a graphical front end to the apt-get dist-upgrade functionality, is a nice touch. It allows you to upgrade to the next version of Ubuntu, while still allowing you to run your system so you can go about your business as it downloads and installs all the packages. When it’s done, you simply reboot, and voila, new version! Upgrades on Fedora, by contrast, are still usually done by the tried and true method of booting the install disk and running the upgrade procedure. Fedora does have the capability to do upgrades using the yum package manager, but that functionality isn’t as mature as apt-get dist-upgrade, and thus is not for the faint of heart.
So now, what if you have an existing Linux installation utilizing only a single partition and you want to do a fresh install while keeping your data safe?
Of course, you could just back your data up to a large external hard drive, but not everyone has one at their disposal. In this case, what you could try is resizing your root partition, creating a new partition for /home, and copying your personal data to it before starting the upgrade. Then, just run through the installation as usual. This is, of course, only possible if you have enough space to resize. If not, you may still require an external drive, at least temporarily, to copy your data to before starting the installer.
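The copy step might look like the following sketch. The helper name and paths are placeholders; in practice the source would be your old /home and the destination the freshly created partition, mounted somewhere like /mnt/newhome:

```shell
#!/bin/sh
# Sketch of the data-copy step: archive-copy the contents of the old home
# directory onto the new partition before reinstalling. Placeholder helper;
# in real use, src=/home and dst=/mnt/newhome (new partition mounted there).
copy_home() {
    src=$1 dst=$2
    # -a preserves permissions, ownership, timestamps and symlinks;
    # "$src/." copies the directory's contents rather than the directory itself
    cp -a "$src/." "$dst/"
}
```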
If you want to make use of multiple partitions on a new eRacks system purchase, just ask for it during your order. This way, your system will be ready when the next OS update rolls around!
Matt
Matt June 27th, 2008
Posted In: How-To, Laptop cookbooks, Upgrades
Have you ever needed to back up the contents of one or more filesystems to another machine, but the machine being backed up had only a single hard drive, and you lacked the temporary disk space necessary to create your backup before shuttling it across the network to its final destination?
As an example, when I back up my laptop, I have too many gigabytes of data to realistically store on DVD-Rs, so my only option is to create a tarball of the root filesystem and store it on another machine on my network. The problem is that the resulting tarball is too large to fit on the hard drive alongside all the data.
One solution that I’ve found to this problem is to avoid storing the backup on the source machine altogether. Through stdin and stdout, along with the magic of *NIX pipes, we can stream the data in realtime over to its destination, and only then write it to disk.
Before we begin, it is very important to note that in most situations, you’ll have to boot into another environment and manually mount your partition before proceeding, particularly when dealing with an operating system’s root filesystem. Otherwise, not only will tar choke on certain directories like /proc and /dev, the contents of the disk will also continue to change as the backup is being made, leading to inconsistencies between the data on your filesystem and the data in the backup.
With that in mind, assuming that you have ssh installed and configured correctly on both the source and destination computers, you can create a backup with the following commands (as root):
# cd /path/to/your/mounted/filesystem
# tar -jcvp . | ssh username@destination "cat > /path/to/backup.tar.bz2"
If you prefer to use gzip as opposed to bzip2, replace the above tar command with the following:
# tar -zcvp . | ssh username@destination "cat > /path/to/backup.tar.gz"
Now, let’s say that you’ve created a new partition and want to restore a previous backup. Again, assuming that ssh is configured properly on the source and the destination machines, and assuming that you’ve mounted your partition, you would recover your backup with the following commands (again, as root):
# cd /path/to/your/mounted/filesystem
# ssh username@destination "cat /path/to/backup.tar.bz2" | tar -jvxp
If the backup is a gzipped archive, then replace the above tar command with the following:
# ssh username@destination "cat /path/to/backup.tar.gz" | tar -zvxp
Note that the user specified by ‘username’ above should have read/write permissions on the directory where the backup is to be stored for this procedure to work.
The astute reader will probably notice the missing -f option, which one usually passes to tar; it tells tar to write its archive to, or read it from, a named file. By omitting it, we tell tar to send its output to stdout, or to receive its data from stdin when reading an archive, which is what allows us to make use of pipes. It's situations like these where the power of *NIX really shines!
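If you want to convince yourself the pattern works before trusting it with a real backup, you can rehearse it locally, with an ordinary pipe standing in for ssh. This is just a throwaway sketch; note that passing `-f -` makes the stdout/stdin choice explicit, which is safer on systems where tar's default archive is a tape device:

```shell
#!/bin/sh
# Local rehearsal of the stream-over-a-pipe pattern: one tar writes the
# archive to stdout, the pipe carries it, and a second tar reads it from
# stdin -- exactly the role ssh plays in the commands above.
pipe_backup() {
    src=$1 dst=$2
    # "-f -" explicitly selects stdout (create) and stdin (extract)
    ( cd "$src" && tar -zcpf - . ) | ( cd "$dst" && tar -zxpf - )
}
```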
james May 28th, 2008
Posted In: Backups
Tags: backup, bzip, bzip2, filesystem, gzip, partition, ssh, zip
Hello everyone out in the blogosphere (look, my vocabulary improved!). Allow me to introduce myself. I am Max, the Op Manager here at eRacks. Now that that's out of the way, let's dig in!
I recently had the chance to lay my grubby mitts on the latest ASUS eeePC, and am here to give my initial impressions. Now mind you, I am a very busy man (darn Starbucks being so far away!), so I only had a couple of hours to play with this little PC, and I must say, I am impressed.
Now, ASUS keeps a tight grip on the distribution of their eeePCs and makes sure they get their asking price, so shopping around won't net more than about a dollar in savings. I will chalk that up on the bad side of things. However, while it's a little on the high end of the price scale for its functionality, let me tell you something that makes up for that 100x: it works flawlessly, it's quick, and it gets a lot of looks (ladies, forget the new hairstyle; pick up one of these bad boys and prepare for the geek onslaught!). The fact that I had no issues with it speaks volumes, because I always break something and have to have Tony, our Head Tech, come and save me.
It also comes with all the software you would need: open source applications, games, and media-playing programs preinstalled and ready to go. You do have to sit through a quick registration screen at first to get to this, but hey, you have to do that with everything. When I started this mini-beast up, I was pleased to see that everything displayed quite nicely on the 7″ 800×480 screen. It even comes with a pretty nice Intel graphics chipset to boot. So, as far as visuals go, while you won't be seeing HD-style graphics, you will get a clear, precise picture that makes working on it pretty easy. Not bad, ASUS, not bad… But you could, ya know, boost the res up to maybe 1200? Maybe… please? C'mon…
Anyway, this is not by any means a replacement for a full-fledged laptop, but it is a nice miniPC that will come in handy for a quick write-up at a trade show, a place to store a few pictures, checking websites or email from the airport, or any number of road-warrior-like activities. The other thing it's good for is KIDS! Kids love it; it comes in multiple colors, it plays games, it's small, it's neat, it makes noise on its 5.1 Realtek HD sound card, it plays music AND it's cool looking. The only problem I see with kids and this is that on the models we got, the keyboard is white (wash your hands, children, before touching it), so beware of dirty fingers! We actually had a customer call us and let us know that their children were hammering away on these things and that they stood the test of time (at that point, one week, but hey, it's a miniPC and a child; that's like platinum record status!). Another good feature is the card reader, which lets you store plenty of files on SD cards. Neat!
The few bad things I have to say are as follows: it only has 2 hours of battery life (I know, I know; laptops and such do not have amazing battery lives, but 2 hours?! I've had layovers longer than that on flights from OC to SF); it has no DVD or CD player, which is a bummer, since even though I understand it's a different category of PC, I still want to be able to throw in a DVD or listen to a CD I just bought (OK, that may be a lie; who really buys CDs anymore? Anyone? I admit it. I do. MP3s be damned!); the graphics could be a bit better; and the white keyboard is a parent's nightmare, although at least the keys are packed close enough together that food can't hide in them. Overall, there weren't enough bad things to warrant a bad review, or to take away from the coolness factor.
In closing, I know this isn't as in-depth or as technical as some people would like. But hey, I'm Max, and Max is allowed to write what he wants (you love the 3rd person, I know it!). Overall, I give this 4/5 stars for a mini PC on coolness factor, and 3.5/5 on tech factor. Take my opinion with a grain of salt, though, for I am just an Op Manager doing my thing.
For the techies, here’s a rundown of the specs:
Visit www.eRacks.com for more info.
max May 2nd, 2008
Posted In: New products
Tags: Asus, laptop, linux, New products, Review