Imagine having the technology to store your music, movies and pictures in a central location and to access them from anywhere in the house. Even better, imagine that you can do this with little cost for hardware and zero cost for software. Not only is it possible, it’s never been easier.
To access your central media repository, you’ll need to connect your computers to a network. With wireless networking, you can cheaply connect your machines almost anywhere in the house without running any cable. For the minimal configuration of one server (your media repository) and one client (the system connected to your home theater that lets you actually use the media), one wireless router and two wireless adapters will do. Better yet, if the wireless router sits near the server, you can connect the two directly with a cable, saving you the cost of one wireless adapter.
Nowadays, with storage so plentiful and CPUs so powerful, it doesn’t take much money to get good results, which is fortunate for those of us feeling the effects of the economic downturn. On the server side, a modest Intel Core 2 Duo with two to four gigabytes of memory will do, and with one-terabyte hard drives falling below $100, you should be able to save even more. On the client side, with the new Intel Atom CPU, which is powerful, compact, quiet and highly energy-efficient, you can build a thin client that sits snugly atop your entertainment center.
Media center software has grown increasingly popular, and the open source movement has kept up nicely with easy to install, easy to use applications.
For the operating system on both the client and the server, you have a plethora of Linux distributions to choose from, Ubuntu (http://www.ubuntu.com/) being our recommendation. On the server side, you’d then simply configure your system to share your files over the network. On the client side, applications for managing your media include XBMC (http://xbmc.org/), Elisa (http://elisa.fluendo.com/), Entertainer (http://www.entertainer-project.com/) and MythTV (http://www.mythtv.org/; note that MythTV is a little more involved to configure and has components that must run on the server side).
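One possible sketch of the server-side sharing step, assuming Ubuntu and NFS (the directory, package name and subnet below are illustrative; Samba would work just as well if there are Windows machines in the mix):

```shell
# On the server: install the NFS server and export the media directory
# read-only to the local subnet by adding this line to /etc/exports:
#   /srv/media 192.168.1.0/24(ro,all_squash)
sudo apt-get install nfs-kernel-server
sudo exportfs -ra

# On the client: mount the share
sudo mount -t nfs server:/srv/media /mnt/media
```

From there, any media player on the client can browse /mnt/media as if it were local storage.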
With hardware becoming cheaper and more powerful, and with the added bonus of using free software, a capable home entertainment system can be had for a minimal investment. And, of course, eRacks specializes in providing its customers with the resources they need, whether it be selling systems pre-configured to your specifications or offering consulting for more difficult projects. Contact eRacks today and find out what we can do for your home!
james March 24th, 2009
You may or may not have heard of revision control systems such as Subversion (http://subversion.tigris.org/), Mercurial (www.selenic.com/mercurial) or Git (git-scm.com/). Usually, these applications are used to keep track of revisions in software projects. Each time you change the source code of a project, you check it into your revision control system so that you can browse back and forth through various versions. Revision control systems have many standard features that are very useful, including the ability to display only the differences between two versions of a project.
Revision control is generally associated with software development, but its uses extend far beyond that. Anything you work on can be checked into a revision control system. For example, at home I check the stories I write into Subversion so that I can track my changes and go back to a previous version if necessary. Even binary files, such as Word documents and images, can be tracked this way. If it’s something you change regularly, and the changes don’t produce large differences in the underlying files, there’s no reason not to check it in.
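That workflow can be sketched with Git (any of the systems above would do); the directory and file names here are just examples:

```shell
set -e
# Create a throwaway repository for tracking a story
mkdir -p /tmp/stories
cd /tmp/stories
git init -q
git config user.email "you@example.com"
git config user.name "You"

# Check in the first draft
echo "Draft one." > story.txt
git add story.txt
git commit -qm "first draft"

# Revise and check in again
echo "Draft two." > story.txt
git commit -qam "second draft"

# Show what changed between the two drafts...
git diff HEAD~1 HEAD -- story.txt

# ...and roll the working copy back to the previous draft
git checkout HEAD~1 -- story.txt
cat story.txt
```

The same pattern applies with Subversion or Mercurial; only the command names differ.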
Since revision control is still primarily associated with software development, expect a learning curve. A quick search will turn up graphical front ends to tools like Subversion; while a front end doesn’t make the underlying concepts any easier to learn, it can ease the burden of day-to-day use by sparing you the command line.
eRacks is ready and willing to install revision control software per your instructions when purchasing any new system, and even offers consulting services for those times when you need help installing, configuring or using your software.
james February 10th, 2009
Setting up a server at home can be a rewarding experience. Not only is it an excellent experiment and learning opportunity, it also gives you access to your home network from anywhere in the world. You may be tempted to think that such a project would be time-consuming and expensive, but the opposite is true. Today, the software required for running a server is relatively easy to configure, and with open source software, a cheap computer and the right internet connection, you can be up and running with minimal cost.
A home server can be a very useful thing to have, and is a worthwhile project if for no other reason than that it’s a good learning experience. eRacks can provide the hardware you need to get the job done, and can also offer consulting services for difficult software configurations. If you decide to take the time to set up a server at home, you won’t be disappointed.
The possibilities are endless with a home server. With an HTTP server like Apache or Lighttpd, you can host your own homepage, keep a remotely accessible calendar, share information with family, friends and co-workers or even experiment with your own custom web applications, with complete control over the software that supports them.
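A minimal sketch of that first use case, assuming Ubuntu and Apache (the document root shown is the Ubuntu default of the era; the page content is just an example):

```shell
# Install Apache and drop a page into its default document root
sudo apt-get install apache2
echo '<h1>Welcome to my home server</h1>' | sudo tee /var/www/index.html

# Verify it is being served locally
curl http://localhost/
```

Once your domain or subdomain points at your IP address (see below), the same page is reachable from anywhere.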
With SSH and/or FTP running on your server, you can get at the files saved on your machines at home. What if you arrive at work and discover that you left an important PowerPoint presentation at home? No problem. If your desktop computer is on the same network as your server, you can use Wake-on-LAN to power up the desktop, SSH in to copy the file to your server, then use SSH or FTP to download it. Problem solved!
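From the office, that rescue might look something like the following, assuming the wakeonlan utility is installed on the always-on server; the host names, MAC address and paths are placeholders:

```shell
# Wake the desktop by having the server send it a magic packet
# (00:11:22:33:44:55 stands in for the desktop's MAC address)
ssh user@homeserver "wakeonlan 00:11:22:33:44:55"

# Once the desktop is up, stage the file on the server...
ssh user@homeserver "scp desktop:/home/user/presentation.ppt /tmp/"

# ...then pull it down to the work machine
scp user@homeserver:/tmp/presentation.ppt .
```

Note that Wake-on-LAN must be enabled in the desktop’s BIOS and network settings for the magic packet to work.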
Today, with modern Linux distributions such as Ubuntu, installing and configuring server applications has never been easier. With default configurations that mostly work out of the box with minimal tweaking, you can have a machine up and running in minutes. No special hardware is required: if you have a spare computer with a NIC, you have a server.
The only real obstacle is your internet connection. First and foremost, you’ll need a broadband connection such as DSL or cable. In addition, while not required, it’s a good idea to get a static IP address if you can: a unique identifier assigned to your network on the internet that doesn’t change. DSL Extreme, for example, offers affordable static IP service to residential customers. From there, you would register a domain name and point it at your IP address, or get a free subdomain if you prefer.
If you can’t get a static IP, however, all is not lost. Using a service like DynDNS.org, you can get a free subdomain whose record is automatically updated from your home network every time your dynamic IP address changes.
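One common way to automate those updates is ddclient; the following is an illustrative configuration for a DynDNS.org subdomain (the login, password and hostname are placeholders):

```shell
# Write an example ddclient configuration and restart the service
sudo tee /etc/ddclient.conf > /dev/null <<'EOF'
protocol=dyndns2
use=web, web=checkip.dyndns.org
server=members.dyndns.org
login=your_username
password=your_password
yourhost.dyndns.org
EOF
sudo service ddclient restart
```

After that, ddclient periodically checks your external IP and pushes any change to DynDNS, so your subdomain always points home.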
james January 19th, 2009
At one point or another, you’ve probably asked yourself why you continue to spend hundreds (or perhaps thousands) of dollars on Microsoft products, especially in the downward economy we find ourselves faced with today. It could be that you’re worried about having to learn a new and unfamiliar environment. Or, maybe you feel that there aren’t enough applications available for anything other than Windows to justify switching to something else. Perhaps you’ve already invested a substantial amount of money in software that runs on Windows and don’t want that investment to go to waste.
Whatever the reason may be, there’s never been a better time to migrate away from proprietary software and make the move to Linux, a premium open source solution. Not only are the arguments outlined above irrelevant to the current technological climate, there are many other exciting reasons to consider giving Linux a try.
Linux Does More “Out-of-the-Box,” and It’s all Free!
After installing Microsoft Windows, your first task is inevitably to install a lengthy suite of applications before you can do anything productive, and by the time you’ve finished, you may have incurred hundreds of dollars in additional licensing costs. By contrast, any popular modern Linux distribution comes bundled with an office suite, a fully featured mail client, system administration tools and a host of other applications, saving you hours of installation time at no added cost. Even if you use a commercial Linux distribution with a price tag to match, the software bundled with it is almost always free and open source, meaning you pay no extra licensing fees.
Thousands of Additional Applications, all Ready To Install at the Click of Your Mouse
We’ve all gone through the lengthy process of installing our initial set of applications, just to discover that we’ve either forgotten something or that we have additional needs. If you’re a user of Microsoft Windows and proprietary applications, you’ll get to fork out even more money, and be faced with the daunting task of manually downloading executable files and/or swapping CDs back and forth, with every installation method differing significantly from the last.
If you’re a user of Linux, with a few clicks of the mouse, you’ll find thousands of applications, all available from a single repository, ready to automatically download and install. Oh, and have I mentioned that they’re all free?
Running Windows Software on Linux
“I want to use Linux, but there’s one crucial application that’s holding me back.” Those of us who have moved away from Windows know all too well the pain of leaving behind old (or perhaps not so old) software investments. Whether it’s an in-house program for your workplace, an office suite or even a favorite game, you don’t want to lose your ability to run legacy Windows software.
This used to be a very good reason for abandoning open source migration efforts, but fortunately it’s no longer a serious issue. The WINE project (http://www.winehq.org/), which represents fifteen years of hard work and dedication by open source developers across the globe, has grown into a mature, nearly drop-in replacement for the Windows environment, and runs quite a few Windows programs out of the box, including Microsoft Office. Applications that don’t will often run with minimal tweaking, and where native Windows libraries are required to make an application work, you have the option of using them in place of, or in addition to, WINE’s own bundled libraries.
For those rare instances where WINE fails to meet your needs, Linux sports a competitive suite of virtualization solutions (for more information, look up KVM or Xen), which will enable you to run a properly licensed Windows installation on top of your Linux environment at a level of performance comparable to that attained by running Windows natively on hardware.
Security and Your Peace of Mind
Anybody who’s had to manage a Windows machine will know what a hassle it is to have to keep up with anti-virus and anti-spyware updates, and how worrying it can be when we learn about new critical vulnerabilities that could result in a malicious third party gaining control of our software.
By using Linux, you have the dual advantage of working on a minimally targeted platform and on a platform built on a solid, simple and time-tested security model. Unlike Windows, there is little if any real need for anti-virus software (unless you’re running a mail server that hosts messages which might be read by people using Windows). In addition, due to the rapid pace of open source software development, if a security vulnerability is discovered, a fix follows quickly. Instead of relying on a single organization to inspect and patch its code (a single point of failure), you have an entire global community with access to the source, eager to support the software it maintains and passionate about writing good code.
With today’s uncertain economic climate, now is the perfect time to consider migrating to an open source solution. The arguments against it continue to dwindle as open source operating systems such as Linux increasingly prove not only to match Windows in functionality, but to surpass it.
We here at eRacks specialize in open source solutions, and are ready to cater to your needs. Whether you’re purchasing servers or desktops running open source software, or you’re looking for help with your open source migration efforts, eRacks provides the services you need to get the job done.
james December 19th, 2008
There may be situations where you’d like to login to a remote machine via SSH and not have to enter a password to do it. Perhaps you have some sort of automated file transfer that makes use of SCP. Or, perhaps you frequently login to the same machine and get tired of having to enter a password each time. Whatever the reason may be, an attractive alternative to using passwords involves making use of cryptographic keys.
To give you a general idea of what’s involved, you’ll first generate a public/private key pair. Your public key is what you would copy to every machine you want to be able to log into. You can think of the public key as the lock on a door to a house. The reason why we call this a public key is that it’s safe to share it with the public, just as the lock on your door is safe to display from the outside. By contrast, your private key can be thought of as the key that fits into the lock. Unlike your public key, you should never copy it to machines that are either untrusted or to machines that you yourself don’t administer — this would be a bit like placing the key to your front door outside your house for strangers to use! Anybody who possesses your private key can access every machine to which you’ve made your public key accessible, so exercise extreme caution and guard your private key with your life.
SSH makes generating your keys very simple. From the command line, you’ll simply enter the following command:
ssh-keygen -t rsa
You’ll then be asked a series of questions. Accept all the defaults. If you don’t desire to password protect your key pair (which would require you to enter a password when you use it), hit enter when asked for the password, without typing anything in. At the end of the process, you should discover two new files in ~/.ssh, id_rsa and id_rsa.pub, where ~ stands for your home directory. From here, you’ll copy your public key (id_rsa.pub) to every machine you wish to log into and append its contents to a file called ~/.ssh/authorized_keys, where ~ stands for the home directory of the account you wish to log into.
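As a concrete sketch of those two steps, run locally in a scratch directory so nothing touches your real ~/.ssh (the remote-copy command in the final comment is the part you’d actually run against another machine):

```shell
set -e
# Generate an unprotected RSA key pair in a scratch directory
# (in real use, accept the default location, ~/.ssh/id_rsa)
mkdir -p /tmp/sshdemo
ssh-keygen -q -t rsa -N "" -f /tmp/sshdemo/id_rsa

# Append the public half to an authorized_keys file, exactly as you
# would on the remote machine's ~/.ssh/authorized_keys
cat /tmp/sshdemo/id_rsa.pub >> /tmp/sshdemo/authorized_keys

# In practice, ssh-copy-id automates the copy-and-append step:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub username@remotehost
```

Make sure ~/.ssh and authorized_keys on the remote machine are not group- or world-writable, or sshd may refuse to use them.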
To test your newly generated key pair, try connecting to one or more of the remote machines you copied your public key to. If everything is set up correctly, you’ll be sent straight to a command prompt, without being asked for a password.
Now, there are situations where using keys without passwords can be hazardous, so give some thought to the circumstances in which your key pair will be used. For example, I will never copy my laptop’s public key to my personal server at home, because if my laptop were ever stolen, the thief (if he knew how to use *NIX) would not only have access to all my local data, but would also have complete SSH access to my home network, since he would have my laptop’s private key. Thus, I choose to sacrifice convenience for security in that particular situation. As with all things in life, security versus convenience is a trade-off, so choose wisely.
james November 21st, 2008
This blog is the result of at least two hours of pain and suffering while trying to boot off of an Ubuntu-based CD. If it saves even one person from the same laborious fate, it has served its purpose.
How many times have you attempted to boot from an Ubuntu CD, only to find yourself sitting in front of a very intimidating (initrd) prompt with no clue as to why the system failed to boot or how to fix it? Unfortunately, the causes of this dreaded phenomenon are many, which often makes the problem very difficult to troubleshoot. I’ve run into this issue myself on occasion, though up until a couple of weeks ago it had never been caused by anything too difficult to fix. Most of the time it was simply a matter of unsupported hardware. That all changed with my latest install.
Now, before I go any further with this, I should probably note that the distribution I had trouble with was NOT Ubuntu; it was an Ubuntu derivative, Eeebuntu, developed by a third party that is not in any way affiliated with Ubuntu. In fact, I later tried installing from an official Ubuntu CD and it booted just fine!
That being said, it’s quite possible that you have found yourself faced with the (initrd) prompt on at least one occasion. If so, you hopefully figured out what went wrong and were able to fix it. But, what if you’ve exhausted all of your obvious options? It’s quite possible that you’ll ask questions on the Ubuntu forums, only to find that nobody has an answer that solves your problem. That isn’t in any way meant to disparage the Ubuntu community. In fact, I think you’ll find that the forums are very helpful and that the community is very friendly and knowledgeable. Rather, it’s quite possible that, for whatever reason, you’ve run into a problem the community hasn’t yet encountered or been able to solve, which on some rare occasions may even turn out to be a bug. Whatever the reason may be, short of finding another Linux distribution, you may be thinking that all is lost. Fortunately, there’s another way!
When you see the (initrd) prompt, it’s because, for whatever reason, Ubuntu was unable to find or mount the root filesystem on the CD. The solution is to mount it manually. Assuming you can get the filesystem mounted, you should have no problem breaking out of what at first glance may have seemed a hopeless situation.
Now, you may be tempted to believe that the root filesystem of the Ubuntu CD is the same root filesystem you would see after booting the Ubuntu LiveCD, but that’s actually incorrect. Ubuntu uses a special compressed filesystem called SquashFS. If you mount your Ubuntu LiveCD, you should find it in /path/to/cdrom/casper/filesystem.squashfs. With this information in mind, we can proceed.
Step 1: Manually mount the CD
From the (initrd) prompt, manually enter the following command:
mount /dev/scd0 /cdrom
(scd0 should be replaced by the device name that refers to your optical drive.)
If you can’t find a device name for your optical drive, that may be why the initrd (short for initial RAM disk) failed to mount it. If you’re sure there’s no device in /dev for your optical drive, copy the contents of the Ubuntu disc to an external hard drive or USB thumb drive (either should be recognized immediately by the initrd after being plugged into a USB port). Mount it instead of the CD to complete this step. To do so, use the command:
mount /dev/your_device /cdrom
(note that the device name usually shows up as sda1, sdb1, sdc1, etc.)
Step 2: Manually mount the root filesystem
Again, from the (initrd) prompt, enter the following command:
mount -o loop /cdrom/casper/filesystem.squashfs /mnt/root
At this point, things may or may not get tricky. Most likely, the command will be successful and you’ll be ready to continue booting the Ubuntu LiveCD. If that’s the case, skip directly to step 4.
Step 3: I can’t mount the root filesystem; HELP!
It’s possible that you’re more than just a little unlucky, and that for some very strange reason that I haven’t yet figured out, you don’t have support for loopback devices. Fear not! You will have some extra work to do, but the following steps should work just fine.
First, you’ll need access to another Linux machine. You’ll also need a spare hard drive or USB thumb drive. Note that if using a thumb drive, you’ll need one larger than 2GB, as the SquashFS filesystem included on the CD takes up more than 2GB of space when decompressed. Finally, make sure that SquashFS support is installed on your system, as it most likely isn’t by default. Depending on your Linux distribution, you may or may not have to patch your kernel and compile the squashfs module manually. If you’re using a distribution like Ubuntu, you shouldn’t have to.
Now, mount the block device you’re going to extract the filesystem’s contents to (we’ll refer to it henceforth as /dev/sda1.) We’ll assume for the sake of this tutorial that we’re mounting it to the directory /mnt/tmp. To do so, you would enter the following command:
mount /dev/sda1 /mnt/tmp
Next, we must mount the SquashFS filesystem. Assuming we’re using the mount point /mnt/squashfs, we would do so with the following command:
mount -o loop /path/to/cdrom/casper/filesystem.squashfs /mnt/squashfs
Finally, copy the contents of /mnt/squashfs to /mnt/tmp. Note that simply using the command cp can result in symbolic links being followed and copied as real files and directories, which is not desirable. Instead, we’ll use tar and a pipe, which preserves links, permissions and ownership. Enter the following commands, in order:
cd /mnt/tmp
tar -jcvp /mnt/squashfs/* | tar -jvxp
When the above commands are completed, enter this one last command:
mv /mnt/tmp/mnt/squashfs/* /mnt/tmp; rm -rf /mnt/tmp/mnt
Now, just unmount the volumes and you’re done!
Step 4: Success!
At this point, you’ve succeeded in mounting the root filesystem that, for whatever reason, was unable to be mounted automatically by the LiveCD. Just type the command “exit” from the prompt and watch as Ubuntu continues where it left off. Note that you will no longer have a splash screen during the boot process, so expect to see the output of init for a few seconds as it starts background processes before seeing a graphical login.
Wrapping Things Up
Hopefully, this blog will prove useful to someone. Even if you haven’t run into this issue before, it’s good information to have on hand for the day when that changes. In addition, the techniques outlined in this blog aren’t just useful for getting a cranky LiveCD to behave. If you’re using a device that’s bootable via the BIOS but which isn’t supported by Ubuntu, and you’d rather not take the time to modify the LiveCD to make it work, simply follow the steps above to copy the contents of the CD to another device and manually mount the real root filesystem.
james October 16th, 2008
We at eRacks are designing a new model geared specifically toward the developer, and want to hear from you, the customer, about what you would like to see in the system (please leave detailed comments for this blog post!)
We’ve been batting around a few ideas, both software and hardware related, and would like to share them here for your consideration.
1. IDE, Revision Control System and your Operating System of Choice
Our development model would (of course!) come pre-installed with the best in open source development-related software. Do you have a favorite IDE, or do you prefer to simply invoke your text editor, compiler and makefiles directly? Would you like us to install a revision control system such as CVS, Subversion, Mercurial or Git? What’s your operating system of choice? Are you a fan of Linux, FreeBSD, OpenBSD, NetBSD, OpenSolaris, etc.?
2. What Kind of Developer are you?
While there are usually at least some applications common to most developers, a great deal of the software you’d like to be installed will probably depend significantly on the kind of development you do. Are you a kernel developer? If so, we’ll install the kernel source and headers for you. Are you an applications developer? If so, are there any open source libraries you’d like us to pre-install for you? What about you web developers out there? We could, at your option, install a local web and database server for testing purposes, as well as your scripting engine of choice (PHP, Ruby, Python, Perl, etc.) Do you not fit exactly into any of these categories? Have we missed something? Let us know!
Do you prefer to develop on a laptop, or do you like to do your programming on a desktop machine? What would you think about having the option of two or more monitors to help you spread out your work, configured to your unique specifications (would you like 2 or more individual displays, or 2 or more monitors tied together into a single virtual display?)
Anything we haven’t mentioned that you’d love to see in a development-specific model? Again, just let us know! Be sure to leave us a comment sharing your thoughts.
james September 18th, 2008
Have you ever needed to back up one or more filesystems to another machine, but had only a single hard drive in the machine being backed up, and lacked the temporary disk space to create the backup locally before shuttling it across the network to its final destination?
As an example, when I back up my laptop, I have too many gigabytes of data to realistically store on DVD-Rs, so my only option is to create a tarball of the root filesystem and store it on another machine on my network. The problem is that the resulting tarball is too large to fit on the laptop’s hard drive alongside the data itself.
One solution that I’ve found to this problem is to avoid storing the backup on the source machine altogether. Through stdin and stdout, along with the magic of *NIX pipes, we can stream the data in realtime over to its destination, and only then write it to disk.
Before we begin, it is very important to note that in most situations, you’ll have to boot into another environment and manually mount your partition before proceeding, particularly when dealing with an operating system’s root filesystem. Otherwise, not only will tar choke on certain directories like /proc and /dev, the contents of the disk will also continue to change as the backup is being made, leading to inconsistencies between the data on your filesystem and the data in the backup.
With that in mind, assuming that you have ssh installed and configured correctly on both the source and destination computers, and that the filesystem to back up is mounted at /mnt/source, you can create a backup with the following command (as root):
# tar -jcvp -C /mnt/source . | ssh username@destination “cat > /path/to/backup.tar.bz2”
If you prefer to use gzip as opposed to bzip2, replace the above tar command with the following:
# tar -zcvp -C /mnt/source . | ssh username@destination “cat > /path/to/backup.tar.gz”
Now, let’s say that you’ve created a new partition and want to restore a previous backup. Again, assuming that ssh is configured properly on both machines, and that the new partition is mounted at /mnt/target, you would recover your backup with the following command (again, as root):
# ssh username@destination “cat /path/to/backup.tar.bz2” | tar -jvxp -C /mnt/target
If the backup is a gzipped archive, then replace the above tar command with the following:
# ssh username@destination “cat /path/to/backup.tar.gz” | tar -zvxp -C /mnt/target
Note that the user specified by ‘username’ above should have read/write permissions on the directory where the backup is to be stored for this procedure to work.
The astute reader will probably notice the missing -f option, which one usually passes to tar to tell it which file to write to or read from. By omitting it, we tell tar to send its output to stdout when creating an archive, or to read from stdin when extracting, which is what allows us to use pipes. It’s situations like these where the power of *NIX really shines!
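The same streaming trick can be demonstrated entirely locally, with no network involved; here -f - makes the stdout/stdin behavior explicit (on most Linux systems, omitting -f entirely, as in the commands above, has the same effect), and the directories are throwaway examples:

```shell
set -e
# Archive one directory to stdout and unpack it into another,
# with no intermediate archive file ever written to disk
mkdir -p /tmp/pipedemo/src /tmp/pipedemo/dst
echo "hello" > /tmp/pipedemo/src/file.txt

tar -jcpf - -C /tmp/pipedemo/src . | tar -jxpf - -C /tmp/pipedemo/dst

# The file arrives on the other side of the pipe intact
cat /tmp/pipedemo/dst/file.txt
```

Replace the second tar with the ssh invocation shown above and the pipe carries the archive across the network instead.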
james May 28th, 2008
Posted In: Backups