A while ago I bought an Iomega ix2-200 NAS box. I’m a shallow human being: I bought it mainly because it wasn’t fugly like other NASes. (Also it got reasonable reviews. It really is quite sexy though.) It’s been a great little device so far. I mount it over NFS on my Mac to give me seamless filesystem-level access, and it has a DLNA media server which I pointed at the iTunes folder I keep on it, allowing me to browse and listen to music in bed on my phone (via a great little Android app called 2player).
Inevitably, I bought the cheaper version with two 1TB disks. Mainly this was to accommodate my iTunes library, which had outgrown the external drive it used to live on, so I thought 2TB would be more than enough. However, I then discovered the joys of archiving DVDs, and the drives soon filled up. So, the time finally came to upgrade them. Iomega would have you buy replacement drives from them at a hefty markup, but it seems you can just install your own (though according to the manual this will void your warranty).
The project was therefore: get two new 2TB disks into the NAS, and then copy the data from the old 1TB disks.
This fellow’s blog suggests it’s doable, and describes preparing a disk by partitioning it, creating filesystems, copying key files and so on. Being lazy, I wondered whether I could just use UNIX dd to copy the old disk to a new one, as this should replicate the partition table and any contents.
So, I took a fresh disk (one of a pair of Samsung 2TB F4EG HD204UI/Z drives) and plugged it into one of two spare internal SATA cables in an Ubuntu Linux box. I extracted drive 1 from the NAS (a simple job of taking out two screws and sliding out the removable drive tray) and plugged it in too. Booting the Linux PC, I could then see the drives as new devices, /dev/sda and /dev/sdc. Using the Ubuntu GUI Disk Utility I could see the models of each device and so know which was the NAS drive (a Seagate) and which the Samsung.
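The GUI Disk Utility did the job, but the same information is available from the command line via lsblk (part of util-linux, which ships with Ubuntu), which is handy if you're working over SSH:

```shell
# List block devices with their size and model string; useful for
# telling the old Seagate apart from the new Samsung before running
# anything destructive against the wrong device.
lsblk -o NAME,SIZE,MODEL
```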
I ran a simple dd if=/dev/sda of=/dev/sdc. This does a byte-for-byte copy of one drive to the other, including partition tables and whatnot. The layout of the NAS drive is described in the above post: it’s two partitions, a small one containing Iomega’s NAS Linux, and a large ~1TB one for the data. (I discovered more details later, when I had to mount these drives to get at the old data.)
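If you want to convince yourself of what dd does before pointing it at real devices, the same idea works on ordinary files. This is just a sketch with made-up image filenames; the real copy on my machine was dd if=/dev/sda of=/dev/sdc:

```shell
# Make a small "disk" full of random bytes, clone it with dd,
# and check the copy is byte-for-byte identical.
dd if=/dev/urandom of=source.img bs=1M count=4 status=none
dd if=source.img of=clone.img bs=1M status=none
cmp source.img clone.img && echo "clone is identical"
```

A larger bs (block size) than dd's default generally speeds up whole-device copies considerably.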
Since I knew I didn’t want the data (I was going to copy it manually later), I interrupted the dd after a while, reasoning that the partition table and the contents of the important first partition would have been copied, and I didn’t care about the second partition’s contents. dd reported ~22MB/s, so it would have taken ages to copy the whole 1TB anyway.
Next, I took the second drive out of the NAS, leaving me with an empty chassis. I unscrewed the first 1TB drive from its rail caddy and popped the 2TB onto it instead, and slid it into the NAS. Powering up, at first I thought it had failed as there was just a blinking white light on the front, but after a while the blue drive light started flickering as it read the disk. One thing of note – the Samsung drives are very very quiet compared to the clattery Seagate ones.
The NAS is configurable via a web interface, but I needed to know its IP address for the URL. The NAS gets its address via DHCP, so I went to my router’s web admin interface and looked at the DHCP clients to discover it. Typed this into the browser, and hey presto! It had worked: I got the usual Iomega NAS interface, with a message on the dashboard about a drive failure and possible data loss (i.e. “I can’t see drive 2!”). It reported the drive as 1.8TB.
I powered down the NAS and put in the second, completely blank 2TB drive. After rebooting, I went back to the NAS UI and re-initialised the drives into JBOD mode (to give me 4TB of un-mirrored space – Iomega call this “without data protection”, since you lose all your data if one drive fails). And that was that, the NAS was upgraded.
The next job, then, was to copy the old data over. Each of the two 1TB disks had a data partition. The Iomega uses md (software RAID) to treat both partitions as a single device, with LVM (Logical Volume Manager) on top of that. The clues to this were in the Ubuntu Disk Utility, which shows the partition type (“Raid” when the RAID array isn’t yet assembled, and then “LVM2 Physical Volume” when it is).
To create a RAID device that I could mount, I used the mdadm command: mdadm --assemble --scan. This created a single device /dev/md0 out of the two RAIDed partitions. Next I used the pvs command (as root, no arguments) to discover the LVM “Volume Group”, in this case md1_vg. With sudo lvdisplay /dev/md1_vg I was able to get the LV Name, which is what mount needs in order to mount it all as a filesystem. However, lvdisplay listed the “LV Status” as “NOT available”, which apparently means you can’t mount it. To fix this I ran sudo lvchange -a y md1_vg, and lvdisplay then listed it as available. I could then mount the filesystem with sudo mount /dev/md1_vg/md1vol1 /mnt/oldnas.
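Gathered together, the whole dance looks like this. The md0, md1_vg and md1vol1 names are what my drives happened to use; check the pvs and lvdisplay output for yours. (This obviously needs root and the actual disks plugged in.)

```shell
# Assemble the RAID array from the old NAS partitions
sudo mdadm --assemble --scan            # creates /dev/md0
# Find the LVM volume group sitting on top of the array
sudo pvs                                # shows the VG name, here md1_vg
sudo lvdisplay /dev/md1_vg              # shows the LV name and its status
# Mark the logical volume as available, then mount it
sudo lvchange -a y md1_vg
sudo mkdir -p /mnt/oldnas
sudo mount /dev/md1_vg/md1vol1 /mnt/oldnas
```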
Mounting the new NAS was just a normal NFS mount. One thing to note (easy to miss if you don’t read the small print on the NAS box’s NFS page) is that NFS shares have an /nfs prefix. Thus if your NAS is called ‘storage’ and you have a share called ‘data’, the NFS export is actually storage:/nfs/data. I mounted this to /mnt/nas and rsync’d the data over. (I tend to use rsync, even for local copies, so if something goes wrong I can just rsync again without having to recopy everything.)
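Spelled out as commands, with the hypothetical ‘storage’ and ‘data’ names from above:

```shell
# Note the /nfs prefix on the export path
sudo mkdir -p /mnt/nas
sudo mount -t nfs storage:/nfs/data /mnt/nas
# -a = archive mode (preserves times, permissions and so on);
# if interrupted, re-running picks up roughly where it left off
rsync -a --progress /mnt/oldnas/ /mnt/nas/
```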
One small note – Ubuntu didn’t have any of the mdadm, lvm or nfs tools installed by default. But they were easy enough to install via the package manager or apt-get.
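For the record, on Ubuntu the relevant packages are (if memory serves) mdadm, lvm2 and nfs-common:

```shell
sudo apt-get install mdadm lvm2 nfs-common
```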
I did have some concerns about power usage and temperature. I suspect the Samsung drives actually use less power than the Seagate ones, but I don’t have the technical data. The temperature reported on the dashboard hardware status went up to 52 degrees Celsius during the file transfer (with an ambient temperature of about 18 degrees), but the fan didn’t come on, so I suppose this is within tolerance. It’s early days yet, but we’ll see how it goes.
All in all it was one of those days when everything went pretty smoothly; everything worked as expected and there were no real gotchas.