Upgrading the disks in an Iomega StorCenter ix2-200 NAS

A while ago I bought an Iomega ix2-200 NAS box. I’m a shallow human being: I bought it mainly because it wasn’t fugly like other NASes. (Also, it got reasonable reviews. It really is quite sexy though.) It’s been a great little device so far. I mount it over NFS on my Mac to give me seamless filesystem-level access, and it has a DLNA media server which I pointed at the iTunes folder I keep on it, allowing me to browse and listen to music in bed on my phone (via a great little Android app called 2player).

Inevitably, I bought the cheaper version with two 1TB disks. Mainly this was to accommodate my iTunes library, which had outgrown the external drive it used to live on, so I thought 2TB would be more than enough. However, I then discovered the joys of archiving DVDs, and the drives soon filled up. So, the time finally came to upgrade them. Iomega would have you buy replacement drives from them at a hefty markup, but it seems you can just install your own (though according to the manual this will void your warranty).

The project was therefore: get two new 2TB disks into the NAS, and then copy the data from the old 1TB disks.

This fellow’s blog suggests it’s doable, and describes preparing a disk by partitioning it, creating filesystems, copying key files and so on. Being lazy, I wondered whether I could just use UNIX dd to copy the old disk to a new one, as this should replicate the partition table and any contents.

So, I took a fresh disk (one of a pair of Samsung 2TB F4EG HD204UI/Z drives) and plugged it into one of two spare internal SATA cables in an Ubuntu Linux box. I extracted drive 1 from the NAS (a simple job of taking out two screws and sliding out the removable drive tray) and plugged it in too. Booting the Linux PC, I could then see the drives as new devices, /dev/sda and /dev/sdc. Using the Ubuntu GUI Disk Utility I could see the models of each device and so know which was the NAS drive (a Seagate) and which the Samsung.

I ran a simple dd if=/dev/sda of=/dev/sdc. This does a byte-for-byte copy of one drive to the other, including partition tables and whatnot. The layout of the NAS drive is described in the above post: it’s two partitions, a small one containing Iomega’s NAS Linux, and a large ~1TB one for the data. (I discovered more details later, when I had to mount these drives to get at the old data.)

Since I knew I didn’t want the data (I was going to copy it manually later), I interrupted the dd after a while, reasoning that the partition table and the contents of the important first partition would have been copied, and I didn’t care about the 2nd partition’s contents. dd reported ~22MB/s, so it would have taken ages to copy the whole 1TB anyway.
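For reference, a sketch of the clone command. The device names are just what my machine assigned; check yours first with the Disk Utility or lsblk, because dd will happily destroy the wrong disk. A bigger block size and a progress report (available in reasonably recent GNU dd) make the wait less mysterious:

```shell
# Clone the old NAS drive (/dev/sda here) onto the new drive (/dev/sdc).
# bs=1M speeds things up; status=progress shows bytes copied so far.
# Interrupt with Ctrl-C once the small first (system) partition is across
# if, like me, you don't need the data partition's contents.
sudo dd if=/dev/sda of=/dev/sdc bs=1M status=progress
```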

Next, I took the second drive out of the NAS, leaving me with an empty chassis. I unscrewed the first 1TB drive from its rail caddy and popped the 2TB onto it instead, and slid it into the NAS. Powering up, at first I thought it had failed as there was just a blinking white light on the front, but after a while the blue drive light started flickering as it read the disk. One thing of note – the Samsung drives are very very quiet compared to the clattery Seagate ones.

The NAS is configurable via a web interface, but I needed to know its IP address for the URL. The NAS gets its IP address via DHCP, so I went to my router’s web admin interface and looked at the DHCP clients to discover the address. Typed this into the browser, and hey presto! It had worked, I got the usual Iomega NAS interface, with a message on the dashboard about a drive failure and possible data loss (i.e. “I can’t see drive 2!”). It reported the drive as 1.8TB.

I powered down the NAS and put in the second, completely blank 2TB drive. After rebooting, I went back to the NAS UI and re-initialised the drives into JBOD mode (to give me 4TB of un-mirrored space – Iomega call this “without data protection”, since you lose all your data if one drive fails). And that was that, the NAS was upgraded.

The next job, then, was to copy the old data over. Each of the two 1TB disks had a data partition. The Iomega uses md (software raid) to treat both partitions as a single device, and then LVM (Linux Volume Manager) on top of that. The clues to this were in the Ubuntu Disk Utility, which shows the partition type (“Raid” when the RAID array isn’t yet assembled, and then “LVM2 Physical Volume” when it is).

To create a RAID device that I could mount, I used the mdadm command: mdadm --assemble --scan. This created a single device /dev/md0 out of the two RAIDed partitions. Next I used the pvs command (as root, no arguments) to discover the LVM “Volume Group”, in this case md1_vg. With sudo lvdisplay /dev/md1_vg I was able to get the LV Name, which is what mount needs in order to mount it all as a filesystem. However, lvdisplay listed the “LV Status” as “NOT available”, which apparently means you can’t mount it. To fix this I ran sudo lvchange -a y md1_vg, and lvdisplay then listed it as available. I could then mount the filesystem with sudo mount /dev/md1_vg/md1vol1 /mnt/oldnas.
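Putting those steps together, the whole sequence looks roughly like this. The volume group and logical volume names (md1_vg, md1vol1) are the ones I found on my disks; yours may differ, so check the output of pvs and lvdisplay rather than copying blindly:

```shell
# Assemble the RAID array from the member partitions it finds on the disks
sudo mdadm --assemble --scan

# Discover the LVM volume group sitting on top of /dev/md0
sudo pvs

# Inspect the logical volume (initially reports "LV Status: NOT available")
sudo lvdisplay /dev/md1_vg

# Activate the volume group so the logical volume can be mounted
sudo lvchange -a y md1_vg

# Mount the data filesystem
sudo mkdir -p /mnt/oldnas
sudo mount /dev/md1_vg/md1vol1 /mnt/oldnas
```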

Mounting the new NAS was just a normal NFS mount. One thing to note (easy to miss if you don’t read the small print on the NAS box’s NFS page) is that NFS shares have an /nfs prefix. Thus if your NAS is called ‘storage’ and you have a share called ‘data’, the NFS export is actually storage:/nfs/data. I mounted this to /mnt/nas and rsync’d the data over. (I tend to use rsync, even for local copies, so if something goes wrong I can just rsync again without having to recopy everything.)
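As a sketch, with a NAS named ‘storage’ and a share named ‘data’ (both hypothetical names, as in the example above):

```shell
# Mount the NAS share - note the /nfs prefix on the export path
sudo mkdir -p /mnt/nas
sudo mount -t nfs storage:/nfs/data /mnt/nas

# Copy the old data across; -a preserves attributes, and re-running
# after an interruption only transfers whatever is still missing
rsync -av --progress /mnt/oldnas/ /mnt/nas/
```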

One small note – Ubuntu didn’t have any of the mdadm, lvm or nfs tools installed by default. But they were easy enough to install via the package manager or apt-get.
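On a stock Ubuntu install, something like this should pull them all in (package names as they appeared in the Ubuntu repositories at the time):

```shell
sudo apt-get install mdadm lvm2 nfs-common
```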

I did have some concerns about power usage and temperature. I suspect the Samsung drives actually use less power than the Seagate ones, but I don’t have the technical data to confirm it. The temperature reported on the dashboard hardware status went up to 52 degrees Celsius during the file transfer (with an ambient temperature of about 18 degrees), but the fan didn’t come on, so I suppose this is within tolerance. It’s early days yet, but we’ll see how it goes.

All in all it was one of those days when everything went pretty smoothly; everything worked as expected and there were no real gotchas.


  1. Anonymous said:

    hello,thank you for this post.is there a way that i can run this from my mac book pro?which format are the harddisks formated ?can they store files bigger than 4gb?i am new here, thank you

  2. Anonymous said:

    I think it would be difficult to do the things I describe from a MacBook. The NAS is a Linux machine and uses Linux-specific disk formats. In theory you might be able to run Linux on the MacBook in a virtual machine, and connect the disks using an external enclosure or dock, but I wouldn’t think this is easy. The filesystem on the disk is ultimately an ext3 filesystem I believe (which your MacBook wouldn’t understand).

    I’ve just checked, and the NAS is fine with >4GB files. I opened a Terminal window on my MacBook Pro and created a 5GB file full of random bytes: dd if=/dev/random count=6020 bs=1m of=5gig (this command means: copy 6020 blocks of 1MB from the special file /dev/random to the output file called "5gig"). I then copied this to the NAS. I was curious, so I timed the copy too: time cp 5gig /nas/5gig

        real 5m24.687s
        user 0m0.026s
        sys  0m34.758s

    So 5 GB in 5 minutes and 25 seconds, that’s about 18.5 MB/s. I then compared the file on my laptop with the file on the NAS, to make sure the copy was successful: cmp 5gig /nas/5gig. This command produced no output, which means the files were identical.

  3. Anonymous said:

    Thank you for this post. I have upgraded my ix2-200 cloud edition with a pair of 2TB WD Caviar Green HDDs. They are a lot quieter and make this NAS that much better. And when I say quiet, I mean these things don’t make a sound compared to the original Seagate crap. The only hang-up I had following your directions was that after I copied one of the original drives onto the new larger drive, it created the OS partition as well as a (practically invisible) 1TB partition on the 2TB drive. I eventually got around that problem, as well as a ‘pool:name’ problem. In the end, it all works, so for that thank you. As for doing this on a Mac, I do believe it is possible using a Live CD of Ubuntu. I was able to do this on my Windows PC using an Ubuntu CD and a USB HDD dock. On Ubuntu’s website there is a download to make a live CD. Then restarting your machine with the CD in should give you the option to try Ubuntu, no installation required.

  4. Anonymous said:

    johnny12342 im getting the ‘pool:name’ problem, how did u solve this one?

  5. Marcel W. said:

    This article was a real time-saver (and life-saver actually) for me, many many thanks 🙂

    One of the HDDs in my ix2 had been crashing at regular intervals lately. Leaving the ix2 turned off for a while tended to temporarily ‘fix’ this issue, but it still looked like the unit would become unusable in the short term. The major downside was that I had configured my unit to use ‘JBOD’, which means I would effectively lose all my 6TB of data if I did not take action. So I bought a new PC and four new 3TB hard disks (I’m going to use the six hard disks in a ZFS ‘RAIDZ2’ configuration).

    But after having the new PC up and running, the issue became how to migrate the data from my ix2 to this PC. So first I tried copying the data from the ix2 via SFTP, then I tried rsync over SSH. Unfortunately both methods yielded a max transfer rate of about 6MB/s, meaning transferring my 6TB was going to take more than 60 hours to complete! Of course this could probably also mean the ix2 would crash several times during the process.

    That’s where this article kicked in. Even though I’m somewhat of a Linux-noob, after following the instructions described above I had the old HDD’s up and running in the new PC in a few minutes. Now my data is transferring at speeds up to 50MB/s, with an ETA of less than 24 hours!

    So once again many, many thanks to the author (Yuggoth?)

    • Glad it was useful!

      “Yuggoth… is a strange dark orb at the very rim of our solar system… There are mighty cities on Yuggoth—great tiers of terraced towers built of black stone… The sun shines there no brighter than a star, but the beings need no light. They have other subtler senses, and put no windows in their great houses and temples… The black rivers of pitch that flow under those mysterious cyclopean bridges—things built by some elder race extinct and forgotten before the beings came to Yuggoth from the ultimate voids—ought to be enough to make any man a Dante or Poe if he can keep sane long enough to tell what he has seen…”
      —H. P. Lovecraft, “The Whisperer in Darkness”

      • Marcel W. said:

        I think I’ve been there once 🙂

  6. Marcel W. said:

    tentacles, do u have any clues on how to remove and destroy the array when u’re done with it?

    I’ve tried several things, but it looks like I keep ending up in a catch 22-situation…

    • If you mean how to unmount it, it would be something like: sudo umount /mnt/oldnas. If by ‘destroy’ you mean erase the disks for security reasons, I would do something like: dd if=/dev/zero of=/dev/xyz (where xyz is the disk). This will write zero bytes across the whole disk, which as far as I know makes the data unrecoverable.
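      A sketch of the full teardown, assuming the device and volume names from the post (/dev/md0, md1_vg, and /dev/sda2 as a member partition are examples from my setup; check yours with mdadm --detail and pvs, as the dd step is irreversible):

      ```shell
      # Unmount the filesystem and deactivate the logical volume
      sudo umount /mnt/oldnas
      sudo lvchange -a n md1_vg

      # Stop the RAID array, then wipe the md metadata from each member partition
      sudo mdadm --stop /dev/md0
      sudo mdadm --zero-superblock /dev/sda2

      # Optionally overwrite the whole disk with zeros to destroy the data
      sudo dd if=/dev/zero of=/dev/sda bs=1M status=progress
      ```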

      • Marcel W. said:

        Eventually I managed to kill the thing, thanks again.

  7. John said:


    First off thanks for your article.

    When you say “I thought it had failed as there was just a blinking white light on the front, but after a while the blue drive light started flickering as it read the disk”, can you define “a while”? Minutes, hours, days? It’s been stuck there for several hours but I’m reluctant to reboot it in case it is initializing the array or something.


    • Hi John – it’s been some time now, so can’t say I recall, but I can’t imagine it was more than minutes as I would’ve probably started fiddling again if it had taken any longer.

      • John said:

        Thanks, I appreciate the quick response.
