ZFS with HP Smart Array P410i

As I was trying out ZFS (on Ubuntu) on our HP server, I realised that the RAID card that came with the server, the HP Smart Array P410i, does not have a JBOD (Initiator-Target) mode. Now this is a first-world problem: the card supports RAID 0, 1, 10, 5, 6, 50, and 60, but not plain JBOD (?!).

Moving on, the next best thing is to configure every disk (we have 8 SSDs) as a single-disk RAID 0 array to “simulate” a JBOD setup. Inside Ubuntu, there is then a 1-to-1 mapping of device to drive (sda through sdh), which matches the ZFS recommendation of handing it whole disks.
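
For reference, the logical drives can be created from inside the OS with hpacucli. This is a sketch, assuming the controller sits in slot 0 and the SSDs enumerate as bays 1I:1:1 through 1I:1:8; your bay addresses may differ, so list them first:

hpacucli controller slot=0 physicaldrive all show

for bay in 1 2 3 4 5 6 7 8; do
    # one single-disk RAID 0 logical drive per physical drive
    hpacucli controller slot=0 create type=ld drives=1I:1:$bay raid=0
done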

Creating the ZFS pool and setting some options:

zpool create tank sda sdb sdc sdd sde sdf sdg sdh
zfs set atime=off tank
zfs set sharesmb=on tank
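
To sanity-check that the pool came up and the properties stuck, the usual queries are enough:

zpool status tank
zfs get atime,sharesmb tank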

But this is not the crux of this article. The problem is that random write performance was not as good as we expected out of ZFS. The ideal setup would be to expose individual disks directly to ZFS, but we could only expose them through the RAID card. After some tinkering, we found that the drive write cache was disabled, probably because there is no battery unit on the RAID card.
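
On this card the cache state can be read back with hpacucli; assuming slot 0 as before, the controller summary includes the drive write cache setting:

hpacucli controller slot=0 show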

To enable the drive write cache:

hpacucli controller slot=0 modify drivewritecache=enable
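
To put a number on the before/after difference, any write benchmark will do; the fio invocation below is my illustration, not necessarily how we measured it originally. It runs 4K random writes from four jobs against the pool's default mountpoint:

fio --name=randwrite --directory=/tank --ioengine=libaio \
    --rw=randwrite --bs=4k --size=2G --numjobs=4 \
    --runtime=60 --time_based --group_reporting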

IIRC there were some warnings, but I ignored them, since what I want is a JBOD setup to hold non-mission-critical data (with the drive write cache on and no battery, a power loss can drop in-flight writes, which is an acceptable risk here). A raidz pool can be set up if need be for redundancy, but at this point I lost myself looking at the 2x increase in write bandwidth. Next up, trying to saturate the pipe 😀
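
For completeness, rebuilding the same 8 drives as a raidz2 pool (which survives two drive failures) would look like the sketch below; note that zpool destroy wipes the existing pool and everything on it:

zpool destroy tank
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh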
