For our implementations, we use hardware RAID for larger configurations (more than 4 disks) and software RAID for smaller ones (2 to 4 disks). The performance gain from hardware RAID is negligible on smaller disk configurations.
Hardware RAID has a couple of disadvantages, though. The configuration utility is only available during the boot process, so accessing and working on the RAID requires a reboot. As long as you have IPMI or console access this may not be a big deal, but you are somewhat limited in the diagnostic tools available.
Hardware RAID is the way to go if you want RAID on a Windows box, as there is no RAID option in the Windows or Windows Server install process.
Software RAID, though, gives you a lot of additional options: as the root user you can use mdadm to configure the array and add or remove disks through that interface. Swapping disks this way requires hot-swap capable hardware, but the fact that you can do it at all is pretty helpful. You can also run health-checking tools like Hard Disk Sentinel on software RAID, as the disks are still presented as /dev/sd* devices. A rough sketch of a disk swap is shown below.
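
As a rough illustration, a disk swap on an mdadm array usually looks something like the following. The device names (/dev/md0, /dev/sda, /dev/sdb1) are just placeholders for your own array and partitions, and the sfdisk line assumes an MBR partition table:

    # check overall array health
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # mark the failing disk as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1

    # after physically swapping the disk, copy the partition layout
    # from a surviving member (MBR example), then add the new partition back in
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm /dev/md0 --add /dev/sdb1

    # watch the rebuild progress
    watch cat /proc/mdstat

With hot-swap hardware all of this happens while the system stays up, which is exactly the flexibility hardware RAID's boot-time configuration page doesn't give you.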
Another thing you can do with software RAID is take an installation that lives on a single disk and convert it to software RAID. This requires a separate OS setup or a boot into a rescue disk, but you can copy the partition layout from the single disk to 2 or 3 others and install the boot sector on each disk if you are converting to RAID 1. It is a bit more involved than deploying RAID 1 or RAID 10 from scratch, but it can be done; a rough outline follows.
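
Very roughly, a single-disk to RAID 1 conversion from a rescue environment looks like this. Device names, the MBR layout, and the ext4 filesystem are assumptions here, and the bootloader/initramfs steps are distro-specific, so treat this as a sketch rather than a recipe:

    # replicate the existing partition layout onto the new disk (MBR example)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # build a degraded RAID 1 using only the new disk for now
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

    # put a filesystem on the array and copy the existing data over
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt
    rsync -aAXx / /mnt/    # or copy from a mounted /dev/sda1 if working from a rescue disk

    # update /etc/fstab and mdadm.conf to point at /dev/md0, rebuild the
    # initramfs, then reinstall the bootloader on both disks (usually from a chroot)
    grub-install /dev/sda
    grub-install /dev/sdb

    # once the system boots from the array, add the original disk and let it resync
    mdadm /dev/md0 --add /dev/sda1

The key trick is the "missing" placeholder, which lets you create the mirror in a degraded state, migrate onto it, and only then sacrifice the original disk to the array.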
Based on the OP's question, I think software RAID should suffice, and as long as he keeps his maximum disk usage below 80%, he shouldn't run into too many issues.