I just removed a logical volume from my system, and upon reboot, ESXi 4.1 could no longer see a datastore that lives on a different logical volume, one that was created later. Here's the sequence of events:
- Installed ESXi 4.1 on an HP ML350 G6 with a P420i RAID controller. Originally configured with one RAID-1 logical volume (LV1)
- Added one non-RAID logical volume (LV2) for backups
- Added one RAID-1 logical volume (LV3) for additional VMs / storage
The non-RAID drive started giving errors after a couple of years (problems reading from / writing to the disk), so I went into the RAID configuration and deleted LV2. After I rebooted, ESXi no longer saw the datastore on LV3 (though it could still see the datastore on LV1). I tried to rescan for datastores, but none were detected.
Rather than poke around, possibly make things worse, and lose the data on LV3, I decided to re-create LV2. After a reboot, ESXi saw the datastore on LV3 once again.
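For what it's worth, I assume the shell-side equivalent of that rescan (from the unsupported Tech Support Mode console) would be something like the commands below; the vmhba1 adapter name is just my guess for the Smart Array controller on this box:

```
# Rescan the HBA the Smart Array presents
# (vmhba1 is a guess - esxcfg-scsidevs -a lists the actual adapter names)
esxcfg-rescan vmhba1

# Refresh/rescan VMFS volumes so any newly visible ones get mounted
vmkfstools -V

# See which datastores the host currently has mounted
ls /vmfs/volumes/
```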
So my question is: how do I remove that failing logical volume (LV2) without losing access to the datastore on LV3?
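My rough plan, before I touch the array configuration again, is to record how the existing datastores map to devices so I can compare after the reboot; something along these lines (again from the Tech Support Mode shell):

```
# Map VMFS extents (datastores) to their underlying devices/partitions
esxcfg-scsidevs -m

# Compact list of all storage devices the host currently sees
esxcfg-scsidevs -c
```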
And on a related note, are there any logs I can look at, or commands I can run, to check or verify file system integrity in ESXi? As I mentioned, the non-RAID volume seemed to be developing lots of bad sectors (or perhaps it was file system corruption from power outages - I'm not sure), and I'd like a way to periodically check the health of the file systems.
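To be concrete about what I'm after: I assume I could at least grep the host logs for storage errors and query the VMFS volume header, roughly like this (the datastore name is a placeholder), but I don't know whether there's a proper fsck-style check for VMFS in 4.1:

```
# Look for SCSI/medium errors reported by the vmkernel
# (on ESXi 4.1 the vmkernel messages end up in /var/log/messages)
grep -i -e "scsi" -e "medium error" /var/log/messages

# Show the VMFS volume/partition info for a datastore
# ("datastore1" is a placeholder for the real datastore name)
vmkfstools -P -h /vmfs/volumes/datastore1
```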
Thanks.