Lost access to datastore after removing a different logical volume

I just removed a logical volume from my system, but upon reboot, ESXi 4.1 wasn't able to see a datastore that was on a logical volume created at a later date.  Here's the sequence of events:

 

  1. Installed ESXi 4.1 on an HP ML350 G6 with a P420i RAID controller.  Originally configured with one RAID-1 logical volume (LV1)
  2. Added one non-RAID logical volume (LV2) for backups
  3. Added one RAID-1 logical volume (LV3) for additional VMs / storage

 

The non-RAID drive started giving errors after a couple of years (problems reading from / writing to the disk), so I went into the RAID configuration and deleted LV2.  After I rebooted, ESXi no longer saw the datastore on LV3 (though it could still see the datastore on LV1).  I tried rescanning for datastores, but the missing one wasn't detected.
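In case it helps, would the standard esxcfg tools be the right way to confirm what ESXi is detecting at that point?  I was thinking of something along these lines (vmhba1 is just a placeholder for whatever adapter name the controller actually shows up as):

# Rescan the adapter for added/removed devices
esxcfg-rescan vmhba1

# Show which SCSI device backs each mounted VMFS volume
esxcfg-scsidevs -m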

 

Rather than poke around, possibly make things worse, and lose the data on LV3, I decided to re-create LV2, and after a reboot ESXi saw the datastore on LV3 once again.

 

The question is, then, how should I remove that failing logical volume without losing access to the datastore on LV3?
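One guess on my part (unverified) is that deleting LV2 shifts the device numbering, so ESXi sees the LV3 volume on a "new" device and holds it back as a snapshot instead of mounting it.  If that's what's happening, would something like the following be the safe way to bring it back after LV2 is removed for good?  (The label below is a placeholder for whatever the LV3 datastore is actually named.)

# List VMFS volumes that were detected but left unmounted (snapshot candidates)
esxcfg-volume -l

# Mount the volume persistently, keeping its existing signature
esxcfg-volume -M lv3-datastore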

 

And on a related note, are there any logs I can look at, or commands I can run, to check or verify file system integrity in ESXi?  As I mentioned, the non-RAID volume seemed to be developing a lot of bad sectors (or perhaps it was file system corruption from power outages - I'm not sure), and I'd like a way to periodically check the health of the file systems.
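The only things I know to try so far are along these lines - I'm not sure whether they're the right tools for an integrity check, and the log path is just my assumption for ESXi 4.1:

# Print the VMFS attributes/metadata for a datastore ("lv3-datastore" is a placeholder)
vmkfstools -P /vmfs/volumes/lv3-datastore

# Look for recent SCSI / VMFS errors in the vmkernel messages
grep -iE "scsi|vmfs" /var/log/messages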

 

Thanks.

