I'm not sure what happened, because I have done this update process on other machines. The RAID controller's firmware can only be updated in a Windows environment, so I loaded Windows and applied the firmware update from a drive attached to the onboard SATA. I then removed that drive and loaded ESXi 6.5, and both local datastores were missing.
The RAID controller's web interface showed that the disk volumes were still there and unaffected by the update.
esxcfg-scsidevs -c was also showing the RAID volumes.
I then ran the following command, which checks the VMFS filesystem:
voma -m vmfs -f check -d /vmfs/devices/disks/naa.60026b903ecf8900207acd6e23602932
Checking if device is actively used by other hosts
Running VMFS Checker version 2.1 in check mode
Initializing LVM metadata, Basic Checks will be done
Phase 1: Checking VMFS header and resource files
Detected VMFS-6 file system (labeled:'210.ST.Raid-10') with UUID:58e81197-05964348-4e6f-0022192af848, Version 6:81
Phase 2: Checking VMFS heartbeat region
Phase 3: Checking all file descriptors.
Phase 4: Checking pathname and connectivity.
Phase 5: Checking resource reference counts.
Total Errors Found: 0
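Since voma reports zero errors, the VMFS metadata itself looks intact, so from what I've read the next step would be to check the device's GPT partition table, since a controller firmware update can change how the partition is reported. A sketch of that check (assuming the same naa device path as above; it only does anything when run in an ESXi shell):

```shell
# Sketch of the next diagnostic step; assumes the same device path voma was
# run against. A healthy VMFS data partition in the GPT output should carry
# the VMFS partition type GUID AA31E02A400F11DB9590000C2911D1B8.
DISK="/vmfs/devices/disks/naa.60026b903ecf8900207acd6e23602932"

# Guarded so the snippet is a no-op outside an ESXi shell.
if command -v partedUtil >/dev/null 2>&1; then
    # Print the GPT layout: label, geometry, and one line per partition
    # (number, start sector, end sector, type GUID, type name, attributes).
    partedUtil getptbl "$DISK"
fi
```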
But when I ran the following command, the filesystem type was being reported as vfat:
esxcli storage filesystem list
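One possibility I've read about is that after a controller change, ESXi can treat the volume as a snapshot/unresolved copy and refuse to auto-mount it. A sketch of how to check for that (the UUID is the one voma reported above; the mount command is left commented out, and the checks are guarded so they only run on an ESXi host):

```shell
# Check whether ESXi sees the volume as an unresolved/snapshot copy,
# which can happen when a controller change alters how the device is
# presented. Guarded so this is a no-op outside an ESXi shell.
UUID="58e81197-05964348-4e6f-0022192af848"   # UUID reported by voma above

if command -v esxcfg-volume >/dev/null 2>&1; then
    # List VMFS volumes detected as snapshots/unresolved.
    esxcfg-volume -l
    # If the volume shows up above, it can be mounted persistently by UUID:
    # esxcfg-volume -M "$UUID"
fi

if command -v esxcli >/dev/null 2>&1; then
    # Equivalent esxcli view of unresolved VMFS volumes.
    esxcli storage vmfs snapshot list
fi
```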
Is there a way to get the filesystem to show as VMFS again without losing the data, or do I need to contact a data recovery company?