Some Useful Storage Commands
Command to view all LUNs presented to a host
#esxcfg-scsidevs -c
To check a specific LUN:
#esxcfg-scsidevs -c | grep naa.id
To find the unique identifier of a LUN and its VMFS datastore mapping, you may run this command:
# esxcfg-scsidevs -m
To find the associated datastore using a LUN ID:
#esxcfg-scsidevs -m | grep naa.id
To get a list of RDM disks, you may run the following command:
#find /vmfs/volumes/ -type f -name '*.vmdk' -size -1024k -exec grep -l '^createType=.*RawDeviceMap' {} \; > /vmfs/volumes/Datastore123/rdmluns.txt
This command saves the list of all RDM disks to a text file named rdmluns.txt on Datastore123.
Now run the following command to find the associated LUNs; it will give you the vml.id of each RDM LUN:
#for i in `cat /vmfs/volumes/Datastore123/rdmluns.txt`; do vmkfstools -q $i; done
To mark an RDM device as perennially reserved:
#esxcli storage core device setconfig -d naa.id --perennially-reserved=true
You may create a script to mark all RDMs as perennially reserved in one go, as sketched below.
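A minimal sketch of such a script (not part of the original commands), assuming you substitute the naa IDs of your own RDM LUNs gathered from the vmkfstools output above:
for device in naa.id1 naa.id2 naa.id3   # placeholder IDs, replace with your RDM naa IDs
do
   esxcli storage core device setconfig -d $device --perennially-reserved=true
done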
Confirm that the correct devices are marked as perennially reserved by running this command on the host:
#esxcli storage core device list | less
To verify a specific LUN/device, run this command:
#esxcli storage core device list -d naa.id
The configuration is permanently stored with the ESXi host and persists across restarts.
To remove the perennially reserved flag, run this command:
#esxcli storage core device setconfig -d naa.id --perennially-reserved=false
To obtain LUN multipathing information from the ESXi host command line:
To get detailed information regarding the paths.
#esxcli storage core path list
Or, to list detailed information about the paths for a specific device:
#esxcli storage core path list -d naa.ID
To figure out whether the device is managed by VMware's Native Multipathing Plugin (NMP) or by a third-party plugin:
#esxcli storage nmp device list -d naa.id
This command not only confirms that the device is managed by NMP, but will also display the Storage Array Type Plugin (SATP) for path failover and the Path Selection Policy (PSP) for load balancing.
To list LUN multipathing information,
#esxcli storage nmp device list
To list the available SATPs and their default path selection policies
#esxcli storage nmp satp list
To change the multipathing policy
# esxcli storage nmp device set --device naa_id --psp path_policy
(VMW_PSP_MRU or VMW_PSP_FIXED or VMW_PSP_RR)
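For example, to switch a device to Round Robin (reusing the example device ID from the claim-rule walkthrough later in this article; substitute your own naa ID):
# esxcli storage nmp device set --device naa.60060e80132892005020289200001001 --psp VMW_PSP_RR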
Note: These pathing policies apply to VMware’s Native Multipathing (NMP) Path Selection Plug-ins (PSP). Third-party PSPs have their own restrictions
To generate a list of all LUN paths currently connected to the ESXi host:
#esxcli storage core path list
For detailed path information about a specific device:
#esxcli storage core path list -d naa.id
To generate a list of extents for each volume and the mapping from device name to UUID:
#esxcli storage vmfs extent list
Or, to generate a compact list of the LUNs currently connected to the ESXi host, including the VMFS version:
#esxcli storage filesystem list
To list the possible targets for certain storage operations,
#ls -alh /vmfs/devices/disks
To rescan all HBA Adapters,
#esxcli storage core adapter rescan --all
To rescan a specific HBA.
#esxcli storage core adapter rescan --adapter <vmkernel SCSI adapter name>
Where <vmkernel SCSI adapter name> is the vmhba# to be rescanned.
To get a list of all HBA adapters,
#esxcli storage core adapter list
Note: The rescan commands above may not produce any output if there are no changes.
To search for new VMFS datastores, run this command,
#vmkfstools -V
To check which VAAI primitives are supported.
#esxcli storage core device vaai status get -d naa.id
The esxcli storage san namespace has some very useful commands. In the case of Fibre Channel, you can get information about which adapters are used for FC and display the WWNN (node name) and WWPN (port name), speed, and port state:
#esxcli storage san fc list
To display FC event information:
# esxcli storage san fc events get
VML ID
For example: vml.02000b0000600508b4000f57fa0000400002270000485356333630
Breaking apart the VML ID for a closer understanding: the first 4 digits are VMware specific, and the next 2 digits are the LUN identifier in hexadecimal.
In the preceding example, the LUN is mapped to LUN ID 11 (hex 0b).
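As a quick check of that arithmetic, the ESXi shell can convert the hex byte for you (a trivial sketch, not from the original list):
# printf "%d\n" 0x0b
11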
NAA id
NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN.
The NAA or EUI identifier is the preferred method of identifying LUNs and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.
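To see how these NAA/EUI identifiers line up with the other identifiers the host knows about (vml IDs and runtime names), the UID mapping listing can help; a sketch, assuming the -u option is available in your build of esxcfg-scsidevs:
#esxcfg-scsidevs -u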
Path Identifier: vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>
This identifier is now used exclusively to identify a path to the LUN. When ESXi detects the paths associated with a LUN, each path is assigned this Path Identifier. The LUN also inherits the same name as the first path, but it is now used as a Runtime Name, and not used as readily as the above-mentioned identifiers because it may differ depending on the host you are using. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:C0:T0:L0 = Adapter 1, Channel 0, Target 0, and LUN 0.
(QLogic)
To determine the firmware for a QLogic Fibre Channel HBA on an ESXi/ESX 5.1 host, run these commands on the host:
Go to /proc/scsi/qla####.
Where #### is the model of the QLogic HBA.
Run the ls command to see all of the adapters in the directory.
The output appears similar to:
1 2 HbaApiNode
Run the command:
head -2 #
Where # is the HBA number.
You see output similar to:
QLogic PCI to Fibre Channel Host Adapter for QLA2340 :
Firmware version: 3.03.19, Driver version 7.07.04.2vmw
To determine the firmware for a QLogic iSCSI hardware initiator on an ESXi/ESX host:
Go to /proc/scsi/qla####.
Where #### is the model of the QLogic HBA.
Run the ls command to see all of the adapters in the directory.
You see output similar to:
1 2 HbaApiNode
Run the command:
head -4 #
Where # is the HBA number.
You see output similar to:
QLogic iSCSI Adapter for ISP 4022:
Driver version 3.24
Code starts at address = 0x82a314
Firmware version 2.00.00.45
(Emulex)
To determine the firmware for an Emulex HBA on an ESXi/ESX 5.1 host:
Go to /proc/scsi/lpfc.
Note: lpfc may have the model number appended. For example, /proc/scsi/lpfc820.
Run the ls command to see all of the adapters in the directory.
You see output similar to:
1 2
Run the command:
head -5 #
where # is the HBA number.
You see output similar to:
Emulex LightPulse FC SCSI 7.3.2_vmw2
Emulex LP10000DC 2Gb 2-port PCI-X Fibre Channel Adapter on PCI bus 42 device 08 irq 42
SerialNum: BG51909398
Firmware Version: 1.91A5 (T2D1.91A5)
To determine the firmware for an Emulex HBA on an ESXi/ESX 5.5 host:
In ESXi 5.5, you do not see native drivers in the /proc nodes. To view native driver details, run the command:
/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a
To Get Hardware Details
# esxcfg-info | less -I
Identify the SCSI shared storage devices with the following command:
For ESX/ESXi 4.x, ESXi 5.x and 6.0, run the command:
# esxcfg-scsidevs -l | egrep -i 'display name|vendor'
Run this command to find additional peripherals and devices:
# lspci -vvv
Installation of VIB
#esxcli software vib install -d /vmfs/volumes/datastore_name/driver_file_name.zip
Removal of VIB
#esxcli software vib remove -n <vib_name> -f
Where <vib_name> is the VIB name shown by esxcli software vib list; -f forces removal.
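For example, to remove the HP AMS VIB referenced in the "Find AMS Version" section below (assuming the VIB is named hp-ams on your host; confirm the exact name with esxcli software vib list first):
#esxcli software vib remove -n hp-ams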
ESX Monitoring Steps
Configure SNMP Communities
esxcli system snmp set --communities public
Configure the SNMP Agent to Send SNMP v1 or v2c Traps
If the SNMP agent is not enabled, enable it by typing
esxcli system snmp set --enable true
Configure the trap target by typing
esxcli system snmp set --targets target.example.com@162/public
Send a test trap to verify that the agent is configured correctly by typing
esxcli system snmp test
The agent sends a warmStart trap to the configured target
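Putting the steps together, a minimal sketch of the whole sequence (assuming the community public and the example target above; adjust both for your environment):
esxcli system snmp set --enable true
esxcli system snmp set --communities public
esxcli system snmp set --targets target.example.com@162/public
esxcli system snmp test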
Creating ESX Logs from Command Line
vm-support
Creating /var/tmp/esx-(Hostname).tgz
cp /var/tmp/esx-Z2T3GBGLPLM26-2014-12-17–11.24.tgz /vmfs/volumes/glplx94_vmdata_iso_01/ESX_Logs
Rename the CTK.VMDK
Go to the datastore
Go to the virtual machine's folder
Rename the file: mv xyz-ctk.vmdk xyz-ctk_old.vmdk
Then power on the machine
Install VMware tools without Reboot
Pass these options to the VMware Tools installer (for example, in the Advanced Options field of an automatic Tools upgrade):
/s /v/qn ADDLOCAL=ALL REBOOT=ReallySuppress
To Read File in ESX
vi <filename>
- To search: press Esc, then type /<search keyword> and press Enter
- Use n to jump to the next match
- To exit without saving, press Esc, then type :q!
cat <filename> | grep -i <keyword>
cat <filename> | grep -e <keyword> -e <keyword>
less <filename>
Shift + G (to go to the end)
To Read the Last 100 Lines of a File
tail -n 100 <filename> (add -f to follow new output)
To get VM Snapshot Details
get-vm | get-snapshot | format-list vm,name,SizeMB,Created,IsCurrent | out-file c:\a.txt
To Get Array Details from ESXi 5.1
esxcli hpssacli cmd -q "controller all show status"
To Get VM Created Date
Get-VIEvent -maxsamples 10000 -Start (Get-Date).AddDays(-60) | where {$_.GetType().Name -eq "VmCreatedEvent" -or $_.GetType().Name -eq "VmBeingClonedEvent" -or $_.GetType().Name -eq "VmBeingDeployedEvent"} | Sort CreatedTime -Descending | Select CreatedTime, UserName, FullFormattedMessage | Format-Table -AutoSize
Find AMS Version
#esxcli software vib list | grep ams
Configure SATP Claimrule for Changing Path Selection Policy according to Storage Vendor
1. First, find out which path selection policy a specific LUN is currently using.
#esxcli storage nmp device list -d naa.60060e80132892005020289200001001
Look in the result for Storage Array Type and Path Selection Policy.
2. Next, find out which storage array vendor and model this LUN comes from, because we need this info to create a new SATP claim rule.
#esxcli storage core device list -d naa.60060e80132892005020289200001001
Look in the result for Vendor Name and Model Name.
3. Now check the current SATP rules configured on the ESXi host.
#esxcli storage nmp satp rule list
RESULT:
Name                 Vendor   Model   Rule Group  Claim Options                        Default PSP
VMW_SATP_DEFAULT_AA  HITACHI          system      inq_data[128]={0x44 0x46 0x30 0x30}  VMW_PSP_RR
VMW_SATP_DEFAULT_AA  HITACHI          system
The first line tells ESXi that if it finds storage from vendor HITACHI with the specific claim option "inq_data[128]={0x44 0x46 0x30 0x30}" (an inquiry-data match; the bytes decode to ASCII "DF00"), the VMW_PSP_RR policy should be used.
The second line says to apply the system default PSP associated with VMW_SATP_DEFAULT_AA to all other HITACHI arrays.
4. Let's check what default PSP is configured for VMW_SATP_DEFAULT_AA.
#esxcli storage nmp satp list
RESULT:
VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays
So the result above states that the default SATP rule applies the PSP VMW_PSP_FIXED.
5. Now we tell ESXi to use VMW_SATP_DEFAULT_AA with a PSP of VMW_PSP_RR when the Vendor and Model match our specification:
#esxcli storage nmp satp rule add -V HITACHI -M "OPEN-V" -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA
#esxcli storage core claimrule load
6. To check how this worked out, check the satp rule list again:
#esxcli storage nmp satp rule list
Name                 Vendor   Model   Rule Group  Claim Options                        Default PSP
VMW_SATP_DEFAULT_AA  HITACHI          system      inq_data[128]={0x44 0x46 0x30 0x30}  VMW_PSP_RR
VMW_SATP_DEFAULT_AA  HITACHI  OPEN-V  user                                             VMW_PSP_RR
VMW_SATP_DEFAULT_AA  HITACHI          system
7. Wait about 5 minutes for the claim rule to be picked up automatically, or reboot the host to apply it manually (an alternative unclaim/reclaim sketch follows step 8).
8. To check if this changed the way the policy was applied to the LUNs, run the command below.
#esxcli storage nmp device list -d naa.60060e80132892005020289200001001
Look in the result for the changes we wanted:
Storage Array Type: VMW_SATP_DEFAULT_AA
Path Selection Policy: VMW_PSP_RR
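A minimal sketch of the unclaim/reclaim alternative mentioned in step 7 (an assumption-level example: the device must not be in active use, and the behaviour should be verified on your build before relying on it):
#esxcli storage core claiming unclaim -t device -d naa.60060e80132892005020289200001001
#esxcli storage core claiming reclaim -d naa.60060e80132892005020289200001001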