
Saturday, May 30, 2009

Extending a server 2003 VM system/boot disk

Whether using basic or dynamic disks, Windows Server 2003 doesn’t provide a method to extend the size of the system or boot disk. In VI3.5, I have used the following process to extend the system/boot disk of Windows Server 2003 virtual machines. Note that this applies to the system/boot disk only; data volumes can be grown and extended live.

This process requires a short outage: the VM must be shut down while the disk is extended, then rebooted once the OS first starts.

The process I used in VI 3.5 U4, with a 2003 SP2 virtual machine:

  1. Turn the VM off - VM1 in this example
  2. Use VC to increase the size of the boot/system disk of VM1 by xGB (6 in my test)
  3. Use VC to attach the disk inside another running VM that can be shut down, e.g. VM2
  4. On VM2, load diskmgmt.msc, rescan disks (or run diskpart rescan)
  5. On VM2, start diskpart:
    1. List vol (get a list of volumes)
    2. Select vol x (where x is the number of the newly added disk)
    3. Extend
  6. Shutdown VM2 and detach the disk (don’t delete it!)
  7. Start VM1
  8. On my test, after logon, a setupapi message appeared: ‘Windows has finished installing new devices. Do you want to restart your computer now?’
  9. Answered yes and the server restarted; diskmgmt.msc then showed the correctly sized disk.
  10. Ran chkdsk c: to verify there was no corruption
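The diskpart portion of the process (steps 4–5) can also be driven non-interactively with a script file. A minimal sketch, assuming the newly attached disk shows up as volume 1 on VM2 (always check the `list vol` output first, the volume number is an assumption here):

```
rem extend.txt - run with: diskpart /s extend.txt
rem Volume 1 is an assumption; confirm against 'list vol' before using select
rescan
list vol
select vol 1
extend
```

Running `diskpart /s extend.txt` performs the rescan and extend in one step, which is handy if you do this across several VMs.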

Step 8 produced a matching event (ID 271, source PlugPlayManager) and setupapi.log entry:

The Plug and Play operation cannot be completed because a device driver is
preventing the device from stopping. The name of the device driver is listed as
the vetoing service name below.
Vetoed device:
STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00
Vetoing device:
STORAGE\Volume\1&30a96598&0&Signature6AF3AEFAOffset7E00Length53FD06C00
Vetoing service name: FileSystem\Ntfs
Veto type 6: PNP_VetoDevice
When Windows attempts to install, upgrade, remove, or reconfigure a device,
it queries the driver responsible for that device to confirm that the operation
can be performed. If any of these drivers denies permission (query-removal
veto), then the computer must be restarted in order to complete the operation.
User Action
Restart your computer.


Setupapi.log driver logging, showing the first, vetoed installation attempt and then the volume shadow copy snapshot install:
[2009/05/06 08:45:46 404.3 Driver Install]
#-019 Searching for hardware ID(s): storage\volume
#-198 Command line processed: C:\WINDOWS\system32\services.exe
#I393 Modified INF cache "C:\WINDOWS\inf\INFCACHE.1".
#W383 "volume.PNF" migrate: PNF Language = 0409, Thread = 0c09.
#I022 Found "STORAGE\Volume" in C:\WINDOWS\inf\volume.inf; Device: "Generic volume"; Driver: "Generic volume"; Provider: "Microsoft"; Mfg: "Microsoft"; Section name: "volume_install".
#I023 Actual install section: [volume_install.NTx86]. Rank: 0x00000000. Driver date: 10/01/2002. Version: 5.2.3790.3959.
#-166 Device install function: DIF_SELECTBESTCOMPATDRV.
#I063 Selected driver installs from section [volume_install] in "c:\windows\inf\volume.inf".
#I320 Class GUID of device remains: {71A27CDD-812A-11D0-BEC7-08002BE2092F}.
#I060 Set selected driver.
#I058 Selected best compatible driver.
#-166 Device install function: DIF_INSTALLDEVICEFILES.
#I124 Doing copy-only install of "STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00".
#-166 Device install function: DIF_REGISTER_COINSTALLERS.
#I056 Coinstallers registered.
#-166 Device install function: DIF_INSTALLINTERFACES.
#-011 Installing section [volume_install.NTx86.Interfaces] from "c:\windows\inf\volume.inf".
#I054 Interfaces installed.
#-166 Device install function: DIF_INSTALLDEVICE.
#I123 Doing full install of "STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00".
#W100 Query-removal during install of "STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00" was vetoed by "STORAGE\Volume\1&30a96598&0&Signature6AF3AEFAOffset7E00Length53FD06C00" (veto type 6: PNP_VetoDevice).
#W104 Device "STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00" required reboot: Query remove failed (install) CfgMgr32 returned: 0x17: CR_REMOVE_VETOED.
#I121 Device install of "STORAGE\VOLUME\1&30A96598&0&SIGNATURE6AF3AEFAOFFSET7E00LENGTH53FD06C00" finished successfully.
[2009/05/06 08:55:18 400.3 Driver Install]
#-019 Searching for hardware ID(s): storage\volumesnapshot
#-198 Command line processed: C:\WINDOWS\system32\services.exe
#W383 "volsnap.PNF" migrate: PNF Language = 0409, Thread = 0c09.
#I022 Found "STORAGE\VolumeSnapshot" in C:\WINDOWS\inf\volsnap.inf; Device: "Generic volume shadow copy"; Driver: "Generic volume shadow copy"; Provider: "Microsoft"; Mfg: "Microsoft"; Section name: "volume_snapshot_install".
#I023 Actual install section: [volume_snapshot_install.NTx86]. Rank: 0x00000000. Driver date: 10/01/2002. Version: 5.2.3790.3959.
#-166 Device install function: DIF_SELECTBESTCOMPATDRV.
#I063 Selected driver installs from section [volume_snapshot_install] in "c:\windows\inf\volsnap.inf".
#I320 Class GUID of device remains: {533C5B84-EC70-11D2-9505-00C04F79DEAF}.
#I060 Set selected driver.
#I058 Selected best compatible driver.
#-166 Device install function: DIF_INSTALLDEVICEFILES.
#I124 Doing copy-only install of "STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT1".
#-166 Device install function: DIF_REGISTER_COINSTALLERS.
#I056 Coinstallers registered.
#-166 Device install function: DIF_INSTALLINTERFACES.
#-011 Installing section [volume_snapshot_install.NTx86.Interfaces] from "c:\windows\inf\volsnap.inf".
#I054 Interfaces installed.
#-166 Device install function: DIF_INSTALLDEVICE.
#I123 Doing full install of "STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT1".
#I121 Device install of "STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT1" finished successfully.

References

How to extend a data volume in Windows Server 2003, in Windows XP, in Windows 2000, and in Windows Server 2008
http://support.microsoft.com/kb/325590


Wayne's World of IT (WWoIT), Copyright 2009 Wayne Martin.



Saturday, May 16, 2009

Generating 100% CPU with calc or PowerShell

Occasionally when testing something I want to generate 100% CPU load on a Windows computer. There are several utilities out there to do this, but that implies you have the utility on hand and are comfortable running it on the server. A colleague of mine (thanks Mark S.) showed me this nifty trick of using calc.exe to generate 100% CPU. The best thing about this is that every standard Windows OS installation has calc.exe.

This post provides two methods of generating 100% CPU load: the original calc.exe method, and a simple one-line PowerShell command to do the same thing from the command line (locally, or on a remote server with PS v1 and psexec). Note that there may be a better method with PowerShell; I simply scripted the same operation calc was performing.

On multi-CPU/core computers this uses 100% of one CPU/core. To use more than that, calc or the PowerShell command can be run more than once, and Windows by default will run each new process on another, less-busy CPU/core.
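The run-it-once-per-core approach can be sketched outside PowerShell as well. The bounded Python version below is an illustration only (the post itself uses calc and PowerShell): it starts one busy factorial worker per CPU/core, mirroring what launching multiple calc or PowerShell instances achieves.

```python
import multiprocessing
import os

def burn(n):
    """Busy-loop factorial - the same work calc and the PowerShell one-liner do."""
    result = 1
    for number in range(1, n + 1):
        result *= number
    return result

if __name__ == "__main__":
    workers = os.cpu_count() or 1
    # One process per CPU/core; the OS schedules each onto a less-busy core.
    with multiprocessing.Pool(workers) as pool:
        # A bounded n so the sketch terminates; raise it to hold 100% CPU longer.
        pool.map(burn, [20_000] * workers)
```

Unlike the unbounded PowerShell loop, this version finishes on its own, which is convenient for a timed load test.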

One practical application of this is to load-test a VMware VI3 cluster, generating 100% CPU on one or more VMs to see how ESX and DRS/VC handles the load. I have also used this in the past when testing multi-threaded applications and processor affinity to see how Windows allocates a processor.

calc.exe

Use calc to calculate the factorial of a number - the product of all integers from 1 up to and including the number specified, e.g. 5! = 1x2x3x4x5 = 120

  1. Run calc.exe and switch to scientific mode
  2. Type a large number (eg. 12345678901234567890), press the 'n!' button.
  3. Calc will warn that this will take a very long time and ask you to confirm
  4. 100% CPU utilisation will now occur (essentially forever)
PowerShell
 
Using the largest positive int32 integer, calculate the factorial to generate 100% CPU utilisation:
$result = 1; foreach ($number in 1..2147483647) {$result = $result * $number};

Depending on how fast the CPU is, this could eventually finish, so here is a loop that runs the command above 2 billion times:
foreach ($loopnumber in 1..2147483647) {$result=1;foreach ($number in 1..2147483647) {$result = $result * $number};$result}

If you want to see how long the command takes to run:
Measure-Command {$result = 1; foreach ($number in 1..2147483647) {$result = $result * $number}}

Using the command line then provides the ability to run the command remotely. To use psexec to remotely execute the PowerShell v1 factorial and generate 100% CPU:
psexec \\%computername% /s cmd /c "echo. | powershell $result = 1; foreach ($number in 1..2147483647) {$result = $result * $number}"




Monday, May 4, 2009

Converting VHD to VMDK SCSI for ESX

I had problems converting a 2003 server VHD to a vmdk that I could import into a VM running on ESX. I used WinImage to convert the VHD to VMDK, but it seems WinImage creates the VMDK as an IDE device, which is unsupported by ESX. I'm sure there are better ways to do this, such as satisfying whatever the pre-requisites are for getting VMware Converter to automatically inject the drivers, but it was interesting doing it manually.

Below is information on the process that I thought would have worked automatically, followed by the manual steps I took to make it work.

To convert the vmdk, the following process was first tried to get VMware converter to convert the IDE disk to something ESX would recognise:

  1. Use WinImage to create the vmdk from the vhd
  2. Use a VMware Workstation VMX, including the disk as an IDE device (a modified vmx is fine; you don’t actually need VMware Workstation)
  3. Use VMware Converter to import the workstation VMX into VC

I thought this would have been enough, but the VMware Converter process failed at 95% saying that it couldn’t find symmpi.sys. Symmpi.sys is the LSI Logic SCSI driver for the virtual SCSI adapter. I’m guessing that running VMware converter should automatically inject the drivers into the vmdk but couldn’t in this scenario (maybe because my local PC didn’t have the driver, or maybe because the driver cache cab files containing symmpi.sys weren’t on the target vmdk).

Powering on the machine resulted in a stop 0x7b inaccessible disk error. To manually fix the problem, I then:

  1. Added the disk to another VM. When starting the VM it warned that the new disk was created for LSI not buslogic. I said yes to convert to buslogic (which this VM was using as opposed to LSI).

The drive was then accessible through the VM, and I added the drivers (file and registry):

  1. Copied the driver file to the drivers directory: copy "\\%workingVM%\c$\WINDOWS\system32\drivers\symmpi.sys" "%mountedDrive%:\WINDOWS\system32\drivers"
  2. Copied the driver cache files to the machine from a working 2003: copy "\\%workingVM%\c$\WINDOWS\Driver Cache\i386\*.*" "%mountedDrive%:\WINDOWS\Driver Cache\i386"
  3. Exported the HKLM\SYSTEM\CurrentControlSet\Services\symmpi and HKLM\SYSTEM\CurrentControlSet\Control\CriticalDeviceDatabase\pci#ven_1000&dev_0030 registry entries from a working server as REGEDIT4 files (not Unicode).
  4. Loaded the %mountedDrive%:\windows\system32\config\system registry hive on the disk to HKLM\VM: reg load HKLM\VM %mountedDrive%:\windows\system32\config\system
  5. Modified the reg files to match the path the hive was loaded to (eg HKLM\VM\controlset001 instead of HKLM\system\currentcontrolset). The modified reg files are included below.
  6. Imported the registry files which modified the loaded system hive on the disk
  7. Unloaded the hive, shut down the VM, detached the disk from the VM and reattached it to the server created during the VMware Converter process
  8. Turned on the server and was prompted to convert the disk type back to LSI (which I did).
  9. The server started normally

Note that the conversions between buslogic and LSI Logic in both directions were only required because the virtual machine that I mounted the disk on had a different adapter type.
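The registry side of the manual fix (steps 3–6) boils down to a few commands. A sketch follows; the drive letter E: and the .reg file names are assumptions for illustration, and the .reg files are the REGEDIT4 exports shown below with their paths rewritten to HKLM\VM:

```
rem E: is the drive letter of the mounted vmdk (an assumption for this sketch)
reg load HKLM\VM E:\windows\system32\config\system
reg import symmpi.reg
reg import criticaldevicedatabase.reg
reg unload HKLM\VM
```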
---



REGEDIT4
[HKEY_LOCAL_MACHINE\VM\ControlSet001\Services\symmpi]
"ErrorControl"=dword:00000001
"Group"="SCSI miniport"
"Start"=dword:00000000
"Type"=dword:00000001
"ImagePath"=hex(2):73,79,73,74,65,6d,33,32,5c,44,52,49,56,45,52,53,5c,73,79,6d,\
6d,70,69,2e,73,79,73,00
"Tag"=dword:00000021

[HKEY_LOCAL_MACHINE\VM\ControlSet001\Services\symmpi\Parameters]
"BusType"=dword:00000001

[HKEY_LOCAL_MACHINE\VM\ControlSet001\Services\symmpi\Parameters\PnpInterface]
"5"=dword:00000001

[HKEY_LOCAL_MACHINE\VM\ControlSet001\Services\symmpi\Enum]
"0"="PCI\\VEN_1000&DEV_0030&SUBSYS_00000000&REV_01\\3&61aaa01&0&80"
"Count"=dword:00000001
"NextInstance"=dword:00000001



REGEDIT4
[HKEY_LOCAL_MACHINE\VM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1000&dev_0030]
"Service"="symmpi"
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"

References:
Injecting SCSI controller device drivers into Windows
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005208

Troubleshooting a virtual machine that fails to boot with STOP 0x7B error
http://kb.vmware.com/selfservice/viewContent.do?externalId=1006295&sliceId=1




Saturday, May 2, 2009

VMware VI3 iSCSI with multiple non-HBA NICs

While researching iSCSI on VI3, I came across some interesting information when using the ESX iSCSI software initiator that would be applicable to many installations, highlighting a potential bottleneck.

The short version is that if you’re using the iSCSI software initiator connecting to a single iSCSI target, multiple uplinks in an ESX network team for the VMKernel iSCSI port would not be used for load balancing.

This can be easily proven by connecting to the service console and running esxtop (pressing 'n' for the network view) to see the traffic for individual network adapters. Assuming your storage is in use, one or more physical uplinks for the vSwitch handling iSCSI should be showing traffic. You can also use resxtop through the RCLI on ESXi.

Why this happens

My understanding is that current ESX software initiated iSCSI connections have a 1:1 relationship between NIC and iSCSI targets. An iSCSI target in this sense is a connection to the IP-based SAN storage, not LUN targets. This limitation applies when the SAN presents a single IP address for connectivity.

VI3 software-initiated iSCSI doesn’t support multipathing, which within ESX leaves only load balancing across the physical uplinks in a team. Unfortunately, that leaves load balancing up to the vSwitch load-balancing policy options. I don’t believe any of the three choices fit most scenarios when connectivity to the iSCSI target is through a single MAC/IP:

  • Route based on the originating virtual switch port ID — there is only one VMKernel iSCSI port
  • Route based on source MAC hash — there is only one source MAC
  • Route based on IP hash — there is only one layer 3 source-destination IP pair (VMKernel -> iSCSI virtual address), and I don’t think this is a generally recommended load-balancing approach anyway
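To see why the IP-hash policy in particular cannot help here, consider a toy hash function. The sketch below uses a simplified XOR-and-modulo hash (an illustrative assumption, not the exact ESX algorithm) to show that a single VMKernel-to-target IP pair always maps to the same uplink:

```python
def uplink_for(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Simplified IP-hash: XOR the last octets, modulo the uplink count.
    An illustrative stand-in for the vSwitch policy, not the exact algorithm."""
    src = int(src_ip.split(".")[-1])
    dst = int(dst_ip.split(".")[-1])
    return (src ^ dst) % n_uplinks

# One VMKernel port talking to one iSCSI target IP: every frame hashes
# to the same uplink, so the other NICs in the team sit idle.
vmk, target = "10.0.0.10", "10.0.0.100"
print(uplink_for(vmk, target, 2))          # 0 - always the same uplink
print(uplink_for("10.0.0.11", target, 2))  # 1 - a second source IP can land elsewhere
```

With only one source-destination pair the hash input never changes, so no policy based on port ID, MAC, or IP pair can spread that traffic across the team.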

Link aggregation

The VI3 SAN Deploy guide does state that one connection is established to each target. This seems to indicate one connection per LUN target, but the paragraph starts with software iSCSI and switches halfway through to discussing iSCSI HBAs.

I’m still unsure whether software iSCSI establishes multiple TCP sessions, one per target (I don’t believe this is the case). The blog referenced below also talks about 802.3ad link aggregation, and states that the ESX 3.x software initiator does not support multiple TCP sessions.

However, if multiple TCP sessions were being established for the iSCSI software initiator to a single target IP address, this opens the possibility of link aggregation at the physical switch. When using 802.3ad LACP in this IP-IP scenario, the switches would have to distribute connections based on the hash of TCP source/destination ports, rather than just IP/MAC.

The following excerpt is from the SAN deploy guide:

Software iSCSI initiators establish only one connection to each target.

Therefore, storage systems with a single target that contains multiple LUNs have all LUN traffic routed through that one connection. In a system that has two targets, with one LUN each, two connections are established between the ESX host and the two available volumes. For example, when aggregating storage traffic from multiple connections on an ESX host equipped with multiple iSCSI HBAs, traffic for one target can be set to a specific HBA, while traffic for another target uses a different HBA. For more information, see the “Multipathing” section of the iSCSI SAN Configuration Guide. Currently, VMware ESX provides active/passive multipath capability. NIC teaming paths do not appear as multiple paths to storage in ESX host configuration displays, however. NIC teaming is handled entirely by the network layer and must be configured and monitored separately from ESX SCSI storage multipath configuration.



VI4/vSphere

Excerpts from the following blog indicate that vSphere changes software iSCSI to support multiple iSCSI sessions, allowing multipathing or link aggregation, which would allow separate iSCSI TCP sessions to be spread across more than one NIC (depending on how many iSCSI sessions are established).

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html

The current experience discussed above (all traffic across one NIC per ESX host):

VMware can’t be accused of being unclear about this. Directly in the iSCSI SAN Configuration Guide: ESX Server‐based iSCSI initiators establish only one connection to each target. This means storage systems with a single target containing multiple LUNs have all LUN traffic on that one connection, but in general, in my experience, this is relatively unknown.

This usually means that customers find that for a single iSCSI target (and however many LUNs that may be behind that target – 1 or more), they can’t drive more than 120-160MBps. This shouldn’t make anyone conclude that iSCSI is not a good choice or that 160MBps is a show-stopper. For perspective I was with a VERY big customer recently (more than 4000 VMs on Thursday and Friday two weeks ago) and their comment was that for their case (admittedly light I/O use from each VM) this was working well. Requirements differ for every customer.


The changes in vSphere:

Now, this behavior will be changing in the next major VMware release. Among other improvements, the iSCSI initiator will be able to use multiple iSCSI sessions (hence multiple TCP connections). Looking at our diagram, this corresponds with “multiple purple pipes”for a single target. It won’t support MC/S or “multiple orange pipes per each purple pipe” – but in general this is not a big deal (large scale use of MC/S has shown a marginal higher efficiency than MPIO at very high end 10GbE configurations) .

Multiple iSCSI sessions will mean multiple “on-ramps” for MPIO (and multiple “conversations” for Link Aggregation). The next version also brings core multipathing improvements in the vStorage initiative (improving all block storage): NMP round robin, ALUA support, and EMC PowerPath for VMware which integrates into the MPIO framework and further improves multipathing. In the spirit of this post, EMC is working to make PowerPath for VMware as heterogeneous as we can.

Together – multiple iSCSI sessions per iSCSI target and improved multipathing means aggregate throughput for a single iSCSI target above that 160MBps mark in the next VMware release, as people are playing with now. Obviously we’ll do a follow up post.






About Me

I’ve worked in IT for over 13 years, and I know just about enough to realise that I don’t know very much.