
Hello all,

This blog has been moved to www.erkanaksoy.net

Hope to meet you there.

Hi all,

Here on the Performance Team we constantly deal with issues caused by incorrect performance tuning of various servers. These issues generally manifest as system or process slowness, or as memory or CPU bottlenecks. I have decided to publish a short series on basic guidelines you can use when provisioning a new server or tuning an old one. First, we should address hardware scaling.

This blog discusses running a Windows Server Failover Cluster (WSFC) in a Virtual Machine (VM) on top of a VMware host.  Running a cluster in a virtualized environment is commonly referred to as “Guest Clustering”.  Guest Clustering enables health monitoring of applications running within a VM, as well as application mobility that allows applications to fail over from one VM to another (for example, to allow patching the guest operating system).  It is supported by Microsoft to run Failover Clustering in a virtualized environment; however, the support policy varies for different guest OS versions.

 

Windows NT Server 4.0 / Windows 2000 Server

It is not supported by Microsoft to run a Guest Cluster with the Microsoft Cluster Service (MSCS) on Windows NT Server 4.0 or Windows 2000 Server in any virtualized environment.

 

Windows Server 2003

For a cluster solution to be supported by Microsoft it must be a tested solution which has been qualified and verified to function properly with the Failover Clustering (or MSCS) feature.  The full Windows Server 2003 cluster support policy is documented here:  http://support.microsoft.com/kb/309395.

 

When a cluster solution has been qualified it will receive a ‘Designed for Microsoft® Windows® Server 2003’ logo and be listed in the Windows Server Catalog under “Cluster Solutions” at the following site: http://www.windowsservercatalog.com/.

 

Two separate VMware configurations have received a logo and are supported for Windows Server 2003 guest clustering, both using vSphere 4.0 with EMC storage.  One configuration uses EMC V-Max storage and the other EMC CLARiiON CX4 storage.  Details are listed here:

· http://www.windowsservercatalog.com/item.aspx?idItem=3fe95a9f-0fb0-f22f-3a41-71c3c7e7c359&bCatID=1291

· http://www.windowsservercatalog.com/item.aspx?idItem=91a18b7b-777a-fdaf-69ca-c3a081085d49&bCatID=1291

These are the only two supported Windows Server 2003 guest clustering configurations.  The Windows Server 2003 cluster logo program stopped accepting new submissions as of 12/31/09, so no additional configurations will be added in the future.

 

Windows Server 2008 & Windows Server 2008 R2

The Microsoft support policy for Failover Clustering changed radically with Windows Server 2008 to become much more flexible.  For a solution to be supported by Microsoft, all individual components must have a Windows Server logo and the solution must pass the cluster “Validate a Configuration…” tests.  It is supported by Microsoft to run Windows Server 2008 and Windows Server 2008 R2 as a guest cluster.  The full support policy is documented here: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx

 

In particular see the “Virtualized servers” section here: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx#BKMK_validation_scenarios 

 

VMware Considerations

VMware has a Knowledge Base article titled “Microsoft Cluster Service (MSCS) support on ESX” which outlines additional support considerations: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004617. 

 

It is also recommended to review the VMware support policies, which include additional considerations.

Some points of consideration:

· Windows Server 2008 guest clustering requires vSphere 4.0 or higher

· Windows Server 2008 R2 guest clustering requires vSphere 4.0 Update 1 or higher

· Guest Clustering with VMware HA requires vSphere 4.1

· It is not supported to deploy guest clustering with iSCSI, FCoE, or NFS disks

· It is not supported to deploy guest clustering in conjunction with VMware Fault Tolerance

· It is not supported to vMotion a VM that is part of a guest cluster

Please review the “vSphere MSCS Setup Limitations” section in the documentation linked in the VMware KB above for VMware’s complete and authoritative list of configuration restrictions.

 

Guest Clustering Support Matrix Summary

 

 

Guest OS                | ESX 3.5 or earlier | vSphere 4.0                            | vSphere 4.1
Windows NT Server 4.0   | No                 | No                                     | No
Windows 2000 Server     | No                 | No                                     | No
Windows Server 2003     | No                 | Yes (limited hardware configurations)  | No
Windows Server 2008     | No                 | Yes (restricted configurations)        | Yes (restricted configurations)
Windows Server 2008 R2  | No                 | Yes (restricted configurations)        | Yes (restricted configurations)

 

 

Thanks!
Elden Christensen
Senior Program Manager Lead
Clustering & High-Availability
Microsoft

Original Article is here

In previous Pushing the Limits posts, I described the two most basic system resources, physical memory and virtual memory. This time I’m going to describe two fundamental kernel resources, paged pool and nonpaged pool, that are based on those, and that are directly responsible for many other system resource limits including the maximum number of processes, synchronization objects, and handles.

Here’s the index of the entire Pushing the Limits series. While they can stand on their own, they assume that you read them in order.

Pushing the Limits of Windows: Physical Memory

Pushing the Limits of Windows: Virtual Memory

Pushing the Limits of Windows: Paged and Nonpaged Pool

Pushing the Limits of Windows: Processes and Threads

Pushing the Limits of Windows: Handles

Pushing the Limits of Windows: USER and GDI Objects – Part 1

Pushing the Limits of Windows: USER and GDI Objects – Part 2

Paged and nonpaged pools serve as the memory resources that the operating system and device drivers use to store their data structures. The pool manager operates in kernel mode, using regions of the system’s virtual address space (described in the Pushing the Limits post on virtual memory) for the memory it sub-allocates. The kernel’s pool manager operates similarly to the C-runtime and Windows heap managers that execute within user-mode processes.  Because the minimum virtual memory allocation size is a multiple of the system page size (4KB on x86 and x64), these subsidiary memory managers carve up larger allocations into smaller ones so that memory isn’t wasted.

For example, if an application wants a 512-byte buffer to store some data, a heap manager takes one of the regions it has allocated and notes that the first 512-bytes are in use, returning a pointer to that memory and putting the remaining memory on a list it uses to track free heap regions. The heap manager satisfies subsequent allocations using memory from the free region, which begins just past the 512-byte region that is allocated.
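As a concrete illustration of that flow, here is a minimal user-mode sketch (not tied to any real application) that asks the process heap for a 512-byte buffer, which the heap manager carves out of a larger region it has already obtained from the memory manager:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Ask the default process heap for 512 bytes; the heap manager
        // sub-allocates this from a larger region it reserved earlier with
        // VirtualAlloc and tracks the remainder on its free list.
        void *buffer = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, 512);
        if (buffer == NULL) {
            printf("heap allocation failed\n");
            return 1;
        }

        // ... use the 512-byte buffer ...

        HeapFree(GetProcessHeap(), 0, buffer);
        return 0;
    }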

Nonpaged Pool

The kernel and device drivers use nonpaged pool to store data that might be accessed when the system can’t handle page faults. The kernel enters such a state when it executes interrupt service routines (ISRs) and deferred procedure calls (DPCs), which are functions related to hardware interrupts. Page faults are also illegal while the kernel or a device driver holds a spin lock. Because spin locks are the only type of lock that can be used within ISRs and DPCs, they must be used to protect any data structure that is accessed both from within an ISR or DPC and from other ISRs, DPCs, or code executing on kernel threads. Failure by a driver to honor these rules results in the most common crash code, IRQL_NOT_LESS_OR_EQUAL.
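To make the rule concrete, here is a rough driver-side fragment (illustrative only, not production code; the structure and names are invented) that protects data shared with a DPC. Everything touched while the lock is held must live in nonpaged memory:

    #include <ntddk.h>

    typedef struct _DEVICE_STATS {
        ULONG InterruptCount;          // updated from the DPC
    } DEVICE_STATS;

    KSPIN_LOCK g_StatsLock;            // initialized elsewhere with KeInitializeSpinLock
    DEVICE_STATS *g_Stats;             // must be allocated from nonpaged pool

    VOID UpdateStats(VOID)
    {
        KIRQL oldIrql;

        // Acquiring the spin lock raises IRQL to DISPATCH_LEVEL; until the lock
        // is released a page fault would crash the system with
        // IRQL_NOT_LESS_OR_EQUAL, so g_Stats must not live in paged pool.
        KeAcquireSpinLock(&g_StatsLock, &oldIrql);
        g_Stats->InterruptCount++;
        KeReleaseSpinLock(&g_StatsLock, oldIrql);
    }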

Nonpaged pool is therefore always kept resident in physical memory: its virtual memory is always backed by physical pages. Common system data structures stored in nonpaged pool include the kernel and objects that represent processes and threads, synchronization objects like mutexes, semaphores and events, references to files (which are represented as file objects), and I/O request packets (IRPs), which represent I/O operations.

Paged Pool

Paged pool, on the other hand, gets its name from the fact that Windows can write the data it stores to the paging file, allowing the physical memory it occupies to be repurposed. Just as for user-mode virtual memory, when a driver or the system references paged pool memory that’s in the paging file, an operation called a page fault occurs, and the memory manager reads the data back into physical memory. The largest consumer of paged pool, at least on Windows Vista and later, is typically the Registry, since references to registry keys and other registry data structures are stored in paged pool. The data structures that represent memory mapped files, called sections internally, are also stored in paged pool.

Device drivers use the ExAllocatePoolWithTag API to allocate nonpaged and paged pool, specifying the type of pool desired as one of the parameters. Another parameter is a 4-byte Tag, which drivers are supposed to use to uniquely identify the memory they allocate, and that can be a useful key for tracking down drivers that leak pool, as I’ll show later.
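Here is a hedged sketch of what that looks like in a driver; the tag value and sizes are made up for illustration (tags are stored byte-reversed, so the constant 'golB' shows up in pool-tracking tools as "Blog"):

    #include <ntddk.h>

    #define MY_POOL_TAG 'golB'   // displayed as "Blog" by Poolmon and !poolused

    NTSTATUS AllocateBuffers(PVOID *Nonpaged, PVOID *Paged)
    {
        // Nonpaged pool: safe to access at DISPATCH_LEVEL (ISRs, DPCs, spin locks held).
        *Nonpaged = ExAllocatePoolWithTag(NonPagedPool, 4096, MY_POOL_TAG);

        // Paged pool: may be written to the paging file, so only touch it at
        // IRQL below DISPATCH_LEVEL.
        *Paged = ExAllocatePoolWithTag(PagedPool, 4096, MY_POOL_TAG);

        if (*Nonpaged == NULL || *Paged == NULL) {
            if (*Nonpaged) ExFreePoolWithTag(*Nonpaged, MY_POOL_TAG);
            if (*Paged)    ExFreePoolWithTag(*Paged, MY_POOL_TAG);
            *Nonpaged = NULL;
            *Paged = NULL;
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        // Every allocation must eventually be released with ExFreePoolWithTag,
        // or the tag will show a growing gap between allocations and frees.
        return STATUS_SUCCESS;
    }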

Viewing Paged and Nonpaged Pool Usage

There are three performance counters that indicate pool usage:

  • Pool nonpaged bytes
  • Pool paged bytes (virtual size of paged pool – some may be paged out)
  • Pool paged resident bytes (physical size of paged pool)

However, there are no performance counters for the maximum size of these pools. The maximums can be viewed with the kernel debugger !vm command, but on Windows Vista and later, using the kernel debugger in local kernel debugging mode requires booting the system in debugging mode, which disables MPEG2 playback.

So instead, use Process Explorer to view both the currently allocated pool sizes and the maximums. To see the maximums, you’ll need to configure Process Explorer to use symbol files for the operating system. First, install the latest Debugging Tools for Windows package. Then run Process Explorer, open the Symbol Configuration dialog from the Options menu, point it at the dbghelp.dll in the Debugging Tools for Windows installation directory, and set the symbol path to point at Microsoft’s symbol server:

image
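If you have not set a symbol path before, a typical value that caches symbols locally looks like srv*C:\Symbols*http://msdl.microsoft.com/download/symbols (the C:\Symbols cache directory is just an example; any local folder works).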

After you’ve configured symbols, open the System Information dialog (click System Information in the View menu or press Ctrl+I) to see the pool information in the Kernel Memory section. Here’s what that looks like on a 2GB Windows XP system:

image

    2GB 32-bit Windows XP

Nonpaged Pool Limits

As I mentioned in a previous post, on 32-bit Windows, the system address space is 2GB by default. That inherently caps the upper bound for nonpaged pool (or any type of system virtual memory) at 2GB, but it has to share that space with other types of resources such as the kernel itself, device drivers, system Page Table Entries (PTEs), and cached file views.

Prior to Vista, the memory manager on 32-bit Windows calculates how much address space to assign to each type at boot time. Its formulas take into account various factors, the main one being the amount of physical memory on the system. The amount it assigns to nonpaged pool starts at 128MB on a system with 512MB of RAM and goes up to 256MB for a system with a little over 1GB or more. On a system booted with the /3GB option, which expands the user-mode address space to 3GB at the expense of the kernel address space, the maximum nonpaged pool is 128MB. The Process Explorer screenshot shown earlier reports the 256MB maximum on a 2GB Windows XP system booted without the /3GB switch.

The memory manager in 32-bit Windows Vista and later, including Server 2008 and Windows 7 (there is no 32-bit version of Windows Server 2008 R2), doesn’t carve up the system address space statically; instead, it dynamically assigns ranges to different types of memory according to changing demands. However, it still sets a maximum for nonpaged pool that’s based on the amount of physical memory, either slightly more than 75% of physical memory or 2GB, whichever is smaller. Here’s the maximum on a 2GB Windows Server 2008 system:

image

    2GB 32-bit Windows Server 2008

64-bit Windows systems have a much larger address space, so the memory manager can carve it up statically without worrying that different types might not have enough space. 64-bit Windows XP and Windows Server 2003 set the maximum nonpaged pool to a little over 400K per MB of RAM or 128GB, whichever is smaller. Here’s a screenshot from a 2GB 64-bit Windows XP system:

image 

    2GB 64-bit Windows XP

64-bit Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2 memory managers match their 32-bit counterparts (where applicable – as mentioned earlier, there is no 32-bit version of Windows Server 2008 R2) by setting the maximum to approximately 75% of RAM, but they cap the maximum at 128GB instead of 2GB. Here’s the screenshot from a 2GB 64-bit Windows Vista system, which has a nonpaged pool limit similar to that of the 32-bit Windows Server 2008 system shown earlier.

image 

    2GB 64-bit Windows Vista

Finally, here’s the limit on an 8GB 64-bit Windows 7 system:

image 

    8GB 64-bit Windows 7

Here’s a table summarizing the nonpaged pool limits across different versions of Windows:

Windows version                               | 32-bit                                         | 64-bit
XP, Server 2003                               | up to 1.2GB RAM: 32-256MB; > 1.2GB RAM: 256MB  | min(~400K/MB of RAM, 128GB)
Vista, Server 2008, Windows 7, Server 2008 R2 | min(~75% of RAM, 2GB)                          | min(~75% of RAM, 128GB)

Paged Pool Limits

The kernel and device drivers use paged pool to store any data structures that won’t ever be accessed from inside a DPC or ISR or when a spinlock is held. That’s because the contents of paged pool can either be present in physical memory or, if the memory manager’s working set algorithms decide to repurpose the physical memory, be sent to the paging file and demand-faulted back into physical memory when referenced again. Paged pool limits are therefore primarily dictated by the amount of system address space the memory manager assigns to paged pool, as well as the system commit limit.

On 32-bit Windows XP, the limit is calculated based on how much address space is assigned to other resources, most notably system PTEs, with an upper limit of 491MB. The 2GB Windows XP system shown earlier has a limit of 360MB, for example:

image

   2GB 32-bit Windows XP

32-bit Windows Server 2003 reserves more space for paged pool, so its upper limit is 650MB.

Since 32-bit Windows Vista and later have dynamic kernel address space, they simply set the limit to 2GB. Paged pool will therefore run out either when the system address space is full or the system commit limit is reached.

64-bit Windows XP and Windows Server 2003 set their maximums to four times the nonpaged pool limit or 128GB, whichever is smaller. Here again is the screenshot from the 64-bit Windows XP system, which shows that the paged pool limit is exactly four times that of nonpaged pool:

image 

     2GB 64-bit Windows XP

Finally, 64-bit versions of Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2 simply set the maximum to 128GB, allowing paged pool’s limit to track the system commit limit. Here’s the screenshot of the 64-bit Windows 7 system again:

image 

    8GB 64-bit Windows 7

Here’s a summary of paged pool limits across operating systems:

Windows version                               | 32-bit                                      | 64-bit
XP, Server 2003                               | XP: up to 491MB; Server 2003: up to 650MB   | min(4 * nonpaged pool limit, 128GB)
Vista, Server 2008, Windows 7, Server 2008 R2 | min(system commit limit, 2GB)               | min(system commit limit, 128GB)

Testing Pool Limits

Because the kernel pools are used by almost every kernel operation, exhausting them can lead to unpredictable results. If you want to witness firsthand how a system behaves when pool runs low, use the Notmyfault tool. It has options that cause it to leak either nonpaged or paged pool in the increment that you specify. You can change the leak size while it’s leaking if you want to change the rate of the leak, and Notmyfault frees all the leaked memory when you exit it:

image

Don’t run this on a system unless you’re prepared for possible data loss, as applications and I/O operations will start failing when pool runs out. You might even get a blue screen if the driver doesn’t handle the out-of-memory condition correctly (which is considered a bug in the driver). The Windows Hardware Quality Laboratory (WHQL) stresses drivers using the Driver Verifier, a tool built into Windows, to make sure that they can tolerate out-of-pool conditions without crashing, but you might have third-party drivers that haven’t gone through such testing or that have bugs that weren’t caught during WHQL testing.

I ran Notmyfault on a variety of test systems in virtual machines to see how they behaved and didn’t encounter any system crashes, but did see erratic behavior. After nonpaged pool ran out on a 64-bit Windows XP system, for example, trying to launch a command prompt resulted in this dialog:

image

On a 32-bit Windows Server 2008 system where I already had a command prompt running, even simple operations like changing the current directory and directory listings started to fail after nonpaged pool was exhausted:

image

On one test system, I eventually saw this error message indicating that data had potentially been lost. I hope you never see this dialog on a real system!

image

Running out of paged pool causes similar errors. Here’s the result of trying to launch Notepad from a command prompt on a 32-bit Windows XP system after paged pool had run out. Note how Windows failed to redraw the window’s title bar and the different errors encountered for each attempt:

image

And here’s the start menu’s Accessories folder failing to populate on a 64-bit Windows Server 2008 system that’s out of paged pool:

image

Here you can see the system commit level, also displayed on Process Explorer’s System Information dialog, quickly rise as Notmyfault leaks large chunks of paged pool and hits the 2GB maximum on a 2GB 32-bit Windows Server 2008 system:

image

The reason that Windows doesn’t simply crash when pool is exhausted, even though the system is unusable, is that pool exhaustion can be a temporary condition caused by an extreme workload peak, after which pool is freed and the system returns to normal operation. When a driver (or the kernel) leaks pool, however, the condition is permanent and identifying the cause of the leak becomes important. That’s where the pool tags described at the beginning of the post come into play.

Tracking Pool Leaks

When you suspect a pool leak and the system is still able to launch additional applications, Poolmon, a tool in the Windows Driver Kit, shows you the number of allocations and outstanding bytes of allocation by type of pool and the tag passed into calls of ExAllocatePoolWithTag. Various hotkeys cause Poolmon to sort by different columns; to find the leaking allocation type, use either ‘b’ to sort by bytes or ‘d’ to sort by the difference between the number of allocations and frees. Here’s Poolmon running on a system where Notmyfault has leaked 14 allocations of about 100MB each:

image

After identifying the guilty tag in the left column, in this case ‘Leak’, the next step is finding the driver that’s using it. Since the tags are stored in the driver image, you can do that by scanning driver images for the tag in question. The Strings utility from Sysinternals dumps printable strings in the files you specify (that are by default a minimum of three characters in length), and since most device driver images are in the %Systemroot%\System32\Drivers directory, you can open a command prompt, change to that directory and execute “strings * | findstr <tag>”. After you’ve found a match, you can dump the driver’s version information with the Sysinternals Sigcheck utility. Here’s what that process looks like when looking for the driver using “Leak”:

image

If a system has crashed and you suspect that it’s due to pool exhaustion, load the crash dump file into the Windbg debugger, which is included in the Debugging Tools for Windows package, and use the !vm command to confirm it. Here’s the output of !vm on a system where Notmyfault has exhausted nonpaged pool:

image

Once you’ve confirmed a leak, use the !poolused command to get a view of pool usage by tag that’s similar to Poolmon’s. !poolused by default shows unsorted summary information, so specify 1 as the option to sort by paged pool usage and 2 to sort by nonpaged pool usage:

image 

Use Strings on the system where the dump came from to search for the driver using the tag that you find causing the problem.

So far in this blog series I’ve covered the most fundamental limits in Windows, including physical memory, virtual memory, paged and nonpaged pool. Next time I’ll talk about the limits for the number of processes and threads that Windows supports, which are limits that derive from these.

Original article is here

We’ve been getting some feedback that the current version of the Windows Server 2008 R2 SP1 download package was too large — 1.5GB for both the 32-bit and 64-bit versions in one package. So we’ve split them out into their individual packages for easier downloading. Head on over to the Windows Server SP1 Resource page to start your evaluation. If you’re still on the fence about evaluating…don’t be. SP1 is a big value-add for Windows Server users — and it’s free. Check out the in-depth scoop here, here and here.

And as always, give us your feedback. Happy testing.

Original article is here

In my first Pushing the Limits of Windows post, I discussed physical memory limits, including the limits imposed by licensing, implementation, and driver compatibility. Here’s the index of the entire Pushing the Limits series. While they can stand on their own, they assume that you read them in order.

Pushing the Limits of Windows: Physical Memory

Pushing the Limits of Windows: Virtual Memory

Pushing the Limits of Windows: Paged and Nonpaged Pool

Pushing the Limits of Windows: Processes and Threads

Pushing the Limits of Windows: Handles

Pushing the Limits of Windows: USER and GDI Objects – Part 1

Pushing the Limits of Windows: USER and GDI Objects – Part 2

This time I’m turning my attention to another fundamental resource, virtual memory. Virtual memory separates a program’s view of memory from the system’s physical memory, so an operating system decides when and if to store the program’s code and data in physical memory and when to store it in a file. The major advantage of virtual memory is that it allows more processes to execute concurrently than might otherwise fit in physical memory.

While virtual memory has limits that are related to physical memory limits, virtual memory has limits that derive from different sources and that are different depending on the consumer. For example, there are virtual memory limits that apply to individual processes that run applications, limits that apply to the operating system, and limits for the system as a whole. It’s important to remember as you read this that virtual memory, as the name implies, has no direct connection with physical memory. Windows assigning the file cache a certain amount of virtual memory does not dictate how much file data it actually caches in physical memory; it can be any amount from none to more than the amount that’s addressable via virtual memory.

Process Address Spaces

Each process has its own virtual memory, called an address space, into which it maps the code that it executes and the data that the code references and manipulates. A 32-bit process uses 32-bit virtual memory address pointers, which creates an absolute upper limit of 4GB (2^32) for the amount of virtual memory that a 32-bit process can address. However, so that the operating system can reference its own code and data and the code and data of the currently-executing process without changing address spaces, the operating system makes its virtual memory visible in the address space of every process. By default, 32-bit versions of Windows split the process address space evenly between the system and the active process, creating a limit of 2GB for each:

 image

Applications might use Heap APIs, the .NET garbage collector, or the C runtime malloc library to allocate virtual memory, but under the hood all of these rely on the VirtualAlloc API. When an application runs out of address space, VirtualAlloc, and therefore the memory managers layered on top of it, return errors (represented by a NULL address). When you specify the –r switch, the Testlimit utility, which I wrote for the 4th Edition of Windows Internals to demonstrate various Windows limits, calls VirtualAlloc repeatedly until it gets an error. Thus, when you run the 32-bit version of Testlimit on 32-bit Windows, it will consume the entire 2GB of its address space:

image

2010 MB isn’t quite 2GB, but Testlimit’s other code and data, including its executable and system DLLs, account for the difference. You can see the total amount of address space it’s consumed by looking at its Virtual Size in Process Explorer:

image
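Conceptually, what Testlimit does with the –r switch boils down to a loop like the following sketch (an approximation of the technique for illustration, not Testlimit’s actual source):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T total = 0;
        const SIZE_T chunk = 64 * 1024;   // reservations are granted on 64KB boundaries

        // Reserve (but do not commit) address space until VirtualAlloc fails,
        // which happens when the process address space is exhausted.
        while (VirtualAlloc(NULL, chunk, MEM_RESERVE, PAGE_NOACCESS) != NULL) {
            total += chunk;
        }

        printf("Reserved %Iu MB of address space\n", total / (1024 * 1024));
        return 0;
    }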

Some applications, like SQL Server and Active Directory, manage large data structures and perform better the more that they can load into their address space at the same time. Windows NT 4 SP3 therefore introduced a boot option, /3GB, that gives a process 3GB of its 4GB address space by reducing the size of the system address space to 1GB, and Windows XP and Windows Server 2003 introduced the /userva option that moves the split anywhere between 2GB and 3GB:

 image

To take advantage of the address space above the 2GB line, however, a process must have the ‘large address space aware’ flag set in its executable image. Access to the additional virtual memory is opt-in because some applications have assumed that they’d be given at most 2GB of the address space. Since the high bit of a pointer referencing an address below 2GB is always zero, they would use the high bit in their pointers as a flag for their own data, clearing it of course before referencing the data. If they ran with a 3GB address space they would inadvertently truncate pointers that have values greater than 2GB, causing program errors including possible data corruption.
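Here is a contrived sketch (no real product implied) of the kind of pointer packing that breaks once addresses above 2GB appear:

    #include <stdint.h>

    #define DIRTY_FLAG 0x80000000u   // the high bit is only "spare" for addresses below 2GB

    // Pack an application-defined flag into the pointer's high bit.
    uint32_t pack_pointer(void *p, int dirty)
    {
        return (uint32_t)(uintptr_t)p | (dirty ? DIRTY_FLAG : 0u);
    }

    // Unpack: masking off the high bit silently corrupts any pointer that
    // legitimately refers to an address at or above 0x80000000 (2GB).
    void *unpack_pointer(uint32_t packed)
    {
        return (void *)(uintptr_t)(packed & ~DIRTY_FLAG);
    }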

All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag, including Chkdsk.exe, Lsass.exe (which hosts Active Directory services on a domain controller), Smss.exe (the session manager), and Esentutl.exe (the Active Directory Jet database repair tool). You can see whether an image has the flag with the Dumpbin utility, which comes with Visual Studio:

image

Testlimit is also marked large-address aware, so if you run it with the –r switch when booted with 3GB of user address space, you’ll see something like this:

image

Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory. If you run Testlimit on 64-bit Windows, you’ll see it consume the entire 32-bit addressable address space:

image

64-bit processes use 64-bit pointers, so their theoretical maximum address space is 16 exabytes (2^64). However, Windows doesn’t divide the address space evenly between the active process and the system, but instead defines a region in the address space for the process and others for various system memory resources, like system page table entries (PTEs), the file cache, and paged and non-paged pools.

The size of the process address space is different on IA64 and x64 versions of Windows where the sizes were chosen by balancing what applications need against the memory costs of the overhead (page table pages and translation lookaside buffer – TLB – entries) needed to support the address space. On x64, that’s 8192GB (8TB) and on IA64 it’s 7168GB (7TB – the 1TB difference from x64 comes from the fact that the top level page directory on IA64 reserves slots for Wow64 mappings). On both IA64 and x64 versions of Windows, the size of the various resource address space regions is 128GB (e.g. non-paged pool is assigned 128GB of the address space), with the exception of the file cache, which is assigned 1TB. The address space of a 64-bit process therefore looks something like this:

image

The figure isn’t drawn to scale, because even 8TB, much less 128GB, would be a small sliver. Suffice it to say that like our universe, there’s a lot of emptiness in the address space of a 64-bit process.

When you run the 64-bit version of Testlimit (Testlimit64) on 64-bit Windows with the –r switch, you’ll see it consume 8TB, which is the size of the part of the address space it can manage:

image

image 

Committed Memory

Testlimit’s –r switch has it reserve virtual memory, but not actually commit it. Reserved virtual memory can’t actually store data or code, but applications sometimes use a reservation to create a large block of virtual memory and then commit it as needed to ensure that the committed memory is contiguous in the address space. When a process commits a region of virtual memory, the operating system guarantees that it can maintain all the data the process stores in the memory either in physical memory or on disk.  That means that a process can run up against another limit: the commit limit.

As you’d expect from the description of the commit guarantee, the commit limit is the sum of physical memory and the sizes of the paging files. In reality, not quite all of physical memory counts toward the commit limit since the operating system reserves part of physical memory for its own use. The amount of committed virtual memory for all the active processes, called the current commit charge, cannot exceed the system commit limit. When the commit limit is reached, virtual allocations that commit memory fail. That means that even a standard 32-bit process may get virtual memory allocation failures before it hits the 2GB address space limit.
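As a sketch of that difference (illustrative sizes, simplified error handling), the reservation below consumes only address space, while the commit of a subset of it is what gets charged against the commit limit and can fail when that limit is reached:

    #include <windows.h>

    int main(void)
    {
        const SIZE_T region = 64 * 1024 * 1024;   // 64MB of contiguous address space

        // Reserve only: uses address space, adds nothing to the commit charge.
        BYTE *base = (BYTE *)VirtualAlloc(NULL, region, MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL)
            return 1;   // address space exhausted

        // Commit the first 1MB; this is what counts against the commit limit
        // and can fail even though plenty of address space remains reserved.
        if (VirtualAlloc(base, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL) {
            VirtualFree(base, 0, MEM_RELEASE);
            return 2;   // commit limit reached
        }

        base[0] = 42;   // committed pages are now usable

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }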

The current commit charge and commit limit are tracked by Process Explorer in its System Information window, in the Commit Charge section and in the Commit History bar chart and graph:

image  image

Task Manager prior to Vista and Windows Server 2008 shows the current commit charge and limit similarly, but calls the current commit charge “PF Usage” in its graph:

image

On Vista and Server 2008, Task Manager doesn’t show the commit charge graph and labels the current commit charge and limit values with “Page File” (despite the fact that they will be non-zero values even if you have no paging file):

image

You can stress the commit limit by running Testlimit with the -m switch, which directs it to allocate committed memory. The 32-bit version of Testlimit may or may not hit its address space limit before hitting the commit limit, depending on the size of physical memory, the size of the paging files and the current commit charge when you run it. If you’re running 32-bit Windows and want to see how the system behaves when you hit the commit limit, simply run multiple instances of Testlimit until one hits the commit limit before exhausting its address space.

Note that, by default, the paging file is configured to grow, which means that the commit limit will grow when the commit charge nears it. And even when the paging file hits its maximum size, Windows is holding back some memory, and its internal tuning, as well as that of applications that cache data, might free up more. Testlimit anticipates this and when it reaches the commit limit, it sleeps for a few seconds and then tries to allocate more memory, repeating this indefinitely until you terminate it.

If you run the 64-bit version of Testlimit, it will almost certainly hit the commit limit before exhausting its address space, unless physical memory and the paging files sum to more than 8TB, which as described previously is the size of the 64-bit application-accessible address space. Here’s the partial output of the 64-bit Testlimit running on my 8GB system (I specified an allocation size of 100MB to make it leak more quickly):

 image

And here’s the commit history graph with steps when Testlimit paused to allow the paging file to grow:

image

When system virtual memory runs low, applications may fail and you might get strange error messages when attempting routine operations. In most cases, though, Windows will be able to present you the low-memory resolution dialog, like it did for me when I ran this test:

image

After you exit Testlimit, the commit limit will likely drop again when the memory manager truncates the tail of the paging file that it created to accommodate Testlimit’s extreme commit requests. Here, Process Explorer shows that the current limit is well below the peak that was achieved when Testlimit was running:

image

Process Committed Memory

Because the commit limit is a global resource whose consumption can lead to poor performance, application failures and even system failure, a natural question is ‘how much are processes contributing to the commit charge’? To answer that question accurately, you need to understand the different types of virtual memory that an application can allocate.

Not all the virtual memory that a process allocates counts toward the commit limit. As you’ve seen, reserved virtual memory doesn’t. Virtual memory that represents a file on disk, called a file mapping view, also doesn’t count toward the limit unless the application asks for copy-on-write semantics, because Windows can discard any data associated with the view from physical memory and then retrieve it from the file. The virtual memory in Testlimit’s address space where its executable and system DLL images are mapped therefore doesn’t count toward the commit limit. There are two types of process virtual memory that do count toward the commit limit: private and pagefile-backed.

Private virtual memory is the kind that underlies the garbage collector heap, native heap and language allocators. It’s called private because by definition it can’t be shared between processes. For that reason, it’s easy to attribute to a process, and Windows tracks its usage with the Private Bytes performance counter. Process Explorer displays a process’s private bytes usage in the Private Bytes column and in the Virtual Memory section of the Performance page of the process properties dialog, and shows it in graphical form on the Performance Graph page of the same dialog. Here’s what Testlimit64 looked like when it hit the commit limit:

image

image

Pagefile-backed virtual memory is harder to attribute, because it can be shared between processes. In fact, there’s no process-specific counter you can look at to see how much a process has allocated or is referencing. When you run Testlimit with the -s switch, it allocates pagefile-backed virtual memory until it hits the commit limit, but even after consuming over 29GB of commit, the virtual memory statistics for the process don’t provide any indication that it’s the one responsible:

image

For that reason, I added the -l switch to Handle a while ago. A process must open a pagefile-backed virtual memory object, called a section, for it to create a mapping of pagefile-backed virtual memory in its address space. While Windows preserves existing virtual memory even if an application closes the handle to the section that it was made from, most applications keep the handle open.  The -l switch prints the size of the allocation for pagefile-backed sections that processes have open. Here’s partial output for the handles open by Testlimit after it has run with the -s switch:

image

You can see that Testlimit is allocating pagefile-backed memory in 1MB blocks and if you summed the size of all the sections it had opened, you’d see that it was at least one of the processes contributing large amounts to the commit charge.
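For reference, here is a minimal sketch of how a process creates this kind of pagefile-backed virtual memory through the Windows API (the section name and 1MB size are illustrative, not anything Testlimit actually uses):

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        const DWORD size = 1024 * 1024;   // 1MB pagefile-backed section, charged to commit

        // Passing INVALID_HANDLE_VALUE instead of a file handle creates a
        // pagefile-backed section object rather than a mapping of a file on disk.
        HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                            PAGE_READWRITE, 0, size,
                                            L"Local\\ExampleSharedSection");
        if (section == NULL)
            return 1;

        // Map a view into this process; other processes can share the memory by
        // opening a section with the same name.
        void *view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, size);
        if (view == NULL) {
            CloseHandle(section);
            return 1;
        }

        memset(view, 0, size);            // touch the committed, shareable memory

        UnmapViewOfFile(view);
        CloseHandle(section);
        return 0;
    }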

How Big Should I Make the Paging File?

Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are.

Since the commit limit sets an upper bound on how much private and pagefile-backed virtual memory can be allocated concurrently by running processes, the only way to reasonably size the paging file is to know the maximum total commit charge for the programs you like to have running at the same time. If the commit limit is smaller than that number, your programs won’t be able to allocate the virtual memory they want and will fail to run properly.

So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
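To put illustrative numbers on that (a hypothetical workload, not a recommendation for any specific system): if the peak commit charge you observe is 12GB on a machine with 8GB of RAM, the minimum would be 12GB minus 8GB, or 4GB, and doubling that for headroom gives a maximum of 8GB.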

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).

Paging file configuration is in the System properties, which you can get to by typing “sysdm.cpl” into the Run dialog, clicking on the Advanced tab, clicking on the Performance Options button, clicking on the Advanced tab (this is really advanced), and then clicking on the Change button:

image

You’ll notice that the default configuration is for Windows to automatically manage the page file size. When that option is set on Windows XP and Server 2003, Windows creates a single paging file whose minimum size is 1.5 times RAM if RAM is less than 1GB, or equal to RAM if it’s greater than 1GB, and whose maximum size is three times RAM. On Windows Vista and Server 2008, the minimum is intended to be large enough to hold a kernel-memory crash dump and is RAM plus 300MB or 1GB, whichever is larger. The maximum is either three times the size of RAM or 4GB, whichever is larger. That explains why the peak commit on my 8GB 64-bit system that’s visible in one of the screenshots is 32GB. I guess whoever wrote that code got their guidance from one of those magazines I mentioned!

A couple of final limits related to virtual memory are the maximum size and number of paging files supported by Windows. 32-bit Windows has a maximum paging file size of 16TB (4GB if you for some reason run in non-PAE mode) and 64-bit Windows can have paging files that are up to 16TB in size on x64 and 32TB on IA64. For all versions, Windows supports up to 16 paging files, where each must be on a separate volume.

Original article is here

This is the first blog post in a series I’ll write over the coming months called Pushing the Limits of Windows that describes how Windows and applications use a particular resource, the licensing and implementation-derived limits of the resource, how to measure the resource’s usage, and how to diagnose leaks. To be able to manage your Windows systems effectively you need to understand how Windows manages physical resources, such as CPUs and memory, as well as logical resources, such as virtual memory, handles, and window manager objects. Knowing the limits of those resources and how to track their usage enables you to attribute resource usage to the applications that consume them, effectively size a system for a particular workload, and identify applications that leak resources.

Here’s the index of the entire Pushing the Limits series. While they can stand on their own, they assume that you read them in order.

Pushing the Limits of Windows: Physical Memory

Pushing the Limits of Windows: Virtual Memory

Pushing the Limits of Windows: Paged and Nonpaged Pool

Pushing the Limits of Windows: Processes and Threads

Pushing the Limits of Windows: Handles

Pushing the Limits of Windows: USER and GDI Objects – Part 1

Pushing the Limits of Windows: USER and GDI Objects – Part 2

Physical Memory

One of the most fundamental resources on a computer is physical memory. Windows’ memory manager is responsible for populating memory with the code and data of active processes, device drivers, and the operating system itself. Because most systems access more code and data than can fit in physical memory as they run, physical memory is in essence a window into the code and data used over time. The amount of memory can therefore affect performance, because when data or code a process or the operating system needs is not present, the memory manager must bring it in from disk.

Besides affecting performance, the amount of physical memory impacts other resource limits. For example, the amount of non-paged pool, operating system buffers backed by physical memory, is obviously constrained by physical memory. Physical memory also contributes to the system virtual memory limit, which is the sum of roughly the size of physical memory plus the maximum configured size of any paging files. Physical memory also can indirectly limit the maximum number of processes, which I’ll talk about in a future post on process and thread limits.

Windows Server Memory Limits

Windows support for physical memory is dictated by hardware limitations, licensing, operating system data structures, and driver compatibility. The Memory Limits for Windows Releases page in MSDN documents the limits for different Windows versions, and within a version, by SKU.

You can see the licensing differentiation in physical memory support across the server SKUs for all versions of Windows. For example, the 32-bit version of Windows Server 2008 Standard supports only 4GB, while the 32-bit Windows Server 2008 Datacenter supports 64GB. Likewise, the 64-bit Windows Server 2008 Standard supports 32GB and the 64-bit Windows Server 2008 Datacenter can handle a whopping 2TB. There aren’t many 2TB systems out there, but the Windows Server Performance Team knows of a couple, including one they had in their lab at one point. Here’s a screenshot of Task Manager running on that system:

image

The maximum 32-bit limit of 128GB, supported by Windows Server 2003 Datacenter Edition, comes from the fact that structures the Memory Manager uses to track physical memory would consume too much of the system’s virtual address space on larger systems. The Memory Manager keeps track of each page of memory in an array called the PFN database and, for performance, it maps the entire PFN database into virtual memory. Because it represents each page of memory with a 28-byte data structure, the PFN database on a 128GB system requires about 980MB. 32-bit Windows has a 4GB virtual address space defined by hardware that it splits by default between the currently executing user-mode process (e.g. Notepad) and the system. 980MB therefore consumes almost half the available 2GB of system virtual address space, leaving only 1GB for mapping the kernel, device drivers, system cache and other system data structures, making that a reasonable cut off:

image

That’s also why the memory limits table lists lower limits for the same SKUs when booted with 4GB tuning (called 4GT and enabled with the Boot.ini’s /3GB or /USERVA, and Bcdedit’s /Set IncreaseUserVa boot options), because 4GT moves the split to give 3GB to user mode and leave only 1GB for the system. For improved performance, Windows Server 2008 reserves more for system address space by lowering its maximum 32-bit physical memory support to 64GB.

The Memory Manager could accommodate more memory by mapping pieces of the PFN database into the system address as needed, but that would add complexity and possibly reduce performance with the added overhead of map and unmap operations. It’s only recently that systems have become large enough for that to be considered, but because the system address space is not a constraint for mapping the entire PFN database on 64-bit Windows, support for more memory is left to 64-bit Windows.

The maximum 2TB limit of 64-bit Windows Server 2008 Datacenter doesn’t come from any implementation or hardware limitation, but Microsoft will only support configurations they can test. As of the release of Windows Server 2008, the largest system available anywhere was 2TB and so Windows caps its use of physical memory there.

Windows Client Memory Limits

64-bit Windows client SKUs support different amounts of memory as a SKU-differentiating feature, ranging from 512MB for Windows XP Starter up to 128GB for Vista Ultimate and 192GB for Windows 7 Ultimate. All 32-bit Windows client SKUs, however, including Windows Vista, Windows XP and Windows 2000 Professional, support a maximum of 4GB of physical memory. 4GB is the highest physical address accessible with the standard x86 memory management mode. Originally, there was no need to even consider support for more than 4GB on clients because that amount of memory was rare, even on servers.

However, by the time Windows XP SP2 was under development, client systems with more than 4GB were foreseeable, so the Windows team started broadly testing Windows XP on systems with more than 4GB of memory. Windows XP SP2 also enabled Physical Address Extensions (PAE) support by default on hardware that implements no-execute memory because it’s required for Data Execution Prevention (DEP), but that also enables support for more than 4GB of memory.

What they found was that many of the systems would crash, hang, or become unbootable because some device drivers, commonly those for video and audio devices that are typically found on clients but not servers, were not programmed to expect physical addresses larger than 4GB. As a result, the drivers truncated such addresses, resulting in memory corruptions and corruption side effects. Server systems commonly have more generic devices with simpler and more stable drivers, and therefore hadn’t generally surfaced these problems. The problematic client driver ecosystem led to the decision for client SKUs to ignore physical memory that resides above 4GB, even though they can theoretically address it.

32-bit Client Effective Memory Limits 

While 4GB is the licensed limit for 32-bit client SKUs, the effective limit is actually lower and dependent on the system’s chipset and connected devices. The reason is that the physical address map includes not only RAM, but device memory as well, and x86 and x64 systems map all device memory below the 4GB address boundary to remain compatible with 32-bit operating systems that don’t know how to handle addresses larger than 4GB. If a system has 4GB RAM and devices, like video, audio and network adapters, that implement windows into their device memory that sum to 500MB, 500MB of the 4GB of RAM will reside above the 4GB address boundary, as seen below:

image

The result is that, if you have a system with 3GB or more of memory and you are running a 32-bit Windows client, you may not be getting the benefit of all of the RAM.  On Windows 2000, Windows XP and Windows Vista RTM, you can see how much RAM Windows has accessible to it in the System Properties dialog, Task Manager’s Performance page, and, on Windows XP and Windows Vista (including SP1), in the Msinfo32 and Winver utilities. On Windows Vista SP1, some of these locations changed to show installed RAM, rather than available RAM, as documented in this Knowledge Base article.

On my 4GB laptop, when booted with 32-bit Vista, the amount of physical memory available is 3.5GB, as seen in the Msinfo32 utility:

image

You can see physical memory layout with the Meminfo tool by Alex Ionescu (who’s contributing to the 5th Edition of the Windows Internals that I’m coauthoring with David Solomon). Here’s the output of Meminfo when I run it on that system with the -r switch to dump physical memory ranges:

image

Note the gap in the memory address range from page 9F0000 to page 100000, and another gap from DFE6D000 to FFFFFFFF (4GB). However, when I boot that system with 64-bit Vista, all 4GB show up as available and you can see how Windows uses the remaining 500MB of RAM that are above the 4GB boundary:

image 

What’s occupying the holes below 4GB? The Device Manager can answer that question. To check, launch “devmgmt.msc”, select Resources by Connection in the View Menu, and expand the Memory node. On my laptop, the primary consumer of mapped device memory is, unsurprisingly, the video card, which consumes 256MB in the range E0000000-EFFFFFFF:

image

Other miscellaneous devices account for most of the rest, and the PCI bus reserves additional ranges for devices as part of the conservative estimation the firmware uses during boot.

The consumption of memory addresses below 4GB can be drastic on high-end gaming systems with large video cards. For example, I purchased one from a boutique gaming rig company that came with 4GB of RAM and two 1GB video cards. I hadn’t specified the OS version and assumed that they’d put 64-bit Vista on it, but it came with the 32-bit version and as a result only 2.2GB of the memory was accessible by Windows. You can see a giant memory hole from 8FEF0000 to FFFFFFFF in this Meminfo output from the system after I installed 64-bit Windows:

image

Device Manager reveals that 512MB of the over 2GB hole is for the video cards (256MB each), and it looks like the firmware has reserved more for either dynamic mappings or because it was conservative in its estimate:

image

Even systems with as little as 2GB can be prevented from having all their memory usable under 32-bit Windows because of chipsets that aggressively reserve memory regions for devices. Our shared family computer, which we purchased only a few months ago from a major OEM, reports that only 1.97GB of the 2GB installed is available:

image

The physical address range from 7E700000 to FFFFFFFF is reserved by the PCI bus and devices, which leaves a theoretical maximum of 7E700000 bytes (1.976GB) of physical address space, but even some of that is reserved for device memory, which explains why Windows reports 1.97GB.

image

Because device vendors now have to submit both 32-bit and 64-bit drivers to Microsoft’s Windows Hardware Quality Laboratories (WHQL) to obtain a driver signing certificate, the majority of device drivers today can probably handle physical addresses above the 4GB line. However, 32-bit Windows will continue to ignore memory above it because there is still some difficult to measure risk, and OEMs are (or at least should be) moving to 64-bit Windows where it’s not an issue.

The bottom line is that you can fully utilize your system’s memory (up to the SKU’s limit) with 64-bit Windows, regardless of the amount, and if you are purchasing a high end gaming system you should definitely ask the OEM to put 64-bit Windows on it at the factory.

Do You Have Enough Memory?

Regardless of how much memory your system has, the question is, is it enough? Unfortunately, there’s no hard and fast rule that allows you to know with certainty. There is a general guideline you can use that’s based on monitoring the system’s “available” memory over time, especially when you’re running memory-intensive workloads. Windows defines available memory as physical memory that’s not assigned to a process, the kernel, or device drivers. As its name implies, available memory is available for assignment to a process or the system if required. The Memory Manager of course tries to make the most of that memory by using it as a file cache (the standby list), as well as for zeroed memory (the zero page list), and Vista’s Superfetch feature prefetches data and code into the standby list and prioritizes it to favor data and code likely to be used in the near future.

If available memory becomes scarce, that means that processes or the system are actively using physical memory, and if it remains close to zero over extended periods of time, you can probably benefit by adding more memory. There are a number of ways to track available memory. On Windows Vista, you can indirectly track available memory by watching the Physical Memory Usage History in Task Manager, looking for it to remain close to 100% over time. Here’s a screenshot of Task Manager on my 8GB desktop system (hmm, I think I might have too much memory!):

image

On all versions of Windows you can graph available memory using the Performance Monitor by adding the Available Bytes counter in the Memory performance counter group:

image 

You can see the instantaneous value in Process Explorer’s System Information dialog, or, on versions of Windows prior to Vista, on Task Manager’s Performance page.

Pushing the Limits of Windows

Out of CPU, memory and disk, memory is typically the most important for overall system performance. The more, the better. 64-bit Windows is the way to go to be sure that you’re taking advantage of all of it, and 64-bit Windows can have other performance benefits that I’ll talk about in a future Pushing the Limits blog post when I talk about virtual memory limits.

Original article is here

When authenticated to the System Center Virtual Machine Manager Self-Service Portal (SSP), selecting a Virtual Machine (VM) hosted on Windows Hyper-V and clicking “Connect to VM” results in the following error message being displayed on a white screen:

Virtual Machine Manager lost the connection to the virtual machine because another connection was established to this machine.

Consider the following scenario:

- 2 user accounts exist: User1 and User2
- User1 is not configured in SCVMM anywhere
- User2 is a member of a Self-Service User role in SCVMM and is the owner of a VM
- User1 logs onto a Windows client and connects to the SSP where they authenticate as User2
- The VM owned by User2 is selected from the list and the “Connect to VM” button is clicked

In this scenario, the error message above is displayed.

The reason this can occur is because the credentials for the user account logged onto Windows (User1) are passed through instead of those used to authenticate to the SSP (User2).

By default, the “Do not store my credentials” radio button is selected, which causes this behavior.

User1 can be authenticated on the Hyper-V host, but Authorization Manager (AzMan) fails to find any record of their privileges to connect to the VM’s console. (Because authentication succeeds, this is not considered a “failed logon attempt”.)

To resolve this issue, select the radio button “Store my credentials” on the logon page of the SSP. By doing this, the credentials entered here are passed through when “Connect to VM” is clicked.

For all the details and more information see our new Knowledge Base article below:

KB2288932 – Connecting to a Hyper-V VM in System Center Virtual Machine Manager using the Self Service Portal fails with “Virtual Machine Manager lost the connection to the virtual machine…”

J.C. Hornbeck | System Center Knowledge Engineer

Original Article is here

In the past few blogs we’ve covered Page Sharing and Second Level Paging. Today, let’s dig into what we’re delivering with Hyper-V Dynamic Memory in Windows Server 2008 R2 SP1 as well as our free hypervisor Microsoft Hyper-V Server 2008 R2 SP1. So what is Dynamic Memory?

Dynamic memory is an enhancement to Hyper-V R2 which pools all the memory available on a physical host and dynamically distributes it to virtual machines running on that host as necessary. That means that, based on changes in workload, virtual machines will be able to receive new memory allocations without a service interruption through Dynamic Memory Balancing. In short, Dynamic Memory is exactly what it’s named.

Let’s dive in and explain how all this works, starting with the new Dynamic Memory settings, which are available on a per virtual machine basis. Here’s a screenshot:

Dynamic Memory Settings Highlighted

Dynamic Memory In Depth

With Hyper-V (V1 & R2), memory is statically assigned to a virtual machine: you assign memory to a virtual machine, and when that virtual machine is turned on, Hyper-V allocates and provides that memory to the virtual machine. That memory is held while the virtual machine is running or paused. When the virtual machine is saved or shut down, that memory is released. Below is a screenshot for assigning memory to a virtual machine in Hyper-V V1/R2:

Hyper-V V1/R2 Static Memory

With Hyper-V Dynamic Memory there are two values: Startup RAM and Maximum RAM and it looks like this:

Hyper-V R2 SP1 Dynamic Memory

Startup RAM is the initial/startup amount of memory assigned to a virtual machine. When a virtual machine is started this is the amount of memory the virtual machine will be allocated. In this example, the virtual machine will start with 1 GB.

The Maximum RAM setting is the maximum amount of memory that the guest operating system can grow to, up to 64 GB of memory (provided the guest OS supports that much memory). Based on the settings above, here’s an example of what the memory allocation could look like over a workday…

[Chart: example memory allocation for the virtual machine over a workday]

As you can see, the workload is dynamically allocated memory based on demand.

Next, let’s look at the Memory Buffer.

Memory Buffer: In one of the earlier blogs posts in this series, we discussed the complexity of capacity planning in terms of memory. To summarize, there is no “one size fits all” answer for every workload as deployments can vary based on scale and performance requirements. However, one consistent bit of feedback was that customers always felt more comfortable by providing additional memory headroom ‘just in case.’

We completely agree.

The point is that you want to avoid a situation where a workload needs memory and Hyper-V has to start looking for it. You want some memory set aside as a buffer for these situations, especially for bursty workloads.

The Dynamic Memory buffer property specifies the amount of memory available in a virtual machine for file cache purposes (e.g. SuperFetch) or as free memory. The range of values is from 5 to 95. A target memory buffer is specified as a percentage of free memory and is based on current runtime memory usage. A target memory buffer percentage of 20% means that in a VM where 1 GB is used, 250 MB will ideally be ‘free’ (or available), for a total of 1.25 GB in the virtual machine. By default, Hyper-V Dynamic Memory uses a buffer allocation of 20%. If you find this percentage is too conservative or not conservative enough, you can adjust this setting on the fly while the virtual machine is running, without downtime.

Hyper-V Dynamic Memory Buffer
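To make the buffer arithmetic concrete, here’s a small sketch that reproduces the example above (1 GB in use, a 20% buffer, roughly 1.25 GB total). Treating the buffer percentage as the share of the total allocation kept free is my reading of that example rather than the documented Hyper-V algorithm, so take it as an approximation.

```python
# Sketch of the memory-buffer arithmetic from the example above.
# Assumption: the buffer percentage is the share of the VM's total allocation
# kept free; the exact formula Hyper-V uses may differ.

def target_allocation(used_gb, buffer_pct):
    """Return (total allocation, free buffer) in GB for a given in-use amount."""
    total = used_gb / (1 - buffer_pct / 100)
    return total, total - used_gb

total, buffer = target_allocation(used_gb=1.0, buffer_pct=20)
print(f"total ~{total:.2f} GB, buffer ~{buffer:.2f} GB")  # ~1.25 GB total, ~0.25 GB (about 250 MB) free
```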

This takes us to the last Dynamic Memory setting, Memory Priority.

Memory Priority: By default, all virtual machines are created equal in terms of memory prioritization. However, it’s very likely you’ll want to prioritize memory allocation based on workload. For example, I can see a scenario where one would give domain controllers greater memory priority than a departmental print server. Memory Priority is a per virtual machine setting which indicates the relative priority of the virtual machine’s memory needs measured against the needs of other virtual machines. The default is set to ‘medium’. If you find that you need to change this, you can adjust the setting on the fly while the virtual machine is running, without downtime.

  Hyper-V Dynamic Memory Priority

Dynamic Memory Works Over Time With A Few VMs…

I’ve explained the per VM settings and shown how this would work with a single virtual machine, but how does Dynamic Memory work with multiple virtual machines? Below is an example to show just how Dynamic Memory works. I’ve kept this example simple on purpose to avoid confusion. Let’s assume I have a small server with 8 GB of memory. I’m going to run three virtual machines, one each for Finance, Sales and Engineering. Each virtual machine is given the same settings: Startup RAM = 1 GB and Maximum RAM = 4 GB. With these settings, each virtual machine will start with 1 GB and can grow up to 4 GB as needed.

Virtual Machine Start. On the left graphic below, you can see three virtual machines starting. Each virtual machine is consuming 1 GB of memory for Startup RAM. On the right graphic below, you can see the total amount of memory being used in the entire system ~3 GB.

Hyper-V DM Start Time

15 minutes later. The Finance VM is running reports while the Engineering VM starts an analysis job. With Dynamic Memory, the Finance VM is allocated 3 GB of memory, the Engineering VM is allocated 2 GB of memory while the Sales VM remains at 1 GB. System wide, the server is now using 6 GB of its 8 GB or 75% of the total physical memory.

Dynamic Memory @ 15 Minutes

30 minutes later. The workloads have shifted: the Finance VM is now allocated 2 GB of memory, the Engineering VM is allocated 3.5 GB of memory, the Sales VM remains at 1 GB, and a fourth VM, a Service VM, has started using 1 GB of memory. System wide, the server is now using 7.5 GB of its 8 GB of memory for VMs. At this point the server is fully allocated in terms of memory and is using its memory most efficiently.
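If you want to tally the host-level numbers in this scenario yourself, here’s a small sketch. The VM names and per-VM allocations simply restate the example above; nothing here is measured from a real host.

```python
# Sketch: tally host memory use for the scenario above (an 8 GB host).
HOST_GB = 8

snapshots = {
    "startup":      {"Finance": 1.0, "Sales": 1.0, "Engineering": 1.0},
    "15 min later": {"Finance": 3.0, "Sales": 1.0, "Engineering": 2.0},
    "30 min later": {"Finance": 2.0, "Sales": 1.0, "Engineering": 3.5, "Service": 1.0},
}

for label, vms in snapshots.items():
    used = sum(vms.values())
    print(f"{label:>12}: {used:.1f} GB of {HOST_GB} GB ({used / HOST_GB:.0%})")
```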

 

At this point, the question I’m always asked is, “What now? What if a virtual machine still needs more memory? Does the parent start paging?”

No.

At this point, Dynamic Memory will attempt to reclaim pages from other virtual machines. However, in the absolute worst case where no free pages are available, the guest operating system will page as needed, not the parent. This is important because the guest operating system knows best what memory should and shouldn’t be paged. (I covered this back in Part 5…) Finally, when free memory does become available from other virtual machines, Dynamic Memory will move memory as needed.

Over-Subscription & the CPU Analogy

One argument we routinely hear is that there’s nothing wrong with over-subscription. Customers tell us that they take a bunch of physical servers, virtualize them and run the server with over-subscribed CPUs without issue, so why is this an issue with memory?

Great analogy, wrong conclusion.

Example 1: Suppose you are running 8 physical servers at 10% utilization, virtualize them and run those 8 virtual machines on a single server for a total of ~85% utilization. In this example, you’re not over-subscribing the CPU and the server still has 15% CPU headroom.

Over-subscription is this…

Example 2: Suppose you are running 8 physical servers at 50% utilization, virtualize them and run those 8 virtual machines on a single server. The single server would max out at 100% utilization, but because the workloads require ~400% utilization, performance would be terrible. What would you do? Move virtual machines to other servers of course to avoid over-subscription. In short, what you really want to do is maximize resource utilization to get the best balance of resources and performance.

That’s exactly what we’re doing with Hyper-V Dynamic Memory.
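Here’s the quick arithmetic behind both examples, as a sketch. Note that 8 servers at 10% adds up to 80% of one host’s CPU; the ~85% figure in Example 1 presumably allows a few extra points for virtualization overhead, which is my assumption rather than something stated above.

```python
# Quick check of the consolidation arithmetic in the two examples.
def consolidated_cpu(num_servers: int, utilization_pct: float) -> float:
    """Total CPU demand expressed as a percentage of a single host's capacity."""
    return float(num_servers * utilization_pct)

print(consolidated_cpu(8, 10))  # 80.0  -> fits on one host with headroom to spare
print(consolidated_cpu(8, 50))  # 400.0 -> four times what one host can supply: over-subscribed
```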

Customer Requirements & Dynamic Memory

When it comes to virtualization and memory, virtualization users have repeatedly provided the following requirements:

  1. Use physical memory as efficiently and dynamically as possible with minimal performance impact. Customers investing in virtualization hosts are purchasing systems with larger memory configurations (32 GB, 64 GB, 128 GB and more) and want to fully utilize this system asset. At the same time, they’re purchasing this memory to provide superior performance and to avoid paging.
  2. Provide consistent performance and scalability. One frequent comment from virtualization users is that they don’t want a feature with a performance cliff or inconsistent, variable performance. That makes it more difficult to manage and increases TCO.

You got it. Here’s why we’ve chosen the path we have with Dynamic Memory.

  1. Dynamic Memory is truly a dynamic solution. Memory is allocated to virtual machines on the fly, without service interruption, based on policy.
  2. Dynamic Memory avoids steep performance penalties by not adding additional levels of paging, which can significantly impact performance.
  3. Dynamic Memory takes advantage of Large Memory Pages and is, in fact, optimized for Large Memory Pages.
  4. Dynamic Memory is a great solution for virtualizing servers and desktops (BTW, Dynamic Memory works fine with SuperFetch).

Cheers,

Jeff Woolsey

Principal Group Program Manager

Windows Server & Cloud, Virtualization

Original article is here

In my last blog, we covered some follow-up questions about Page Sharing. Today, we’ll discuss Second Level Paging. To discuss its implications, let’s put virtualization aside for a moment, take a step back to level set, and start by discussing Virtual Memory and Paging.

Virtual Memory At A High Level

Modern operating systems employ virtual memory. Virtual memory is a way of extending the effective size of a computer’s memory by using a disk file (as swap space) to simulate additional memory space. The operating system keeps track of which memory addresses actually reside in memory and which ones must be brought in from disk when needed. Here are a few of the common memory management functions performed by modern operating systems:

  • Allow multiple applications to coexist in the computer’s physical memory (enforce isolation)
  • Use virtual addressing to hide the management of physical memory from applications
  • Extend the system’s memory capacity via swapping

Virtual Memory In Depth

Let’s dive in deeper. For that, I’m going to reference a TechNet article that discusses the Windows Virtual Memory Manager. If you’d like to read the full article it is here: http://technet.microsoft.com/en-us/library/cc767886.aspx. A second article I highly recommend on virtual memory is this one from Mark Russinovich: http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx

From the TechNet article:

Sharing A Computer’s Physical Memory

Operating systems that support multitasking allow code and data from multiple applications to exist in the computer’s physical memory (random access memory) at the same time. It is the operating system’s responsibility to ensure that physical memory is shared as efficiently as possible, and that no memory is wasted. As a result, an operating system’s memory manager must contend with a problem called memory fragmentation. Memory fragmentation refers to the situation where free (available) memory becomes broken into small, scattered pieces that are not large enough to be used by applications. In the example shown here, free memory is separated into three separate blocks.

Once free physical memory becomes fragmented, an operating system can consolidate free memory into a single, contiguous block by moving code and data to new physical addresses. In this case, the three blocks of free memory were consolidated into one larger block by moving system memory upward and application 1 downward in physical memory.

If an application accesses its code or data using physical memory addresses, the application may encounter problems when the operating system moves its code and data. A mechanism must be provided for applications to access their code and data no matter where the operating system moves them in physical memory.

Virtualizing Access to Memory

A common solution is to provide applications with a logical representation of memory (often called virtual memory) that completely hides the operating system’s management of physical memory. Virtual memory is an illusion that the operating system provides to simplify the application’s view of memory. Applications treat virtual memory as though it were physical memory. Meanwhile, the operating system can move code and data in physical memory whenever necessary.

In a virtual memory system, the addresses applications use to access memory are virtual addresses, not physical memory addresses. Every time an application attempts to access memory using a virtual address, the operating system secretly translates the virtual address into the physical address where the associated code or data actually resides in physical memory. Because the translation of virtual addresses to physical addresses is performed by the operating system, applications have no knowledge of (or need to be concerned with) where their code and data actually reside.

Extending Virtual Memory Through Swapping

When applications access memory using virtual addresses, the operating system is responsible for translation of virtual addresses to physical addresses. As a result, the operating system has total control over where data and code are physically stored. This not only means that the operating system can move code and data in physical memory as it likes, but it also means that code and data don’t need to be stored in physical memory at all!

A computer’s processor can only access code and data that resides in physical memory (RAM). However, physical memory is relatively expensive so most computers have relatively little of it. Most multitasking operating systems extend their virtual memory management schemes to compensate for this scarcity of physical memory. They rely on a simple, but very important fact: Code and data only need to be in physical memory when the processor needs to access them! When not needed by the processor, code and data can be saved temporarily on a hard disk (or other device with abundant storage). This frees physical memory for use by other code and data that the processor needs to access. The process of temporarily transferring code and data to and from the hard disk to make room in physical memory is called swapping.

Swapping is performed to increase the amount of virtual memory available on the computer. The memory manager performs swapping “behind the scenes” to make it appear as though the computer has more physical memory than it actually does. Effectively, the virtual memory available on a computer is equal to its physical memory plus whatever hard disk space the virtual memory manager uses to temporarily store swapped code and data.
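As a quick illustration of that “physical memory plus swap space” arithmetic, here’s a sketch that reports the two figures for whatever machine it runs on. It assumes Python with the third-party psutil package installed; the output is illustrative, not a precise accounting of the Windows commit limit.

```python
# Sketch: the "effective" virtual memory described above is roughly
# physical RAM plus the disk space backing the swap/page file.
# Assumes the third-party 'psutil' package is installed.
import psutil

GB = 1024 ** 3
ram = psutil.virtual_memory().total
swap = psutil.swap_memory().total
print(f"physical: {ram / GB:.1f} GB, swap: {swap / GB:.1f} GB, "
      f"effective: {(ram + swap) / GB:.1f} GB")
```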

Loading Swapped Code And Data On Demand

If an application attempts to access code or data that is not in physical memory (it was swapped to disk) the virtual memory manager gets control. The virtual memory manager locates (or creates) an available block of physical memory, and copies the required code or data into the block so it can be accessed. Applications are not aware that their code and data were ever swapped to disk. The code and data are automatically loaded into physical memory by the virtual memory manager whenever the application needs to use them.

Key Points:

  • Operating system memory management abstracts physical memory from applications and enforces isolation
  • The memory manager extends the system’s memory capacity via swapping

Ok, now that we’ve discussed how virtual memory and paging works, let’s relate this to virtualization.

Static Memory & Guest Only Paging

Today with Hyper-V (V1 & R2), memory is statically assigned to a virtual machine: you assign memory to a virtual machine, and when that virtual machine is turned on, Hyper-V allocates and provides that memory to the virtual machine. That memory is held while the virtual machine is running or paused. When the virtual machine is saved or shut down, that memory is released. Below is a screenshot for assigning memory to a virtual machine today:

Memory Assignment Cropped 

This memory is 100% backed by physical memory and is never paged by the host. Remember that the guest is actively determining which pages should and shouldn’t be paged as it manages all the memory allocated to the virtual machine, and it knows best how to do so. Here’s a basic picture to illustrate what this looks like in a virtualization environment. There are four virtual machines running and each of the guest kernels is managing its own memory.

Guest Only Paging

Ok, now let’s dive into Second Level Paging…

Second Level Paging: What Is It?

Second Level Paging is a technique where the virtualization platform creates a second level of memory abstraction, and swap files are created by the virtualization layer to page memory to disk when the system is oversubscribed. With Second Level Paging, you have two tiers of paging to disk: one within the guest and one below it at the virtualization layer. Here’s another picture to illustrate how Second Level Paging fits in. Again, there are four virtual machines running and each of the guest kernels is managing its own memory. However, notice that below them is the second level of paging managed independently by the virtualization platform.

Second Level Paging

One common argument used in favor of Second Level Paging I’ve heard is this “If Windows and modern OSes all use paging today, why is this bad with virtualization?”

Great question.

Answer: Performance.

With Second Level Paging, memory assigned to a virtual machine can be backed by memory or by disk. The result is that Second Level Paging creates issues that are unique to a virtualized environment. When the system is oversubscribed, the virtualization layer can and will blindly and randomly swap out memory that the guest is holding, even critical sections that the guest kernel is specifically keeping in memory for performance reasons. Here’s what I mean.

Swapping the Guest Kernel

Swapping the guest kernel is an example where virtualization is creating an issue that doesn’t exist on physical systems. In an OS kernel, there are specific critical sections in memory that an operating system kernel never pages to disk for performance reasons. This is a subject where Microsoft and VMware agree and VMware states as much in their documentation.

“…hypervisor swapping is a guaranteed technique to reclaim a specific amount of memory within a specific amount of time. However, hypervisor swapping may severely penalize guest performance. This occurs when the hypervisor has no knowledge about which guest physical pages should be swapped out, and the swapping may cause unintended interactions with the native memory management policies in the guest operating system. For example, the guest operating system will never page out its kernel pages since those pages are critical to ensure guest kernel performance. The hypervisor, however, cannot identify those guest kernel pages, so it may swap them out. In addition, the guest operating system reclaims the clean buffer pages by dropping them. Again, since the hypervisor cannot identify the clean guest buffer pages, it will unnecessarily swap them out to the hypervisor swap device in order to reclaim the mapped host physical memory.”

Understanding Memory Resource Management in VMware ESX Server p. 9-10; http://www.vmware.com/resources/techresources/10062

Thus, the more you oversubscribe memory, the worse the overall performance because the system has to fall back to using disk and ultimately trade memory performance for disk performance. Speaking of comparing memory to disk performance…

Memory vs Disk Performance

Finally, there is the performance comparison, or really the lack thereof, because there is no comparison between memory and disk. This isn’t debatable. This is fact. Let’s do a little math. Let’s assume that the typical disk seek time is ~8 milliseconds. For memory access, here are the response times in nanoseconds:

  • DDR3-1600 = 5 nanoseconds
  • DDR3-1333 = 6 ns
  • DDR3-1066 = 7.5 ns
  • DDR3-800 = 10 ns

So, if you want to compare disk access to DDR3-1600 memory access, the formula is 0.008 / 0.000000005 (8 ms divided by 5 ns, both expressed in seconds). Here are the results:

  • DDR3-1600 memory is 1,600,000 times faster than disk
  • DDR3-1333 memory is 1,333,333 times faster than disk
  • DDR3-1066 memory is 1,066,666 times faster than disk
  • DDR3-800 memory is 800,000 times faster than disk

We’ve heard on many occasions that virtualization users have been told that the performance of Second Level Paging “isn’t that bad.” I don’t know how anyone can say with a straight face that a performance penalty of greater than six orders of magnitude isn’t that bad. To put 1.6 million times faster in perspective, assume it took you an hour to walk one mile. If you traveled 1.6 million times faster, you would cover 1.6 million miles in that hour, enough for more than three round trips to the Moon (which is roughly 239,000 miles away).
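If you want to check the arithmetic, here’s a short sketch that reproduces the ratios above (modulo rounding), along with the walking analogy.

```python
# Reproduce the memory-vs-disk ratios above: ~8 ms disk seek vs DRAM latency in ns.
DISK_SEEK_S = 0.008
dram_latency_ns = {"DDR3-1600": 5, "DDR3-1333": 6, "DDR3-1066": 7.5, "DDR3-800": 10}

for name, ns in dram_latency_ns.items():
    print(f"{name}: {DISK_SEEK_S / (ns * 1e-9):,.0f}x faster than disk")

# The walking analogy: 1 mile per hour, sped up 1.6 million times, covers
# 1.6 million miles in an hour -- over three round trips to the Moon (~239,000 miles away).
print(f"{1_600_000 / (2 * 239_000):.1f} round trips to the Moon")
```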

Microsoft & VMware Agree: Avoid Oversubscription

The fact that swapping to disk carries a significant performance penalty and you should avoid it is another area where Microsoft and VMware agree. This isn’t new guidance so I’ve included examples from ESX 3 and VSphere.

From VMware:

Example 1: Make sure the host has more physical memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines running at any one time.

–Performance Tuning Best Practices for ESX Server 3

Example 2: if the working set is so large that active pages are continuously being swapped in and out (that is, the swap I/O rate is high), then performance may degrade significantly. To avoid swapping in specific virtual machines please configure memory reservations for them (through the VI Client) at least equal in size to their active working sets. But be aware that configuring resource reservations can limit the number of virtual machines one can consolidate on a system.

–Performance Tuning Best Practices for ESX Server 3 page 15

Example 3: ESX also uses host-level swapping to forcibly reclaim memory from a virtual machine. Because this will swap out active pages, it can cause virtual machine performance to degrade significantly.

–Performance Tuning Best Practices for VSphere page 23

Translation: Ensure that the memory used by ESX and its virtual machines reside in physical memory and avoid swapping to disk, i.e. avoid oversubscribing memory.

Final Points on Second Level Paging

  • Second Level Paging breaks the fundamental assumption that Guest Operating Systems have an accurate representation of physical memory
  • Memory is roughly 800,000 to 1,600,000 times faster than disk
  • When the system is oversubscribed, Second Level Paging carries a significant performance hit. Simply stated, the more the system is oversubscribed, the more it relies on swapping to disk and the worse the overall system performance.

The good news is that there are other ways to pool and allocate memory and Hyper-V Dynamic Memory is a good solution for desktop and server operating systems… In my next blog, we’ll explain Hyper-V Dynamic Memory.

Cheers,

Jeff Woolsey

Principal Group Program Manager

Windows Server, Virtualization

Original article is here

In my last blog, we discussed Page Sharing in depth. To say this was a popular blog post would be a gross understatement. We received a lot of great feedback and a number of questions. So, I thought we’d close the loop on these questions about Page Sharing before getting to Second Level Paging in the next blog.

Questions

Q: When you say that Hyper-V R2 supports Large Memory Pages, does that mean Hyper-V uses 2 MB memory page allocations, or does it mean it provides Large Memory Pages to the guest, or both?

A: Both.

  1. Hyper-V R2 supports Large Memory Pages, meaning that if the underlying hardware platform provides this capability, Hyper-V R2 will automatically take advantage of it in its memory allocations. It’s important to note that by virtue of Hyper-V R2 running on a Large Memory Page capable platform, virtualized workloads benefit without having to be large page aware themselves, because Hyper-V uses Large Pages to back guest RAM.
  2. If the guest operating system and applications support Large Memory Pages (and of course the underlying hardware platform supports Large Memory Pages), then those virtualized workloads can use Large Memory Page allocations within the guest as well.

===================================================

Q: So, to take advantage of Large Memory Pages do applications have to be rewritten to use this functionality?

A: No. While having the guest operating system and applications use Large Memory Pages can be a good thing, it’s important to note that applications don’t have to be large page aware to benefit from Hyper-V’s usage of Large Pages to back guest RAM.

===================================================

Q: You mentioned that SuperFetch can impact Page Sharing efficacy as it eliminates zero pages. Isn’t that a Windows specific feature, do other operating systems employ similar techniques?

A: Yes, other operating systems use similar techniques. For example, Linux has a feature called “Preload.” Preload is an “adaptive readahead daemon” that runs in the background of your system and observes which programs you use most often, caching them in order to speed up application load time. Preload utilizes otherwise unused RAM and improves overall system performance.

BTW: OSNews said this about SuperFetch:

SuperFetch is something all operating systems should have. I didn’t buy 4GB of top-notch RAM just to have it sit there doing nothing during times of low memory requirements. SuperFetch makes my applications load faster, which is really important to me – I come from a BeOS world, and I like it when my applications load instantly.

SuperFetch’s design makes sure that it does not impact the system negatively, but only makes the system smoother. Because it runs at a low priority, its cache doesn’t take away memory from the applications you’re running.

http://www.osnews.com/story/21471/SuperFetch_How_it_Works_Myths

===================================================

Q: You didn’t mention Address Space Layout Randomization (ASLR) and if it impacts Page Sharing, does it?

A: I didn’t cover ASLR because the blog was pretty long already, but since you asked…

Yes, ASLR does impact the Page Sharing efficacy, but first a quick description of ASLR from a TechNet article written by Mark Russinovich:

The Windows Address Space Layout Randomization (ASLR) feature makes it more difficult for malware to know where APIs are located by loading system DLLs and executables at a different location every time the system boots. Early in the boot process, the Memory Manager picks a random DLL image-load bias from one of 256 64KB-aligned addresses in the 16MB region at the top of the user-mode address space. As DLLs that have the new dynamic-relocation flag in their image header load into a process, the Memory Manager packs them into memory starting at the image-load bias address and working its way down.

Mark continues with…

In addition, ASLR’s relocation strategy has the secondary benefit that address spaces are more tightly packed than on previous versions of Windows, creating larger regions of free memory for contiguous memory allocations, reducing the number of page tables the Memory Manager allocates to keep track of address-space layout, and minimizing Translation Lookaside Buffer (TLB) misses.

Today, the impact of ASLR on Page Sharing is relatively low (~10%) compared to Large Memory Pages and SuperFetch, but it is indeed another factor that affects Page Sharing efficacy. Moreover, that’s not to say that future improvements in ASLR won’t impact Page Sharing efficacy further.

===================================================

Q: You mention that Large Memory Page support is included in the last few generations of Opterons and that Intel has added support in the new “Nehalem” processors. Do you mean older Intel x86/x64 processors do not support Large Memory Pages?

A: 5/6/2010: CORRECTION: Actually, older x86/x64 processors do support Large Memory Pages going back many generations. However, 32-bit systems generally didn’t support generous amounts of memory (most maxed out at 4 GB which is a small fraction of what 64-bit systems support) so support for Large Memory Pages wasn’t as crucial as it is now with 64-bit servers being the norm.

In my next blog we’ll discuss Second Level Paging…

Jeff Woolsey

Principal Group Program Manager

Windows Server, Virtualization

Original article is here

Memory Overcommit, an Overloaded Term…

When it comes to virtualization and memory, I regularly hear the term “memory overcommit” used as if it’s a single technology. The problem is that there are numerous techniques that can be employed to more efficiently use memory which has led to much confusion. Some customers think page sharing equals overcommit. Others think second level paging equals memory overcommit and so on.

So, to avoid any confusion, here’s the definition of overcommit according to the Merriam Webster dictionary online:

http://www.merriam-webster.com/dictionary/overcommit

Main Entry: over·com·mit

: to commit excessively: as a : to obligate (as oneself) beyond the ability for fulfillment b : to allocate (resources) in excess of the capacity for replenishment

Memory overcommit simply means to allocate more memory resources than are physically present. In a physical (non-virtualized) environment, the use of paging to disk is an example of memory overcommit. Now that we’ve defined it, I’m done using this term to avoid the aforementioned confusion. From here on, I’m going to refer to specific memory techniques.

In a virtualized environment, there are a variety of different memory techniques that can be employed to more efficiently use memory such as page sharing, second level paging and dynamic memory balancing (e.g. ballooning is one technique, hot add/remove memory is another). Each one of these methods has pros and cons and varying levels of efficacy.

Today we’ll discuss Page Sharing in detail.

BTW, before we dive in, let me state at the outset that we have spent a lot of time looking at this technology and have concluded that it is not the best option for us to use with dynamic memory. Hopefully, this will explain why…

How Page Sharing Works

Page Sharing is a memory technique where the hypervisor inspects and hashes all memory in the system and stores the hashes in a hash table. Over time, which can be hours, the hashes are compared, and if identical hashes are found, a further bit-by-bit comparison is performed. If the pages are exactly the same, a single copy is stored and the memory of multiple VMs is mapped to this shared page. If any one of these virtual machines needs to modify that shared page, copy-on-write semantics are used, resulting in a new (and thus unshared) memory page.
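To make the mechanics concrete, here’s a toy sketch of hash-then-compare sharing with copy-on-write. It is purely illustrative Python, not hypervisor code, and it glosses over everything that makes the real thing hard (scan rates, page tables, NUMA, and so on).

```python
# Toy illustration of page sharing: hash candidate pages, do a bit-for-bit
# compare on a hash match, back identical pages with one copy, and un-share
# (copy-on-write) when a sharer modifies its page.
import hashlib

PAGE_SIZE = 4096  # 4 KB pages

class SharedPageStore:
    def __init__(self):
        self.by_hash = {}  # digest -> canonical page contents

    def intern(self, page: bytes) -> bytes:
        digest = hashlib.sha256(page).digest()
        existing = self.by_hash.get(digest)
        if existing is not None and existing == page:  # bit-for-bit confirmation
            return existing                            # map to the shared copy
        self.by_hash[digest] = page
        return page

store = SharedPageStore()
vm1 = [store.intern(bytes(PAGE_SIZE)), store.intern(b"A" * PAGE_SIZE)]
vm2 = [store.intern(bytes(PAGE_SIZE)), store.intern(b"B" * PAGE_SIZE)]
print(vm1[0] is vm2[0])  # True: both VMs' zero pages are backed by one copy

# Copy-on-write: a VM that modifies a shared page gets its own private copy.
vm2[0] = bytes([1]) + vm2[0][1:]
print(vm1[0] is vm2[0])  # False: the pages have diverged
```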

Page Sharing is a well understood memory technique and there are a number of factors that contribute to its efficacy such as:

  1. Large Memory Pages
  2. OS Memory Utilization & Zero Pages

Page Sharing, TLBs, Large Memory Pages & More…

To discuss Large Memory Page Support and its implications on Page Sharing, let’s take a step back and level set. For that, I’m going to reference an excellent article written by Alan Zeichick from AMD in 2006. While the focus of this article discusses the implications of large memory pages and java virtual machines, it also applies to machine virtualization. I’m going to reference specific sections, but if you’d like to read the full article it is here:

http://developer.amd.com/documentation/articles/pages/2142006111.aspx

All x86 processors and modern 32-bit and 64-bit operating systems allocate physical and virtual memory in pages. The page table maps virtual address to physical address for each native application and “walking” it to look up address mappings takes time. To speed up that process, modern processors use the translation lookaside buffer (TLB), to cache the most recently accessed mappings between physical and virtual memory.

Often, the physical memory assigned to an application or runtime isn’t contiguous; that’s because in a running operating system, the memory pages can become fragmented. But because the page table masks physical memory address from applications, apps think that they do have contiguous memory. (By analogy, think about how fragmented disk files are invisible to applications; the operating system’s file system hides all of it.)

When an application needs to read or write memory, the processor uses the page table to translate the virtual memory addresses used by the application to physical memory addresses. As mentioned above, to speed this process, the processor uses a cache system—the translation lookaside buffers. If the requested address is in the TLB cache, the processor can service the request quickly, without having to search the page table for the correct translation. If the requested address is not in the cache, the processor has to walk the page table to find the appropriate virtual-to-physical address translation before it can satisfy the request.

The TLB’s cache is important, because there are a lot of pages! In a standard 32-bit Linux, Unix, or Windows server with 4GB RAM, there would be a million 4KB small pages in the page table. That’s big enough—but what about a 64-bit system with, oh, 32GB RAM? That means that there are 8 million memory 4KB pages on this system.

Mr. Zeichick continues:

Why is it [Large Pages] better? Let’s say that your application is trying to read 1MB (1024KB) of contiguous data that hasn’t been accessed recently, and thus has aged out of the TLB cache. If memory pages are 4KB in size, that means you’ll need to access 256 different memory pages. That means searching and missing the cache 256 times—and then having to walk the page table 256 times. Slow, slow, slow.

By contrast, if your page size is 2MB (2048KB), then the entire block of memory will only require that you search the page table once or twice—once if the 1MB area you’re looking for is contained wholly in one page, and twice if it splits across a page boundary. After that, the TLB cache has everything you need. Fast, fast, fast.

It gets better.

For small pages, the TLB mechanism contains 32 entries in the L1 cache, and 512 entries in the L2 cache. Since each entry maps 4KB, you can see that together these cover a little over 2MB of virtual memory.

For large pages, the TLB contains eight entries. Since each entry maps 2MB, the TLBs can cover 16MB of virtual memory. If your application is accessing a lot of memory, that’s much more efficient. Imagine the benefits if your app is trying to read, say, 2GB of data. Wouldn’t you rather it process a thousand buffed-up 2MB pages instead of half a million wimpy 4KB pages?

Kudos, Mr. Zeichick.

I expanded on Mr. Zeichick’s physical memory to page table entry example and created this table to further illustrate the situation with 4KB pages at varying physical memory sizes.

Physical Memory | Page Table Entries (4 KB pages)
4 GB | 1 million pages
32 GB | 8 million pages
64 GB | 16 million pages
96 GB | 24 million pages
128 GB | 32 million pages
192 GB | 48 million pages
256 GB | 64 million pages
384 GB | 96 million pages
512 GB | 128 million pages
1 TB | 256 million pages
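For anyone who wants to reproduce these numbers, here’s a short sketch. It uses the table’s binary convention (1 “million” pages = 2^20) and also recomputes the TLB-reach figures quoted from the AMD article.

```python
# Reproduce the page counts above (4 KB pages) and the quoted TLB reach.
KB, MB, GB, TB = 2**10, 2**20, 2**30, 2**40
SMALL_PAGE, LARGE_PAGE = 4 * KB, 2 * MB

for label, mem in (("4 GB", 4 * GB), ("32 GB", 32 * GB), ("192 GB", 192 * GB), ("1 TB", TB)):
    print(f"{label:>6} -> {mem // SMALL_PAGE // 2**20} million 4 KB pages")  # 'million' = 2^20, as in the table

# TLB reach from the quoted example: (32 + 512) small-page entries vs 8 large-page entries.
print(f"small-page TLB reach: {(32 + 512) * SMALL_PAGE / MB:.3f} MB")  # ~2.1 MB
print(f"large-page TLB reach: {8 * LARGE_PAGE // MB} MB")              # 16 MB
```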

When you consider that servers have supported 32/64 GB of memory for years now, and that many industry standard servers shipping today, like the HP DL 385 G6, support up to 192 GB of memory per server, you can quickly see that the time for larger memory page support is overdue. Take a look at the recently released Nehalem EX processor. The Nehalem EX supports up to 256 GB of memory per socket, so you could theoretically have a 4 socket server with 1 TB of physical memory. Do you really want to access all this memory 4k at a time?

(Even with just 64 GB of physical memory in a server, think of this as filling up an Olympic size swimming pool with water one 8 ounce cup at a time and it just gets worse as you add and use more memory…)

Key Points:

  • The TLB is a critical system resource that you want to use as effectively and efficiently as possible, as it can have a significant impact on system performance.
  • The use of 4k memory pages in a 32-bit world where systems max out at 4 GB of memory has been an issue for years now, and the problem is far worse in a 64-bit world with systems easily capable of tens (if not hundreds) of gigabytes of memory.
  • Using 4k memory pages on 64-bit systems with much greater memory support drastically reduces the effectiveness of the TLB and overall system performance.

There’s More: SLAT, NPT/RVI, EPT and Large Pages…

One point I want to add is how Large Pages and Second Level Address Translation (SLAT) hardware coexist. With nested paging (AMD calls this Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT), while Intel calls this Extended Page Tables, or EPT), a page table in the hardware takes care of the translation between the guest address of a VM and the physical address, reducing overhead. With SLAT hardware, performance is generally improved about 20% across the board, and can be much higher (independent third parties have reported 100%+ performance improvements) depending on how memory intensive the workload is. In short, SLAT hardware is goodness, and if you’re buying a server today as a virtualization host you want to ensure you’re purchasing servers with this capability.

One important point that doesn’t appear to be well-known is that SLAT hardware technologies are designed and optimized with Large Memory Pages enabled. Essentially, the additional nesting of page tables makes TLB cache misses more expensive resulting in about a ~20% performance reduction if you’re using SLAT hardware with Large Memory Page Support disabled. Furthermore, that’s not including the 10-20% average performance improvement (could be more) expected by using Large Memory Pages in the first place. Potentially, we’re talking about a 40% performance delta running on SLAT hardware depending on whether Large Memory Pages are used or not.

You may want to read those last two paragraphs again.
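To see why a TLB miss hurts more once SLAT is in the picture, here’s a back-of-the-envelope sketch. It assumes 4-level page tables in both the guest and the host, and the resulting “up to 24 memory accesses” figure is the commonly cited worst case for a two-dimensional page walk, not a Hyper-V-specific measurement.

```python
# Back-of-the-envelope: cost of a TLB miss, in memory accesses, with and
# without nested paging. Assumes 4-level page tables in guest and host.
def page_walk_accesses(levels: int = 4, nested: bool = False) -> int:
    if not nested:
        return levels                       # one memory access per table level
    # Nested: each guest table reference needs its own host walk, plus the
    # final host walk that translates the resulting guest-physical address.
    return levels * (levels + 1) + levels

print(page_walk_accesses())             # 4  accesses on bare metal
print(page_walk_accesses(nested=True))  # 24 accesses, worst case, under nested paging
```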

In short, using Large Memory Pages is a no brainer. You definitely want to take advantage of Large Memory Pages:

  • Improved performance (on average 10-20% & can be higher)
  • More efficient TLB utilization
  • Avoid a ~20% performance hit on SLAT hardware

HW Evolution, Large Memory Pages & the Implications on Page Sharing

Computing hardware is in a constant state of evolution. Take networking for example. Originally, the frame size for Ethernet was 1518 bytes, largely due to the fact that early networks operated at much lower speeds and with higher error rates. As networking evolved and improved (faster and with lower error rates), the 1518 byte size was recognized as a bottleneck and jumbo frames were introduced. Jumbo frames are larger, up to 9014 bytes in length; they carry roughly 6x more payload per frame and reduce CPU utilization by reducing the number of interrupts.

Large Memory Pages is a similar situation where server hardware is evolving to improve the overall system scale and performance. As a byproduct of this evolution, it changes some base assumptions. Specifically, Large Memory Pages changes the fundamental assumption of small 4k memory pages to larger and more efficient 2MB memory pages. However, there are implications to changing such fundamental assumptions. While you can identify and share 4k pages relatively easily, the likelihood of sharing a 2MB page is very, very low (if not zero). In fact, this is an area where Microsoft and VMware agree. VMware acknowledges this point and states as much.

From VMware:

The only problem is that when large pages is used, Page Sharing needs to find identical 2M chunks (as compared to 4K chunks when small pages is used) and the likelihood of finding this is less (unless guest writes all zeroes to 2M chunk) so ESX does not attempt collapses large pages and thats [sic] why memory savings due to TPS goes down when all the guest pages are mapped by large pages by the hypervisor.

http://communities.vmware.com/message/1262016#1262016

Bottom Line: Page Sharing works in a legacy 4k Memory Page world, but provides almost no benefit in a modern 2MB Memory Page world.

As stated previously, support for Large Memory Pages is a no brainer. In fact, when designing Hyper-V Dynamic Memory, we made sure to optimize for the case where Large Memory Pages are present, because we expect they will soon be standard. We are so confident in Large Memory Page support that:

  • Windows Server 2008/2008 R2 have Large Memory Pages enabled by default
  • Windows Vista/7 have Large Memory Pages enabled by default
  • Windows Server 2008 R2 Hyper-V added support for Large Memory Pages (surprise!) and is one of many new performance improvements in R2

Memory page size is evolutionary. You can expect memory page size to grow beyond 2MB to even larger page sizes in the future. In fact, newer AMD64 processors can use 1GB pages in long mode and Intel is adding 1GB memory page support in their upcoming Westmere processors. (BTW, that’s not a typo, 1GB pages…)

In addition to Large Page Memory, another factor impacting the efficacy of Page Sharing is OS Memory Utilization and Zero Pages.

Page Sharing, OS Memory Utilization & Zero Pages

One aspect of page sharing most people may not know is that the greatest benefit of page sharing comes from sharing zeroed pages. Let’s assume for a moment that I have a Windows XP system with 2GB of memory. As you can see in the screenshot below from a freshly booted system running Windows XP with no apps, the OS is using ~375MB of memory while the remaining memory ~1.8GB is unused and unfortunately wasted.

[Screenshot: Task Manager on a freshly booted Windows XP system with 2 GB of RAM]

In reality, you want the operating system to take full advantage of all the memory in the system and use it as an intelligent cache to improve system performance and responsiveness. If you’re going to buy a brand new system (I see an online ad today for a brand new quad core system with 8 GB of memory for $499) don’t you want the OS to use that memory? Of course you do. That’s why we created SuperFetch.

SuperFetch keeps track of which applications you use most and loads this information in RAM so that programs load faster than they would if the hard disk had to be accessed every time. Windows SuperFetch prioritizes the programs you’re currently using over background tasks and adapts to the way you work by tracking the programs you use most often and pre-loading these into memory. With SuperFetch, background tasks still run when the computer is idle. However, when the background task is finished, SuperFetch repopulates system memory with the data you were working with before the background task ran. Now, when you return to your desk, your programs will continue to run as efficiently as they did before you left. It is even smart enough to know what day it is in the event you use different applications more often on certain days.

OK, so how is RAM usage affected? You may have noticed that Windows 7 tends to use a much greater percentage of system RAM than Windows XP does. It is not uncommon to view Task Manager on a Windows 7 system with several GB of RAM installed and see less than 100 MB of RAM showing up as free. For instance, here is a screenshot of Task Manager from the machine I am working on now.

[Screenshot: Task Manager on a Windows 7 system with 8 GB of RAM]

As you can see, this system has 8GB of physical memory and is using 3.29 GB. I’m running Windows 7 x64 Edition, Outlook, OneNote, Word, Excel, PowerPoint, Windows Live Writer, Live Photo Gallery, several instances of IE with over a dozen tabs open, and other day-to-day tools, and you can see that it shows 0 MB of free physical memory. At first glance, this would seem to be something to worry about, but once you consider the impact of SuperFetch this condition becomes less of a concern. Notice that ~5827MB is being used for cache.

Excellent.

Windows 7 is fully utilizing the system memory resources and intelligently caching, so I have a responsive system (fetching less from disk) with great performance, and it is more likely the hard drive can spin down to save power and provide longer battery life.

So, why am I explaining Zero Pages and SuperFetch?

Because page sharing obtains its greatest benefit from sharing zero pages on older operating systems, and it is less efficacious on modern operating systems. Like Jumbo Frames and Large Memory Pages, SuperFetch is another example of evolutionary change in computing.
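A toy example makes the point about zero pages and granularity (see also the page sharing sketch earlier). The memory image below is fabricated purely for illustration; the only thing it demonstrates is that plenty of individual 4 KB pages can be zero while almost no 2 MB region is zero from end to end.

```python
# Toy example: zero pages are easy sharing targets at 4 KB granularity,
# but a whole 2 MB region is rarely all-zero. The "memory image" is made up.
import random

KB, MB = 2**10, 2**20
ZERO_PAGE = bytes(4 * KB)
random.seed(1)

# Fake 64 MB of guest memory: ~40% zero pages scattered among used pages.
pages = [ZERO_PAGE if random.random() < 0.4 else bytes([1]) * (4 * KB)
         for _ in range(64 * MB // (4 * KB))]

zero_4k = sum(p == ZERO_PAGE for p in pages)
print(f"shareable 4 KB zero pages: {zero_4k} of {len(pages)}")

# Regroup the same memory into 2 MB regions: a region is only shareable
# if every one of its 512 small pages is zero.
chunk = (2 * MB) // (4 * KB)
zero_2m = sum(all(p == ZERO_PAGE for p in pages[i:i + chunk])
              for i in range(0, len(pages), chunk))
print(f"shareable 2 MB zero regions: {zero_2m} of {len(pages) // chunk}")
```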

In talking with customers who are investigating hosting virtual desktops, they told us that Windows 7 is their overwhelming choice. This was another important data point in determining how we chose to implement dynamic memory because making the assumption that an OS will have lots of zero pages around isn’t a good one today or for the future.

Final Points on Page Sharing

To sum up…

Large Memory (2MB) Pages support is widely available in processors from AMD and Intel today. AMD and Intel have included support for Large Memory Pages going back many generations of x86/x64 processors. However, 32-bit systems generally didn’t support generous amounts of memory (most maxed out at 4 GB which is a small fraction of what 64-bit systems support) so support for Large Memory Pages wasn’t as crucial as it is now with 64-bit servers being the norm.

  • Page Sharing on systems with Large Memory Pages enabled results in almost no shared pages. While you can identify and share 4k pages relatively easily, the likelihood of sharing a 2MB page is very, very low (if not zero). Again, this is an area where Microsoft and VMware agree.
  • Read that last bullet item again
  • Page Sharing works with small 4k memory pages. The downside to small memory pages is that they don’t efficiently use the TLB while Large Memory Pages more efficiently use the TLB and can significantly boost performance
  • Using small 4k memory pages instead of Large Memory pages reduces performance on SLAT hardware by ~20%
  • Windows Server 2008/2008 R2 have Large Memory Pages enabled by default
  • Windows Vista/7 have Large Memory Pages enabled by default
  • Windows Server 2008 R2 Hyper-V added support for Large Memory Pages and is one of the many new performance improvements in R2 (surprise!)
  • Page Sharing efficacy is decreasing (irrespective of Large Memory Pages) as modern OSes take full advantage of system memory to increase performance
  • The process of inspecting, hashing all the memory in the system, storing it in a hash table and then performing a bit-by-bit inspection can take hours. The time it takes is dependent on a number of variables such as the homogeneity of the guests, how busy the guests are, how much memory is physically in the system, if you’re load balancing VMs, etc.
  • Page sharing isn’t a particularly dynamic technique, meaning, if the hypervisor needs memory for another virtual machine immediately, page sharing isn’t the best answer. The converse is true as well. If a virtual machine frees up memory which could be used by other virtual machines, page sharing isn’t the best answer here either.

I hope this blog demonstrates that we have spent a lot of time looking at this technology and, after significant analysis, concluded it is not the best option for us to employ with Hyper-V Dynamic Memory. Moreover, the case for supporting Large Memory Pages is a no brainer. We feel so strongly about supporting Large Memory Pages that when designing Hyper-V Dynamic Memory, we made sure to optimize for the case where Large Memory Pages are present, because we expect them to be standard. The benefits are too great to pass up. The good news is that there are other ways to pool and allocate memory, and Hyper-V Dynamic Memory is a good solution for desktop and server operating systems.

In my next blog, we’ll discuss second level paging.

Jeff Woolsey

Windows Server Hyper-V

Original article is here

Virtualization Nation,

When it comes to virtualization and memory, customers want to use physical memory as efficiently and dynamically as possible with minimal performance impact and provide consistent performance and scalability.

Looking at the bigger picture

In addition to asking customers about memory and how it relates to virtualization, we took a step back and talked to our customers about the broader topic of memory and capacity planning. Let’s remove virtualization from the equation for the moment. If you were to set up some new physical servers, how would you do it? How would you determine the workload memory requirements and the amount of memory to purchase?

For example,

  • How much memory does a web server require?
    • Is this for an internal LOB application?
    • Is this a front end web server receiving hundreds/thousands/more hits a day?
  • How much memory does a file server require?
    • Is this a departmental file server serving a few dozen folks?
    • Is this a corporate file server serving a few thousand folks?
  • How about Windows Server 2008 R2 BranchCache?
  • Domain Controllers?
  • Windows Server 2008 R2 DirectAccess Servers?
  • Print Servers?
  • <insert your application here>

If you answered, “it depends,” you’re correct. There isn’t one simple answer to this question. Your mileage will vary based on your workload and business requirements for scale and performance. When we ask customers how they tackle this problem, here are a few of the common answers:

  • “I give all servers [pick one: 2 GB, 4 GB, 8 GB] of memory and add more if users complain.”
  • “I take the minimum system requirements and add [pick one: 25%, 50%, 100%] more. I have no idea what is happening with that memory, I just don’t want any trouble tickets.”
  • “I do what the vendor recommends. If it’s 4 GB, it’s at least 4GB and some extra as buffer. I don’t have time to test further.”

The result is far from optimal. Customers overprovision their hardware and don’t use it efficiently which in turn raises the TCO.

Wouldn’t it be great if your workloads automatically and dynamically allocated memory based on workload requirements and you were provided a flexible policy mechanism to control how these resources are balanced across the system?

We think so too.

In my next blog, we’ll discuss the confusion that is “memory overcommit.”

Cheers,

Jeff Woolsey

Windows Server Hyper-V

Original article is here

Virtualization Nation,

I’ve had the pleasure of talking with customers in the last few months, and the Hyper-V R2 reception has been nothing but unequivocally positive. Whether it’s folks in small business, medium business, or the enterprise, they appreciate the new capabilities in Windows Server 2008 R2 Hyper-V and the free Microsoft Hyper-V Server 2008 R2. At the same time, we’re always listening to our customers to better understand their business requirements and requests so we know what to build for subsequent releases. Today, we’re pleased to announce new capabilities that will enhance both virtualized server and virtualized desktop deployments:

  • RemoteFX: With Microsoft RemoteFX, users will be able to work remotely in a Windows Aero desktop environment, watch full-motion video, enjoy Silverlight animations, and run 3D applications within a Hyper-V VM – all with the fidelity of local-like performance. For more info, check out Max’s blog here.
  • Hyper-V Dynamic Memory: With Hyper-V Dynamic Memory, Hyper-V will enable greater virtual machine density suitable for servers and VDI deployments.

What Virtualization Users Have Told Us

When it comes to virtualization and memory, virtualization users have repeatedly provided the following requirements:

  1. Use physical memory as efficiently and dynamically as possible with minimal performance impact. Customers investing in virtualization hosts are purchasing systems with larger memory configurations (32 GB, 64 GB, 128 GB and more) and want to fully utilize this system asset. At the same time, they’re purchasing this memory to provide superior performance and to avoid paging.
  2. Provide consistent performance and scalability. One frequent comment from virtualization users is that they don’t want a feature with a performance cliff or inconsistent, variable performance. That makes it more difficult to manage and increases TCO.

Their comments are clear: Maximize our investment in the hardware resources, provide high density, and with a minimal performance impact.

(Speaking of performance, Hyper-V R2 performance is exceptional. We recently released an in depth performance analysis on Windows Server 2008 Hyper-V R2 Virtual Hard Disk Performance using a variety of workloads including SQL, Exchange, Web and more. This is a must read: http://download.microsoft.com/download/0/7/7/0778C0BB-5281-4390-92CD-EC138A18F2F9/WS08_R2_VHD_Performance_WhitePaper.docx)

Virtual Machine Performance & Density

If you think about Virtual Machine Performance and Virtual Machine Density as a continuum on which you can place a slider, where would you position it?

[Image: performance vs. density slider]

Up to now, we’ve opted to err on the side of performance with excellent results. Now, customers are asking us to start moving that slider over to increase density and still minimize performance impact, so that’s what we’re doing.

So, what is Dynamic Memory? At a high level, Hyper-V Dynamic Memory is a memory management enhancement for Hyper-V designed for production use that enables customers to achieve higher consolidation/VM density ratios. In my next blog, we’ll dive deep into Hyper-V Dynamic Memory…

Cheers,

Jeff Woolsey

Windows Server

Original article is here

With the release of the Beta of Service Pack 1 for Windows Server 2008 R2, a number of you have asked about Service Pack 1 for the standalone Microsoft Hyper-V Server 2008 R2, and whether the new capabilities of Dynamic Memory and RemoteFX will be available for it. Absolutely: both Dynamic Memory and RemoteFX have been developed for Microsoft Hyper-V Server 2008 R2 as well.

 

In order to get these capabilities for Microsoft Hyper-V Server 2008 R2, you will need to install the Beta of Service Pack 1 on Microsoft Hyper-V Server 2008 R2. Note that the first wave of the Service Pack installer is available in only 5 languages (English, French, German, Japanese and Spanish), so if you try to apply the package to Microsoft Hyper-V Server 2008 R2 (which has 11 language packs installed by default) you will rightly see the following screen.

 

 

 

It’s pretty simple to uninstall these language packs and then install the Service Pack. To uninstall the language packs, there is a nifty utility included (lpksetup.exe). Launch it from an administrator’s command prompt and select “Uninstall display languages”.

On the next screen, select all languages other than the five (English, French, German, Japanese and Spanish). Of course, if you want to save some additional disk space, you can uninstall other languages as well and leave just the language that you use in your environment. Click Next and let the tool do its job. Thereafter you can apply Service Pack 1. Enjoy!

Vijay Tewari

Principal Program Manager, Windows Server Virtualization

Original article is here

A while ago, I got the opportunity to work on an interesting case where the customer’s Explorer process was showing a continuous increase in handle count. Using Process Explorer we could see that these handles were open to various Iexplore.exe processes, which were showing as terminated. Interestingly, however, these Iexplore.exe processes were not being started by any user. They seemed to get created randomly, about one every half hour, and almost immediately showed up as a terminated process handle under Explorer.exe.

So what was causing these processes to be launched? Putting these processes under a debugger with a breakpoint set on CreateProcess was an option; however, we did not have access to the server, and getting internet access on the server would be difficult. So I thought of giving Process Monitor a try. The idea was to get a log captured for the processes Iexplore.exe and Explorer.exe for the operations process create, process start, and thread create. Also, we wanted to ensure that when we left this running, Process Monitor did not fill up the pagefile, which is used as the default backing file.

So we did the following:

1. Launched Process Monitor with the following syntax “procmon /backingfile:E:\processlaunch.pml”

2. In the Filters menu, checked the option “Drop Filtered Events”.

3. Set filters for processes Explorer.exe and Iexplore.exe and also for operations process create, process start and thread create.

With this done, we let the server run for a couple of hours and got the logs. Here’s what we saw.

[Screenshot: Process Monitor capture of the filtered events]

Now, looking at the thread stack for the Explorer.exe Process Create event, we see an unknown module with the addresses 0x10003d2f, 0x10002298 and 0x10002629.

First off, 0x10000000 converts to 268435456, which is 256 MB, so these are perfectly valid user-mode addresses; the box was running with the /3GB switch, although that is not even needed for this range, and in any case neither Explorer.exe nor Iexplore.exe is /LargeAddressAware. What stands out is that 0x10000000 is the classic default base address for a DLL that has not been rebased (Windows system DLLs load at higher, rebased addresses), and no known module in either process accounts for this range, which is why Process Monitor shows it as an unknown module. That definitely looks suspicious.

[Screenshot: stack for the Explorer.exe Process Create event]

Now, looking at the stack information for the thread creation in Iexplore.exe, we see the following:

[Screenshot: stack for the Iexplore.exe Thread Create event, showing Linkinfo.dll]

It seems we have a binary, Linkinfo.dll, and it is loading from the %windir% directory. The file name looks genuine; however, a legitimate copy of a system file like Linkinfo.dll is supposed to load from the System32 directory, not from %windir%. Also, the box we were working with was running Windows Server 2003, and genuine Windows Server 2003 system files carry versions starting with 5.2.3790.xxxx, which is not what this copy reported. This, in combination with the load address of 0x10000000, makes it look out of the ordinary.
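As a quick sanity check in cases like this, you can list every copy of the file under the Windows directory and compare their version and signature details. The commands below are only a sketch; sigcheck.exe is a separate Sysinternals download and was not part of this case.

rem List every copy of linkinfo.dll under the Windows directory
dir /s /b %windir%\linkinfo.dll

rem Compare version, publisher and signature information for the two copies (Sysinternals Sigcheck)
sigcheck.exe %windir%\linkinfo.dll
sigcheck.exe %windir%\system32\linkinfo.dll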

Doing a Bing search on Linkinfo.dll in the %windir% directory led me to this link:

http://www.microsoft.com/security/portal/Threat/Encyclopedia/Entry.aspx?Name=Virus:Win32/Almanahe.B

Running a free OneCare online scan from the following link confirmed this and successfully cleaned it up.

http://onecare.live.com/site/en-us/default.htm

Anshuman Ghosh

Original article is here

We are really excited to announce the availability of the Hyper-V Linux Integration Services, Version 2.1. This release marks yet another milestone in providing a comprehensive virtualization platform to our customers. Customers with heterogeneous operating system environments want their virtualization platform to support all of the operating systems in their datacenters. This release includes the following features:

  • Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
  • Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
  • Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.
  • Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.
  • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.
  • Heartbeat: Allows the host to detect whether the guest is running and responsive.
  • Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.

Download here

Original article is here

My name is Joseph Conway and I am a Senior Escalation Engineer on the CORE team.  Today’s blog entry covers the steps that you, as a customer, can take when encountering issues with the .NET Framework.

The .NET Framework ships as either an inbox component or a standalone installer, depending on the version of the operating system and the version of the Framework.  Internally, we support the .NET Framework through several teams, depending on the type of issue being encountered.  Occasionally we have issues that cross internal support team boundaries and may require multiple engineers to run utilities to gather information about the system.  This blog attempts to let you, as a customer, know ahead of time what we would typically run for these issues, to speed up the support process for you and the engineers working on your case.

If you are having issues with the .NET Framework, we ask that you do the following:

1.       Run the Aaron Stebner .NET verification tool for the .NET Framework version you are experiencing issues with.  The tool is located here: http://blogs.msdn.com/b/astebner/archive/2008/10/13/8999004.aspx .  When you run the tool, all you need to do is choose the appropriate Framework version from the drop-down and click Verify Now.  When the tool is complete, it will report success or failure based on its results.  Figures of the tool before and after being run are below:

[Screenshot: the verification tool before running]

[Screenshot: the verification tool after verification completes]

2.       If verification fails, you will need to speak with someone on the developer support team to assist you with your issue.  This applies to all versions of the .NET Framework that ship as standalone (out-of-box) installers.  For information on determining the version of the .NET Framework and how it may have been installed, please see: http://support.microsoft.com/kb/318785/en-us

3.       If the .NET Framework that is failing is an inbox component, such as .NET 3.5.1 on Windows Server 2008 R2, there are different steps we ask you to take.  When the component is an inbox component, use Server Manager to remove the component and then re-add it to the system, as seen in the figure below:

[Screenshot: removing and re-adding the .NET Framework feature in Server Manager]

4.       If the re-addition of the component fails, we ask that you run the following two utilities against your system.  These utilities can resolve common servicing issues on a system:

a.       At an elevated command prompt, run SFC /SCANNOW

b.      Run the CheckSUR (System Update Readiness) utility located here: http://support.microsoft.com/kb/947821 (see the command sketch after this list)

5.       If steps 2 through 4b above do not alleviate the issue, or if the inbox component is failing to install, please ask for assistance from the Windows CORE team when you call in.
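For reference, the two utilities from step 4 can be run from a single elevated command prompt, roughly as sketched below. The CheckSUR package name is a placeholder; download the System Update Readiness package that matches your OS and architecture from KB947821. Its results are written to %windir%\Logs\CBS\CheckSUR.log.

rem Step 4a: scan and repair protected system files
sfc /scannow

rem Step 4b: run the downloaded System Update Readiness (CheckSUR) package
rem (<CheckSUR-package>.msu is a placeholder for the actual file name from KB947821)
wusa.exe <CheckSUR-package>.msu /quiet /norestart

rem Review the CheckSUR results
notepad %windir%\Logs\CBS\CheckSUR.log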

Hope this helps….

Joseph Conway
Senior Support Escalation Engineer
Microsoft Enterprise Platforms Support

Original article is here


.NET Framework Setup Verification Tool User’s Guide

Introduction

This .NET Framework setup verification tool is designed to automatically perform a set of steps to verify the installation state of one or more versions of the .NET Framework on a computer.  It will verify the presence of files, directories, registry keys and values for the .NET Framework.  It will also verify that simple applications that use the .NET Framework can be run correctly.

Download location

The .NET Framework setup verification tool is available for download from the locations listed in the original article linked at the end of this section.

The .zip file that contains the tool also contains a file named history.txt that lists when the most recent version of the tool was published and what changes have been made to the tool over time.

Supported products

The .NET Framework setup verification tool supports verifying the following products:

  • .NET Framework 1.0
  • .NET Framework 1.1
  • .NET Framework 1.1 SP1
  • .NET Framework 2.0
  • .NET Framework 2.0 SP1
  • .NET Framework 2.0 SP2
  • .NET Framework 3.0
  • .NET Framework 3.0 SP1
  • .NET Framework 3.0 SP2
  • .NET Framework 3.5
  • .NET Framework 3.5 SP1
  • .NET Framework 4 Client
  • .NET Framework 4 Full

By default, the .NET Framework setup verification tool will only list versions of the .NET Framework that it detects are installed on the computer that it is being run on.  As a result, the tool will not list all of the above versions of the .NET Framework.  This product filtering can be overridden by running the .NET Framework setup verification tool with the following command line switch:

netfx_setupverifier.exe /q:a /c:"setupverifier.exe /a"

Silent installation mode

The .NET Framework setup verification tool supports running in silent mode.  In this mode, the tool will run without showing any UI, and the user must pass in a version of the .NET Framework to verify as a command line parameter.  To run in silent mode, you need to download the verification tool .zip file, extract the file netfx_setupverifier.exe from the .zip file, and then run it using syntax like the following:

netfx_setupverifier.exe /q:a /c:"setupverifier.exe /p <name of product to verify>"

The value that you pass with the /p switch to replace <name of product to verify> in this example must exactly match one of the products listed in the Supported products section above.  For example, if you would like to run the tool in silent mode and verify the install state of the .NET Framework 2.0, you would use a command line like the following:

netfx_setupverifier.exe /q:a /c:"setupverifier.exe /p .NET Framework 2.0"

Exit codes

The verification tool returns the following exit codes (a scripted example of checking them follows the list):

  • 0 – verification completed successfully for the specified product
  • 1 – the required file setupverifier.ini was not found in the same path as setupverifier.exe
  • 2 – a product name was passed in that cannot be verified because it does not support installing on the OS that the tool is running on
  • 3 – a product name was passed in that does not exist in setupverifier.ini
  • 100 – verification failed for the specified product
  • 1602 – verification was canceled
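Because the exit codes are distinct, a silent verification is easy to wire into a script. The batch sketch below is only an illustration: it assumes setupverifier.exe and setupverifier.ini have already been extracted to the current directory so that %ERRORLEVEL% reflects the verification result, and the product name must still exactly match one of the entries in the Supported products section.

rem Illustration only: verify the .NET Framework 2.0 silently and report the result
setupverifier.exe /p .NET Framework 2.0
echo Verification finished with exit code %ERRORLEVEL%
if %ERRORLEVEL% NEQ 0 echo Check %temp%\setupverifier_errors_*.txt for details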

Log files

This verification tool creates 2 log files by default that can be used to determine what actions the tool took and what errors it encountered while verifying a product.  The log files are listed below and are created in the %temp% directory by default; the third one is only created if the tool’s test application hits an error.  Note that you can find the %temp% directory by clicking on the Windows Start menu, choosing Run, typing %temp% and clicking OK to open the directory in Windows Explorer.

  • %temp%\setupverifier_main_*.txt – this log contains information about all actions taken during a verification tool session; it will include information about each resource that the tool attempts to verify for a chosen product and whether or not that resource was found on the system; this log tends to be fairly long, so errors will be logged with the prefix ****ERROR**** to make it easy to search and find them
  • %temp%\setupverifier_errors_*.txt – this log only contains information about any errors found during verification of a chosen product
  • %temp%\setupverifier_netfx20testapp_*.txt – this log contains error information for the .NET Framework test application that is run by the verification tool.  This log will only be created if there is an error while running the test application.

A new pair of log files will be created each time the verification tool is launched.  The date and time the tool is launched will be appended to the end of the log file names by default in place of the * in the names listed above.  If you want to control the exact names used for the log files, you can use the following command line parameters:

  • /l <filename> – specifies a name to replace the default value of setupverifier_main_*.txt for the main activity log for the verification tool
  • /e <filename> – specifies a name to replace the default value of setupverifier_errors_*.txt for the error log for the verification tool

For example, the following command line will allow you to specify non-default names for both log files:

netfx_setupverifier.exe /q:a /c:"setupverifier.exe /l %temp%\my_main_log.txt /e %temp%\my_error_log.txt"

Original article is here

TCPView is a Windows program that will show you detailed listings of all TCP and UDP endpoints on your system, including the local and remote addresses and state of TCP connections. On Windows Server 2008, Vista, and XP, TCPView also reports the name of the process that owns the endpoint. TCPView provides a more informative and conveniently presented subset of the Netstat program that ships with Windows. The TCPView download includes Tcpvcon, a command-line version with the same functionality.
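Tcpvcon is handy when you want to capture the same information from a script. The switches below (-a for all endpoints, -c for CSV output, -n to skip name resolution) come from Tcpvcon’s usage text; redirecting the output to a file is just one way you might use it.

rem Dump every TCP and UDP endpoint as CSV, without DNS resolution, to a file
tcpvcon -a -c -n > endpoints.csv

rem Show only the endpoints owned by a particular process
tcpvcon -a iexplore.exe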

Using TCPView

When you start TCPView it will enumerate all active TCP and UDP endpoints, resolving all IP addresses to their domain name versions. You can use a toolbar button or menu item to toggle the display of resolved names. On Windows XP systems, TCPView shows the name of the process that owns each endpoint.

By default, TCPView updates every second, but you can use the Options|Refresh Rate menu item to change the rate. Endpoints that change state from one update to the next are highlighted in yellow; those that are deleted are shown in red, and new endpoints are shown in green.

You can close established TCP/IP connections (those labeled with a state of ESTABLISHED) by selecting File|Close Connections, or by right-clicking on a connection and choosing Close Connections from the resulting context menu.

You can save TCPView’s output window to a file using the Save menu item.

Download TcpView (291 KB)

Original article is here
