Wednesday, April 04, 2012

ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0

I'd like to briefly explain how ASP.NET uses threads when hosted on IIS 7.5, IIS 7.0, and IIS 6.0, as well as the configuration changes that you can make to alter the defaults. Please take a quick look at the “Threading Explained” section in Chapter 6 of “Improving .NET Application Performance and Scalability”. Prior to v2.0 of the .NET Framework, it was necessary to tweak the processModel/maxWorkerThreads, processModel/maxIoThreads, httpRuntime/minFreeThreads, httpRuntime/minLocalRequestFreeThreads, and connectionManagement/maxconnection configuration settings. The v2.0 .NET Framework attempted to simplify this by adding a new processModel/autoConfig setting, which made the changes for you at runtime. With the introduction of IIS 7.0 and the ASP.NET integrated pipeline, we've introduced another element to the mix, a registry key named MaxConcurrentRequestsPerCPU. Let's start with a discussion of how things worked on IIS 6.0 before discussing the changes made in IIS 7.0.
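
To make that concrete, the manual tuning from that era looked roughly like the machine.config fragment below. The numbers are a sketch based on the per-CPU formulas from KB 821268 for a hypothetical 2-CPU server (maxWorkerThreads and maxIoThreads are per-CPU values; minFreeThreads = 88 * N CPUs, minLocalRequestFreeThreads = 76 * N, maxconnection = 12 * N), not values to copy blindly. On v2.0 and later you would normally leave autoConfig="true" in place instead of setting these by hand.

    <!-- machine.config: pre-v2.0 style manual tuning (illustrative values assume a 2-CPU server) -->
    <configuration>
      <system.web>
        <!-- On v2.0 and later, <processModel autoConfig="true" /> makes equivalent adjustments at runtime. -->
        <processModel maxWorkerThreads="100" maxIoThreads="100" />
        <httpRuntime minFreeThreads="176" minLocalRequestFreeThreads="152" />
      </system.web>
      <system.net>
        <connectionManagement>
          <!-- 12 * number of CPUs -->
          <add address="*" maxconnection="24" />
        </connectionManagement>
      </system.net>
    </configuration>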

When ASP.NET is hosted on IIS 6.0, the request is handed over to ASP.NET on an IIS I/O thread. ASP.NET immediately posts the request to the CLR ThreadPool and returns HSE_STATUS_PENDING to IIS. This frees up IIS threads, enabling IIS to serve other requests, such as those for static files. Posting the request to the CLR ThreadPool also provides queuing. The CLR ThreadPool automatically adjusts the number of threads according to the workload, so that if the requests are high throughput there will only be 1 or 2 threads per CPU, and if the requests are high latency there will potentially be far more concurrently executing requests than 1 or 2 per CPU. The queuing provided by the CLR ThreadPool is very useful, because while a request is in the queue only a very small amount of memory has been allocated for it, and it is all native memory. It's not until a thread picks up the request and begins to execute it that we enter managed code and allocate managed memory.
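
If you want to see this thread-injection behavior outside of IIS, here is a minimal standalone console sketch (plain CLR ThreadPool code, not ASP.NET internals) that queues high-latency work and watches the pool slowly grow beyond 1 or 2 threads per CPU:

    using System;
    using System.Threading;

    class ThreadPoolGrowthDemo
    {
        static void Main()
        {
            // Simulate "high latency" requests: each work item blocks for several seconds.
            for (int i = 0; i < 200; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate { Thread.Sleep(5000); });
            }

            // The pool starts near 1 thread per CPU and injects more as queued work
            // fails to complete, so the busy-thread count climbs gradually.
            for (int i = 0; i < 15; i++)
            {
                int workerMax, ioMax, workerAvail, ioAvail;
                ThreadPool.GetMaxThreads(out workerMax, out ioMax);
                ThreadPool.GetAvailableThreads(out workerAvail, out ioAvail);
                Console.WriteLine("busy worker threads: {0}", workerMax - workerAvail);
                Thread.Sleep(1000);
            }
        }
    }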

The CLR ThreadPool is not the only queue used by ASP.NET when hosted in IIS 6.0. There are also queues at the application level, within each AppDomain. If there is a lot of latency, the CLR ThreadPool will grow and inject more active threads. At some point we would either run out of threads, not have enough threads left over for performing other tasks, or the memory associated with all the concurrently executing requests would be too much, so ASP.NET imposes a cap on the number of threads concurrently executing requests. This is controlled by the httpRuntime/minFreeThreads and httpRuntime/minLocalRequestFreeThreads settings: in effect, the concurrency limit is the total number of ThreadPool threads minus minFreeThreads (or minus minLocalRequestFreeThreads for requests from localhost). If the cap is exceeded, the request is queued in the application-level queue and executed later, when concurrency falls back below the limit. The performance of these application-level queues is really quite miserable. If you observe that the “ASP.NET Applications\Requests In Application Queue” performance counter is non-zero, you definitely have a performance problem. These queues were implemented to prevent thread exhaustion and contention related to web service requests. The problem was first described in KB 821268, which I published many years ago. The KB article has been re-written a few times since it was originally published, and I hope nothing has been lost in translation.
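
A quick way to keep an eye on that counter from code is a small System.Diagnostics sketch like the one below. The category and counter names are as they appear in perfmon; the "__Total__" instance (an assumption worth verifying on your machine) aggregates across all ASP.NET applications:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class AppQueueMonitor
    {
        static void Main()
        {
            // "ASP.NET Applications\Requests In Application Queue" - a sustained non-zero
            // value means requests are piling up in the application-level queue.
            using (PerformanceCounter counter = new PerformanceCounter(
                "ASP.NET Applications", "Requests In Application Queue", "__Total__", true))
            {
                for (int i = 0; i < 60; i++)
                {
                    Console.WriteLine("Requests In Application Queue: {0}", counter.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }
    }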

