Improving ASP Performance With Asynchronous Coding

Crashed

This is your ASP.NET app with 400 concurrent users.

Studies have shown that the average internet user will abandon a page
that takes more than four seconds to load.

If you are using ASP.NET, and you aren’t using asynchronous programming,
then you may have a problem.

Success may literally kill your site.

How I Bumbled Into The Problem

I discovered this problem a couple of years ago with a web app I was load
testing.

Quick summary of what I found:

Two copies of the same app were running on the same server. I was load testing one of them.

As I ramped up beyond about 300-350 concurrent users, bad things happened.

The time it took to return a page skyrocketed up to 30 seconds and beyond!

CPU and memory on the server were barely being used.

I logged in to the second app and guess what? It was responding as quickly as
ever.

Hmm.

Dirty Secret

As I researched this issue I came across an MSDN article from 2007 by Jeff Prosise
called
Scalable Apps with Asynchronous Programming in ASP.NET.

Jeff is one of those “Card-Carrying Genius” types. He has written probably more
books than I’ve read and was one of the founders of Wintellect.
I’ve known several folks who worked for Wintellect, and they have all been damned sharp.

Here’s the opening of his article:

Do you want to know a secret? A deep, dark, dirty secret? One that, if revealed, would cause great angst in the ASP.NET community and prompt shouts of “Aha!” from the anti-Microsoft crowd?

Most Web sites I’ve seen built with ASP.NET aren’t very scalable, not because of a flaw in ASP.NET, but because of how the technology is used. They suffer a self-imposed glass ceiling that limits the number of requests they can process per second. These sites scale just fine until traffic rises to the level of this invisible ceiling. Then throughput begins to degrade. Soon after, requests start to fail, usually returning “Server unavailable” errors.

– Jeff Prosise

Why?

In a word: Threads. In two words: Thread Pool. Three words? Thread Pool Saturation.

Each person who connects to your app gets a thread from the .Net CLR Thread Pool.

As long as you don’t have more people than threads, everything’s great!

Unfortunately, once you get more people than threads, new users go into a queue to wait for the
next available thread.
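To make the queueing concrete, here's a minimal console sketch (my own, not from the app in this article) that caps the pool at one worker per core and then floods it with twice that many blocking "requests":

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class SaturationDemo
{
    static void Main()
    {
        // Cap the pool at one worker per core (SetMaxThreads won't go lower than that).
        int cap = Environment.ProcessorCount;
        ThreadPool.SetMinThreads(cap, cap);
        ThreadPool.SetMaxThreads(cap, cap);

        var sw = Stopwatch.StartNew();
        var tasks = new Task[cap * 2]; // twice as many "requests" as threads
        for (int i = 0; i < tasks.Length; i++)
        {
            // Each request blocks its thread for a second, like a slow sync call.
            tasks[i] = Task.Run(() => Thread.Sleep(1000));
        }
        Task.WaitAll(tasks);

        // The first wave finishes in about a second; the second wave had to
        // queue for a free thread, so the total is roughly two seconds.
        Console.WriteLine($"Total: {sw.ElapsedMilliseconds} ms");
    }
}
```

The work itself only needs one second of wall time; the rest is spent waiting in the queue, which is exactly what my load-tested app was doing.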




Browsers waiting for connections lead to frustrated users leaving the page.

The real fun happens when a user gives up and leaves, and then a thread is assigned to their
request anyway.

The thread doesn't know they left, so it waits until it reaches its timeout period. Even though
the user is gone, they still ate up a thread.

Going back to my story of the two identical apps: the one not being load tested kept working
because it had its own thread pool. The thread pool exists at the CLR instance level, which in ASP
terms means the Application or App Pool level.

How Bad Is It Really?

Bad. Really bad.

I’ve created a super simple MVC app to illustrate.

To keep it simple I just did ye olde File: New Project and accepted the default MVC app.

The Home controller has three actions:

  1. Index
  2. About
  3. Contact

These do nothing but return their view.

In the real world, they would probably look something up in the database, so I put in a random delay to
simulate that.

So if you hit one of the pages, the app waits between 0.5 and 1.5 seconds before responding.

Here’s a bit of the code:

Random randy = new Random();

public ActionResult Contact()
{
    ViewBag.Message = "Your contact page.";
    Thread.Sleep(GetSleepTime());
    return View();
}

protected int GetSleepTime()
{
    return randy.Next(500, 1500);
}

Next I used the Load Testing built into Visual Studio Ultimate to see how it performed.

I started with 200 users and added 200 every 10 seconds until I reached 2,000.

I held at 2,000 and ran for a total of five minutes.

Here is a chart of the results:

Load Test Chart

Color   Key Indicator    Min    Max    Avg
Red     User Load        200    2,000  1,700
Blue    Avg. Page Time   0.41   40.4   20.7
Green   Pages/Sec        5.81   61     20.7

Yikes!

The average page load time was 20.7 seconds.

That’s FIVE TIMES longer than the average person will wait!

How Can I Fix This?

If you’ll remember from Jeff Prosise’s article, the answer comes down to asynchronous
programming.

If you read his whole article, you probably came away with the impression that Jeff
is very smart and asynchronous programming is very hard.

In 2007 that was true (the hard part is less true now; the “he’s smart” part still is).

Fortunately .Net 4.5 gave us async and await.

Async Made Easy (ier)

Don’t let anyone kid you, asynchronous coding will make your head hurt.

You really don’t know the order in which things will happen and have to
take great care with variables.

I’ve worked a bit with Node.js, and it starts with the idea that everything should
be asynchronous.

One of the big differences between Node and ASP is that Node makes you declare synchronous code
whereas .Net makes you declare asynchronous code.

Having said that, I find the nesting of callbacks in Node to border on spaghettification at times.

Until .Net 4.5 using asynchronous code in .Net was even more cumbersome.

Now with async and await you can write code that looks like synchronous code, but
behind the scenes compiles to an asynchronous construct.

Let’s take a look at what changes need to be made to make the controller code I showed before
asynchronous, then see the results of running the same load test.

Random randy = new Random();

public async Task<ActionResult> Contact()
{
    ViewBag.Message = "Your contact page.";
    await Task.Delay(GetSleepTime());
    return View();
}

protected int GetSleepTime()
{
    return randy.Next(500, 1500);
}

That’s it.

As you can see, I added the async keyword on the method and the await keyword
inside the method. I also changed the method's return type from ActionResult to
Task<ActionResult>.

Also, I obviously couldn't tell the thread to sleep, so I swapped Thread.Sleep out for Task.Delay.

The result is that behind the scenes a callback will be added.

Before I was explicitly saying “Wait here and let this thread spin for n-milliseconds, then
continue.”

Now I am saying, “Set a callback here, release the thread back into the thread pool, and
when n-milliseconds have passed, give me a thread and continue.”
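A toy sketch of that difference (hand-written and only an approximation; the compiler actually generates a state machine, not a literal ContinueWith call):

```csharp
using System;
using System.Threading.Tasks;

class AwaitSketch
{
    // What you write with async/await:
    static async Task<string> WithAwait(int ms)
    {
        await Task.Delay(ms);   // the thread goes back to the pool here
        return "done";          // resumes on a pool thread when the delay fires
    }

    // Roughly what it means, written as an explicit callback:
    static Task<string> WithCallback(int ms)
    {
        return Task.Delay(ms)
                   .ContinueWith(_ => "done");
    }

    static void Main()
    {
        Console.WriteLine(WithAwait(100).Result);    // prints "done"
        Console.WriteLine(WithCallback(100).Result); // prints "done"
    }
}
```

Both methods return immediately with a Task; no thread sits spinning during the delay.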

How Much Does It Help?

Here is a chart of the results of running the asynchronous version:

Load Test Chart

Color   Key Indicator    Min    Max    Avg
Red     User Load        200    2,000  1,667
Blue    Avg. Page Time   0.83   17.8   2.94
Green   Pages/Sec        14.2   758    290

I think you’ll agree that’s quite an improvement.

Yes, we had a few requests that took too long, but the average time to load dropped by an
order of magnitude.

Our proverbial average user wouldn’t abandon our site just because of the time to load a
page.




As you can see there are a couple of gaps in the chart.

Unfortunately, my notebook tends to have I/O logjams, and the test was a bit too much for it.

I could move the test up to Visual Studio Online, but it would exceed my monthly allotment
of testing “virtual user minutes.” Since these results demonstrate the point I’m making, I
don’t think I’ll buy extra minutes to perfect the test.

Quick Tip:

My first test was with 10,000 users and my notebook shut itself down
because the CPU was overheating!

Fortunately 2,000 users sufficiently demonstrates the point.

Caveats

You knew there had to be a catch, right? It couldn’t be this easy.

No, it’s not quite this easy.

In the real world you will be getting stuff from databases, file systems,
and web APIs.

If any of those do something synchronously, you will lose the ability to relinquish
the thread back to the pool.

Entity Framework has been an offender in this area, but from EF6 onwards there are
asynchronous methods you can use instead of synchronous ones.

I haven’t worked with asynchronous EF yet, so I can’t say much about it.
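That said, based on the EF6 documentation, the async query methods (ToListAsync and friends, from System.Data.Entity) slot into a controller the same way. Blog and BlogContext here are hypothetical names I made up for the sketch, and running it for real would of course need an actual database:

```csharp
using System.Collections.Generic;
using System.Data.Entity;        // EF6: provides ToListAsync and the other async extensions
using System.Threading.Tasks;
using System.Web.Mvc;

// Hypothetical model and context, just to make the sketch compile.
public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public class BlogController : Controller
{
    private readonly BlogContext db = new BlogContext();

    public async Task<ActionResult> Index()
    {
        // ToListAsync hands the thread back to the pool while the database
        // does its work; ToList() would block the thread instead.
        List<Blog> blogs = await db.Blogs.ToListAsync();
        return View(blogs);
    }
}
```

The shape is identical to the Task.Delay example: async on the action, await on the I/O call, Task<ActionResult> as the return type.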

Conclusion

If you are creating anything other than a departmental app, you should be coding
asynchronously.

Sure you can throw more hardware at the problem, but what a waste!

You could spin up the biggest, baddest, fire-breathing monster of a server and
you would still be limited by the number of threads in the thread pool.

You can increase the number of threads in the thread pool (and you very well
might anyway), or spread the load across multiple web servers (again, you probably will),
but the biggest bang for the buck comes from asynchronous code.
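For reference, here's the kind of thread-count knob I mean, turned from code (machine.config's processModel element exposes similar settings). This is a sketch, and it only postpones the ceiling:

```csharp
using System;
using System.Threading;

class PoolTuning
{
    static void Main()
    {
        int minWorkers, minIo, maxWorkers, maxIo;
        ThreadPool.GetMinThreads(out minWorkers, out minIo);
        ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
        Console.WriteLine($"Workers: min {minWorkers}, max {maxWorkers}");

        // Raise the floor so the pool injects threads faster under a burst.
        // This buys headroom; it does not remove the ceiling.
        bool ok = ThreadPool.SetMinThreads(minWorkers * 2, minIo);
        Console.WriteLine($"SetMinThreads succeeded: {ok}");
    }
}
```

Every thread still costs memory and scheduler time, which is why this is a stopgap rather than a fix.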

Asynchronous Resources

Hopefully I’ve piqued your interest in asynchronous ASP, and hopefully you’ll start
doing it ASAP.

To help you learn more check out these links: