I LOVE great debugging tools. Anything that makes it easier for me to make a site correct and fast is glorious. I've talked about Glimpse, an excellent Firebug-like debugger for ASP.NET MVC, and I've talked about ELMAH, an amazing logger and error handler. Now the triad is complete with MiniProfiler, my Package of the Week #9.
Yes, #9. I'm counting "System.Web.Providers" as #8, so phooey. ;)
Hey, have you implemented the NuGet Action Plan? Get on it, it'll take only 5 minutes: NuGet Action Plan - Upgrade to 1.4, Setup Automatic Updates, Get NuGet Package Explorer. NuGet 1.4 is out, so make sure you're set to automatically update!
The Backstory: I was thinking that since the NuGet .NET package management site is starting to fill up, I should start looking for gems (no pun intended) in there. You know, really useful stuff that folks might otherwise not find. I'll look for mostly open source projects, ones I think are really useful. I'll look at how they built their NuGet packages, whether there's anything interesting about the way they designed the out-of-the-box experience (and anything they could do to make it better), as well as what the package itself does.
This week's Package of the Week is "MiniProfiler" from StackExchange.
Each is a small bad-ass LEGO piece that makes debugging, logging and profiling your ASP.NET application that much more awesome.
So what's it do? It's a Production Profiler for ASP.NET. Here's what Sam Saffron says about this great piece of software that he, Jarrod Dixon and Marc Gravell worked on...and hold on to your hats.
Our open-source profiler is perhaps the best and most comprehensive production web page profiler out there for any web platform.
Whoa. Bold stuff. Is it that awesome? Um, ya. It works in ASP.NET, MVC, Web Forms, and Web Pages.
The powerful stuff here is that this isn't a profiler like you're used to. Most profilers are heavy; they plug into the runtime (the CLR, perhaps) and you'd avoid running them against production. Sometimes people will do "poor man's profiling" with high performance timers and log files, but there's always a concern that it'll mess up production. Plus, digging around in logs and stuff sucks.
MiniProfiler will profile not only what's happening on the page and how it renders, but also any statements whose scope you control with using() blocks, and even your database access. Each one is more amazing than the last.
First, from an ASP.NET application, install the MiniProfiler package via NuGet. Decide when you will profile. You can't profile everything, so do you want to profile local requests, just requests from administrators, or from certain IPs? Start it up in your Global.asax:
protected void Application_BeginRequest()
{
    if (Request.IsLocal)
    {
        MiniProfiler.Start();
    } //or any number of other checks, up to you
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop(); //stop as early as you can, even earlier with
                         //MvcMiniProfiler.MiniProfiler.Stop(discardResults: true);
}
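If local-only isn't the right gate for you, the check is just ordinary code. Here's a minimal sketch, assuming a hypothetical allow-list of admin IP addresses (swap in whatever check fits your site):

protected void Application_BeginRequest()
{
    // hypothetical allow-list; substitute your admins' addresses
    var allowedIPs = new[] { "127.0.0.1", "10.0.0.42" };
    if (allowedIPs.Contains(Request.UserHostAddress)) // Contains() needs System.Linq
    {
        MiniProfiler.Start();
    }
}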
Add a call to render the MiniProfiler's Includes in a page, usually the main layout after wherever jQuery is added:
<head>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js"></script>
    @MvcMiniProfiler.MiniProfiler.RenderIncludes()
</head>
Then, if you like, put some using statements around some things you want to profile:
public class HomeController : Controller
{
    public ActionResult Index()
    {
        var profiler = MiniProfiler.Current; // it's ok if this is null

        using (profiler.Step("Set page title"))
        {
            ViewBag.Title = "Home Page";
        }

        using (profiler.Step("Doing complex stuff"))
        {
            using (profiler.Step("Step A"))
            {
                // something more interesting here
                Thread.Sleep(100);
            }
            using (profiler.Step("Step B"))
            {
                // and here
                Thread.Sleep(250);
            }
        }

        using (profiler.Step("Set message"))
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
        }

        return View();
    }
}
Now, run the application and click on the chiclet in the corner. Open your mouth and sit there, staring at your screen with your mouth agape.
That's hot. Notice how the nested using statements show up nested in the popup, with their timings aggregated.
If you want to measure database access (where the MiniProfiler really shines) you can use their ProfiledDbConnection, or you can hook it into Entity Framework Code First with the ProfiledDbProfiler.
If you manage connections yourself or you do your own database access, you can get Profiled connections manually:
public static MyModel Get()
{
    var conn = ProfiledDbConnection.Get(GetConnection());
    return ObjectContextUtils.CreateObjectContext<MyModel>(conn);
}
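The wrapped connection behaves like any other DbConnection, so plain ADO.NET works too. A minimal sketch, assuming the same GetConnection() helper and a hypothetical People table; any command run over the wrapped connection shows up in the profiler:

public static List<string> GetPeopleNames()
{
    // GetConnection() is your existing helper that returns the real connection
    using (var conn = ProfiledDbConnection.Get(GetConnection()))
    {
        conn.Open();
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT Name FROM People"; // hypothetical table
            using (var reader = cmd.ExecuteReader())
            {
                var names = new List<string>();
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
                return names; // the query and its timing appear under "sql"
            }
        }
    }
}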
Or, if you are using things like Entity Framework Code First, just add their DbProvider to the web.config:
<system.data>
  <DbProviderFactories>
    <remove invariant="MvcMiniProfiler.Data.ProfiledDbProvider" />
    <add name="MvcMiniProfiler.Data.ProfiledDbProvider"
         invariant="MvcMiniProfiler.Data.ProfiledDbProvider"
         description="MvcMiniProfiler.Data.ProfiledDbProvider"
         type="MvcMiniProfiler.Data.ProfiledDbProviderFactory, MvcMiniProfiler, Version=1.6.0.0, Culture=neutral, PublicKeyToken=b44f9351044011a3" />
  </DbProviderFactories>
</system.data>
Then tell EF Code First about the connection factory that's appropriate for your database.
I've spent the last few evenings on Skype with Sam trying to get the EF Code First support to work as cleanly as possible. You can see the checkins over the last few days as we bounced back and forth. Thanks for putting up with me, Sam!
Here's how to wrap SQL Server Compact Edition in your Application_Start:
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    //This line makes SQL Formatting smarter so you can copy/paste
    //from the profiler directly into Query Analyzer
    MiniProfiler.Settings.SqlFormatter = new SqlServerFormatter();

    var factory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0");
    var profiled = new MvcMiniProfiler.Data.ProfiledDbConnectionFactory(factory);
    Database.DefaultConnectionFactory = profiled;
}
Or I could have used SQL Server proper:
var factory = new SqlConnectionFactory("Data Source=.;Initial Catalog=tempdb;Integrated Security=True");
See here where I get a list of People from a database:
See where it says "1 sql"? If I click on that, I see what happened, exactly and how long it took.
It's even cooler with more complex queries in that it can detect N+1 issues as well as duplicate queries. Here we're hitting the database 20 times with the same query!
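To make that N+1 case concrete, here's a hedged sketch of the kind of code that triggers it, reusing the MyModel context from above and assuming hypothetical People and Orders collections: one query fetches the list, then lazy loading issues one more query per row, which MiniProfiler flags as duplicates.

// Hypothetical EF model: MyModel exposes People, each Person lazy-loads Orders
public static void CountOrders(MyModel db)
{
    // 1 query for the list of people...
    var people = db.People.ToList();
    foreach (var person in people)
    {
        // ...then one lazy-load query per person for their Orders.
        // The MiniProfiler popup flags these repeats as duplicate queries.
        var orderCount = person.Orders.Count;
    }
}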
Here's a slightly more interesting example that mixes many database accesses on one page.
Notice that there are THREE chiclets in the upper corner there. The profiler will capture GETs and POSTs, and can watch AJAX calls! Here's a simple POST, then REDIRECT/GET (the PRG pattern) example, as I've just created a new Person:
Notice that the POST is 141ms and then the GET is 24.9ms. I can click in deeper on each access and see smaller, trivial timings and children on large pages.
I think that this amazing little profiler has become, almost overnight, absolutely essential to ASP.NET MVC.
I've never seen anything like it on another platform, and once you've used it you'll have trouble NOT using it! It provides such clean, clear insight into what is going on in your site, even just out of the box. When you go and manually add in more detailed Steps() you'll be amazed at how much it can tell you about your site. MiniProfiler works with WebForms as well, because it's all about ASP.NET! There are so many issues that pop up in production that can only be found with a profiler like this.
Be sure to check out the MiniProfiler site for all the details and to download samples with even more detail. There are lots of great features and settings to change, as seen in just their sample Global.asax.cs.
Stop what you're doing right now, and go instrument your site with MiniProfiler! Then go thank Jarrod Dixon, Marc Gravell, Sam Saffron and the folks at StackExchange for their work.
Some folks don't like the term "profiler" to label what the MiniProfiler does. Others don't like the sprinkling of using() statements and consider them useless, perhaps like comments. I personally disagree, but that said, Sam has created a new blog post that shows how to automatically instrument your Controller Actions and View Engines. I'll work with him to make a smarter NuGet package so this is all done automatically, or is at least easily optional.
This is done for Controllers with the magic of the MVC3 Global Action Filter:
class ProfilingActionFilter : ActionFilterAttribute
{
    IDisposable prof;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var mp = MiniProfiler.Current;
        if (mp != null)
        {
            prof = mp.Step("Controller: " + filterContext.Controller.ToString() +
                           "." + filterContext.ActionDescriptor.ActionName);
        }
        base.OnActionExecuting(filterContext);
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        base.OnActionExecuted(filterContext);
        if (prof != null)
        {
            prof.Dispose();
        }
    }
}
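To apply it to every action, register it as a global filter in Application_Start (GlobalFilters.Filters is the MVC3 registration point); a one-liner sketch:

protected void Application_Start()
{
    // ...existing route/area registrations...
    GlobalFilters.Filters.Add(new ProfilingActionFilter());
}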
And for ViewEngines, with a simple wrapped ViewEngine. Neither of these is invasive to your code, and both can be added in your Global.asax.
public class ProfilingViewEngine : IViewEngine
{
    class WrappedView : IView
    {
        IView wrapped;
        string name;
        bool isPartial;

        public WrappedView(IView wrapped, string name, bool isPartial)
        {
            this.wrapped = wrapped;
            this.name = name;
            this.isPartial = isPartial;
        }

        public void Render(ViewContext viewContext, System.IO.TextWriter writer)
        {
            using (MiniProfiler.Current.Step("Render " + (isPartial ? "partial" : "") + ": " + name))
            {
                wrapped.Render(viewContext, writer);
            }
        }
    }

    IViewEngine wrapped;

    public ProfilingViewEngine(IViewEngine wrapped)
    {
        this.wrapped = wrapped;
    }

    public ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
    {
        var found = wrapped.FindPartialView(controllerContext, partialViewName, useCache);
        if (found != null && found.View != null)
        {
            found = new ViewEngineResult(new WrappedView(found.View, partialViewName, isPartial: true), this);
        }
        return found;
    }

    public ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
    {
        var found = wrapped.FindView(controllerContext, viewName, masterName, useCache);
        if (found != null && found.View != null)
        {
            found = new ViewEngineResult(new WrappedView(found.View, viewName, isPartial: false), this);
        }
        return found;
    }

    public void ReleaseView(ControllerContext controllerContext, IView view)
    {
        wrapped.ReleaseView(controllerContext, view);
    }
}
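One way to wire the wrapper up, sketched here in Application_Start, is to swap each registered engine for a profiled one (ToList() needs System.Linq):

protected void Application_Start()
{
    // ...existing registrations...

    // replace every registered view engine with a profiled wrapper
    var engines = ViewEngines.Engines.ToList();
    ViewEngines.Engines.Clear();
    foreach (var engine in engines)
    {
        ViewEngines.Engines.Add(new ProfilingViewEngine(engine));
    }
}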
Get all the details on how to (more) automatically instrument your code with the MiniProfiler (or perhaps, the MiniInstrumentor?) over on Sam's Blog.
Enjoy!
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. I am a failed stand-up comic, a cornrower, and a book author.
"Our open-source profiler is perhaps the best and most comprehensive production web page profiler out there for any web platform."

Again Sam Saffron makes a bold, unfounded statement. First on InfoQ about O/R mappers and now this. I'm sure the profiler is nice and does a great job for what it is built for. But please, Sam, leave the bold claims out of the arena: you make your work and yourself only look like a marketing puppet.
Those are fighting words. I could argue that at the point you are serving 6 million dynamic pages a day you are not a trivial application. Stack Overflow has its complexity and there are some fairly tricky subsystems.
If I'm serving up 6mil pages a day and I'm serving up a blank page, the site is trivial, no? I'm sorry, but no, SO does not compare in complexity to systems that, say, do purchasing, financial transactions, etc... and have to deal with numerous backend systems and still deliver content within competitive thresholds, lest they be ranked at the bottom of an arbitrary list. I'm not saying you can't write complex code or systems, or what have you. I'm saying a discussion board (which is effectively what SO is) simply does not compare. Can you honestly say your site is as complex as something like BofA, Delta, Amazon or Ebay? I'm assuming you're a reasonable person and the answer to that is no. There's a difference between popularity and complexity, and those that have to deal with both have additional burdens.
I don't care if I am on a holiday in Hawaii, I still want to see this information coupled with the page I am looking at.
The points you raise demonstrate that you probably haven't seen a lot of these tools, turned their dials up and down, and seen what they can do. They provide ALWAYS ON profiling. Not at every line of code, however if that's elected, that can be enabled as well. You don't provide that. They also provide this information across every server in a farm, consolidated into nice little charts and graphical views so that you can see, at run time, what systems are suffering bottlenecks or if it's a particular server, etc... So for what is presented here, I'd still have to write a tool that says "oh, server A isn't serving up pages fast enough", which would lead me to find out that some ops person installed a security tool for PCI compliance that caused an inordinate amount of I/O, which led to that particular server being slow. Why should I have to actively engage (i.e. open the site in a browser) to get that performance information? These tools coalesce data; they report and alert on it. The only time you need to touch them is if you need more detail and need to scale up the extent to which you're profiling, and I can take that to any arbitrary line of code - without the side effects of mangling my code base or introducing artificial dependencies in layers to which they don't belong. I'd rather sit on the beach and not worry about it unless I get an SMS or email that says there's a problem, and then be able to trace all of those transactions through the system and understand what happened. Yet another gap - you only get 'this is now, this is for me' numbers. Not 'this is for Bob, Sally, Joe in the last 30 minutes'.
I want to be able to tell how long it took the server to generate the page I am looking at, AND I want every millisecond the web/db servers were working accounted for. This accountability has resulted in some huge performance improvements on our sites.
The only way you could achieve this with your tool is to literally wrap this around every single statement, and if you're doing that, you're introducing a dependency on ASP.NET + MVC in other layers that should have no knowledge of such dependencies. This is not something I need to suffer through with any of those tools. I can't really dispute the statement that you've gained improvements from the tool - I'm sure you have - but at best you only get generic areas to look at. Again, with proper tools, I get line-level resolution of problem areas - and I should be doing that BEFORE it ever hits production. I know exactly what to target and why. I'd also be willing to wager a copy of RedGate that if I ran it on your application in a development environment, I could find ...oh, let's say at least 3... concrete areas for improvement that you have not caught with your tool. (kind of flying blind here, but I'm willing to part with a few $)
I believe we are the first to bring this approach and attitude to the market. Of course, being first to market makes you implicitly "best". My intention with this hyperbole was to drive other vendors and platforms to build similar tools, which in turn will make the Internet better. This has worked: Ayende is working on his set of tools, and the concept has been ported to Google App Engine.
First of all, it's nothing new or novel. It's a stopwatch. It's QueryPerformanceCounter wrapped around blocks of code. Even my current customer has very similar code in a codebase that's ooooh, what, 5 years old? There's a reason it's all ripped out and not used anymore. It's because in a large scale system, this approach does not scale. Flat out. It sucks. If you want to call it the "best" for simple MVC applications that have no business logic (again, dependencies), then I guess I can stipulate that, but not the first.
Think of it as the old "this is how long it took to render the page" thing you used to see in the footer of pages (on steroids). I may be wrong here; in fact, I am very curious if there are others that solved this problem in a more elegant way. We searched for a long time and could not find anything that fits the bill.
Telling me how long it took to render the page is useless ;) Telling me exactly WHAT caused it to render slowly is what I care about. People have solved the problem. A LOT of vendors have and a lot of their solutions are pretty much PFM. As I phrased it in an off-thread conversation, if you're presented with finding THE critical line of code that's causing a problem in a system that leaks $15-20kish a MINUTE, is this the tool you want? If so, good luck, have fun. Then again, that's the difference between SO and a complex, revenue generating site. You're not pissing off customers and not losing any substantive amount of money if you're sluggish for an hour. In that context, a tool like this is ok, but still not the best.
Similarly, you would have to be mad to use something like dotTrace or ANTS in production, default on. These tools are not designed with that use case in mind. There is a massive list of profilers that work great in dev and fail in production due to being too intrusive.
No kidding. You SHOULD be using them before you release to production; DynaTrace, for one, is both developer and production oriented. The other tools pick up where those leave off and provide a lot more flexibility and configuration with regard to the level at which they profile the application. This is also where (as I tweeted to you) you should be designing your infrastructure with profiling and monitoring in mind, not just trying to throw it in the middle of a !@#)(storm and hoping all works well.
AVIcode is focused on isolating root causes of exceptions
Did you just read the product slicks or have you seen it work? It's focused on both.
DynaTrace does not couple the profiling info with the rendered page; you use another tool to view the results
We have differing views on this, I suppose. I really don't care about looking at a page. That adds no value to me when I can use said single tool to see everything, every page, period. You also kinda left out HP ;)
There are probably thousands of profiling and monitoring suites out there but very few that have a similar approach.
For good reason. Hunting and pecking is not an effective way to spend time to resolve production performance problems.
I think a lot of the upset is due to the word "profiler" ... it just can mean too many things... perhaps I should have used the acronym UAOPP when referring to this work to avoid confusion :)
If you called it "always available and always on, but unable to pinpoint the exact line of code that represents a performance issue without exceptional effort on the part of the developer and introducing unnecessary dependencies in your non-web assemblies (you do have those, right?)", that sounds like an appropriate title. AAAAOBUTPTELOCTRAPIWEETPOTDAIUDIYNWAYDHTR. Helluva long acronym.
I admit, this tool will probably not help you that much, if you have no database access AND have 99% of your application living in 3rd party libraries. But I would argue that at that point you probably have bigger problems anyway.
LOL - really? This tool won't help you if you have your application living in your own libraries and have implemented separation properly. If you've kludged everything together in a monolithic application with no notion of tiering, I guess, but ...uh, that's where you have bigger problems, and/or the application is so simple relative to pretty much any application I've ever worked on that it's a moot point. When you have a set of requirements or user stories that can sit in a stack of paper about a foot high with, oh, I don't know... business rules. Then we can talk. But I can only assume from a lot of this that your internal architecture is little more than caching wrapped around a database.
Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.