Benchmarking SQL Server Execution Plans For Automated Testing?
I'm using SQL Server 2008. I noticed that when I run a sproc and include an execution plan, SQL Server will include an alert if I'm "missing" an index and will recommend one for me to add in order to increase the performance of the sproc.
To me at least, it would make sense for my automated tests to run some of my more performance-intensive sprocs, capture the execution plan as XML, and then parse the plan for any warnings or alerts emitted by SQL Server. If any are encountered, the test is considered a failure. I could take it a step further and fail the test if some query costs have increased significantly since the last run, but for now I'm keeping it simple.
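For what it's worth, the check I'm describing could be done server-side instead of parsing the XML in test code, by querying the cached plan with XQuery. A rough sketch, assuming the proc's plan is still in cache (`usp_MyPerfCriticalProc` is a hypothetical name):

```sql
-- If this returns a row, the cached plan for the proc carries a
-- missing-index suggestion or another plan-level warning: fail the test.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = 'usp_MyPerfCriticalProc'
  AND (qp.query_plan.exist('//MissingIndex') = 1   -- missing-index suggestion
       OR qp.query_plan.exist('//Warnings') = 1);  -- other plan warnings
```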
This sounds like something that someone would have done already, but I don't see anything from Googling. Is this a reasonable approach for automated performance testing/benchmarking, and, if so, are there any existing tools or frameworks that would make this easier?
I would advise against this. The missing indexes identified by the optimizer are very myopic: they do not take the whole system into account, and they suggest indexes that might benefit this one query but could tank the performance of your DML. While your heart is in the right place, index analysis is more art than science (though the latter can and should inform the former).
The advice against this is valid. I've seen the missing index DMVs recommend a half-dozen indexes that are all tweaks of the same thing. I've seen them recommend adding the clustering key as an included column (really odd, since the clustering key is already part of every nonclustered index). I've seen them recommend indexes that already exist. Never just apply these indexes blindly.

I think you'd be better off starting with the missing index DMVs, finding the indexes that make sense to investigate more thoroughly (high impact, etc.), and then querying the plan XML for the procs that caused those indexes to be recommended. I have a hard time doing that myself: we flush our plan cache daily, and with 9K procs and a ton of inline SQL from legacy apps, the time it takes to parse the XML is prohibitive.

Alternatively, start with the procedure stats DMV (sys.dm_exec_procedure_stats) and the query stats DMV (sys.dm_exec_query_stats). Find the procs that are worst in terms of average duration, CPU, and I/O, and check that subset for missing indexes (and take a good look at the queries themselves, too). Once that's done, you can start looking at the procs/queries that perform okay but are executed so often that even a small-ish improvement could have a noticeable system-wide impact.

I collect perf stats daily and load a summary table of the top 20 worst procs in a bunch of different categories (total CPU, average CPU, etc.). That makes it really easy to see when a proc's performance changes enough to make the list, and to see the impact of your tuning efforts as it falls off.
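To illustrate the first step, here's a sketch of ranking the missing-index DMVs by estimated impact so you only investigate plausible candidates. The seeks-times-impact ordering is a common heuristic, not an official formula:

```sql
-- Missing-index suggestions, worst first by (uses x estimated impact).
SELECT TOP (20)
    mid.statement AS table_name,
    migs.avg_user_impact,                       -- estimated % cost reduction
    migs.user_seeks + migs.user_scans AS uses,  -- times it would have helped
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC;
```

Remember these are still only suggestions; overlapping or near-duplicate entries for the same table are common, so consolidate before creating anything.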
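And for the alternative starting point, a sketch of pulling the worst procs from sys.dm_exec_procedure_stats. Note these numbers only cover plans currently in cache, so they reset whenever the cache is flushed:

```sql
-- Worst procs by average CPU since their plans entered the cache.
SELECT TOP (20)
    OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
    ps.execution_count,
    ps.total_worker_time  / ps.execution_count AS avg_cpu_us,
    ps.total_elapsed_time / ps.execution_count AS avg_duration_us,
    (ps.total_logical_reads + ps.total_logical_writes)
        / ps.execution_count AS avg_logical_io
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY avg_cpu_us DESC;  -- swap the ORDER BY for the duration or I/O lists
```

Running this on a schedule and loading the results into a summary table gives you the day-over-day "top 20" view described above.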