We are trying to resolve a plan change for one of our top SQL queries by CPU.
The query comes from a vendor product, so unfortunately I cannot extract it and share it here. It is expected to take less than 5 ms of CPU time, but the so-called bad plan takes over 50 ms of CPU time and causes issues on high-volume days. I know 50 ms may not sound like much, but this query is executed 5 million times an hour from 30+ app servers, so that difference adds up quickly.
Below is the plan with 50+ms CPU time (bad plan)
Below is the plan with under 5 ms CPU time (good plan)
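For context, this is roughly how I pull both plans for the query from the cache (a sketch only; the LIKE filter is a placeholder, since I can't post the vendor query text):

-- Sketch: grouping by query_plan_hash shows the same query running with
-- (at least) two different plans and very different average CPU cost.
-- The LIKE filter stands in for a distinguishing fragment of the vendor query.
SELECT qs.query_plan_hash,
       SUM(qs.execution_count) AS executions,
       SUM(qs.total_worker_time) / SUM(qs.execution_count) AS avg_cpu_microseconds
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%distinctive fragment of the vendor query%'
GROUP BY qs.query_plan_hash
ORDER BY avg_cpu_microseconds DESC;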
What we have tried:
Updating statistics (sp_updatestats) and clearing the plan cache for this specific query; roughly 1 time in 10 this gets us the good plan. A rough sketch of the cache-clearing step is below.
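The cache-clearing part looks roughly like this (again a sketch; the text filter and the plan handle value are placeholders):

-- Sketch: locate the cached plan for this one statement and evict only it,
-- rather than flushing the whole plan cache.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%distinctive fragment of the vendor query%';

-- Then evict just that plan using the handle returned above (value illustrative):
-- DBCC FREEPROCCACHE (0x06000500A1B2C3...);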
Adding FORCESEEK to the query as a test, which gives us the good plan. The problem is that this change cannot be made in code: since this is a vendor product, it could take over 3 months to get into a release.
We also thought of a plan guide, but somehow it does not get picked up. I'm not an expert in using them, and this query is executed as sp_execute @p1, @p2, @p3, ... with parameter values that change on every run. I'm probably doing something wrong, so I'm not sure whether FORCESEEK can actually be hinted via a plan guide for a call like that; the rough shape of what I tried is sketched below.
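For reference, this is the general shape of the plan guide I attempted. Every object name, the statement text, and the parameter list are placeholders, since I can't post the vendor query; my understanding is that @stmt and @params must match the cached parameterized text character for character, which may be why mine is not being matched:

-- Sketch only: names are placeholders. For a parameterized/prepared statement
-- (sp_execute), the guide type is N'SQL', @module_or_batch is NULL, and
-- @stmt/@params must match the text in sys.dm_exec_sql_text exactly.
-- If the table is aliased in the statement, use the alias as the exposed
-- object name inside TABLE HINT.
EXEC sp_create_plan_guide
    @name            = N'PG_ForceSeek_VendorQuery',
    @stmt            = N'SELECT Col1, Col2 FROM dbo.SomeTable WHERE SomeCol = @P1 AND OtherCol = @P2',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@P1 int, @P2 int',
    @hints           = N'OPTION (TABLE HINT (dbo.SomeTable, FORCESEEK))';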
Please advise. I know it's very hard to assist without the actual schema and query, just by looking at the plans, but any input on the difference between the two plans would help. I can add details from the execution plans if required. The SQL Server version is 2017, but the database compatibility level is SQL Server 2012.
1 Answer
The Hash Match operation is less likely your issue here. Rather, notice that in your fast plan the Clustered Index Seek actually returns 0 rows. That is the difference most likely driving the performance gap you're seeing: it's literally a different amount of data being processed at that point, which can be influenced by a number of factors.
By the way, the amount of data being joined at that step is what determines which join operator gets used. Generally, Nested Loops joins are good for small amounts of data, whereas a Hash Match is better for larger sets (and a Merge Join is typically preferred when both inputs are already ordered). So those operators appear to be appropriate for each of the execution plans you provided, respectively.
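As an illustration only, with made-up table names: you can force either operator with a query-level join hint while testing, just to see how each behaves against a given data volume. This is not being suggested as a fix:

-- Hypothetical tables, purely to illustrate comparing join operators.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20220101'
OPTION (LOOP JOIN);    -- swap for OPTION (HASH JOIN) to compare the two strategies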
Can the Clustered Index Scan be optimized? Possibly, but it would be inadvisable to try with only the information you've provided. Even if there were a way, it would likely require an index or query change, which sounds like it's off the table for you anyway, since this is a vendor application.
If you have truly proven that this query is causing server contention, the only thing I can advise you to change, given the information available, is to increase your server resources. If you want to pursue additional help, we'll need more information, even if anonymized. But this sounds like a vendor problem, and you shouldn't be doing the work of fixing their software for them anyway.
"I assume [it] is due to the different parameters you're using in that case" - in a comment the OP claims to be using the same parameters for both plans. Couldn't the difference simply be the method the plan chose to process the data? – Ronaldo, Apr 3, 2022 at 0:08
@Ronaldo Good point, I misread and thought the OP said they were using different parameters, which would definitely be a potential cause of different plans. But yes, a multitude of other reasons can influence two different plans, even for the exact same query with the same parameters. Updated my answer. – J.D., Apr 3, 2022 at 0:30