
Building a Better Cache-Warmer, Part 1: the Storage Engine Cache

Reposted from Chris Webb's blog with the author's permission.

SSAS caching is an interesting subject: I’ve blogged numerous times about when caching can and can’t take place, and one of the most popular posts I’ve ever written was on how to build a cache-warmer in SSIS. However, that last post was written a while ago, before the CREATE CACHE statement for warming the Storage Engine cache was reintroduced in SSAS 2005 SP2, and I’ve not been satisfied with it for a number of other reasons too. So as a result I’ve started to do more research into the issue and, with the help of Mosha, Akshai Mirchandani and chapter 29 of the indispensable “Microsoft SQL Server 2008 Analysis Services Unleashed”, I’ve been collecting my thoughts on how to build a better cache warmer with a view to blogging about it again. This first post will deal with the relatively easy bit – warming the SE cache – and when I have time I’ll write about the more difficult but more useful task of warming the Formula Engine cache; hopefully by then I’ll be in a position to create the definitive SSIS cache warmer package and be able to blog about that too.

Before I go on I need to mention one other reason that has made me want to revisit this subject: at his MDX Deep Dive session at PASS last year, Mosha asserted that cache-warming was a Bad Thing. This intrigued me, since it seemed like such an obviously Good Thing to do, but of course it turns out he had good reason for saying what he did… SSAS caching is a lot more complex than it first appears, and cache warming done badly can be counter-productive.

I’m also tempted to say that if you don’t need to warm the FE cache (i.e. you don’t have any complex calculations that take a long time to run) then there’s not much point in warming the SE cache. In my experience, if you have the right aggregations built then you can get excellent response times even on a cold cache. Of course, to build aggregations you need to know the granularities at which your users’ queries request data, but you need the same information to warm the cache. The only difference is that you can make your cache warmer much more dynamic than your aggregation design: if your users’ query patterns change regularly it would be difficult to keep your aggregation design in sync with them, whereas with a cache-warmer you can record the most recent queries that have been run and warm the cache accordingly.

There are a number of things to bear in mind when warming the SE cache:

  • We must not try to cache too much data. If we go over the LowMemoryLimit, data will start to be evicted from the cache, which means we get no benefit from that point on; if we go over the TotalMemoryLimit, it’s likely that just about the entire cache will be emptied. We also have to remember that the cache will grow as a result of normal usage during the day, so we should plan to use only a fraction of the available memory when warming the cache.
  • We want to avoid cache fragmentation. The SE cache uses a data structure called the data cache registry to store its data, and when the FE requests a (potentially filtered) subcube of data from the SE, that data subsequently gets stored in the SE cache as an entry in the data cache registry. As a result it is possible that, instead of having one entry in the SE cache that contains a lot of data, the same data could be spread across multiple smaller entries. This is bad because:
    • It means that lookup in the cache is slower as there are more entries to search
    • The SE doesn’t bother to continue searching in the data cache registry after it has compared the current request with more than 1000 subcubes, and will then go to disk even if the required data is in the cache. If the cache is filled with lots of small entries then this will happen more often.
    • If the FE requests the slice {A,B} from a hierarchy in a subcube and the data for those two members A and B is held in different entries in the cache, then it will not be found; the request will only be matched if a single entry in the cache contains data for both A and B (see the sketch after this list).
  • Data in the SE cache can be aggregated to answer a request – but only if the data itself is aggregatable, and this is not the case when there are distinct count measures in the measure group or when there are many-to-many relationships present. Assuming that the data is aggregatable, though, we should perform SE cache warming at lower rather than higher granularities, although if we go too far down we’ll end up caching data we don’t need, using up too much memory and finding that the cache warming itself takes too long.
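To make the fragmentation point concrete, here’s a minimal sketch using Adventure Works names (the cube, measure and hierarchy are purely illustrative). Running the first two queries one at a time on a cold cache can leave the data for 2003 and 2004 in two separate data cache registry entries, neither of which can answer a later request for the slice {2003, 2004}; the third query creates a single entry at the Calendar Year granularity that can answer all three requests and, if the data is aggregatable, subtotals above that level too:

-- Each of these two queries touches a single Calendar Year, so each can
-- end up as its own entry in the data cache registry
SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE ([Date].[Calendar Year].&[2003])

SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE ([Date].[Calendar Year].&[2004])

-- One query at the Calendar Year granularity creates a single entry that
-- covers every year, including the slice {2003, 2004}
SELECT [Measures].[Internet Sales Amount] ON 0,
       [Date].[Calendar Year].[Calendar Year].MEMBERS ON 1
FROM [Adventure Works]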

Of course all of the above is subject to change in later versions, but I understand that this is the current situation with SSAS 2008 SP1.

How can we find out what data to use to warm the SE cache then? The MDX queries that have been run by users aren’t much help, but we can see what subcubes are being requested when a query is run by looking at the Query Subcube and Query Subcube Verbose events in Profiler; we could therefore run a Profiler trace to capture this data, but in fact the easiest option is to just use the Usage Based Optimization wizard’s Query Log since it contains the same data as Query Subcube. The following blog entry from James Snape shows how to interpret this information:
http://www.jamessnape.me.uk/blog/2006/11/09/SubcubeQueries.aspx 
A truly sophisticated SE cache warmer would be able to take this data, along with data about attribute relationships, and work out what the most recent, frequent (but slowest) subcube requests were, then try to find some lowest (but not too low) common granularities to use for cache warming, perhaps also slicing so that only the most recent time periods were used as well.
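As a starting point, here’s a minimal sketch of the kind of analysis you could run against the query log table, assuming the default table name OlapQueryLog and the standard columns SSAS creates (MSOLAP_ObjectPath, Dataset, StartTime and Duration); the one-week window and TOP 50 are arbitrary choices:

-- Most frequently requested subcube vectors over the last week, together
-- with their average cost and when they were last requested
SELECT TOP 50
       MSOLAP_ObjectPath,   -- server.database.cube.measure group
       Dataset,             -- the subcube vector: one digit per attribute
       COUNT(*)       AS Requests,
       AVG(Duration)  AS AvgDuration,
       MAX(StartTime) AS LastRequested
FROM dbo.OlapQueryLog
WHERE StartTime > DATEADD(day, -7, GETDATE())
GROUP BY MSOLAP_ObjectPath, Dataset
ORDER BY Requests DESC, AvgDuration DESC

The Dataset column holds the string of 0s and 1s that James’s post explains how to decode; combining it with attribute relationship metadata is what would let a cache warmer pick sensible common granularities.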

Last of all, how do we actually go about loading data into the SE cache? We could run MDX queries, but this is a bit of a risky approach since we may not know what calculations are present, and so we can’t be sure that our query requests the right data; the best option is to use the CREATE CACHE statement that is described here:
http://blogs.msdn.com/sqlcat/archive/2007/04/26/how-to-warm-up-the-analysis-services-data-cache-using-create-cache-statement.aspx
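As a rough sketch of what such a statement looks like (again using Adventure Works names purely for illustration): you give it the cube to warm and a subcube expression built as a crossjoin of sets, and it loads the corresponding data into the SE cache without returning a cellset to the client:

-- Warm Internet Sales Amount at the Month by Category granularity; this
-- runs as an ordinary MDX command, for example from an SSMS MDX query
-- window or from a task in an SSIS cache-warmer package
CREATE CACHE FOR [Adventure Works] AS
(
  { [Measures].[Internet Sales Amount] },
  { [Date].[Calendar].[Month].MEMBERS },
  { [Product].[Product Categories].[Category].MEMBERS }
)

Because this goes straight to the Storage Engine, it isn’t affected by whatever MDX calculations happen to be defined on the cube, which is exactly why it’s safer than running ordinary queries.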


Chris Webb

Chris has been working with Microsoft BI tools since he started using beta 3 of OLAP Services back in the late 90s. Since then he has worked with Analysis Services in a number of roles (including three years spent with Microsoft Consulting Services) and he is now an independent consultant specialising in complex MDX, Analysis Services cube design and Analysis Services query performance problems. His company website can be found at http://www.crossjoin.co.uk and his blog can be found at http://cwebbbi.spaces.live.com/ .


Tags: performance, amo

 
