
This product reached the end of support date on March 31, 2021.

CAS database file size grew very large



The size of the CAS database file is very large and consumes a lot of disk space. I am worried that it will increase further in the future.

The Data Center RUM Knowledge Base describes how to shrink the database using SQL Server Management Studio as a workaround for this problem.

However, this operation is not realistic for customers to perform themselves. Do you have any ideas that do not depend on customer operation?
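For reference, the shrink operation done through SQL Server Management Studio corresponds roughly to the following T-SQL. The database name here is a placeholder, not the actual CAS database name; and note that shrinking causes index fragmentation, so it should be a last resort:

```sql
-- Hypothetical database name; replace with the actual CAS database.
USE [CASDB];
GO
-- Report current size and free space before shrinking.
EXEC sp_spaceused;
GO
-- Shrink the database files, leaving 10 percent free space.
DBCC SHRINKDATABASE (N'CASDB', 10);
GO
```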


Which version of SQL Server is used? Which version of CAS is used?


Thank you for your comment. CAS is 2017 SP6. SQL Server is 2016.

Can you provide more details on the SQL Server version? Which SP is it, and is it Standard or Enterprise edition?


SQL Server 2016 Standard edition

And which SP is it? I am asking because starting with SP1 it is possible to use table compression. This also requires CAS 2018 (NAM Server) SP1.

Since we need access to the customer environment, we cannot say right now which SP the SQL Server 2016 Standard Edition is on. Please give me a few days. The CAS version is 2017 SP6.
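If someone with access to the environment can run a query, the service pack and edition can be read directly from the server. These SERVERPROPERTY values are standard T-SQL:

```sql
-- Returns e.g. 'SP2', 'Standard Edition ...', '13.0.xxxx.x'
SELECT
    SERVERPROPERTY('ProductLevel')   AS ServicePack,
    SERVERPROPERTY('Edition')        AS Edition,
    SERVERPROPERTY('ProductVersion') AS BuildNumber;
```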

OK. Table compression should help resolve the data storage problems, so it may be worth upgrading to NAM Server 2018. As for shrinking the CAS database, it is of little use: we truncate tables that are outdated, so there should be no need for it. To analyze why the database is still growing, we would need to take a look at the database status diagnostic reports.

Detailed information on the SQL Server version: "13.0.5026.0"

OK, so table compression is supported. You may want to upgrade to NAM 2018 SP1 to use it and reduce the database size. Regardless of this, you may also want to analyze where the growth in database size is actually occurring.
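As a sketch, enabling page compression on a large table looks like this in T-SQL. The table name is a placeholder; on a supported CAS/NAM version the product handles compression itself, so treat this only as an illustration of the underlying mechanism:

```sql
-- Hypothetical table name; replace with an actual large CAS table.
-- Estimate the savings first (this stored procedure is built into SQL Server).
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SomeLargeTable',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Rebuild the table with page compression enabled.
ALTER TABLE dbo.SomeLargeTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```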


I think what we really need to ask is why it is so big. Is there data occupying the space in the database, or is there empty space that could be recovered?

To check the amount of space actually used in the database, open the Diagnostics item in the Report Server main menu and select System Status. One of the lines near the top shows database status. The reserved space entry is how much data is actually in the database (data size can also be seen from SQL Server Management Studio if you have access to it). Any other space up to the file size is free space. If all of the space is used, it is time to look at why.
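If you do have SQL Server Management Studio access, the same reserved-versus-free breakdown can be read with standard system procedures and views:

```sql
-- Database-level totals: database size vs. unallocated (free) space.
EXEC sp_spaceused;

-- Per-table breakdown, largest tables first
-- (sys.dm_db_partition_stats is a standard SQL Server DMV).
SELECT
    OBJECT_NAME(object_id)       AS table_name,
    SUM(reserved_page_count) * 8 AS reserved_kb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_kb DESC;
```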

There are a number of posts talking about the server cache limit being exceeded - those might be good places to start.

Additionally, if the number of operations being tracked is too high, you can run a SQL query to find out which software services are causing it. Go to the Admin Console from the main menu and open the SQL console in the right column. In the list is a query named "number-of-sessions-diagnostic.sql". There is a section you will have to change: where it has delta.rtmsession, change it to just rtmsession.

Run the query. In the top section you will see the number of unique URLs per software service in the operation column. Usually these are in the 1,000-5,000 range. If a value is much higher than this, it may be worth reviewing what operations are being reported for those software services. If too many operations are being reported, change the software service to report fewer.
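The idea behind that diagnostic query can be sketched as a simple aggregation. The table and column names below are illustrative assumptions, not the actual DC RUM schema, so use the shipped number-of-sessions-diagnostic.sql query for real analysis:

```sql
-- Illustrative sketch only: counts distinct operations (URLs) per software
-- service, so unusually large services stand out. Names are hypothetical.
SELECT
    softwareService,
    COUNT(DISTINCT operation) AS unique_operations
FROM rtmsession
GROUP BY softwareService
HAVING COUNT(DISTINCT operation) > 5000  -- above the typical 1,000-5,000 range
ORDER BY unique_operations DESC;
```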

Were any other changes made that could have affected the size, like going from 5-minute intervals to 1-minute intervals? If so, it may be time to load balance by creating/adding another report server to the cluster.

Thanks for your comment.

The customer with this problem has many locations around the world; about 700 sites are defined in DC RUM. Could the number of sites be a factor in increasing the database size?
The user added only one software service.
The data collection cycle is 5 minutes.