The DBAs in my company saw this error logged in Event Viewer on our production OLAP server several times this week. Each time, cube processing failed with the same error:

A timeout (30000 milliseconds) was reached while waiting for a transaction response from the MSSQLServerOLAPService service.

One of the cubes had been changed recently, but that cube was not the one failing, so I started looking at the server side to see if anything was wrong there. This timeout is controlled by the server property ForceCommitTimeout. The following description is from Books Online:
| Property Name | General Page: ForceCommitTimeout |
| --- | --- |
| Default Value | 30000 (30 seconds) |
| Unit of Measure | Milliseconds |
| Data Type | Integer |
| Minimum Value | 0 – will not force a commit timeout |
| Maximum Value | 2147483647 |
| Requires Restart | No |
| Alternate GUI Tool | None |
| Special Notes | None |
| Security Implications | None |
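Both of these properties live in the server's msmdsrv.ini configuration file, which is plain XML. As a minimal sketch (the inline SAMPLE_INI string below is a simplified stand-in for the real file, and the exact element layout of your msmdsrv.ini may differ), you could read a timeout value like this:

```python
# Sketch: reading a timeout property from an msmdsrv.ini-style XML config.
# SAMPLE_INI is a simplified, hypothetical stand-in for the real file.
import xml.etree.ElementTree as ET

SAMPLE_INI = """<ConfigurationSettings>
  <ForceCommitTimeout>30000</ForceCommitTimeout>
  <CommitTimeout>0</CommitTimeout>
</ConfigurationSettings>"""

def read_timeout(xml_text, name):
    """Return the named timeout in milliseconds, or None if absent."""
    root = ET.fromstring(xml_text)
    node = root.find(name)
    return int(node.text) if node is not None else None

force_commit_ms = read_timeout(SAMPLE_INI, "ForceCommitTimeout")
print(force_commit_ms / 1000, "seconds")  # 30.0 seconds
```

In practice you would change these values through the server properties dialog in SSMS rather than by editing the file by hand.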
CommitTimeout: Analysis Services processing operations need to acquire a write lock before they can commit a transaction. To acquire the write lock, no read locks can be held by any other process or query, so Analysis Services must wait until all read locks are released. The transaction waits to acquire the write lock for the period specified by the CommitTimeout property before rolling back.
| Property Name | General Page: CommitTimeout |
| --- | --- |
| Default Value | 0 |
| Unit of Measure | Milliseconds |
| Data Type | Integer |
| Minimum Value | 0 – indicates that Analysis Services will wait indefinitely to acquire a write lock in order to commit a transaction |
| Maximum Value | 2147483647 milliseconds (approximately 25 days) |
| Requires Restart | No |
| Alternate GUI Tool | None |
| Special Notes | None |
| Security Implications | None |
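The CommitTimeout semantics described above can be summarized with a toy model (this is only an illustration of the rule, not the SSAS implementation): if the read locks clear within the timeout the transaction commits, otherwise it rolls back, and a value of 0 means the processing transaction waits indefinitely.

```python
# Toy model of how CommitTimeout governs a processing transaction
# waiting to acquire a write lock. Not the actual SSAS implementation.
def try_commit(read_locks_clear_after_ms, commit_timeout_ms):
    """Return 'committed' if the write lock is acquired within the
    timeout, else 'rolled back'. A timeout of 0 means wait forever."""
    if commit_timeout_ms == 0:
        return "committed"          # waits indefinitely, so it eventually commits
    if read_locks_clear_after_ms <= commit_timeout_ms:
        return "committed"          # read locks released in time
    return "rolled back"            # timed out waiting for the write lock

print(try_commit(45000, 30000))  # rolled back
print(try_commit(45000, 0))      # committed
```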
So what exactly goes on while a cube is processed?
Processing an OLAP object involves multiple steps:
- First, a new object is created with a new version number. If you look under the OLAP DATA directory (in my case \\ServerName\D$\MSAS10_50.MSSQLSERVER\OLAP\Data\CubeName.0.db) while the cube is being processed, you will see two versions of each file that holds version information. After the cube is processed successfully, only the latest version is kept. Say I am processing the Vendor dimension: the version 1 file will be Vendor.1.(all).astore, where 1 is the version number.
- Once the file is processed, the file name changes to the new version number: Vendor.2.(all).astore.
- The DDL definition of each dimension records the version in the Vendor.1.dim.xml file, under the tag <ObjectVersion>2</ObjectVersion>.
- Successful completion of processing results in a "pending commit" lock being placed on the object being processed, along with its dependent objects.
- If there are queries running against the object, those queries are allowed to continue against the older version (in our case, version 1 of the Vendor dimension), while all new queries are directed to the new version (version 2 of Vendor).
- Since we have the default value of 30 seconds (30000 ms) for the ForceCommitTimeout property, any running queries are allowed to run for up to 30 seconds; once that time has expired, the queries are cancelled and the versions are swapped.
- However, any queries submitted after the "pending commit" lock is taken will stall for up to 30 seconds before they start executing.
- The other setting, CommitTimeout, instead causes the cube processing operation to time out and roll back, allowing currently executing queries to continue to completion.
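The ForceCommitTimeout behavior in the steps above can be sketched as a toy timeline model (again, purely illustrative, with hypothetical function and parameter names): each in-flight query either finishes against the old version within the timeout, or is cancelled when the pending-commit lock forces the version swap.

```python
# Toy timeline model of the version swap under ForceCommitTimeout.
# Purely illustrative; names and structure are hypothetical.
def resolve_queries(running_query_remaining_ms, force_commit_timeout_ms):
    """For each in-flight query (by remaining runtime in ms), decide
    whether it completes against the old object version or is cancelled
    when the pending-commit lock times out and the versions swap."""
    outcomes = []
    for remaining in running_query_remaining_ms:
        if remaining <= force_commit_timeout_ms:
            outcomes.append("finished on old version")
        else:
            outcomes.append("cancelled at version swap")
    return outcomes

# One query needs 10 s more, another 45 s, against the 30 s default.
print(resolve_queries([10000, 45000], 30000))
```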
Conclusion: While a cube is being processed it remains accessible to users, as explained above. The downside, however, is inconsistent query performance. So to troubleshoot the error above, you need to find out what queries were running while the cube was being processed. Once you know that queries run concurrently with processing, you can change either of these two settings:
ForceCommitTimeout – to time out the running queries
CommitTimeout – to time out and roll back the cube processing transaction