Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
147
views
Processing Dimension completed but error occurs
I have a problem while processing a cube in SSAS; every time I get this error (screenshot: https://i.sstatic.net/L2Qal.png).
I tried to process every dimension separately, and it looks like something goes wrong after processing the 'Date' dimension.
Any ideas why?
Best regards,
Ana

user201347
(1 rep)
Feb 18, 2020, 03:47 AM
• Last activity: Jul 13, 2025, 02:05 AM
5
votes
2
answers
1555
views
Should I snowflake or duplicate it across my facts?
I'm building up a data warehouse to be used by SSAS to create cubes on, and I'm debating between two possible schemas.
In my warehouse, I've got two different fact tables that track daily changes in dollar values. Each entity in these fact tables has an underlying Sales Order and Line to which it relates. These SOs and Lines then have other related dimensions, such as customer, product, etc. About 12 sub-dimensions total so far.
My question is whether I should roll all these sub-dimensions up directly into the fact tables, or use a little snowflaking in my warehouse and have them branch off the Sales Order and Lines dimension instead.
The first option obviously follows a star-schema model better. However, if changes are made, such as adding additional dimensions, it means more maintenance: the ETL basically has to be done twice, once for each fact table, rather than just once on the SO dimension. Likewise, if a new fact that relates to Sales Orders is added, I'd have to go through the whole process again.
As this is my first DW/OLAP project, I'm not familiar with where the line should be drawn on snowflaking, and other people's thoughts would be highly appreciated.
Evan M.
(231 rep)
Mar 26, 2013, 03:45 PM
• Last activity: Jun 28, 2025, 10:06 PM
0
votes
0
answers
60
views
Unable to connect to SQL Server 2022 Analysis Services
I successfully installed SQL 2022 and can connect to it from SSMS.
I then installed SSAS in the same instance, but I am unable to connect to it from SSMS. Ports 2383 and 2382 are both open. I get the following error:
"A connection cannot be made. Ensure that the server is running. (Microsoft.Analysis.AdomdClient)
A connection attempt failed because the connected party did not properly respond after a period of time, ...."
Any ideas what could be responsible please?
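For what it's worth, a quick way to check whether the instance is actually listening is a plain TCP probe (the default SSAS instance listens on TCP 2383, while 2382 is the SQL Browser redirector used to locate named instances); a minimal sketch, with a placeholder host name:

```lang-cs
using System;
using System.Net.Sockets;

class SsasPortProbe
{
    static void Main()
    {
        // Placeholder host name; replace with the actual server.
        string host = "MySqlServer";

        foreach (int port in new[] { 2383, 2382 })
        {
            try
            {
                using (var client = new TcpClient())
                {
                    // Short timeout so a blocked port fails fast.
                    if (client.ConnectAsync(host, port).Wait(3000))
                        Console.WriteLine($"Port {port}: TCP connection succeeded.");
                    else
                        Console.WriteLine($"Port {port}: timed out (blocked or not listening).");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Port {port}: {ex.Message}");
            }
        }
    }
}
```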
PTL_SQL
(427 rep)
Jun 18, 2025, 07:15 PM
1
votes
1
answers
30
views
Schema comparison tools for SSAS
Visual Studio with the SSDT add-in has schema comparison tools for relational databases that allow you to compare a repository to a target database and generate a .dacpac for deployment. Is there similar out-of-the-box functionality (with or without an add-in) for comparing/deploying SSAS cubes and tabular models? If so, how does one access it?
Alex Pixley
(43 rep)
Jun 13, 2025, 06:23 PM
• Last activity: Jun 13, 2025, 07:27 PM
0
votes
1
answers
247
views
SSAS dimension attribute needs to be sorted
In one of my SSAS cubes, one attribute (insert_date), which has the Date data type and format yyyy-mm-dd, does not come out in order when I check it in Excel pivots.
I have changed the OrderBy property from Name to Key, but it is still not sorted:
Key -- insert_date(date)
Name -- insert_date(wchar)
Value-- insert_date(date)
Can you please tell me what other properties I have to change to sort this dimension attribute?
Ravi Teja
(1 rep)
Sep 24, 2020, 10:32 AM
• Last activity: May 27, 2025, 12:00 AM
23
votes
2
answers
42606
views
What are Measures and Dimensions in Cubes
I'm new to Microsoft SQL Server Business Intelligence and `Analysis Services` (but I've been programming for years with SQL Server). Can anyone describe measures and dimensions in cubes in simple words (if possible, with images)?
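In short, measures are the numeric values you aggregate (sales amount, quantity) and dimensions are the axes you slice them by (product, date, customer). As a rough illustration, a hedged ADOMD.NET sketch that queries one measure across one dimension; the cube, measure, and dimension names are entirely hypothetical:

```lang-cs
using System;
using Microsoft.AnalysisServices.AdomdClient;

class MeasuresAndDimensionsDemo
{
    static void Main()
    {
        // Hypothetical server and catalog names.
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=SalesCube"))
        {
            conn.Open();

            // [Measures].[Sales Amount] is the measure (the number being aggregated);
            // [Product].[Category] is a dimension attribute (the axis we slice by).
            var cmd = new AdomdCommand(
                "SELECT [Measures].[Sales Amount] ON COLUMNS, " +
                "[Product].[Category].Members ON ROWS " +
                "FROM [Sales]", conn);

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine($"{reader.GetValue(0)}: {reader.GetValue(1)}");
            }
        }
    }
}
```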
DooDoo
(203 rep)
Jul 3, 2013, 11:01 AM
• Last activity: May 5, 2025, 04:22 AM
0
votes
1
answers
331
views
SSAS Backup to Remote Location
I followed the steps in the best answer on this question - https://dba.stackexchange.com/questions/43493/how-can-i-dynamically-back-up-all-ssas-databases-on-a-given-instance - and can successfully create backups of our BI Cubes to a local drive on the server (Windows Server 2008 R2 running SQL Server 2008 R2 Standard), but no matter what user I attempt to execute the package as, I cannot save the file to a remote drive (via UNC or mapped network drive).
The package simply fails with the following nondescript error:

```
File system error: The following error occurred during a file operation: Access is denied.
```
I have created new shares with read/write access on our NAS devices, and I have tested whether the user executing the package (whether in VS, the SSAS Backup Server dialogue, or SQL Agent) can create files (and folders, as per the above linked article). In each case they can, but the package always fails when writing the .abf file off-server.
The user accounts in question are also in the CubeAdmins role.
The XMLA is as follows (with UNC replaced):

```
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>BI Database</DatabaseID>
  </Object>
  <File>\\UNC\SQL\BI\BI Database.abf</File>
</Backup>
```
Have I hit a limitation of the SQL licensing? When I was attempting an SSAS database restore, I did run into a licensing error with regard to Storage Locations, but presumably that is just about storing the OLAP cubes outside of the default location.
All help is appreciated!
Optimaximal
(101 rep)
Jun 8, 2023, 01:33 PM
• Last activity: May 1, 2025, 06:01 AM
2
votes
2
answers
2826
views
Has anyone deployed a successful SSAS on Docker?
I'm trying to create a Docker image that contains MS SQL Server and Analysis Services, then restores a DB (.bak) and a cube (.abf). The SQL Server part is the easy stuff, and I imagine the restore will be quite simple too; the unsuccessful part is the installation of SSAS.
I've already tried, without success:
- Splitting images (one for MSSQL and one for SSAS)
- Installing from inside a container with SQL Server installed
- Trying different accounts (NT AUTHORITY\NETWORK SERVICE, BUILTIN\ADMINISTRATORS)
Any chance that anyone has done something similar, or is it "impossible" to achieve such a [working] image?
This is a clean sample of what I'm doing:
```
FROM mcr.microsoft.com/windows/servercore:1903

# Download links: the setup stub (.exe) and the media file (.box); trailing spaces removed from the original URLs
ENV exe "https://go.microsoft.com/fwlink/?linkid=840945"
ENV box "https://go.microsoft.com/fwlink/?linkid=840944"

ENV sa_password="_" \
    attach_dbs="[]" \
    ACCEPT_EULA="_" \
    sa_password_path="C:\ProgramData\Docker\secrets\sa-password" \
    appname="demo"

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

WORKDIR /

# Download the installer, extract it, run an unattended install of the engine plus SSAS, then clean up
RUN Invoke-WebRequest -Uri $env:box -OutFile SQL.box ; \
    Invoke-WebRequest -Uri $env:exe -OutFile SQL.exe ; \
    Start-Process -Wait -FilePath .\SQL.exe -ArgumentList /qs, /x:setup ; \
    .\setup\setup.exe /q /ACTION=Install /INSTANCENAME=MSSQLSERVER /FEATURES=SQL,AS,Tools /UPDATEENABLED=0 /TCPENABLED=1 /NPENABLED=0 /IACCEPTSQLSERVERLICENSETERMS /BROWSERSVCSTARTUPTYPE=Automatic /ASSVCSTARTUPTYPE=Automatic /SQLSVCACCOUNT='domain\user' /SQLSVCPASSWORD='somepassword' /SQLSYSADMINACCOUNTS='domain\user' /ASSVCACCOUNT='domain\user' /ASSVCPASSWORD='somepassword' /ASSYSADMINACCOUNTS='domain\user' ; \
    Remove-Item -Recurse -Force SQL.exe, SQL.box, setup
```
thorxas
(37 rep)
Oct 11, 2019, 07:33 AM
• Last activity: Apr 26, 2025, 10:05 PM
0
votes
2
answers
1038
views
how to find backup history of SSAS database
Shockingly, I am unable to find anything on the Internet about the backup history of an SSAS database.
Is there no way to view it?
Also, should we take multiple backups of an SSAS database? I am not very familiar with SSAS databases.
Thanks in advance.
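For context, SSAS does not record backup history in msdb the way the relational engine does, so there is no built-in history to query. One hedged workaround is to drive the backups through AMO and keep your own log; a minimal sketch, with placeholder server name and paths:

```lang-cs
using System;
using System.IO;
using Microsoft.AnalysisServices;  // AMO

class SsasBackupWithLog
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=localhost");  // placeholder instance

        foreach (Database db in server.Databases)
        {
            // Placeholder backup path.
            string file = $@"D:\Backups\{db.Name}_{DateTime.Now:yyyyMMdd}.abf";
            db.Backup(file, true);  // true = allow overwrite

            // SSAS keeps no backup history, so record it ourselves.
            File.AppendAllText(@"D:\Backups\backup_log.csv",
                $"{DateTime.Now:o},{db.Name},{file}{Environment.NewLine}");
        }

        server.Disconnect();
    }
}
```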
sachin-SQLServernewbiee
(473 rep)
Feb 26, 2020, 12:20 PM
• Last activity: Apr 11, 2025, 02:00 AM
0
votes
0
answers
33
views
SSAS Processing Sanity check: Am I using partitions effectively?
I have been rebuilding a data warehouse, and lately I have been focusing on decreasing processing times. This is using MS SQL Server 2022. I have been implementing partitions on some of the larger fact tables where appropriate, and am wondering if the way I am approaching partition processing is effective.
Here's my situation:
I am running an SSIS job daily to do the following:
1) Truncate fact and dimension tables that will be reloaded with fresh data
2) Load Fact and Dimension tables with fresh data from sources
3) Rebuild indexes (>30% fragmentation, >1000 pages)
4) Reorganize indexes (>15% fragmentation, >1000 pages)
5) Update Statistics
6) SSAS Process task - Process Full on the model
This job has been taking longer than desired. I have been working on decreasing time by taking a more targeted approach to model processing. Rather than perform a "Process Full" against the model, I am trying the following in order:
Fully process dimension tables, fully process fact tables, fully process the measure table, then recalculate the model.
Two of my fact tables are not rebuilt daily; instead, a daily snapshot dataset is added each day. I have decided to partition these tables as they are burdensome. My partitioning strategy is what I'm mostly hoping for critique of, though I'd be happy for thoughts on any part of my approach.
To partition these snapshot tables, I have taken the following approach: I've created two database views for each table, latest and older. The views use a stored procedure as the date filter, which determines the last successful run date of this SSIS job. For instance, if my daily SSIS job has failed since March 8th, the "Latest" views would contain all snapshot data >= March 8th. "Older" views would include snapshot data < March 8th. My goal here is to avoid unnecessary processing of these snapshot tables due to their size. Each snapshot table in the model has two partitions: Latest and Older.
I have configured my processing task to fully process only the "Latest" partition of these snapshot tables. Once a week I run a Process Full on the entire model to cover my bases.
I am concerned about whether this is going to effectively keep my model up to date. It is performing admirably timewise, and I will be performing extensive testing to ensure data consistency, but am also interested if this is a coherent approach or if it's flawed.
Thank you for any thoughts or advice you can provide!
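Assuming the model is Tabular (the Process Full plus recalculate steps suggest it is), the targeted processing described above can also be scripted through the Tabular Object Model; a minimal hedged sketch, with hypothetical database, table, and partition names:

```lang-cs
using Microsoft.AnalysisServices.Tabular;  // TOM

class ProcessLatestPartition
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=localhost");               // placeholder instance
        Model model = server.Databases.GetByName("DW").Model;   // placeholder database

        // Refresh only the "Latest" partition of each snapshot table (hypothetical names).
        foreach (string tableName in new[] { "FactSnapshotA", "FactSnapshotB" })
        {
            model.Tables[tableName].Partitions["Latest"].RequestRefresh(RefreshType.Full);
        }

        // A model-level Calculate rebuilds calculated columns, hierarchies, and relationships.
        model.RequestRefresh(RefreshType.Calculate);

        model.SaveChanges();  // executes the queued refreshes as one transaction
        server.Disconnect();
    }
}
```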
Kad
(1 rep)
Mar 11, 2025, 02:36 PM
1
votes
1
answers
1378
views
Improve SSAS Tabular query performance
I've built a Tabular cube with a fact table and four dimensions. The fact table has about 2,500,000 rows (and about 20 calculated measures at the cube level), and the dimension tables have 5k-50k rows. When I connect to the cube from Excel and try to add a dimension, I have to wait 15-20 seconds. Do you know ways to improve the query performance of such a tabular cube (SQL 2014)?
Performance issue not visible (Year consolidated; filtering by Port works fast):
Performance issue visible (expanding Year down to weeks takes about 15 seconds, and filtering by Port afterwards takes another 15 seconds every time):



Tomasz Wieczorkowski
(362 rep)
Dec 14, 2016, 04:14 PM
• Last activity: Feb 15, 2025, 07:03 PM
2
votes
1
answers
614
views
SSAS Cube for tracking changes in parent child relationship over time
I would like to build an SSAS cube which tracks how objects in a graph whose edges represent a "belongs to" relationship change over time (daily). There are two components to the change:
1. which object belongs to which
2. attributes of each object.
Here is my schema:
```
fact.Edge:
    the_date   date
    parent_id  int
    child_id   int

fact.Vertex:
    the_date   date
    id         int
    attribute1 int
    attribute2 int
    ...
    attributen int

dim.attribute{1...n}:
    id     int
    value1 nvarchar(64)
    value2 nvarchar(64)
    ...
    valuem nvarchar(64)
```
These tables get new data once daily. If nothing changes, then there are two copies of the exact same data in the two fact tables with sequential dates.
I would like to know if it is possible to define a parent-child hierarchy in SSAS based on the fact.Edge table referencing itself (via `child_id -> parent_id`), but also only when `the_date = the_date`.
I am new to SSAS, but it seems only one attribute can be the parent attribute. Are there any workarounds?
Additionally, is it possible to treat the vertex table as two "fact"-related dimensions, i.e. `parent_vertex` and `child_vertex`? Or else do I need to include edges with either a null `parent_id` or a null `child_id` and choose the other to have the only vertex reference?
If my questions don't quite make sense (likely due to my limited SSAS experience), is there an example cube definition that demonstrates best practices for this case?
I'd appreciate any insights you might have!
Jonny
(121 rep)
Sep 29, 2015, 11:31 PM
• Last activity: Feb 14, 2025, 12:01 PM
0
votes
0
answers
18
views
SSAS Multidimensional Model: Questions on Key Attributes
I have several dimensions whose key attributes (i.e. attributes whose Usage property = "Key") take a really long time to process (because the dimensions are really big; some of them are actually built from parts of a fact table). To speed up key attribute processing without losing any functionality:
If the key attribute, which is a surrogate key, is already ordered in the corresponding source table AND its NameColumn property is not set:
1. Is it still necessary to set its AttributeHierarchyOrdered to "True"?
2. Is it still necessary to set its AttributeHierarchyOptimizedState to "True"? I ask this because I think it should be the corresponding foreign key in the fact table that needs to be indexed, not the primary key of the dimension.
Would the answers to the above two questions be different if the NameColumn property is set to a different attribute? For example, suppose the attribute's KeyColumn property = "ProductIdx" but the corresponding NameColumn property = "ProductName", and the OrderBy property = "Name".
Please advise.
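For reference, all of the properties discussed above live on the AMO DimensionAttribute object; a hedged sketch that simply reads them back for one dimension (server, database, and dimension names are placeholders):

```lang-cs
using System;
using Microsoft.AnalysisServices;  // AMO (multidimensional)

class InspectAttributeSettings
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=localhost");                // placeholder
        Database db = server.Databases.FindByName("MyOlapDb");   // placeholder
        Dimension dim = db.Dimensions.FindByName("Product");     // placeholder

        foreach (DimensionAttribute attr in dim.Attributes)
        {
            // Dump the processing-relevant settings for each attribute.
            Console.WriteLine($"{attr.Name}: Usage={attr.Usage}, " +
                $"Ordered={attr.AttributeHierarchyOrdered}, " +
                $"OptimizedState={attr.AttributeHierarchyOptimizedState}, " +
                $"OrderBy={attr.OrderBy}");
        }

        server.Disconnect();
    }
}
```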
HandsomePig
(7 rep)
Feb 4, 2025, 03:06 AM
1
votes
1
answers
1706
views
How to release SSAS memory without forcing restart
We have a production server with two SSAS instances: one for user querying, and one with empty templates where we do new releases and (full) processing of the cubes (we then back up and restore the processed cubes to the main instance, and delete them from the 'processing' instance). This is to prevent any downtime on the instance used by clients.
These processed cubes seem to be kept in memory, even though they are deleted after processing and backup.
This processing instance holds no data as seen in SSMS, but it keeps growing in memory (up to 40-50 GB) until it starts failing to process due to memory issues after a few days.
About 95% of this memory is outside of the shrinkable/non-shrinkable data cleaner memory, so the memory limits are not doing anything to release it. After processing, all cleaner memory drops to a few hundred MB, while total memory usage for the instance stays high and keeps growing until we get failures. I don't believe the solution lies in any memory limits, since the used memory is not detected by the SSAS data cleaner; I have tested adjusting these limits, with no effect.
Doing a process clear before deleting the cubes also has no effect.
The only thing that works is a manual restart of this instance every 2-3 days, but that is obviously not a proper, maintainable solution (automating it in a job step would require a proxy account with full admin rights on our production server, something we would like to avoid).
All software is up to date (Microsoft Analysis Server version 15.0.35.33), VertiPaqPagingPolicy = 1, the server mode is Tabular, and all cubes are in Import mode.
I've been researching for a while now, but can't find the same issue anywhere, let alone a solution.
Any help would be greatly appreciated!
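Not a confirmed fix for this particular leak, but one thing that can be tried without restarting the service is issuing an XMLA ClearCache programmatically; a minimal AMO sketch, with placeholder instance and database names:

```lang-cs
using System;
using Microsoft.AnalysisServices;  // AMO

class ClearSsasCache
{
    static void Main()
    {
        var server = new Server();
        server.Connect(@"Data Source=localhost\PROCESSING");  // placeholder instance

        // XMLA ClearCache for one database; the DatabaseID is a placeholder.
        string xmla = @"
<ClearCache xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine"">
  <Object>
    <DatabaseID>MyCubeDb</DatabaseID>
  </Object>
</ClearCache>";

        XmlaResultCollection results = server.Execute(xmla);
        foreach (XmlaResult result in results)
            foreach (XmlaMessage message in result.Messages)
                Console.WriteLine(message.Description);

        server.Disconnect();
    }
}
```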
DBHeyer
(11 rep)
May 26, 2023, 08:04 AM
• Last activity: Jan 19, 2025, 04:08 AM
3
votes
1
answers
1437
views
How to find last processed time of dimension in SSAS
I have a list of 100+ dimensions. Is there any way to check the last processed time of each dimension using DMVs in SSAS?
Thanks in advance
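If the DMVs don't expose it, AMO does: every multidimensional Dimension object carries a LastProcessed timestamp. A minimal hedged sketch, with placeholder server and database names:

```lang-cs
using System;
using Microsoft.AnalysisServices;  // AMO

class DimensionLastProcessed
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=localhost");               // placeholder
        Database db = server.Databases.FindByName("MyOlapDb");  // placeholder

        foreach (Dimension dim in db.Dimensions)
        {
            // State shows whether the dimension is processed; LastProcessed is the timestamp.
            Console.WriteLine($"{dim.Name}: State={dim.State}, LastProcessed={dim.LastProcessed}");
        }

        server.Disconnect();
    }
}
```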
user13966865
(81 rep)
Dec 29, 2021, 06:35 AM
• Last activity: Jan 17, 2025, 02:41 PM
0
votes
1
answers
74
views
Does Analysis Services provide connection pooling?
We know that when creating a SQL connection, for example:
```lang-cs
using (SqlConnection sqlConn = new SqlConnection(conString));
```
we are actually not always creating a new connection to the server, but rather taking an already open connection from the connection pool.
---
However, when we open a connection to Analysis Services, do we also get some kind of connection pooling?
```lang-cs
using (AdomdConnection adomdConn = new AdomdConnection(conString));
```
Does this always lead to a new connection or does it take one from the pool if possible?
I am not able to infer this from the official documentation, but I have found mostly older, unofficial articles that clearly state there is no connection pooling for AdomdConnection:
- SQL Server 2005 Analysis Services’s ADOMD.NET Connection Pooling, or Lack Thereof
- ADOMD.NET Connection Pooling
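One hedged way to settle this empirically on a given setup is to time two consecutive open/close cycles: with pooling, the second open is typically near-instant; without it, both pay the full handshake cost. A minimal sketch, with a placeholder connection string:

```lang-cs
using System;
using System.Diagnostics;
using Microsoft.AnalysisServices.AdomdClient;

class PoolingProbe
{
    static void Main()
    {
        string conString = "Data Source=localhost;Catalog=MyTabularDb";  // placeholder

        for (int i = 1; i <= 2; i++)
        {
            var sw = Stopwatch.StartNew();
            using (var conn = new AdomdConnection(conString))
            {
                conn.Open();
            }  // Dispose/Close: a pooled provider would return the connection here.
            sw.Stop();
            Console.WriteLine($"Open/close #{i}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```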
PajLe
(133 rep)
Apr 25, 2024, 08:36 PM
• Last activity: Dec 31, 2024, 04:30 PM
0
votes
0
answers
38
views
MS SSAS named instance doesn't work after NTLM deny
A named MS SSAS instance stops accepting connections after NTLM incoming/outgoing traffic is denied.
There are SPNs for this server, like:

```
FQDN SPN:    Setspn -s MSOLAPSvc.3/AW-SRV01.AdventureWorks.com:AW-FINANCE AdventureWorks\SSAS-Service
NetBIOS SPN: Setspn -s MSOLAPSvc.3/AW-SRV01:AW-FINANCE AdventureWorks\SSAS-Service
```
Also, the default instance on the same server works correctly without any problems. Its SPN was added the same way:

```
Setspn -s MSOLAPSvc.3/AW-SRV01.AdventureWorks.com AdventureWorks\SSAS-Service
```
FlegmaSpirit
(1 rep)
Aug 23, 2024, 08:37 AM
0
votes
1
answers
116
views
DMV CONNECTION_HOST_APPLICATION field is NULL
I am trying to monitor the cubes and expressions of Analysis Services using DMV queries against data held in the server's RAM.
I found a lot of useful information about connections in the connections table of SSMS's DMV browser, using the following MDX statement:
SELECT *
FROM $SYSTEM.DISCOVER_CONNECTIONS
The problem is that the [CONNECTION_HOST_APPLICATION] field of the connections table is not recorded and contains null values for my client application.
> CONNECTION_HOST_APPLICATION specifies "The name of the application that initiated the connection".
As a result, I cannot filter the rows generated by my client tool in order to analyse only them. So how can I modify my client tool so that this field gets registered?
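Analysis Services connection strings accept an Application Name property, and that value is what DISCOVER_CONNECTIONS surfaces for the client; a hedged ADOMD.NET sketch, with placeholder catalog and application names:

```lang-cs
using Microsoft.AnalysisServices.AdomdClient;

class TaggedConnection
{
    static void Main()
    {
        // "Application Name" is what DISCOVER_CONNECTIONS reports for the client.
        string conString =
            "Data Source=localhost;Catalog=MyCubeDb;Application Name=MyMonitoringTool";  // placeholders

        using (var conn = new AdomdConnection(conString))
        {
            conn.Open();
            // Run queries as usual; this connection now shows up with a filterable name.
        }
    }
}
```

If that works, the connection can then be isolated with a filter such as `SELECT * FROM $SYSTEM.DISCOVER_CONNECTIONS WHERE CONNECTION_HOST_APPLICATION = 'MyMonitoringTool'`.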
Stavros Koureas
(170 rep)
Nov 24, 2016, 02:08 PM
• Last activity: Aug 21, 2024, 07:23 PM
5
votes
3
answers
27260
views
SSAS Tabular: ImpersonationMode that is not supported for processing operations
I have a SQL 2016 SP1 SSAS Tabular instance. I've deployed a model with the following properties:
When I try to process the database or a table I get an error **"The datasource contains an ImpersonationMode that is not supported for processing operations"**.
But if I change the impersonation info in the connection properties to use the service account instead of the current user, it works fine.
We also don't get this issue if we change the Default Mode to DirectQuery instead of Import, but we need to use Import because we need the DAX USERNAME function for row-level security.
I am an admin on the SSAS instance and also an admin on the SQL Server instance that is the data source. Why can't I process the SSAS tabular model as my user?
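For reference, the impersonation setting can also be switched programmatically through the Tabular Object Model; a hedged sketch that points a model's data sources at the service account (instance and database names are placeholders):

```lang-cs
using Microsoft.AnalysisServices.Tabular;  // TOM

class FixImpersonation
{
    static void Main()
    {
        var server = new Server();
        server.Connect(@"Data Source=localhost\TABULAR");             // placeholder

        Model model = server.Databases.GetByName("MyModel").Model;    // placeholder

        foreach (DataSource ds in model.DataSources)
        {
            if (ds is ProviderDataSource pds)
            {
                // ImpersonateCurrentUser is not supported for processing Import-mode
                // models; the service account (or stored Windows credentials) is.
                pds.ImpersonationMode = ImpersonationMode.ImpersonateServiceAccount;
            }
        }

        model.SaveChanges();
        server.Disconnect();
    }
}
```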


Adrian S
(326 rep)
Feb 22, 2017, 02:13 PM
• Last activity: Aug 8, 2024, 04:15 PM
0
votes
0
answers
18
views
SSAS DSV logical primary key performance impact
**TLDR:** does setting up a logical primary key on a named query in SSAS DSV (data source view) have any performance benefits and, if so, which?
---
Named queries in an SSAS DSV don't necessarily need a logical primary key in order for dimensions/measures to use them. The DSV named queries also don't need relationships between them.
I have found that adding relationships doesn't have any performance benefits. So my question is the same for the logical primary key: does adding it have any perf benefits?
I was only able to find one article that mentions that adding a logical primary key does improve performance:
> In SSAS Data Source Views, logical primary keys and relationships are essential for data integrity and query performance
and
> Logical primary keys and relationships can help you improve the performance, accuracy, and usability of your SSAS solutions
However, they don't explain how it impacts performance (i.e. whether an index is somehow added on those columns, or what have you).
PajLe
(133 rep)
Aug 5, 2024, 03:48 PM
Showing page 1 of 20 total questions