Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
157
views
How to secure SQL Server in an enterprise environment?
So I've been tasked with learning how to secure our SQL Servers. Here's the scenario:
Individual users have read access to a table in SQL Server. This table has approx. 33 million rows and is growing (telemetry data).
Some *genius* user discovered Power BI and installed it (Desktop), along with the on-prem data gateway on their workstation. Then they did the same thing for 40 of their friends.
THEN they created a .pbix report to query this table.
THEN they published the report to their personal workspace.
THEN they emailed the report to their 40 friends with instructions on how to set up the gateway on each workstation - and how to enable the data sync schedule for twice a day.
So now our 33 million records are exiting our network (40 users * twice a day = 80 times per day). Besides being a strain on the SQL Server - we just don't want that data being published to a service we don't have any control over by users that aren't authorized to be doing that.
So the questions are:
How can we prevent this scenario? What's the proper way to set up a database so a user can access the data, but not be able to publish the data off-site? Are there firewall rules we can set up to block the gateway service? My understanding was it just needed port 443 outbound - which we obviously can't block.
Any recommended training courses that cover this kind of stuff?
Thanks!
Sam Axe
(183 rep)
Jun 10, 2019, 08:17 PM
• Last activity: Jul 23, 2025, 06:08 AM
0
votes
1
answers
168
views
In a star schema, how to define the fact table when the data has different levels of dimensions?
In a Power BI project, I need some help defining the fact table(s).
Eventually, I would like to have a nice star schema to present some basic visuals, such as stacked bar charts, to the client.
I'm new to BI and I'm afraid I'm missing something very obvious here.
Description of the data
=======================
I have data stored in the following tables:
**City**, **Company**, **Budget**, **CategorizedBudget** and **Expense**.
The **City** table contains the city name and region.
| ID | City | Region |
| -- | ----------- | ------ |
| 1 | RocksVille | Center |
| 2 | GoldCity | North |
The **Company** table contains the company name and city.
| ID | Company | FK_CityID |
| -- | ----------- | --------- |
| 1 | Nissan | 1 |
| 2 | Herosh LTD | 2 |
The **Budget** table contains the companies' annual budget amounts.
| ID | FK_CompanyID | Year | Amount |
| -- | ------------ | ---- | --------- |
| 1 | 1 | 2019 | 1000000 $ |
| 2 | 1 | 2020 | 1400000 $ |
The **CategorizedBudget** table contains **some** of the annual budget amounts of those companies, divided by category.
| ID | FK_BudgetID | Category | Amount |
| -- | ----------- | ---------- | -------- |
| 1 | 1 | Category A | 400000 $ |
| 2 | 1 | Category B | 300000 $ |
| 3 | 1 | Category C | 300000 $ |
The **Expense** table contains those companies' expense amounts per tertile, divided by category.
| ID | FK_CompanyID | Year | Tertile | Category | Amount |
| -- | ------------ | ---- | ------- | ---------- | -------- |
| 1 | 1 | 2019 | 1 | Category A | 100000 $ |
| 2 | 1 | 2019 | 1 | Category B | 100000 $ |
| 3 | 1 | 2019 | 1 | Category C | 150000 $ |
| 4 | 1 | 2019 | 2 | Category A | 150000 $ |
| 5 | 1 | 2019 | 2 | Category B | 50000 $ |
| 6 | 1 | 2019 | 2 | Category C | 150000 $ |
| 7 | 1 | 2019 | 3 | Category A | 100000 $ |
| 8 | 1 | 2019 | 3 | Category B | 150000 $ |
| 9 | 1 | 2019 | 3 | Category C | 50000 $ |
Expected Result
===============
The visuals should be:
1. A stacked column chart that shows the sum of the annual **budget** amounts by the companies' city regions
2. A stacked column chart that shows the sum of the annual **expense** amounts by the companies' city regions
3. When drilling into the **expenses**, show the amounts by tertile.
Dimensions
==========
It appears that the dimensions are: **Date** (year, tertile), **Location** (city, region), **Category** (name), **Company** (name).
Fact/s
======
The fact table is where I'm getting confused. I considered a few approaches:
First thought
-------------
Merging **Budget**, **CategorizedBudget** and **Expenses** into one fact table.
| ID | FK_CompanyID | Year | Tertile | Category | ExpensesAmount | BudgetAmount |
| -- | ------------ | ---- | ------- | ---------- | ---------------- | ---------------- |
| 1 | 1 | 2019 | 1 | Category A | 100000 $ | ? |
| 2 | 1 | 2019 | 1 | Category B | 100000 $ | ? |
| 3 | 1 | 2019 | 1 | Category C | 150000 $ | ? |
| 4 | 1 | 2019 | 2 | Category A | 100000 $ | ? |
| 5 | 1 | 2019 | 2 | Category B | 100000 $ | ? |
| 6 | 1 | 2019 | 2 | Category C | 150000 $ | ? |
But the annual budget amounts are not divided into tertiles.
Second thought:
--
Merging **Budget** and **CategorizedBudget** into one fact table, and keeping **Expenses** as a separate fact.
| ID | FK_CompanyID | Year | Category | BudgetAmount |
| -- | ------------ | ---- | ---------- | ---------------- |
| 1 | 1 | 2019 | Category A | 100000 $ |
| 2 | 1 | 2019 | Category B | 100000 $ |
| 3 | 1 | 2019 | Category C | 150000 $ |
| 4 | 1 | 2020 | Category A | ? |
| 5 | 1 | 2020 | Category B | ? |
| 6 | 1 | 2020 | Category C | ? |
But some of the annual budget amounts are not categorized.
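As an illustration of this second thought (a sketch only; the 'Uncategorized' row is my own assumption for absorbing the part of the annual budget that has no category breakdown), the budget fact could be built at a Year × Category grain like this:

```sql
-- Categorized portion of each annual budget, plus a remainder row per budget.
SELECT b.FK_CompanyID,
       b.[Year],
       cb.Category,
       cb.Amount
FROM Budget AS b
JOIN CategorizedBudget AS cb ON cb.FK_BudgetID = b.ID
UNION ALL
SELECT b.FK_CompanyID,
       b.[Year],
       'Uncategorized' AS Category,
       b.Amount - COALESCE(SUM(cb.Amount), 0) AS Amount
FROM Budget AS b
LEFT JOIN CategorizedBudget AS cb ON cb.FK_BudgetID = b.ID
GROUP BY b.FK_CompanyID, b.[Year], b.ID, b.Amount;
```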
Third thought:
--
Use three fact tables: **Budget**, **CategorizedBudget** and **Expenses**.
But it doesn't look like a star at all, and I'm afraid it will get complicated if I have to add more facts and dimensions later.
It also feels like I'm duplicating something here.
Any help would be appreciated




ShayD
(1 rep)
Jan 4, 2022, 05:10 PM
• Last activity: Jul 6, 2025, 03:03 PM
0
votes
1
answers
523
views
Power BI refresh dataflow keep a snapshot of data
My requirement is to be able to save the past state of an entity at a certain point in time. Currently, when the dataflow is refreshed, the past state is lost and only the current one is displayed. Is there a way to do this? I tried setting up incremental refresh, but I'm not sure it fits the requirement.
The ideal would be a table with a snapshot-date column populated with the current refresh time, so that each refresh adds new rows with the new refresh time instead of replacing all the rows.
Yousra
(1 rep)
Jun 15, 2021, 10:32 AM
• Last activity: Apr 17, 2025, 06:01 PM
0
votes
0
answers
44
views
Help with the complex DAX measure
I am struggling with a DAX measure. Here are the fact table and two dimension tables I have:
factAction(dimAccountID, dimActionID, Date, ActionQty)
dimAction(dimActionID, ActionCode, ActionDescription)
dimDate(Date, WorkingDay)
The relationships are:
dimDate.Date -> factAction.Date
dimAction.dimActionID->factAction.dimActionID
And I have a DAX measure for
ActionQty = SUM('factAction'[ActionQty])
We want to report on the number of actions with ActionCode AAA. Fine, that's easy: just select ActionCode = AAA in the Power BI filter.
Now we also want to report, for those accounts that had ActionCode AAA, the sum of ActionQty for ActionCode AAB where the action Date is within 2 working days of the AAA action Date.
Any suggestions would be welcome!
Edit: added sample data and expected output
| AccountID | Date | ActionCode | ActionQty |
|---|---|---|---|
| AC1 | 06-Apr-20 | AAA | 1 |
| AC1 | 09-Apr-20 | AAB | 1 |
| AC1 | 07-Apr-20 | BBB | 1 |
| AC2 | 16-Apr-20 | AAA | 1 |
| AC2 | 20-Apr-20 | AAB | 1 |
So when filtering for AccountID AC1 and April 2020, this new measure would return 0 (because the AAB action for this AccountID falls more than 2 working days after the AAA action).
When filtering for AccountID AC2 and April 2020, the new measure should return 1 (because 20 April is within 2 working days of 16 April).
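For illustration, a minimal sketch of one way such a measure might be written (not validated against real data; table and column names are taken from the model above, and "within 2 working days" is interpreted as "on or before the second working day after the account's AAA date"):

```dax
AAB Qty Within 2 Working Days =
SUMX (
    VALUES ( factAction[dimAccountID] ),        -- one row per account in the current filter context
    VAR AAADate =
        CALCULATE ( MIN ( factAction[Date] ), dimAction[ActionCode] = "AAA" )
    RETURN
        IF (
            NOT ISBLANK ( AAADate ),
            VAR Cutoff =                         -- date of the 2nd working day after the AAA date
                MAXX (
                    TOPN (
                        2,
                        FILTER (
                            ALL ( dimDate ),
                            -- assumes WorkingDay is a TRUE/FALSE flag
                            dimDate[Date] > AAADate && dimDate[WorkingDay] = TRUE ()
                        ),
                        dimDate[Date], ASC
                    ),
                    dimDate[Date]
                )
            RETURN
                CALCULATE (
                    SUM ( factAction[ActionQty] ),
                    dimAction[ActionCode] = "AAB",                  -- override any AAA selection
                    FILTER (
                        ALL ( dimDate[Date] ),
                        dimDate[Date] > AAADate && dimDate[Date] <= Cutoff
                    )
                )
        )
)
```

With the sample data this should return blank (effectively 0) for AC1 and 1 for AC2.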
Steve
(21 rep)
Apr 13, 2020, 06:55 PM
• Last activity: Mar 6, 2025, 09:32 AM
0
votes
1
answers
42
views
How to use current user credentials in published Power BI report against Snowflake?
We are using Power BI as our reporting and dashboard tool for our Snowflake data warehouse. We need a published report to use the credentials of the person calling the report rather than the person who created and published it, and we have been unable to determine a way of doing this.
We have two data visibility restrictions that need to be enforced. First, based on the user's role, they are limited as to what columns of data they can see; we're enforcing this with a data masking policy in Snowflake. Second, we have a need to restrict which rows are visible to certain individuals based on their role; we have a row access policy in Snowflake which determines through a user-defined function call what customers' data are visible to a user.
Both of these work perfectly well in Snowflake, but when called from Power BI, the data returned is based on the user who published and shared the report rather than the one executing it.
Is there a way to pass through the calling user's ID instead of the publisher's?
Additional information:
We are using DirectQuery in a published report to try to get filtered and masked results based on the calling user's role.
Person A has the ability to view all 7 columns on a table.
Person B has the ability to see 5 of these columns
Person C has the ability to see 4 of these columns, but only where the value of column 1 is within a certain group of values
Person D has the ability to see 4 of these columns, but only where the value of column 1 is within a certain group of values that is different from the values seen by Person C.
Person A creates a report that shows all 7 columns for all rows.
We have masking rules that replace values for columns that are not in a user's allowed column list with "REDACTED".
We also have a row access policy that looks up a user's list of allowed values for column 1 and filters visible rows based on that list.
We have been unable to get the report to use anything other than the publisher's access, which exposes legally sensitive information.
Kris Cook
(11 rep)
Jul 26, 2024, 03:41 PM
• Last activity: Feb 5, 2025, 03:15 PM
0
votes
2
answers
93
views
How to simplify this intersect query?
SELECT * from M
left JOIN K on M.id = k.id
left JOIN W on K.str = W.str where str = N'أخبار'
INTERSECT
SELECT * from M
left JOIN K on m.id = k.id
left JOIN W on K.str = W.str where str = N'فلسطين'
How else to get all rows in M that have a relationship to both أخبار AND فلسطين in W?
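For illustration, one alternative shape (a sketch, assuming K.id references M.id and K.str matches W.str as in the query above) is to test for each word with an EXISTS instead of intersecting two full result sets:

```sql
SELECT m.*
FROM M AS m
WHERE EXISTS (SELECT 1
              FROM K JOIN W ON W.str = K.str
              WHERE K.id = m.id AND W.str = N'أخبار')
  AND EXISTS (SELECT 1
              FROM K JOIN W ON W.str = K.str
              WHERE K.id = m.id AND W.str = N'فلسطين');
```

Note that INTERSECT over SELECT * also compares the W columns, which differ between the two branches.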
simpatico
(99 rep)
May 19, 2024, 12:17 PM
• Last activity: May 20, 2024, 01:01 PM
0
votes
2
answers
77
views
How to calculate revenue by sales team if sales agents change teams?
Agents

| ID | Agent |
| -- | ----- |
| 1  | A     |
| 2  | B     |

Teams

| ID | Team |
| -- | ---- |
| 1  | AA   |
| 2  | BB   |

Memberships

| ID | AgentID | TeamID | Start Date | End Date   |
| -- | ------- | ------ | ---------- | ---------- |
| 1  | 1       | 1      | 2023-01-01 | 2023-12-31 |
| 2  | 1       | 2      | 2024-01-01 | 2099-12-31 |

Clients

| ID | AgentID |
| -- | ------- |
| 1  | 1       |

Orders

| Date       | ClientID | Rev |
| ---------- | -------- | --- |
| 2023-02-02 | 1        | 10  |
| 2024-02-01 | 1        | 20  |
How to get the following table based on the above?
| Team | Rev |
| ---- | --- |
| AA   | 10  |
| BB   | 20  |
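For illustration, a minimal SQL sketch of one way to produce that result (assuming an order's revenue belongs to whichever team the client's agent was a member of on the order date, per the Memberships date range):

```sql
SELECT t.Team,
       SUM(o.Rev) AS Rev
FROM Orders      AS o
JOIN Clients     AS c ON c.ID = o.ClientID
JOIN Memberships AS m ON m.AgentID = c.AgentID
                     AND o.[Date] BETWEEN m.[Start Date] AND m.[End Date]
JOIN Teams       AS t ON t.ID = m.TeamID
GROUP BY t.Team;
```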
simpatico
(99 rep)
May 15, 2024, 08:26 AM
• Last activity: May 16, 2024, 01:16 PM
-1
votes
1
answers
177
views
Create a category column or a table for each category?
Which is better?
Table: Words
Columns: ID, Word, ParentWordID, CatID
Table: Categories
Columns: CatID, Category
Or:
Table: Words
Columns: ID, Word
Table: Categories
Columns: CatID, Categories
Table: Cat1
Columns: WordID
...
Table: CatN
Columns: WordID
One possible advantage of the 2nd option is that not all Words need to have a CatID. But the same could be said for ParentWordID, which is empty most of the time. Maybe that too could be replaced with a separate table, and a table for each parent word?
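For comparison, a minimal sketch of the first option as DDL (column names come from the question; making ParentWordID and CatID nullable is an assumption, reflecting that not every word has a parent or a category):

```sql
CREATE TABLE Categories (
    CatID    INT           PRIMARY KEY,
    Category NVARCHAR(100) NOT NULL
);

CREATE TABLE Words (
    ID           INT           PRIMARY KEY,
    Word         NVARCHAR(100) NOT NULL,
    ParentWordID INT NULL REFERENCES Words (ID),         -- optional, empty for most words
    CatID        INT NULL REFERENCES Categories (CatID)  -- optional category
);
```

This keeps one table per entity and avoids creating a new table every time a category is added.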
simpatico
(99 rep)
Apr 5, 2024, 07:07 PM
• Last activity: Apr 5, 2024, 07:54 PM
0
votes
0
answers
19
views
How does M:M relation in Power BI behave (cross join or simple filter)?
Generally, in an M:M relation, I think of it as doing something like a cross join.
How does an M:M relation behave in Power BI?
Is it a cross join, or a simple filter like a 1:M relationship?
variable
(3590 rep)
Mar 18, 2024, 02:30 PM
0
votes
1
answers
837
views
Unable to create linked server to Power BI XMLA endpoint in SSMS
I'm unable to create a linked server to a **Power BI XMLA endpoint** in **SSMS**, and I'm getting the error below:
>The test connection to the linked server failed. Additional information: An exception occurred while executing a Transact-SQL statement or batch.
>
>(Microsoft.SqlServer.ConnectionInfo) 'Cannot initialize the data source object of OLE DB provider "MSOLAP" for linked server "POWERBIAPI". OLE DB provider "MSOLAP" for linked server "POWERBIAPI" returned message "Errors in the OLE DB provider.
>
>Failed to load managed ADAL component". (Microsoft SQL Server, Error: 7303)
Khanjan Shah
(1 rep)
Mar 10, 2023, 12:12 PM
• Last activity: Feb 13, 2024, 08:06 PM
1
votes
0
answers
24
views
PowerBI Data Modeling: Modeling with sub-dimension and sub-facts
We collect test data from network tests. There are common dimensions and facts for all the different tests, but there are also some facets of the dimensions and facts that are unique.
If this were an OODB, I would create a super-class with all the common properties and sub-classes for the unique ones. But Power BI is a traditional relational model.
I have the following notional model that I want feedback on. The key terms are:
DNS = DNS lookup tests
OCSP = the protocol used to check an SSL certificate's status
Data Throughput = a full session (does the DNS lookup to an IP address, performs the OCSP check, and does the data transfer)
This looks nice and simple, but I wonder whether a sub-dimension such as OCSP should be linked through the common dimension table or point directly to the common facts. The link between OCSP and the COMMON dimension is 1:1. But I can see this requiring two joins to slice on the OCSP facet to get to the facts in Common, and three joins to get to the OCSP facts. (So maybe I should flatten this model. It would not look as nice, but it might be more performant.)
BTW, there are already about 150M rows in the fact table (the dimension tables are about 1K rows, so they are not a big concern).

Dr.YSG
(409 rep)
Dec 20, 2023, 10:44 PM
1
votes
1
answers
51
views
How do I write a DAX statement that returns the current counts whenever I filter for a Team?
Below is my sample table. I'd like to write a DAX statement that returns the current count whenever I filter for Team A or B. Ideally, the current count per team would be shown on a card in Power BI.
| team | yearmonth | active_cnt |
|-------:|:----------|------------:|
| a | 2023-06 | 10 |
| a | 2023-07 | 15 |
| a | 2023-08 | 30 |
| b | 2023-06 | 15 |
| b | 2023-07 | 25 |
| b | 2023-08 | 30 |
Below was my attempt but it returned a blank response:
CurrentCountPerTeam =
SUMX(
FILTER(
'Query1',
'Query1'[yearmonth] = YEAR(TODAY()) * 100 + MONTH(TODAY())
),
'Query1'[active_cnt]
)
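A minimal sketch of one possible rewrite (not from the original post; it assumes yearmonth is stored as text such as "2023-08", so the latest month present can simply be taken with MAX instead of being compared to a number built from TODAY()):

```dax
CurrentCountPerTeam =
VAR LatestYearMonth =
    MAX ( 'Query1'[yearmonth] )            -- latest month present for the team(s) in filter context
RETURN
    CALCULATE (
        SUM ( 'Query1'[active_cnt] ),
        'Query1'[yearmonth] = LatestYearMonth
    )
```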
Sam Johnson
(21 rep)
Sep 8, 2023, 06:02 PM
• Last activity: Sep 20, 2023, 07:51 PM
0
votes
0
answers
35
views
Extended Events - is that a proof that the issue is not at the SQL Server side?
I have a problem with one Power BI dashboard and I would like to ask for your advice on whether my logic is correct.
We are migrating a few databases from one server (Server1) to another (Server2). At this moment we are in the testing phase. On one of the reporting servers (which is going to stay where it is for now, but will be moved in the near future) we have a Power BI dashboard which pulls data from the databases we are moving. The refresh process against Server1 was taking ~40s, while against Server2 it takes ~2 minutes.
The time difference for this particular issue is not a huge deal, as the dashboard is refreshed in the background regularly, so the performance degradation will not be noticeable to end users. But we are a little concerned that it may indicate a problem at the server level. So I need a way to check whether the root cause of the performance degradation is on the SQL side or not.
So on Server2 (which is quite quiet, as it is a test server) I opened a new Extended Events session that was supposed to capture sql_batch_completed events on the user databases. Then I asked my colleague to trigger a refresh. Once it completed, I filtered the results to the ~3-minute window covering the report execution, grouped by event name, and summed the duration. The result was about ~20s.
My question is whether that is 100% valid proof that the performance degradation is not caused by the new SQL engine or by the new server being set up incorrectly. Or am I missing something?
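For reference, a minimal sketch of the kind of session described (the session and file names here are illustrative); capturing rpc_completed alongside sql_batch_completed may also be worth considering, since client libraries often send parameterised RPC calls that a batch-only session would miss:

```sql
CREATE EVENT SESSION [RefreshTrace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION ( sqlserver.database_name, sqlserver.client_app_name )
    WHERE ( sqlserver.is_system = 0 )
),
ADD EVENT sqlserver.rpc_completed (
    ACTION ( sqlserver.database_name, sqlserver.client_app_name )
    WHERE ( sqlserver.is_system = 0 )
)
ADD TARGET package0.event_file ( SET filename = N'RefreshTrace.xel' );

ALTER EVENT SESSION [RefreshTrace] ON SERVER STATE = START;
```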
Radek Gąska
(192 rep)
Aug 11, 2023, 08:14 AM
14
votes
2
answers
11220
views
Will SSRS be extinct soon, and PowerBI the new model?
I read SQL Server 2017 will now include PowerBI Server. They also moved SSRS to a different installer, so it will not come packaged in the original SQL Server installation. Does this mean Microsoft will eventually try to deprecate SSRS? Should our team try to build new reports in PowerBI and transition the previous SSRS reports?
user129291
Sep 15, 2017, 03:18 AM
• Last activity: Feb 12, 2023, 10:58 PM
0
votes
1
answers
211
views
Converting EAVT table into SCD type 2
After a lot of research and brain picking, I'm still unable to find a good/clean solution to convert an entity-attribute-value-timestamp table into an SCD type 2 dimension.
Here's the issue:
I have a CRM source that stores all history in an EAVT model (Entity / Attribute / Value of the attribute / valid_from / valid_to).
So for every object (Company, Product, etc.) I have a table with the current state, which is in a relational model, and another history table that contains all value changes to all attributes, with valid_from/valid_to columns for the validity of the values themselves.
I want to be able to merge these two tables into an SCD table with valid_from/valid_to columns and one column per attribute.
To give an example:
Company has two tables:
- **Current state of the Companies:**
| company_id | name | number_of_employees | city |
| -------- | -------------- | -------------- | -------------- |
| 1 | Company 1 | 500 | Paris |
| 2 | Company 2 | 500 | Paris |
- **History Table:**
| company_id | attribute | value | valid_from | valid_to |
| -------- | -------------- | -------------- | -------------- | -------------- |
| 1 | city | New York | 01/01/2020 | 01/05/2022 |
| 1 | city | Paris | 01/05/2022 | 12/31/9999 |
| 1 | number_of_employees | 50 | 01/01/2021 | 01/01/2022 |
| 1 | number_of_employees | 100 | 01/01/2022 | 12/31/9999 |
What I want to have as a result is the following:
| company_id | name| city | number_of_employees | valid_from | valid_to | is_active |
| -------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| 1 | Company 1 | New York | null | 01/01/2020 | 01/01/2021 | false |
| 1 | Company 1 | New York | 50 | 01/01/2021 | 01/01/2022 | false |
| 1 | Company 1 | New York | 100 | 01/01/2022 | 01/05/2022 | false |
| 1 | Company 1 | Paris | 100 | 01/05/2022 | 12/31/9999 | true |
So based on this example, we have a company that started on 01/01/2020 with New York as its city; the number of employees wasn't populated at that time.
We then modified the company on 01/01/2021 to set the number of employees to 50.
We modified the company again on 01/01/2022 to change the number of employees to 100, and then changed the city from New York to Paris on 01/05/2022.
This gives us 4 states for the company, so our SCD should contain a row per state, i.e. 4 rows.
The dates should be calculated so the rows chain together: a row's valid_from should be set to the valid_to of the changed attribute's previous value in the "history" table, and the previous row's valid_to should be set to the valid_from of the changed attribute's new value.
To add more complexity to the task, imagine we have about 120 attributes, but also that if a company was never changed (just created and still in the same state as at creation) it won't appear in the history table at all. So in our example, Company 2 will not exist in the history table and will have to be read from the current-state table into the SCD (a union between the current-state table and the history result). Fun, right! :)
To give you a sense of the technical environment, the CRM is hubspot, data is replicated from hubspot to BigQuery and the reporting tool is Power BI.
I have tried to use pivoting in both Power BI and BigQuery, which is the standard solution for EAV model tables, but I'm stuck on the calculation of valid_from/valid_to in the resulting SCD. (Example of using pivoting here: https://dba.stackexchange.com/questions/20275/solutions-for-reporting-off-of-an-eav-structured-database )
I need one process that can be applied to multiple tables (because this example is only for Company, but I also have other objects that I need to convert into SCDs).
So what is the best way to convert this EAVT data into an SCD without falling into a labyrinth of hard-coding and performance issues? And how can the valid_from/valid_to be calculated dynamically?
Whether it's BigQuery or Power Query or just theoretical, any solutions, tips, ideas or plain opinions are highly appreciated, as this is the last step in the adoption of a whole data culture at the company I work for, and if I cannot make this work, well... my credibility will take a hit! So please help a fellow lost IT professional! :D
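For what it's worth, a minimal sketch of one possible shape for the interval logic in standard SQL (illustrative only; it assumes a history table with columns company_id, attribute, value, valid_from and valid_to, and hard-codes two attributes where the real case would generate all ~120 pivot columns):

```sql
WITH change_dates AS (          -- every date at which anything changed, per company
    SELECT company_id, valid_from AS change_date FROM history
    UNION                        -- UNION DISTINCT in BigQuery
    SELECT company_id, valid_to FROM history
),
intervals AS (                   -- consecutive change dates become [valid_from, valid_to) intervals
    SELECT company_id,
           change_date AS valid_from,
           LEAD(change_date) OVER (PARTITION BY company_id ORDER BY change_date) AS valid_to
    FROM change_dates
)
SELECT i.company_id,
       i.valid_from,
       i.valid_to,
       MAX(CASE WHEN h.attribute = 'city'                THEN h.value END) AS city,
       MAX(CASE WHEN h.attribute = 'number_of_employees' THEN h.value END) AS number_of_employees
FROM intervals AS i
LEFT JOIN history AS h
       ON h.company_id = i.company_id
      AND h.valid_from <= i.valid_from
      AND h.valid_to   >= i.valid_to
WHERE i.valid_to IS NOT NULL
GROUP BY i.company_id, i.valid_from, i.valid_to;
```

Never-changed companies such as Company 2 would still have to come in via a union with the current-state table, which is also where the company name joins in.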
Rami Ouanes
(1 rep)
May 4, 2022, 02:07 PM
• Last activity: Dec 10, 2022, 06:04 AM
2
votes
2
answers
76539
views
Login Error while connecting SQL Database through Power BI?
I get the error message below in Power BI Online when using an enterprise gateway connected to an on-premises SQL Server.
Unable to connect: We encountered an error while trying to connect.
> Details: Microsoft SQL: A connection was successfully established with
> the server, but then an error occurred during the login process.
> (provider: ssl provider, error: 0 - The signature of the certificate
> cannot be verified)
I have login credentials (sysadmin) on the server.
What is the cause of this login error? Any suggestions and resolution steps?
CR241
(1565 rep)
Jun 23, 2017, 04:12 PM
• Last activity: Jul 23, 2022, 04:22 PM
2
votes
1
answers
778
views
SQL Dynamic Query based on user input in PowerBI
I am pulling data from INSQL using the SQL query below:
SET QUOTED_IDENTIFIER OFF
SELECT * FROM OPENQUERY(INSQL, "SELECT DateTime, [X],[Y], [Z], [A]
FROM WideHistory
WHERE wwRetrievalMode = 'Cyclic'
AND wwResolution = 1000
AND wwVersion = 'Latest'
AND DateTime >= '20210601 20:02:33.000'
AND DateTime <= '20210607 20:02:34.000'")
I'm using this table in Power BI and want [X], [Y], [Z], [A] and the DateTime values to change dynamically based on user input.
Any advice on this would be greatly appreciated. I am new to Power BI and SQL.
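For what it's worth, OPENQUERY only accepts a literal string, so one common workaround on the T-SQL side is to build the remote query as a string and run it with EXEC (...) AT the linked server. A minimal sketch (the values below are placeholders standing in for the user input; in production they should be validated or whitelisted before being concatenated):

```sql
DECLARE @from VARCHAR(30)   = '20210601 20:02:33.000',   -- placeholder user input
        @to   VARCHAR(30)   = '20210607 20:02:34.000',
        @cols NVARCHAR(200) = N'[X], [Y], [Z], [A]',      -- placeholder column list
        @sql  NVARCHAR(MAX);

SET @sql = N'SELECT DateTime, ' + @cols + N'
             FROM WideHistory
             WHERE wwRetrievalMode = ''Cyclic''
               AND wwResolution = 1000
               AND wwVersion = ''Latest''
               AND DateTime >= ''' + @from + N'''
               AND DateTime <= ''' + @to + N'''';

EXEC (@sql) AT INSQL;   -- INSQL is the linked server from the query above
```

From Power BI Desktop, the usual equivalent is to build this text in Power Query using query parameters and pass it as the native query.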
Felix Silva
(21 rep)
Jun 21, 2021, 01:02 AM
• Last activity: Jul 6, 2022, 11:00 PM
0
votes
1
answers
32
views
Filtering by value parameter
We're embedding PowerBI reports in a web page, and setting filters from our own code.
The main fact table is Event, which has a Date and a Duration (in milliseconds).
Desired output is something like
| Date       | # of events | % Standard | ... |
| ---------- | ----------- | ---------- | --- |
| 01/02/2022 | 34          | 95%        | ... |
| 01/03/2022 | 193         | 100%       | ... |
| ...        |             |            |     |
% Standard means "percentage of these events that have a duration less than a value". Now, I can certainly CALCULATE, SUM and FILTER my way to doing so, except...
That value is input from the user. It'll change from run to run. The user can put in 1 ms, 55 ms, 123456 ms. How do I get that arbitrary value from the UI to somewhere I can get at it in the formula?
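One common pattern (a sketch, not the only option): keep a disconnected parameter table in the model, have the embedding code set a filter on its value column through the Power BI embedding API, and read the selection in the measure with SELECTEDVALUE. The ThresholdParameter table and its Value column below are assumptions, not part of the existing model:

```dax
% Standard =
VAR ThresholdMs =
    SELECTEDVALUE ( 'ThresholdParameter'[Value], 1000 )   -- hypothetical disconnected table; 1000 ms default
RETURN
    DIVIDE (
        CALCULATE ( COUNTROWS ( 'Event' ), 'Event'[Duration] < ThresholdMs ),
        COUNTROWS ( 'Event' )
    )
```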
Stu
(287 rep)
Feb 28, 2022, 07:52 PM
• Last activity: Mar 7, 2022, 11:36 PM
0
votes
1
answers
2239
views
DAX Calculate the sum of values for a year with start date and end date
I can't manage, in **DAX**, to calculate for a given year the sum of the values that fall within that year; I would like the proportion of each value attributed to a year according to its start date and end date.
For example, I want to be able to calculate, for each year, the value of the "workload" during that year: so for 2019, the share of 150 between 01/02/2019 and the end of the year; then the share of 150 between 01/01/2020 and 12/31/2020; then the share between 01/01/2021 and 05/05/2021 (without forgetting phase 1-2 and the other IDs, obviously).
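Since the source table is only shown in the screenshot, the names below are assumptions: a Tasks table with StartDate, EndDate and Workload columns, and a Date table providing the year on the report axis. A minimal sketch of the usual proration pattern, allocating each workload to a period by the share of its days that fall inside that period:

```dax
Workload in Period =
SUMX (
    Tasks,
    VAR DaysTotal =
        DATEDIFF ( Tasks[StartDate], Tasks[EndDate], DAY ) + 1
    VAR DaysInScope =
        COUNTROWS (
            FILTER (
                VALUES ( 'Date'[Date] ),                 -- dates of the year currently on the axis
                'Date'[Date] >= Tasks[StartDate]
                    && 'Date'[Date] <= Tasks[EndDate]
            )
        )
    RETURN
        DIVIDE ( Tasks[Workload] * DaysInScope, DaysTotal )
)
```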

Gabriel Valvin
(63 rep)
Feb 22, 2022, 08:19 AM
• Last activity: Feb 23, 2022, 03:53 PM
0
votes
1
answers
29
views
Using tablenames from a config table for joins
SELECT
$model_master_id$,
$model_market_segment$,
$country$,
$ctry_cd$,
$score_run_id$,
$period_id$,
$prediction_from_dt$,
$prediction_to_dt$,
$buy_acq$,
$eval_prd$,
$scr.model_score$,
$scr.model_score_bin$
from
(select
model_master_id,
model_version_id,
score_run_id,
$site_rsdnc_id$,
segment_name,
model_score,
model_score_bin
FROM $master_scoring_table_name$) scr
LEFT JOIN
(SELECT
$site_rsdnc_id$,
$segment_column_name$
FROM $feature_table_name$) f
ON f.$site_rsdnc_id$=scr.$site_rsdnc_id$;
Could there be a possibly simpler way to achieve the above query from my attached Excel config, other than dynamic SQL? Everything within dollar quotes comes from my config. I tried dynamic SQL with for loops, but since I have multiple table names coming from my config, the dynamic SQL runs for a very long time.
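For comparison, with table names coming from a config table there is generally no way around dynamic SQL for the identifiers themselves; what usually helps is building and executing each statement once per config row rather than looping per value. A minimal sketch with placeholder names (these are not the actual config columns):

```sql
DECLARE @scoring_table SYSNAME = N'dbo.master_scores',   -- would be read from the config table
        @feature_table SYSNAME = N'dbo.features',
        @sql           NVARCHAR(MAX);

SET @sql = N'SELECT scr.model_master_id, scr.model_score, scr.model_score_bin
             FROM ' + @scoring_table + N' AS scr
             LEFT JOIN ' + @feature_table + N' AS f
                    ON f.site_rsdnc_id = scr.site_rsdnc_id;';

EXEC sys.sp_executesql @sql;
```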

Reshmi Nair
(31 rep)
Feb 3, 2022, 09:12 PM
• Last activity: Feb 4, 2022, 07:51 AM
Showing page 1 of 20 total questions