Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
143
views
Snowflake SQL replace function - leverage a dictionary?
It looks like TRANSLATE works for multiple translations of single characters (abc -> xyz), but REPLACE in Snowflake will only do one replacement at a time.
I have a list of JSON objects in a Snowflake database where I need to replace 60 static items with 60 other items. I was wondering if REPLACE (or another function) could leverage a dictionary table for this.
But I'm guessing the simplest thing I can do is a for-loop using a dictionary table, along the lines of "for each X in the dictionary table, replace dict.A with dict.B in the fact table."
Or is there a simpler method that doesn't require a for-loop?
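For illustration, a minimal Snowflake Scripting sketch of the loop idea, assuming a hypothetical dictionary table replacement_dict(old_value, new_value) and a hypothetical fact table fact_table(json_text); it runs as an anonymous block (in the classic console, wrap it in EXECUTE IMMEDIATE $$ ... $$):
DECLARE
    old_val STRING;
    new_val STRING;
    c1 CURSOR FOR SELECT old_value, new_value FROM replacement_dict;  -- hypothetical dictionary table
BEGIN
    FOR rec IN c1 DO
        old_val := rec.old_value;
        new_val := rec.new_value;
        -- one UPDATE per dictionary row; REPLACE swaps every occurrence in the string
        UPDATE fact_table
           SET json_text = REPLACE(json_text, :old_val, :new_val);
    END FOR;
    RETURN 'done';
END;
With 60 dictionary entries this is 60 sequential UPDATEs over the table; a true single pass would need something like a UDF or a dynamically built chain of REPLACE calls.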
user45867
(1739 rep)
Nov 25, 2024, 10:35 PM
• Last activity: Jul 27, 2025, 03:09 AM
-1
votes
0
answers
19
views
SnowFlake/DataView connection to SSMS
Has anyone successfully connected Snowflake/DataView to SSMS, whether through VS or other means? If so, may I please know how you did it? Essentially, I am trying to get the Snowflake data into SSMS. I have read their articles and they helped some; I talked to their techs and they told me they cannot help with my VS issue.
I have been able to bring some tables into our server via a VS package. The problem is that when I have SSMS run the package, I have to run it in 32-bit instead of 64-bit. I have the driver for both; please correct me if I am wrong, but I think SSMS only supports 32-bit? This change makes the small number of tables I am bringing in take over 5 hours to run, and this is after the many creative modifications I make before bringing in the tables so that the package does not time out.
Another way I have tried was through a linked server; however, in the catalog section, the databases do not show up. It looks like it connected, but if I cannot access the databases, is it really connected? I also tried to talk to their techs about this and they directed me to an article saying they do not support linked servers. Now I have run out of options and need your help.
If you want to know why I want to bring the data into SSMS, it is because Snowflake SQL is very limiting, making it difficult to query the data, which is not an issue with SSMS.
Gianpiero Loli
(1 rep)
Jul 15, 2025, 11:33 AM
0
votes
1
answers
58
views
simple package fails with "The transaction log for database 'X' is full due to 'ACTIVE_TRANSACTION'"
I am not a DBA, and our DBA has quit, so now I have to try to figure out DBA issues.
I am trying to create a job that deletes certain rows from a table and then uploads a set of new rows onto it from the "Snowflake" ODBC source for Athena; no truncation is involved, because I do not want to truncate the whole table.
The package fails in both VS and SSMS with the following error:
[Execute SQL Task] Error: Executing the query "QUERY" failed with the following error: "The transaction log for database 'X' is full due to 'ACTIVE_TRANSACTION'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Since the package has two tasks and it kept failing on the first one, which is the delete query, I thought maybe I needed to change the first task from running a query to executing a stored procedure; that did not work. I read up on the error and some people said I might have to reboot the VM; that did not work. I saw other suggestions like shrinking the log files, but I also read that could cause issues with the data, which I would like to avoid. I saw I could increase the size of the log file, but I don't know how to do that, nor whether it will mess up the data the way shrinking might. I also read that best practice is to find what is causing the log to fill up and fix the problem by optimizing it, but I have no idea how to even start doing that.
Any advice you all can give?
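For illustration, a minimal T-SQL sketch of the two least invasive paths mentioned above; the database name, logical log file name, table, and filter are all hypothetical:
-- Check what is preventing log truncation:
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'X';

-- Growing the log file does not rewrite any data (find the real logical
-- name in sys.database_files; 'X_log' here is hypothetical):
ALTER DATABASE [X] MODIFY FILE (NAME = N'X_log', SIZE = 20GB);

-- Deleting in batches keeps each transaction small, so the log can be
-- reused between batches instead of holding one huge ACTIVE_TRANSACTION
-- (under FULL recovery, regular log backups are still needed for reuse):
DECLARE @deleted INT = 1;
WHILE @deleted > 0
BEGIN
    DELETE TOP (50000) FROM dbo.target_table
    WHERE  load_date < '2025-01-01';   -- hypothetical filter for the rows to remove
    SET @deleted = @@ROWCOUNT;
END;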
Gianpiero Loli
(1 rep)
Jun 4, 2025, 06:30 PM
• Last activity: Jun 4, 2025, 10:04 PM
2
votes
1
answers
61
views
setup snowflake task to run every 2nd Monday of the month
I set up a task with the following schedule to run every 2nd Monday of the month, but it seems the task runs every Monday. How can I set up a cron schedule to run only on the 2nd Monday of the month?
USING CRON 0 11 1 * 1-1 UTC
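Standard cron has no "nth weekday" field (and restricting both day-of-month and day-of-week, as above, is treated as an OR in classic cron, which would explain the every-Monday firing). One common workaround, sketched here with hypothetical task, warehouse, and target names, is to schedule the task for every Monday and guard the body with a day-of-month check, since the 2nd Monday always falls on day 8-14:
CREATE OR REPLACE TASK second_monday_task
    WAREHOUSE = my_wh
    SCHEDULE  = 'USING CRON 0 11 * * 1 UTC'       -- fires every Monday at 11:00 UTC
AS
    INSERT INTO job_log (run_at)                  -- put the real monthly work here
    SELECT CURRENT_TIMESTAMP()
    FROM (SELECT 1) t
    WHERE DAY(CURRENT_DATE()) BETWEEN 8 AND 14;   -- true only on the 2nd Monday

ALTER TASK second_monday_task RESUME;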
SqlDBA
(171 rep)
Apr 16, 2025, 01:10 AM
• Last activity: May 26, 2025, 05:19 AM
0
votes
1
answers
304
views
Snowflake - extracting country/ state names - RegEx or Array?
I have a bunch of sloppy Geo text in a field.
However the country/ state names or codes are relatively clean.
It'll say Poland 23 or Illinois Remote or Frogballs, Germany or TX, AL, AK.
I have a finite list of country names/ codes ... and US 50 state names, codes.
I'm trying to figure out the best way to convert the "trash STATENAME trash" into a clean state name or country name.
I'm thinking of either going the array route with STRTOK_TO_ARRAY(location_field), which converts the string into 'word items' in an array. But I'm not sure of the best function to extract a matching item from an array; ARRAY_CONTAINS() merely returns true/false, not "Poland".
Maybe regex is better for this purpose? Something like regexp_like(location_field, country_list, 'i'), where country_list is an alternation of all the country/state names. The only issue here is that I only want to match countries/states that appear as a whole word (with a preceding or trailing space) -- not "AL" for Alabama when it's part of "portugAL", for instance.
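One possible sketch of the regex route: keep the country/state names in a lookup table and join on a whole-word match, padding the messy field with spaces so a boundary character always exists. The table and column names geo_raw and geo_names are hypothetical, and the lookup values must not contain regex metacharacters:
SELECT g.location_field,
       n.geo_name AS matched_geo
FROM   geo_raw g
JOIN   geo_names n
  ON   REGEXP_LIKE(' ' || g.location_field || ' ',
                   '.*[^A-Za-z]' || n.geo_name || '[^A-Za-z].*', 'i');
A field such as "TX, AL, AK" can match several names, which simply yields one output row per match.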
user45867
(1739 rep)
Jun 5, 2023, 09:32 PM
• Last activity: May 14, 2025, 08:00 AM
0
votes
1
answers
1285
views
Create an ERD diagram from a set of SQL CREATE TABLE statements
We have a Snowflake database that does not maintain relationships between tables. The lack of relationships is not my decision and I have no control over this. I would like to generate an ERD diagram showing how a number of tables are related to each other.
I do have the set of CREATE TABLE statements for the database, which includes the relationship information.
Is there a Windows software package or online tool that can take these CREATE statements and generate an ERD?
Calab
(101 rep)
Dec 22, 2022, 02:17 PM
• Last activity: May 6, 2025, 02:08 AM
0
votes
1
answers
25
views
Can you 'monitor' curl/ HTTP requests to Snowflake API to see issues?
I'm using a 3rd party tool that can 'connect' to Snowflake API but is throwing errors. They are translating/ botching something on their end, as I can get the Snowflake API easily working with about 5 other tools.
As a Snowflake admin, can I see the 'failed curl' or whatever text is being sent to Snowflake in raw form, like the raw HTTPS/curl request? Would this be a monitor/trace of some kind? (I'm not the main admin.)
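Snowflake does not generally expose the raw HTTP request bodies, but an admin can usually see what a client actually submitted through the query history; a minimal example against ACCOUNT_USAGE (which typically lags by up to ~45 minutes; the INFORMATION_SCHEMA.QUERY_HISTORY() table function is the near-real-time alternative), with a hypothetical service user name:
SELECT start_time,
       user_name,
       error_code,
       error_message,
       query_text
FROM   snowflake.account_usage.query_history
WHERE  user_name = 'THIRD_PARTY_SVC'          -- hypothetical user the 3rd-party tool connects as
  AND  error_code IS NOT NULL
  AND  start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY shows failed authentication attempts in the same way.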
user45867
(1739 rep)
Mar 3, 2025, 04:32 PM
• Last activity: Apr 1, 2025, 06:45 PM
0
votes
1
answers
320
views
Snowflake/S3 Pipeline: ETL architecture Questions
I am trying to build a pipeline which is sending data from Snowflake to S3 and then from S3 back into Snowflake (after running it through a production ML model on Sagemaker). I am new to Data Engineering, so I would love to hear from the community what the recommended path is. The pipeline requirements are the following:
1. I am looking to schedule a monthly job. Do I specify this in AWS or on the Snowflake side? Each monthly pull should cover the last full month (since this should be a monthly pipeline).
2. All monthly data pulls should be stored in their own S3 subfolder, like query_01012020, query_01022020, query_01032020, etc.
3. The data load from S3 (query_01012020, query_01022020, query_01032020) back to a specified Snowflake table should be triggered after the ML model has successfully scored the data in SageMaker (a sketch of the unload/load COPY statements follows below).
4. I want to monitor the performance of the ML model in production over time to catch whether the model's accuracy is degrading (some calibration-like graph, perhaps).
5. I want to get any error notifications in real-time when issues in the pipeline occur.
I hope you are able to guide me on what components the pipeline should include. Any relevant documentation/tutorials for this effort are truly appreciated.
Thank you very much.
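A rough sketch of the Snowflake half of such a pipeline: an external stage over the S3 bucket, a monthly unload into a per-month prefix, and a COPY back in once SageMaker has written the scored files. Stage, table, and column names below are hypothetical (query_01012020 is the prefix pattern from requirement 2), and the month-specific path would normally be built by a stored procedure or by the orchestrator. The schedule itself can live either in Snowflake (a TASK with a monthly USING CRON expression running the unload) or on the AWS side (EventBridge / Step Functions calling Snowflake), and S3 event notifications plus Snowpipe are one common way to trigger the load-back:
COPY INTO @my_s3_stage/query_01012020/
FROM (
    SELECT *
    FROM   source_table
    WHERE  event_date >= DATE_TRUNC('month', DATEADD('month', -1, CURRENT_DATE()))
      AND  event_date <  DATE_TRUNC('month', CURRENT_DATE())
)
FILE_FORMAT = (TYPE = CSV)
HEADER = TRUE
OVERWRITE = TRUE;

COPY INTO scored_results
FROM @my_s3_stage/scored/query_01012020/
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);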
cocoo84hh
(101 rep)
Jun 14, 2020, 06:54 PM
• Last activity: Mar 13, 2025, 06:02 AM
2
votes
1
answers
1168
views
Dimension modelling for HR with Employee Dimension and multiple departments in a Data warehouse
What is the best way to configure a dimension model (preferably star schema) when we have the following requirements?
1. There is an Employees table (25 attributes) where we are required to make some of the attributes SCD2, e.g. Salary, LastSalaryIncreaseDate, LastBonusAmount, LastBonusDate, Designation. We don't have to maintain the reporting hierarchy.
2. There are different Departments. Every Department is headed by exactly one Department head (an Employee).
3. An Employee may belong to multiple Departments and vice versa.
4. Monthly payroll information must be maintained for every Employee.
**Understanding and Questions**
1. Should we split the Employees entity into two considering only 5/25 of the attributes are SCD2 based (historical)?
2. I suppose a bridge table is required between the Employee and the Departments. So every employee must have an attribute (e.g. DepartmentGroupCode) pointing at its set of departments in the bridge table. Correct? (A sketch of this shape follows after these questions.)
3. There is a direct relationship between employees and the Department. So Department will have the attribute EmployeeKey in it. How do I deal with SCD2 changes of employees with respect to the Department entity?
4. The payroll periodic Fact entity will be linked only with the Employee and the date dimension. This should not be linked with the Department because it is already linked with the Employee entity...Please correct my understanding.
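For question 2, a minimal sketch of the usual star-schema shape with an employee-department bridge, using hypothetical surrogate keys and only a handful of the 25 attributes:
CREATE TABLE dim_employee (
    employee_key   INT,            -- surrogate key; a new row per SCD2 change
    employee_id    INT,            -- business key
    salary         NUMBER(12,2),   -- SCD2 attribute
    designation    VARCHAR,        -- SCD2 attribute
    valid_from     DATE,
    valid_to       DATE,
    is_current     BOOLEAN
);

CREATE TABLE dim_department (
    department_key               INT,
    department_name              VARCHAR,
    department_head_employee_key INT   -- the single head (an employee)
);

CREATE TABLE bridge_employee_department (
    employee_key   INT,   -- references dim_employee
    department_key INT    -- references dim_department
);

CREATE TABLE fact_payroll_monthly (
    employee_key INT,            -- references dim_employee
    date_key     INT,            -- references the date dimension (month grain)
    gross_pay    NUMBER(12,2)
);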
Irfan Gowani
(21 rep)
Jul 8, 2021, 07:39 PM
• Last activity: Mar 10, 2025, 07:06 PM
1
votes
1
answers
45
views
Effective way to grant limited access to 3rd party service account in Snowflake?
I'm somewhat green as a DBA. Not really a DBA, actually.
So we have a bunch of databases/tables ... we need to grant access to a small subsection (say 3 tables, showing half the fields, for PII reasons).
For some reason the project lead initially thought we should create an entirely different database (DB_service_account) and either replicate or ETL-filter-write the limited data necessary into it.
The thinking was that this simplifies management -- the service account has access to everything in that database.
Is this a common design pattern? I mean -- an alternative would be to create limited/secure views on these tables, maybe in a separate schema (more of a logical than physical separation), right? That would avoid write jobs and additional storage. Then grant access to that schema ... right?
The project lead also wants a separate warehouse (compute) for cost monitoring -- again, probably not the most efficient, but that can be tracked at the user level anyway. Is it possible or necessary to use "Shares" or "Replication", or are these completely different use cases that would over-complicate matters? Just wondering your thoughts -- thanks!
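A minimal sketch of the secure-view alternative described above, with hypothetical names throughout: a dedicated schema of secure views exposing only the allowed columns, one role, and a separate warehouse for cost tracking:
CREATE SCHEMA analytics.vendor_share;

CREATE SECURE VIEW analytics.vendor_share.orders_v AS
SELECT order_id, order_date, amount          -- only the non-sensitive columns
FROM   analytics.core.orders;

CREATE ROLE vendor_reader;
GRANT USAGE  ON DATABASE analytics              TO ROLE vendor_reader;
GRANT USAGE  ON SCHEMA   analytics.vendor_share TO ROLE vendor_reader;
GRANT SELECT ON ALL VIEWS IN SCHEMA analytics.vendor_share TO ROLE vendor_reader;
GRANT ROLE vendor_reader TO USER vendor_svc;

-- dedicated compute so the service account's cost shows up on its own warehouse
CREATE WAREHOUSE vendor_wh WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
GRANT USAGE ON WAREHOUSE vendor_wh TO ROLE vendor_reader;
Shares are for giving another Snowflake account read access without copying data, and replication is for copying across accounts/regions, so neither is needed for a service account inside the same account.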
user45867
(1739 rep)
Feb 13, 2025, 03:40 PM
• Last activity: Feb 14, 2025, 07:26 AM
0
votes
1
answers
42
views
How to use current user credentials in published Power BI report against Snowflake?
We are using Power BI as the reporting and dashboard tool for our Snowflake data warehouse. We need a published report to use the credentials of the person viewing the report rather than the person who created and published it, and we have been unable to determine a way of doing this.
We have two data visibility restrictions that need to be enforced. First, based on the user's role, they are limited as to what columns of data they can see; we're enforcing this with a data masking policy in Snowflake. Second, we have a need to restrict which rows are visible to certain individuals based on their role; we have a row access policy in Snowflake which determines through a user-defined function call what customers' data are visible to a user.
Both of these work perfectly well in Snowflake, but when called from Power BI, the data returned is based on the user who published and shared the report rather than the one executing it.
Is there a way to pass through the calling user's ID instead of the publisher's?
Additional information:
We are using DirectQuery in a published report to try to get filtered and masked results based on the calling user's role.
Person A has the ability to view all 7 columns on a table.
Person B has the ability to see 5 of these columns
Person C has the ability to see 4 of these columns, but only where the value of column 1 is within a certain group of values
Person D has the ability to see 4 of these columns, but only where the value of column 1 is within a certain group of values that is different from the values seen by Person C.
Person A creates a report that shows all 7 columns for all rows.
We have masking rules that replace values for columns that are not in a user's allowed column list with "REDACTED".
We also have a row access policy that looks up a user's list of allowed values for column 1 and filters visible rows based on that list.
We have been unable to get the report to use anything other than the publisher's access, which exposes legally sensitive information.
Kris Cook
(11 rep)
Jul 26, 2024, 03:41 PM
• Last activity: Feb 5, 2025, 03:15 PM
1
votes
2
answers
2344
views
Search a text column for a string from a list, and return the found string
I'm trying to search a larger text field for one of multiple values, let's say "Jack", "Jill", or "Bob" (the words can be up to 50 characters long).
With Snowflake I have something like
select count(*)
from my_table
where text_field like any ('%jack%', '%Jill%', '%bob%')
That's roughly it. Now how do I indicate whether the field contains 'jack', 'Jill', or 'bob' specifically? I need to EXTRACT the found text. Also, now imagine that instead of 3 values there are 100 values to search, so preferably not a solution where manual typing is involved, like charindex('jack') or something like it.
I can store the values to search (100 words) in another table. Let's call it my_dictionary, with one column my_word.
How can I accomplish this?
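One sketch of the dictionary-table route: join the text against my_dictionary and return the matched word itself (one row per match, so text containing two dictionary words comes back twice):
SELECT t.text_field,
       d.my_word AS matched_word
FROM   my_table t
JOIN   my_dictionary d
  ON   t.text_field ILIKE '%' || d.my_word || '%';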
user45867
(1739 rep)
May 25, 2023, 07:42 PM
• Last activity: Jan 14, 2025, 11:09 AM
2
votes
1
answers
677
views
Best practices for large JOINs - Warehouse or External Compute (e.g. Spark)
I am working on a problem that requires a very large join. The JOIN itself is pretty straightforward, but the amount of data I am processing is very large. I am wondering, for very large JOINs, whether there is a preferred type of technology. For example, is it more effective to use a Data Warehouse (like Snowflake) or some other MPP system like Spark?
To make the problem more concrete I created a hypothetical problem similar to my actual problem. Assume I have an accounts table with columns account_id and name.
I am working on some logic that requires account pairs that have the same name. To find pairs of accounts with the same name I can easily do something like this:

SELECT
account1.name,
account2.name
FROM accounts as account1
JOIN accounts as account2 ON account1.name = account2.name AND account1.account_id != account2.account_id
The problem I am facing is due to the amount of data I am processing. There are roughly ~2 trillion records I am trying to self JOIN on. Obviously, this will take some time and some pretty serious compute. I have run a similar query in Snowflake using XL and 3XL warehouses but after several hours of running, I canceled them. I am hoping there is a more cost-effective or time-efficient way.
Has anyone had success with massive JOINs? Are there any other tricks I could deploy? What tool did you find the most effective?
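One trick that sometimes helps before reaching for Spark, sketched against the hypothetical accounts table above: shrink the join's input first, since only names that occur more than once can ever produce a pair.
WITH dup_names AS (
    SELECT name
    FROM   accounts
    GROUP BY name
    HAVING COUNT(*) > 1
)
SELECT a1.account_id AS account_id_1,
       a2.account_id AS account_id_2,
       a1.name
FROM   accounts a1
JOIN   dup_names d  ON a1.name = d.name
JOIN   accounts  a2 ON a1.name = a2.name
                   AND a1.account_id < a2.account_id;   -- < instead of != emits each pair once
Using < instead of != also halves the output by returning each pair only once.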
Arthur Putnam
(553 rep)
Feb 24, 2022, 09:06 PM
• Last activity: Dec 18, 2024, 12:01 PM
1
votes
1
answers
129
views
Snowflake DB Clone?
So this may be confusing, but I'm just looking to see if anyone has insight on how to do this in a simpler fashion.
Our company utilizes Fivetran for data transport, Snowflake for warehousing and Tableau for BI. Unfortunately, we weren't able to get table or column descriptions from our on-prem system into Snowflake, so all of our columns have system names (like FFDITM rather than Item_Number, for instance).
This obviously was not going to work, so I created a new "Transformed" DB and manually added all columns with updated names. I am now in the process of creating streams to update the transformed DB on a daily basis, but am running into many problems.
I've done a bit of research and I see that Snowflake has a cloning option for cloning our schemas and tables, or even our full DB.
If this is the case, can I clone our production DB once, then update schema, table, and column names, and then finally set up streams to keep it current? That way it's not so cumbersome.
I attempted to do all of this manually, but it seems there are automated workarounds that also won't eat at our storage.
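A minimal sketch of the clone-then-rename pattern being asked about; the database, schema, table, and new names below are hypothetical (FFDITM is the system column name from the question). A clone is a zero-copy, point-in-time copy, so new Fivetran loads still land in the original database and the daily streams/refresh described above are still needed afterwards:
CREATE DATABASE transformed CLONE production;

ALTER TABLE transformed.erp.f0101 RENAME TO transformed.erp.item_master;
ALTER TABLE transformed.erp.item_master RENAME COLUMN ffditm TO item_number;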
AKBirite
(11 rep)
Dec 6, 2024, 01:30 AM
• Last activity: Dec 7, 2024, 10:14 PM
1
votes
1
answers
42
views
creating a CTE to filter a previous CTE population down. How to find new diagnosis of a certain ICD 10 code?
The first CTE grabs all patients with the diagnosis codes for substance abuse; the second CTE should filter down to only NEW patients.
Meaning: of the people in the first CTE, which of these people have not been diagnosed with the 2 codes before. The 3rd CTE grabs all information about the person: locations, date, and age.
**I need to figure out the logic for the 2nd CTE: from diag ... only people who haven't been diagnosed before (i.e. a new/initial diagnosis)**
with diag as (
select distinct d.person_id
, d.encntr_id
, d.diagnosis_id
, d.nomenclature_id
, d.diag_dt_tm
, d.diag_prsnl_name
, d.diagnosis_display
, cv.display as Source_Vocabulary
, nm.source_identifier as ICD_10_Code
, nm.source_string as ICD_10_Diagnosis
, nm.concept_cki
from wny_prod.diagnosis d
inner join (select * from nomenclature
where source_vocabulary_cd in (151782752, 151782747) --ICD 10 CM and PCS
and (source_identifier ilike 'F10.10' or source_identifier ilike 'F19.10')) nm on d.nomenclature_id = nm.nomenclature_id
left join code_value cv on nm.source_vocabulary_cd = cv.code_value
)
, diag_final as (
select *
from diag
--?
)
select distinct ea.FIN
, dem.patient_name
, dem.dob
, age_in_years(e.beg_effective_dt_tm::date, dem.dob) as Age_at_Encounter
, amb.location_name
, e.beg_effective_dt_tm
, diag.Source_Vocabulary
, diag.ICD_10_Code
, diag.ICD_10_Diagnosis
from (select * from encounter where year(beg_effective_dt_tm) = 2022) e
left join (select distinct encntr_id, alias as FIN from encntr_alias where encntr_alias_type_cd = 844) ea on e.encntr_id = ea.encntr_id
inner join diag on e.encntr_id = diag.encntr_id
inner join (select distinct * from DEMOGRAPHICS where mrn_rownum = 1 and phone_rownum = 1 and address_rownum = 1) dem on e.person_id = dem.person_id
inner join locations_amb amb on amb.loc_facility_cd = e.loc_facility_cd
where age_in_years(e.beg_effective_dt_tm::date, dem.dob) > 13
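One sketch of a way to fill in the 2nd CTE: keep only each person's earliest row in diag, so joining it to the 2022 encounters in the final query leaves only people whose initial F10.10/F19.10 diagnosis happened in 2022. If "new" needs a different lookback rule, a NOT EXISTS against the full diagnosis table would replace the window filter.
, diag_final as (
    select *
    from diag
    qualify row_number() over (partition by person_id order by diag_dt_tm) = 1  -- first-ever diagnosis row per person
)
The final query would then join diag_final instead of diag.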
Michelle
(21 rep)
Aug 2, 2023, 01:44 PM
• Last activity: May 22, 2024, 04:14 AM
1
votes
1
answers
636
views
Regex help in Snowflake
I am trying to write a regex that can replace '[', ']' and '].' with '_', and in cases where ']' is the last character it should be replaced with '', but I am struggling to come up with a regex for it that works in all cases because Snowflake does not support lookahead & lookbehind.
My question is: has anyone tried/achieved this before, or is it impossible and I should just give up?
Eg:
- look_behind[0] --> look_behind_0
- look_behind[0].positive[1] --> look_behind_0_positive_1
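One sketch that avoids lookarounds entirely by handling the cases in order with chained REGEXP_REPLACE calls (col and my_table are hypothetical names):
SELECT regexp_replace(
           regexp_replace(
               regexp_replace(col, '\\]$', ''),   -- a ']' at the very end is dropped
               '\\]\\.', '_'),                    -- '].' becomes '_'
           '\\[|\\]', '_') AS cleaned             -- any remaining '[' or ']' becomes '_'
FROM my_table;
Tracing look_behind[0].positive[1]: the trailing ']' is removed, '].' becomes '_', and the leftover brackets become '_', giving look_behind_0_positive_1.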
Pepe Silvia
(11 rep)
Apr 14, 2024, 11:20 PM
• Last activity: Apr 15, 2024, 11:40 PM
0
votes
0
answers
739
views
In Snowflake, default value NULL on a not-nullable column?
In Snowflake, what are the implications of having a default value of null on a non-nullable column? Also, when I describe a particular table in Snowflake, I see a default value of NULL in all columns. When my colleague describes the same table in Snowflake, in the same environment (PROD), they are s...
In Snowflake, what are the implications of having a default value of null on a non-nullable column?
Also, when I describe a particular table in Snowflake, I see a default value of NULL in all columns. When my colleague describes the same table in Snowflake, in the same environment (PROD), they are seeing the default value as null (lower-case) but with all those lower case nulls greyed-out. Anyone out there run into this? What could it mean?
Jimbo
(65 rep)
Apr 2, 2024, 02:11 PM
0
votes
0
answers
268
views
Recursive CTE hierarchy snowflake -- join/ expand outward (columns) instead of rows?
I'm doing a dependency mapping explosion (like parts explosion).
I did a typical recursive CTE with a union all.
It looks like
with recursive cte as (
    select abc from myTable where start_point = X
    union all
    select myTable.abc
    from cte
    join myTable on myTable.parent = cte.child
)
select * from cte
... However this ends up with a list like
Root -> Child 1
Root -> Child 2
Root -> Child 3
Child 1-> 1 Grandchild 1
Child 2 -> 2 Grandchild 1
Child 2 -> 2 Grandchild 2
I'd prefer it looked like
Root -> Child 1 -> 1 Grandchild 1
Root -> Child 2 -> 2 Grandchild 1
Root -> Child 2 -> 2 Grandchild 2
Root -> Child 3
It's like ... I need a recursive join, not a recursive union -- but when I replace the union with a join, I can't quite get it to work. Any ideas?
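One common workaround, sketched with the same placeholder names as the pseudo-query above (X stays a placeholder): keep the recursive UNION ALL but accumulate the path as a string (or an array) on the way down, then keep only the leaf rows, so each output row carries the whole Root -> Child -> Grandchild chain.
with recursive cte as (
    select child,
           child::varchar as path
    from   myTable
    where  start_point = X
    union all
    select m.child,
           cte.path || ' -> ' || m.child::varchar
    from   cte
    join   myTable m on m.parent = cte.child
)
select path
from   cte
where  child not in (select parent from myTable where parent is not null);   -- leaves only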
user45867
(1739 rep)
Mar 14, 2024, 11:22 PM
2
votes
2
answers
393
views
Snowflake won't allow an analytic function to be renamed
In the code below, I can do the LAG and PARTITION BY with no problem as a query. But I need it as a table. If I do CTAS, I need an alias on the column. But when I try to put an alias on it, it fails. I think this is valid SQL. But I'm using Snowflake, so I'm asking it on Super User as I fear this is a vendor issue.
How can I create a table with the output of that query?
drop table a;
create table a (
a1 varchar,
a2 varchar
);
insert into a values ( 'a', 1 );
insert into a values ( 'a', 3 );
insert into a values ( 'a', 5 );
insert into a values ( 'a', 9 );
insert into a values ( 'b', 1 );
insert into a values ( 'b', 3 );
insert into a values ( 'b', 4 );
insert into a values ( 'c', 3 );
insert into a values ( 'c', 4 );
insert into a values ( 'c', 5 );
-- This works fine.
select a1
, a2
, lag(a2)
over (partition by a1 order by a2)
from a
;
-- This fails.
select a1
, a2
, lag(a2) new_col_name
over (partition by a1 order by a2)
from a
;
-- But if I can't name the column, I can't CTAS.
create table b as
select a1
, a2
, lag(a2)
over (partition by a1 order by a2)
from a
;
Output is:
status
A successfully dropped.
status
Table A successfully created.
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
number of rows inserted
1
A1 A2 LiteralRedacted0
b 1 NULL
b 3 1
b 4 3
c 3 NULL
c 4 3
c 5 4
a 1 NULL
a 3 1
a 5 3
a 9 5
001003 (42000): SQL compilation error:
syntax error line 4 at position 2 unexpected 'over'.
002022 (42601): SQL compilation error:
Missing column specification
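For reference, the alias belongs after the OVER clause rather than between the function and OVER; with that change the CTAS works:
create table b as
select a1
     , a2
     , lag(a2) over (partition by a1 order by a2) as new_col_name
from a
;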
James Madison
(131 rep)
Feb 22, 2024, 05:04 PM
• Last activity: Feb 22, 2024, 08:33 PM
2
votes
2
answers
16887
views
How can you insert data into a ARRAY column on a Snowflake table using SQL?
I am having difficulty finding documentation on how to insert data into an ARRAY column type using SQL on a Snowflake table.
Snowflake Documentation: https://docs.snowflake.net/manuals/sql-reference/data-types-semistructured.html#array
// example table
CREATE OR REPLACE TABLE array_test_table (
id number,
ids array
);
// This does not work
INSERT INTO array_test_table (id, ids)
VALUES (1, '[1,2,3]' );
I have tried using Postgres's syntax like the ones described here: https://stackoverflow.com/questions/33335338/inserting-array-values
I originally asked this question here: Snowflake Community
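For reference, a VALUES clause cannot take an expression like ARRAY_CONSTRUCT for a semi-structured column, so the usual pattern is INSERT ... SELECT, with the array built by ARRAY_CONSTRUCT or parsed from a JSON string:
// build the array with ARRAY_CONSTRUCT
INSERT INTO array_test_table (id, ids)
SELECT 1, ARRAY_CONSTRUCT(1, 2, 3);

// or parse a JSON string and cast it to ARRAY
INSERT INTO array_test_table (id, ids)
SELECT 2, PARSE_JSON('[1,2,3]')::ARRAY;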
Arthur Putnam
(553 rep)
Sep 13, 2018, 03:19 PM
• Last activity: Nov 30, 2023, 10:38 PM
Showing page 1 of 20 total questions