
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
1 answer
478 views
Bulk load data and provide row-by-row feedback
We want to allow our users to import CSV files containing many thousands of records to "pre-load" a particular table in our system. It's not strictly a one-time operation; it may be performed multiple times as they get feedback, tweak the CSV, and re-import. I understand that this is not a lot of data, relatively speaking. But we want the operation to be atomic (this is a multi-user system; other users may also be adding or modifying data) and we want to provide per-row feedback (like "Row 123 is a duplicate of item 'abc'"). I know that statements like COPY or tools like pg_bulkload can do the work very quickly, but this is user-interactive via our web app and we _require_ row-by-row feedback. Our naïve approach was to run sequential INSERT statements and catch exceptions. Even though we're re-using a single connection, this makes a separate call and uses a separate transaction for every row, and it is taking _way_ too long.
for(var i = 0; i < rows.Count; i++)
{
    try
    {
        connection.Execute("INSERT INTO table1 VALUES (...)");
        // format a "successful" response
    }
    catch (PostgresException e)
    {
        // format a "failed" response based on exception details... 
    }
}
We tried creating a single INSERT statement with multiple VALUES, but this does not provide row-by-row feedback and it stops on the first error (I found no useful way to use ON CONFLICT here). We also tried sharing a transaction and running sequential statements, but the transaction is "aborted" after the first error.
using(var tx = connection.BeginTransaction())
{
    for(var i = 0; i < rows.Count; i++)
    {
        try
        {
            connection.Execute("INSERT INTO table1 VALUES (...)", tx);
            // format a "successful" response
        }
        catch (PostgresException e)
        {
            // format a response based on the type of failure... 
        }
    }
}
I'm beginning to think that we need to run this entire operation _within_ the database, in SQL. We have only one constraint right now, a NOT NULL UNIQUE "name" column. So the statuses would be something like:

* "OK": the row inserted fine (return the new PRIMARY KEY "id" column)
* "Conflict": the name in this row is a duplicate of an existing row, either a row seen earlier in the CSV or one already in the table (return the duplicate name)
* "Bad Request": the name in this row is missing, empty, or all whitespace

We also need to return the index (in insert order, I guess) in order to approximate the line number in the original CSV file. I'm thinking we need to INSERT into a temporary table and use something like row_number(). Given a CSV like this (name + some example columns, just to make the point):
name,region,code,set
A1,A,100,1
A2,A,101,1
A1,A,102,1
,B,200,2
C1,C,300,3
I would need responses like this:

| index | id     | name   | status      |
|-------|--------|--------|-------------|
| 1     | 100    | A1     | OK          |
| 2     | 101    | A2     | OK          |
| 3     | [null] | A1     | Conflict    |
| 4     | [null] | [null] | Bad Request |
| 5     | 102    | C1     | OK          |

Am I on the right path? Is this a workable solution, or is there some other technique that would work better? Does it make sense to work the queries out by status? Like, rather than try to process one record at a time, can I try to find all of the bad requests first, then all of the duplicates, then all of the new rows (and insert them)? I'm fine with this response; we can always reorganize the rows in the application code.

| index | id     | name   | status      |
|-------|--------|--------|-------------|
| 4     | [null] | [null] | Bad Request |
| 3     | [null] | A1     | Conflict    |
| 1     | 100    | A1     | OK          |
| 2     | 101    | A2     | OK          |
| 5     | 102    | C1     | OK          |

Believe me, I'm asking around about the requirement, but it seems stuck for now: they really want to provide the user with specific row-by-row feedback. If I insert into a temporary staging table, can I run 3 set-based operations? One for bad requests, one for duplicates, and one for new records?
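A minimal sketch of that three-pass, set-based approach, assuming a hypothetical staging table and a target table1(id, name, region, code, set_no) carrying the UNIQUE name constraint; every name here is a placeholder, not the real schema, and the whole sequence would run inside one transaction for atomicity:

-- Staging table; idx approximates the original CSV line order.
CREATE TEMP TABLE staging (
    idx     bigserial,
    name    text,
    region  text,
    code    text,
    set_no  int
);

-- \copy staging (name, region, code, set_no) FROM 'upload.csv' WITH (FORMAT csv, HEADER true)

-- Pass 1: bad requests (missing or blank name).
SELECT idx, NULL::int AS id, name, 'Bad Request' AS status
FROM   staging
WHERE  name IS NULL OR btrim(name) = '';

-- Pass 2: conflicts (duplicate of an existing row, or of an earlier CSV row).
SELECT s.idx, NULL::int AS id, s.name, 'Conflict' AS status
FROM   staging s
WHERE  btrim(coalesce(s.name, '')) <> ''
  AND (EXISTS (SELECT 1 FROM table1 t WHERE t.name = s.name)
       OR EXISTS (SELECT 1 FROM staging e WHERE e.name = s.name AND e.idx < s.idx));

-- Pass 3: everything left is new; insert it and report the generated ids.
WITH fresh AS (
    SELECT s.*
    FROM   staging s
    WHERE  btrim(coalesce(s.name, '')) <> ''
      AND NOT EXISTS (SELECT 1 FROM table1 t WHERE t.name = s.name)
      AND NOT EXISTS (SELECT 1 FROM staging e WHERE e.name = s.name AND e.idx < s.idx)
), ins AS (
    INSERT INTO table1 (name, region, code, set_no)
    SELECT name, region, code, set_no FROM fresh
    RETURNING id, name
)
SELECT f.idx, i.id, i.name, 'OK' AS status
FROM   fresh f
JOIN   ins i USING (name);

Pass 2's second EXISTS is what flags a later duplicate within the file itself (row 3 in the example), while pass 3 keeps only the first occurrence of each name.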
Anthony Mastrean (121 rep)
Jul 9, 2018, 05:32 PM • Last activity: Aug 1, 2025, 05:02 AM
0 votes
1 answer
2793 views
0 rows from the .csv (semicolon) are imported into MySQL Workbench
I am trying to import the .csv below (columns and 3 first records in txt in differenet blockquotes) into MySQL Workbench using the Table Data Import Wizard. I have tried many ways and realized that the issue lies with this particular .csv. I even used the LOAD DATA LOCAL INFILE statement with no success using the terminal but things get complicated dipslaying different errors. > LOAD DATA LOCAL INFILE "C:\Users\user\Documents\Edu_research\SQL\Project_TED\talks_final_effort_new1.csv" INTO TABLE talks FIELDS TERMINATED BY ';' ENCLOSED BY '"' LINES TERMINATED BY '/n' IGNORE 1 ROWS; At first I used delimiter ',' and strings enclosed with '"' but everything was messed up, probably because of a column with long text with commas and double quotes etc., and it could not identify the columns. So I changed the delimiter to ';' but the same happened. Then I kept the ';'. I tried for the 1st record and it was imported but for the rest of the records it didn't. I broke it down to 3 records (displayed below) used the settings below enter image description here and it showed this message again enter image description here I believe the "enclosed by" setting with '"' is the problematic beacuse there are some """ in the description column. I am not sure if I can deal with it not manually neither using TABLE IMPORT WIZARD or LOAD statement. > talk_id;title;speaker_1;all_speakers;occupations;about_speakers;views;recorded_date;published_date;event;native_lang;available_lang;comments;duration;topics;related_talks;url;description > 1;Averting the climate crisis;Al Gore;{0: 'Al Gore'};{0: ['climate advocate']};{0: 'Nobel Laureate Al Gore focused the world’s attention on the global climate crisis. Now he’s showing us how we’re moving towards real solutions.\r\n'};3523392;2/25/2006;6/27/2006;TED2006;en;"['ar', 'bg', 'cs', 'de', 'el', 'en', 'es', 'fa', 'fr', 'fr-ca', 'gl', 'gu', 'he', 'hi', 'hr', 'hu', 'id', 'it', 'ja', 'ko', 'lt', 'lv', 'mk', 'nl', 'pl', 'pt', 'pt-br', 'ro', 'ru', 'sk', 'sl', 'sq', 'sr', 'sv', 'sw', 'th', 'tl', 'tr', 'uk', 'ur', 'vi', 'zh-cn', 'zh-tw']";272;977;"['alternative energy', 'cars', 'climate change', 'culture', 'environment', 'global issues', 'science', 'sustainability', 'technology']";"{243: 'New thinking on the climate crisis', 547: 'The business logic of sustainability', 2093: 'The state of the climate — and what we might do about it', 54715: 'How we can turn the tide on climate', 29968: 'The most important thing you can do to fight climate change: talk about it', 2339: ""Climate change is happening. Here's how we adapt""}";https://www.ted.com/talks/al_gore_averting_the_climate_crisis/ ;"With the same humor and humanity he exuded in ""An Inconvenient Truth,"" Al Gore spells out 15 ways that individuals can address climate change immediately, from buying a hybrid to inventing a new, hotter brand name for global warming." 92;The best stats you've ever seen;Hans Rosling;{0: 'Hans Rosling'};{0: ['global health expert; data visionary']};"{0: 'In Hans Rosling’s hands, data sings. Global trends in health and economics come to vivid life. 
And the big picture of global development -- with some surprisingly good news -- snaps into sharp focus.'}";14501685;2/22/2006;6/27/2006;TED2006;en;"['ar', 'az', 'bg', 'bn', 'bs', 'cs', 'da', 'de', 'el', 'en', 'es', 'et', 'fa', 'fi', 'fr', 'he', 'hi', 'hr', 'hu', 'id', 'is', 'it', 'ja', 'kn', 'ko', 'ku', 'lv', 'mk', 'ml', 'mn', 'nl', 'pl', 'pt', 'pt-br', 'ro', 'ru', 'sk', 'sr', 'sv', 'sw', 'ta', 'te', 'th', 'tr', 'uk', 'ur', 'vi', 'zh-cn', 'zh-tw']";628;1190;"['Africa', 'Asia', 'Google', 'demo', 'economics', 'global issues', 'health', 'statistics', 'global development', 'visualizations', 'math']";"{2056: ""Own your body's data"", 2296: 'A visual history of human knowledge', 620: 'Let my dataset change your mindset', 2806: ""Doesn't everyone deserve a chance at a good life?"", 2560: 'How Africa can keep rising', 1418: ""Let's put birth control back on the agenda""}";https://www.ted.com/talks/hans_rosling_the_best_stats_you_ve_ever_seen/ ;"You've never seen data presented like this. With the drama and urgency of a sportscaster, statistics guru Hans Rosling debunks myths about the so-called ""developing world.""" 7;Simplicity sells;David Pogue;{0: 'David Pogue'};{0: ['technology columnist']};"{0: 'David Pogue is the personal technology columnist for the New York Times and a tech correspondent for CBS News. He\'s also one of the world\'s bestselling how-to authors, with titles in the For Dummies series and his own line of ""Missing Manual"" books. '}";1920832;2/24/2006;6/27/2006;TED2006;en;"['ar', 'bg', 'de', 'el', 'en', 'es', 'fa', 'fr', 'he', 'hr', 'hu', 'it', 'ja', 'ko', 'nb', 'nl', 'pl', 'pt', 'pt-br', 'ro', 'ru', 'sr', 'tr', 'vi', 'zh-cn', 'zh-tw']";124;1286;"['computers', 'entertainment', 'interface design', 'media', 'music', 'performance', 'simplicity', 'software', 'technology']";"{1725: '10 top time-saving tech tips', 2274: 'The first secret of design is ... noticing', 172: 'Designing for simplicity', 2664: 'Meet the inventor of the electronic spreadsheet', 2464: 'The mind behind Linux', 1347: 'The secret structure of great talks'}";https://www.ted.com/talks/david_pogue_simplicity_sells/ ;"New York Times columnist David Pogue takes aim at technology’s worst interface-design offenders, and provides encouraging examples of products that get it right. To funny things up, he bursts into song." This .csv is a real trouble. I believe I have searched any possible link on the Internet to figure it out but still no success. What could be the problem here? I The only method i haven't used is via phpmyadmin but I don't think that the issue lies with the method/tool. Appreciate your help Thanks
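For reference, a hedged rewrite of the LOAD DATA statement quoted above, with the details that most often break a file like this: the line terminator must be '\n' or '\r\n' (the '/n' in the quoted statement matches nothing), the quoting should be OPTIONALLY ENCLOSED BY '"' since only some fields are quoted, and a forward-slash path avoids Windows backslash-escaping surprises. The doubled "" inside quoted fields is standard CSV escaping and LOAD DATA accepts it once ENCLOSED BY '"' is in effect; the character set is an assumption, and the literal \r\n sequences inside some fields may still need an ESCAPED BY adjustment:

LOAD DATA LOCAL INFILE 'C:/Users/user/Documents/Edu_research/SQL/Project_TED/talks_final_effort_new1.csv'
INTO TABLE talks
CHARACTER SET utf8mb4       -- assuming the file is saved as UTF-8
FIELDS TERMINATED BY ';'
       OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'  -- or '\n', depending on how the file was saved
IGNORE 1 ROWS;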
transmod_kdtk (1 rep)
Apr 17, 2022, 08:51 PM • Last activity: Jul 28, 2025, 06:04 AM
1 vote
1 answer
413 views
How can I import latitude and longitude data into a database using MySQL Workbench?
I am currently trying to get some latitude and longitude data to import into a database using the data import wizard in MySQL Workbench. The problem is that the floating point values that get imported are modified by MySQL Workbench, or whatever query it's performing, so they are useless. Example: 40.8152 turns into 40.0000. I cannot figure out why this is happening. I have the columns set to DECIMAL(6,4), but they still get modified when imported. Anybody have any ideas as to what might be causing this and how I can get these to import correctly? Edit: I have also tried FLOAT, FLOAT(6,4), DOUBLE and DOUBLE(6,4) with the same results. It either removes the trailing decimal values completely or sets them to 0.
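One way to narrow this down is to take the wizard out of the picture: if a plain INSERT and a LOAD DATA into explicitly typed columns keep the precision, the column definition is fine and the wizard's type handling is the culprit. Note that DECIMAL(6,4) tops out at 99.9999, which fits 40.8152 but not longitudes beyond that range. A sketch with hypothetical table, column, and file names:

CREATE TABLE coords (
    lat DECIMAL(8,6),   -- latitude:  -90.000000 .. 90.000000
    lon DECIMAL(9,6)    -- longitude: -180.000000 .. 180.000000
);

-- Sanity check: the type itself keeps the precision.
INSERT INTO coords VALUES (40.8152, -73.9690);
SELECT * FROM coords;   -- expect 40.815200, -73.969000

-- Bypass the wizard entirely; adjust the path and terminators to the file.
LOAD DATA LOCAL INFILE '/path/to/points.csv'
INTO TABLE coords
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(lat, lon);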
Duck Puncher (11 rep)
Jul 16, 2015, 05:36 PM • Last activity: Jul 26, 2025, 04:06 PM
1 vote
1 answer
756 views
MongoDB + Compass exporting strings and numbers "columns" with value of 1 as "true"(boolean) and 2 or null as "false" to CSV
When I export my MongoDB collection to a CSV, the values of String and Number "columns" are converted to "true" for 1 and "false" for 2 or null. How do I retain the values as numbers, strings, and integers when exporting to CSV from Compass?
CoolElectricity (21 rep)
Dec 9, 2020, 05:19 PM • Last activity: Jul 26, 2025, 03:04 AM
0 votes
1 answer
487 views
Importing a bacpac file using SqlPackage
How can I specify the database maximum size when importing a bacpac file using SqlPackage? The import auto-sizes the database to a maximum of 32 GB, but my actual DB size is over 35 GB. Can I use DatabaseMaximumSize=(INT60) instead of DatabaseMaximumSize=(INT32)? e.g. import parameters /p:DatabaseEdition=GeneralPurpose /p:DatabaseServiceObjective=GP_Gen5_2 /p:DatabaseMaximumSize=(INT60)
Cho Wai Tun (1 rep)
Jun 12, 2021, 07:12 AM • Last activity: Jun 30, 2025, 07:08 AM
3 votes
1 answer
1327 views
MySQL: Skip INSERT / UPDATE statement that triggers error but show them in output
I have a million rows of customers to be imported via SQL dump. Customer data will be inserted into the main table customer and the child table customer_address. The dump contains the following SQL statements:

**File**: customer-dump.sql

BEGIN TRANSACTION;
INSERT INTO customer (firstname, lastname) VALUES ('John', 'Doe');
INSERT INTO customer_address (customer_id, city, country) VALUES (LAST_INSERT_ID(), 'New York', 'USA');
COMMIT;

BEGIN TRANSACTION;
INSERT INTO customer (firstname, lastname) VALUES ('Lorem', 'Ipsum');
INSERT INTO customer_address (customer_id, city, country) VALUES (LAST_INSERT_ID(), 'Chicago', 'USA');
COMMIT;

# And so on for N customers

I am importing via the CLI as:

mysql -u [user] -p [database] < customer-dump.sql 2> /tmp/customer-import-error.log

**My questions:**

- How can I continue the script execution even if some rows fail?
- How can I show/log errors for the failed rows?
- Any advice on such a massive import?
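For the "keep going and log the failures" part, the mysql client's --force option continues past SQL errors while still printing them to stderr, which the redirection above captures. If the dump itself can be regenerated, a hedged SQL-side variant is INSERT IGNORE, which downgrades row errors to warnings:

-- Sketch only: INSERT IGNORE skips rows that violate constraints instead of
-- aborting, and SHOW WARNINGS lists what was skipped and why.
INSERT IGNORE INTO customer (firstname, lastname) VALUES ('John', 'Doe');
-- Caveat: if the customer row above was skipped, LAST_INSERT_ID() still holds
-- the id from the previous customer, so the paired address insert needs its own guard.
INSERT IGNORE INTO customer_address (customer_id, city, country)
VALUES (LAST_INSERT_ID(), 'New York', 'USA');
SHOW WARNINGS;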
MagePsycho (201 rep)
Apr 18, 2017, 08:13 PM • Last activity: Jun 27, 2025, 11:08 PM
0 votes
1 answer
199 views
How to deal with a large database containing zipped files.
(I'm a PhD student in applied math; I know some programming, but admittedly I know **very** little about databases.) My question is: *how do I tackle/handle this database?* The problem is as follows. I have a "large" database (1 TB). The top-level folders are 15 batches, named BATCH1 ... BATCH15. The only relevant folder is the RawTAS folder (actually the TAS folder, but RawTAS will do for the sake of this question). Going one level deeper, each RawTAS folder has year folders going from 2006 to 2015, and each year folder has monthly folders going from 01 to 12. Each monthly folder contains a bunch of zipped CSV files. The issue is that when I extract these zip files they become insanely large, so unzipping all of them is infeasible. What would be the proper way of handling this kind of database?
Rainymood (143 rep)
Mar 8, 2018, 10:17 AM • Last activity: Jun 19, 2025, 02:09 AM
1 vote
2 answers
2854 views
How can I force the MySQL client to output to stdout or a log file when importing a SQL file?
Let's say I am importing a small SQL file using the MySQL client like so:
mysql < 10k-file.sql
The mysql client does not output anything. How can I force it to show how many rows are affected for each query, errors, and so on?
bteo (111 rep)
Jun 24, 2020, 08:33 AM • Last activity: Jun 3, 2025, 09:06 AM
1 vote
1 answer
236 views
Import from incremental backups to a new host in Oracle 11g
I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 & 1 backups to a new host using RMAN. If yes, how can I do that? For level 1 I am using the differential method.
Maximin (211 rep)
Aug 7, 2013, 05:14 AM • Last activity: May 28, 2025, 12:01 AM
1 vote
1 answer
258 views
How can I simplify the process of importing only changed data?
## Background

Two years ago, I asked a question about how to model a food's relationship with its nutrients. Today, that design has evolved. The biggest change is that whenever a food is updated, we insert a new row into the food_versions table. We do not delete or update food versions. This design was influenced by the following answer: https://dba.stackexchange.com/a/278893/240214. I've since learnt this is based on a Type 2 'slowly changing dimension'.

### Additional information

A food version's nutrients (e.g. fat), allergens (e.g. milk), and diets (e.g. vegetarian) are stored in separate tables and can also change over time. For example, a new food version that has a change to ingredients will often need new nutrient records as well.

## The relevant tables
CREATE TABLE "foods" (
  "id" uuid DEFAULT gen_random_uuid() PRIMARY KEY, -- Surrogate key
  "code" text NOT NULL, -- Natural key
  "organisation_id" uuid NOT NULL REFERENCES "manufacturer"("id") ON DELETE CASCADE,
  UNIQUE("code", "organisation_id") -- One natural key per manufacturer
);

CREATE TABLE "food_versions" (
  "food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
  "created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
  "name" text NOT NULL,
  "quantity" numeric NOT NULL,
  "quantity_unit_id" text NOT NULL REFERENCES "units"("id"),
  "ingredients" text NOT NULL,
  PRIMARY KEY("food_id", "created_time")
);

CREATE TABLE "food_nutrients" (
  "food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
  "nutrient_id" text REFERENCES "nutrients"("id"),
  "created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
  "quantity" numeric NOT NULL,
  "quantity_unit_id" text NOT NULL REFERENCES "units"("id"),
  PRIMARY KEY("food_id", "nutrient_id", "created_time")
);

CREATE TABLE "food_allergens" (
  "food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
  "allergen_id" text REFERENCES "allergens"("id"),
  "created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
  "status" text NOT NULL, -- 'FREE_FROM', 'MAY_CONTAIN', 'CONTAINS'
  PRIMARY KEY("food_id", "allergen_id", "created_time")
);

CREATE TABLE "food_diets" (
  "food_id" uuid REFERENCES "foods"("id") ON DELETE CASCADE,
  "diet_id" text REFERENCES "diets"("id"),
  "created_time" TIMESTAMP(6) WITH TIME ZONE DEFAULT NOW(),
  PRIMARY KEY("food_id", "diet_id", "created_time")
);
The data in these tables changes in bulk several times a year when we receive new catalogues (semi-structured spreadsheets) from food manufacturers.

## Our import process

1. Move the data we need from the manufacturer's semi-structured spreadsheet into our own structured spreadsheet. This looks similar to what you'd get if all the tables above were joined together:

   | code | name | ... | fat | ... | peanuts | ... | vegetarian | ... |
   | ---- | ------------ | --- | --- | --- | ------- | --- | ---------- | --- |
   | 1 | Sausage roll | ... | 21 | ... | FALSE | ... | FALSE | ... |

2. For each row in the structured spreadsheet, select the food from the database by code and compare its fields against those in the spreadsheet. This also requires querying relations (nutrients, allergens, and diets).
3. If fields have changed, determine which tables to insert new rows into.

For brevity, I've not included cases where the update is an addition (e.g. a new food) or a removal (e.g. delisting of a food or removal of an allergen). It's worth noting that these problems are inherent in the supply chain, because manufacturers do not always create new codes when they update existing products.

## Question

**Is there a better design, than the one proposed below, to simplify our process of importing only changed data?**

Join all tables together into one "foods" table, like our structured spreadsheet, and add a column to store a hash for change comparison (a rough sketch of that comparison follows below).

### Pros

- Instead of comparing all fields against those to be imported, we only have to compare the hashes.
- Any change (name, nutrient etc.) results in a single transaction/insertion, which is easy to reason about.
- We almost always need nutrients, allergens, and diets when we request a food's information.

### Cons

- We are denormalising our data, which will result in a wide table with a large quantity of NULLs.
- Due to denormalisation, we will also have _more_ duplicated data (e.g. a change in name will mean a new version that has the same nutrient values as the one above it). Duplication already happens to some degree.
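A rough sketch of that hash comparison, assuming the proposed wide table is called food_flat and the spreadsheet is loaded into a staging table import_rows; every table and column name here is illustrative, not the existing schema. Hashing the whole row as text sidesteps NULL handling in concatenation:

-- One-off: add the comparison column to the (hypothetical) wide table.
ALTER TABLE food_flat ADD COLUMN row_hash text;

-- Backfill it; ROW(...)::text renders NULLs unambiguously, and md5 is plenty
-- for change detection (this is not a security hash).
UPDATE food_flat
SET    row_hash = md5(ROW(name, quantity, fat, peanuts, vegetarian)::text);

-- Each import: insert a new version only where the incoming hash differs
-- from the latest stored one, or the food is new.
INSERT INTO food_flat (code, organisation_id, name, quantity, fat, peanuts, vegetarian, row_hash)
SELECT i.code, i.organisation_id, i.name, i.quantity, i.fat, i.peanuts, i.vegetarian,
       md5(ROW(i.name, i.quantity, i.fat, i.peanuts, i.vegetarian)::text)
FROM   import_rows i
LEFT JOIN LATERAL (
    SELECT f.row_hash
    FROM   food_flat f
    WHERE  f.code = i.code AND f.organisation_id = i.organisation_id
    ORDER BY f.created_time DESC
    LIMIT 1
) latest ON true
WHERE  latest.row_hash IS DISTINCT FROM
       md5(ROW(i.name, i.quantity, i.fat, i.peanuts, i.vegetarian)::text);

The same hash expression has to be used on both sides, so it is worth wrapping it in a small SQL function.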
Check12 (65 rep)
Nov 18, 2023, 03:47 PM • Last activity: May 25, 2025, 05:00 PM
0 votes
1 answer
298 views
Selectively load data using Oracle 11g r2 External Table Preprocessor
I am using the Preprocessor feature in Oracle 11g R2 to load several data files into a table. However, the raw files seem to have certain rows that shouldn't be imported with the rest. For example:

col1 col2 col3 col4
A    1    2    3
A    2    3    4
B    ab   bc   cd

So, I want to load only the rows with col1 = 'A' and load the rows with col1 = 'B' into another table. How can I do this as part of the preprocessor/external table load?
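Two options fit this, sketched with placeholder names: a LOAD WHEN clause in the external table's ACCESS PARAMETERS can filter rows at the access-driver level, or a single multi-table INSERT can route each row to the right target in one pass over the external table:

-- Sketch: ext_data is the external (preprocessor) table;
-- table_a and table_b are placeholders for the two real targets.
INSERT FIRST
  WHEN col1 = 'A' THEN
    INTO table_a (col1, col2, col3, col4) VALUES (col1, col2, col3, col4)
  WHEN col1 = 'B' THEN
    INTO table_b (col1, col2, col3, col4) VALUES (col1, col2, col3, col4)
SELECT col1, col2, col3, col4
FROM   ext_data;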
sfactor (111 rep)
Sep 15, 2014, 10:14 AM • Last activity: May 23, 2025, 06:04 AM
0 votes
1 answer
287 views
Fastest way to sync (or keep import) 3.5TB data from hadoop to sharded mongodb cluster
There are 3.5 TB of data in our Hadoop cluster (yes, on HDFS). And we have newly built a sharded MongoDB cluster (the latest 3.x) with 3 mongos, 3 config servers and 3 shards (each shard has 1 primary and 2 secondary nodes). We are looking for the best/fastest way to import these data from Hadoop/HDFS to our newly built sharded MongoDB cluster. All these data will go into sharded collections in the MongoDB cluster. We don't have much experience with this and have no clue how to do this the fastest way in our environment. I'd appreciate it if anyone can give a clue or point to the tools we can leverage; open source tools or commercial ones are both OK for us. Joe
nntp (13 rep)
Sep 15, 2015, 04:21 PM • Last activity: May 19, 2025, 07:04 PM
1 vote
1 answer
1138 views
SSIS - Data Import & export wizard - existing tables
For a project I'm attempting to move data from a view in **source** to a table in **destination**, and this will need to be done **periodically**. The quickest way to do this seems to be an SSIS package (as this also grants me additional control for possible data transformations in a following phase). The first time, this works excellently: I go through the wizard and the destination table(s) are created and the data inserted. However, when I run the same package a **2nd** time, SSIS complains that the tables already exist (which is right). The only way I see to rectify this is to change the source queries and add an IF EXISTS clause. That, though, will become a lot of work, as I will need to do it for 144 destination tables. Hence my question: is the IF EXISTS clause the only way to capture this, or is there an option within the wizard/SSIS to have the package check this for me and only create the table if it doesn't exist yet?
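One common pattern is to leave the data flow alone and put the existence check in an Execute SQL Task that runs before it, rather than editing all 144 source queries. A hedged T-SQL sketch for one destination table (the table name and columns are placeholders):

-- Create the destination only if it is missing; otherwise leave it alone.
IF OBJECT_ID(N'dbo.DestTable', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.DestTable (
        Id   int           NOT NULL,
        Name nvarchar(100) NULL
    );
END;

-- If each periodic run should replace the previous load rather than append:
TRUNCATE TABLE dbo.DestTable;

With 144 tables, the same task can be driven from a Foreach Loop container over a metadata list instead of being copied per table.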
Dirk R. (45 rep)
Jul 20, 2017, 08:52 AM • Last activity: May 8, 2025, 07:01 AM
1 vote
1 answer
3065 views
How do I import CSV UUID data into Postgres on the command line?
I'm using Postgres 10. I have this table:

> \d myapp_currencyprice;
Table "public.myapp_currencyprice"
   Column    |           Type           | Modifiers
-------------+--------------------------+-----------
 id          | uuid                     | not null
 price       | double precision         | not null
 created     | timestamp with time zone | not null
 currency_id | uuid                     | not null

I have a CSV of data, which looks like:

id,price,created,currency_id
fa9944a6-f622-499c-89b8-42534a541307,59451.41,2021-11-20 15:56:00-05,3965e495-d5a5-41ec-83fc-359545ca2716
13013751-d84a-441f-a19a-3c7b50c557d0,59474.17,2021-11-20 15:58:00-05,3965e495-d5a5-41ec-83fc-359545ca2716

but when I try to import the data on the command line, I get this error:

> PGPASSWORD=$DB_PASS psql -U $DB_USER -d $DB_NAME -c "\copy myapp_currencyprice FROM '/tmp/prices.csv' delimiter ',' csv"
ERROR:  invalid input syntax for uuid: "id"
CONTEXT:  COPY myapp_currencyprice, line 1, column id: "id"

What's the right way to import data from a CSV into a Postgres table?
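The error is the header row ("id,price,...") being parsed as data, since the first value "id" is not a valid uuid; telling \copy that the file has a header skips it. Same command as in the question, with the header option added:

PGPASSWORD=$DB_PASS psql -U $DB_USER -d $DB_NAME \
  -c "\copy myapp_currencyprice FROM '/tmp/prices.csv' DELIMITER ',' CSV HEADER"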
Dave (753 rep)
Dec 12, 2021, 12:58 AM • Last activity: Apr 28, 2025, 03:06 PM
4 votes
5 answers
4300 views
import 100 csv files into postgresql
I have 100 CSV files named sequentially 1,2,3....100.csv. I copy each CSV file with this script:

COPY location FROM 'C:\Program Files\PostgreSQL\9.5\data\1.csv' DELIMITER ',';

I would like to execute the COPY with a single script that covers all the files (1,2,3....100.csv). I use pgAdmin III on MS Windows.
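Since the files sit in the server's data directory, one way to cover all 100 in a single script is a DO block that builds each COPY dynamically; a sketch, assuming a superuser connection (which server-side COPY on 9.5 requires) and that the naming really is 1.csv through 100.csv. It can be run as-is from the pgAdmin query window:

DO $do$
BEGIN
    FOR i IN 1..100 LOOP
        EXECUTE format(
            $sql$COPY location FROM %L DELIMITER ','$sql$,
            'C:\Program Files\PostgreSQL\9.5\data\' || i || '.csv'
        );
    END LOOP;
END;
$do$;

The whole DO block runs as one statement, so either all 100 files load or none do; a failure in, say, file 37 rolls everything back.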
michal (103 rep)
Apr 1, 2017, 04:17 PM • Last activity: Apr 24, 2025, 07:46 AM
0 votes
2 answers
335 views
Importing mysqlnd database to MySQL server
Does importing a mysqlnd (5.0.12) database to MySQL server (5.1.66) cause errors? Currently we are performing this task and encountering errors like the following:

> ERROR 1146 (42S02): Table 'xx-xxx-xxx-xxx' doesn't exist
>
> ERROR 1273 (HY000): Unknown collation: 'utf8mb4_unicode_ci'
>
> ERROR 1115 (42000): Unknown character set: 'utf8mb4'
Zakir HC (133 rep)
Mar 8, 2016, 09:12 AM • Last activity: Apr 23, 2025, 01:06 AM
3 votes
1 answer
389 views
MySQL importing csv file by Data import wizard
I tried importing a CSV file in MySQL via the data import wizard. My CSV file has 70,000 records but only 61 are imported. I tried checking the CSV file, recreating the file from source, and changing the location of the CSV file, but nothing works. I even reinstalled MySQL version 8.0.25. Please suggest a solution or any other way of importing.
Namu N (31 rep)
May 27, 2021, 07:24 AM • Last activity: Apr 19, 2025, 11:07 AM
2 votes
2 answers
9152 views
Load csv file to mysql using load data local infile
I need a simple way to load a CSV file from my computer to the database (which is on a remote server). I'm trying to load the file using a PHP page that executes this code:
$file = $_FILES['csv']['tmp_name']; 
    $handle = fopen($file,"r"); 
    $name = $_FILES['csv']['name'];

   $import = "LOAD DATA LOCAL INFILE '" . $name .
                "' INTO TABLE temporal_load
                  FIELDS TERMINATED BY ','  
                  optionally ENCLOSED BY '\"' 
                  LINES TERMINATED BY '\\n' 
				  IGNORE 1 LINES
                  (num,ticker,company,sector,industry) ";

mysql_query($import) or die(mysql_error());
But when I select a file on the PHP page and try to load it, it shows the error message: File 'test11-14-15.csv' not found (Errcode: 2). I already verified that the variable **mysql.allow_local_infile** is ON, and the database connection was made this way:
mysql_connect(HOST,USER,PASSWORD,false,128);
The file is not on the remote server; it is on my computer, but that should work anyway because I'm using LOCAL. What am I doing wrong?
Claudia Sanchez (21 rep)
Nov 23, 2015, 07:04 PM • Last activity: Apr 11, 2025, 03:04 AM
0 votes
1 answer
1311 views
How to view databases and tables after 'imp' import using Oracle SQL developer
Maybe that's the wrong form for a question... The problem is: I've got an Oracle database dump file (very old, created with the imp utility). At last, I've imported it. The command was:

imp file=E:\Downloads\db_dispatcher\disp.dmp full=y

The console shows me there was success. Now when I open Oracle SQL Developer and create any connection, I can't list my database (there must be at least one): in the 'Tables' section I only see system stuff. I think the DB name should be 'meltshop', so I tried to find it in my Windows Services list, but there's none. Oracle SQL Developer 19, Windows 10. The DB was created in 2009. I am a newbie with Oracle. Last year I worked with a MySQL database; maybe there are some similarities? I also have a PHP interface; the config shows me:

$DBNAME = "/orcl.meltshop";
$DBLOGIN = "ml2_rt";

Instance, version, etc.:

select instance_name, host_name, version from v$instance;

INSTANCE_NAME    HOST_NAME        VERSION
---------------- ---------------- -----------------
xe               DESKTOP-T6U2873  11.2.0.2.0

The log of my last try: google disk
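A hedged first step: imp loads objects into an existing schema (the importing user's, or the original owners' on a full import), so the tables are usually there but owned by another user, and SQL Developer only shows them under "Other Users". Connected as SYSTEM, something like this shows where they landed (the ML2 pattern is taken from the PHP config above; the exclusion list is illustrative):

-- Which non-default schemas actually contain tables?
SELECT owner, COUNT(*) AS table_count
FROM   dba_tables
WHERE  owner NOT IN ('SYS', 'SYSTEM', 'OUTLN', 'XDB', 'MDSYS', 'CTXSYS', 'APEX_040000')
GROUP  BY owner
ORDER  BY owner;

-- Or look for the application schema suggested by the PHP config.
SELECT owner, table_name
FROM   dba_tables
WHERE  owner LIKE 'ML2%';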
Victor Gorban (101 rep)
Jun 26, 2019, 01:21 PM • Last activity: Apr 10, 2025, 12:06 AM
0 votes
1 answer
944 views
Debugging import of a large dump file to MySQL
I'm trying to import a 50 GB dump file to an AWS RDS instance and have an issue. I get the following error:

ERROR 2013 (HY000) at line 15850: Lost connection to MySQL server during query

In the beginning I got this error after a few minutes. Then I increased the parameter max_allowed_packet to 1G. Now I get this error after 2.5 hours. What could be the issue? Is there any way to debug it? I couldn't find any relevant logs. Here are some of the relevant DB parameters:

mysql> SHOW VARIABLES like '%innodb_bu%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 50503614464    |
+-------------------------------------+----------------+
8 rows in set (0.00 sec)

mysql> SHOW VARIABLES LIKE '%_timeout';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 3600     |
| wait_timeout                | 28800    |
+-----------------------------+----------+
12 rows in set (0.00 sec)

mysql> SHOW VARIABLES like '%max_all%';
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| max_allowed_packet       | 1048576000 |
| slave_max_allowed_packet | 1073741824 |
+--------------------------+------------+

Please advise.
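Two hedged things to check beyond max_allowed_packet: which statement sits at line 15850 of the dump (often one enormous multi-row INSERT), and the network timeouts, which at 30/60 seconds above are easy to exceed. The settings below are session-scoped guesses to try in the importing session itself; the permanent equivalents on RDS live in the parameter group rather than SET GLOBAL:

-- Run in the same session/connection that performs the import
-- (e.g. add to the top of the dump file).
SET SESSION net_read_timeout  = 600;
SET SESSION net_write_timeout = 600;
SET SESSION wait_timeout      = 28800;

-- Identify the statement that dies (shell, shown as a comment only):
--   sed -n '15850p' dump.sql | cut -c1-200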
Omri (383 rep)
Dec 24, 2017, 06:07 PM • Last activity: Apr 9, 2025, 06:05 PM
Showing page 1 of 20 total questions