Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes · 1 answer · 153 views
create table based on another table
I have a table called "Product"; it has: **ID_P**, **Name**, **expiration date**.
Now I want to create another table called "Stock". This table has **ID**, **Quantity_of_the_product**, **product_location_in_the_stock**, **ID_P** (foreign key), and **expiration date**; I want to bring the expiration date over from the Product table.
I am new to Oracle SQL, so please be patient with me.
I am using: Oracle SQL
**UPDATE:**
And is it right if I do this?
CREATE TABLE stock (ID, Quantity_of_the_product,product_location_in_the_stock, ID_P foreign key)
AS SELECT expiration_date
FROM Product
WHERE stock.id_p = product.id_P
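For reference, Oracle rejects a CREATE TABLE that mixes an explicit column list with a correlated AS SELECT like that. A minimal sketch of one conventional alternative, with every data type an assumption (the question names none): declare the foreign key, and read the expiration date through a join instead of copying it.

-- A sketch, not the asker's actual schema: all types and lengths are assumed.
CREATE TABLE stock (
    id                            NUMBER PRIMARY KEY,
    quantity_of_the_product       NUMBER,
    product_location_in_the_stock VARCHAR2(100),
    id_p                          NUMBER REFERENCES product (id_p)
);

-- The expiration date usually need not be stored twice:
-- read it through a join whenever it is needed.
SELECT s.id, s.quantity_of_the_product, p.expiration_date
FROM   stock s
JOIN   product p ON p.id_p = s.id_p;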
kylie
(39 rep)
Jan 6, 2023, 08:34 AM
• Last activity: Jul 14, 2025, 10:04 PM
0 votes · 1 answer · 1963 views
Unpivot in MySQL?
Please, I need help transforming the data that appears in the attached photo (https://i.sstatic.net/kYFsI.jpg). Maybe an UNPIVOT in SQL?
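MySQL has no UNPIVOT operator (unlike SQL Server or Oracle); the stock workaround is one SELECT per column to be unpivoted, glued together with UNION ALL. Since the photo isn't reproduced here, every name below is a placeholder:

-- Hypothetical wide table t(id, jan, feb, mar) unpivoted to (id, month, value).
SELECT id, 'jan' AS month, jan AS value FROM t
UNION ALL
SELECT id, 'feb', feb FROM t
UNION ALL
SELECT id, 'mar', mar FROM t
ORDER BY id, month;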

Christian Fleidl
(1 rep)
Jul 1, 2019, 03:22 PM
• Last activity: Jun 4, 2025, 06:03 AM
0
votes
1
answers
243
views
How to restore MySQL database files without having a MySQL instance?
After a crash of my PC, I'm left with several databases (27 GB in total) on my data drive, while my OS drive (Windows 7) is gone.
I've reinstalled the machine by now (Windows 10) and set up a new MySQL server (via installation of XAMPP). I would like to recover the old databases and add them to the new MySQL server.
What would be the best way to copy/move/restore these databases to the new installation?
The old MySQL install was version 5, but I have no idea what specific version. I could set up a new MySQL install version 5 if needed, but I'd prefer a solution that works regardless of version (within reasonable limits, of course).
I've browsed through two questions that are similar: https://dba.stackexchange.com/questions/74258/how-to-restore-mysql-database-from-files and https://dba.stackexchange.com/questions/189045/can-i-restore-a-mysql-table-from-a-file-non-sql-backup but they don't address the possible difference in version.
Thanks!
Marlon
(1 rep)
Jul 9, 2020, 09:38 PM
• Last activity: May 27, 2025, 09:08 AM
3 votes · 1 answer · 389 views
MySQL: importing a CSV file with the data import wizard
I tried importing a CSV file in MySQL via the data import wizard.
My CSV file has 70,000 records but only 61 are imported.
I tried checking the CSV file, recreating the file from source, and changing the location of the CSV file, but nothing works.
I even reinstalled MySQL version 8.0.25.
Please suggest a solution or any other way of importing.
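A common gotcha: the Workbench import wizard stops quietly at the first row it cannot parse (mismatched quotes or mixed line endings, typically), which would explain 61 of 70,000. LOAD DATA is the usual bulk alternative and reports per-row warnings; a sketch, with the path, table name, and CSV dialect all assumed:

-- Assumed path, table, and CSV dialect; adjust to the real file.
-- LOCAL requires local_infile to be enabled on both client and server.
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;    -- skip the header row

SHOW WARNINGS;     -- inspect rows that were rejected or truncated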
Namu N
(31 rep)
May 27, 2021, 07:24 AM
• Last activity: Apr 19, 2025, 11:07 AM
1 vote · 1 answer · 906 views
How can I arrange a database structure for ranking cards in a list on a Kanban board?
Thank you for helping me.
I have a problem designing my database structure for a kanban board application. How can I design the kanban board to rank cards in a list for sorting and drag-and-drop?
Currently this is my design:
1. cards table
-id (P)
-list_id
-name
-title
??? (How to design for ranking cards in a list, for sorting and drag-and-drop within the list)
2. card_lists table
-id (P)
-board_id
-name
?? (How to design for ranking the card lists on a board, for sorting and drag-and-drop within the board)
3. boards table
-id
-name

Imagine I am implementing the card-movement function of the kanban board (moving a card within a list, or from one list to another). I'm having a problem using a weight as the metric that determines the order of cards in a list: the more we move a card, the more decimal digits its weight needs, which leads to values that the FLOAT or DOUBLE data types cannot store exactly (divide and divide and divide, ...).
So is there any way to overcome this problem, and is there a better way to calculate the weight to store in the database?
Thank you so much for your help
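The shrinking-fraction problem has a well-known alternative: store an integer position with deliberate gaps, and renumber a list only when a gap closes. A MySQL-flavoured sketch (the question names no DBMS; table and column names follow the design above, and the gap size 1024 and ids 42 and 7 are arbitrary placeholders):

-- Integer positions spaced 1024 apart leave room for many drops
-- between any two neighbours before a renumbering is needed.
ALTER TABLE cards ADD COLUMN position INT NOT NULL DEFAULT 0;

-- Dropping a card between neighbours at positions 1024 and 2048:
UPDATE cards SET position = (1024 + 2048) DIV 2 WHERE id = 42;

-- When two neighbours become adjacent (no integer midpoint left),
-- renumber the whole list once with fresh gaps:
UPDATE cards c
JOIN (SELECT id, ROW_NUMBER() OVER (ORDER BY position) * 1024 AS new_pos
      FROM cards WHERE list_id = 7) r ON r.id = c.id
SET c.position = r.new_pos;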

DFX Nguyễn
(111 rep)
Dec 30, 2021, 11:14 AM
• Last activity: Apr 8, 2025, 08:03 PM
1 vote · 1 answer · 2451 views
How to recover PostgreSQL data from $PGDATA/base files after a system crash
I was working on my Ubuntu machine and left it on to grab food. Upon returning, I found it had shut itself down. It was not a power failure, because I have a home backup system and the lights never go off in my house.
When I turned it on, it went straight to BusyBox, because apparently the fsck tool had somehow moved the entire contents of the root partition and everything else to **lost+found** on my laptop's primary drive, which is an SSD.
So I backed up the lost+found directory to an external HDD and installed Ubuntu 22 (coming from Ubuntu 20). I did not lose personal data, because I had my /home directory on my secondary HDD. However, everything else was on the same partition on the SSD.
After perusing the lost+found files, I was able to extract all the files from /lib/postgresql/. Now, because PostgreSQL uses OIDs, unlike MySQL, which uses names, I had to figure out the databases based on the information on this website: https://www.postgresql.fastware.com/blog/how-postgresql-maps-your-tables-into-physical-files
For reference, I was able to recover the MySQL tables, since they simply use table names. With PostgreSQL, however, I'm not sure if just copying the **$PGDATA/base/** files will work, given the information mentioned on that website.
Is it possible to extract the data (through some tool) from the base files, or to re-import them back into an active cluster/instance?
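A note on feasibility: a lone $PGDATA/base directory is rarely restorable by copying, because the files only make sense together with the cluster's catalogs, WAL, and transaction status (and the same major version); a tool like pg_filedump can carve raw tuples out of the files when nothing else survives. The OID-to-file mapping the linked article describes can also be queried directly on a live cluster; a sketch (table name and filenode number are placeholders):

-- On a running cluster: which file on disk backs a given table?
SELECT pg_relation_filepath('public.mytable');   -- e.g. base/16384/16423

-- Or the other way round: from a filenode number back to the relation.
SELECT relname
FROM   pg_class
WHERE  pg_relation_filenode(oid::regclass) = 16423;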
TheRealChx101
(121 rep)
Jul 31, 2022, 12:20 AM
• Last activity: Mar 22, 2025, 01:00 PM
2 votes · 2 answers · 696 views
Post migration MySQL database validation
We are migrating from MySQL 5.5 Enterprise to MySQL 8.0 Enterprise Edition, and we are migrating version by version: from 5.5 to 5.6, then 5.6 to 5.7, and lastly from 5.7 to 8.0. There is also an OS migration, from RHEL 5.4 to 8.4. It's all okay, but the client wants data validation value by value. Normal data validations like row counts and checksums are not sufficient for the client, as the data is sensitive. Please suggest a good tool, script, or query for value-to-value data validation.
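Percona Toolkit's pt-table-checksum is the standard tool for exactly this, but where only SQL is allowed, per-row hashes computed identically on both servers give a value-by-value comparison (MD5 and CONCAT_WS exist in both 5.5 and 8.0); a sketch with placeholder names:

-- Run the same query on source and target, then diff the two result sets.
-- Table and column names are placeholders. CONCAT_WS skips NULLs, so wrap
-- columns in COALESCE(col, '<null>') if NULL vs. empty string must differ.
SELECT id,
       MD5(CONCAT_WS('|', col1, col2, col3)) AS row_hash
FROM   mytable
ORDER  BY id;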
Sameer Bille
(21 rep)
Mar 30, 2022, 11:11 AM
• Last activity: Mar 6, 2025, 03:03 AM
-1 votes · 1 answer · 57 views
How to identify the type of database file from a data dump?
I have a file that I suspect is a database, but I don't know what kind of database it is (SQLite, Postgres, etc.). How can I test whether it's a database and, if so, what kind of database it is?
If it's a database, it's using a public database system. I tried reading the file with a hex editor but didn't see any obvious hint in it (at least, nothing obvious to me).
Franck Dernoncourt
(2093 rep)
Jan 13, 2025, 12:56 AM
• Last activity: Jan 13, 2025, 03:36 PM
-1 votes · 1 answer · 852 views
How to reduce big database files (data file) in SQL Server without SHRINK?
I found a couple of questions regarding shrinking big data files, but I think they don't quite match my situation. So here is my question.
I have a big database with almost 8 TB of data file and almost 160 GB of log file.
The database is offline.
Due to the big size of the data file, the shrink process will take a long time, so I'm not sure how to proceed in order to avoid any problem on the instance.
One option I have in mind is to create a new data file, then copy the tables to the new data file, and finally shrink the old data file.
Best regards
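The plan sketched above is essentially the standard SHRINK alternative. In SQL Server that means adding a filegroup, rebuilding each clustered index onto it (which moves the table), and finally emptying or shrinking the old file; the database has to be online for any of this. A sketch with placeholder names and sizes:

-- Placeholder database, file, and table names throughout.
ALTER DATABASE BigDb ADD FILEGROUP FG_New;
ALTER DATABASE BigDb ADD FILE
    (NAME = BigDb_New, FILENAME = 'E:\Data\BigDb_New.ndf', SIZE = 500GB)
    TO FILEGROUP FG_New;

-- Moving a table = rebuilding its clustered index on the new filegroup
-- (an index backing a PK constraint needs DROP/ADD CONSTRAINT instead).
CREATE UNIQUE CLUSTERED INDEX CIX_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_New;

-- Afterwards, migrate whatever remains and release the old file
-- (the primary data file can only be shrunk, never removed).
DBCC SHRINKFILE (BigDb_Old, EMPTYFILE);
ALTER DATABASE BigDb REMOVE FILE BigDb_Old;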
SakZepelin
(21 rep)
Sep 21, 2021, 02:28 PM
• Last activity: May 16, 2024, 10:03 PM
0 votes · 1 answer · 730 views
Exporting FoxPro DBF Files
I am attempting to export data stored by a client's Visual FoxPro database to be used in a MySQL context. The data given is a folder of .dbf, .cdx, and .tbk files, as well as a few .dct, .dcx, and .dbc files. I have access to the directory with the files, but I do not have access to FoxPro.
I have tried opening them with dBASE, which is better supported and is supposed to be able to read DBF files, but to no avail: dBASE kept claiming that there were corrupted table/index headers, which prevented it from opening the DBF files even when they were in the same directory as the associated CDX files. I have tried opening them with data-recovery tools to see what I could gather; those appear to show strings stored in the rows, but, due to what I believe is an encoding issue, they're surrounded by gibberish and not well organized. I have even tried opening them with spreadsheet tools like LibreOffice, but they gave similar results.
Are there any other tools or methods I can attempt to get the data from these files and store them in a more modern format? I tried obtaining FoxPro, but it refuses to run on my version of Windows 10 due to its discontinuation.
I appreciate any assistance or advice that you can give. Thank you in advance!
Eli
(101 rep)
Oct 21, 2022, 07:57 PM
• Last activity: Apr 4, 2024, 02:06 PM
0 votes · 1 answer · 38 views
How to implement historical data with this EERD?
I am looking for a way to implement historical data with the following EERD (https://i.sstatic.net/21vnK.png); what I have tried is below.
## What have I tried?
- Temporal tables
> Didn't work, because I use MySQL and it has no native support. And native support wouldn't be the best solution anyway, because it would tie me to one database provider.
- History tables
> This is a valid option, but some things won't work.
> For example, if I change the supplier name, a new record is added to the SupplierHistory table, and the data in the Supplier row gets mutated.
>
> Then let's say, for example, the next day we change the address that supplier was/is linked to: the address gets added to the history table, and the row gets mutated...
>
> But then the history row of the updated supplier will still be pointing to the old address in the address table?
- isActive strategy
> This is also a strategy: I set fields that are current and valid to isActive true, otherwise false. There is a problem with this one too: my supplier and customer tables are used for login, so I cannot just add a new row for changed data, because the row is tied to the primary key (ID) and all other tables reference it. So I need to change the original row, but then there is data loss.
I don't know what to do/try anymore; does anybody have suggestions?
Thanks in advance,
Kind Regards,
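One pattern that sidesteps the dangling-reference problem described under "History tables" is row versioning: the stable ID keeps serving logins and foreign keys, each version row pins the exact address it was valid with, and address rows are treated as immutable (a changed address becomes a new row). A hedged MySQL sketch with assumed names and types:

-- Versioned supplier rows: supplier_id stays stable for logins and FKs;
-- each version records the address that was current at that time.
CREATE TABLE supplier_version (
    supplier_id INT          NOT NULL,
    version_no  INT          NOT NULL,
    name        VARCHAR(100) NOT NULL,
    address_id  INT          NOT NULL,  -- FK to an immutable address row
    valid_from  DATETIME     NOT NULL,
    valid_to    DATETIME     NULL,      -- NULL = current version
    PRIMARY KEY (supplier_id, version_no)
);

-- "Current" view for the application / login path:
CREATE VIEW supplier_current AS
SELECT * FROM supplier_version WHERE valid_to IS NULL;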

user23600793
(1 rep)
Mar 16, 2024, 02:56 PM
• Last activity: Mar 16, 2024, 11:01 PM
0 votes · 0 answers · 47 views
Copying data between databases with the same skeleton
I have a database design with the same tables, procedures, and all other details on two different servers. Since one of them ran out of traffic, I forwarded the web software project to the other one.
I want to generate INSERT INTO statements for all table data from the last 15 days (January 15 to January 31, 2024) on server A and add them manually to server B.
I am aware that such code is produced with Generate Scripts, but it pulls the whole table, and I do not want to filter the result by hand afterwards; I want it to directly generate the INSERT INTO code for rows between these dates.
The database is SQL Server and the server is Windows Server.
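Generate Scripts indeed offers no row filter. Two common workarounds: script INSERTs from a filtered copy of each table, or skip scripting entirely and pull the window across a linked server. A sketch of the latter, where the server, database, table, and date column are all placeholders:

-- Run on server B, pulling only the 15-day window from linked server A.
-- [ServerA] must already be configured as a linked server.
INSERT INTO dbo.Orders (OrderId, CustomerId, CreatedAt)
SELECT OrderId, CustomerId, CreatedAt
FROM   [ServerA].MyDb.dbo.Orders
WHERE  CreatedAt >= '2024-01-15'
  AND  CreatedAt <  '2024-02-01';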
Kadir Çolak
(3 rep)
Jan 31, 2024, 07:47 PM
1 vote · 1 answer · 4598 views
How to grant read-write access to a user within the whole database cluster?
I got a database cluster with 1300 databases. I created a role and a user and granted access using the command GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myuser, so myuser is able to run queries in one particular database. When I connect to a different database with myuser and run select * from person, I get "permission denied", so I need to connect to that database as admin, run GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myuser again, then log in as myuser, and only then am I able to run select * from person.
I have 1300 databases in the cluster, as I said. Is there any command which grants read and write privileges for myuser in the whole cluster?
Please note: myuser should not be a superuser.
I tried:
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO service;
\t
SELECT '\connect ' || datname || '; ' || E'\n' || 'GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public to myuser;' FROM pg_database WHERE datistemplate = false;
but these commands only work in the database where I run them, and for all the different databases I'd have to run them again, like 1000 times.
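There is no cluster-wide GRANT in PostgreSQL; table privileges live inside each database, so some loop over databases is unavoidable. The generator above is nearly the standard recipe: captured to a file with \o and replayed with \i, it becomes a one-shot script (a sketch, run in psql as an admin; the file path is a placeholder):

-- Generate one \connect + GRANT pair per database, then replay the file.
\t
\a
\o /tmp/grant_all.sql
SELECT format(E'\\connect %I\nGRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myuser;', datname)
FROM pg_database
WHERE datistemplate = false;
\o
\i /tmp/grant_all.sql

For tables created later, ALTER DEFAULT PRIVILEGES helps, but it is likewise per database.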
azamat
(11 rep)
Jun 2, 2022, 05:32 PM
• Last activity: Jan 10, 2024, 12:00 PM
-2 votes · 2 answers · 114 views
Insert data from one table to another
I have a table in which new entries come in every second. The table is just like this:
| Value1 | Data1 | Value2 | Data2 |
|-|-|-|-|
| 1 | Er | 2 | Er |
| 3 | Er | 4 | Er |
Now I want to insert these values into another table as:
| Value | Data |
|-|-|
| 1 | Er |
| 2 | Er |
| 3 | Er |
| 4 | Er |
And when new data is inserted into the first table, each execution of the query should insert that new data into the second one, but not the rows copied previously.
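The reshaping itself is a two-branch UNION ALL; the "only new rows" requirement usually needs a watermark, e.g. an auto-increment id on the source whose highest copied value is remembered. A sketch, where the id column and the watermark table are assumptions:

-- Assumes source(id AUTO_INCREMENT, value1, data1, value2, data2)
-- and a one-row table watermark(last_id) remembering the last id copied.
INSERT INTO target (value, data)
SELECT value1, data1 FROM source WHERE id > (SELECT last_id FROM watermark)
UNION ALL
SELECT value2, data2 FROM source WHERE id > (SELECT last_id FROM watermark);

-- Advance the watermark (run both statements in one transaction).
UPDATE watermark SET last_id = (SELECT COALESCE(MAX(id), 0) FROM source);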
charu gupta
(1 rep)
Dec 13, 2023, 11:41 AM
• Last activity: Dec 31, 2023, 06:10 PM
0 votes · 2 answers · 3885 views
insert on table being unnaturally slow
I have an "Basic" pricing tier Azure SQL database with a table of 7 columns, an ID column of int that is the clustered index and primary key, one datetime2(0) column, 3 varchar(100) columns and 2 varchar(MAX) columns, all nullable. The table has no triggers, constraints or foreign keys. Now I'm inse...
I have an "Basic" pricing tier Azure SQL database with a table of 7 columns, an ID column of int that is the clustered index and primary key, one datetime2(0) column, 3 varchar(100) columns and 2 varchar(MAX) columns, all nullable.
The table has no triggers, constraints or foreign keys.
Now I'm inserting a large amount of test data; I'm running
INSERT INTO table_name () VALUES ()
GO 680000
However, the query has been running for 5 hours and only 290,000 rows have been inserted.
I'm trying to find out why.
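A note on the mechanics: GO 680000 repeats the batch 680,000 times, so this is 680,000 single-row transactions, each with its own round trip and log flush; on the Basic tier's throttled log throughput, that pace is unsurprising. A single set-based insert writes the same volume far faster; a sketch (column names are invented, since the statement above elides them):

-- One set-based insert instead of 680,000 single-row batches;
-- cross-joined system views act as a cheap numbers table.
INSERT INTO table_name (created_at, col1)
SELECT TOP (680000)
       SYSUTCDATETIME(),
       'test data'
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;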
Tessaract
(167 rep)
Aug 16, 2021, 04:45 PM
• Last activity: Dec 21, 2023, 02:12 AM
1 vote · 0 answers · 25 views
Power Pivot/Data model issue
Good day,
I am facing two blocking points and would need your insights, please.
I have downloaded all my sheets in Power Query and made the necessary transformations, including creating a common ID column on each sheet based on a project number and company ID (= Project number (IFS)). The goal is to create a report (a PivotTable in tabular form to display/align the data) that relates the different sheets based on this Project ID. To create the report, I would like to build a data model and use a Power Pivot object. (Screenshot of the desired report design not included.)
My first blocking point: once the data model is created and I start using Power Pivot, the data that should be displayed under its own project number is put on one line per project and also duplicated across all the other project lines. For example, 'Inspection Activities': all its data in the PivotTable ends up in a single row in sequence and is duplicated for each Project ID. Did I miss a step? Or is this a limitation of the system?
My second blocking point is an error when creating a relationship (screenshot not included). The sheet 'Nadia POs 2020-2023' lists a set of purchase orders by project and company. However, it contains replicated data in the 'Project ID (IFS)' column, because for each project/company we have multiple purchase orders.
When I try to establish the relationship, I find that it is blocked. Do you see a way to bypass this error so that this sheet can be related to the rest of the sheets?
Maissa Oueslati
(11 rep)
Dec 6, 2023, 10:38 AM
0 votes · 0 answers · 46 views
Seeking Clarity on Calculating Tuples in Slotted Page Array with Unspanned Organization
I'm currently grappling with a database administration question and would greatly appreciate some guidance. Here's the scenario: I'm working with a slotted page array organized under the unspanned model, consisting of a header (11B) and 3B slot references. The relation in question is 'Book,' which includes fields for ISBN (char(11)), title (char(26)), and author (char(22)).
My task is to determine the number of tuples that can fit into a 16000 B slotted page array. I've followed the calculations, considering the storage requirement per tuple and the available space within the page, but I don't understand why the answer is 257; I'm struggling to see where that specific number comes from.
Could someone kindly walk me through the steps or provide insights into why the answer is precisely 257?
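For what it's worth, the arithmetic that produces 257, assuming the 3 B slot references share the 16000 B page with the tuples: each Book tuple occupies 11 + 26 + 22 = 59 B and additionally costs one 3 B slot reference, while the 11 B header is paid once, so

$$\left\lfloor \frac{16000 - 11}{(11 + 26 + 22) + 3} \right\rfloor = \left\lfloor \frac{15989}{62} \right\rfloor = 257$$

The leftover 15989 − 257 × 62 = 55 B is too small for a 258th slot-plus-tuple, hence exactly 257.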
cricket900
(129 rep)
Dec 3, 2023, 02:54 PM
1 vote · 2 answers · 292 views
Should a frequently-growing log file be stored as a table or a blob in the database?
I'm an undergrad and haven't worked with any actual database yet, so my apologies if I misunderstand/misuse any terms. My question comes from an assignment of mine, but this is my attempt at learning, not having my homework done for me.
The assignment, paraphrased:
A university's printing service allows students on the campus to upload their files to the system and send uploaded files to one of the university's printers for printing...
The system logs all printing done by all students, recording the student's ID, which printer was used, the print file's name, and the time of printing. An admin of the system should be able to view the log filtered by any student, printer, and/or time period (to my knowledge this requires indexing).
A question I come across is which data storage approach (database or file system) I should be using for the students' files and for the log file, both being frequently growing data.
From this SO question I gathered that frequently updated files are better stored in the file system, and from this question that files could be stored in a database as blobs.
My questions are:
1. Should the log file be stored with database or file system? (Or I reckon the determining question would be "Is it possible to filter by multiple values without indexing?")
2. If student files are stored with a blob related to a student in the database table, is it possible for each student to only have access to their own files?
3. Why is the file system better fitted for frequently-growing files?
4. Is it possible (or practical) for a server to use both database and file system?
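On question 1: a log kept as rows in an indexed table answers the multi-filter requirement directly, whereas filtering a flat file by student, printer, and period means scanning the whole file every time. A minimal generic-SQL sketch, with all names and types as placeholders:

-- One row per print job; the indexes serve the admin's three filters.
CREATE TABLE print_log (
    log_id     BIGINT PRIMARY KEY,
    student_id VARCHAR(20)  NOT NULL,
    printer_id VARCHAR(20)  NOT NULL,
    file_name  VARCHAR(255) NOT NULL,
    printed_at TIMESTAMP    NOT NULL
);
CREATE INDEX ix_log_student ON print_log (student_id, printed_at);
CREATE INDEX ix_log_printer ON print_log (printer_id, printed_at);

-- "All jobs by student S123 on printer P7 during March":
SELECT * FROM print_log
WHERE student_id = 'S123' AND printer_id = 'P7'
  AND printed_at >= '2023-03-01' AND printed_at < '2023-04-01';

The file-system advantage in question 3 concerns large opaque blobs (the uploaded documents themselves), not small structured records like log entries.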
Cydennn
(13 rep)
Nov 5, 2023, 12:27 PM
• Last activity: Nov 6, 2023, 12:30 PM
0 votes · 1 answer · 17 views
Correct way of storing the "reach profile" of an object
We've got to store information about a diversity of mechanical "arms" that have a certain reach profile. We need to store the information about what kind of reach profile the arm has. Since there are a multitude of joints in the arms, the reach is not a perfect parabola, it could look as follows.
The purpose of the data in our database is for a "webshop" to be able to filter on horizontal reach versus vertical reach. As you can imagine, once a certain vertical reach has been entered, there is only a certain horizontal reach available at that height.
What would be the best format for storing this data? We're allowed to take a "resolution" approach here, so the outcome of filtering this data does not have to be perfect.
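Given that a limited resolution is acceptable, a common representation is to sample the envelope: one row per (arm, height) holding the maximum horizontal reach at that height, which turns the webshop filter into a single indexed comparison. A sketch, with units and resolution as assumptions:

-- Sampled reach envelope, e.g. every 100 mm of height (assumed resolution).
CREATE TABLE arm_reach (
    arm_id       INT NOT NULL,
    height_mm    INT NOT NULL,
    max_horiz_mm INT NOT NULL,
    PRIMARY KEY (arm_id, height_mm)
);

-- Webshop filter: arms reaching at least 2.5 m out at 4 m up
-- (the requested height rounded to the stored resolution).
SELECT arm_id FROM arm_reach
WHERE height_mm = 4000 AND max_horiz_mm >= 2500;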

Ambidex
(111 rep)
Oct 31, 2023, 09:30 AM
• Last activity: Oct 31, 2023, 12:25 PM
0 votes · 2 answers · 559 views
Compare two tables for discrepancies
I am trying to compare two tables in two separate databases (same instance) for a new data extract.
The tables have around 4 million rows and, due to how they've been extracted, are not in the same order in each table; they are just staging tables (there are no PKs).
I have tried using:
(select * from table 1
Except
select * from table 2)
UNION ALL
(select * from table 2
Except
select * from table 1)
however, this results in around 10 million rows, so it is not working as expected.
I would guess the results should be around a few thousand.
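A likely reason the EXCEPT approach balloons: EXCEPT is set-based, so duplicate rows collapse, and without a key a row that differs in even one column is reported twice, once from each side; a systematic difference (trailing spaces, datatype formatting) would multiply that across the table. A duplicate-safe alternative tallies each distinct row's occurrences from both sides; table and column names below are placeholders:

-- Rows whose occurrence counts differ between the two tables.
SELECT col1, col2, col3,
       SUM(src) AS surplus   -- > 0: extra in table1; < 0: extra in table2
FROM (
    SELECT col1, col2, col3,  1 AS src FROM db1.dbo.table1
    UNION ALL
    SELECT col1, col2, col3, -1 AS src FROM db2.dbo.table2
) AS both_tables
GROUP BY col1, col2, col3
HAVING SUM(src) <> 0;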
QueryQuirk
(3 rep)
Oct 2, 2023, 01:57 PM
• Last activity: Oct 2, 2023, 04:09 PM