Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
284
views
Command-line command to read the Oracle Database export success message in a log file
We have several Oracle 19c servers with daily exports. I would like to find a way to run a command that reads the export log file and finds the success message for the day. E.g. I would like to PuTTY into the Linux server and have a command I can copy and paste that reads the daily export log file and tells me, in one command/output, whether the export was successful or not. Or are there any better ideas to automate reporting that daily exports are successful on 40+ servers?
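A minimal sketch of the per-server check, assuming a Data Pump export whose log ends with the usual "successfully completed" line; the log path and message text are assumptions to adjust for your environment:

```shell
# Sketch: grep today's export log for the Data Pump success banner.
# LOGFILE is a placeholder path; override it to match your export job.
LOGFILE="${LOGFILE:-/u01/exports/expdp_$(date +%Y%m%d).log}"
if grep -q "successfully completed" "$LOGFILE" 2>/dev/null; then
    echo "EXPORT OK: $LOGFILE"
else
    echo "EXPORT FAILED OR LOG MISSING: $LOGFILE"
fi
```

For 40+ servers, one option is to loop over a host list with ssh and collect the one-line verdicts into a single report.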
crcoder
(1 rep)
Jan 7, 2023, 11:42 PM
• Last activity: May 17, 2025, 07:07 AM
1
votes
1
answers
65
views
Take a backup from specific range
I need to back up the data of a specific table by week and then restore it, but without truncating the table. The table contains a fechaRegistro column.
I prepared the following script, but I'm not sure about the way I declare the date variables. Could you help me with it, please?
#!/bin/bash
# VARIABLES
FECHA=$(date +"%Y%m%d")
SERVIDOR="$(hostname)"
PASSWORD='PASSWORD'
BD=ordenes
TABLA="backlogPedidos"
ARCHIVO="$SERVIDOR-${BD^^}${TABLA^^}-$FECHA.sql"
RUTA_NAS="/tmp/"
# Set variables to compute the weekly range
INICIO_SEMANA=$(date -d "last sunday - 6 days" +"%Y-%m-%d 00:00:00")
FIN_SEMANA=$(date -d "last saturday" +"%Y-%m-%d 23:59:59")
# Generate the backup for a one-week range
mysqldump --databases $BD --tables $TABLA --where="fechaRegistro BETWEEN '$INICIO_SEMANA' AND '$FIN_SEMANA'" --skip-comments --compact --single-transaction --default-character-set=UTF8 --insert-ignore --complete-insert --skip-triggers --skip-routines --no-create-info --disable-keys --set-gtid-purged=OFF -q --lock-tables=false --user=backup -p$PASSWORD > /tmp/$ARCHIVO
echo "Backup semanal generado: $ARCHIVO"
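On the date-variable question: anchoring both ends of the range on the same reference day keeps the window a full seven days and prevents the start and end from drifting apart. A sketch, assuming GNU date and a Sunday-to-Saturday week:

```shell
# Sketch: derive both ends from "last saturday" so the range is always a
# complete Sunday-to-Saturday week (GNU date's relative syntax assumed).
FIN_SEMANA=$(date -d "last saturday" +"%Y-%m-%d 23:59:59")
INICIO_SEMANA=$(date -d "last saturday - 6 days" +"%Y-%m-%d 00:00:00")
echo "range: $INICIO_SEMANA .. $FIN_SEMANA"
```

Since the dump already uses --insert-ignore, restoring it with a plain `mysql "$BD" < "$ARCHIVO"` leaves existing rows in place, which is what "restore without truncate" needs.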
Carolina
(47 rep)
Nov 26, 2024, 11:18 PM
• Last activity: Nov 27, 2024, 09:15 AM
0
votes
1
answers
185
views
capture deleted rows count in a variable from mysql table using bash script
I am trying to capture the number of deleted rows from a MySQL table in a bash script using the --execute option, and it doesn't seem to work. I have gone through some questions that mention using the -vv option; it works, but not as expected. I used various combinations of -vvv, --silent, and --skip-column-names with the mysql command, plus awk and sed, but nothing seems to work.
MySQL Client is mysql Ver 8.0.39 for Linux on aarch64 (MySQL Community Server - GPL)
[How to get number of rows deleted from mysql in shell script](https://stackoverflow.com/questions/2628310/how-to-get-number-of-rows-deleted-from-mysql-in-shell-script)
[How to get number of rows affected, while executing MySQL query from bash?](https://stackoverflow.com/questions/1083866/how-to-get-number-of-rows-affected-while-executing-mysql-query-from-bash)
[How can you output UPDATE / INSERT query results when using MySQL -e,--execute?](https://dba.stackexchange.com/questions/23527/how-can-you-output-update-insert-query-results-when-using-mysql-e-execute)
Here is the code
deleted=$(mysql --login-path=lp1 --init-command="SET SQL_LOG_BIN = 0" -vv --silent --skip-column-names db -e "delete from table1 where column1 between date_format('2022-04-01 00:00:00','%Y-%m-%d 00:00:00') and date_format('2023-03-31 23:59:59','%Y-%m-%d 23:59:59') order by column1 limit 20;")
echo ${deleted}
The statement deletes certain rows but prints an output like this
--------------
delete from table1 where column1 between date_format('2022-04-01 00:00:00','%Y-%m-%d 00:00:00') and date_format('2023-03-31 23:59:59','%Y-%m-%d 23:59:59') order by column1 limit 20
--------------
Query OK, 20 rows affected
--------------
1 row in set

Bye
If I run a separate command to get the row_count, it gives me -1:
mysql --login-path=${LP_wallet_0} -e "select row_count()"
Even if I add row_count() to the same delete query, it just appends this to the output:
select row_count()
--------------
2

1 row in set

Bye
I tried different combinations of -vv, --silent, and --skip-column-names, and awk and sed too, but I am unable to get the count.
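For context, ROW_COUNT() is per-session, which is why a second mysql invocation returns -1: it runs in a fresh connection. Appending `SELECT ROW_COUNT();` to the same -e string (with -N -s so only the number is printed) keeps it in one session; alternatively, keep -vv and parse the "rows affected" line. The parsing can be sketched in pure shell, with a sample string standing in for the real -vv output:

```shell
# Pure-shell sketch of parsing "Query OK, N rows affected"; the sample
# string below imitates what mysql -vv prints and is an assumption.
output="Query OK, 20 rows affected"
deleted=$(printf '%s\n' "$output" | sed -n 's/.*Query OK, \([0-9][0-9]*\) rows\{0,1\} affected.*/\1/p')
echo "deleted: $deleted"
```

The sed pattern matches both "1 row affected" and "N rows affected".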
Avinash Pawar
(216 rep)
Nov 3, 2024, 05:36 PM
• Last activity: Nov 4, 2024, 10:51 AM
0
votes
1
answers
43
views
How do I ensure that MySQL is writing out its error log as the correct user?
I have a development Airflow server. It uses MySQL as its database. In the mysqld settings, I have the user set to mysql.
However, the log is being written out by a different user named adm whenever the log is rolled over. I have confirmed that the desired user is set to mysql in the config and restarted the MySQL service to pick up this change, but the ownership still changes daily to adm. How can I make it so that the mysql user owns the error log file after rollover?
[mysqld]
#
# * Basic Settings
#
user = mysql
...
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
Sample current directory:
drwxr-x--- 2 mysql mysql 4096 Oct 23 13:37 .
drwxrwxr-x 13 root syslog 4096 Oct 23 00:00 ..
-rw-r----- 1 adm adm 0 Oct 23 00:00 error.log
Desired directory:
drwxr-x--- 2 mysql mysql 4096 Oct 23 13:37 .
drwxrwxr-x 13 root syslog 4096 Oct 23 00:00 ..
-rw-r----- 1 mysql mysql 0 Oct 23 00:00 error.log
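The daily change of ownership points at log rotation rather than mysqld itself: logrotate typically recreates the file, and its `create` directive decides the new owner. A sketch of what the policy could look like (the path, schedule, and postrotate command are assumptions; check the actual file under /etc/logrotate.d/ on your system):

```
/var/log/mysql/error.log {
    daily
    rotate 7
    missingok
    compress
    # recreate the new log as mysql:mysql instead of the distro default
    create 640 mysql mysql
    postrotate
        # ask mysqld to reopen its error log; exact command depends on setup
        mysqladmin flush-logs error 2>/dev/null || true
    endscript
}
```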
Zack
(121 rep)
Oct 24, 2024, 04:19 PM
• Last activity: Oct 24, 2024, 06:15 PM
0
votes
1
answers
143
views
Help with creating postgres databases with - in the name
I'm currently trying to script some migration of databases on a server.
As part of this process I will need to create a new database with a similar name to the existing database.
Whilst I have this working for the majority of cases, I have come across a problem when the name contains the - character, e.g. sample-database-name.
My script will execute the following command
psql -c "CREATE DATABASE sample-database-name WITH TEMPLATE = template0;"
Which leads to the following error message
>ERROR: syntax error at or near "-"\
LINE 1: CREATE DATABASE sample-database-name WITH TEMPLATE = templat...
I believe this is due to the - in the name, as names without it work fine. Is there a way to escape the database name in the command so that these names can be passed in from the script? Ideally I'm looking for a solution that doesn't require me to check for problematic characters, and simply has the name escaped in some fashion.
UPDATE:
This is the function in the script that handles the database creation call
create_new_database() {
local old_db=$1
local new_db="${old_db}_new"
psql -c "CREATE DATABASE $new_db WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';"
}
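Identifiers containing - are legal in PostgreSQL only when double-quoted. In the function above that means escaping double quotes inside the already double-quoted psql string; a minimal sketch of the quoting, printed rather than executed here:

```shell
# Sketch: wrap the database name in escaped double quotes so characters
# like '-' are treated as part of the identifier.
old_db="sample-database-name"
new_db="${old_db}_new"
sql="CREATE DATABASE \"$new_db\" WITH TEMPLATE = template0;"
echo "$sql"
# psql -c "$sql"
```

Note that double-quoted identifiers are also case-sensitive, so quoting is safe for lowercase names but will change behaviour for mixed-case ones.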
Mr-F
(101 rep)
Sep 7, 2024, 11:27 PM
• Last activity: Sep 11, 2024, 04:47 PM
1
votes
0
answers
58
views
Taking full and incremental backups with authentication in MongoDB
We have a multi-tenant micro-service (it's basically Kafka with some bells and whistles). We are using MongoDB to store multi-tenant credentials and Kafka data, so the authentication is really tight: tenant1 can read/write tenant1-db and its collections and has no other access, same with tenant2, and so on. If for some reason I want to access tenant data, I have to log in using credentials for that user on that database.
My question is, how do I create a full and incremental backup script for this? Since I can't read other databases, even as root, without authenticating, will I have to write a loop that logs in for every user, runs mongodump, and stores the result in a file? Is there a simpler way? I know there's a user called backup, but it can only take a backup of the admin/root user and not the other databases. Every time there's a new tenant I feel like I will have to change the backup script.
I am fairly new to mongodb so I would appreciate some help. Thank you!
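For what it's worth, MongoDB's built-in `backup` role (granted on the admin database) provides read access to all databases for backup purposes, so a single privileged user avoids the per-tenant loop, and `--oplog` gives a consistent point-in-time snapshot on a replica set. A dry-run sketch (the user name and paths are assumptions; drop the leading echo to execute for real):

```shell
# Dry-run sketch: echoes the full-backup command instead of executing it.
OUT_DIR="/backups/full-$(date +%F)"
echo mongodump --username backupadmin --password "$BACKUP_PW" \
  --authenticationDatabase admin --oplog --out "$OUT_DIR"
```

Incrementals are usually built on top of this by dumping the oplog window since the last full backup, rather than by re-dumping each tenant database.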
DarkDead
(11 rep)
Jun 10, 2024, 12:00 PM
0
votes
1
answers
1073
views
ORA-19504: failed to create file "bh2jk0h7_1_1"
I want to create a script to take an incremental backup of a remote Oracle database every week. I saved this command in *rman_command.sh* and made it executable:
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG FORMAT '/home/oracle/backup_oracle_process/backup_files/%U';
Then I run *script_backup.sh*, which has this content:
#!/bin/bash
export ORACLE_HOME=/home/oracle/oracle_database_client
export PATH=$ORACLE_HOME/bin:$PATH
rman target sys/pA55w0rd@//172.28.10.22:1521/DSLDB22 @rman_command.sh
exit;
I also ran these commands to give permission to the folder where the backup files are saved:
chmod +x backup_oracle_process/*
chown -R root backup_oracle_process/*
But when I run *bash script_backup.sh* I receive this error:
Starting backup at 20-FEB-24
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=273 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1464 RECID=1 STAMP=1161427056
input archived log thread=1 sequence=1465 RECID=2 STAMP=1161427247
input archived log thread=1 sequence=1466 RECID=3 STAMP=1161427335
input archived log thread=1 sequence=1467 RECID=4 STAMP=1161427356
input archived log thread=1 sequence=1468 RECID=5 STAMP=1161429542
channel ORA_DISK_1: starting piece 1 at 20-FEB-24
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 02/20/2024 11:17:55
ORA-19504: failed to create file "/home/oracle/backup_oracle_process/backup_files/bh2jk0h7_1_1"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1
Would you please guide me on how to solve the problem?
Any help is really appreciated.
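A likely explanation for ORA-19504/ORA-27040 with "No such file or directory": the FORMAT path is created by the database server process, so when rman connects to a remote instance (172.28.10.22 here), the directory must exist on that database host and be writable by the oracle software owner, not on the machine running the script (where the `chmod`/`chown -R root` commands were run; chown to root can actually take write access away from oracle). A sketch of the pre-flight step:

```shell
# Sketch: run this on the *database* host, not on the rman client machine,
# and make sure the directory is writable by the oracle user.
BACKUP_DIR="${BACKUP_DIR:-/home/oracle/backup_oracle_process/backup_files}"
mkdir -p "$BACKUP_DIR" 2>/dev/null || true
echo "ensure this exists and is writable by oracle on the DB host: $BACKUP_DIR"
```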
M_Gh
(101 rep)
Feb 20, 2024, 07:57 AM
• Last activity: Feb 20, 2024, 11:48 AM
0
votes
1
answers
475
views
Issues using a psql -v variable with jsonb_set
Pretty obscure situation and a couple hours of trying different things with no luck.
I have a bash script which restores a production database backup locally. Part of the script runs some SQL to set things in the database.
One of those items is setting the correct slack_token in a slack_details table. Here's how I run the bash script:
$ ./restore-prod-database.sh -t "xoxb-1234567-1234....."
The script has a line which triggers running a .sql file like this:
psql -h $DATABASE_HOST -d $DATABASE_NAME -a -v slack_token=$SLACK_TOKEN -f restore-populate-slack-details.sql
Finally, the SQL does something like this:
UPDATE slack_details
SET installation = jsonb_set(installation, '{bot,token}', ('"' || :slack_token ||'"')::jsonb)
WHERE id = 1;
No matter what I try, whether it's concatenating :slack_token with quotes, passing it in with quotes, or using single quotes and ", I get the following error:
> UPDATE slack_details SET installation = jsonb_set(installation,
> '{bot,token}', ('"' || :slack_token ||'"')::jsonb) WHERE id = 1;
> psql:restore-populate-slack-details.sql:61: ERROR: column "xoxb" does
> not exist LINE 2: ... = jsonb_set(installation, '{bot,token}', ('"' ||
> xoxb-12345...
A quick note: installation is a jsonb column, and the following code works fine; the moment I use the -v variable it doesn't work:
UPDATE slack_details
SET installation = jsonb_set(installation, '{bot,token}', ('"' || '123456' ||'"')::jsonb)
WHERE id = 1; --- works fine
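The error happens because :slack_token is interpolated as bare SQL text; psql's :'slack_token' form interpolates the variable as a properly quoted literal instead. A sketch of the corrected statement, built as a string here so the quoting is visible (to_jsonb(...::text) stands in for the manual '"' concatenation):

```shell
# Sketch: :'slack_token' is psql's "interpolate as quoted literal" syntax.
SQL=$(cat <<'EOF'
UPDATE slack_details
SET installation = jsonb_set(installation, '{bot,token}', to_jsonb(:'slack_token'::text))
WHERE id = 1;
EOF
)
echo "$SQL"
# With this in restore-populate-slack-details.sql, the question's psql line
# (-v slack_token=$SLACK_TOKEN -f restore-populate-slack-details.sql) works.
```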
Luke Belbina
(103 rep)
Aug 15, 2023, 12:31 AM
• Last activity: Aug 16, 2023, 02:32 PM
6
votes
5
answers
36203
views
What does "psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)" actually mean?
I have a question regarding following output:
[nikola@192 ~] $ sudo su - postgres
[postgres@192 ~] $ psql dvdrental postgres
psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
psql (11.7, server 11.8)
Type "help" for help.
dvdrental=#
What do the following two messages stand for, and why?
1) psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
2) psql (11.7, server 11.8)
BTW, the psql --version command gives the following output:
psql (PostgreSQL) 11.8
(I **had to install** PostgreSQL server version 11.7 in order to run commands such as \dt and \l)
...and also this output:
> postgres=# select * FROM version();
> version
> --------------------------------------------------------------------------------------------------------
> PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2), 64-bit
> (1 row)
P.S. I'm running PostgreSQL 11.8 on Fedora 30.
**UPDATE1:**
As requested by @LaurenzAlbe output for rpm -qa|grep postgres
command is:
[nikola@192 ~] $ rpm -qa|grep postgres
postgresql-11.7-2.fc30.x86_64
postgresql11-11.8-1PGDG.f30.x86_64
postgresql11-libs-11.8-1PGDG.f30.x86_64
postgresql11-server-11.8-1PGDG.f30.x86_64
postgresql-server-11.7-2.fc30.x86_64
**UPDATE2**:
I ran the dnf remove postgresql postgresql-server command. The databases are still there, I can access them via **pgAdmin4**, and the **psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)** message is no longer showing up. But there is now a new issue: when I open the **postgres shell** in a terminal using sudo su - postgres and then run psql some-database-name postgres I get the message
> bash: psql: command not found...
> Install package 'postgresql' to provide command 'psql'? [N/y]

I presume this is asking me to install AGAIN the **Fedora package** which I am trying to get rid of in the first place (so as not to show the psql: */usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)* message). Any suggestion on how to solve this without installing the Fedora package, but using the one from the **PostgreSQL Global Development Group**? **P.S.** BTW, I installed the Postgres server by following [this](https://computingforgeeks.com/how-to-install-postgresql-on-fedora/) link, which uses PGDG packages (but something didn't work right on my machine and I guess Fedora 30 offered to install the Fedora packages, which is obviously what I did :) ).
NikolaS
(272 rep)
Jun 19, 2020, 09:32 AM
• Last activity: Jun 27, 2023, 04:34 AM
0
votes
0
answers
744
views
Escape single quotes when passing command to psql -c
I need to run the following command in a Bash shell:
sudo -u postgres /usr/pgsql-15/bin/psql -c "select count(*) from web_site where web_page like '%something%' " -d 'some_database'
The output of this command is empty, although if I paste the same query (not enclosed in double quotes) into the psql CLI, it works. So I concluded that the issue is the single quotes, because if I run the following (no single quotes in the statement):
sudo -u postgres /usr/pgsql-15/bin/psql -c "select count(*) from web_site " -d 'some_database'
It works flawlessly.
I tried the following:
sudo -u postgres /usr/pgsql-15/bin/psql -c "select count(*) from web_site where web_page like \'%something%\' " -d 'some_database'
sudo -u postgres /usr/pgsql-15/bin/psql -c 'select count(*) from web_site where web_page like \'%something%\' ' -d 'some_database'
sudo -u postgres /usr/pgsql-15/bin/psql -c "select count(*) from web_site where web_page like \"%something%\" " -d 'some_database'
sudo -u postgres /usr/pgsql-15/bin/psql -c 'select count(*) from web_site where web_page like \"%something%\" ' -d 'some_database'
sudo -u postgres /usr/pgsql-15/bin/psql -c 'select count(*) from web_site where web_page like "%something%" ' -d 'some_database'
None of these work. So how do I escape the single quotes?
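Oddly enough, inside a double-quoted shell string single quotes need no escaping at all, so the first command is already syntactically fine; the backslash variants actually send \' to the server, which is invalid SQL. One way to see exactly what the shell passes is to build the statement in a variable first:

```shell
# Sketch: single quotes survive untouched inside double quotes; echo shows
# the exact string psql would receive as its -c argument.
SQL="select count(*) from web_site where web_page like '%something%'"
echo "$SQL"
# sudo -u postgres /usr/pgsql-15/bin/psql -d some_database -c "$SQL"
```

If the output is still empty with correct quoting, it is worth checking psql's exit status and stderr rather than the quoting itself.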
enrico_steez
(11 rep)
May 25, 2023, 07:41 AM
• Last activity: May 25, 2023, 07:48 AM
0
votes
1
answers
420
views
Insert file content (with white line and special characters ) in bash variable for insert in oracle table with sqlplus
I wrote this simple test.ksh script:
#!/bin/ksh
TEST=$(cat $1 )
sqlplus -S *****/*****@*****<<- EOF
set define off;
SET SQLBLANKLINES ON;
INSERT INTO DBADOC.TEST
(
"TEST"
)
VALUES
(
q'[ $TEST ]'
)
;
commit;
EOF
exit
************************************************
Now if I run it with a simple file (3 lines, no special characters):
./test.ksh /home/oracle/scripts/test.txt
1 row created.
Commit complete.
**It runs perfectly.**
If I choose a complex file (with blank lines and special characters), it doesn't work:
./test.ksh /home/oracle/scripts/check_free_space_dg.ksh
SP2-0734: unknown command beginning "# ========..." - rest of line ignored.
SP2-0734: unknown command beginning "# @(#) NOM..." - rest of line ignored.
SP2-0734: unknown command beginning "# @(#) GRU..." - rest of line ignored.
SP2-0734: unknown command beginning "# @(#) DES..." - rest of line ignored.
SP2-0044: For a list of known commands enter HELP
and to leave enter EXIT.
[...]
Michele M
(11 rep)
Dec 17, 2020, 01:35 PM
• Last activity: Mar 30, 2023, 01:04 PM
2
votes
1
answers
10706
views
Run PSQL command from bash script
I would like to know how to run
\du
within a script, and output that to a file.
Something like the following:
# some_script.sh
psql & \du > output_of_du.txt
I can find information about running .sql
files via psql, but I can't find information about running psql commands themselves.
I found the following whilst searching:
* https://stackoverflow.com/questions/35347368/unable-to-run-psql-command-from-within-a-bash-script - about running SQL not a PSQL command
* https://stackoverflow.com/questions/54619776/cannot-get-psql-command-to-run-in-bash-script - seems to be about running a .sql
script
But they're not what I'm after.
baxx
(326 rep)
Nov 21, 2022, 12:01 PM
• Last activity: Nov 21, 2022, 01:14 PM
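For the record, psql's -c option accepts backslash meta-commands as well as SQL, so \du can be scripted directly; quoting keeps the shell from touching the backslash. Shown as a dry run here:

```shell
# Sketch: a meta-command passed via -c; single quotes protect the backslash.
CMD='\du'
echo "would run: psql -c '$CMD' > output_of_du.txt"
# psql -c "$CMD" > output_of_du.txt
```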
1
votes
1
answers
1606
views
How to return values from psql to bash with custom session settings
I need to set the datestyle to "dd/mm/yyyy" before calling a procedure and getting its return value back to bash.
I tried the following:
read return <<< $(psql \
-x \
-U postgres \
-d MY_DB \
--quiet \
-t \
-c "SELECT CURRENT_TIMESTAMP")  # just to make sure the date format is correct
echo ${return} | cut -d'|' -f 2
I'm getting back the default date format, yyyy-mm-dd.
Majd
(121 rep)
Nov 6, 2021, 11:01 PM
• Last activity: Apr 8, 2022, 12:19 AM
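Two session-level ways to get dd/mm/yyyy before the query runs (a dry-run sketch; MY_DB and the query come from the question): pass the setting through PGOPTIONS, or rely on psql 9.6+ executing multiple -c options in the same session, so a `SET datestyle` in the first -c applies to the second:

```shell
# Dry-run sketch of both options; remove the echoes to execute for real.
PGOPTS="-c datestyle=SQL,DMY"
echo "PGOPTIONS=\"$PGOPTS\" psql -U postgres -d MY_DB -tA -c \"SELECT CURRENT_TIMESTAMP\""
echo "psql -U postgres -d MY_DB -tA -c \"SET datestyle = SQL,DMY\" -c \"SELECT CURRENT_TIMESTAMP\""
```

With DateStyle SQL,DMY, timestamps come back as dd/mm/yyyy rather than the ISO yyyy-mm-dd default.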
0
votes
1
answers
2430
views
I get the error "(psycopg2.OperationalError) FATAL: role "wsb" does not exist", but the user does exist
I am trying to connect to my Postgres database using SQLAlchemy. It was working fine yesterday (01/27/22). Now I get the following error:
> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: role "wsb" does not exist
But the user does exist:
wsb=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
wsb | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
wsb2 | | {}
As you can see I tried making another user "wsb2", but that didn't work either.
Below is my script:
db_string = "postgresql://wsb:wsb@localhost:5432/wsb"
db = create_engine(db_string)
meta = MetaData()
submissions_table = Table('wsb_submissions', meta,
Column('index', INTEGER),
Column('submission_id', TEXT),
Column('title', TEXT),
Column('created', TIMESTAMP),
Column('author', TEXT),
Column('ticker_mentioned', TEXT),
Column('subreddit', TEXT),
Column('emotes', ARRAY(TEXT)),
Column('rocket_emotes', INTEGER)
)
comments_table = Table('wsb_comments', meta,
Column('index', INTEGER),
Column('comment_id', INTEGER),
Column('body', TEXT),
Column('created', TIMESTAMP),
Column('author', TEXT),
Column('rocket_emotes', INTEGER),
Column('subreddit', TEXT),
Column('emotes', ARRAY(TEXT)),
Column('submission_key', TEXT)
)
db.connect()
also here is my Dockerfile:
FROM postgres:13.2
ENV POSTGRES_PASSWORD=wsb
ENV POSTGRES_USER=wsb
ENV POSTGRES_DB=wsb
COPY wsb.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
W. Hunter Giles
(1 rep)
Jan 28, 2022, 04:50 PM
• Last activity: Jan 30, 2022, 09:19 PM
2
votes
2
answers
2518
views
Could not find a "psql" to execute
I have a bash script that runs on a Linux (AWS CentOS) server. There's an odd but annoying warning message that pops up on stdout whenever the script executes psql: **could not find a "psql" to execute**.
What is odd is that the PostgreSQL command line psql runs just fine! Yet it still generates this error message.
For further context, the bash script is executed from a shell_exec in a php page (fastcgi/nginx). If you run the bash script from the command line, you don't see the warning. But again, even with the warning, psql does actually run properly.
Does anyone know why this warning appears, and how to eliminate it?
apt605
(83 rep)
Jul 20, 2020, 03:42 PM
• Last activity: Dec 10, 2021, 03:43 PM
1
votes
1
answers
8368
views
Postgres: how do I return a message if there are no results for a SELECT query?
I have been trying to work out how to return a message if the result is empty.
I am running a select statement as below:
select * from maxIDValue where max_value > 1000000 order by max_value desc;
I am running this in a bash script via psql, and the results are sent out in an email, which I have working.
What I want to know is: if the query returns nothing, could I have a message printed instead of (0 rows), even if it is just a message saying there are no values?
I have tried via bash also:
Note: the .sql file is the above query
{
if [ "$( psql -h $SERVERNAME -U test -d Titan -tAf "/home/postgres/tools/testmaxresults.sql" )" = '1' ]
then
psql -h ${SERVERNAME} -U test -d Titan -f /home/postgres/tools/testmaxresults.sql >>${TMPFILE01}
else
echo "No Value Exists" >>${TMPFILE01}
fi
}
To sum up:
I want the bash script to run the query and, if there are results, report them (the reporting part, email etc., works).
But if the select returns 0 results, print/echo "No Values".
I also want to run this query against multiple databases. How would I enclose this in a bash script so I can run it against each database?
If I run this it only reports the last database, as opposed to both.
Any help is appreciated.
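One way to structure this: fetch a bare count first with -tA, branch on it, and wrap the whole thing in a loop over database names. A sketch with the psql call stubbed out so the control flow is visible (testmaxcount.sql would be a hypothetical count-only variant of the query):

```shell
# Sketch of the branch; the real count would come from
#   count=$(psql -h "$SERVERNAME" -U test -d "$db" -tA -f testmaxcount.sql)
# but it is stubbed with 0 here.
TMPFILE01="${TMPFILE01:-$(mktemp)}"
for db in Titan OtherDb; do      # database names are placeholders
    count=0                      # stub standing in for the psql call
    if [ "$count" -gt 0 ]; then
        echo "$db: $count rows over threshold" >> "$TMPFILE01"
    else
        echo "$db: No Value Exists" >> "$TMPFILE01"
    fi
done
cat "$TMPFILE01"
```

Looping over the database names this way is also what makes the report cover every database rather than only the last one.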
rdbmsNoob
(459 rep)
Dec 3, 2021, 05:39 PM
• Last activity: Dec 3, 2021, 08:30 PM
2
votes
2
answers
2091
views
pg_dump hangs in Git Bash but works in Windows command line
I tried to make a backup of a database from a Git Bash environment, but the executable does not output anything.
If I specify the filename using the
-f
flag, a blank file gets created, so the executable obviously runs, but it does not do anything - there is no output or activity. Using the -v
flag also does not make it produce any output.
If I run pg_dump --help
, I do get output though.
When I do the same from a Windows command line, I am asked for a password and the dump proceeds fine.
What is different about the Git Bash environment that pg_dump
does not work there? Is there any workaround? I prefer to stay out of the Windows command line if at all possible.
JohnEye
(193 rep)
Nov 3, 2021, 08:40 PM
• Last activity: Nov 4, 2021, 10:57 AM
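A likely explanation: Git Bash runs under mintty, which is not a Windows console, so pg_dump's hidden password prompt never gets a terminal and the process just waits. Two common workarounds, shown as a dry run (connection details are placeholders):

```shell
# Dry-run sketch: winpty gives pg_dump a real console for its prompt;
# PGPASSWORD (or a .pgpass file) avoids the prompt entirely.
CMD1='winpty pg_dump -h localhost -U postgres -f backup.sql mydb'
CMD2='PGPASSWORD=secret pg_dump -h localhost -U postgres -f backup.sql mydb'
printf '%s\n' "$CMD1" "$CMD2"
```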
0
votes
2
answers
569
views
Connecting to multiple Oracle databases hosted on different machines - Username as common and different password for each database
Environment:
We have ~100 databases hosted on multiple machines. All databases have common username but password is different on every database.
Issue:
How can I read the password and TNS (service name) from an external file, one by one, connect to each database, run the SQL query, and send the output to a logfile?
What is done so far:
1. Created a pass file that contains the password and service name, delimited by ":":
cat pass
Loufr#123:PROD
Brinj#6523:TEST
2. Another file, dblist, was created with all the dbnames (matching the service names in step 1):
cat dblist
PROD
TEST
DEV
QA
Quality
-----------------------------------------
Goal:
Read the name of the database from dblist and find the password for that database in the pass file.
Use the dbname and password to connect to the database in the following string:
sqlplus -s dbsnmp/${Password}@${TNS}
spool output.log
@query.sql
exit
Gaurav Gupta
(11 rep)
Sep 11, 2021, 05:20 PM
• Last activity: Sep 16, 2021, 10:22 AM
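A sketch of the lookup loop, with the two files from the question recreated so it is self-contained; the sqlplus call is left commented since it needs a real client, and note the connect string order is user/password@tns:

```shell
# Recreate the question's files for the sketch.
cat > pass <<'EOF'
Loufr#123:PROD
Brinj#6523:TEST
EOF
printf 'PROD\nTEST\nDEV\n' > dblist

while read -r TNS; do
    # field 2 of pass is the service name, field 1 the password
    Password=$(awk -F: -v db="$TNS" '$2 == db {print $1; exit}' pass)
    if [ -n "$Password" ]; then
        echo "connecting: dbsnmp@${TNS}"
        # sqlplus -s "dbsnmp/${Password}@${TNS}" @query.sql >> output.log
    else
        echo "no password entry for $TNS" >&2
    fi
done < dblist
```

The awk lookup means databases missing from pass (like DEV above) are skipped with a warning instead of failing the whole run.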
3
votes
1
answers
1217
views
execute mysqlshell from within a bash script? is it possible?
I am writing a backup script for a database and want to use MySQL Shell's util.dumpSchemas() utility.
But how on earth can I make it work from within a shell script?
The basic process I have is this:
filename_1: backup.sh
. . .
mysqlsh --file mysqlshell_backup.js
filename_2: mysqlshell_backup.js
\connect backup_user:@127.0.0.1:3306
util.dumpSchemas()
\q
But when I run it all I get are errors:
SyntaxError: Invalid or unexpected token at mysqlshell_backup.js:2:0
in \connect backup_user:@127.0.0.1:3306
but if I run the code manually, e.g:
mysqlsh
MySQL JS > \connect backup_user:@127.0.0.1:3306
it works fine.
MySQL 127.0.0.1:3306 ssl JS >
I tried making the file an executable in its own right and running it on its own by adding:
#!/usr/local/mysql-shell/bin/mysqlsh --file
\connect backup_user:@127.0.0.1:3306
util.dumpSchemas()
\q
but then I get:
mysqlshell_backup.js: line 2: connect: command not found
mysqlshell_backup.js: line 3: syntax error near unexpected token `[],'
mysqlshell_backup.js: line 3: `util.dumpSchemas()
But again, if I run it directly from within mysqlshell the code works fine, so I know there are no syntax errors.
Is there actually a way to run a mysqlsh command as part of a bigger script, or am I just wasting my time here?
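The parse error happens because --file executes the file as pure JavaScript, where \connect (an interactive shell command, not JS) is invalid; the shebang attempt then fed the file to bash instead, hence "connect: command not found". In JS mode the equivalents are shell.connect() and util.dumpSchemas(schemas, outputUrl) with explicit arguments. A dry-run sketch of the one-liner form (the URI, schema name, and output path are assumptions):

```shell
# Dry-run sketch: echoes the mysqlsh invocation instead of running it.
JS="shell.connect('backup_user@127.0.0.1:3306'); util.dumpSchemas(['mydb'], '/backups/mydb')"
echo mysqlsh --js -e "$JS"
```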
IGGt
(2276 rep)
Jul 15, 2021, 03:34 PM
• Last activity: Aug 26, 2021, 06:34 AM
0
votes
0
answers
1160
views
Dump all data to json files
I'm working on a script that has to dump app data from a schema to json files. Here is what I have thus far:
export SCHEMA="citydb"
export DB="kv_db"
export PGPASSWORD=root
psql -U postgres -Atc "select tablename from pg_catalog.pg_tables where schemaname='$SCHEMA'" $DB |\
while read TBL; do
name=${TBL::-1} # for some reason a whitespace character is appended to the end of each name
echo -e "Table name: $name";
touch $name".json"
psql -U postgres \o $name".json"
psql -U postgres -c "COPY (SELECT row_to_json(t) FROM $SCHEMA.$name as t) TO 'c:/users/me/desktop/$name.json'" $DB > $name.json
done
I'm pretty much building on information about exporting to CSVs, since I didn't find anything about dumping all the data to JSON.
I'm getting an error that says:
psql: warning: extra command-line argument "plant_cover.json" ignored
psql: error: FATAL: database "o" does not exist
ERROR: could not open file "c:/users/me/desktop/plant_cover.json" for writing: Permission denied
HINT: COPY TO instructs the PostgreSQL server process to write a file. You may want a client-side facility such as psql's \copy.
Any ideas?
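A hypothetical corrected sketch: the stray trailing character is typically a carriage return (hence tr -d '\r'), \o is a psql meta-command rather than a command-line flag (which is why psql saw "o" as a database name), and \copy writes through the client, so no server-side file permission is needed:

```shell
# Sketch: client-side \copy avoids the server's "Permission denied",
# and tr strips the \r that produced the trailing-character problem.
export SCHEMA="citydb" DB="kv_db" PGPASSWORD=root
psql -U postgres -Atc "select tablename from pg_catalog.pg_tables where schemaname='$SCHEMA'" "$DB" 2>/dev/null |
tr -d '\r' |
while read -r name; do
    echo "Table name: $name"
    psql -U postgres -d "$DB" \
      -c "\copy (SELECT row_to_json(t) FROM $SCHEMA.\"$name\" t) TO '$name.json'"
done
```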
tdranv
(109 rep)
Aug 5, 2021, 10:17 AM