
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
1 answer
3122 views
Upgrading the Berkeley DB rpm on CentOS
I have software that needs Berkeley DB 4.5 or above, but on my CentOS 5.11 x86_64 server I have:

* Package db4-4.3.29-10.el5_5.2.x86_64 already installed and latest version
* Package db4-4.3.29-10.el5_5.2.i386 already installed and latest version

How can I upgrade these RPMs to a newer version? I tried to upgrade using the CentOS 6.6 RPM this way:

rpm -Uvh ftp://195.220.108.108/linux/centos/6.6/os/x86_64/Packages/db4-4.7.25-18.el6_4.i686.rpm

but I receive this error:

Retrieving ftp://195.220.108.108/linux/centos/6.6/os/x86_64/Packages/db4-4.7.25-18.el6_4.i686.rpm
warning: /var/tmp/rpm-xfer.IKWqHE: Header V3 RSA/SHA1 signature: NOKEY, key ID c105b9de
error: Failed dependencies:
    rpmlib(FileDigests) = 4.4.0 conflicts with pam-0.99.6.2-12.el5.i386
    db4 >= 4.4.0 conflicts with pam-0.99.6.2-12.el5.x86_64
    libdb-4.3.so is needed by (installed) subversion-1.6.11-12.el5_10.i386
    libdb-4.3.so is needed by (installed) pam_ccreds-3-5.i386
    libdb-4.3.so is needed by (installed) apr-util-1.2.7-11.el5_5.2.i386
    libdb-4.3.so is needed by (installed) db4-devel-4.3.29-10.el5_5.2.i386
    libdb_cxx-4.3.so is needed by (installed) db4-devel-4.3.29-10.el5_5.2.i386

I also tried to compile db-4.5.20.tar.gz from source. It compiled with no problem; however, my software still sees the Berkeley DB preinstalled from the db4-4.3.29 RPM package. Any help, please?
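One way to make a locally built library visible without touching the distro RPMs is to install the source build under its own prefix and point the application at it at run time; a minimal sketch, assuming the db-4.5.20 source tree from the question and a hypothetical application binary myapp:

# Build Berkeley DB 4.5 into its own prefix so the system db4 RPM stays untouched
cd db-4.5.20/build_unix
../dist/configure --prefix=/opt/db-4.5
make && sudo make install

# Point the application at the new library at run time
export LD_LIBRARY_PATH=/opt/db-4.5/lib:$LD_LIBRARY_PATH
ldd /path/to/myapp | grep libdb   # should now resolve to /opt/db-4.5/lib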
gr68 (334 rep)
Jan 28, 2015, 04:01 PM • Last activity: Jul 24, 2025, 04:00 PM
0 votes
1 answer
42 views
ldd /usr/bin/pg_restore gives error not a dynamic executable
I am trying to determine the libraries needed for `pg_restore` in the official `postgres:15` Docker image, but when I run `ldd /usr/bin/pg_restore` inside the container it returns this error: `not a dynamic executable`. When I install the same postgres client version 15.12 directly in my VM, `ldd` works as expected. What could be wrong that `ldd` is not working properly in a Docker container?
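`ldd` only works on dynamically linked ELF binaries, so the first thing worth checking is what kind of file the image actually ships at that path; on Debian-based images, `/usr/bin/pg_restore` may even be a wrapper script rather than an ELF binary. A quick sketch:

# What is the file, exactly? A script or a statically linked binary makes ldd fail
file /usr/bin/pg_restore

# For an ELF binary, list its shared-library dependencies directly
readelf -d /usr/bin/pg_restore | grep NEEDED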
Ricardo Silva (103 rep)
May 6, 2025, 09:52 AM • Last activity: May 6, 2025, 10:04 AM
0 votes
1 answer
2109 views
How to check if a database exists in Teradata when firing a sqoop import command in a shell script?
I'm trying to fire a sqoop import command from a shell script, and it works as expected. But if the database is missing in Teradata, the script should raise an error and not go on to process the further commands in the script. Since the syntax is correct, the sqoop import command returns "0", so the script assumes the sqoop import was successful. How can I handle this type of error in the shell script when the database is missing in Teradata?
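One pattern (a sketch only; the JDBC URL and the dbc.DatabasesV query are assumptions, not something tested here) is to probe for the database with sqoop eval before importing, and abort on a miss:

#!/bin/bash
DB_NAME="mydb"   # hypothetical database name

if sqoop eval \
      --connect "jdbc:teradata://td-host/DATABASE=dbc" \
      --username "$TD_USER" --password "$TD_PASS" \
      --query "SELECT DatabaseName FROM dbc.DatabasesV WHERE DatabaseName = '$DB_NAME'" \
      | grep -q "$DB_NAME"; then
    sqoop import ...   # the real import runs only if the database was found
else
    echo "database $DB_NAME not found in Teradata, aborting" >&2
    exit 1
fi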
Stack 777
Jun 28, 2017, 09:41 AM • Last activity: May 5, 2025, 12:05 AM
0 votes
2 answers
2681 views
How can Nextcloud data be backed up and restored independently? (Versioning / Snapshots)
Whenever multiple users are interacting with a Nextcloud installation, there is a possibility of error. A family member may delete an old picture, or a co-worker might accidentally tick off a task or calendar event, resulting in issues for other users. When full filesystem snapshots or backups of the whole Nextcloud directory are available, they can be used to restore an old state of the entire server. This is fine in case of a complete system crash. However, it becomes an issue if the problem is only noticed after a while and in the meantime users have modified other data. Then a choice must be made between rescuing the old data and destroying all progress, or keeping the current state. The Nextcloud documentation only describes a way to restore the entire installation. Is there a way to back up all Nextcloud data (files, messages, calendars, tasks, etc.) more intelligently and automatically, so that it can be restored independently? (Maybe even in an online state?)
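Short of app-level versioning, a common pattern is to back up the database and each user's files separately so a single user can be restored in isolation; a rough sketch, assuming a MySQL backend and default paths (/var/www/nextcloud and the user alice are placeholders):

# Maintenance mode keeps files and database consistent during the backup
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
mysqldump nextcloud > nextcloud-db.sql
rsync -a /var/www/nextcloud/data/alice/files/ /backup/alice/   # per-user copy
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off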
Prototype700 (73 rep)
Nov 2, 2021, 10:14 PM • Last activity: Apr 6, 2025, 04:01 PM
0 votes
1 answer
90 views
System call to remove content from a file or append it in the middle?
One issue that I ran into when making a custom database, without creating an entire block-chain based filesystem from scratch, is deletion and insertion from/to the *middle*. It's easy to make a binary file format that one can continually append to, and truncate from the end or from the beginning. But the issues happen when trying to edit a variable-length entry in the middle, remove an entry from the middle, or insert a new entry into the middle. One generally has to rewrite at least the second half of the file to adjust for the new middle. Do any Linux-based filesystems offer some kind of system call that allows one to simply "insert" data in the middle of a large file, without loading the second half into RAM in order to rewrite it (and the same goes for removing content from the middle)? If this exists, it would be significantly easier to manage "array" or even hash-based mini databases on disk without creating elaborate linked lists etc. If there's no native API, is it possible to manually modify the inodes of the filesystem to achieve minimal rewriting, possibly with only one or two 4 KiB blocks loaded into RAM for each operation, regardless of file size?
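For what it's worth, recent Linux kernels expose exactly this on some filesystems (ext4 and XFS): fallocate(2) with FALLOC_FL_INSERT_RANGE and FALLOC_FL_COLLAPSE_RANGE, with the restriction that offset and length must be multiples of the filesystem block size. The util-linux fallocate(1) tool wraps both:

# Insert a 4 KiB gap at offset 8192, shifting the rest of the file right
fallocate --insert-range --offset 8192 --length 4096 datafile.db

# Remove 4 KiB at offset 8192, shifting the rest of the file left
fallocate --collapse-range --offset 8192 --length 4096 datafile.db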
B''H Bi'ezras -- Boruch Hashem (121 rep)
Mar 18, 2025, 05:54 PM • Last activity: Mar 18, 2025, 09:55 PM
28 votes
3 answers
12232 views
What kind of database do `updatedb` and `locate` use?
The `locate` program of `findutils` scans one or more databases of filenames and displays any matches. This can be used as a very fast `find` command if the file was present during the last file name database update. There are many kinds of databases nowadays:

- relational databases (with a query language, e.g. SQL)
- NoSQL databases
  - document-oriented databases (e.g. MongoDB)
  - key-value databases (e.g. Redis)
  - column-oriented databases (e.g. Cassandra)
  - graph databases

So what kind of database does `updatedb` update and `locate` use?
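For context, neither implementation uses any of the above: both GNU locate and mlocate keep a purpose-built flat-file index of pathnames. You can inspect the file directly (paths vary by distribution and implementation):

file /var/lib/mlocate/mlocate.db        # mlocate's database
file /var/cache/locate/locatedb         # GNU findutils' database (Debian path)
locate -S                               # print database statistics, where supported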
Tim (106422 rep)
Jul 20, 2017, 12:05 PM • Last activity: Mar 16, 2025, 04:24 AM
0 votes
1 answer
667 views
system ID mismatch, node belongs to a different cluster
I have an etcd cluster, and now I am trying to initialize Patroni (on server A and server B), but whenever I try to do so I get the error about a cluster mismatch. I tried removing 'initialize' from etcd, and I tried completely clearing the saved etcd configuration, but neither worked. If I start Patroni on server A, it will successfully initialize, but if I then run it on server B, it fails with that error. Whichever node initializes first wins, I could say.
___
Patroni config on server A (B has the same but with the addresses swapped):
scope: postgres-cluster
name: vb-psql2
namespace: /service/

restapi:
  listen: 192.168.8.141:8008
  connect_address: 192.168.8.141:8008
  authentication:
    username: patroni
    password: '*'

etcd:
  hosts: 192.168.8.141:2379,192.168.8.164:2379

bootstrap:
  method: initdb
  dcs:
    ttl: 60
    loop_wait: 10
    retry_timeout: 27
    maximum_lag_on_failover: 2048576
    master_start_timeout: 300
    synchronous_mode: true
    synchronous_mode_strict: false
    synchronous_node_count: 1
    # standby_cluster:
      # host: 127.0.0.1
      # port: 1111
      # primary_slot_name: patroni
    postgresql:
      use_pg_rewind: false
      use_slots: true
      parameters:
        max_connections: 800
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        hot_standby_feedback: True
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_statement: mod
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%m [%p] %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql

  initdb:  # List options to be passed on to initdb
    - encoding: UTF8
    - data-checksums

  pg_hba:
    - host all all 0.0.0.0/0 md5
    - host replication replicator 127.0.0.1/32 md5
    - host replication replicator 10.0.2.0/24 md5

postgresql:
  listen: 192.168.8.141,127.0.0.1:5432
  connect_address: 192.168.8.141:5432
  use_unix_socket: true
  data_dir: /var/lib/postgresql/11/main
  bin_dir: /usr/lib/postgresql/11/bin
  config_dir: /etc/postgresql/11/main
  pgpass: /var/lib/postgresql/.pgpass_patroni
  authentication:
    replication:
      username: replicator
      password: ****
    superuser:
      username: postgres
      password: ****
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp

  remove_data_directory_on_rewind_failure: false
  remove_data_directory_on_diverged_timelines: false

#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:

  create_replica_methods:
    - basebackup
  basebackup:
    max-rate: '100M'
    checkpoint: 'fast'

watchdog:
  mode: off  # Allowed values: off, automatic, required
  device: /dev/watchdog
  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false

  # specify a node to replicate from (cascading replication)
#  replicatefrom: (node name)
The etcd configuration:
ETCD_NAME="etcd2"
ETCD_LISTEN_CLIENT_URLS="http://192.168.8.141:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.8.141:2379"
ETCD_LISTEN_PEER_URLS="http://192.168.8.141:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.8.141:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.8.164:2380,etcd2=http://192.168.8.141:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_ELECTION_TIMEOUT="10000"
ETCD_HEARTBEAT_INTERVAL="2000"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
ETCD_ENABLE_V2="true"
How can I fix this? I have faced this issue several times before and could not fix it.
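The "belongs to a different cluster" check compares the PostgreSQL system identifier stored in the DCS with the one in the local data directory, so it usually means the second node bootstrapped a fresh instance instead of replicating. A typical reset sketch (destructive: it wipes the cluster key in etcd and server B's data directory, so back up first; paths are taken from the config above, the config file location is assumed):

# On server B: stop Patroni and discard the divergent data directory
systemctl stop patroni
rm -rf /var/lib/postgresql/11/main

# Remove the stale cluster state from etcd (prompts for confirmation)
patronictl -c /etc/patroni.yml remove postgres-cluster

# Start server A first so it initializes, then start B to bootstrap as a replica
systemctl start patroni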
Slen (1 rep)
Nov 22, 2024, 03:13 PM • Last activity: Nov 25, 2024, 08:30 AM
0 votes
2 answers
93 views
How to retrieve notes content without KNotes?
Now that KNotes from KDE has been killed, buried, and wiped from rolling-release distro repositories, and also from my computer when I tried to reinstall it while unaware of the situation, how can I get my notes data back?
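KNotes data normally survives removal of the application itself, so a generic search of the usual KDE and Akonadi locations is a reasonable first step; the paths below are guesses that vary by KDE version and whether Akonadi was in use:

# Search likely locations for leftover note content
grep -ril "some words you remember from a note" \
    ~/.local/share ~/.kde ~/.kde4 2>/dev/null

# Akonadi-backed notes often live under a maildir-like tree
find ~/.local/share/akonadi* ~/.local/share/notes -type f 2>/dev/null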
Bogey Jammer (199 rep)
Sep 16, 2024, 07:21 PM • Last activity: Nov 4, 2024, 11:54 AM
0 votes
1 answer
60 views
Bash script: inserting data into MariaDB from an Oracle database
This bash script extracts data from an Oracle DB (another server), processes it, and inserts it into MariaDB (my server), but the insertion is misaligned and leaves some columns blank.
* This is the code:
    #!/bin/bash
    
    ORACLE_USER="user"
    ORACLE_PASSWORD="password"
    ORACLE_DB="IP/SID"
    
    MYSQL_USER="user"
    MYSQL_PASSWORD="password"
    MYSQL_DB="DB"
    
    echo "Fetching data from Oracle..."
    ORACLE_DATA=$(sqlplus -s "$ORACLE_USER/$ORACLE_PASSWORD@$ORACLE_DB" > SALIDA.TXT
    
    echo "Data insertion completed."
* And the results are like this:
    Region:
    Central:
    Nombre_Banco:
    Modelo:
    Bateria:
    Tecnologia:
    Amperaje_CA:
    Medido:
    Porcentaje:
    Voltaje:
    Voltaje_AC:
    Creado:

    Region: REGION 7
    Central: LERFIC
    Nombre_Banco: 7LERFICCT1B6
    Modelo: BATERIA GNB ABSOLYTE, 100A-19 896AH
    Bateria:
    Tecnologia:
    Amperaje_CA:
    Medido:
    Porcentaje:
    Voltaje:
    Voltaje_AC:
    Creado:

    Region: J10B
    Central: TYPHOON(TY1)
    Nombre_Banco: 0
    Modelo: 2016-12-30
    Bateria: 54.45
    Tecnologia: 2.25
    Amperaje_CA: 0
    Medido: 2017-01-10
    Porcentaje:
    Voltaje:
    Voltaje_AC:
    Creado:
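Misalignment like this usually comes from parsing sqlplus's fixed-width default output. A sketch of a more robust pipeline using an explicit column separator (the query, table, and column names are placeholders standing in for the parts elided above):

#!/bin/bash
# Export from Oracle with an unambiguous delimiter instead of fixed-width columns
sqlplus -s "$ORACLE_USER/$ORACLE_PASSWORD@$ORACLE_DB" <<'EOF' > SALIDA.TXT
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SET COLSEP '|'
SELECT region, central, nombre_banco FROM bancos;
EXIT
EOF

# Load into MariaDB using the same delimiter
mysql --local-infile=1 -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DB" -e "
    LOAD DATA LOCAL INFILE 'SALIDA.TXT'
    INTO TABLE bancos
    FIELDS TERMINATED BY '|'"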
Gatito (3 rep)
Oct 29, 2024, 05:38 PM • Last activity: Oct 30, 2024, 09:56 AM
5 votes
1 answer
306 views
How do I go about identifying / using this (exotic?) .db file?
I own a Synology DS220J which I use to host a few different NFS shares / "Shared folders" (as DSM calls them), and I was snooping on some files in order to find out where the thing is storing information about them. I found an interesting file that contains my "Shared folders" names in /usr/syno/etc/synoshare.db:
% strings synoshare.db 
BACKUP.OLD
HOMES
BACKUP
[REDACTED STRING]
LAVORO
[REDACTED STRING]
WEB_PACKAGES
STORAGE
NETBACKUP
[REDACTED STRING]
TFTP
I copied this file over from the NAS to my local machine, and the first thing I tried to do was to interact with it as if it was an SQLite database:
% sqlite3 synoshare.db 
SQLite version 3.44.0 2023-11-01 11:23:50
Enter ".help" for usage hints.
sqlite> .databases
main: /home/dev/playground/temp/synoshare.db r/w
sqlite> .tables
Error: file is not a database
Then I resorted to `file -k`:
% file -k synoshare.db
synoshare.db: Berkeley DB 1.85/1.86 (Btree, version 3, native byte-order)\012- Berkeley DB 1.85/1.86 (Btree, version 3, little-endian)\012- data
file seems convinced it's a Berkeley DB file. So I tried using db_dump (db_dump doesn't come with a man page on openSUSE, so I referred to Oracle's [db_dump man page](https://docs.oracle.com/cd/E17276_01/html/api_reference/C/db_dump.html), which suggests using -l to list the databases in a file):
% db_dump -l synoshare.db
db_dump: __db_meta_setup: synoshare.db: unexpected file type or format
db_dump: open: synoshare.db: Invalid argument
What should I try next? I understand this database may be structured in some sort of proprietary format, but how likely is that? It appears to bear some resemblance to Berkeley DB files anyway, so perhaps I could use some trick to convert it to a proper Berkeley DB file?
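Since file(1) identifies it as Berkeley DB 1.85/1.86 and modern db_dump only reads newer formats, the 1.85-compatibility dumper is worth a try; a sketch (db_dump185 ships with Berkeley DB's compatibility utilities, and package names vary by distribution):

# db_dump185 understands the old 1.85/1.86 on-disk btree format
db_dump185 -p synoshare.db | head

# Alternatively, read it from Python if a bsddb185-compatible module is installed
python3 -c "import bsddb185; print(dict(bsddb185.btopen('synoshare.db')))"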
kos (4255 rep)
Oct 26, 2024, 12:59 AM • Last activity: Oct 26, 2024, 10:20 AM
2 votes
4 answers
881 views
How can I append a comma to every line except the last?
With the following shell script, I read a file and, for each line, generate SQL fragments:
#!/bin/bash
CURRDATE=$(date +'%Y-%m-%d %H:%M:%S')
echo "$CURRDATE"
echo "$OUTPUT"

echo "SET SESSION AUTOCOMMIT = 0;" > "$OUTPUT"
echo 'START TRANSACTION;' >> "$OUTPUT"

echo 'INSERT INTO ...' >> "$OUTPUT"
echo 'VALUES' >> "$OUTPUT"
tr -d '\r' < "$INPUT" | while IFS= read -r LINE; do
    echo "(..., '$LINE', ...)," >> "$OUTPUT"
done

echo "COMMIT;" >> "$OUTPUT"
The output looks like this:
SET SESSION AUTOCOMMIT = 0;
START TRANSACTION;
SET @created_at = '2024-08-09 09:30:45';
INSERT INTO ...
VALUES
(...),
(...),
(...), -- <<<<<<<<<<<<<<<< should be a semi-colon
COMMIT;
How can I suppress the last comma so that I can put a semicolon there instead?
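A minimal approach is to write the VALUES lines to a temporary file and fix the final comma with sed before appending COMMIT; a sketch that drops into the script above:

# Build the VALUES lines first, then turn the last trailing comma into a semicolon
tr -d '\r' < "$INPUT" | while IFS= read -r LINE; do
    echo "(..., '$LINE', ...),"
done > values.tmp

sed '$ s/,$/;/' values.tmp >> "$OUTPUT"
echo "COMMIT;" >> "$OUTPUT"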
Jin Kwon (564 rep)
Aug 9, 2024, 12:36 AM • Last activity: Aug 10, 2024, 03:20 AM
1 votes
1 answer
626 views
Make updatedb include the paths pointed to by symbolic links
I have created an `mlocate` database with the contents of a particular folder. I see that `updatedb` doesn't include the paths pointed to by symbolic links in the database. How can I include the paths pointed to by symbolic links in the database?

**Surprisingly**: *mlocate* has a default option `-L` or `--follow` that follows trailing symbolic links when checking file existence (default).

> What purpose does that serve when *updatedb* doesn't include symlinks!

---

References:

- [updatedb(8): update database for mlocate - Linux man page](https://linux.die.net/man/8/updatedb)
- [mlocate - Gentoo Wiki](https://wiki.gentoo.org/wiki/Mlocate)
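updatedb deliberately does not descend into symlinked directories when scanning; a common workaround is to build an extra database rooted at the link's target and have locate search both (paths below are examples):

# Build a second database over the real target of the symlink
updatedb -l 0 -U /mnt/real-target -o ~/.extra.db

# locate consults extra databases listed in LOCATE_PATH
export LOCATE_PATH=~/.extra.db
locate somefile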
Porcupine (2156 rep)
Jun 9, 2020, 09:33 AM • Last activity: Jun 18, 2024, 01:26 PM
25 votes
7 answers
6696 views
Standard key/value datastore for unix
I know about the **key/value** libraries for unix (**berkeleydb**, **gdbm**, **redis**...). But before I start coding, I wonder if there is a standard tool for unix that would allow me to perform the following operations:

$ tool -f datastore.db put "KEY" "VALUE"
$ tool -f datastore.db put -f file_key_values.txt
$ tool -f datastore.db get "KEY"
$ tool -f datastore.db get -f file_keys.txt
$ tool -f datastore.db remove "KEY"
$ etc...

Thanks
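There is no single blessed key/value tool on Unix, but sqlite3 is near-universal and covers the same contract from the shell; a sketch (the kv table layout is a choice, not a standard):

sqlite3 datastore.db "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)"
sqlite3 datastore.db "INSERT OR REPLACE INTO kv VALUES ('KEY', 'VALUE')"   # put
sqlite3 datastore.db "SELECT v FROM kv WHERE k = 'KEY'"                    # get
sqlite3 datastore.db "DELETE FROM kv WHERE k = 'KEY'"                      # remove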
Pierre (1803 rep)
Oct 3, 2011, 12:42 PM • Last activity: May 18, 2024, 03:42 AM
2 votes
1 answer
314 views
MSSQL database server on UNIX
I have heard rumors, but did not manage to find anything more concrete, about some sort of MSSQL server running on a UNIX machine, or an MSSQL/MySQL server that can handle MSSQL storage. I would like to know whether it is possible, and what the possible methods are to achieve this.
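For the record, Microsoft has shipped SQL Server for Linux since 2017, including an official container image, so today the direct route looks roughly like this (the password is a placeholder):

docker run -d --name mssql \
    -e ACCEPT_EULA=Y \
    -e MSSQL_SA_PASSWORD='YourStrong!Passw0rd' \
    -p 1433:1433 \
    mcr.microsoft.com/mssql/server:2019-latest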
Deele (293 rep)
Dec 26, 2011, 03:43 PM • Last activity: Feb 27, 2024, 09:32 AM
3 votes
1 answer
233 views
Ways to store data for command line API
I am developing an API in a Unix environment for virtual machines. I have to store some information about the virtual machines in a table. Currently I am using a Python dictionary of virtual-machine objects and storing it with pickle. I would like to know about other good ways (if any) to store data for command-line APIs. Any suggestion would be helpful.
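An embedded SQLite file is a common step up from pickle here: still a single file, but queryable, language-neutral, and safe for concurrent readers. A sketch of the shape of it from the shell (the table layout is illustrative):

sqlite3 vms.db "CREATE TABLE IF NOT EXISTS vms (
    name TEXT PRIMARY KEY, state TEXT, memory_mb INTEGER)"
sqlite3 vms.db "INSERT OR REPLACE INTO vms VALUES ('vm1', 'running', 2048)"
sqlite3 vms.db "SELECT name FROM vms WHERE state = 'running'"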
Dany (189 rep)
Nov 9, 2014, 10:08 AM • Last activity: Feb 17, 2024, 09:07 PM
0 votes
1 answer
4169 views
No space left on disk (PostgreSQL)
I was trying to write a 55 GB SQL file to my database. It crashed each time. I removed logs and did some cleaning to get the server running, but the transaction never completed. Later, I emptied all the tables I was writing into (I had a backup), but even after that, when I tried writing the SQL file, it gave me this error after a series of successful inserts:

Could not write to file "pg_subtrans/14C6" at offset 106496: No space left on device.

Is there a way out of this? Are there logs related to PostgreSQL that I should still clean? I've read that moving the pg_xlog might help? Should I try this? Any help will be appreciated. Thanks!

Output of df -h reformatted from comment for readability:

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_zarkin-lv_root  574G  540G  4.3G 100% /
/dev/sda1                      194M   35M  150M  19% /boot
tmpfs                          5.9G  928K  5.9G   1% /dev/shm
/dev/mapper/smeidc0-smeisky0   7.2T  5.2T  2.0T  73% /smeidc/sky0
/dev/md0                       7.2T  6.5T  400G  95% /smeidc/backup
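Given that / is at 100% while the big arrays still have room, the usual first steps are to find what fills the root filesystem and to confirm which filesystem the PostgreSQL data directory (and pg_subtrans under it) lives on; a generic sketch:

# Largest directories on the root filesystem (-x stays on one filesystem)
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20

# Which filesystem holds the data directory?
df -h "$(sudo -u postgres psql -tAc 'SHOW data_directory')"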
Pushpak Raj Gautam (1 rep)
Jul 5, 2017, 10:35 PM • Last activity: Jan 11, 2024, 05:51 AM
2 votes
2 answers
7804 views
How to change adminer's default port?
My Adminer (database management in a single PHP file) runs on a Debian server, and I want to move my database management web login screen from port 80 to port 50001. How can I do it?
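For a quick test, PHP's built-in web server can serve the single file on the new port; for a permanent setup you would instead add a second listen port to the Apache or nginx vhost. A sketch (the document root is assumed):

cd /var/www/html              # directory containing adminer.php (path assumed)
php -S 0.0.0.0:50001
# then browse to http://your-server:50001/adminer.php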
πter (121 rep)
Jan 5, 2018, 08:43 AM • Last activity: Dec 24, 2023, 03:17 PM
0 votes
1 answer
403 views
accessing docker database from other docker container(s)
This is a very confusing matter to me. I have multiple containers, each of which comes with its own database, which creates resource overhead. I've deployed a MariaDB container and I want the other containers to use it, but something always goes wrong and I can't figure it out. So here are some notes:

1) When deploying MariaDB, I configured it to be on the same Docker network as the other services.
2) When deploying MariaDB, I used the bind volume /a/path:/config ... but I can't find a folder called config on that path; I can only find log/, databases/ and custom.cnf ... should I create the /config directory myself, or is it the same as databases?
3) Inside /databases there are /maria_db and /mysql ... where are the databases kept? **NOTE** that I named the database environment variable maria_db ... so are the databases saved here?
4) The directories /databases/mysql/ and /databases/maria_db/ both contain files that look like databases to me (e.g. .frm, .ibd), but /mysql has more file extensions that I can't find in /maria_db (e.g. .MAD, .MAI), and that is confusing (see the next note).
5) If I have to let the other containers use this DB, should I just set environment variables that name the database (e.g. MYSQL_DATABASE=maria_db), or should I also give the other containers access to the databases directory? And if so, **which one?** /maria_db, /mysql, or the parent folder /databases that has both?
6) For each container I want to use a different database username and password (MYSQL_USER, MYSQL_PASSWORD); is this a good idea? (As far as I know, I should add them to the database via the console rather than passing them as variables, except for the first username and password.)
7) If I should mount the database directory into each container that will use it, **where** should I mount it in the container that uses it?

It's a lot of questions, I know, but it's a confusing topic that I couldn't figure out myself, even though I've spent hours and hours reading (yesterday alone I spent 12 hours on this). Thanks in advance.
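On points 1, 5, and 7: containers on the same user-defined network reach MariaDB over TCP by container name, and app containers should never mount the database's data directory at all. A sketch with the docker CLI (names, passwords, and the data path are placeholders; the official mariadb image stores data in /var/lib/mysql, while some third-party images use /config instead):

docker network create appnet

docker run -d --name mariadb --network appnet \
    -e MARIADB_ROOT_PASSWORD=secret \
    -v /a/path/databases:/var/lib/mysql \
    mariadb

# Each app container gets its own user/database and connects to host "mariadb"
docker run -d --name app1 --network appnet \
    -e DB_HOST=mariadb -e DB_NAME=app1_db \
    -e DB_USER=app1 -e DB_PASSWORD=pw1 \
    some/app-image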
Abd Alhaleem Bakkor (347 rep)
Nov 9, 2023, 07:58 AM • Last activity: Nov 9, 2023, 08:51 AM
0 votes
0 answers
357 views
How to Fix "MySQL error message was : DBI connect failed : Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) ?"
I upgraded my VPS with the provider Contabo, and since then I have been experiencing issues. Unfortunately, they have not been responsive or provided any support. When I run service mysql start, it returns:

The full MySQL error message was : DBI connect failed : Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

The command sudo systemctl status mysql gives:

mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-10-30 03:51:56 CET; 9s ago
Process: 4936 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=1/FAILURE)

Oct 30 03:51:56 vmi891613.contaboserver.net systemd: mysql.service: Scheduled restart job, restart counter is at 5.
Oct 30 03:51:56 vmi891613.contaboserver.net systemd: Stopped MySQL Community Server.
Oct 30 03:51:56 vmi891613.contaboserver.net systemd: mysql.service: Start request repeated too quickly.
Oct 30 03:51:56 vmi891613.contaboserver.net systemd: mysql.service: Failed with result 'exit-code'.
Oct 30 03:51:56 vmi891613.contaboserver.net systemd: Failed to start MySQL Community Server.

I tried restarting with sudo systemctl start mysql, but it shows:

Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.

systemctl status mysql.service gives the same output as above. If anyone knows how to fix this without losing the websites on the VPS, please help. I have about 6 domains on this VPS and the service provider (Contabo) is not helping. It's been 48 hours and the financial losses continue to mount with each passing hour.
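The socket message is only a symptom of mysqld failing to start; the real reason is in the error log. Some generic checks worth running (log paths vary by distribution):

# Why did the ExecStartPre step fail?
journalctl -u mysql --no-pager -n 50
sudo tail -n 50 /var/log/mysql/error.log

# Two common culprits after a VPS migration
df -h /var/lib/mysql      # a full disk stops MySQL from starting
ls -ld /var/lib/mysql     # ownership should be mysql:mysql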
MD.Favaz (1 rep)
Oct 30, 2023, 02:58 AM • Last activity: Oct 31, 2023, 05:16 AM
-1 votes
2 answers
176 views
Running SQL databases offline
I'm running a Debian 10 based distro. As an experiment, I downloaded one of these Wikipedia-like sites, which comes to a download size of several gigabytes. I was hoping to run it offline. I don't know if I've ever had any experience with .sql, maybe 15+ years ago in high school. When unzipped, it has a bunch of folders; images is the largest folder, but I think the main one is database.sql, which is 3 GB. How easy is it to navigate this database offline? Obviously when it runs online there is a search function, etc. If running something like this is feasible on an offline desktop, what .sql programs need to be installed in order to do it?
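A .sql file of that size is a dump meant to be imported into a running MySQL/MariaDB server, after which it can be browsed from the CLI or a web front end; a minimal sketch on Debian (the database name wiki is a placeholder):

sudo apt install mariadb-server
sudo mysql -e "CREATE DATABASE wiki"
sudo mysql wiki < database.sql      # the import can take a while at 3 GB
sudo mysql wiki -e "SHOW TABLES"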
1toneboy (465 rep)
Nov 28, 2021, 09:43 PM • Last activity: Oct 21, 2023, 10:46 AM