
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
155 views
Completeness of Patch Population
I'm performing an audit on Oracle database changes and the database is on a Linux OS. After using the command **opatch lsinventory**, we learned that this command has not been pulling the complete population of patches on the database. It is only showing the last patch applied, contrary to many online definitions, which say it should list ALL patches. What could be causing this? What's a foolproof way to pull all changes (patch, schema, table, etc.) applied? Thank you all!
PlatosSpaghetti (1 rep)
Oct 9, 2017, 10:44 PM • Last activity: Jul 22, 2025, 04:06 AM
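A minimal sketch of cross-checking the inventory from inside the database, assuming a 12c-or-later instance where dba_registry_sqlpatch is populated by datapatch (registry$history covers older releases); paths and the sqlplus login are illustrative, not taken from the question:

```
# Full software inventory as OPatch sees it in this Oracle home
$ORACLE_HOME/OPatch/opatch lsinventory -detail

# What was actually recorded in the database dictionary
sqlplus -s / as sysdba <<'SQL'
SET LINESIZE 200 PAGESIZE 100
-- 12c+: one row per datapatch apply/rollback action
SELECT patch_id, action, status, action_time FROM dba_registry_sqlpatch ORDER BY action_time;
-- older releases log upgrade/patch actions here instead
SELECT action_time, action, version, comments FROM registry$history ORDER BY action_time;
SQL
```

Schema and table changes are not part of the patch inventory at all; capturing those usually means DDL auditing (AUDIT or a DDL trigger), which is a separate mechanism.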
0 votes
1 answer
195 views
Sysbench - Public key for postgresql-libs-9.2.24-7.el7_9.x86_64.rpm is not installed
I have a `mariadb` on a `redhat server (7.9)` and am trying to install `sysbench` for benchmarking with this command:

sudo yum -y install sysbench

I got this error:

Loaded plugins: fusioninventory-agent, product-id, rhnplugin, search-disabled-repos
This system is receiving updates from RHN Classic or Red Hat Satellite.
Resolving Dependencies
--> Running transaction check
---> Package sysbench.x86_64 0:1.0.17-2.el7 will be installed
--> Processing Dependency: libck.so.0()(64bit) for package: sysbench-1.0.17-2.el7.x86_64
--> Processing Dependency: libluajit-5.1.so.2()(64bit) for package: sysbench-1.0.17-2.el7.x86_64
--> Processing Dependency: libpq.so.5()(64bit) for package: sysbench-1.0.17-2.el7.x86_64
--> Running transaction check
---> Package ck.x86_64 0:0.5.2-2.el7 will be installed
---> Package luajit.x86_64 0:2.0.4-3.el7 will be installed
---> Package postgresql-libs.x86_64 0:9.2.24-7.el7_9 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch    Version         Repository                       Size
================================================================================
Installing:
 sysbench         x86_64  1.0.17-2.el7    rhel-x86_64-7-epel-20220531     152 k
Installing for dependencies:
 ck               x86_64  0.5.2-2.el7     rhel-x86_64-7-epel-20220531      26 k
 luajit           x86_64  2.0.4-3.el7     rhel-x86_64-7-epel-20220531     343 k
 postgresql-libs  x86_64  9.2.24-7.el7_9  rhel-x86_64-7-updates-20220531  235 k

Transaction Summary
================================================================================
Install  1 Package (+3 Dependent packages)

Total size: 756 k
Installed size: 2.3 M
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/rhel-x86_64-7-updates-20220531/packages/postgresql-libs-9.2.24-7.el7_9.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Public key for postgresql-libs-9.2.24-7.el7_9.x86_64.rpm is not installed

And I don't understand why it can't find the key.
TimLer (163 rep)
Aug 30, 2022, 12:58 PM • Last activity: Jun 19, 2025, 05:09 PM
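A hedged sketch of the usual fix for the NOKEY warning, assuming the missing key is one of the Red Hat release keys already shipped under /etc/pki/rpm-gpg (the exact key file can differ per system):

```
# See which GPG keys rpm already trusts
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

# Import the Red Hat release key so yum can verify postgresql-libs
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# Retry the install
sudo yum -y install sysbench
```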
0 votes
2 answers
222 views
Implementing older version of mySQL on RH9
I'm building a virtual machine running Red Hat Linux 9 (Shrike) and MySQL. This is to virtualize a current production system that needs new hardware. Questions:

1. Can I copy the existing mysql daemon executables to the new machine from the old one, and if so, where are they located and what is involved?
2. Are there archived packages of the early mySQL releases? I was hoping to find the exact version running on my old server, do an install, and then copy the database files.

### Update:

Hi..... many thanks for the ideas. I'm pretty sure the copy is the way to go.... (thanks Tometzky), but it is somewhat fraught, as the old hardware has a software RAID array, so I'm not sure I can do a bit copy and have it work. And, the original mySQL is 3.23 (just about the first production version....). The earliest source I've found was at SkyServer, which started at version 4.x. A dd from one /dev/sda to /dev/sdb seems to work fine (simulated in VirtualBox). So far /dev/md0 (from a RAID 1 drive) doesn't work, but I'm going to try copying partitions individually and then see if that will work. Our original mySQL is 3.23 (just about the first production version....). The earliest packaged binaries at SkyServer are 4.1. Obviously, this is a temporary move to buy us some time while we re-write the applications that query the mySQL database.
larryk (1 rep)
Feb 6, 2014, 02:45 PM • Last activity: Jun 19, 2025, 04:03 PM
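A minimal sketch of the "copy the data, not the disk" approach, assuming MySQL 3.23 with MyISAM tables (where a cold copy of the data directory is normally sufficient); the paths are common defaults and the binary locations are assumptions to adapt:

```
# On the old server: stop mysqld so the files are consistent
/etc/init.d/mysqld stop        # or: service mysql stop

# Copy the data directory, config and server binaries to the new VM
rsync -a /var/lib/mysql/  newhost:/var/lib/mysql/
rsync -a /etc/my.cnf       newhost:/etc/my.cnf
rsync -a /usr/sbin/mysqld /usr/bin/mysql* newhost:/usr/local/mysql-3.23/

# On the new VM: fix ownership before starting
chown -R mysql:mysql /var/lib/mysql
```

This avoids the RAID question entirely, since only the filesystem contents travel, not the block device layout.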
3 votes
2 answers
4351 views
db2_install do not have write permission on the directory or file
This is the error I encountered (I ran the db2_install and everything else as root):

DBI1288E The execution of the program /home/DB_SERVER/ibm/db2/v10.5 failed. This program failed because you do not have write permission on the directory or file .

I tried to change the access permission of the path:

chmod -R a+rw /home/DB_SERVER/ibm/db2/v10.5

And then I could create a new file in that folder with vi test. Result of ll in that directory:

total 1
-rw-rw-rw- 1 root root 12 Nov 23 00:24 test

But when I ran the db2_install again, it failed for the same reason.

PS: I tried to change the permissions of the setup files too:

chmod -R a+rwx db2Setup/

Result of ll:

total 72
drwxrwxrwx 6 root root 4096 Nov 23 00:10 db2
-rwxrwxrwx 1 root root 5349 Nov 23 00:10 db2ckupgrade
-rwxrwxrwx 1 root root 5302 Nov 23 00:10 db2_deinstall
-rwxrwxrwx 1 root root 5172 Nov 23 00:10 db2_install
-rwxrwxrwx 1 root root 5136 Nov 23 00:10 db2ls
-rwxrwxrwx 1 root root 5154 Nov 23 00:10 db2prereqcheck
-rwxrwxrwx 1 root root 5154 Nov 23 00:10 db2setup
drwxrwxrwx 10 root root 4096 Nov 23 00:10 ibm_im
-rwxrwxrwx 1 root root 5190 Nov 23 00:10 installFixPack
drwxrwxrwx 4 root root 4096 Nov 23 00:10 nlpack
-rw-r--r-- 1 root root 8 Nov 23 01:00 test

So I have no idea what's wrong. How can I fix this?
Tiana987642 (131 rep)
Nov 22, 2014, 06:03 PM • Last activity: May 31, 2025, 03:06 AM
0 votes
2 answers
237 views
How to create oracle user for more than one version
I have a server under RHEL 6.3 with Oracle 11gR2 and 12c installed on it. I want to create a user, let's say Ahmad, but I want this user to be sysdba on both databases. For a single database I edit .bash_profile to be like the following:

11g:

ORACLE_HOSTNAME=oracledev; export ORACLE_HOSTNAME
ORACLE_UNQNAME=DB11G; export ORACLE_UNQNAME
ORACLE_BASE=/oracle11gr2/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=DB11G; export ORACLE_SID
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
PATH=$PATH:$HOME/bin
unset USERNAME
cd $ORACLE_BASE
export PATH

and if I want to make him sysdba for 12c I use the following:

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=oracledev
export ORACLE_UNQNAME=cdb1
export ORACLE_BASE=/oracle12c/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_SID=cdb1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

My question: is it possible to merge those parameters together without causing a conflict? If yes, please advise.
Ahmad Abuhasna (2718 rep)
Apr 29, 2015, 05:38 PM • Last activity: May 30, 2025, 07:02 PM
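One set of ORACLE_HOME/ORACLE_SID variables cannot point at both homes at once, but both blocks can coexist if the user switches between them on demand. A minimal sketch, assuming the two environment blocks from the question are saved as separate files in the user's home directory (file names are illustrative):

```
# ~/.bash_profile -- keep only the shared settings here, pick an environment when needed
alias env11g='. ~/ora11g.env'   # file containing the 11g block from the question
alias env12c='. ~/ora12c.env'   # file containing the 12c block from the question

# optionally default to one of them at login
env11g
```

Oracle also ships oraenv, which resets ORACLE_HOME/ORACLE_SID from the /etc/oratab entry you choose (`. oraenv`, then enter DB11G or cdb1). For SYSDBA via OS authentication the user only needs to be in the OSDBA group configured for each home (typically dba), independent of these variables.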
0 votes
1 answer
372 views
Postgres Installed in Linux (RHEL) and trying to access / connect using pgadmin (Windows)
I've installed and configured PostgreSQL in Linux successfully:

1. postgres user created
2. new db created successfully
3. I'm trying to connect using pgAdmin III (Windows)
4. The following details were entered in the pgAdmin window:
   Host: DEMO
   Host: XXXX
   Port: 5432
   Maintenance DB: postgres
   Username: postgres
   password: G0!mf17.
5. Clicked on OK

It tried to connect and displayed the message "connecting to database.... Failed." I'm not sure why it failed and what needs to be configured on the Linux system. I'm new to Linux; if anyone is able to guide me to solve this issue I would be grateful. Please let me know if you need more details.
Madhu Manohar (9 rep)
Oct 28, 2015, 12:24 PM • Last activity: May 21, 2025, 02:00 AM
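The usual culprits for a remote pgAdmin failure are PostgreSQL listening only on localhost, no pg_hba.conf rule for the Windows machine, or the Linux firewall. A minimal sketch of the server-side changes, assuming a default RHEL package install with the data directory at /var/lib/pgsql/data and an example client subnet:

```
# /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'                 # default is 'localhost'

# /var/lib/pgsql/data/pg_hba.conf -- allow the pgAdmin workstation (adjust the subnet)
host    all    all    192.168.1.0/24    md5

# restart PostgreSQL and open the port
sudo systemctl restart postgresql      # or: service postgresql restart on older RHEL
sudo firewall-cmd --add-port=5432/tcp --permanent && sudo firewall-cmd --reload
```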
0 votes
2 answers
465 views
COPY Command spawning multiple processes
I have a lengthy data pipeline that imports a CSV, creates a sequence of dependent views and then exports the results using `COPY` as a CSV. This process had been working fine, completing in around 30-60 seconds. Suddenly, this process has started running much slower when it reaches the copy command. Running `SELECT usename, state, query FROM pg_stat_activity;` shows three identical, active `COPY` commands under my username. The process does eventually complete but now takes up to 20 minutes or more. Other than changing some initial sub-setting, the data has remained unchanged. No one else is using this database but there are other users on the cluster. Has anyone encountered this behavior before? Does anyone know what might cause a sudden slow down in a `COPY` operation? Postgres 11.6, RHEL 7
Matt (291 rep)
Feb 19, 2020, 08:45 PM • Last activity: May 4, 2025, 01:05 AM
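The three identical entries may be one leader plus parallel workers rather than three separate copies, or the pipeline issuing the statement more than once; pg_stat_activity in Postgres 11 can distinguish these. A small sketch of what to look at (the database name is a placeholder, and the interpretation above is only a guess at the cause):

```
psql -d mydb <<'SQL'
SELECT pid, backend_type, state, wait_event_type, wait_event,
       query_start, left(query, 60) AS query
FROM   pg_stat_activity
WHERE  query ILIKE 'copy%';
SQL
```

Parallel workers show up with backend_type = 'parallel worker'; genuinely separate sessions each have their own query_start.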
0 votes
1 answer
549 views
Oracle 12c create manual script for automatic startup on RHEL7
I need to create a script to start and shut down a 12c database in rhel7 automatically when the operating system starts and shuts down, it turns out that I have tried the common options with dbstart and dbshut, but it only works for the listener, it does nothing with the database instance ... recent...
I need to create a script to start and shut down a 12c database on RHEL 7 automatically when the operating system starts and shuts down. I have tried the common options with dbstart and dbshut, but they only work for the listener and do nothing with the database instance. Recently I migrated from ASM to the operating system's filesystem, and for that reason I disabled the startup of ASM since I no longer needed it; I don't know if this could be preventing the dbstart and dbshut scripts from starting the instance. My oratab file looks like this: myoradb:/u01/app/oracle/db/12.1:Y The truth is that the content of dbstart and dbshut seems a bit dense and I do not see where the problem could be, so they do not run correctly for the database instance. Is there an example of how to create these scripts manually without using dbstart and dbshut? Thanks
miguel ramires (169 rep)
Aug 28, 2019, 04:11 PM • Last activity: Apr 12, 2025, 05:06 PM
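A minimal sketch of doing it without dbstart/dbshut on RHEL 7: two small scripts plus a systemd unit. The ORACLE_HOME and SID come from the question's oratab line; the script paths and unit name are assumptions to adapt:

```
# /home/oracle/scripts/start_db.sh  (chmod +x, owned by oracle)
#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/db/12.1
export ORACLE_SID=myoradb
$ORACLE_HOME/bin/lsnrctl start
echo "startup;" | $ORACLE_HOME/bin/sqlplus -s / as sysdba

# /home/oracle/scripts/stop_db.sh
#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/db/12.1
export ORACLE_SID=myoradb
echo "shutdown immediate;" | $ORACLE_HOME/bin/sqlplus -s / as sysdba
$ORACLE_HOME/bin/lsnrctl stop

# /etc/systemd/system/oracle-db.service
[Unit]
Description=Oracle 12c database
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=oracle
ExecStart=/home/oracle/scripts/start_db.sh
ExecStop=/home/oracle/scripts/stop_db.sh

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable oracle-db
```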
1 vote
1 answer
58 views
Is Cassandra 3.11 compatible with RHEL 9?
Is Cassandra 3.11.17 compatible with RHEL 9.7 or 9.x? Please help. Thanks, Rohit
RohitAwasthiDBA (11 rep)
Jan 21, 2025, 04:08 PM • Last activity: Jan 22, 2025, 03:12 AM
0 votes
0 answers
35 views
Metabase error when connecting to SQL Server 2016, happens on RHEL9 but not on RHEL7, why?
We are testing an upgraded instance of Metabase, and the upgraded components include

- RHEL (from v7 to v9) operating system,
- Java (from 11 to 21), and
- Metabase software (from v0.32.x to v0.50.x as the current).

And we noticed the following error in the upgraded instance when connecting to a data source of SQL Server 2016.

> "encrypt" property is set to "false" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: Certificates do not conform to algorithm constraints. ClientConnectionId:6bf62750-d5c4-49c5-8ada-4253d8b55055

We suspect the error is related to the system behavior of RHEL9, therefore we designed the following simplified test case to support our point.

Test configuration:

- Java openjdk v11.0.23
- metabase.jar v0.32.5

We start the application by calling the command java -jar metabase.jar, so it starts a fresh demonstration instance of Metabase running on local H2 for the application database. Then, we try to add a data source of SQL Server 2016 configured with encryption not required. Running on RHEL7, the above test succeeded. However, running on RHEL9, it failed with the following log events; notice the timestamp of 12-19 14:58:44:
12-19 14:58:09 DEBUG middleware.log :: GET /api/setup/admin_checklist 200 14 ms (10 DB calls) Jetty threads: 8/50 (3 busy, 5 idle, 0 queued) (48 total active threads)
12-19 14:58:44 INFO metabase.driver :: Initializing driver :sqlserver...
12-19 14:58:44 DEBUG plugins.classloader :: Setting current thread context classloader to shared classloader clojure.lang.DynamicClassLoader@1dc9fc0...
12-19 14:58:44 INFO plugins.classloader :: Added URL file:/data/metabase-test/v0.32.5/plugins/sqlserver.metabase-driver.jar to classpath
12-19 14:58:44 DEBUG plugins.init-steps :: Loading plugin namespace metabase.driver.sqlserver...
12-19 14:58:44 INFO metabase.driver :: Registered driver :sqlserver (parents: :sql-jdbc) 🚚
12-19 14:58:44 DEBUG plugins.jdbc-proxy :: Registering JDBC proxy driver for class com.microsoft.sqlserver.jdbc.SQLServerDriver...
Load lazy loading driver :sqlserver took 173 ms
12-19 14:58:44 DEBUG middleware.log :: POST /api/database 400 314 ms (0 DB calls) Jetty threads: 8/50 (3 busy, 4 idle, 0 queued) (45 total active threads)
{:valid false,
 :dbname
 "com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: \"Certificates do not conform to algorithm constraints\". ClientConnectionId:ca179b99-b3b3-4351-a0de-736b7dc8e765",
 :message
 "com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: \"Certificates do not conform to algorithm constraints\". ClientConnectionId:ca179b99-b3b3-4351-a0de-736b7dc8e765"}

12-19 14:58:44 DEBUG middleware.log :: GET /api/database 200 5 ms (3 DB calls) Jetty threads: 8/50 (3 busy, 4 idle, 0 queued) (45 total active threads)
12-19 15:00:00 INFO task.send-pulses :: Sending scheduled pulses...
We wonder if we missed anything with regard to the new operating system, and we highly appreciate any pointers and hints.
Mike (57 rep)
Dec 19, 2024, 10:19 PM • Last activity: Dec 19, 2024, 10:25 PM
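RHEL 9's DEFAULT system crypto policy rejects certificates signed with SHA-1, which matches the "Certificates do not conform to algorithm constraints" error even though RHEL 7 accepted the same SQL Server certificate. A hedged sketch of how to check and, as a stop-gap, relax the policy (re-issuing the SQL Server certificate with SHA-256 is the durable fix; the metabase service name is an assumption):

```
# Current system-wide policy (RHEL 9 defaults to DEFAULT, which forbids SHA-1 signatures)
update-crypto-policies --show

# Stop-gap on the Metabase host: re-allow SHA-1 signatures, then restart the Java process
sudo update-crypto-policies --set DEFAULT:SHA1
sudo systemctl restart metabase    # or however the Metabase JVM is started
```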
0 votes
2 answers
217 views
Is Cassandra 4.1.5 supported on RedHat 9?
Is Cassandra 4.1.5 supported on Red Hat 9.x? Where can I find a doc or webpage stating this is a supported combination?
leidosian (1 rep)
Sep 5, 2024, 05:03 PM • Last activity: Sep 11, 2024, 03:32 AM
0 votes
1 answer
96 views
Installing Cassandra: RedHat - Only version 4.1~alpha1-1 is available
It looks like some configuration of the rpm repository for Cassandra 41x has changed and means only version 4.1~alpha1-1 is available. We are deploying a stack which includes Cassandra using Ansible. The Cassandra version 4.1.5 is fixed in our deployment. The deployment was working some weeks ago, but today I returned from vacation and it no longer works, error message: No package cassandra-4.1.5-* available.

Inspecting the repository https://apache.jfrog.io/ui/native/cassandra-rpm/41x/ I can see that version 4.1.5 is listed, but it is not returned when executing yum search / yum list:

$ yum list cassandra --showduplicates
Available Packages
cassandra.noarch 4.1~alpha1-1 cassandra
cassandra.src 4.1~alpha1-1 cassandra

A series of files were modified on 2024-08-03 (three days ago) in the directory https://apache.jfrog.io/ui/native/cassandra-rpm/41x/repodata - could this be the change that caused this behaviour? We are now unable to install any version of Cassandra 4.1.x except the alpha release.

$ cat /etc/yum.repos.d/cassandra.repo
[cassandra]
baseurl = https://redhat.cassandra.apache.org/41x/
enabled = 1
gpgcheck = 1
gpgkey = https://downloads.apache.org/cassandra/KEYS
name = apache cassandra repository
repo_gpgcheck = 1

---

Side note: the latest Cassandra installation documentation references 42x, but this does not exist on the repository remote: https://cassandra.apache.org/doc/latest/cassandra/installing/installing.html
Andreas Larfors (3 rep)
Aug 5, 2024, 10:39 AM • Last activity: Aug 9, 2024, 10:31 AM
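A small sketch of what is worth trying first: the symptom also fits stale or partially synced repo metadata on the client, so forcing yum to rebuild its cache rules that out before concluding the repository itself regressed (the commands are standard yum, nothing Cassandra-specific):

```
# Drop every cached copy of the cassandra repo metadata and re-download it
sudo yum clean all --disablerepo="*" --enablerepo=cassandra
sudo yum makecache --disablerepo="*" --enablerepo=cassandra

# Check which versions the freshly downloaded metadata exposes
yum list cassandra --showduplicates
```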
1 vote
0 answers
22 views
Am I misunderstanding man rpm or is there a bug?
Looking for a way to extract certain data elements from an rpm package query, I read its man page and came across the following blurb:

QUERY OPTIONS
The general form of an rpm query command is
rpm {-q|--query} [select-options] [query-options]
You may specify the format that package information should be printed in. To do this, you use the --qf|--queryformat QUERYFMT option, followed by the QUERYFMT format string.

Based on the above, expecting that the query format string is supposed to work as stated, I devised the following command and got the following result:

$ rpm -q -i --qf "%{changelogtext}" xyz
Name : xyz
... blah-blah ...
Summary : Blah blah
Description :
xyz is ...
- Update to blah blah[me@localhost]$

It appears as if the query format is not honored, in the sense that the default query output is still present, followed by the changelog text. Although it is possible to add to the query format string an indicator of where the changelog text begins, it is still a major inconvenience to get rid of the default output that precedes it. Is there a way to enforce the query format string and get rid of the default output, in a single rpm command, without having to resort to additional parsing of the output? Changelog text is only a single example. The goal is to obtain values of arbitrary tags supported by rpm queries.
Satoro Inikei (26 rep)
Jul 22, 2024, 05:41 PM
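The behaviour is expected: `-i` is `--info`, which is in effect a canned query format of its own, so combining it with `--qf` prints both. Dropping `-i` leaves only what the format string asks for. A couple of hedged examples with a placeholder package name:

```
# Only the changelog text; the [...] iterates because CHANGELOGTEXT is an array tag
rpm -q --qf '[%{CHANGELOGTEXT}\n]' xyz

# rpm also has a dedicated option for the full changelog
rpm -q --changelog xyz

# Arbitrary tags, nothing but the values asked for
rpm -q --qf '%{NAME} %{VERSION}-%{RELEASE} %{BUILDTIME:date}\n' xyz
```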
3 votes
2 answers
17078 views
psql (PostgreSQL client) tool install on RHEL
I would like to install just the PostgreSQL client tool (psql) on RHEL in a container, to connect to an Azure Database for PostgreSQL server. Can you please point me to step-by-step instructions on how to do this? I do not want to install the entire PostgreSQL server.
kevin (133 rep)
Sep 1, 2021, 09:21 AM • Last activity: May 22, 2024, 09:09 AM
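A minimal sketch for a RHEL 8-based container, assuming the PGDG repository: in PGDG packaging the client tools live in the postgresqlNN package and the server in postgresqlNN-server, so installing only the former gives you psql. The host and user in the connection string are placeholders:

```
# steps run inside the RHEL/UBI 8 container
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
dnf -qy module disable postgresql          # mask the distro's own postgresql module
dnf install -y postgresql14                # client package only: psql, pg_dump, etc.

# example connection to Azure Database for PostgreSQL
psql "host=myserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require"
```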
0 votes
2 answers
133 views
How do I change the user running DSE?
The user I'm referring to is the user created by the DataStax DSE package install. I'd like to replace that user with another account. I've found a couple of places where the "Cassandra user" is defined, in the init.d file as well as in the /default/DSE directory. I've also already reassigned permissions from "Cassandra" to my new user, but the DSE service won't start up unless I run it as **root** now. I've gone as far as completely wiping out the data, removing the node from the cluster, and repairing it with nodetool, but I can still only run the service as **root** or change everything back to the "Cassandra" user. The error message states that it's a read error in the metadata directory, but I've already ensured that the account has rights to it as well as to the parent. I've also cleared out the local and peers files that were in the metadata directory.
mkvdba (1 rep)
Apr 26, 2024, 06:09 AM • Last activity: Apr 27, 2024, 05:35 AM
2 votes
1 answer
915 views
SQL Server for Linux 2019 - Windows Authentication not working
Hoping to pick the brains of those more knowledgeable than me. I've been trying to set up a SQL Server 2019 instance on Linux; specifically on AWS using AMI **amzn2-x86_64-SQL_2019_Standard-2019.11.12** I've followed the steps to connect the instance to our domain, and am able to successfully login using my domain credentials. sssd.conf below:
[sssd]
domains = [domain name]
config_file_version = 2
services = nss, pam

[domain/[domain name]
ad_server = [FQDN of Domain Controller]
ad_domain = [domain name]
krb5_realm = [DOMAIN NAME]
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = ad
I've followed all the steps on the MSDN documentation on how to set up Windows Authentication: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-active-directory-authentication?view=sql-server-linux-ver15 But whenever I try to login to the SQL Server using my domain credentials I receive the following error:
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication..
And in the mssql-server logs (using systemctl status mssql-server -l) I see the following:
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.24 Logon       Error: 17806, Severity: 20, State: 14.
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.24 Logon       SSPI handshake failed with error code 0x80090304, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The operating system error code indicates the cause of failure.
The Local Security Authority cannot be contacted   [CLIENT: [IP ADDRESS]]
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.25 Logon       Error: 18452, Severity: 14, State: 1.
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.25 Logon       Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication. [CLIENT: [IP ADDRESS]]
The error would suggest an issue contacting the Local Security Authority, so I've double checked my krb5.conf file and don't see any obvious issues:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

includedir /var/lib/sss/pubconf/krb5.include.d/
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
# default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}
 udp_preference_limit=0

 default_realm = [DOMAIN NAME]
[realms]
# EXAMPLE.COM = {
#  kdc = kerberos.example.com
#  admin_server = kerberos.example.com
# }

 [DOMAIN NAME] = {
 kdc = [Domain Controller 1 FQDN]:88
 kdc = [Domain Controller 2 FQDN]:88
 kdc = [Domain Controller 3 FQDN]:88
}

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
 [DOMAIN NAME] = [DOMAIN NAME]
 .[DOMAIN NAME] = [DOMAIN NAME]
And the fact that I can log in using my AD credentials suggests to me there's no overarching issue contacting the domain controllers. Any tips or pointers on where to go next with this would be greatly appreciated!
Anthony (21 rep)
Apr 22, 2020, 09:14 AM • Last activity: Dec 2, 2023, 12:11 PM
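Error 0x80090304 with "The Local Security Authority cannot be contacted" usually points at the Kerberos side (SPN/keytab) rather than sssd itself. A hedged checklist-style sketch, using the keytab path from Microsoft's SQL Server on Linux AD guide; the service account and realm names are placeholders:

```
# Can the box obtain a TGT for the SQL Server service account at all?
kinit sqlsvc@MYDOMAIN.COM

# Does the keytab contain the SPNs clients will request (MSSQLSvc/host.fqdn:1433)?
klist -k /var/opt/mssql/secrets/mssql.keytab

# Point SQL Server at the keytab and restart
sudo /opt/mssql/bin/mssql-conf set network.kerberoskeytabfile /var/opt/mssql/secrets/mssql.keytab
sudo systemctl restart mssql-server
```

If the SPN registered in AD does not match the name clients connect with (for example an alias or load-balanced name), the handshake fails with exactly this SSPI error.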
0 votes
0 answers
139 views
What is the purpose for no gpgcheck in the pgdg-redhat-all.repo?
A repository installed from the official PostgreSQL site (/pgdg-redhat-repo-latest.noarch.rpm) has gpgcheck disabled for PostgreSQL 17 Downloads. See below.
[pgdg17-updates-testing]
name=PostgreSQL 17 for RHEL / Rocky $releasever - $basearch - Updates testing
baseurl=https://download.postgresql.org/pub/repos/yum/testing/17/redhat/rhel-$releasever-$basearch 
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1

[pgdg17-source-updates-testing]
name=PostgreSQL 17 for RHEL / Rocky $releasever - $basearch - Source updates testing
baseurl=https://download.postgresql.org/pub/repos/yum/srpms/testing/17/redhat/rhel-$releasever-$basearch 
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1
Why is that?

Edit: After contacting PostgreSQL support, they confirmed that it is for faster rpm builds. This issue will be corrected later.
Jakub Liška (1 rep)
Oct 30, 2023, 04:09 PM • Last activity: Nov 2, 2023, 11:04 AM
0 votes
0 answers
92 views
Mount MariaDB data in a separate directory on RHEL 8?
We are migrating from a CentOS 7 server (physical server) to RHEL 8 (Azure server). This server was set up by someone else, and admittedly I only know enough to install/configure LAMP; I am not a sysadmin. That said, here is the output of df -h on the current server:
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                       16G     0   16G   0% /dev
tmpfs                          16G  4.0K   16G   1% /dev/shm
tmpfs                          16G  1.5M   16G   1% /run
tmpfs                          16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vglocal00-root00  270G  156G  115G  58% /
/dev/sda1                     477M  225M  249M  48% /boot
/dev/mapper/vglocal00-tmp00   2.0G  388M  1.6G  21% /tmp
You'll notice the main storage is used in /. Here's the new server we are trying to migrate to:
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    16G     0   16G   0% /dev
tmpfs                       16G     0   16G   0% /dev/shm
tmpfs                       16G   25M   16G   1% /run
tmpfs                       16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G  323M  1.7G  16% /
/dev/mapper/rootvg-usrlv    10G  3.9G  6.2G  39% /usr
/dev/sda1                  496M  229M  267M  47% /boot
/dev/mapper/rootvg-homelv 1014M   50M  965M   5% /home
/dev/mapper/rootvg-varlv   8.0G  8.0G  280K 100% /var
/dev/sdc1                  256G  1.9G  255G   1% /data
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/sda15                 495M  5.8M  489M   2% /boot/efi
/dev/sdb1                  147G   32K  140G   1% /mnt
You'll see the largest storage is on /data. Since the biggest file hogs on the current server are the DB's, I was thinking I could put the DB's and their nightly backups on /data and leave everything else as-is. Is this a mistake? Or should the server be reconfigured? /var currently has a LARGE file in it that I can delete so it'll be much smaller. Any help would be appreciated!
Joel Firestone (1 rep)
Sep 8, 2023, 04:23 PM • Last activity: Sep 8, 2023, 05:37 PM
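Putting the databases on /data is a reasonable approach; the parts people usually trip over on RHEL 8 are file ownership and SELinux labels on the new path. A minimal sketch, with /data/mysql as an assumed target directory and an assumed drop-in config file name:

```
sudo systemctl stop mariadb

# Copy the existing data directory, preserving ownership and permissions
sudo rsync -a /var/lib/mysql/ /data/mysql/

# Point MariaDB at the new location (RHEL 8 reads /etc/my.cnf.d/*.cnf)
printf '[mysqld]\ndatadir=/data/mysql\n' | sudo tee /etc/my.cnf.d/zz-datadir.cnf

# Give the new path the right SELinux context, then restart
sudo semanage fcontext -a -t mysqld_db_t "/data/mysql(/.*)?"
sudo restorecon -Rv /data/mysql
sudo systemctl start mariadb
```

Nightly backup dumps can simply be written to another directory under /data; they have no SELinux requirements beyond being writable by whatever user runs the backup job.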
0 votes
0 answers
1280 views
SQL*Net message from client wait too high
It is not easy to describe my problem but I will try. We noticed a slow processing problem with the database when migrating an application from a current VM named VM1 (RedHat 7.9, Java 8, Tomcat, Apache, in a DC data center) to a new VM named VM2 (RedHat 8.7, more performant, on the Azure platform). The database is Oracle 19c on RedHat 7.9 (named VM_DB) in the DC. Following this observation, I made a small Java program, which I launch on the 2 VMs, VM1 and VM2, and we discovered the "SQL*Net message from client" time is much larger on VM2 compared to VM1 (same driver ojdbc8-19.18.0.0.jar).

[screenshots: VM1, VM2]

We looked at the network and database sides and found nothing for the moment. Do you have any ideas or experiences on this subject?

My **test** app:

int nLoop = 100;
long time0 = System.currentTimeMillis();
Connection con=null;
//---------------- connection 1 ---------------
con=DriverManager.getConnection(url,usr,pwd);
//---------------- connection 2 ---------------
/*
PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
pds.setURL(url); //"jdbc:oracle:thin:@//localhost:1521/XE"
pds.setUser(usr);
pds.setPassword(pwd);
pds.setInitialPoolSize(5);
pds.setMinPoolSize(5);
pds.setMaxPoolSize(25);
//pds.setFastConnectionFailoverEnabled(true);
//pds.setImplicitCachingEnabled(true);
//pds.setConnectionCachingEnabled(true)
con = pds.getConnection();
*/
long time1 = System.currentTimeMillis();
System.out.println("T1 - connection :"+ ((time1-time0)));
//step3 create the statement object
Statement stmt=con.createStatement();
for(int t=0; t listOfArguments = runtimeMxBean.getInputArguments();
// print the arguments using my logger
listOfArguments.forEach(e -> {
  System.out.println(e);
  //logger.log(s"ARG: $a")
});
dsea
May 4, 2023, 08:50 AM • Last activity: May 8, 2023, 11:00 AM
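"SQL*Net message from client" is an idle wait: the database is waiting for the application, so a large value usually points at network round-trip latency between the app VM and the database, or at client-side think time, rather than at the database itself. A small sketch of measuring it per session from the database side (the host name in the ping is the question's VM_DB, everything else is standard views):

```
sqlplus -s / as sysdba <<'SQL'
SET LINESIZE 200
-- total time each session has spent waiting for the client, in seconds
SELECT e.sid, s.program, e.total_waits,
       ROUND(e.time_waited_micro/1e6, 1) AS seconds_waited
FROM   v$session_event e JOIN v$session s ON s.sid = e.sid
WHERE  e.event = 'SQL*Net message from client'
ORDER  BY e.time_waited_micro DESC;
SQL

# raw round-trip from each application VM to the database host for comparison
ping -c 20 VM_DB
```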
0 votes
1 answer
83 views
Is it recommended changing the file /sys/kernel/mm/transparent_hugepage/enabled for MariaDB node?
Is it recommended to change /sys/kernel/mm/transparent_hugepage/enabled from 'always' to 'never' for a MariaDB Galera node?
user3637971 (129 rep)
Nov 10, 2022, 03:25 PM • Last activity: Nov 10, 2022, 04:07 PM
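Disabling transparent huge pages is the common recommendation for database workloads, MariaDB/Galera included, and an echo into sysfs does not survive a reboot. A minimal sketch of both the immediate change and a persistent one via a small systemd unit (the unit name is an assumption):

```
# immediate, lost at reboot
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

# persistent: /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable transparent huge pages
Before=mariadb.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable --now disable-thp
```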