Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
0
answers
11
views
where is your vendor location from Composer?
My *vendor* directory from Composer is in the home directory of my Linux user account, and I am very allergic to loading files into projects with something as arbitrary as the user account name in the directory path.
Preferably, I'd like the *vendor* directory much closer to the web directory (but not in public, of course).
What makes me very insecure is that in most examples I see from others, they do have the vendor directory in a much more convenient place, closer to the public server root.
I googled all night on how to move my vendor folder, and I didn't find anything in plain English that I could understand. I guess I am not the smartest, and Linux is mostly headaches and anger for me.
And I am surprised by where my *vendor* directory ended up, as I did what most tutorials showed.
cd /var/www
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === 'dac665fdc30fdd8ec78b38b9800061b4150413ff2e3b6f88543c636f7cd84f6db9189d43a81e5503cda447da73c7e5b6') { echo 'Installer verified'.PHP_EOL; } else { echo 'Installer corrupt'.PHP_EOL; unlink('composer-setup.php'); exit(1); }"
php composer-setup.php
php -r "unlink('composer-setup.php');"
*Most likely, you want to put the composer.phar into a directory on your PATH, so you can simply call composer from any directory (Global install), using for example:*
sudo mv composer.phar /usr/local/bin/composer
I did look at composer.json, but I found nothing on how to re-link the vendor directory in that file, as I do not care if that file stays in the user home dir.
Anyone with words of wisdom, advice, or plain old telling me what I did wrong?
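For what it's worth, Composer does let you relocate the vendor directory through the config.vendor-dir key in composer.json. A minimal sketch (the path shown is only an example; adjust it to your own layout):

```json
{
    "config": {
        "vendor-dir": "/var/www/myproject/vendor"
    }
}
```

After adding this, running composer install again recreates the vendor tree at the new location.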
Desert Wind
(43 rep)
Jun 2, 2025, 07:57 PM
0
votes
1
answers
844
views
IIS web application access SQL DB as service account
I've set up a new Python site on IIS using the FastCGI handler. The site has Windows authentication enabled in IIS, and the app checks that the AD user belongs to an Active Directory group when they access the site. If authorisation fails, access is denied.
Windows authentication uses Kerberos but it is not a double hop.
However the web app reads/writes to a SQL Server database and the DB calls are made using the service account which runs the app pool. The service account has limited access to run the web app and can only access one single database which the web app uses.
I've read that impersonation using constrained delegation would be better from a DB security perspective, although the app does log which user has accessed the DB.
I don't remember the URL now, but it was essentially stating that the SQL database should check that the actual AD user who is using the web app has access to the database, as opposed to the database checking that the service account has access.
Are there any obvious security risks with the approach I'm using?
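One quick sanity check of which principal the database actually receives, using built-in SQL Server functions, run over the web app's own connection:

```sql
-- With app-pool service-account access and no impersonation, both columns
-- should show the service account, not the end user's AD login.
SELECT SUSER_SNAME()    AS login_seen_by_db,
       ORIGINAL_LOGIN() AS original_login;
```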
DeadlyDan
(111 rep)
Apr 13, 2022, 11:10 AM
• Last activity: Apr 6, 2025, 12:06 AM
0
votes
1
answers
98
views
Different response time because it does not use the indexes created with the same query and same tables from the web server to the Oracle database
I have an Oracle 12c (12.1) database. When I make a query locally, and even from TOAD or another remote tool, it returns the response in milliseconds; reviewing the execution plan, I can see that it makes correct use of the indexes created.
The problem is when I run the same query (same query, same tables... everything exactly the same) from my application server, where I have installed Oracle Instant Client and instantclient-sqlplus (12.1). If I launch the query from there (the web server), it takes a long time, and when I check the execution plan I can see that it does not use any of the indexes, unlike the executions from machines other than the web server. Evidently the delay comes from not using the indexes.
I would like to know why this can happen and how I could solve it. Both servers (web and database) are on the same network segment; there is no connectivity element such as a firewall or any other type of routing involved. Changing the query is not an option. Thank you.
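One hedged way to investigate, assuming SQL*Plus is usable on the web server: run the slow query there, dump the plan the optimizer actually used, and diff the session's optimizer environment between a fast client and the slow one. Session settings that differ under the Instant Client (NLS_SORT/NLS_COMP, for instance) are a classic reason a normal B-tree index gets ignored.

```sql
-- Run immediately after the slow query, in the same session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Optimizer-relevant settings for the current session; compare this output
-- between a 'fast' session and a 'slow' one:
SELECT name, value
FROM   v$ses_optimizer_env
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```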
miguel ramires
(169 rep)
Oct 15, 2021, 07:27 PM
• Last activity: Oct 15, 2021, 07:58 PM
-1
votes
2
answers
464
views
Creating Tables with the same table names from other databases doesn't have any bad effects?
I have a WordPress website and I cloned it to another website (domain) using a plugin called Duplicator. So in the backend, phpMyAdmin, I have two different databases, but all the tables in both databases have the same names.
**I just want to make sure that this does not have any kind of bad effects.**

Umer Mansoor
(1 rep)
Jul 23, 2021, 06:17 PM
• Last activity: Jul 24, 2021, 10:46 AM
0
votes
1
answers
141
views
Web server accessing the database server in Always on availability group
We have a web server in the DMZ that needs to query a database residing inside the network. The network team won't allow port 1433 from the web server to the database server.
The database server is in an Always On availability group and uses a specific listener port. My question is: if the network team allows only the Always On availability group listener port, will that be a problem? And if the network team doesn't allow access to that port, what are my options?
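For completeness: when the firewall opens only the listener port, the client must name that port explicitly in the connection string. A sketch (listener name and port number are hypothetical):

```
Server=tcp:ag-listener.corp.local,5023;Database=AppDb;Integrated Security=SSPI;
```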
SQL_NoExpert
(1117 rep)
Oct 27, 2020, 01:23 PM
• Last activity: Oct 27, 2020, 02:52 PM
0
votes
2
answers
67
views
Does my database need to connect to a server?
I'm very new to SQL and databases. I'm currently following this tutorial for creating a Magento website: https://www.tutorialspoint.com/magento/magento_installation.htm
and I'm on Step 4. It says to create a new database for my site (which I'm running on localhost/magento right now). I downloaded MySQL Workbench for this, and when I click "Create new connection", I put localhost/magento into the "hostname" section.
I'm just not sure how I'm supposed to create a database for my specific website, and if it needs to be connected to the URL.
Hope somebody can help me out in simple terms :)
Thanks.
user184434
(11 rep)
Jul 1, 2019, 03:02 AM
• Last activity: Jul 1, 2019, 08:43 AM
0
votes
2
answers
245
views
WARN max_connections from pt-variable-advisor - what to configure for DBaaS?
Our VM configuration (hosted on Vmware)
# cat /proc/cpuinfo |grep "cpu cores" | awk -F: '{ num+=$2 } END{ print "cpu cores", num }'
cpu cores 32
# free -h
             total       used       free     shared    buffers     cached
Mem:           62G        23G        39G       500K       349M        10G
-/+ buffers/cache:        12G        50G
Swap:          50G         0B        50G
I got a warning from pt-variable-advisor about max_connections:
pt-variable-advisor h=localhost,u=root,p=Quule0juqu7aifohvo2Ahratit --socket /var/vcap/sys/run/mysql/mysqld.sock
(...)
# WARN max_connections: If the server ever really has more than a thousand threads running, then the system is likely to spend more time scheduling threads than really doing useful work.
(...)
Why? Any details?
Configuration in my.cnf
max_connections = 15360
settings from prd db cluster (MariaDB 10.1.x and Galera)
MariaDB [(none)]> SHOW STATUS WHERE variable_name = 'Threads_connected';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 718 |
+-------------------+-------+
1 row in set (0.01 sec)
MariaDB [(none)]> SHOW STATUS WHERE variable_name = 'Max_used_connections';
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| Max_used_connections | 924 |
+----------------------+-------+
1 row in set (0.02 sec)
I found the MySQL manual and a (very) old similar question.
> The number of connections permitted is controlled by the
> max_connections system variable. The default value is 151 to improve
> performance when MySQL is used with the Apache Web server.
> (Previously, the default was 100.) If you need to support more
> connections, you should set a larger value for this variable.
and
> The maximum number of connections MySQL supports depends on the
> quality of the thread library on a given platform, the amount of RAM
> available, how much RAM is used for each connection, the workload from
> each connection, and the desired response time. Linux or Solaris
> should be able to support at least 500 to 1000 simultaneous
> connections routinely and as many as 10,000 connections
We currently have 460 users, and every user is allowed 100 max_connections. This would be the max value. Is 100 max_connections per user and database too high? With modern connection pooling, can we set this to 20? How should we configure this without risking overloading our server with context switching? Is it possible for one web app to use one connection (every statement over the same connection)?
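If it helps, both the running values and a per-account cap can be handled from SQL; MariaDB supports a per-user connection limit via GRANT (the account name below is hypothetical):

```sql
-- Current ceiling and high-water mark:
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS    LIKE 'Max_used_connections';

-- Cap one application account at 20 concurrent connections:
GRANT USAGE ON *.* TO 'app_user'@'%' WITH MAX_USER_CONNECTIONS 20;
```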
Sybil
(2578 rep)
Jun 8, 2018, 11:10 AM
• Last activity: Jun 9, 2018, 11:48 AM
0
votes
1
answers
1944
views
Two Oracle ORDS on one web server accessing one database
Oracle ORDS 3.0
Oracle 12c database
Red Hat Linux 6.8
Tomcat 8.0.22
Can two ORDS be deployed to one web server (Tomcat) and access the same database?
If so, how is this accomplished?
That is, I would be deploying to Tomcat an ords1.war file and an ords2.war file that both connect to the same Oracle database.
Better yet, I need two URLs that point to two different applications running on the same database. Perhaps another ORDS is not needed.
The context root (normally just 'ords') has to be different for each application.
For example, ords1 and ords2. Hence the thought that there need to be an ords1.war and an ords2.war file deployed.
tale852150
(237 rep)
May 3, 2018, 07:27 PM
• Last activity: May 4, 2018, 08:47 PM
1
votes
2
answers
5368
views
What are possible ways of handling multiple users accessing the same database via custom web API?
I am very new to databases and Web design, but I understand enough to know that designing a good database is crucial, so I would like to prepare for that conceptually. However, I was unable to find any answers on my level on the web/in tutorials.
I am working towards developing a database (MariaDB, to be exact) that will be accessed by my website (written in Django). I want the website to serve as an API for several people to work on the database. Naturally, I want to set some user- or group-specific privileges.
What are the possible ways of doing that, and what are their upsides and downsides?
I can only think of two general approaches:
1. Connecting the website to the database as root, and letting the website API limit users' ability to edit certain tables.
2. Using some sort of server-side user system directly in a web API.
This is important to me both in the context of limited access to certain tables within a database and syncing multiple people editing the same table. Is there an approach generally considered as a good practice?
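One common pattern (a sketch only; the database, table, and account names are made up) is to map each API role to a dedicated MariaDB account with narrow grants, so the Django layer never connects as root:

```sql
-- Read-write role limited to one table:
CREATE USER 'editor'@'localhost' IDENTIFIED BY 'change_me';
GRANT SELECT, INSERT, UPDATE ON appdb.articles TO 'editor'@'localhost';

-- Read-only role for the whole schema:
CREATE USER 'reader'@'localhost' IDENTIFIED BY 'change_me';
GRANT SELECT ON appdb.* TO 'reader'@'localhost';
```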
MrVocabulary
(115 rep)
Jun 11, 2017, 08:21 AM
• Last activity: Jun 13, 2017, 04:49 PM
0
votes
1
answers
6046
views
Where is the data stored in MySQL databases?
I don't know much about RDBMS but I am interested, I am working on a project which requires a database, I am planning to use MySQL to maintain that database.
I have installed MySQL on macOS and I have created a database which has one table in it.
Now I have the following questions in mind:
1) Does turning on the MySQL server turn my local computer into a server, and does it store databases on my computer when I create them using MySQL?
If so, can I locate the database files?
2) Can I make an offline database using MySQL that can be located through 'Finder'?
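For reference, the on-disk location can be read straight from the server itself (the resulting path varies by install method):

```sql
SHOW VARIABLES LIKE 'datadir';
```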
user120246
Mar 22, 2017, 04:54 PM
• Last activity: Mar 22, 2017, 05:54 PM
1
votes
1
answers
354
views
SQL Server Web Service in a web environment architecture
I am new at SQL Server web Service and not too knowledgeable in setting up a secure web environment...
From what I've read until now regarding SQL Web Service, I would need to:
1. create a stored procedure,
2. create an endpoint
3. create a web service
4. call the stored procedure from the web...
Sounds easy enough, although for steps 3 and 4 I don't have a clue how to do them yet... but my question is about the architecture before I start testing. From my understanding, the SQL web service resides on the SQL Server itself?
If this is the case, it means that the DMZ would have direct access to our internal network? That's a no-no as far as I know...
In an architecture where I would want a SQL Server within a closed-off network, an application/business tier within a DMZ, and web servers in another DMZ: how would I be able to get the information that is needed from the SQL Server in the closed-off network while keeping it secure? Hence the reason I started investigating the possibility of using the SQL Server web service...
In my mind, each web server in DMZ1 will have its DB in the DMZ, which will contain minimal non-sensitive info to run the website. But retrieving general accumulated information (such as points, sales, or historic info) **that can be generated from different web sites and different sources** cannot be transferred to the website DBs due to their size, and the data needs to be in real time.
I was thinking of having a DMZ2 which contains the SQL Server Web Service (as an SQL Web Service Layer) which would retrieve data from the SQL server within our network.
Is it possible?
Is it a good idea?
If not, How would you go about it?
Thank you for your help.
JohnG
(1093 rep)
Mar 3, 2017, 08:17 PM
• Last activity: Mar 3, 2017, 10:41 PM
1
votes
0
answers
2387
views
SQL state S1000 in SQLConnect (but not all the time)
I have a CentOS6 webserver and a database server (MS SQL Server 2012). When I use the website everything is going fine except that once an hour or so I get a can't connect error in my php code. I looked in the ssl_error_log and found the following error at the same time I got the error:
PHP Warning: odbc_connect(): SQL error: [unixODBC][FreeTDS][SQL Server]Unable to connect to data source, SQL state S1000 in SQLConnect
Can anyone help me solve this strange problem?
I'm not sure if it's a problem with the sql server or the webserver. I can't find any problems in the event viewer on the sql server.
Thijs
(11 rep)
Jan 17, 2017, 09:34 AM
• Last activity: Jan 17, 2017, 10:21 AM
0
votes
1
answers
647
views
How to get mysql automatically running when I start tomcat apache on Windows?
When starting my Apache Tomcat server, I've been having to start up MySQL first using the mysqld command. I was wondering if there is a workaround so that MySQL could start up when I run the startup.bat command for Tomcat.
Sorry if this is an easy question. I have a very limited understanding of both databases and web servers, so I wasn't able to find or understand any information on Google.
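One possible workaround (a sketch, assuming MySQL is installed as a Windows service named "MySQL"; the service name may differ on your machine) is a small wrapper batch file that starts the service and then Tomcat:

```bat
@echo off
rem Start the MySQL Windows service first; "MySQL" is the assumed service name.
net start MySQL
rem Then launch Tomcat as usual.
call "%CATALINA_HOME%\bin\startup.bat"
```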
danielle
(1 rep)
Dec 12, 2014, 04:33 AM
• Last activity: Jul 10, 2016, 05:11 AM
1
votes
1
answers
107
views
Sessionizing Web Log Data using MySQL (perhaps by simulating window functions)
I have a large MySQL Table containing about 15 Million rows of web log data. Depending on the IP, useragent and site_id (different sections on the website) I want to create user sessions. The session delimiter should be 30 minutes. So if a unique user (same IP and useragent) is inactive for 30 minutes or if he visits a different site area (2, 3, or 4) a new session starts.
I found a very good blog entry that describes the general procedure: http://randyzwitch.com/sessionizing-log-data-sql/ But I can't implement this 1:1 because MySQL doesn't support window functions, and I'm not proficient enough with MySQL to figure it out on my own, e.g. with GROUP_CONCAT().
Here is anonymized example data on sqlfiddle: http://www.sqlfiddle.com/#!9/a8b1f/1
It would be really great if you could help me.
I already posted this question on Stack Overflow .
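The window-function step can be imitated in older MySQL with user variables. A sketch only: the table and column names are guessed from the description, and it relies on MySQL's conventional left-to-right evaluation of select-list assignments, which is not formally guaranteed:

```sql
-- Reset the state variables before the query:
SET @sess := 0, @prev_key := '', @prev_ts := NULL;

SELECT ip, useragent, site_id, ts,
       -- new session when the user/site combination changes
       -- or the gap since the previous hit is >= 30 minutes
       @sess := IF(@prev_key = CONCAT_WS('|', ip, useragent, site_id)
                   AND ts < @prev_ts + INTERVAL 30 MINUTE,
                   @sess, @sess + 1)                        AS session_id,
       @prev_ts  := ts                                      AS _prev_ts,
       @prev_key := CONCAT_WS('|', ip, useragent, site_id)  AS _prev_key
FROM  (SELECT ip, useragent, site_id, ts
       FROM weblog
       ORDER BY ip, useragent, site_id, ts) ordered;  -- sort in a derived table
```

Sorting in the derived table matters: applying ORDER BY in the same level as the variable assignments can evaluate the variables before the sort.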
LarsVegas
(55 rep)
Nov 6, 2015, 08:48 AM
• Last activity: Nov 10, 2015, 12:56 AM
6
votes
1
answers
1590
views
Do multiple SQL Server instances increase security?
# Context #
Let's assume you have a server which exposes a web server and one or more web services to store and manage sensitive information about physical persons (assume, for this example, a complete medical history but also phone numbers, e-mails and other private information).
Access is authenticated and you did in your code everything you need to reach a reasonable level of security.
Web server and web services are running on a Windows Server with IIS + ASP.NET and databases are in a single SQL Server instance. Assume system is always up-to-date, logs are carefully evaluated and system is properly configured and an attacker has no physical access to machine itself.
# Current Architecture #
- SQL Server is installed on a separate - firewalled - machine (What is the best practice for placing DataBase servers in secure network topologies and How do you explain to experts that a database server should not reside in the DMZ? ).
- Obviously every user input is validated and sanitized (if/when required); also, information sent by the client (even when not directly entered by the user) is re-validated, and inconsistencies trigger alerts.
Even if not directly related to DB also remember that:
- Proper password management (hashes stored with a good - and slow - hashing algorithm, as described in How to securely hash passwords?) and security rules (passphrases are encouraged over short/complicated passwords, and password changes are required only after many failed login attempts; see also How does changing your password every 90 days increase security?).
- Handling of parallel attacks (incremental delays for each failed login - both from the same IP and for the same user name - and black lists). Related: How to uniquely identify users with the same external IP address?.
- Sessions have timeouts (user activity resets the short one; the long one is fixed). Also, on the client side there is a _weak_ protection that automatically disconnects the user if he navigates away without disconnecting (the same described in Google Chrome restores session cookies after a crash, how to avoid?, but also to automatically log out when navigating away).
- Logs are manually monitored but there are rules that automatically trigger alerts.
Data are stored in N+3 different databases (each one configured with exactly the required permissions, no sa-like access, as described in How to secure database table of users for an application?).
- One database for logs (write-only for web server accounts).
- One database to store login information (readonly for web server accounts, a different Intranet web application will run with a different user).
- One database to map login with a physical database (and other _internal_ stuff to manage accounts), again read-only for external accessible web application.
- One separate database for each user of the system (read and write for everyone).
# Question #
Would using three different SQL Server instances (one for logs, one for accounts and mapping, and one for all user databases) increase security, or just complexity?
Will this also affect performance? (If you can't answer this without more context, you may simply ignore performance issues unless they're obviously much worse.)
Moreover, is there any drawback to merging mapping information and accounts together? (Would separate databases with the same permissions increase security in any way?)
## Considerations ##
I know that in security often "more is better" (at least it's a common motto) but drawbacks may be greater than any benefit (if any):
- Increased cost for hardware and software.
- Increased complexity (both for setup and maintenance), this is IMO a big drawback because a (possibly) safer system with a non-optimal configuration may be much less secure than a simpler one.
My perplexity is that if an attacker is able to run arbitrary code (because of a bug in my application or because of an exploit), then it doesn't matter where things are: he has all the resources he needs to do what he wants. I assume we won't detect the attack quickly enough to stop the service, so the time he needs to realize there is another machine to connect to is small compared to the total time he has to perform his _actions_.
I don't know if this question is strictly on-topic here, it seems to span across multiple SE sites and I'm not sure which one is right one.
Adriano Repetti
(163 rep)
Jul 13, 2015, 12:52 PM
• Last activity: Jul 13, 2015, 02:22 PM
0
votes
1
answers
66
views
Big Size Database and increasing number of users
My colleagues and I are using a quite large normalized database.
Basically it's used for warehouse management, and at least 100 users perform data manipulation operations daily.
**80% of the users access stock information from the web; 20% of the users are from the warehouse operation team.** For most of the day, the warehouse operation team performs major operations: stock taking, stock adjustment, sales, purchases, reporting, etc. But at the same time, web users get quite slow responses from the SQL database. Keeping in view the increasing number of web users, my question is how I could make web access faster while major operations/transactions are running in the background. As web users access a subset of the tables used by the operations team, what strategy could I implement to keep the database responsive and in sync at the same time? Please bear with my language skills; I hope this is the right place to ask this question, otherwise please guide me.
Hassan
(101 rep)
May 27, 2015, 07:51 PM
• Last activity: May 27, 2015, 09:40 PM
3
votes
1
answers
7678
views
best way to store tags
I'm a university student building a web application for a project with three colleagues; I took the database portion.
Our site is basically a link-sharing site and supports tags. What is an efficient way to store tags in a MySQL database? I thought of these ways:
1. A table for tags: the relation between link and tag is many-to-many, so I must break it with an intermediate table. I can't think of SQL statements to fetch links with specific tags; for example, the user might search for links tagged with (php, mysql) or (hardware-issues, gigabyte).
Also, even if I did, it needs a join. We're making the site global, which means the database will be very, very huge some day; I don't think we can bear a join.
2. Putting tags as an array in the links table: I can put an array of strings, which means only one select statement (better than a join, I think), but manipulating the array will be hard. The link's poster might edit the tags on his/her link. I can figure out a way to deal with this, but I'm not sure if this is the most efficient way. Of course it's not normalized and the database will grow faster, but I think on a server speed is more important than size.
So is there a better way ?
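A normalized sketch of option 1, with the multi-tag search done via GROUP BY/HAVING rather than stacked joins (table and column names are illustrative):

```sql
CREATE TABLE tag (
  id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(64) NOT NULL UNIQUE
);

CREATE TABLE link_tag (
  link_id INT UNSIGNED NOT NULL,
  tag_id  INT UNSIGNED NOT NULL,
  PRIMARY KEY (link_id, tag_id),
  KEY (tag_id)                        -- supports lookups by tag
);

-- Links tagged with BOTH 'php' AND 'mysql':
SELECT lt.link_id
FROM   link_tag lt
JOIN   tag t ON t.id = lt.tag_id
WHERE  t.name IN ('php', 'mysql')
GROUP  BY lt.link_id
HAVING COUNT(DISTINCT t.name) = 2;    -- must equal the number of searched tags
```

The HAVING count keeps the query a single pass over the intermediate table no matter how many tags are searched.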
niceman
(131 rep)
Apr 18, 2015, 10:42 AM
• Last activity: Apr 18, 2015, 11:07 AM
3
votes
2
answers
22650
views
mysql connection timeout on small traffic server
**How can I set up MySQL so that connections never time out? And how do I even check the current settings for variables such as wait_timeout and interactive_timeout, etc., given that these variables are not defined in the config shown below?**
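For reference, the effective values can be read at runtime even when they never appear in my.cnf (the built-in defaults apply), and they can be changed globally. A sketch:

```sql
SHOW GLOBAL VARIABLES LIKE '%timeout%';  -- wait_timeout defaults to 28800 s (8 h)

-- Raise it at runtime (affects NEW connections only); to persist it,
-- add wait_timeout = 604800 under [mysqld] in the server config.
SET GLOBAL wait_timeout = 604800;        -- one week, as an example value
```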
Here is the situation:
A low (micro) traffic web server sometimes does not get a database transaction for a couple of weeks. MySQL is running on the same server box, as are a few private web applications. I set autoReconnect=true in the connection string, but when I check back days later, I always notice that the web application cannot connect to the database, and when I open the catalina.out log, it gives the following:
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.7.0_75]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) ~[na:1.7.0_75]
at java.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[na:1.7.0_75]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_75]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_75]
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3969) ~[mysql-connector-java-5.1.27.jar:na]
... 111 common frames omitted
I then restart Tomcat and the webapps are able to make database connections again, but the problem repeats, and the long periods involved mean that it is not reasonable to test by waiting days to see when the broken pipe error resurfaces.
I have read many postings about setting max timeout for mysql, including the tutorial at this link, but they describe variables that are not defined in the conf files on my machine. For example, the mysql config files on the server in question are:
/etc/my.cnf is:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
As you can see, the file includes the contents of the my.cnf.d folder, which contains three files, as follows:
/etc/my.cnf.d/client.cnf is:
#
# These two groups are read by the client library
# Use it for options that affect all clients, but not the server
#
[client]
# This group is not read by mysql client library,
# If you use the same .cnf file for MySQL and MariaDB,
# use it for MariaDB-only client options
[client-mariadb]
/etc/my.cnf.d/mysql-clients.cnf is:
#
# These groups are read by MariaDB command-line tools
# Use it for options that affect only one utility
#
[mysql]
[mysql_upgrade]
[mysqladmin]
[mysqlbinlog]
[mysqlcheck]
[mysqldump]
[mysqlimport]
[mysqlshow]
[mysqlslap]
/etc/my.cnf.d/server.cnf is:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]
Tomcat's server.xml defines a timeout of 20 seconds (20,000 milliseconds), which obviously does not refer to MySQL's global wait_timeout. The only reference to a timeout in /opt/tomcat/conf/server.xml is as follows:
I confirmed that the 20000 connectionTimeout setting refers to milliseconds by reading this link.
I read this other posting about settings at the Spring MVC application level, but the answers refer to setting application-level variables to values lower than the my.cnf settings. I hesitate to fiddle with the application level if I cannot at least see the server-level settings, which do not appear in the versions of my.cnf and its include files shown above. Nonetheless, my dataSource definition in the Spring MVC web app XML config is:
Finally, the variables referenced in the above dataSource bean are defined in a property file (data-access.properties) as follows:
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/dbasename?autoReconnect=true
jdbc.username=username
jdbc.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
jpa.database=MYSQL
jpa.showSql=false
**So how do I set up MySQL so that connections never time out? And how do I even check the current values of variables such as wait_timeout and interactive_timeout, given that they do not appear in the config files shown above?**
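As far as I can tell, a variable that is absent from my.cnf is not undefined; it simply takes its compiled-in default, so it should still be visible from the mysql client. This is the kind of check I was hoping to run (a sketch, assuming a login with sufficient privileges):

```sql
-- Global values, which new connections inherit at connect time
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'interactive_timeout';

-- The value in effect for the current session only
SHOW SESSION VARIABLES LIKE 'wait_timeout';
```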
----------------------------------------------------------------------------
**EDIT:**
----------------------------------------------------------------------------
As per @RickJames' suggestion, I edited /etc/my.cnf to become the following:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout = 2147483
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
I then stopped and restarted the database by typing systemctl stop mariadb.service, followed by systemctl start mariadb.service. But when I subsequently logged into MySQL and followed @Verace's suggestion by typing show variables like '%timeout%';, the result was:
+----------------------------+----------+
| Variable_name | Value |
+----------------------------+----------+
| connect_timeout | 10 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+----------------------------+----------+
13 rows in set (0.00 sec)
What am I doing wrong?
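One possibility I am unsure about: plain SHOW VARIABLES reports session values, and I gather that an interactive mysql session initializes its wait_timeout from interactive_timeout, so the 28800 above may not reflect the global setting at all. The comparison I think I should be making is (a sketch):

```sql
-- Session value: for an interactive client this is seeded
-- from interactive_timeout, not from the global wait_timeout
SHOW SESSION VARIABLES LIKE 'wait_timeout';

-- Global value: this is what [mysqld] wait_timeout in my.cnf
-- should control after a restart, and what new (e.g. JDBC)
-- connections inherit
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
```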
CodeMed
(497 rep)
Apr 7, 2015, 01:36 AM
• Last activity: Apr 7, 2015, 11:59 PM
2
votes
1
answers
133
views
How dangerous is it to have a user's password in plaintext in a cgi script?
I am setting up a MySQL database and am providing a web interface for it. Now, the DB is extremely simple and the interface just runs certain SELECT statements.
I have created a user for my script to use when connecting with these commands:
CREATE USER 'foo' IDENTIFIED BY 'bar';
GRANT SELECT ON dbname.* TO foo;
I then have these lines in my Perl script:
my $user = "foo";
my $pw = "bar";
**NOTES:**
* The data on the database are completely public and users can use/redistribute them at will. I have no interest in blocking anyone from accessing it, it is to provide a dataset to the scientific community.
* The data are static; there is no need for anyone to be able to modify them in any way, and the only user who has the right to do so is root (the only other user on the system).
So, my question is: how dangerous is this? Given that the user only has SELECT rights, what bad things can happen with this clearly insecure setup? And is there a better way of allowing access from my script, bearing in mind the two notes above?
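For what it's worth, one mitigation I am aware of (a sketch of a variant of my setup above, not something I have applied) is to restrict the account so that the plaintext credentials are only usable from the web server's own host:

```sql
-- Variant of the setup above: 'foo' can only connect
-- from the same host the CGI script runs on, so the
-- leaked password is useless to remote clients
CREATE USER 'foo'@'localhost' IDENTIFIED BY 'bar';
GRANT SELECT ON dbname.* TO 'foo'@'localhost';
```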
terdon
(123 rep)
Mar 19, 2014, 05:41 PM
• Last activity: Mar 19, 2014, 06:12 PM
1
votes
1
answers
911
views
PL/SQL web application
I have a PL/SQL web application, and I got the entire DB dump from the remote DB.
It is a very old application, running on Oracle 9i.
I basically want to get the application up and running on my laptop, but I am not able to figure out how.
Can somebody help me with this?
bcp
(11 rep)
Jan 24, 2014, 08:58 AM
• Last activity: Jan 24, 2014, 10:14 AM