
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
543 views
Show detailed processlist on MaxScale, like SHOW PROCESSLIST on MariaDB
On MariaDB, when I need to check which queries are running or sleeping on an instance, I can use SHOW PROCESSLIST or SELECT * FROM information_schema.processlist. But since the clients started connecting through MaxScale, I can no longer see the process list that way. Any advice? MaxScale configuration:

[maxscale]
threads=auto
max_auth_errors_until_block=0
admin_host=192.168.101.107
admin_port=8989
admin_enabled=1

Edit: I have already updated the MaxScale configuration, adding retain_last_statements=20 and dump_last_statements=on_error to the [maxscale] section above. Queries now show up in the output of maxctrl show sessions, but only the last statement executed on each session, and I can't distinguish sleeping sessions from still-running ones the way I can with SHOW PROCESSLIST on MariaDB.
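As far as I know, MaxScale routes a SHOW PROCESSLIST to a single backend rather than aggregating sessions, so one workaround is to query each backend directly and filter on the proxy's address. A sketch; 192.168.101.107 is the admin_host from the configuration above and stands in for whatever address the backends actually see MaxScale connecting from:

```sql
-- Run on each MariaDB backend directly, not through MaxScale:
SELECT id, user, host, db, command, time, state, info
FROM information_schema.processlist
WHERE host LIKE '192.168.101.107:%';
```

Unlike maxctrl show sessions, this still shows the per-connection Command (Sleep vs. Query) and State.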
febry (57 rep)
Jun 17, 2020, 03:57 AM • Last activity: Jun 23, 2025, 04:09 AM
1 votes
2 answers
425 views
MySQL processes consuming too much VIRT and RES memory
I have over 100 WordPress websites on a server with 64 GB of RAM and 16 processors. I recently isolated 5 of them with a PHP-FPM pool, but in the process also migrated them from php7.4-fpm to php8.3-fpm, so both php7.4-fpm and php8.3-fpm are now running. The 5 isolated websites start 50 processes each, and the other 95+ websites start around 256 processes altogether under the php7.4-fpm .conf configuration. Right after isolating these 5 websites, I noticed a drastic increase in RAM consumption over time: in 2-3 days, usage went from 30 GB to 57 GB of the 61 GB available. And the only processes holding that memory are /usr/sbin/mysqld, i.e. MySQL processes. I hadn't changed anything in the MySQL configuration, yet it started consuming so much VIRT and RES memory that I think it affects the RAM overall. Here is a current screenshot of the htop table: [htop screenshot](https://i.sstatic.net/6mAIjCBM.png) I have been dealing with this problem for too long, and I am desperate to find a solution. What should I do? After noticing the problem, I tweaked the configuration: increased innodb_buffer_pool_size from 12G to 24G; increased innodb_buffer_pool_instances from 1 to 24; introduced innodb_redo_log_capacity = 8G and removed innodb_log_file_size (because it is deprecated); added innodb_log_buffer_size = 16G and innodb_flush_log_at_trx_commit = 2. But the result is still the same: htop shows red in the VIRT column, and as I increase innodb_buffer_pool_size the usage grows in parallel and is now 32.5G, with mysqld showing over 50 processes like this. I am wondering, does the isolation of the websites have something to do with this problem?
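Before resizing anything further, it can help to see where mysqld's memory is actually going. A minimal sketch using the sys schema (available in MySQL 5.7+; requires performance_schema memory instrumentation to be enabled):

```sql
-- Top memory consumers inside mysqld, by currently allocated bytes:
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;

-- Total instrumented allocation, to compare against htop's RES column:
SELECT * FROM sys.memory_global_total;
```

If the buffer pool dominates, the growth tracking innodb_buffer_pool_size is expected; if something else dominates, that narrows the hunt considerably.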
Zhivko Apostoloski
Jun 3, 2024, 08:02 AM • Last activity: Apr 22, 2025, 08:01 PM
0 votes
0 answers
65 views
MySQL Processlist slowly filling up with processes of state "login" and info "PLUGIN", eventually causing error 08004/1040: Too many connections
I've got a pretty basic Linux system (Debian 12) with MySQL/Percona Server 8.0.37 for Linux, using the caching_sha2_password authentication plugin for logging in to MySQL. All my PHP scripts (which do not use the root user to log in to MySQL!) call the mysqli::close() method at the end of the files / in the __destruct of the classes. From testing, I have no reason to believe that connections aren't closed properly at the end of a script. For some reason, over time (a couple of weeks/months), the MySQL processlist slowly fills up with processes with state "login" and info "PLUGIN"; see the output below and the attached screenshot.
+----------+----------------------+-----------------+----------------------+---------+--------+------------------------+------------------+-----------+-----------+---------------+
| Id       | User                 | Host            | db                   | Command | Time   | State                  | Info             | Time_ms   | Rows_sent | Rows_examined |
+----------+----------------------+-----------------+----------------------+---------+--------+------------------------+------------------+-----------+-----------+---------------+
| 6        | event_scheduler      | localhost       |                      | Daemon  | 709122 | Waiting on empty queue |                  | 709122565 | 0         | 0             |
| 3740365  | root                 | localhost       |                      | Sleep   | 622722 | login                  | PLUGIN           | 622722577 | 0         | 0             |
| 7453355  | root                 | localhost       |                      | Sleep   | 536322 | login                  | PLUGIN           | 536322576 | 0         | 0             |
| 11165124 | root                 | localhost       |                      | Sleep   | 449922 | login                  | PLUGIN           | 449922576 | 0         | 0             |
| 14877373 | root                 | localhost       |                      | Sleep   | 363522 | login                  | PLUGIN           | 363522575 | 0         | 0             |
| 18589114 | root                 | localhost       |                      | Sleep   | 277122 | login                  | PLUGIN           | 277122575 | 0         | 0             |
| 22302053 | root                 | localhost       |                      | Sleep   | 190722 | login                  | PLUGIN           | 190722575 | 0         | 0             |
| 26013793 | root                 | localhost       |                      | Sleep   | 104322 | login                  | PLUGIN           | 104322574 | 0         | 0             |
| 29724803 | root                 | localhost       |                      | Sleep   | 17922  | login                  | PLUGIN           | 17922574  | 0         | 0             |
| 30494127 | root                 | localhost:57288 |                      | Query   | 0      | init                   | show processlist | 0         | 0         | 0             |
+----------+----------------------+-----------------+----------------------+---------+--------+------------------------+------------------+-----------+-----------+---------------+
[![mysql processlist](https://i.sstatic.net/OSjd6J18.png)](https://i.sstatic.net/OSjd6J18.png) Eventually, after a significant amount of time, these processes take up all the MySQL connections (151, the default setting), causing error 08004/1040: Too many connections, leaving no room for any PHP script to connect anymore. Using SELECT user, host, plugin FROM mysql.user WHERE user = 'root', I've narrowed it down to the caching_sha2_password plugin. [![caching_sha2_password plugin](https://i.sstatic.net/bZJiQvCU.png)](https://i.sstatic.net/bZJiQvCU.png) Killing them with KILL on the MySQL CLI only changes the Command from "Sleep" to "Killed", but they remain in the processlist. Restarting the MySQL server seems to fix the issue, but after some time the same behaviour occurs again. Anyone got any clues as to what is causing this and how to prevent it? Killing the MySQL server every 4 weeks doesn't really feel like a solution.
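While hunting the root cause, the stuck threads can at least be enumerated, and KILL statements generated for them, from information_schema. A sketch; as noted above, KILL may only flip the Command to "Killed" until the thread actually exits:

```sql
-- Generate KILL statements for the stuck login/PLUGIN threads;
-- review the output before executing it:
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.processlist
WHERE state = 'login' AND info = 'PLUGIN';
```

Running the generated statements on a schedule is only a stopgap, but it keeps the connection count below max_connections while the underlying cause is investigated.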
Bazardshoxer (101 rep)
Jan 22, 2025, 09:46 PM • Last activity: Jan 23, 2025, 07:17 AM
0 votes
0 answers
17 views
What are some example workflows for automatic loading of Foreign tables into tables each month?
I have a PostgreSQL database and pgAdmin 4 installed. At the beginning of each month, monthly payments data is loaded into a foreign table payments, and monthly customer-number data is loaded into a foreign table customers. On the 5th of the month I run code that takes the previous month's data from the foreign table payments, joins it with the data from the foreign table customers, and loads the result into a historical table called finance. So on the 5th of each month (in this case January) I have to run:

**SELECT A.cust_number, A.transaction_amount, B.cust_number_of_accounts FROM payments A LEFT JOIN customers B ON A.cust_number = B.cust_number WHERE A.transaction_date > '2024-11-30' (this date has to change each month);**

and then insert the result into finance. I also have to run code inserting the separate foreign tables into historical tables:

**INSERT INTO payments_historical SELECT * FROM payments WHERE transaction_date > '2024-11-30' (this date has to change each month);**

**INSERT INTO customers_historical SELECT * FROM customers WHERE date > '2024-11-30' (this date has to change each month); /* the customer table has a date at the beginning of the month */**

How can I make this loading automatic, while also checking whether the data has been correctly loaded into the foreign tables to begin with, and changing the WHERE clause as needed so that only one month of data is loaded?
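The hard-coded date can be computed instead of edited: date_trunc('month', CURRENT_DATE) gives the first day of the current month, so the previous month becomes a half-open interval that never needs changing. A sketch using the table and column names from the question (finance is assumed to have matching columns):

```sql
-- Previous calendar month, computed at run time:
INSERT INTO finance (cust_number, transaction_amount, cust_number_of_accounts)
SELECT a.cust_number, a.transaction_amount, b.cust_number_of_accounts
FROM payments a
LEFT JOIN customers b ON a.cust_number = b.cust_number
WHERE a.transaction_date >= date_trunc('month', CURRENT_DATE) - INTERVAL '1 month'
  AND a.transaction_date <  date_trunc('month', CURRENT_DATE);
```

The same date predicate works for the two _historical inserts. Wrapped in a function, the whole load can be scheduled for the 5th of each month with cron calling psql, or with an in-database scheduler such as pg_cron; a row-count check against the foreign tables before the insert covers the "was the data loaded correctly" part.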
IKNv99 (1 rep)
Jan 17, 2025, 06:53 PM • Last activity: Jan 17, 2025, 06:54 PM
0 votes
1 answer
260 views
Query running successfully but much longer due to wait_event MessageQueueSend
I have a long-running bug where some larger queries sometimes run much, much longer because they get stuck on the wait event MessageQueueSend. The slowdown ranges from under 100% to thousands of percent compared to the optimal run time, which also sometimes happens, so that serves as the benchmark. The problem does not seem to be related to query performance/optimization in itself. The query originally had several CTEs, which I then removed; this decreased the happy-path processing from ~43s to ~23s, but it did not get rid of the problem. When the query gets stuck it can stay there for several minutes or more: the same query can run sometimes in 23s and sometimes in 15 minutes. I've dug into this topic a little; it seems to be related to IPC and the shared memory of the processes comprising the query (screenshot of the stuck processes from pg_stat_activity attached). The MessageQueueSend wait event signifies that the parallel worker processes are waiting to send bytes to a shared message queue. But I'm lost as to why that may happen - is the shared message queue full? Is the main process busy and unable to collect data from the queue? - and what can be done about it. I've tried adjusting postgresql.conf; the common performance-related values are as follows:
- max_connections = 20
- shared_buffers = 2GB
- effective_cache_size = 6GB
- maintenance_work_mem = 1GB
- checkpoint_completion_target = 0.9
- wal_buffers = 16MB
- default_statistics_target = 500
- random_page_cost = 1.1
- effective_io_concurrency = 200
- work_mem = 26214kB
- huge_pages = off
- min_wal_size = 4GB
- max_wal_size = 16GB
- max_worker_processes = 4
- max_parallel_workers_per_gather = 2
- max_parallel_workers = 4
- max_parallel_maintenance_workers = 2

**Tech stack:**
- Postgres v16 on a Linux VM, 8 GB of RAM, 4 CPUs from a cluster
- queries are triggered on the database by dbt v1.9
- DB purpose: data warehouse, running several larger queries daily

P.S. I've searched Stack Exchange posts, Stack Overflow posts, Google, asked multiple LLMs and spent many hours trying to figure it out - although I'm a junior, so that's that.
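When the query stalls, a snapshot of pg_stat_activity shows which processes in the parallel group are blocked and what their leader is doing. A sketch (the leader_pid column requires PostgreSQL 13+):

```sql
-- Workers stuck sending tuples to the shared message queue,
-- shown alongside their leader process:
SELECT pid, leader_pid, state, wait_event_type, wait_event,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE wait_event = 'MessageQueueSend'
   OR pid IN (SELECT leader_pid FROM pg_stat_activity
              WHERE wait_event = 'MessageQueueSend');
```

Workers wait on MessageQueueSend when their queue is full, so if the leader itself shows a wait event (e.g. a client or I/O wait), the workers are blocked because the leader is not draining the queue.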
user20061 (1 rep)
Jan 7, 2025, 05:19 PM • Last activity: Jan 8, 2025, 04:31 AM
2 votes
1 answer
179 views
Can data streams from Informatica to SQL Server be multi-threaded?
Suppose we have a database server, with SQL Server on it. And we are moving data from another database server, through our Informatica server, and onto that SQL Server database server. And our SQL Server database server has four processors. Is it possible to force the connections to be multi-threaded so it sends data quicker? How can this be done? Right now, only one of the processors is being used on the SQL Server database server.
JustBeingHelpful (2116 rep)
Jul 21, 2015, 08:30 PM • Last activity: Nov 6, 2024, 10:14 PM
3 votes
1 answer
176 views
What does 0 mean next to the db2sysc process?
On Db2 v11.5 on Linux, if I execute a command to check whether the database instance is up and running:

ps -e -o command | grep db2sysc

the output is: db2sysc 0. What does the number 0 mean? I am asking because this might be useful in bash scripting.
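For scripting, the exit status of the grep is usually more convenient than the text itself. A small sketch (the [d] bracket trick keeps grep from matching its own command line in the ps output):

```shell
#!/bin/sh
# db2_is_up returns 0 (success) when a db2sysc process is present,
# non-zero otherwise, so it can be used directly in conditionals.
db2_is_up() {
    ps -e -o command | grep -q '[d]b2sysc'
}

if db2_is_up; then
    echo "instance is up"
else
    echo "instance is down"
fi
```

The function's return code is what shell conditionals consume, so the same helper works in cron jobs or monitoring scripts without parsing any output.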
folow (523 rep)
Oct 17, 2024, 10:34 AM • Last activity: Oct 17, 2024, 09:12 PM
10 votes
4 answers
7696 views
Lots of "FETCH API_CURSOR0000..." on sp_WhoIsActive ( SQL Server 2008 R2 )
I have a strange situation. Using sp_whoisactive I can see many FETCH API_CURSOR0000... entries (screenshot). OK, with this query I can see what is triggering it (does this word exist in English?):

SELECT c.session_id, c.properties, c.creation_time, c.is_open, t.text
FROM sys.dm_exec_cursors (SPID) c --0 for all cursors running
CROSS APPLY sys.dm_exec_sql_text (c.sql_handle) t

The result: it's just a simple SELECT. Why is this using a fetch cursor? Also, I see a lot of "blank" sql_texts too. Does this have something to do with this "cursor"? DBCC INPUTBUFFER (spid) shows me the same plain SELECT. There's this question here (made by me), but I don't know if it is the same thing.
_____________________________________________________________________________
**EDIT1:** Using the query provided by kin, I still see no code.
___________________________________________________________________________
**EDIT2:** Using Activity Monitor, I can see it is the most expensive query (the first one is intentional, we know about it). And again, I would like to know why this select * from... is the reason for FETCH CURSOR...
________________________________________________________________________________
**EDIT3:** This "select * from..." is running from another server (via linked server). Now I'm having trouble understanding what @kin said. The execution plan of the query running on the same server as the database looks fine, and so does the plan when running from the other server via linked server. But the execution plan shown in **Activity Monitor** (for the same select * from) is completely different: what is going on here?
Racer SQL (7546 rep)
Aug 12, 2015, 12:33 PM • Last activity: Jul 12, 2024, 09:01 PM
0 votes
0 answers
597 views
SQL Server Service has stopped, but process still appears in TaskMgr
A customer has the strangest situation: Windows Server 2016 with SQL Server 2016. There is only one SQL instance on the machine (the default), service name MSSQLSERVER. The service takes a very long time to stop (about a minute), but once it is shown as Stopped, the SQLSERVER.EXE process still appears in Task Manager for up to another minute. If I try terminating the process from Task Manager I get the error: Access Denied. This causes problems when restarting SQL Server. This is what happens:
- Stop SQL Server and wait for the service to stop (net stop mssqlserver)
- The process remains in Task Manager
- Try to start SQL Server again (net start mssqlserver)
- SQL Server fails to start because the master database is currently in use

Has anyone seen anything like this? Why would the process still be running after the service has stopped?
Neil Weicher (157 rep)
Dec 17, 2019, 09:58 PM • Last activity: Jun 21, 2024, 11:39 AM
0 votes
1 answer
989 views
Oracle prevent CPU usage per process/session
I have an Oracle instance to which several applications from different machines connect (HTTP servers). Normally Oracle CPU usage is ~5-10%, even under high server load. But from time to time, one of the HTTP servers imposes a high (100%) CPU load on Oracle, which causes the other Oracle clients (in this case the other HTTP servers) to time out and stop working properly. Is it possible to limit CPU usage on a per-process or per-session basis? Or is it possible to limit CPU usage for clients connecting to Oracle from a specific machine/IP?
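Oracle can do something along these lines with Database Resource Manager: map sessions coming from a given client machine to a consumer group, then cap that group's CPU share. A hedged sketch, not a drop-in solution - the plan, group, and machine names are placeholders, and utilization_limit requires 11g or later:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'HTTP_APP_GROUP',
    comment        => 'throttled web front ends');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'LIMIT_HTTP_PLAN',
    comment => 'cap CPU for web clients');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan              => 'LIMIT_HTTP_PLAN',
    group_or_subplan  => 'HTTP_APP_GROUP',
    comment           => 'at most 20% of CPU',
    utilization_limit => 20);

  -- a directive for OTHER_GROUPS is mandatory before the plan validates
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'LIMIT_HTTP_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else, unthrottled');

  -- route sessions arriving from one machine into the throttled group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.CLIENT_MACHINE,
    value          => 'webhost1',
    consumer_group => 'HTTP_APP_GROUP');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The plan then has to be activated, e.g. with ALTER SYSTEM SET resource_manager_plan = 'LIMIT_HTTP_PLAN'.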
Mohsen (103 rep)
Aug 17, 2013, 06:50 AM • Last activity: Mar 29, 2024, 07:02 AM
1 votes
1 answer
179 views
track down which application / procedure / function modified the value of a column in a specific table
In our enterprise application, the value in one column of a table is being modified on the client side. This usually happens once a week. I have verified all stored procedures and other modules: the value in that column is only inserted the first time (when a request is generated through the web app) and never updated again, so it seems quite strange. I have now created a temp table and a trigger to track the record when that column is updated, but my problem is that I don't know how to track down the stored procedure or other part of the DB / application which is modifying that value. I have used HOST_NAME(), APP_NAME(), SUSER_NAME() in my trigger; these functions just return the machine IP, the user "sa", and ".Net SQL client" style values, but I need to know the actual SP / trigger / function name which modifies the value. I can't use CONTEXT_INFO() since I don't know the actual issue point. Any help from you experts will be highly appreciated.
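On SQL Server 2016 (or 2014 SP2) and later, the trigger can also capture the calling batch itself via sys.dm_exec_input_buffer, which often reveals the stored procedure or ad-hoc statement behind the update. A sketch under assumptions: the table, key, and log names (dbo.Requests, RequestId, TrackedCol, dbo.ChangeLog) are placeholders, and the executing login may need VIEW SERVER STATE permission for the DMV:

```sql
CREATE TRIGGER trg_TrackColumnChange
ON dbo.Requests            -- placeholder: the table being modified
AFTER UPDATE
AS
BEGIN
    IF UPDATE(TrackedCol)  -- placeholder: the column being watched
        INSERT INTO dbo.ChangeLog
            (ChangedAt, HostName, AppName, LoginName, OldValue, NewValue, BatchText)
        SELECT SYSDATETIME(), HOST_NAME(), APP_NAME(), SUSER_SNAME(),
               d.TrackedCol, i.TrackedCol,
               -- the text of the batch that fired this trigger:
               (SELECT event_info
                FROM sys.dm_exec_input_buffer(@@SPID, NULL))
        FROM inserted i
        JOIN deleted d ON d.RequestId = i.RequestId;
END;
```

The BatchText column then shows the outermost call (e.g. the EXEC of a stored procedure), which is usually enough to identify the offending code path.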
user104962 (11 rep)
Sep 2, 2016, 01:09 PM • Last activity: Dec 28, 2023, 05:16 PM
9 votes
3 answers
43258 views
Internal reason for killing process taking up long time in mysql
I copied a big table's structure (it is an **InnoDB** table, btw) with CREATE TABLE tempTbl LIKE realTbl. Then I changed an index and filled it up so I could run some tests. Filling it was done using: INSERT INTO tmpTbl SELECT * FROM realTbl. This took too long, so I wanted to stop the test [1]. I killed the process while it was in the "Sending data" state: it is now "killed", and still in the state "Sending data". I know some killed processes need to revert changes and so could take (equally?) long to kill compared to how long they were running, but I can't imagine why this would be the case now: the whole table just needs to be emptied. I'm curious what is happening that makes stopping/killing a simple query like this take so long. To give you some numbers: the insert was running for an hour or three; the kill is closer to 7 now (originally 5 when I first wrote this). It almost looks like it runs a DELETE for every INSERT it did, and the delete takes longer than the insert did? Would that even be logical? (And if anyone knows how to kick my test server back into shape, that would be nice too, as it's eating some resources, but that's not really important at this moment :) )
----------
[1] *I don't know yet why it took so long (it's a big table, 10M rows, but should it take that long?), but that's another thing / not part of this question :). It might be that my test could have been smarter or quicker, but that is also not the question now :D*
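The rollback of a killed statement can be watched from information_schema, which at least confirms the "undoing every inserted row" theory. A sketch (the innodb_trx view ships with the InnoDB plugin):

```sql
-- trx_rows_modified counts rows still covered by the transaction's
-- undo; for a transaction in ROLLING BACK state it should shrink
-- toward zero as the rollback proceeds:
SELECT trx_id, trx_state, trx_started, trx_rows_modified
FROM information_schema.innodb_trx;
```

Rollback is commonly observed to take as long as, or longer than, the original work, since each inserted row's undo record must be applied individually.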
Nanne (285 rep)
Sep 12, 2011, 12:40 PM • Last activity: Aug 15, 2023, 09:58 PM
3 votes
3 answers
2531 views
Question about 'user process' in the context of Oracle
The following is an excerpt from Oracle's documentation:

> When a user runs an application program (such as a Pro*C program) or an Oracle tool (such as Enterprise Manager or SQL*Plus), Oracle creates a user process to run the user's application.

As per my understanding, a user process is a piece of software that can connect to an Oracle server. You (the user) start this piece and then connect to Oracle. If so, why does Oracle create a user process to run the user's application?
Just a learner (2082 rep)
Jul 6, 2011, 04:21 PM • Last activity: May 15, 2023, 03:23 PM
1 votes
0 answers
592 views
Postgres reopens closed processes on Mac. How to solve?
I tried to terminate Postgres on Mac in various ways. I used these commands in the terminal: pg_ctl -D /usr/local/var/postgres stop, brew services stop postgresql, and systemctl stop postgres. None of them worked. I didn't try the kill -9 command, because I read that it was dangerous. The PostgreSQL.pref.pane program, which starts and stops Postgres servers, did not provide a solution either. The command pkill -u postgres worked for most processes, but over time the processes reopen. Is there a way to keep them from reopening?
Senhor Filipe (11 rep)
Oct 11, 2022, 03:54 PM
0 votes
0 answers
183 views
Process for upgrading MySQL Master-Slave to a new release series version
I have several MySQL servers that I want to upgrade from 5.7.31 to the most recent release series (5.7.36 I believe), but in one test I found that replication on one server set failed, and it required me to re-import a backup from the primary to recover. I don't see any official documentation from MySQL on how to do this properly. Is there anyone who has done this before with Master-Slave replication, and if so, what steps did you take to make sure you didn't break replication?
vPilot (47 rep)
Jan 17, 2022, 04:44 PM
31 votes
5 answers
1970 views
Is there a "best-practices" type process for developers to follow for database changes?
What is a good way to migrate DB changes from Development to QA to Production environments? Currently we:

1. Script the change in a SQL file and attach that to a TFS work item.
2. The work is peer-reviewed.
3. When the work is ready for testing, the SQL is run on QA.
4. The work is QA tested.
5. When the work is ready for production, the SQL is run on the production databases.

The problem with this is that it is very manual. It relies on the developer remembering to attach the SQL, or the peer reviewer catching it if the developer forgets. Sometimes it ends up being the tester or QA deployer who discovers the problem. A secondary problem is that you sometimes need to manually coordinate changes if two separate tasks change the same database object. This may just be the way it is, but it still seems like there should be some automated way of "flagging" these issues or something.

Our setup: Our development shop is full of developers with a lot of DB experience, and our projects are very DB-oriented. We are mainly a .NET and MS SQL shop. Currently we are using MS TFS work items to track our work. This is handy for code changes because it links the changesets to the work items, so I can find out exactly what changes I need to include when migrating to QA and Production environments. We are not currently using a DB project but may switch to that in the future (maybe that is part of the answer). I am very used to my source control system taking care of things like this for me and would like to have the same thing for my SQL.
Beth Lang (952 rep)
Jan 3, 2011, 11:23 PM • Last activity: May 15, 2020, 11:08 PM
1 votes
0 answers
60 views
MySQL using CPU after cancelling slow query
I had to cancel a very slow query after it ran for a few days without completing on my Ubuntu server. Since then, MySQL has persistently been running a process which uses 25-45% of the CPU at any given time. The process just restarts when I attempt to kill it. Has anyone observed this issue before, and do you know how to fix it?
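After a query is cancelled, InnoDB still has to roll back whatever the statement modified, and that rollback runs server-side and consumes CPU even though the client is gone - which would also explain why killing the thread only restarts the work. Whether that is what's happening can be checked directly:

```sql
-- A cancelled statement that modified rows shows up here until its
-- undo records have been fully applied:
SELECT trx_id, trx_state, trx_started, trx_rows_modified
FROM information_schema.innodb_trx
WHERE trx_state = 'ROLLING BACK';
```

If this returns a row whose trx_rows_modified is slowly decreasing, the only real options are to let the rollback finish or restart with recovery; if it returns nothing, the CPU is going somewhere else.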
Omega142 (119 rep)
Dec 26, 2015, 05:50 PM • Last activity: Dec 30, 2019, 08:27 PM
0 votes
0 answers
158 views
NULL queries getting stuck and using up connections
Running show processlist in phpMyAdmin often shows queries with an Info of NULL and a Time of apparently a few seconds, but which have in fact been stuck there for a long time (each time I run show processlist, the same Id may have a different Time). I have no idea what these queries are or why they get stuck, but they can often cause all my available connections to be used up and stuck on "waiting" in Apache status, causing sites not to load. Is there a setting I can change to terminate queries that are stuck for a long time? Is the php.ini setting max_execution_time responsible for this (I had it set to 3000, up from the default 90)? I have attached a screenshot of running show processlist.
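If the NULL-info threads are idle application connections that were never closed, the server can be told to reap them sooner: wait_timeout governs how long MySQL keeps an idle connection open (default 28800 seconds, i.e. 8 hours). A sketch, assuming the threads really are abandoned clients rather than active work:

```sql
-- Close idle connections after 5 minutes instead of 8 hours;
-- persist the same values in my.cnf so they survive a restart:
SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;
```

Note that PHP's max_execution_time caps script run time; it does not directly control how long MySQL keeps an idle connection, so lowering it would not clear these threads by itself.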
Garu (1 rep)
Nov 9, 2019, 02:59 AM
2 votes
2 answers
500 views
Differences between EDU and thread
In DB2, there is a command that shows active EDUs (engine dispatchable units): db2pd -edus There are two fields in the respective output: EDU ID and TID. According to [this db2pd page from the DB2 knowledge center](https://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0011729.html#r0011729__pdedus) , each is defined as follows: > **EDU ID:** The unique identifier for the engine dispatchable unit (EDU). Except on Linux operating systems, the EDU ID is mapped to the thread ID. On Linux operating system the EDU ID is a DB2 generated unique identifier > > **TID**: Thread identifier. Except on Linux operating systems, the thread ID is the unique identifier for the specific thread. On Linux operating systems, this is a DB2 generated unique identifier I wanted to know what is considered an EDU or a thread on a Linux/Unix operating system. Are they the same? What is the difference?
user164007
Sep 14, 2019, 06:38 AM • Last activity: Sep 17, 2019, 07:41 PM
6 votes
1 answer
83296 views
Oracle intermittently throws "ORA-12516, TNS:listener could not find available handler with matching protocol stack"
While testing the Oracle XE connection-establishing mechanism I bumped into the following issue. Although connections are closed on each iteration, after 50-100 connections Oracle starts intermittently throwing the following exception:

java.sql.SQLException: Listener refused the connection with the following error: ORA-12516, TNS:listener could not find available handler with matching protocol stack
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:553) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:254) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:280) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:207) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:157) ~[ojdbc6-11.2.0.4.jar:11.2.0.4.0]
at com.vladmihalcea.book.high_performance_java_persistence.jdbc.connection.OracleConnectionCallTest.test(OracleConnectionCallTest.java:57) [test-classes/:na]

The test can be found [on GitHub](https://github.com/vladmihalcea/hibernate-master-class/blob/master/core/src/test/java/com/vladmihalcea/book/high_performance_java_persistence/jdbc/connection/OracleConnectionCallTest.java) : it is a plain for loop that opens a connection, issues a call, and closes the connection on each iteration. The explanation I found says:

> One of the most common reasons for the TNS-12516 and/or TNS-12519 errors being reported is the configured maximum number of PROCESSES and/or SESSIONS limitation being reached. When this occurs, the service handlers for the TNS listener become "blocked" and no new connections can be made. Once the TNS listener receives an update from the PMON process associated with the database instance, telling the listener the thresholds are below the configured limit and the database is accepting connections again, connectivity resumes.
>
> - PMON is responsible for updating the listener with information about a particular instance, such as load and dispatcher information. Maximum load for dedicated connections is determined by the PROCESSES parameter. The frequency at which PMON provides SERVICE_UPDATE information varies according to the workload of the instance. The maximum interval between these service updates is 10 minutes.
> - The listener counts the number of connections it has established to the instance but does not immediately get information about connections that have terminated. Only when PMON updates the listener via SERVICE_UPDATE is the listener informed of current load. Since this can take as long as 10 minutes, there can be a difference between the current instance load according to the listener and the actual instance load.
>
> When the listener believes the current number of connections has reached maximum load, it may set the state of the service handler for an instance to "blocked" and begin refusing incoming client connections with either of the following errors: ORA-12519 or ORA-12516.

I wanted to know if this is some sort of bug or simply how Oracle is designed to work.

Update
------

On Oracle 11g Enterprise Edition it works just fine, so it's an XE limitation.

Fix
---

Using connection pooling is probably the best way to fix this issue, as it also reduces the connection acquisition time and levels out traffic spikes.
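Whether the PROCESSES/SESSIONS cap really is the culprit can be confirmed from v$resource_limit, and the ceiling can be raised (within XE's limits) before falling back on pooling:

```sql
-- How close the instance has come to its process/session ceilings:
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM v$resource_limit
WHERE resource_name IN ('processes', 'sessions');

-- Raise the ceiling; takes effect after an instance restart
-- (assumes the instance was started from an spfile):
ALTER SYSTEM SET processes = 200 SCOPE = SPFILE;
```

If max_utilization sits at limit_value, the listener-side ORA-12516 is just the symptom of the instance-side cap.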
Vlad Mihalcea (917 rep)
Aug 12, 2015, 05:03 PM • Last activity: Sep 7, 2019, 10:13 AM