Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1 vote • 2 answers • 368 views
MongoDB tree structure questions and answers
I'm looking to build a question-and-answer survey system where some questions are shown based on the answer to the parent question. The question hierarchy can nest to any depth. The questions and answers are structured like the diagram referenced in the original post (image omitted).
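For context, a common way to model this kind of hierarchy in MongoDB is a parent-reference design; a minimal sketch (the field names questionId, parentQuestionId, and triggeringAnswer are illustrative, not taken from the original post):
```
// Each question stores which parent question and which parent answer makes it appear.
const questions = [
  { questionId: 1, text: 'Do you own a car?', parentQuestionId: null, triggeringAnswer: null },
  { questionId: 2, text: 'Is it electric?',   parentQuestionId: 1,    triggeringAnswer: 'yes' },
  { questionId: 3, text: 'Why not?',          parentQuestionId: 1,    triggeringAnswer: 'no'  },
];

// Follow-up questions for a given answer can then be fetched with a simple query, e.g.
// db.questions.find({ parentQuestionId: 1, triggeringAnswer: 'yes' })
```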
User 99x
(111 rep)
May 23, 2020, 07:15 PM
• Last activity: Jul 29, 2025, 09:07 AM
2 votes • 1 answer • 660 views
MongoDB Container Connection Hangs with no Response
This is my first time trying MongoDB. I'm just trying to simply connect to it successfully. I'm following the W3 tutorial here: https://www.w3schools.com/nodejs/nodejs_mongodb.asp
docker-compose.yml
version: '3.1'
services:
  mongo:
    image: mongo
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - /home/username/mongodb/data:/data/db
  mongo-express:
    image: mongo-express
    restart: unless-stopped
    ports:
      - 8089:8081
With that, the mongo-express UI seems to work fine. I can create a database etc...
but, this does not work:
create_db_test.js
var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://localhost:27017/mytestdb";

MongoClient.connect(url, function(err, db) {
  if (err) throw err;
  console.log("Database created!");
  db.close();
});
and then I run:
node create_db_test.js
and this just hangs. There's just a blank line on the command line and it does nothing. The database is not created. The output from the container looks like this:
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:04:20.449+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.1:39946","uuid":"abbbc8ad-fceb-4a6b-b6e4-12d123a837b7","connectionId":2,"connectionCount":2}}
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:04:20.469+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.18.0.1:39946","client":"conn2","doc":{"driver":{"name":"nodejs","version":"5.5.0"},"platform":"Node.js v18.12.1, LE","os":{"name":"linux","architecture":"arm64","version":"6.1.25-37.47.amzn2023.aarch64","type":"Linux"}}}}
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:04:30.983+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.1:60246","uuid":"47895fe4-78f3-4401-a886-4d5ef832c66b","connectionId":3,"connectionCount":3}}
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:04:30.985+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"172.18.0.1:60246","client":"conn3","doc":{"driver":{"name":"nodejs","version":"5.5.0"},"platform":"Node.js v18.12.1, LE","os":{"name":"linux","architecture":"arm64","version":"6.1.25-37.47.amzn2023.aarch64","type":"Linux"}}}}
When I interrupt create_db_test.js with Ctrl-C, the container log suggests that the connection has been open the whole time and is now ended:
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:05:10.053+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.18.0.1:60246","uuid":"47895fe4-78f3-4401-a886-4d5ef832c66b","connectionId":3,"connectionCount":2}}
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:05:10.053+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn2","msg":"Interrupted operation as its client disconnected","attr":{"opId":620}}
mongodb-mongo-1 | {"t":{"$date":"2023-05-27T16:05:10.054+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"172.18.0.1:39946","uuid":"abbbc8ad-fceb-4a6b-b6e4-12d123a837b7","connectionId":2,"connectionCount":1}}
What am I missing here?
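For reference, the container log shows Node.js driver 5.5.0, which no longer accepts a callback in MongoClient.connect(); a minimal sketch of the same connection test written with the promise API (same connection string as above):
```
// create_db_test_promise.js — a promise-based variant; driver 5.x removed callback support.
const { MongoClient } = require('mongodb');

const url = 'mongodb://localhost:27017/mytestdb';

async function main() {
  const client = new MongoClient(url);
  try {
    await client.connect();
    console.log('Connected!');
    // Note: MongoDB creates the database lazily, on the first write.
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```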
Adam Winter
(131 rep)
May 27, 2023, 04:08 PM
• Last activity: Apr 8, 2025, 06:01 PM
0 votes • 1 answer • 3645 views
Error: "A slave with the same server_uuid/server_id as this slave has connected to the master"
I am testing a node.js client which connects to a mysql DB (mariadb). It works fine on its own, but if I launch this client simultaneously on another host, the first instance gets disconnected with the following error. I have read some posts suggesting to change the server-id in my.cnf, but that didn't help. Besides, why would I need to change the id of the server since both clients are pointing to the same server?
{ Error: UNKNOWN_CODE_PLEASE_REPORT: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event 'mysql-b
in.000024' at 334, the last event read from 'mysql-bin.000024' at 256, the last byte read from 'mysql-bin.000024' at 334.
at Binlog.Sequence._packetToError (/home/myapp/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
at Binlog.Sequence.ErrorPacket (/home/myapp/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:17)
at Protocol._parsePacket (/home/myapp/node_modules/mysql/lib/protocol/Protocol.js:278:23)
at Parser.write (/home/myapp/node_modules/mysql/lib/protocol/Parser.js:76:12)
at Protocol.write (/home/myapp/node_modules/mysql/lib/protocol/Protocol.js:38:16)
at Socket. (/home/myappp/node_modules/mysql/lib/Connection.js:91:28)
at Socket. (/home/myapp/node_modules/mysql/lib/Connection.js:502:10)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
--------------------
at Protocol._enqueue (/home/myapp/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Immediate._start (/home/myapp/node_modules/@rodrigogs/zongji/index.js:234:31)
at processImmediate (timers.js:632:19)
code: 'UNKNOWN_CODE_PLEASE_REPORT',
errno: 4052,
sqlMessage:
"A slave with the same server_uuid/server_id as this slave has connected to the master; the first event 'mysql-bin.000024' at 334, the last event
read from 'mysql-bin.000024' at 256, the last byte read from 'mysql-bin.000024' at 334.",
sqlState: 'HY000' }
{ Error: Connection lost: The server closed the connection.
at Protocol.end (/home/myapp/node_modules/mysql/lib/protocol/Protocol.js:112:13)
at Socket. (/home/myapp/node_modules/mysql/lib/Connection.js:97:28)
at Socket. (/home/myapp/node_modules/mysql/lib/Connection.js:502:10)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1094:12)
at process.internalTickCallback (internal/process/next_tick.js:72:19) fatal: true, code: 'PROTOCOL_CONNECTION_LOST' }
ChrisF
(21 rep)
Mar 6, 2019, 02:15 AM
• Last activity: Mar 21, 2025, 06:02 PM
0 votes • 0 answers • 59 views
node-oracledb Connection Pool Not Releasing Idle Connections Back to poolMin
We're experiencing a steady increase in PGA memory in our Oracle database, and we're trying to identify the root cause. We suspect that the connection pool in our application is not releasing connections back to the database as expected. Here are the details of our configuration and the issue we're facing:
**Connection Pool Configuration:**
- poolMin: 5
- poolMax: 10
- poolIncrement: 1
- poolTimeout: 10 seconds
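For reference, a pool with the settings above would be created roughly like this in node-oracledb (a minimal sketch; user, password, and connectString are placeholders, and enableStatistics is only added so that the pool.getStatistics() call mentioned below works):
```
const oracledb = require('oracledb');

async function initPool() {
  // Pool settings mirror the configuration listed above.
  const pool = await oracledb.createPool({
    user: 'app_user',                     // placeholder
    password: process.env.DB_PASSWORD,    // placeholder
    connectString: 'dbhost/service_name', // placeholder
    poolMin: 5,
    poolMax: 10,
    poolIncrement: 1,
    poolTimeout: 10,          // seconds a connection may sit idle before it is eligible to close
    enableStatistics: true,   // needed for pool.getStatistics()
  });
  return pool;
}
```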
**Issue Description:** During periods of high traffic, the number of connections increases from 5 (poolMin) to 10 (poolMax). However, when traffic is low, the connections are not released back to 5 (poolMin), even after 10 seconds (poolTimeout) of inactivity.
**Reference:** According to the [node oracledb documentation](https://node-oracledb.readthedocs.io/en/latest/api_manual/oracledb.html#oracledb.poolTimeout) :
> If the application returns connections to the pool with connection.close(), and the connections are then unused for more than poolTimeout seconds, then any excess connections above poolMin will be closed. When using Oracle Client prior to version 21, this pool shrinkage is only initiated when the pool is accessed.
Any insights or suggestions would be greatly appreciated.
**Additional Info:**
- We use _thick_ mode
Let me know if you need me to add anything to provide better answers.
**What I Tried:**
1. **Monitoring Connections**: Used the pool.getStatistics() method to monitor the number of open, in-use, and idle connections in the pool.
2. **Traffic Simulation**: Simulated traffic using a k6 script to observe the behavior of the connection pool during periods of high and low traffic.
3. **Database Query**: Ran a query to monitor active connections and session PGA memory in the Oracle database.
**What I Expected:**
1. During periods of high traffic, I expected the number of connections to increase from 5 (poolMin) to 10 (poolMax).
2. During periods of low traffic, I expected the number of connections to decrease back to 5 (poolMin) after 10 seconds (poolTimeout) of inactivity.
3. I expected the session PGA memory to decrease correspondingly as idle connections are terminated.
**What Actually Happened:**
1. During high traffic, the number of connections increased to 10 as expected.
2. During low traffic, the number of connections did not decrease back to 5, even after 10 seconds of inactivity. Sometimes they decreased one by one, but never all the way back to 5 (the poolMin value).
3. The session PGA memory did decrease to some extent, but a few idle connections were not being terminated.
**Question**: What could be the possible reasons for the connection pool not releasing idle connections back to poolMin?
Srikanth Vudharapu
(1 rep)
Nov 2, 2024, 06:00 AM
• Last activity: Nov 8, 2024, 11:30 AM
0 votes • 1 answer • 65 views
PostgreSQL error code not covered in documentation
I get errno: -111 and code: ECONNREFUSED.
When reading the docs, there is no error code -111 described in them.
What does this error code mean and where is it documented?
P.S. I'm querying a PostgreSQL database through Node.js (using Express.js and node-postgres). Both the Node.js backend and the PostgreSQL database are containerized locally using Docker.
Full error:
Error: connect ECONNREFUSED 127.0.0.1:5432
at /app/backend/node_modules/pg-pool/index.js:45:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getAverageSalary (file:///app/backend/src/db/crudFunctions/readCrud.js:10:24)
at async file:///app/backend/src/routes/getRoute.js:18:19 {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5432
}
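For reference, errno -111 is the operating system's ECONNREFUSED value surfaced by Node.js rather than a PostgreSQL error code. A sketch of an explicit node-postgres pool configuration (the 'db' host name is a placeholder; when both services run in Docker, the host generally needs to be the database container's service name rather than 127.0.0.1):
```
const { Pool } = require('pg');

// Explicit connection settings; inside a Docker network the Postgres host is the
// database service's name (placeholder 'db' here), not the backend's own 127.0.0.1.
const pool = new Pool({
  host: process.env.PGHOST || 'db',
  port: 5432,
  user: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  database: process.env.PGDATABASE,
});

pool.query('SELECT 1')
  .then(() => console.log('connected'))
  .catch(console.error);
```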
not-a
(3 rep)
Jun 24, 2024, 07:13 AM
• Last activity: Jun 24, 2024, 08:03 AM
0 votes • 0 answers • 129 views
Installing pgBouncer for postgres 14
Please note: I'm still learning a lot of things about postgres. I'm asking for advice to install pgBouncer, but if you have any better suggestions what I could do, please tell me.
I have a nodejs application on a single vps which uses pg-promise for interacting with a postgres db. pg-promise uses pg-pool so connection pooling is done on the application level.
I'm planning to add a second vps to scale my application horizontally. I want to implement connection pooling on the database level by using pgBouncer.
I'm installing pgBouncer on top of my Postgres 14 db on my vps, following the instructions on their website.
Currently, I have a database with 2 users:
1) "admin", which has admin access to the database
2) "user1", which can only read, update, write, and delete data in tables, but it cannot do other stuff (like deleting tables)
My SQL to setup this db and the users looks like this (simplified):
CREATE ROLE "admin" WITH LOGIN PASSWORD 'myadminpassword' SUPERUSER;
CREATE DATABASE "mydb" WITH OWNER "admin";
CREATE ROLE "user1" WITH LOGIN PASSWORD 'myuserpassword';
GRANT CONNECT ON DATABASE "mydb" TO "user1";
I then connect to the db "mydb" with "admin" and run the following SQL:
REVOKE ALL PRIVILEGES ON SCHEMA public FROM "user1";
GRANT USAGE ON SCHEMA public TO "user1";
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "user1";
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "user1";
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO "user1";
Now, in the userlist.txt file for pgBouncer, I have to add a user and an md5 hash of that user's PostgreSQL password, and I'm not sure which user to add. I use the "admin" user when I'm managing the database (I connect to the db using pgAdmin), e.g. adding or deleting tables, columns, etc., but my application uses the "user1" user. Which user should I add to pgBouncer (I'm guessing "user1")?
Second question: is my connection string (postgres://user1:myuserpassword@:5432/mydb) still the same? Or do I need to use another connection string to connect to pgBouncer?
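For illustration only, pointing the application at pgBouncer instead of directly at Postgres would look roughly like this with pg-promise (a sketch assuming pgBouncer's default listen_port of 6432; db-vps-ip is a placeholder for the database VPS address):
```
// Same database and credentials, but the host/port now point at pgBouncer
// (default listen_port 6432) on the database VPS ('db-vps-ip' is a placeholder).
const pgp = require('pg-promise')();
const db = pgp('postgres://user1:myuserpassword@db-vps-ip:6432/mydb');

db.one('SELECT 1 AS ok')
  .then(row => console.log('connected via pgBouncer:', row.ok))
  .catch(console.error);
```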
Thanks in advance!
Edit: please note that my postgresql server is a separate vps than the vps which is running the nodejs application. I'm planning to install pgBouncer on the postgresql vps.
Edit 2: I installed pgBouncer with the help of the docs and this tutorial . I HAVE NOT TESTED IT YET!
/etc/pgbouncer/pgbouncer.ini:
[databases]
mydb host=localhost port=5432 dbname=mydb
[pgbouncer]
...
pool_mode = transaction
max_client_conn = 1000
auth_type = hba
auth_file = /etc/pgbouncer/pb_hba.conf
admin_users = admin
...
/etc/pgbouncer/pb_hba.conf
host all all 127.0.0.1/32 scram-sha-256
host all all ::1/128 scram-sha-256
host all all /32 scram-sha-256
I've added my application vps ip to the allowed ip's in pgBouncer's pb_hba.conf file so only this vps can connect to it.
Edit 3:
I got it working, my pg_hba.conf file for pgbouncer looks like this:
host all all /32 scram-sha-256
local all all scram-sha-256
I added the necessary credentials to userlist.txt
Sam Leurs
(141 rep)
May 13, 2024, 02:02 PM
• Last activity: May 14, 2024, 09:51 AM
1 vote • 2 answers • 7149 views
Mongoose is not showing the ISODate in the result
I am using MongoDB version 4.2.2.
I have the following data:
{
"_id" : ObjectId("5f0c58089c058304f7a64fa9"),
"date" : ISODate("2021-07-11T00:00:00.000Z"),
"dayOfWeek" : 0,
"staffMember" : ObjectId("5f0c58076b559419d62efe47"),
"startTime" : 885
},
/* 2 createdAt:2020-07-13T12:48:08.000Z*/
{
"_id" : ObjectId("5f0c58089c058304f7a64fa8"),
"date" : ISODate("2021-07-11T00:00:00.000Z"),
"dayOfWeek" : 0,
"staffMember" : ObjectId("5f0c58076b559419d62efe47"),
"startTime" : 870
},
/* 3 createdAt:2020-07-13T12:48:08.000Z*/
{
"_id" : ObjectId("5f0c58089c058304f7a64fa7"),
"date" : ISODate("2021-07-11T00:00:00.000Z"),
"dayOfWeek" : 0,
"staffMember" : ObjectId("5f0c58076b559419d62efe47"),
"startTime" : 855
},
/* 4 createdAt:2020-07-13T12:48:08.000Z*/
{
"_id" : ObjectId("5f0c58089c058304f7a64fa6"),
"date" : ISODate("2021-07-11T00:00:00.000Z"),
"dayOfWeek" : 0,
"staffMember" : ObjectId("5f0c58076b559419d62efe47"),
"startTime" : 840
}
I am using mongoose in my project. When I run this query in Node.js:
const { memberId, date } = req.params;
let isoDate = date.toISOString();
let timeSlots = await TimeSlots.find({
  staffMember: memberId,
  date: isoDate
});
The result printed on the console:
db.spaTimeSlots.find({"staffMember" : ObjectId("5f0c58076b559419d62efe47"), date: new Date("Wed, 11 Jul 2020 00:00:00 GMT")})
.projection({})
.sort({_id:-1})
.limit(100)
However, I want this result:
db.spaTimeSlots.find({"staffMember" : ObjectId("5f0c58076b559419d62efe47"), date: ISODate("2021-07-11T00:00:00.000Z")})
.projection({})
.sort({_id:-1})
.limit(100)
I want the result returned with the date in ISODate format, not as text (i.e. not Wed, 11 Jul).
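For comparison, a sketch that converts the incoming parameter to a Date before querying (assuming req.params.date arrives as an ISO-8601 string such as "2021-07-11"), so Mongoose sends a BSON date rather than plain text:
```
const { memberId, date } = req.params;

// new Date(...) turns the ISO string into a Date object; Mongoose/MongoDB then
// store and match it as ISODate rather than as a string.
const timeSlots = await TimeSlots.find({
  staffMember: memberId,
  date: new Date(date),
});
```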
Rakesh Singh
(113 rep)
Jul 14, 2020, 02:14 PM
• Last activity: Aug 17, 2023, 01:41 PM
1 vote • 0 answers • 144 views
Connecting to SQL in docker in EC2 becoming slower
I am running PostgreSQL inside a Docker container on an EC2 instance, and a Node.js instance connects to it.
I recently found that the connection is becoming slower. Before, restarting the instance only needed a few seconds to establish the connection, but now it requires about 30+ seconds.
I wonder what could be happening there and how I can optimize it. This has never happened to me before.
Terry Windwalker
(113 rep)
Jun 24, 2023, 05:02 PM
0 votes • 2 answers • 18508 views
MongoDB - Mongoose query findOneAndUpdate() doesn't update/duplicates the DB
I am trying to save and update (upsert: true creates the object if it doesn't exist) the result of a web API which contains multiple arrays of data to populate a stock chart. Each time I enter a symbol and click the "Get Quote" button, it should fetch data from the web API and save/update it under the child schema in the database. For some reason it adds duplicate values. Is there a way to avoid the duplicate values? Here is the code that I tried:
let curValueSchema = new parentSchemaSymbol()
curValueSchema.symbol = curValue
highLow.map(item => {
  curValueSchema.data.push(item)
})

const query = { symbol: `${curValue.toUpperCase()}` }
const update = curValueSchema
const options = { upsert: true, new: true }

parentSchemaSymbol.findOneAndUpdate(query, update, options).then(doc => {
  console.log('Saved the symbol', doc)
  return res.send(highLow)
}).catch(e => {
  console.log(e)
})
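For comparison, a sketch of the same upsert written with a plain update object instead of a hydrated document instance (model and variable names kept from the snippet above; whether this removes the duplicates depends on how the stored symbols are cased):
```
const query = { symbol: curValue.toUpperCase() };
const update = { symbol: curValue.toUpperCase(), data: highLow };
const options = { upsert: true, new: true };

// Mongoose applies a plain-object update as a $set of the listed fields;
// upsert: true inserts a new document when nothing matches the query.
parentSchemaSymbol.findOneAndUpdate(query, update, options)
  .then(doc => {
    console.log('Saved the symbol', doc)
    return res.send(highLow)
  })
  .catch(e => console.log(e))
```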
John John
(111 rep)
Nov 16, 2018, 02:27 AM
• Last activity: Apr 20, 2023, 06:46 AM
0 votes • 0 answers • 3459 views
Sequelize and Mysql throw connect ETIMEDOUT error
Hello everyone, this is my first time here, but I'm quite desperate about this situation and I hope you can help me.
I have a Node.js server with Express and Sequelize. In the development environment I didn't have any problems, but in production the entire app sometimes throws this error on every request
ConnectionError [SequelizeConnectionError]: connect ETIMEDOUT
for a few minutes, and then the server starts working normally again.
On the server I use PM2 cluster mode (8 instances) to handle all the requests.
This is the first time I'm managing a server and a DB. I've read some articles and tried changing MySQL variables and the Sequelize pool options, but the problem persists.
I'm not sure if the problem is in the variables, the server specs, or the PM2 config. I hope someone can give me a hint about the problem.
## mysql variables ##
skip_name_resolve = 1
join_buffer_size = 50M
tmp_table_size = 2G
max_heap_table_size = 2G
thread_cache_size = 40
max_connections=1600
key_buffer_size=195k
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M
key_buffer_size = 1M
innodb_buffer_pool_instances = 4
## sequelize configuration ##
{
  host: config.bd_host,
  dialect: 'mysql',
  logging: false,
  pool: {
    max: 100,
    min: 0,
    acquire: 1000000,
    idle: 100000,
    evict: 2000,
  },
  dialectOptions: {
    decimalNumbers: true,
  },
}
## Server specifications ##
- 64 GB RAM
- 16-core processor
- ubuntu 18.04.5 lts
- mysql version 14.14 Distrib 5.7.40, for Linux (x86_64) using EditLine wrapper
- express 4.17.1
- sequelize 6.19.1
- node version 14.17.2
Gamaliel Castro
(1 rep)
Dec 13, 2022, 10:49 PM
• Last activity: Dec 14, 2022, 05:39 PM
3 votes • 1 answer • 2730 views
RequestError: Connection lost - read ECONNRESET
I am new to Microsoft SQL Server, please advise.
I am receiving the following error message when I deploy code into my development environment (which is hosted on Azure) using Node.js (version 10) and Objection.js (1.4.0). It works fine locally and only seems to have issues with specific queries in the development environment.
### Error Details
>
RequestError: Connection lost - read ECONNRESET
at handleError (/src/node_modules/mssql/lib/tedious.js:519:15)
at Connection.emit (events.js:203:15)
at Connection.socketError (/src/node_modules/tedious/lib/connection.js:1028:14)
at Socket. (/src/node_modules/tedious/lib/connection.js:878:18)
at Socket.emit (events.js:203:15)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
>
### Code
return Order.query()
  .joinEager('[shipments.shipmentQuantities.lineItem, lineItems.shipmentQuantities]')
  .findById(id);
## Connection File
database: {
  client: 'mssql',
  connection: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    charset: 'utf8',
    stream: true,
    options: process.env.ENABLE_DB_SSL ? {
      port: 1433,
      database: process.env.DB_NAME,
      encrypt: true,
    } : undefined,
  },
  debug: false,
},
Kiwi Rupela
(133 rep)
Oct 30, 2019, 07:17 AM
• Last activity: Oct 6, 2022, 10:00 PM
0 votes • 0 answers • 1752 views
MariaDB: How to return JSON array instead of string
I am trying to get MariaDB (v10.3.34) to return a JSON array, but it's returned as a string instead.
I'm reading the returned value in Node.js, where the returned array is a string instead of an array. If I want to pass it as an array to the frontend, I need to do JSON.parse() on each JSON column of each row, which I would prefer not to do; instead, I'd like MariaDB to return the value as a JSON array to begin with.
So for an example, if I have a table like this:
CREATE TABLE method (
id int(11) NOT NULL,
blocks JSON DEFAULT NULL,
PRIMARY KEY (id)
);
INSERT INTO method VALUES (1,JSON_ARRAY(1,2)),(2,JSON_ARRAY(3,4));
If I then do:
SELECT id, blocks FROM method;
I get the following object back to Node.js:
[{"id":1, "blocks":"[1,2]"},{"id":2, "blocks":"[3,4]"}]
When instead I would want the arrays unquoted:
[{"id":1, "blocks":[1,2]},{"id":2, "blocks":[3,4]}]
Is there a way to do this?
In a newer version of MariaDB (v.10.6.8) I got this working with:
SELECT id, JSON_EXTRACT(blocks, '$[*]') AS blocks FROM method;
But unfortunately our production server is stuck with the older version for now at least, and on that older version the returned value was still a string.
TTT
(1 rep)
Sep 2, 2022, 01:28 PM
6 votes • 1 answer • 1883 views
What kind of database to store Quotes
I'm designing a very simple application to send positive text messages to subscribers at a certain frequency.
These are just famous, motivational, encouraging and positive quotes.
When I first thought of the idea I thought, *oh we can just store these quotes as one Collection in MongoDB. And the phone #s in a separate Collection*
If the app remains as simple as this... then is a NoSQL approach ideal and better than an RDBMS? I feel like it'd lead to faster dev time to get something up and running (Node stack).
However, I think making a little extra initial effort and using MySQL could make this more expandable and future-proof, and prevent technical debt. Say... I want to track which quotes were sent out to specific users, etc.
What are some insights?
This question isn't even about the app itself, but rather to get a better idea of when to use NoSQL vs a traditional RDBMS.
**Update: When I say NoSQL, I pretty much mean MongoDB or any other similar JSON document store.**
Thanks!
user2402616
(211 rep)
Jun 14, 2022, 09:51 PM
• Last activity: Jun 15, 2022, 12:26 AM
0 votes • 1 answer • 2839 views
logon denied for oracle wallet user
Following a video tutorial on oracle wallets: https://youtu.be/LLZzUNb9ALU
The db server is run from oracle's official docker container.
The steps I have done are as follows:
1. On server shell, ran sql plus as sysdba:
$ sqlplus / as sysdba
2. Created a user:
SQL> alter session set "_ORACLE_SCRIPT"=true;
SQL> CREATE USER fred identified by flintstone;
SQL> GRANT CONNECT, RESOURCE, DBA to fred;
3. Tested logon:
$ sqlplus fred/flintstone
SQL> show user;
User is "fred"
4. Created a wallet at default location: $ORACLE_BASE/admin/$ORACLE_SID/wallet
$ orapki wallet create -wallet $ORACLE_BASE/admin/$ORACLE_SID/wallet -auto_login -pwd MyWalletPass
5. Created a wallet profile for user fred:
$ mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -createCredential $ORACLE_SID fred flintstone
6. Tried to login sqlplus:
$ sqlplus /@$ORACLE_SID
This fails. Got the error ORA-01017: invalid username/password; logon denied
Not able to understand what's wrong. The final goal is to use this wallet from a Node.js application with external authentication (node-oracledb), but I think if I can fix step 6 I should be able to use it from Node.js.
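For the eventual Node.js side, external (wallet-based) authentication with node-oracledb looks roughly like this once step 6 works (a sketch; it assumes the client's TNS/wallet configuration already resolves the connect string alias, and 'ORCLCDB1' is a placeholder for that alias):
```
const oracledb = require('oracledb');

async function run() {
  // externalAuth tells the driver to authenticate via the wallet instead of a password.
  const connection = await oracledb.getConnection({
    externalAuth: true,
    connectString: 'ORCLCDB1',   // placeholder; use the alias stored in the wallet
  });

  const result = await connection.execute('SELECT user FROM dual');
  console.log(result.rows);

  await connection.close();
}

run().catch(console.error);
```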
deostroll
(189 rep)
Jan 2, 2021, 03:57 PM
• Last activity: Apr 20, 2022, 06:27 AM
0 votes • 3 answers • 5203 views
Return all except some fields using findOne() in mongodb for node.js
I am trying to use the following script to query for a document but the returned document still contains the field (password) that I don't want.
const client = await MongoClient.connect(
  MONGODB_CONNECTION_STRING,
  { useUnifiedTopology: true }
);
const db = client.db(DB_NAME);
const collection = db.collection(collectionName);

const data = await collection
  .find(
    {
      _id: someId
    },
    {
      password: 0
    }
  );
It is supposed to work according to what I have read in the MongoDB documentation, but I didn't find any documentation on findOne in the MongoDB driver for Node.js.
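For reference, in recent versions of the Node.js driver the second argument to find()/findOne() is an options object, so a field list has to go under a projection key; a minimal sketch:
```
// A bare { password: 0 } as the second argument is treated as options and ignored;
// the exclusion has to be expressed as a projection.
const data = await collection.findOne(
  { _id: someId },
  { projection: { password: 0 } }
);
```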
picklu
(101 rep)
Jan 7, 2020, 01:39 PM
• Last activity: Apr 1, 2022, 06:25 PM
0 votes • 2 answers • 2291 views
After many successive and successful queries SQL Server starts refusing connections
I have a GraphQL server (Node v14.15.1) which uses TypeORM to connect to SQL Server 12.0.4100.1; it also runs inside a Docker container. The connection parameters are:
let connection = await createConnection({
  type: "mssql",
  host: HOST,
  port: PORT,
  database: DB,
  username: USER,
  password: PASSWORD,
  // NOTE: Use typeorm-model-generator for Entities
  logging: false,
  synchronize: true,
  requestTimeout: 300000,
  connectionTimeout: 300000,
  entities: ["./models/mssql"],
  pool: {
    max: 1000, min: 1,
    idleTimeoutMillis: 3000,
    evictionRunIntervalMillis: 1500000,
  },
  options: {
    encrypt: false,
  }
})
**Problem:**
After numerous successful queries, the SQL Server starts refusing connections with this error:
ConnectionError: Failed to connect to adress: port - connect ECONNREFUSED adress: port\n
at ConnectionError (/home/raa/code/wcc-gql/node_modules/mssql/node_modules/tedious/lib/errors.js: 13: 12)\n
at Connection.socketError (/home/raa/code/wcc-gql/node_modules/mssql/node_modules/tedious/lib/connection.js: 1290: 56)\n
at _connector.Connector.execute (/home/raa/code/wcc-gql/node_modules/mssql/node_modules/tedious/lib/connection.js: 1116: 21)\n
at Socket.onError (/home/raa/code/wcc-gql/node_modules/mssql/node_modules/tedious/lib/connector.js: 140: 7)\n
at Socket.emit (events.js: 198: 13)\n
at emitErrorNT (internal/streams/destroy.js: 91: 8)\n
at emitErrorAndCloseNT (internal/streams/destroy.js: 59: 3)\n
at process._tickCallback (internal/process/next_tick.js: 63: 19)'
The problem happens when querying a specific type which, looking down the hierarchy, has a lot of resolvers; the amount of data also varies.
The more data, the more likely the problem will occur.
What is also interesting is that the problem is more likely to occur if the program is running inside a container or as a PM2 service.
I've tested the same data where connections are refused inside a container and not refused in VS Code debugger or when run from terminal.
I tried fiddling with the pool options which gave modest results.
I've also checked that I do not exceed the amount of connections allowed by the DB.
I've checked the logs on the SQL Server; there are no issues or errors in them at the time I make requests. I've just now added support for tracking login attempts on the SQL Server, and it now registers some 776 successful logins from this application in the period 15:03:46.01 - 15:04:48.07, and after that I assume I get the errors.
## EDIT:
I've tried changing the Network Packet Size option in the SQL Server options and the packetSize connection option of typeorm -> mssql -> tedious, which didn't work but revealed some new details. Since I had to restart the server to apply the new SQL Server options, immediately after the restart the queries finished successfully.
So I tried tracking the resources and saw that the server ramps up to 100% processor capacity pretty quickly and after that has problems opening new connections, and if I allow too large a max in the pool options (in my case 500 connections), it starts refusing connections.
When I bring down the max property, however, a new type of error arises, which manifests as a connection timeout for an unknown reason, this time stemming from tarn.js.
What I think is happening is that tarn.js (the underlying connection pool) has a default createTimeoutMillis for adding new connections to the pool; this option is not exposed through the TypeORM API, and while it waits for the server to unfreeze and allot a new connection, it times out.
Milos Grujic
(101 rep)
Dec 3, 2020, 07:55 PM
• Last activity: Mar 30, 2022, 11:21 AM
3 votes • 1 answer • 2938 views
What are some likely causes of high Client:ClientRead AAS in AWS RDS?
I have a particular query; it's a simple select with an inner join, and it seems to be pretty well indexed. But it's registering as a worst offender in Performance Insights, with 0.13 on CPU and 4.5 on Client:ClientRead.
All the documentation and articles I can find indicate this is common for bulk data copying, but I don't think that's the case here, since it's such a basic select. It's being called around 70 times per second, which isn't even that high compared to other queries I have at around 500 calls per second, and it's only serving back about 7 rows of 10 columns each.
So other than bulk data copying, what other factors might be giving me a high AAS for clientRead?
invertedSpear
(201 rep)
Apr 6, 2021, 03:30 PM
• Last activity: Dec 16, 2021, 02:06 PM
2 votes • 1 answer • 3741 views
Postgres connections are in "idle" state but cannot find any connected client
I have a single Postgres server and a set of PgBouncers configured to help me with connection pooling.
Recently, I discovered that the database connections are getting accumulated in "idle" state over time.
=> SELECT COUNT(*), state FROM pg_stat_activity GROUP BY state;
count | state
-------+--------
9 |
1 | active
11 | idle
(3 rows)
I executed the following query on the admin console of each PgBouncer to check whether I have any server connection from PgBouncers. I got the same result as below for every PgBouncer.
pgbouncer=# SHOW SERVERS;
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | wait | wait_us | close_needed | ptr | link | remote_pid | tls
------+------+----------+-------+------+------+------------+------------+--------------+--------------+------+---------+--------------+-----+------+------------+-----
(0 rows)
Here are the details of the connections which are in "idle" state.
=> SELECT * FROM pg_stat_activity WHERE state='idle';
-[ RECORD 1 ]----+---------------------------------------------------------------------
datid | 16498
datname | users
pid | 337260
usesysid | 14419
usename | novadeadmin
application_name | psql
client_addr | x.x.x.x
client_hostname |
client_port | 38850
backend_start | 2021-11-20 14:26:53.648459+00
xact_start |
query_start | 2021-11-20 14:27:12.585782+00
state_change | 2021-11-20 14:27:12.585782+00
wait_event_type | Client
wait_event | ClientRead
state | idle
backend_xid |
backend_xmin |
query | SELECT name FROM workspaces WHERE id='wnujb0lh37tsou44k7b1o';
backend_type | client backend
-[ RECORD 2 ]----+---------------------------------------------------------------------
datid | 376313
datname | wnujb0lh37tsou44k7b1o
pid | 363348
usesysid | 14419
usename | novadeadmin
application_name |
client_addr | x.x.x.x
client_hostname |
client_port | 12992
backend_start | 2021-11-19 19:24:03.696311+00
xact_start |
query_start | 2021-11-19 19:26:41.16366+00
state_change | 2021-11-19 19:26:41.16366+00
wait_event_type | Client
wait_event | ClientRead
state | idle
backend_xid |
backend_xmin |
query | SELECT "currentVersion"."updatedAt" AS "currentVersion_updatedAt", "currentVersion"."deletedAt" AS "currentVersion_deletedAt", "currentVersion"."updatedBy" AS "currentVersion_updatedBy", "currentVersion"."id" AS "currentVersion_id", "currentVersion"."fileName" AS "currentVersion_fileName", "currentVersion"."drawingID" AS "currentVersion_drawingID", "currentVersion"."versionNumber" AS "currentVersion_versionNumber", "currentVersion"."issuedAt" AS "currentVersion_issuedAt", "currentVersion"."createdBy" AS "currentVersion_createdBy", "currentVersion"."createdAt" AS "currentVersion_createdAt", "currentVersion"."mime" AS "currentVersion_mime", "currentVersion"."size" AS "currentVersion_size" FROM "drawingVersions" "currentVersion" INNER JOIN "drawings" "Drawing" ON "Drawing"."currentVersionID" = "currentVersion"."id" AND "Drawing"."deletedAt" IS NULL WHERE ( "Drawing"."id" IN ($1) ) AND ( "currentVersion"."deletedAt" IS NULL )
backend_type | client backend
--More--
1. The query within the first record was executed by myself yesterday (2021-11-20) through a psql client (not through a PgBouncer). I'm not entirely sure whether I forgot to close the session in psql, so this might be my mistake.
2. The query values from the other records are all the same. They were all executed from my backend application (using NodeJS TypeORM) through PgBouncers. But as I showed earlier, there are no active connections to the Postgres server from PgBouncers.
The reason for having the same client_addr in all the records is that the psql client I used and the PgBouncers are in the same Kubernetes cluster.
Could you help me understand how 1. and 2. are happening? I would also like a mechanism to stop having long-lived "idle" connections.
Thanks in advance 🙏!
---
## Update:
I see the following netstat result on the PgBouncer host (y.y.y.y is the IP of my Postgres server):
/ $ netstat -tp | grep "y.y.y.y"
tcp 0 0 api-7659758786-2tpfv:57250 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40576 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40570 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40580 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40572 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40566 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40574 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:37988 y.y.y.y:postgresql ESTABLISHED 1/pgbouncer
tcp 0 0 api-7659758786-2tpfv:40568 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:57252 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:40578 y.y.y.y:postgresql ESTABLISHED -
tcp 0 0 api-7659758786-2tpfv:46606 y.y.y.y:postgresql ESTABLISHED 92/psql
tcp 0 0 api-7659758786-2tpfv:51548 y.y.y.y:postgresql ESTABLISHED 1/pgbouncer
This is the result from PgBouncer admin console for open server connections:
pgbouncer=# SHOW SERVERS;
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | wait | wait_us | close_needed | ptr | link | remote_pid | tls
------+----------------------------------------+------------+-------+---------+------+--------------+------------+-------------------------+-------------------------+------+---------+--------------+----------------+------+------------+--------------------------------------------------------------
S | novadeadmin@psql-frc-stage-novade-lite | novadelite | idle | y.y.y.y | 5432 | 10.244.4.196 | 51548 | 2021-11-22 05:01:42 UTC | 2021-11-22 05:14:34 UTC | 0 | 0 | 0 | 0x562d3d78aba0 | | 900512 | TLSv1.2/ECDHE-RSA-AES256-GCM-SHA384/ECDH=secp384r1/OCSP=good
S | novadeadmin@psql-frc-stage-novade-lite | users | used | y.y.y.y | 5432 | 10.244.4.196 | 37988 | 2021-11-22 04:38:56 UTC | 2021-11-22 05:09:35 UTC | 0 | 0 | 0 | 0x562d3d78a740 | | 879848 | TLSv1.2/ECDHE-RSA-AES256-GCM-SHA384/ECDH=secp384r1/OCSP=good
(2 rows)
Eranga Heshan
(133 rep)
Nov 21, 2021, 02:09 AM
• Last activity: Nov 22, 2021, 05:23 AM
1 vote • 1 answer • 2807 views
Trouble connecting to local MongoDB server through app.js
I'm a newbie to MongoDB, and I just set up MongoDB within my Ubuntu terminal with mongod and mongo commands running in separate terminals. However, I can't get my app.js that would launch a website to run properly. Here is the beginning of the code:
var express = require('express');
var path = require('path');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var exphbs = require('express-handlebars');
var helpers = require('handlebars-helpers')();
var expressValidator = require('express-validator');
var flash = require('connect-flash');
var session = require('express-session');
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var mongoose = require('mongoose');
var fs = require('fs');
var net = require('net');
process.on('uncaughtException', (err) => {
console.log(err);
});
//const config = {
// autoIndex: false,
// useNewUrlParser: true
//};
//return mongoose.connect(uri, config);
//Log into MongoDB server
//var db = mongoose.connection;
const db = mongoose.connection
const url = 'mongodb://127.0.0.1:27017/modeltest'
mongoose.connect(process.env.MONGO_URI, {
  useUnifiedTopology: true,
  useNewUrlParser: true,
})
  .then(() => console.log('DB Connected!'))
  .catch(err => {
    console.log('DB Connection Error: ' + err);
  });
When I run app.js, I get the following error:
{ MongooseError: The uri parameter to openUri() must be a string, got "undefined". Make sure the first parameter to mongoose.connect() or mongoose.createConnection() is a string.
at new MongooseError (/home/ubuntu/modeltest/web/node_modules/mongoose/lib/error/mongooseError.js:10:11)
at NativeConnection.Connection.openUri (/home/ubuntu/modeltest/web/node_modules/mongoose/lib/connection.js:579:11)
at Mongoose.connect (/home/ubuntu/modeltest/web/node_modules/mongoose/lib/index.js:333:15)
at Object. (/home/ubuntu/modeltest/web/app.js:31:10)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Function.Module.runMain (module.js:693:10)
at startup (bootstrap_node.js:188:16)
at bootstrap_node.js:609:3
message: 'The uri parameter to openUri() must be a string, got "undefined". Make sure the first parameter to mongoose.connect() or mongoose.createConnection() is a string.',
name: 'MongooseError' }
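The error above says the uri argument was "undefined", which matches connecting with process.env.MONGO_URI while only the local url constant is defined; a minimal sketch that guarantees mongoose.connect() receives a string (falling back to the url defined earlier):
```
// Fall back to the locally defined url when the MONGO_URI environment variable is unset.
const uri = process.env.MONGO_URI || url;   // url = 'mongodb://127.0.0.1:27017/modeltest'

mongoose.connect(uri, {
  useUnifiedTopology: true,
  useNewUrlParser: true,
})
  .then(() => console.log('DB Connected!'))
  .catch(err => console.log('DB Connection Error: ' + err));
```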
I've also tried to connect to the Mongo Atlas DB I've set up through some modifications to the code:
var express = require('express');
var path = require('path');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var exphbs = require('express-handlebars');
var helpers = require('handlebars-helpers')();
var expressValidator = require('express-validator');
var flash = require('connect-flash');
var session = require('express-session');
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var mongoose = require('mongoose');
var fs = require('fs');
var net = require('net');
process.on('uncaughtException', (err) => {
console.log(err);
});
//const config = {
// autoIndex: false,
// useNewUrlParser: true
//};
//return mongoose.connect(uri, config);
//Log into MongoDB server
var db = mongoose.connection;
//const db = mongoose.connection
//console.log(process.env.MONGO_URI)
const {MongoClient} = require('mongodb');

async function main(){
  const uri = "mongodb+srv://theusernamewhichientered:thepassword@modeltest-3lpoi.mongodb.net/test?retryWrites=true&w=majority"
  const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  try {
    await client.connect();
    await listDatabases(client);
  } catch (e) {
    console.error(e);
  } finally {
    await client.close();
  }
}

main().catch(console.error);

async function listDatabases(client){
  databasesList = await client.db().admin().listDatabases();
  console.log("Databases:");
  databasesList.databases.forEach(db => console.log(` - ${db.name}`));
};
But I get the following output when I run app.js:
DIRNAME
/home/ubuntu/modeltest/web
Server started on port 3000
{ MongoServerSelectionError: connection to 52.203.196.37:27017 closed
at Timeout.waitQueueMember.timer.setTimeout [as _onTimeout] (/home/ubuntu/modeltest/web/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at ontimeout (timers.js:482:11)
at tryOnTimeout (timers.js:317:5)
at Timer.listOnTimeout (timers.js:277:5)
name: 'MongoServerSelectionError',
reason:
TopologyDescription {
type: 'ReplicaSetNoPrimary',
setName: null,
maxSetVersion: null,
maxElectionId: null,
servers:
Map {
'modeltest-shard-00-00-3lpoi.mongodb.net:27017' => [Object],
'modeltest-shard-00-01-3lpoi.mongodb.net:27017' => [Object],
'modeltest-shard-00-02-3lpoi.mongodb.net:27017' => [Object] },
stale: false,
compatible: true,
compatibilityError: null,
logicalSessionTimeoutMinutes: null,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
commonWireVersion: null },
What could I do to move past this?
mazuka487
(11 rep)
Apr 29, 2020, 12:10 AM
• Last activity: Oct 27, 2021, 06:04 PM
3 votes • 3 answers • 5757 views
Is it normal for MySQL to gradually increase memory usage over time?
Is it normal for MySQL to increase memory usage over time? See image below of my server's memory usage over the last two weeks. After "service mysql restart" it drops to 40%. The database is used by a node.js app.
Server has 4GB of RAM with below additions to mysqld.cnf:

language=no-lang
innodb_ft_min_token_size = 1
ft_min_word_len = 1
innodb_buffer_pool_size = 3G
innodb_buffer_pool_instances = 5
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_log_file_size = 128M
innodb_flush_method = O_DIRECT
max_connections = 300
long_query_time = 1
innodb_ft_enable_stopword = 0
Thank you!
steven
(31 rep)
Aug 14, 2020, 03:38 PM
• Last activity: Oct 1, 2021, 02:06 PM
Showing page 1 of 20 total questions