
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
822 views
Redis docker container not coming up with custom config file
I'm using the below command to run the Redis docker container:

```
docker run -tid -v /data1/REDIS_DOCKER_IMAGE/6379/redis.conf:/usr/local/etc/redis/6379/redis.conf -p 6379:6379 --name node_6379 redis:5.0.8 redis-server /usr/local/etc/redis/redis.conf
```

After I run this command, I check `docker ps` but it comes up empty. There are no logs shown by `docker logs`, so I don't know what's going wrong.
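One detail worth flagging: the command mounts the config to `/usr/local/etc/redis/6379/redis.conf` but tells redis-server to load `/usr/local/etc/redis/redis.conf`, so the server would exit with a missing config file and the container would drop out of `docker ps`. A minimal sketch with the two paths aligned, assuming that mismatch is unintentional:

```
# Mount target and the path passed to redis-server must match
# (a sketch, assuming the path mismatch above is unintentional):
docker run -tid \
  -v /data1/REDIS_DOCKER_IMAGE/6379/redis.conf:/usr/local/etc/redis/redis.conf \
  -p 6379:6379 --name node_6379 \
  redis:5.0.8 redis-server /usr/local/etc/redis/redis.conf

# Exited containers are hidden from "docker ps" but visible with -a,
# and their logs are still retrievable:
docker ps -a --filter name=node_6379
docker logs node_6379
```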
Vishal Sharma (71 rep)
Apr 8, 2020, 07:45 AM • Last activity: Aug 4, 2025, 08:03 PM
0 votes
1 answer
157 views
Advice on WooCommerce DB and really slow WC core admin queries
Thanks for reading! So I'm kind of banging my head against a wall, but I firstly want to say that I think this is a DB setup problem rather than the code: even when stripped back to bare-bones WC the queries are still slow, so there's no interfering 3rd-party code.

Basically I have a WP/WooCommerce website with quite a lot of data (11GB DB), and the WooCommerce core admin queries, like listing orders on the orders page, take 2 seconds, plus other queries that bring the total to 9 seconds spent in DB queries. I really want to speed up these queries so the Query Monitor plugin reports no slow queries, or at the very least get much nearer to the 0.2s target. I have asked around and seem to be getting quite varied responses, such as Redis, object cache, and tools to add indexes, but someone else fairly pointed out that while this will obviously add caching and speed up the queries, they will still be slow whenever the cache is cleared, which doesn't (I guess) really solve the original issue.

I'm not a DB admin expert by any means, so I've previously just chucked memory at the VPS (48GB) and set the InnoDB pool size as high as mysqltuner told me to, at 28GB, but I get the feeling this is wrong, as most Google results suggest it should be 8GB, or at the very most the same size as the DB itself, and the DB is back to being slow! Any ideas on what's going wrong? I've seen other things like increasing innodb_io_capacity (currently set to 200, with the max set to 2000, and the VPS uses an SSD), but mysqltuner has not mentioned these variables and I've not updated these values before.

I would just like to know if I do need to implement Redis etc., or to start with need more memory, or need to change some variables, or need to do them all! Happy to provide what info I can. Thanks in advance, Brad
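For questions like this, the usual first step is to capture the actual slow statements rather than guessing at settings. A rough diagnostic sketch (the threshold matches the 0.2s target from the question; these are illustrations, not recommendations):

```
# Log every query slower than the 0.2s target, then inspect the log:
mysql -e "SET GLOBAL slow_query_log = 'ON';
          SET GLOBAL long_query_time = 0.2;"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';"

# Check whether the buffer pool already holds the working set; for an
# 11GB database, a pool much beyond 11-12GB cannot improve hit rates:
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
```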
Devon Developer (1 rep)
Jul 26, 2023, 04:18 PM • Last activity: Jul 31, 2025, 05:07 PM
0 votes
1 answer
158 views
"Cluster nodes" command showing different status on different Redis Cluster nodes
I had created a Redis cluster of 6 instances running on 2 RHEL-7.4 servers some time ago. Today I found that 3 of those instances (on 10.32.129.79) were down. Since I restarted those 3 instances, I've observed that the command `cluster nodes` is not giving the same output on different nodes. Given below is one of the outputs:

```
dcfc502ffae3810b61368e774187406f774114d8 10.32.129.77:6379@16379 myself,master - 0 1571299699000 1 connected 0-5460
274c70586d2ac7ce2723d8a23b4f0126ddae4174 10.32.129.77:6378@16378 master - 0 1571299701559 8 connected 5461-10922
e8b9f87f369da10a3b5f4b9453aaae24c744a156 10.32.129.79:6380@16380 slave,fail 66649746bbd41f06748063cd268f71f4be063aca 1571290310404 1571290310404 5 connected
66649746bbd41f06748063cd268f71f4be063aca 10.32.129.77:6380@16380 master - 0 1571299700000 2 connected 10923-16383
40503860d939a2d70e47c48050e6eaaf109e7f3f 10.32.129.79:6379@16379 slave,fail 274c70586d2ac7ce2723d8a23b4f0126ddae4174 1571290310404 1571290310404 8 connected
8e5c582a49a868ace78d10b8b0d24c4647e1b2dc 10.32.129.79:6378@16378 slave,fail dcfc502ffae3810b61368e774187406f774114d8 1571290310404 1571290310404 6 connected
```

And given below is the other one:

```
274c70586d2ac7ce2723d8a23b4f0126ddae4174 10.32.129.77:6378@16378 master,fail? - 1571297986302 1571297981292 8 connected 5461-10922
40503860d939a2d70e47c48050e6eaaf109e7f3f 10.32.129.79:6379@16379 slave 274c70586d2ac7ce2723d8a23b4f0126ddae4174 0 1571299542894 8 connected
66649746bbd41f06748063cd268f71f4be063aca 10.32.129.77:6380@16380 master - 0 1571299538884 2 connected 10923-16383
e8b9f87f369da10a3b5f4b9453aaae24c744a156 10.32.129.79:6380@16380 myself,slave 66649746bbd41f06748063cd268f71f4be063aca 0 1571299541000 5 connected
8e5c582a49a868ace78d10b8b0d24c4647e1b2dc 10.32.129.79:6378@16378 slave dcfc502ffae3810b61368e774187406f774114d8 0 1571299543895 6 connected
dcfc502ffae3810b61368e774187406f774114d8 10.32.129.77:6379@16379 master,fail? - 1571297991309 1571297985000 1 connected 0-5460
```

Is there any way of recovering from this inconsistent state without losing my data? Also, in which type of scenarios can this happen?

**Redis Version**: 5.0.4
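As a first diagnostic, comparing every node's view side by side and running the built-in consistency check usually shows whether the epochs and slot ownership can still converge. A sketch against the addresses from the question (redis-cli 5.x includes the cluster subcommands):

```
# Dump each node's view of the cluster:
for h in 10.32.129.77 10.32.129.79; do
  for p in 6378 6379 6380; do
    echo "== $h:$p"
    redis-cli -h "$h" -p "$p" cluster nodes
  done
done

# Check slot coverage and node agreement from any one node:
redis-cli --cluster check 10.32.129.77:6379
```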
Vishal Sharma (71 rep)
Oct 17, 2019, 08:10 AM • Last activity: Jul 8, 2025, 05:02 PM
1 vote
1 answer
62 views
What database should we use to store ~1000 image URLs per hour — NoSQL like Firebase/MongoDB or Redis queue + PostgreSQL?
We are building a system that needs to store approximately 1000 image URLs every hour, which adds up to tens of thousands per day. The images themselves are stored on cloud storage; we only need to store their URLs along with some metadata (e.g., timestamp, userId, tags).

We are considering two approaches:

1. Use a NoSQL database like Firebase Firestore or MongoDB to directly store the image URL records.
2. Send the URLs to a Redis queue first, then batch insert them into a relational database like PostgreSQL at regular intervals.

Key requirements:

- Fast write performance
- Scalable over time
- Efficient querying by date or user

Which approach would be more suitable for this use case in terms of scalability and performance? Are there any trade-offs or best practices we should be aware of?
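For scale, ~1000 rows per hour is roughly one insert every 3-4 seconds, which any of the listed stores handles trivially; the queue mainly buys burst absorption. A sketch of approach 2 with hypothetical key and table names:

```
# Producer side: append one JSON record per image to a Redis list.
redis-cli RPUSH pending_image_urls \
  '{"url":"https://cdn.example.com/a.jpg","userId":42,"tags":["sky"]}'

# Consumer side (periodic job): drain a batch (LPOP with a count needs
# Redis >= 6.2), convert to CSV, and bulk-load into Postgres:
redis-cli LPOP pending_image_urls 1000 > batch.jsonl
# ...convert batch.jsonl to batch.csv with your tool of choice, then:
psql -c "\copy image_urls (url, user_id, tags) FROM 'batch.csv' CSV"
```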
Mohit Gupta (11 rep)
May 14, 2025, 07:46 AM • Last activity: May 14, 2025, 01:57 PM
0 votes
1 answer
76 views
Redis server error: "Cannot assign requested address" under high traffic from Nginx to PHP to Redis
I'm experiencing an issue with my Redis server where I get the following error under high traffic:

```
Cannot assign requested address
```

My PHP code:

```
$redis = new Redis();
try {
    $redisConnected = $redis->connect('192.168.10.20', 6379, 2);
    if ($redisConnected)
        $redis->auth('myPassword');
} catch (Exception $e) {
    file_put_contents("redisErrorlog.log", $e->getMessage(), FILE_APPEND);
}
```

This error appears intermittently when requests are being forwarded from Nginx to PHP, which then communicates with Redis. The traffic load is quite high, and the error seems to occur when Redis is under heavy load; not all requests give this error.

I tried to increase the Linux server limit with `ulimit -n 65535` and in limits.conf and sysctl.conf, but I still get the same error.

My server software is Ubuntu 22.04.3. My server hardware is 128GB RAM with a 128-core Intel(R) Xeon(R) Gold 6438Y+ CPU.

My redis.conf:

```
bind 192.168.10.20 127.0.0.1 ::1
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir /var/lib/redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass myPassword
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
oom-score-adj no
oom-score-adj-values 0 200 800
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
```
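"Cannot assign requested address" on connect() is typically a client-side symptom: each PHP request opens a fresh TCP connection, and under load the ephemeral port range fills with sockets in TIME_WAIT. A few checks worth running on the PHP host (a sketch, not a tuning recommendation):

```
# How many sockets are stuck in TIME_WAIT toward Redis:
ss -tan state time-wait '( dport = :6379 )' | wc -l

# Size of the ephemeral port range available for outgoing connections:
sysctl net.ipv4.ip_local_port_range

# Possible mitigation: allow TIME_WAIT reuse for outgoing connections...
sysctl -w net.ipv4.tcp_tw_reuse=1
# ...or, in the PHP code, switch connect() to pconnect() so php-fpm
# workers keep one persistent connection each instead of reconnecting.
```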
Sam Wiki (21 rep)
Dec 15, 2024, 08:32 PM • Last activity: Dec 18, 2024, 05:50 PM
0 votes
0 answers
42 views
How to restore a docker container redis dump file to Elasticache ( newly created )
**Situation:** I have a Redis docker container running on local with some data of different kinds: Set, Hash, GeoHash.

**Want to:** Move the data from the local Redis container to a newly created Elasticache instance (Cluster mode Disabled).

**Tried:**

**Approach 1:** Created an rdb file of the local docker container data with:

```
sudo docker exec -it <container> bash
redis-cli save
```

This creates dump.rdb in /data. Copied the rdb file from the container to local, then copied it to an EC2 instance running in the same VPC and subnets as the Elasticache. Ran:

```
cat dump.rdb | redis-cli -h <endpoint>.cache.amazonaws.com -p 6379 --pipe
```

This fails with:

> All data transferred. Waiting for the last reply...
> ERR unknown command 'REDIS0012', with args beginning with: 'redis-ver7.4.1'
> ERR unknown command 'redis-bits@ctime$used-mem', with args beginning with:
> Last reply received from server.
> errors: 2, replies: 2

**Approach 2:** While creating a new Elasticache instance, there is an option to restore from backup (see screenshot below); this also fails with the locally generated rdb file. It seems like this only works with Elasticache-generated backups. Don't know for sure, just a guess.

[screenshot: Elasticache "Restore from backup" option]

**Question:** How to take a backup of a docker-hosted Redis instance and create an Elasticache instance with the same data? Please help with this, thanks.
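Two notes that may help frame this: the --pipe failure is expected, because dump.rdb is a binary snapshot rather than a stream of RESP commands, so redis-cli ends up sending the file header ("REDIS0012") as if it were a command. ElastiCache can, however, seed a brand-new cluster from an RDB file staged in S3; a sketch with a hypothetical bucket name:

```
# Stage the RDB file in S3 and grant ElastiCache read access to it:
aws s3 cp dump.rdb s3://my-redis-migration/dump.rdb

# Create the new cluster seeded from that file:
aws elasticache create-cache-cluster \
  --cache-cluster-id restored-redis \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1 \
  --snapshot-arns arn:aws:s3:::my-redis-migration/dump.rdb
# Caveat: the source RDB version must not be newer than the target
# engine (a dump from Redis 7.4.1 will not load into an older engine).
```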
Harsh.K (1 rep)
Nov 2, 2024, 05:35 AM • Last activity: Nov 3, 2024, 06:21 PM
-1 votes
1 answer
97 views
How to Sum and Update a Value by Key in PostgreSQL Without Causing Table Bloat?
I'm working with a PostgreSQL database where I need to store a numerical value associated with a specific key. Over time, I will be continuously adding to this value based on the key. I want to ensure that the table doesn't get bloated with multiple row versions or dead tuples, especially as this update operation will be frequent (like 100 req/s).

1. What are the best practices in PostgreSQL to accomplish this?
2. Should I use INSERT ON CONFLICT, a trigger, or another approach?
3. How can I ensure that my table remains efficient and doesn't suffer from excessive bloat due to frequent updates?
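For reference, here is a minimal sketch of the upsert-accumulate pattern the second question alludes to, with hypothetical table and column names; a lowered fillfactor leaves room on each page so frequent updates can stay HOT (heap-only tuples) and create less index churn:

```
psql <<'SQL'
-- One row per key; updates add to the running total.
CREATE TABLE IF NOT EXISTS counters (
    key   text   PRIMARY KEY,
    value bigint NOT NULL DEFAULT 0
) WITH (fillfactor = 70);   -- spare page space enables HOT updates

-- Atomic "insert or add" in a single statement:
INSERT INTO counters (key, value)
VALUES ('clicks:2024-08-27', 5)
ON CONFLICT (key) DO UPDATE
SET value = counters.value + EXCLUDED.value;
SQL
```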
Andrei (111 rep)
Aug 27, 2024, 06:05 PM • Last activity: Aug 28, 2024, 07:33 AM
0 votes
1 answer
339 views
Is it correct to use 2 instances of Redis server one with persistence while the second with no persistence?
I'm starting a new project. In this project I'll use Redis to store some data which must be persistent. For these data I'll use `Append Only File (AOF)` persistence.

In the project there are 2 applications that must communicate with each other. To allow this communication I'm thinking of using Redis as a *"shared memory"*. With the term *"shared memory"* I mean that the use of Redis is:

- to store some keys (called keyA1, keyA2) that will be updated by application A
- to store other keys updated by application B (called keyB1, keyB2)
- application A reads the keys updated by application B (keyB1, keyB2)
- application B reads the keys updated by application A (keyA1, keyA2)

This group of keys (keyA1, keyA2, keyB1, keyB2) must not be saved on persistent memory. My idea is *to use Redis to avoid creating a TCP communication protocol between application A and application B* (but I have a doubt that this use of Redis may not be correct). Each of these 2 applications sends messages to the other and receives messages from the other by *writing and reading keys of the Redis database*.

[This post](https://stackoverflow.com/questions/75291286/is-it-possible-to-enable-persistance-for-some-part-of-the-redis-data) says that it is not possible to activate persistence for only a subset of database keys, so I'm thinking of using 2 instances of Redis.

Is it correct to use 2 instances of Redis server (one on the default port 6379 and the other on port 6380) and set up the first with AOF persistence and the second with no persistence? Is there a better approach? Thanks
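Running two independent redis-server processes with different persistence settings is a common way to get this split. A minimal sketch of the two config files (paths and settings are illustrative):

```
# Instance 1 (port 6379): durable, AOF enabled.
cat > /etc/redis/redis-6379.conf <<'EOF'
port 6379
appendonly yes
appendfsync everysec
EOF

# Instance 2 (port 6380): ephemeral, no AOF and no RDB snapshots.
cat > /etc/redis/redis-6380.conf <<'EOF'
port 6380
appendonly no
save ""
EOF

redis-server /etc/redis/redis-6379.conf
redis-server /etc/redis/redis-6380.conf
```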
User051209 (101 rep)
May 10, 2023, 07:43 AM • Last activity: Jul 8, 2024, 07:14 AM
0 votes
1 answer
436 views
Strategy for Postgres updates with very high write load, but low consistency requirements
I need to collect statistics on profile thumbnail views for a social media app, backed by Postgres. Every time the thumbnail of a user appears anywhere on a page (in a list of profiles, or next to comments on a post, etc.), the view count for the userId corresponding to the thumbnail should be incremented.

Obviously it would be horrendously taxing on the server to issue a count update for every userId that is ever returned from a query. So I am considering a couple of different ways of handling this. Feedback would be greatly appreciated:

1. I could update the counts in an in-memory cache, e.g. Redis, and use this as a write-through cache, which slowly in the background dumps count updates to Postgres, in batches of some fixed size. I'm pretty sure this system could never keep up with the update rate, but it would alleviate a lot of load from Postgres. Maybe the largest count changes could be prioritized to be written through to Postgres first. One downside is that if the server had to be restarted, unwritten counts would be lost (which is not great but not terrible in this case, since it's only a view statistic.)

2. I could update the view count only some small percentage of the time, e.g. for 1% of views; in other words, sample the views. This would give me a reasonable estimate over time of the actual view rate for large view counts, but would underrepresent small view counts.

Any other ideas?
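A sketch of what option 1 can look like at the command level, with hypothetical key and table names: a Redis hash accumulates deltas cheaply, and a periodic job folds them into Postgres in one statement.

```
# On every thumbnail view (user id 12345), increment an in-memory delta:
redis-cli HINCRBY thumb_view_deltas 12345 1

# Periodic flush job: atomically swap the hash out, then batch-apply it.
redis-cli RENAME thumb_view_deltas thumb_view_deltas_flushing
redis-cli HGETALL thumb_view_deltas_flushing
# ...format the id/count pairs into a single statement such as:
# UPDATE profiles p SET views = p.views + v.n
#   FROM (VALUES (12345, 17), (67890, 3)) AS v(id, n)
#   WHERE p.id = v.id;
redis-cli DEL thumb_view_deltas_flushing
```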
Luke Hutchison (141 rep)
Feb 8, 2024, 07:19 PM • Last activity: Feb 17, 2024, 10:45 AM
0 votes
1 answer
107 views
Hashes vs key scan in REDIS
My team is using Redis for caching and I want to cache the following structure:

```
inbox --(1 to many)-> folders --(1 to many)-> messages
```

It's just a simple tree with no nesting, and each inbox/folder/message is just a single `int`/`long`. I more often need to get all messages in a single folder, but sometimes also need to get all messages in a whole inbox (all the folders). For the Redis data structure, I'm considering the following 2 options.

**1. Storing messages in a list & using SCAN**

```
key = "inbox_id:folder_id"
value = [message_ids]

fetch_folder(): GET inbox_id:folder_id
fetch_inbox(): SCAN inbox_id:* | GET resulting_ids
// I'm not sure how to do fetch_inbox in just Redis yet, so pipe for now
```

**2. Having a hashmap per inbox**

```
inbox_id = hashmap { key = folder_id, value = "message_ids_as_string" }

fetch_folder(): HMGET inbox_id folder_id
fetch_inbox(): HVALS inbox_id
```

Which storage & access pattern would be more performant, assuming 10-100 folders per inbox & 100-10_000 messages per folder?
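A quick way to see the difference is that option 2 answers both queries with single commands against one key, while option 1's fetch_inbox has to SCAN the entire keyspace and filter by pattern. A sketch with hypothetical ids:

```
# Option 2: one hash per inbox.
redis-cli HSET inbox:42 folder:1 "101,102,103" folder:2 "201,202"
redis-cli HGET  inbox:42 folder:1    # fetch_folder
redis-cli HVALS inbox:42             # fetch_inbox

# Option 1: fetch_inbox must walk the whole keyspace with a pattern:
redis-cli --scan --pattern 'inbox:42:*'
```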
ChocolateOverflow (99 rep)
Sep 30, 2023, 02:38 PM • Last activity: Jan 13, 2024, 01:58 PM
2 votes
1 answer
882 views
Redis master and slave use different memory
I deployed a Redis replica set which consists of one master and one slave. From the `top` command output, I figured out each one's memory usage:

- master: 8176352 KiB * 8.5% = 678 MB
- slave: 2050116 KiB * 75.6% = 1513 MB

Why does the slave use much more memory than the master?

Master `top` output:

```
KiB Mem:   8176352 total,  7746300 used,   430052 free,    55048 buffers
KiB Swap:  8385892 total,  2360512 used,  6025380 free.  1279644 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
32609 redis     20   0 1717424 696708    928 S   0.3  8.5 304:22.75 redis-server
```

Slave `top` output:

```
KiB Mem:   2050116 total,  1977052 used,    73064 free,     3620 buffers
KiB Swap:  2096444 total,   259100 used,  1837344 free.   165552 cached Mem

 PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
1675 redis     20   0 1725616 1.477g    672 S  0.0 75.6   0:23.03 redis-server
```
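One thing top alone cannot show: RES includes allocator fragmentation and copy-on-write overhead from BGSAVE/replication forks, so comparing the logical dataset size on both nodes is more informative. A sketch (hostnames are placeholders):

```
# used_memory is the dataset as Redis sees it; used_memory_rss is what
# the OS has resident; their ratio exposes fragmentation/COW overhead.
redis-cli -h master-host info memory | \
  grep -E 'used_memory_human|used_memory_rss_human|mem_fragmentation_ratio'
redis-cli -h slave-host  info memory | \
  grep -E 'used_memory_human|used_memory_rss_human|mem_fragmentation_ratio'
```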
gzc (323 rep)
Oct 28, 2015, 12:03 PM • Last activity: Sep 13, 2023, 05:05 PM
1 vote
1 answer
740 views
How to make Redis work from disk? Not about persistence
There are chances I selected the wrong tool for this task; feel free to suggest if you know a better one. The task: I need to store key-value pairs.

* Each pair is 60 bytes. The keys are IDs; both keys and values are pretty random, or at least unstructured.
* There are ~2 billion such pairs, which makes ~110 GB of pure data in total. Good chances for growth in the future.
* The write load is heavy, and the read load is rather light.
* It'd be nice to have performance of 1K IOPS for writing, but maybe it's just a dream and I'll have to go with something slower but less expensive.
* I can batch writes, but the keys won't be sequential (like 123, 7, 13565 instead of 1, 2, 3).
* No fancy search is needed; just give the value for the complete key.
* I'm on AWS, if it matters, but can switch for a really good solution.
* Cost matters.

Redis is a key-value store, so I thought to use it, but keeping such a big DB in memory is cost-prohibitive, so I'd like to configure Redis in a way where it will take the data from memory as a cache, and when there's a cache miss, from the disk. So it is not about Redis persistence as a backup.

Besides Redis I have tried:

* plain files in a directory tree, like key='abcdef' => ab/cd/ef; ext2, BtrFS; tried to distribute writes across 16 partitions (terrible performance after ~0.5M pairs)
* MySQL (silently died)

Also I thought about AWS Elasticache Redis with Data Tiering, but the cheapest instance has an insane cost for me (~$600/mo).

How can I achieve that?
Putnik (295 rep)
Jul 26, 2023, 12:49 PM • Last activity: Jul 27, 2023, 12:51 PM
1 vote
0 answers
870 views
what is a reliable way to push data from postgres into redis queue's?
We need a way to process the records in a table asynchronously. We were processing the records in an AFTER INSERT trigger, but that is affecting insert performance and the back pressure is affecting the entire system.

So the initial solution we came up with was to use the Postgres NOTIFY/LISTEN framework and push the record ids into a Redis queue to process them asynchronously. But we need some guarantees which don't seem possible with this, mainly around the dropping of notifications if there are no active listeners: if the listening process crashes for some reason, the notifications will be lost, which is a critical problem for us.

Periodically reading the table and inserting into Redis is also not an option, as the events need to be processed as soon as they come in, and if the interval is too small, that might put more load on the db.

The other solution that could work was to use an FDW, but currently available implementations of FDWs for Redis don't seem to support lists. Surprisingly there isn't as much information about Postgres-to-Redis interfacing as I thought there would be.

Is there a design which can guarantee no loss of events and asynchronous processing?
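One pattern that gives the missing guarantee is a queue table drained with SKIP LOCKED: the trigger only inserts a row (cheap), nothing is lost while the worker is down, and NOTIFY can still serve purely as a wake-up signal. A sketch with hypothetical names:

```
psql <<'SQL'
CREATE TABLE IF NOT EXISTS outbox (
    id        bigserial PRIMARY KEY,
    record_id bigint NOT NULL
);

-- Worker loop: claim a batch, hand the ids to Redis, then commit.
-- SKIP LOCKED lets several workers drain in parallel without blocking.
DELETE FROM outbox
WHERE id IN (
    SELECT id FROM outbox
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 100
)
RETURNING record_id;
SQL
```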
Adithya Sama (111 rep)
Oct 13, 2022, 08:39 AM
2 votes
0 answers
383 views
Sensor data from Postgres to Redis
I have a Postgres database with a table that is updated with new data every 5 minutes from a weather station. Now, as soon as the table is updated with a new record from the sensor, I would like this new record to be sent to a Redis Stream database. Please note the following:

- Obviously, I could make the weather station send data to both Postgres and Redis, but this is not what I want.
- Using Kafka is not an option for my case (I would like to avoid Java).
- A delay of a few seconds (less than 5 seconds) would be acceptable.
- Probably a cron job would be the easiest solution, but I am looking for something more elegant if possible.

I am not an expert in databases. I am just trying to understand what some realistic options are and how *change data capture* could work in my setting.
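For the trigger-based flavour of change data capture, the usual shape is an AFTER INSERT trigger that calls pg_notify, plus a small always-on listener that performs the XADD into the Redis stream; that keeps latency well under the 5-second budget. A sketch with hypothetical table and channel names (EXECUTE FUNCTION needs Postgres 11+):

```
psql <<'SQL'
CREATE OR REPLACE FUNCTION weather_notify() RETURNS trigger AS $$
BEGIN
    -- Payload is the new row's id; the listener fetches the row
    -- and issues XADD against the Redis stream.
    PERFORM pg_notify('weather_new', NEW.id::text);
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER weather_notify_trg
AFTER INSERT ON weather_readings
FOR EACH ROW EXECUTE FUNCTION weather_notify();
SQL
```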
user17326436 (21 rep)
Sep 12, 2022, 07:06 AM • Last activity: Sep 12, 2022, 12:22 PM
6 votes
1 answer
21631 views
WRONGPASS invalid username-password pair or user is disabled when connect to redis 6.0+
I am trying to connect to Redis 6.0+. This is the Python Celery Redis broker URL I have configured now:

```
broker_url = redis://:default:123456@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5
celery_result_backend = redis://:default:123456@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5
```

But when I start Celery using this command:

```
root@pydolphin-service-6fc4b98f54-msfql:~/pydolphin# celery -A dolphin.tasks.tasks worker --loglevel=INFO -n worker2 -Q non_editor_pick_and_diff_pull --concurrency 2
/usr/local/lib/python3.9/site-packages/celery/platforms.py:834: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!

Please specify a different user using the --uid option.

User information: uid=0 euid=0 gid=0 egid=0

  warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(

 -------------- celery@worker2 v5.1.2 (sun-harmonics)
--- ***** -----
-- ******* ---- Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.28 2021-08-08 20:45:35
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7fb4f6c2dd90
- ** ---------- .> transport:   redis://:**@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5
- ** ---------- .> results:     redis://:**@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> non_editor_pick_and_diff_pull exchange=non_editor_pick_and_diff_pull(direct) key=non_editor_pick_and_diff_pull

[tasks]
  . pydolphin.dolphin.tasks.cert-tasks
  . pydolphin.dolphin.tasks.tasks

[2021-08-08 20:45:35,540: ERROR/MainProcess] consumer: Cannot connect to redis://:**@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5: WRONGPASS invalid username-password pair or user is disabled.. Trying again in 2.00 seconds... (1/100)
[2021-08-08 20:45:37,547: ERROR/MainProcess] consumer: Cannot connect to redis://:**@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5: WRONGPASS invalid username-password pair or user is disabled.. Trying again in 4.00 seconds... (2/100)
^C
worker: Hitting Ctrl+C again will terminate all running tasks!

worker: Warm shutdown (MainProcess)
```

I am sure the username and password are correct because I can log in using the redis-cli command. Where is the problem, and what should I do to fix it?
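Worth noting when reading this: the broker URL has a stray colon before `default`, so a URL parser treats the username as empty and `default:123456` as the password, which would produce exactly this WRONGPASS. A sketch of the likely-intended form, plus a way to verify the credentials outside Celery:

```
# Scheme is redis://user:password@host:port/db -- no colon before "default":
broker_url="redis://default:123456@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/5"

# Verify the ACL pair directly (redis-cli 6.0+):
redis-cli -h cruise-redis-headless.reddwarf-cache.svc.cluster.local \
          --user default --pass 123456 ping
```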
Dolphin (939 rep)
Aug 8, 2021, 12:55 PM • Last activity: Jul 25, 2022, 04:39 PM
0 votes
2 answers
39 views
.NET updates on Postgresql Windows hosts
I started managing PostgreSQL and Redis recently. I have a PostgreSQL DB cluster and Redis installed on Windows Server 2019. I would like to know if PostgreSQL and Redis have dependencies on the .NET Framework.

1. Is there any .NET Framework prerequisite to install Postgres 14 on Windows 2019?
2. If there are no dependencies between the .NET Framework and PostgreSQL, I would allow .NET updates to be applied automatically along with the monthly OS patches.

I am from a SQL Server background. MSSQL and .NET have tight dependencies, so I won't allow automatic .NET patches until I verify those patches in a lower environment and make sure there are no breaking changes. I would like to know the same for Postgres and Redis. I could not see it in the software requirements in the product documentation, hence seeking help from the community. Thanks
udhayan dharmalingam (383 rep)
Mar 17, 2022, 03:05 PM • Last activity: Apr 4, 2022, 01:00 PM
1 vote
1 answer
512 views
is it possible to list all group in redis 6.0+
Now I have encountered a problem in my app that shows like this:

```
Caused by: redis.clients.jedis.exceptions.JedisDataException: NOGROUP No such key 'pydolphin:stream:article' or consumer group 'pydolphin:stream:group:article' in XREADGROUP with GROUP option
	at redis.clients.jedis.Protocol.processError(Protocol.java:135) ~[jedis-3.6.0.jar!/:?]
	at redis.clients.jedis.Protocol.process(Protocol.java:169) ~[jedis-3.6.0.jar!/:?]
	at redis.clients.jedis.Protocol.read(Protocol.java:223) ~[jedis-3.6.0.jar!/:?]
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:352) ~[jedis-3.6.0.jar!/:?]
	at redis.clients.jedis.Connection.getBinaryMultiBulkReply(Connection.java:304) ~[jedis-3.6.0.jar!/:?]
	at redis.clients.jedis.BinaryJedis.xreadGroup(BinaryJedis.java:4781) ~[jedis-3.6.0.jar!/:?]
	at org.springframework.data.redis.connection.jedis.JedisStreamCommands.lambda$xReadGroup$17(JedisStreamCommands.java:364) ~[spring-data-redis-2.5.0.jar!/:2.5.0]
	at org.springframework.data.redis.connection.jedis.JedisConnection.lambda$doInvoke$2(JedisConnection.java:176) ~[spring-data-redis-2.5.0.jar!/:2.5.0]
	at org.springframework.data.redis.connection.jedis.JedisConnection.doWithJedis(JedisConnection.java:799) ~[spring-data-redis-2.5.0.jar!/:2.5.0]
	... 17 more
```

Now I want to see how many groups are in Redis. To my surprise, I could not find any command to do this after reading the XGROUP documentation. Is it possible to list all groups in Redis 6.0+?
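Since there is no global command for this (consumer groups belong to individual streams), one workaround is to enumerate the stream keys and ask each for its groups. A sketch using the key pattern from the error message:

```
# List groups for every stream matching the application's prefix:
redis-cli --scan --pattern 'pydolphin:stream:*' | while read -r key; do
    echo "== $key"
    redis-cli XINFO GROUPS "$key"
done
```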
Dolphin (939 rep)
Aug 9, 2021, 09:56 AM • Last activity: Jan 19, 2022, 11:14 AM
1 vote
1 answer
12144 views
how to login redis 6.0+ use username and password
I found that Redis 6.0+ added ACLs. Now I want to log in using the default user like this:

```
I have no name!@cruise-redis-master-0:/$ redis-cli -h 127.0.0.1 -a doGT233U7 -u default
Invalid URI scheme
```

I read the docs but did not find any command to log in to Redis with a username and password. What should I do to log in to the new versions of Redis, 6.0+?
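For context on the error: `-u` expects a complete URI, not a username, hence "Invalid URI scheme". The usual login forms on redis-cli 6.0+ are:

```
# 1. Full URI:
redis-cli -u redis://default:doGT233U7@127.0.0.1:6379

# 2. Dedicated flags:
redis-cli -h 127.0.0.1 --user default --pass doGT233U7

# 3. Authenticate after connecting:
redis-cli -h 127.0.0.1
# 127.0.0.1:6379> AUTH default doGT233U7
```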
Dolphin (939 rep)
Aug 8, 2021, 11:54 AM • Last activity: Jan 18, 2022, 04:05 PM
0 votes
1 answer
4804 views
How to Upgrade Redis Database from 5.x to 6.x
This instance was running Redis version 5.0.8 open source. The Redis packages have been updated to the current 6.0.3, and now the server reports `Loading RDB produced by version 5.0.8` on startup. What is the proper method for upgrading the RDB to the version of the server? I have checked the [admin documentation](https://redis.io/topics/admin) and there's nothing mentioned. Also, I checked the [6.0 release notes](https://raw.githubusercontent.com/antirez/redis/6.0/00-RELEASENOTES) and nothing there either.
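For readers hitting the same message: it is informational rather than an error. Newer Redis servers read RDB files written by older versions natively, and the file is rewritten in the server's current format on the next save. A quick sketch to confirm:

```
# Trigger a background save and confirm it succeeded; the resulting
# dump.rdb is now in the 6.0.3 server's own RDB format.
redis-cli BGSAVE
redis-cli INFO persistence | grep rdb_last_bgsave_status
```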
Utkonos (111 rep)
May 25, 2020, 02:15 PM • Last activity: Jun 10, 2021, 08:07 PM
0 votes
1 answer
172 views
Turning off Append only and snapshot on Redis
I want to turn off data persistence on my Redis master by turning off appendonly and snapshots; I want to use the replica for persistence. Will turning off these two have any effect on replication?
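For reference, a sketch of disabling both persistence mechanisms at runtime; replication is unaffected because it streams from memory, not from the AOF/RDB files, but note the restart caveat in the comments:

```
redis-cli CONFIG SET appendonly no   # turn off AOF
redis-cli CONFIG SET save ""         # turn off RDB snapshots
redis-cli CONFIG REWRITE             # persist the change to redis.conf

# Caveat: if this master restarts, it comes back empty; replicas that
# then resync will replicate the empty dataset and drop their copies.
```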
DB guy (69 rep)
Apr 2, 2021, 12:51 PM • Last activity: Apr 2, 2021, 02:33 PM