
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
23 views
Disable read-ahead caching for GFS2 Logical Volume
I have a 10-node deployment that uses the Red Hat clustering stack (pacemaker/corosync) to mount GFS2 and ensure high availability. The nodes are mail servers and use GFS2 to store user data in Maildir format. On each mail server I have the following GFS2 setup:

root@mail ~# lsblk | grep gfs2
└─vg_1-lv_1 253:4 0 2T 0 lvm /mnt/gfs2_1
└─vg_2-lv_2 253:16 0 2T 0 lvm /mnt/gfs2_2
└─vg_3-lv_3 253:7 0 2T 0 lvm /mnt/gfs2_3
└─vg_4-lv_4 253:15 0 2T 0 lvm /mnt/gfs2_4
└─vg_5-lv_5 253:13 0 2T 0 lvm /mnt/gfs2_5
└─vg_6-lv_6 253:12 0 2T 0 lvm /mnt/gfs2_6
└─vg_7-lv_7 253:10 0 2T 0 lvm /mnt/gfs2_7
└─vg_8-lv_8 253:9 0 2T 0 lvm /mnt/gfs2_8
└─vg_9-lv_9 253:6 0 2T 0 lvm /mnt/gfs2_9
└─vg_10-lv_10 253:11 0 2T 0 lvm /mnt/gfs2_10

Now, I am not very pleased with the performance, and my thinking is to remove the local read-ahead cache on my GFS2 devices. Yes, GFS2 uses a local cache to serve clients more quickly, but since this information is not synced across all nodes, and we are not pinning a single user to a single server, I am not sure it makes sense in terms of helping performance. Also, we are using DLM to force flushing of outdated cached data across nodes. With all this said, I am still not sure whether this is the right move, and I am looking for advice. 1) Is my thinking right: will this improve my filesystem performance, or quite the contrary? 2) Do you have any other advice that would improve performance? Thank you in advance.
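For experimenting with this, read-ahead can be inspected and changed per logical volume; a minimal sketch assuming the vg_N/lv_N names from the lsblk output above (whether disabling it actually helps is exactly the open question):

# Show the current read-ahead setting (in 512-byte sectors) for one GFS2 LV
blockdev --getra /dev/vg_1/lv_1

# Disable kernel read-ahead for that device (not persistent across reboot);
# repeat for each LV
blockdev --setra 0 /dev/vg_1/lv_1

# Alternatively, let LVM store the setting in its metadata:
# "none" disables read-ahead, "auto" restores the default behaviour
lvchange --readahead none vg_1/lv_1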
brchelli26 (1 rep)
Feb 17, 2025, 08:29 AM • Last activity: Feb 17, 2025, 08:45 AM
0 votes
1 answer
2254 views
Linux Ubuntu, NFS for logs and cache, NFS or GFS
My company has no DevOps person and I am a developer, so I hardly know DevOps best practices; please forgive any mistakes. I have two Ubuntu machines. Each of them runs the exact SAME web server and hence produces access logs. Both run a Ruby on Rails application behind nginx. **Problem:** I need a common place for viewing access logs and web application logs, and the file cache should of course be shared by both web servers. **The solution I have come to**: have a separate server that stores the cache and logs over NFS, so both web servers act as clients and write their logs and cache to the NFS server. I have also heard a little about GFS. The cache is hardly written more than once an hour or so, but logs are written every second. So what should I use for this problem: NFS, GFS, or both? Which will give the best performance? From my research, some people say to use GFS, but I want to know why GFS would be better than NFS in my case.
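For scale, the NFS side of such a setup is only a few lines; a hedged sketch assuming a dedicated storage host named logserver exporting /srv/shared (both names are illustrative, not from the question):

# On the NFS server (logserver): export the shared directory to both web servers
# /etc/exports
/srv/shared  10.0.0.0/24(rw,sync,no_subtree_check)

# Re-read the export table on the server
exportfs -ra

# On each web server: mount the share where nginx/Rails write logs and cache
mount -t nfs logserver:/srv/shared /mnt/shared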
user1735921 (95 rep)
Sep 30, 2017, 06:43 AM • Last activity: Jul 10, 2023, 12:04 AM
1 vote
0 answers
174 views
Is it possible to mount an ATA over Ethernet (AOE) block device on multiple clients, if so how?
I have a lab consisting of 3 machines connected with two 10 GbE links on two segregated networks. Each machine has 100 TB of block storage attached to it. I want to use ATA over Ethernet to create a storage cluster which can be accessed from all the machines at once, with each machine acting as both target and initiator at the same time. I want to be able to mount the target file system on multiple initiators. I do not need to write to the filesystem from multiple systems at once, but I do want to read from it from many at once. I have thought about creating cache devices, though I would like to avoid this if possible. I know this is a rather complex problem, but I feel like I'm missing something, as it seems like what I want should be possible. I have been playing around but have not come up with a good plan for how to do this. I've set up a target using vblade and accessed it from a separate system; however, I'm unable to access it from the host, and if I access it from two initiators at once it seems to become corrupted, of course. Additionally, I am not sure what filesystem I can use that will allow what I want without becoming corrupted. I tried btrfs, xfs, zfs... I am thinking maybe I am going about it wrong, so I thought I'd write this post and see if anyone can share some ideas. I think I need to use GFS but I haven't been able to get it set up correctly. **So my question is: is it possible to access the same disk from two systems at the same time using AoE?** *Relevant stuff:*
- I do not want to use NFS, Samba, SSH, or any of those options.
- Security is not a concern; it is on a closed, internal, air-gapped network.
- Yes, I've searched, but it is hard to find relevant info.
- The systems each have two 10 GbE links: one for TCP/IP, one for ATA over Ethernet.
- I'm open to suggestions, but I don't especially want to have to code a new solution, and I want to stick with open source software.
- I apologize in advance for my poorly written question.
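For reference, the AoE export/discovery plumbing itself is short; a sketch assuming the vblade and aoetools packages, with eth1, /dev/sdb, and shelf 0 / slot 1 as placeholder names, and with the caveat that concurrent mounting still requires a cluster filesystem on top:

# On the machine exporting the storage: serve /dev/sdb as AoE shelf 0, slot 1 on eth1
vbladed 0 1 eth1 /dev/sdb

# On each initiator: load the aoe driver and discover exported devices
modprobe aoe
aoe-discover
aoe-stat            # the device should appear as /dev/etherd/e0.1

# Mounting /dev/etherd/e0.1 with ext4/xfs/btrfs on two initiators at once
# will corrupt it; a shared-disk filesystem (GFS2, OCFS2) with its own
# distributed locking (dlm) is what allows concurrent access.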
Tim (111 rep)
Sep 6, 2021, 07:22 PM • Last activity: Sep 6, 2021, 07:39 PM
0 votes
1 answer
106 views
GFS2 File system Support End Date(S) with RHEL6 Server
Could anyone let me know the end-of-life support date(s) for the GFS2 file system on RHEL 6? Raising a ticket with Red Hat is an option for existing partners to get this information; for me, it is not. Thanks. /Franco
Franco (1 rep)
May 21, 2020, 12:47 PM • Last activity: May 21, 2020, 04:13 PM
1 vote
1 answer
1181 views
CentOS 8 - Clustered File System
In my environment, I need a shared disk between two application servers such that changes on Server A are immediately available on Server B. Historically, I have solved this by sharing a GFS2 volume using multipathed disks stored on our enterprise filer and attached through our virtualization solution. This configuration requires fencing of the GFS2 nodes in the cluster, so I have used pacemaker to handle the fencing for GFS2 in the event that a node dies or becomes unhealthy, preventing corruption of the file system, by configuring [stonith](https://en.wikipedia.org/wiki/STONITH) to use [SBD fencing](http://linux-ha.org/wiki/SBD_Fencing). While gfs2-utils and fence-agents-sbd are available for CentOS, the pcs command is not available as of CentOS 8.0 and it appears that it may [never be available](https://bugs.centos.org/view.php?id=16469) in the main repos. This is problematic, as pcs was integral to configuring this in CentOS 7. This leaves me wondering:
- What can I do as an alternative for fencing the volume without having to compile the application from source (ongoing security updates and bug fixes are a requirement)?
- If nothing, what can I use to provide a distributed and redundant storage solution in CentOS 8? An NFS server would be out of the question, as a failure of the file server would take both application servers offline.
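For context, the CentOS 7 era configuration the question refers to reduces to a handful of pcs calls; a rough sketch only (the fencing device path and agent options are placeholders, not a tested recipe):

# CentOS 7-style setup being replaced: SBD-backed fencing plus the
# dlm/clvmd clones that a GFS2 mount depends on (names are placeholders)
pcs stonith create sbd-fence fence_sbd devices=/dev/disk/by-id/shared-sbd-disk

# dlm and clvmd must run on every node before GFS2 can be mounted
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s \
    on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s \
    on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone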
James Shewey (1186 rep)
Nov 18, 2019, 10:18 PM • Last activity: Mar 6, 2020, 08:06 PM
0 votes
1 answer
784 views
Performing maintenance on GFS2 inside pacemaker cluster
I will need to perform maintenance on one of the storage servers that provides a GFS2 volume to a three-node pacemaker cluster. The same cluster has two additional GFS2 volumes as well. Would it be safe to run pcs resource disable on the GFS2 resource that needs to be stopped for maintenance, without risking the other GFS2 volumes being stopped or the cluster possibly being fenced? These are the constraints:

Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
  start clvmd-clone then start gfs2-ISO-clone (kind:Mandatory)
  start clvmd-clone then start gfs2-shared-clone (kind:Mandatory)
  start clvmd-clone then start gfs2-qcow-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
  gfs2-ISO-clone with clvmd-clone (score:INFINITY)
  gfs2-shared-clone with clvmd-clone (score:INFINITY)
  gfs2-qcow-clone with clvmd-clone (score:INFINITY)

The volume I would like to stop is gfs2-qcow, that is, gfs2-qcow-clone. If I run pcs resource disable gfs2-qcow-clone, will the other GFS2 volumes die?
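A hedged sketch of one way to check this before the maintenance window; the flags are standard pcs options and the resource names come from the question:

# Review the constraints: in the list above nothing is ordered or colocated
# after gfs2-qcow-clone itself, so disabling it should not cascade to the
# other GFS2 clones.
pcs constraint --full

# Disable just that clone and block until the stop completes (or times out)
pcs resource disable gfs2-qcow-clone --wait=120

# Confirm the other GFS2 clones are still started on all nodes
pcs status resources

# After the maintenance is done
pcs resource enable gfs2-qcow-clone --wait=120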
Marko Todoric (437 rep)
Aug 22, 2019, 11:18 AM • Last activity: Aug 30, 2019, 02:56 PM
1 vote
1 answer
262 views
Is there a way to achieve context-dependent path names (CDPN) on NFS?
In my GFS clusters I use the [CDPN](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Global_File_System/s1-manage-pathnames.html) feature to have separate chrooted /dev/log directories on separate cluster nodes:
/home/ftpuser/foo:
lrwxrwxrwx   1 root root   18 Sep 26  2010 dev -> .sys/@hostname/dev

/home/ftpuser/foo/.sys:
drwx--x--x 3 root root 3864 Sep 26  2010 server1.example.com
drwx--x--x 3 root root 3864 Sep 26  2010 server2.example.com
drwx--x--x 3 root root 3864 Sep 26  2010 server3.example.com

/home/ftpuser/foo/.sys/server2.example.com:
drwx--x--x 2 root root 3864 Sep 25 09:34 dev

/home/ftpuser/foo/.sys/server2.example.com/dev:
srw-rw-rw- 1 root root    0 Sep 25 09:23 log

/home/ftpuser/foo/dev: (transparently picking 1 subdir depending on node name)
srw-rw-rw- 1 root root    0 Sep 25 09:23 log
I use this so the rsyslog daemons on the nodes don't interfere with each other. It works because @hostname in a path is replaced with the hostname of the host that interprets it, so different hosts get a different directory. The clusters are active on all nodes simultaneously. My questions:
* Is there a way to get corresponding functionality on an NFS share?
* Could it in theory be implemented in the Linux kernel on all filesystems (via a mount option, so it doesn't break stuff by default)?
This question is similar but not identical to this one: https://unix.stackexchange.com/questions/134815/nfs-file-with-same-name-but-different-content-depending-on-host
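One client-side workaround that approximates this on NFS (not CDPN itself, just a sketch of a common substitute): bind mounts are resolved per client, so each node can overlay its own hostname-specific subdirectory on the shared path. It assumes dev on the NFS share is a plain directory rather than the @hostname symlink, which NFS would not expand.

# Run on every NFS client (from a boot script or systemd unit):
# overlay the shared "dev" path with this node's private subdirectory
mount --bind "/home/ftpuser/foo/.sys/$(hostname -f)/dev" /home/ftpuser/foo/dev

# Equivalent /etc/fstab entry; the hostname has to be expanded at boot,
# so a small generator script or templated unit is usually needed:
# /home/ftpuser/foo/.sys/server2.example.com/dev  /home/ftpuser/foo/dev  none  bind  0 0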
MattBianco (3806 rep)
Nov 10, 2014, 10:54 AM • Last activity: Dec 30, 2014, 03:56 PM
1 vote
0 answers
120 views
Probably a GFS2 problem
We have a mail server (CentOS 6.5) with these components:
- Postfix
- Dovecot
- MySQL
- MailScanner (SpamAssassin, ClamAV)
- GFS2 partition

Last night the server crashed, and these are the kernel logs from just before:

May 11 01:07:01 mail1 kernel: original: gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: pid: 24608
May 11 01:07:01 mail1 kernel: lock type: 2 req lock state : 1
May 11 01:07:01 mail1 kernel: new: gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: pid: 24608
May 11 01:07:01 mail1 kernel: lock type: 2 req lock state : 1
May 11 01:07:01 mail1 kernel: G: s:EX n:2/28b165f5 f:yfIqob t:EX d:EX/0 a:1 v:0 r:4 m:200
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]
May 11 01:07:01 mail1 kernel: H: s:EX f:W e:0 p:24608 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]

I found a possible solution in the post titled [/var/log/messages shows lines like "H: s:EX f:W e:0 p:19456 [imap] gfs2_fallocate+0xdc/0x540 [gfs2]" repeated many times and GFS2 file system becomes inaccessible in RHEL 6][1], but we're not a Red Hat subscriber. In the Dovecot logs I found the message:

> MySQL server has gone away

**NOTE:** The MySQL data is presently stored on that GFS2 partition. Does anyone have any ideas on how to resolve this problem?
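For anyone chasing messages like these, GFS2 exposes its glock state through debugfs, which can help identify the blocked holder the next time the hang builds up; a small sketch (the directory name under /sys/kernel/debug/gfs2 is the filesystem's lock table name, clustername:fsname):

# Mount debugfs if it is not already mounted
mount -t debugfs none /sys/kernel/debug

# Dump the glock table for the GFS2 filesystem(s) to a timestamped file
cat /sys/kernel/debug/gfs2/*/glocks > /tmp/glocks.$(date +%s)

# Waiting holders appear as "H:" lines with an "f:W" flag, like the ones in
# the kernel log above; the pid (p:24608, an imap process here) identifies
# the process holding or waiting on the exclusive (EX) lock.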
Sassan torabkheslat (150 rep)
May 11, 2014, 02:40 PM • Last activity: May 11, 2014, 06:03 PM
6 votes
1 answer
2116 views
Is `ln` atomic and reliable on NFS? Could NFS replace GFS in this use case?
I have a cluster with a bunch of servers with a shared disk containing a GFS global file system that all nodes access simultaneously. Each node in the cluster runs the same program (a shell script is the main core). The system processes files that appear in a couple of input directories, and it works like this:
* the program loops through the input directories.
* for each file found, check for the existence of a "lock file"; if the lock file exists, skip to the next file.
* if no lock file is found, create the lock file. If lock file creation failed (race lost), skip to the next file.
* if "we" own the lock, process the file and move it out of the way when it is finished.

This all works very well, but I wonder if there are cheaper (less complex) solutions that would also work. I'm thinking NFS or SMB perhaps. There are two reasons for my use of GFS:
1. each file is stored in one place only (on redundant underlying hardware, of course)
2. file locking works reliably

I create the lockfile like this:

date '+%s:'${unid} > ${currlock}.${unid}
ln ${currlock}.${unid} ${currlock}
lockrc=$?
rm -f ${currlock}.${unid}

where $unid is a unique session identifier and $currlock is /gfs/tmp/lock.${file_to_process}. The beauty of ln is that it is **atomic**, so it fails for all but one process that attempts the same thing at the same time. So, I guess what I'm asking is: would NFS fill my needs? Does ln work as reliably on NFS as it does on GFS?
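For the NFS case, the commonly cited refinement is to not trust the exit status of ln alone (over NFS the reply to a successful LINK request can be lost and the operation retried) but to check the resulting hard-link count instead; a sketch reusing the ${unid} and ${currlock} variables from the question:

# Create the uniquely named file, try to link it to the shared lock name,
# then treat the lock as held only if the link count on our file is 2.
date '+%s:'${unid} > "${currlock}.${unid}"
ln "${currlock}.${unid}" "${currlock}" 2>/dev/null
if [ "$(stat -c %h "${currlock}.${unid}")" -eq 2 ]; then
    lockrc=0    # we own ${currlock}
else
    lockrc=1    # somebody else got there first
fi
rm -f "${currlock}.${unid}"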
MattBianco (3806 rep)
Apr 22, 2014, 08:08 AM • Last activity: Apr 22, 2014, 11:20 PM