
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
67 views
Proper way to design a filesystem structure for a cluster of diskless nodes
I'm trying to learn the basics of Linux clustering, so I started designing a really humble cluster:

- **6 worker nodes** (Libre Computer La Frite | Cortex-A53 @ 1.2 GHz | **1GB RAM**)
- **1 master node** (Raspberry Pi 4 Model B | Cortex-A72 @ 1.5 GHz | **2GB RAM**)
- 16-port Gbit Ethernet switch
- 500GB SSD **shared through NFS on the network**

I'm planning to only run k3s on it, and because of the diskless nature of the worker SBC boards, **I was wondering which kind of "root filesystem structure"** I should adopt for the cluster on the NFS storage. For example:

- a single-system image, with the same shared root fs across all nodes
- a separate root fs for each node (/opt/node1/\*, /opt/node2/\*, ...)

and so on.

**Thanks to everyone who will reply!**
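For reference, a minimal sketch of what the separate-root variant might look like on the NFS server side; the paths, subnet, and server address below are illustrative assumptions, not taken from the question:

```
# /etc/exports on the NFS server (paths and subnet are hypothetical)
/srv/roots/common  192.168.1.0/24(ro,sync,no_root_squash,no_subtree_check)  # shared read-only base
/srv/roots/node1   192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)  # per-node writable root
/srv/roots/node2   192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

# Each diskless worker's kernel command line would then point at its own root, e.g.:
#   root=/dev/nfs nfsroot=192.168.1.10:/srv/roots/node1,vers=3 rw ip=dhcp
```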
phreq (1 rep)
Mar 31, 2023, 04:58 PM • Last activity: Apr 1, 2023, 03:29 PM
1 vote
1 answer
1778 views
Object is remote in mounted CIFS on Ubuntu
I am struggling to mount a CIFS shared disk on Lubuntu 20.04. It can be accessed via `smb://myserver/files`. The command I am using for mounting is:

```
sudo mount -t cifs //myserver/files /mnt/remote_disk -o username=yyyy,domain=hhhh,password=xxxx,vers=1.0,nodfs -v
```

Up to here, everything is OK and the disk is mounted. But if I try to change into the directory `/mnt/remote_disk/parent1/parent2/son`, it prompts:

```
bash: cd: /remote_disk/parent1/parent2/son: Object is remote
```

whereas if I step back one level, to `/remote_disk/parent1/parent2`, there is no problem. I don't think this is a permissions problem, as I already checked that. In fact, if I configure Nautilus:

1. Open Nautilus.
2. From the File menu, select Connect to Server.
3. Enter `smb://myserver/files`

I can access the files and folders within the `/parent2/son` directory, and read and modify them. Do you have any clue what I might be missing? dmesg outputs:

```
CIFS VFS: cifs_mount failed w/return code = -66
```
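Errno 66 is EREMOTE ("Object is remote"), which usually means the path crosses a DFS referral that the `nodfs` option tells the client not to follow. A hedged sketch of things one might try; the share names and credentials below are the asker's placeholders, and the alternative server name is invented:

```
# Let the client follow DFS referrals: drop nodfs and try a newer dialect
# (this assumes the cifs module has DFS upcall support and keyutils installed)
sudo mount -t cifs //myserver/files /mnt/remote_disk \
    -o username=yyyy,domain=hhhh,password=xxxx,vers=3.0 -v

# Or, if the real target of the referral is known (hypothetical name),
# mount that share directly:
# sudo mount -t cifs //otherserver/son_share /mnt/son -o username=yyyy,domain=hhhh,password=xxxx
```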
ElPotac (41 rep)
Nov 17, 2020, 06:29 PM • Last activity: Nov 19, 2020, 10:45 AM
10 votes
4 answers
2829 views
Gedit cannot save in shared folder (Virtualbox)
I'm getting the error

```
Cannot save _____
Unexpected error: Error renaming temporary file: Text file busy
```

in Gedit 2 when I try to save in a VirtualBox shared folder (Debian). I've searched, and apparently it's a Gedit problem; none of the solutions seem ideal or work for me. Would it be possible to create a shell script (External Tools plugin) that saves the file somewhere else and then copies it back? For that I'd need to grab wherever Gedit has stored the temporary (live?) file. Or, if this is not possible, won't work, or is bad practice, does anyone know a good way to get around this? I really like Gedit and prefer to use it.

Currently, this is my script. I tell External Tools not to save, but to pass the document as input (stdin):

```
bin=""
while read LINE; do
    echo ${LINE}
    # do something with it here
    bin="${bin}${LINE}\n"
done
echo $bin > /home/me/data2/test.txt
```

It works fine except that it doesn't preserve tabs. I'm only editing plain text files.

Edit: this also seems to skip the last line.
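A hedged variant of that script which keeps tabs and does not drop the final line (same approach, just a safer read/write loop; the output path is the one from the question):

```
#!/bin/sh
# Read the document verbatim from stdin and write it straight out:
#  - IFS= keeps leading/trailing whitespace such as tabs
#  - read -r keeps backslashes literal
#  - the || [ -n "$LINE" ] clause still handles a last line with no trailing newline
out=/home/me/data2/test.txt
: > "$out"
while IFS= read -r LINE || [ -n "$LINE" ]; do
    printf '%s\n' "$LINE" >> "$out"
done
```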
Raekye (609 rep)
Jan 27, 2013, 09:11 PM • Last activity: Jun 24, 2020, 10:15 AM
2 votes
1 answer
2676 views
Storage server: have all NFS users write files with full 666 permissions
How can I set up a shared storage location on a storage server, shared via CIFS and NFS, so that all files written via CIFS and by all users via NFS end up with full read/write permissions (folders 777, files 666)?

Reason: I use a Pydio server to manage files between my computer and a central storage. But this central storage is also accessed directly over CIFS and NFS by other systems. For files written via CIFS I can use a force user and a file creation mask, so that is covered. But for NFS it is a different story. The Pydio server has an NFS mount to this storage location.

Storage location /etc/exports:

```
/storage/internal *(rw,sync,all_squash)
```

Pydio client mount:

```
:/storage/internal /mnt/VODSTOR nfs rw,intr,noexec,rsize=16384,wsize=16384 0 0
```

All files written by this Pydio server have 644 permissions. How can I alter the parameters of the NFS export/mount options so that files are written with permission 666 and folders with 777? Then all other users would be able to copy, delete, and change these files, no matter whether they use Samba or NFS.

Thanks in advance.
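NFS itself has no counterpart to Samba's force user / create mask: the mode of a new file is decided on the client side, by the writing process's umask and the mode it requests. A sketch of one common arrangement, with an assumed shared UID/GID of 1100 and an assumed systemd unit name for the service that runs Pydio:

```
# /etc/exports on the storage server: squash every client to one owner
/storage/internal *(rw,sync,all_squash,anonuid=1100,anongid=1100)

# On the Pydio server, relax the umask of the service that writes the files
# (unit name is hypothetical), so new files come out 666 and directories 777:
#   /etc/systemd/system/php-fpm.service.d/umask.conf
#     [Service]
#     UMask=0000
```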
SHLelieveld (443 rep)
Oct 10, 2014, 08:22 AM • Last activity: Oct 5, 2019, 02:02 PM
2 votes
3 answers
4667 views
Share an ext4 filesystem between two RHEL servers, but only one will mount at a time
I have two RHEL 6 servers which share a physical connection to a SAN storage (i.e., both servers can see this `/dev/sdb` when running `fdisk -l`). My goal is not to have both servers access the ext4 filesystem at the same time. In fact, one of the servers will have it mounted most of the time; only when that server fails will I want the other server to mount this ext4 filesystem. I have already created logical volumes and tested that both servers can mount the filesystem successfully. I am going to write scripts that check and make sure the volume is not mounted on the other server before mounting. My question is: when servers take turns mounting an ext4 filesystem like this, will there be underlying problems that I am missing? I fear that the OS might keep some check or "notes" on the volumes...
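A minimal sketch of such a check-before-mount script; the peer hostname, device, and mount point are placeholders, and note that this is only advisory and does not replace real fencing:

```
#!/bin/sh
# Refuse to mount if the other node still has the filesystem mounted.
# This only narrows the window -- it cannot stop the other node from
# mounting a moment later, which is what cluster fencing is for.
OTHER=server-b          # hypothetical peer hostname
DEV=/dev/sdb1           # shared LUN / logical volume
MNT=/data

if ssh "$OTHER" "grep -q ' $MNT ' /proc/mounts"; then
    echo "refusing to mount: $MNT is still mounted on $OTHER" >&2
    exit 1
fi
mount "$DEV" "$MNT"
```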
Lok.K. (21 rep)
Jul 13, 2016, 04:56 AM • Last activity: Jun 4, 2019, 08:04 AM
2 votes
1 answer
2842 views
Unable to mount gfs2 file system on Debian Stretch, probable dlm mis-config?
I am experimenting with gfs2 on Debian Stretch and having some difficulties. I am a reasonably experienced Linux admin, but new to shared-disk and parallel file systems.

My immediate project is to mount a gfs2-formatted iscsi-exported device on multiple clients as a shared file system. For the moment, I am not interested in HA or fencing, although this may be important later on. The iscsi part is fine: I am able to log in to the target, format it as an xfs file system, mount it on multiple clients, and verify that it shows up with the same blkid.

To do the gfs2 business, I am following the scheme on the Debian Stretch "gfs2" man page, modified for my config and embellished slightly by various searches and so forth. The man page is here: https://manpages.debian.org/stretch/gfs2-utils/gfs2.5.en.html

The actual error is that when I attempt to mount my gfs2 file system, the mount command returns:

```
mount: mount(2) failed: /mnt: No such file or directory
```

...where /mnt is the desired mount point, which certainly does exist. (If you attempt to mount to a nonexistent mount point, the error is "mount: mount point /wrong does not exist".) Related: at each mount attempt, dmesg reports:

```
gfs2: can't find protocol lock_dlm
```

I briefly went down the path of assuming the problem was that the Debian packages do not provide "/sbin/mount.gfs2", and looked for that, but I think that was an incorrect guess.

I have a five-machine cluster (of Raspberry Pis, in case it matters), named, somewhat idiosyncratically, pio, pi, pj, pk, and pl. They all have fixed static IP addresses, and there's no domain. I have installed the Debian gfs2, corosync, and dlm-controld packages.

For the corosync step, my corosync config is (e.g. for pio, intended to be the master of the cluster):

```
totem {
    version: 2
    cluster_name: rpitest
    token: 3000
    token_retransmits_before_loss_const: 10
    clear_node_high_bit: yes
    crypto_cipher: none
    crypto_hash: none
    nodeid: 17
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.17
        mcastport: 5405
        ttl: 1
    }
}

nodelist {
    node {
        ring0_addr: 192.168.0.17
        nodeid: 17
    }
    node {
        ring0_addr: 192.168.0.11
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.0.12
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.0.13
        nodeid: 3
    }
    node {
        ring0_addr: 192.168.0.14
        nodeid: 4
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: no
    to_syslog: yes
    syslog_facility: daemon
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}

quorum {
    provider: corosync_votequorum
    expected_votes: 5
}
```

This file is present on all the nodes, with appropriate node-specific changes to the nodeid and bindnetaddr fields in the totem section. Corosync starts without error on all nodes, and all the nodes also have sane-looking output from corosync-quorumtool:

```
root@pio:~# corosync-quorumtool
Quorum information
------------------
Date:             Sun Apr 22 11:04:13 2018
Quorum provider:  corosync_votequorum
Nodes:            5
Node ID:          17
Ring ID:          1/124
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
         1          1 192.168.0.11
         2          1 192.168.0.12
         3          1 192.168.0.13
         4          1 192.168.0.14
        17          1 192.168.0.17 (local)
```

The dlm-controld package was installed, and /etc/dlm/dlm.conf created with the following simple config. Again, I am skipping fencing for now. The dlm.conf file is the same on all the nodes:

```
enable_fencing=0
lockspace rpitest nodir=1
master rpitest node=17
```

I am unclear on whether or not the DLM "lockspace" name is supposed to match the corosync cluster name; I see the same behavior either way. The dlm-controld service starts without errors, and the output of "dlm_tool status" appears sane:

```
root@pio:~# dlm_tool status
cluster nodeid 17 quorate 1 ring seq 124 124
daemon now 1367 fence_pid 0
node 1 M add 31 rem 0 fail 0 fence 0 at 0 0
node 2 M add 31 rem 0 fail 0 fence 0 at 0 0
node 3 M add 31 rem 0 fail 0 fence 0 at 0 0
node 4 M add 31 rem 0 fail 0 fence 0 at 0 0
node 17 M add 7 rem 0 fail 0 fence 0 at 0 0
```

The gfs2 file system was created by:

```
mkfs -t gfs2 -p lock_dlm -j 5 -t rpitest:one /path/to/device
```

Subsequent to this, "blkid /path/to/device" reports:

```
/path/to/device: LABEL="rpitest:one" UUID= TYPE="gfs2"
```

It looks the same on all the iscsi clients.

At this point, I feel like I should be able to mount the gfs2 file system on any or all of the clients, but here is where I get the error above: the mount command reports "No such file or directory", and dmesg and syslog report "gfs2: can't find protocol lock_dlm".

There are several other gfs2 guides out there, but many of them seem to be RH/CentOS specific, and written for other cluster-management schemes besides corosync, like cman or pacemaker. Those aren't necessarily deal-breakers, but it's high-value to me to have this work on nearly-stock Debian Stretch. It also seems likely that this is a pretty simple dlm misconfiguration, but I can't seem to nail it down.

Additional clues: when I try to "join" a lockspace via dlm_tool join ... I get a dmesg output:

```
dlm cluster name 'rpitest' is being used without an application provided cluster name
```

This happens independently of whether the lockspace I am joining is "rpitest" or not. This suggests that lockspace names and cluster names are indeed the same thing, and/but that the dlm is evidently not aware of the corosync config?
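For what it's worth, a sketch of the checks one might run on each node before retrying the mount; the device path is the asker's placeholder and the dlm unit name may differ by packaging:

```
# Are the kernel pieces loaded?  gfs2 mounting with lock_dlm needs both modules.
lsmod | grep -E '^(gfs2|dlm)'

# Is dlm_controld actually running and joined to the cluster?
systemctl status dlm.service        # unit name may differ
dlm_tool ls                         # list lockspaces dlm_controld knows about

# Then retry, spelling the lock protocol out explicitly:
mount -t gfs2 -o lockproto=lock_dlm /path/to/device /mnt
```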
Andrew Reid (53 rep)
Apr 22, 2018, 04:44 PM • Last activity: Apr 24, 2018, 06:09 AM
1 vote
0 answers
1018 views
gvfs and permissions on a remote external usb drive
I have tried both Transmission and Deluge (to see whether it's a problem with the app) and I'm having the same problem with both of them. When torrenting, I get the error

```
Operation not supported (/run/user/1000/gvfs/sftp:host=192.168.1.7,user=pi/media/
```

when downloading a file.

1) My desktop machine has the torrent program on it (Deluge / Transmission), and it's connected to my Raspberry Pi (via ssh), which has an external USB drive connected to it.
2) I'm trying to run Transmission on my desktop but have all the files go to the external USB drive connected to my Raspberry Pi.

This worked on 14.04 LTS, but I can't seem to get it to work on 16.04 LTS. I created the group **torgrp** and placed the users in it, but I still get the "Operation not supported (/run/user/1000/gvfs/sftp:host=192.168.1.7,user=pi/media/" error. When I do an ls -l:

```
drwxrwxrwx 14 pi torgrp 4096 Apr 22 10:46 media
```

but when I download and check the rights of the file:

```
-rw-r--r-- 1 pi pi 1485881344 Apr 22 10:46 ubuntu-16.04-desktop
```

the download stops after about 6 MB and comes back with an error. **Please note the file size above looks complete, but it's just holding that amount of space for the full download.**

I also checked that the users are in the group:

```
id pi
uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input),1002(spi),1003(gpio),1004(torgrp)
```

Any ideas how to fix this?
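The gvfs FUSE path often lacks operations torrent clients rely on (file preallocation, locking), which is one common source of "Operation not supported". One commonly used alternative is a plain FUSE sshfs mount instead of the gvfs path; a sketch, with the local mount point invented:

```
# Mount the Pi's drive with sshfs instead of going through gvfs
sudo apt-get install sshfs
mkdir -p ~/pi-media
sshfs pi@192.168.1.7:/media ~/pi-media -o reconnect

# ...then point Transmission/Deluge at ~/pi-media as the download directory.
```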
Rick T (357 rep)
Apr 22, 2016, 03:54 PM • Last activity: May 11, 2016, 10:53 AM
1 vote
1 answer
1224 views
Failover NFS service with shared storage
I need to configure a cluster with shared storage that can be moved from Node A to Node B and vice versa. In case of failure of Node A, Node B should take over the IP address associated with the NFS service, take ownership of the shared disk, mount it, and start the NFS server. I am using SUSE Linux 11.4. So far I am using the HA cluster package and NFS. NFS is sharing the drive from Node A, but if Node A goes down, Node B stops working.
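With the SUSE HA stack (pacemaker), this is normally expressed as a resource group rather than hand-run scripts; a sketch in crm shell syntax, with the IP address, device, mount point, and state directory as illustrative placeholders:

```
# crm configure sketch (values are illustrative)
primitive p_ip  ocf:heartbeat:IPaddr2    params ip=192.168.1.50 cidr_netmask=24
primitive p_fs  ocf:heartbeat:Filesystem params device=/dev/sdb1 directory=/srv/nfs fstype=ext3
primitive p_nfs ocf:heartbeat:nfsserver  params nfs_shared_infodir=/srv/nfs/state
# Started in order fs -> NFS server -> IP, and moved as one unit on failover:
group g_nfs p_fs p_nfs p_ip
```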
mystery (11 rep)
Oct 31, 2015, 08:12 AM • Last activity: Oct 31, 2015, 08:57 AM
0 votes
1 answer
809 views
How to safely delete a system partition?
I have the following disk layout ([screenshot](https://i.sstatic.net/7HEH2.png)). `sda9` contains a new Linux installation which I'd like to keep, while `sda5` is the old installation which I'd like to **free** (and successively merge with `sda4`, but we don't care about that here). `sda4` contains a Windows installation which I'd like to keep. The question, asked to be sure of **avoiding (GRUB) problems at boot**, is: *can I just delete* `sda5` *without any other operation, and nothing will break?*
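Before deleting, it is worth confirming which installation owns the boot loader; a sketch of the checks and the re-install step, run from the system being kept (sda9) — the device names are the asker's:

```
# Make sure nothing the kept system uses lives on sda5:
findmnt /             # should show sda9
grep sda5 /etc/fstab  # should show nothing

# Re-own GRUB from the new installation before removing the old one,
# so the boot loader no longer reads its files from sda5:
sudo grub-install /dev/sda
sudo update-grub
```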
Patrizio Bertoni (141 rep)
Jul 31, 2015, 02:26 PM • Last activity: Jul 31, 2015, 03:03 PM
1 vote
0 answers
148 views
How shareable is /var/cache?
I know that some subdirectories of /var/cache are pretty much shareable, for example /var/cache/pacman (in the case of Arch Linux), and possibly others (/var/cache/man?), but at least /var/cache/ldconfig seems pretty much local to me. Is it possible to make a setup where I have one partition for local cache and another for cache that is shared between different hosts? What should be taken into consideration? In this specific case it is OK to assume that the same base distribution (Arch Linux) is used, but architectures may differ (even ARM, even though in some sense that is not the same Arch Linux distribution as x86 and x86_64).
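One way to express the split is a local filesystem on /var/cache with only the shareable subtrees mounted from elsewhere; a sketch of the fstab entries, with the device, server, and export paths invented for illustration:

```
# /etc/fstab sketch
/dev/sdb1                           /var/cache             ext4  defaults          0 2
# Share only the package cache, and keep it per-architecture, since
# ARM and x86_64 packages are different files:
nfsserver:/srv/cache/pacman-x86_64  /var/cache/pacman/pkg  nfs   defaults,_netdev  0 0
```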
montiainen (163 rep)
Oct 21, 2014, 12:04 PM • Last activity: Oct 22, 2014, 02:57 PM
1 vote
0 answers
348 views
Shared drive between two Linux virtual machines
Assume I have two virtual machines.

* How do I share a VMDK drive between them?
* What solution do you recommend? An outside OS with an NFS share?

Both OSes see this drive as `/dev/sdb`.
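If the NFS route is chosen, only one VM should put a filesystem on the shared disk and export it, with the other mounting over the network; a sketch, with hostnames and paths invented:

```
# On VM1, the owner of /dev/sdb:
#   mkfs.ext4 /dev/sdb1 && mount /dev/sdb1 /srv/shared
#   /etc/exports:   /srv/shared  vm2(rw,sync,no_subtree_check)
#   exportfs -ra

# On VM2:
#   mount -t nfs vm1:/srv/shared /mnt/shared
```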
user77056
Jul 10, 2014, 08:02 AM • Last activity: Jul 11, 2014, 05:42 PM
3 votes
2 answers
2541 views
How can I mount vxfs FS to two or more Solaris servers?
I want two Solaris servers to share the same SAN VxFS filesystem, though only one would be accessing the share at a time. This is to allow for a quick failover in case the primary server undergoes an unplanned outage for some reason. From the [Oracle Solaris Cluster Software Installation Guide](http://docs.oracle.com/cd/E19680-01/html/821-1255/toc.html), it seems a cluster needs to be set up and VxVM software needs to be running on both servers to manage the cluster, which seems quite complicated in comparison to simply mounting a NAS share on two or more servers to create a shared filesystem. Could someone kindly point me in the right direction?
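Without the full cluster stack, the usual manual failover is to deport the VxVM disk group on one host and import it on the other; a sketch, with the disk group, volume, and mount point names invented:

```
# On the primary (if it is still up):
umount /shared
vxdg deport sharedg

# On the standby:
vxdg import sharedg            # add -C to clear the import lock if the primary died
vxvol -g sharedg startall
mount -F vxfs /dev/vx/dsk/sharedg/vol01 /shared
```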
Kent Pawar (1296 rep)
Jun 28, 2013, 08:36 AM • Last activity: Jun 30, 2013, 02:00 AM
1 vote
1 answer
846 views
Share files between RedHat and Mac
I have one RedHat 6 machine and three Mac OS machines. I want the RedHat machine to be a file server, so that all files will be stored there, and all Mac clients will access this server according to their rights. It seems that this will not use Samba. Could you please let me know what I should do in this case?
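NFS is the usual Samba-free route here, since Mac OS ships an NFS client; a sketch for RHEL 6, with the export path and subnet invented:

```
# On the RHEL 6 server -- /etc/exports:
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
# then:
service nfs start && chkconfig nfs on
exportfs -ra

# On each Mac: Finder > Go > Connect to Server > nfs://server/srv/share
# or from Terminal (resvport is often needed against Linux servers):
sudo mount -t nfs -o resvport server:/srv/share /Volumes/share
```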
Phu Nguyen (93 rep)
Oct 21, 2012, 03:15 PM • Last activity: Oct 22, 2012, 12:51 AM
2 votes
2 answers
5037 views
Choosing a filesystem for a shared disk (not a cluster filesystem like GFS)
I have a bunch of servers connected to a SAN. One server hosts the production database, performing full database backups to a filesystem on a LUN that is exported to all servers. Only the "owner" (the production server) has this filesystem mounted read-write. When the owner has performed a full backup, sync is called. The other hosts later mount this filesystem read-only, for quick access to the backup when loading copies. This way I don't have the network as a bottleneck for transferring the backup. I have had this setup on Solaris for ages, with no glitches, on a plain UFS filesystem. Now I am going to set up the same on Linux (RHEL6), and want advice on what filesystem to pick. I'm thinking simpler is better, because I absolutely don't want any host other than the owner to make any changes whatsoever: no journal replay or other crazy stuff that can confuse the owner's kernel if the on-disk structures stop matching what the kernel "knows" is there. I hope you understand my question. I've seen things happen (like journal replay) when mounting read-only filesystems on Linux that worry me a bit. I'm looking for something simple, not a cluster filesystem that requires handshaking and heartbeats. Only one node needs to write.
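On ext4 the concern about replay is real: a bare read-only mount may still replay the journal during recovery. The `noload` mount option skips that; a sketch, with the device and mount point invented:

```
# On the read-only hosts: "ro" alone can still trigger journal recovery,
# while "ro,noload" tells ext4 not to load (and thus not to replay) the journal.
mount -t ext4 -o ro,noload /dev/mapper/backup_lun /backup
```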
MattBianco (3806 rep)
Nov 16, 2011, 11:17 AM • Last activity: Nov 16, 2011, 08:12 PM