
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
2 answers
530 views
Mirroring disk between computers
I want to have a logical disk (using LVM) that is mirrored between two computers. That is, I want one filesystem that has complete copies on two different computers. I want to do this on Linux, preferably without significant add-ons. Ideally, I would like to be able to access it from either computer, locally, and NFS publish it for other machines.

One solution might be to publish the "disk" on one machine with iSCSI or network block device (NBD) or the like, use RAID mirroring on that, then a filesystem, and then NFS to share it with everyone. This does everything but local access (but can be reconfigured if the normal master is dead), and is easily extended to more machines. Another might be two independent filesystems synced by cron jobs running rsync. What I am hoping for is an advanced distributed filesystem solution, that can also merge changes if one node is disconnected, edited, and reconnected.

**Edit**: The basic requirements: I currently keep all my network management information on one computer. Every so often, it has trouble booting. (It is so old, the battery has died (and the big storm last Thursday (Maine) took out power).) So I would like to automatically distribute the data around to other machines in the network, so that as long as one works, I can get it all.
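A minimal sketch of the block-level mirroring route, here using DRBD (which is an add-on package, so it may not satisfy the "without significant add-ons" preference). Hostnames, addresses, and the backing LV path are placeholders:

```
# /etc/drbd.d/r0.res -- hypothetical resource; same file on both nodes
resource r0 {
  on hostA {
    device    /dev/drbd0;
    disk      /dev/vg0/mirrorlv;    # LVM logical volume used as the backing store
    address   192.168.1.10:7789;
    meta-disk internal;
  }
  on hostB {
    device    /dev/drbd0;
    disk      /dev/vg0/mirrorlv;
    address   192.168.1.11:7789;
    meta-disk internal;
  }
}
```

Bringing it up would be roughly `drbdadm create-md r0 && drbdadm up r0` on both nodes, then `drbdadm primary --force r0` once on the node holding the initial data, after which /dev/drbd0 can be formatted and mounted. Read-write access from both nodes at once would additionally require dual-primary mode plus a cluster filesystem such as GFS2 or OCFS2; in single-primary mode only one node mounts it at a time.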
David G. (3429 rep)
Apr 8, 2024, 06:38 PM • Last activity: Apr 9, 2024, 03:31 AM
-2 votes
1 answer
451 views
How can I create or access a remote file shared by a distributed filesystem as if it were local?
In general, I would like to run local programs on remote files provided to me via a distributed file system (e.g. Samba, NFS, or whichever you recommend), and I want to access the remote files in a way similar to accessing local files: something like "location transparency". I was amazed by my file manager pcmanfm, which can present remote files like local files and lets me run some local programs on them by right-clicking, the same way I run them on local files. I want to do the same from a CLI shell.

For example, I am trying to copy a file to a remote directory sharesmb shared via a remote Samba daemon. I can do it in my file manager pcmanfm, with address smb://olive.local/sharesmb on olive.local. I would like to do the same in the shell, but the link doesn't work. Could you tell me how to do that in the shell?

    $ cp 153-158.pdf smb://olive.local/sharesmb on olive.local
    cp: target 'olive.local' is not a directory
    $ cp 153-158.pdf smb://olive.local/'sharesmb on olive.local'
    cp: cannot create regular file 'smb://olive.local/sharesmb on olive.local': No such file or directory

Thanks.
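For reference, two common command-line routes to the same share; this is a sketch, and the mount point, username, and uid options are assumptions to adapt:

```sh
# Option 1: the GVFS/GIO mount that pcmanfm itself uses, then plain cp via the FUSE path
gio mount smb://olive.local/sharesmb          # may prompt for credentials
cp 153-158.pdf "/run/user/$(id -u)/gvfs/smb-share:server=olive.local,share=sharesmb/"

# Option 2: a kernel CIFS mount, which behaves like any other local directory afterwards
sudo mkdir -p /mnt/sharesmb
sudo mount -t cifs //olive.local/sharesmb /mnt/sharesmb -o username=tim,uid=$(id -u)
cp 153-158.pdf /mnt/sharesmb/
```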
Tim (106430 rep)
Jan 8, 2020, 11:52 PM • Last activity: Jan 27, 2020, 02:35 PM
2 votes
2 answers
585 views
Does OpenSolaris offer distributed ZFS filesystems?
I haven't had any luck getting a *confirmed* yes or no to this question. I'd like to run ZFS as a distributed file system (like Gluster or Ceph). OpenZFS and ZFS on Linux do not (yet) have file system clustering. My understanding is that this is currently a feature of ZFS only in the proprietary (paid) Solaris. **Is it also available in OpenSolaris?** The clustering will be used with two separate servers, each using their own JBOD. Thus, we need an across-the-network (10 Gb Ethernet) solution.
BurningKrome (238 rep)
Nov 7, 2019, 01:47 PM • Last activity: Nov 7, 2019, 05:44 PM
7 votes
2 answers
6689 views
Parallel vs Distributed vs Traditional File system
I am trying to understand the differences between these three kinds of file system at a very basic level:

- Distributed FS: HDFS
- Parallel FS: Lustre
- Traditional FS: ext4/ext3/NTFS/FAT etc.

I want to know the basic conceptual differences between them. Most of my knowledge is of the traditional file systems, i.e. ext3/4 superblocks, inodes, etc.

- If an MPI-based process (np=8) tries to read or write a file A, how does the file access mechanism differ in these contexts? Also, how is the file stored in each environment, i.e. is file A split across multiple disks, or does file A have redundant copies on storage?
- In a simpler scenario, say multiple users open a word document and then save it; how does the write-back/synchronization differ in these three cases?

So far I have formed a few concepts:

- In a local file system, the storage is physically mounted on the server/nodes.
- In a parallel file system, a disk is shared (mounted) on multiple nodes, and
- In a distributed FS, the multiple nodes have multiple local storage devices, but all of them are synchronized by some mechanism.

If A and B are workstations and C and D are disks:

1. If C is **physically** mounted on A and formatted as ext4, then it is a traditional file system.
2. If C is physically mounted on storage server Z and network mounted (NFS) on both A and B, then this is a cluster FS.
3. If C is physically mounted on A and network mounted on B, and D is physically on B and network mounted on A, then this gives rise to a distributed FS.

Some answers state that metadata and data are on separate servers in parallel file systems, but here too I wish to understand how metadata is managed in distributed file systems.
puneet336 (188 rep)
Jul 15, 2015, 06:18 AM • Last activity: Jul 8, 2019, 03:30 PM
0 votes
1 answer
678 views
Adding entries to fstab results in emergency mode
This is my local system configuration:

    NAME="elementary OS"
    VERSION="5.0 Juno"

I am mounting my remote server's file system on a subdirectory with this command, which is working fine:

    sudo sshfs -o allow_other della@108.49.38.08: /mnt/Production_server

The terminal prompts for the local sudo password first, then the remote password. (Even though I have already copied the local ed25519 public key into the remote's ~/.ssh/authorized_keys, somehow that does not work. I would like to make it work, but that is more like a side question.)

Some tutorials led me to believe that I do not have to issue the above command every time and that the remote can be mounted automatically at each boot. Following that, I entered the following line at the end of my /etc/fstab file:

    sshfs#della@108.49.38.08: /mnt/Production_server

After I power off, the laptop simply refuses to boot and throws a message saying *You are in emergency mode.* Luckily it drops me into a very basic login shell where I can edit /etc/fstab using nano. Only after I eliminate the last line does it boot up properly. So basically:

1. Is it possible to automatically mount the remote at each reboot? How will the authentication take place?
2. If so, am I editing the file system table incorrectly? What should the last line look like? Or is the method entirely different?
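A sketch of the fstab form that current sshfs expects; the remote path, mount point, and identity-file path here are assumptions to adapt, and the noauto,x-systemd.automount pair defers mounting until first access so boot does not hang waiting for the network:

```
# /etc/fstab -- one line; placeholders throughout
della@108.49.38.08:/home/della  /mnt/Production_server  fuse.sshfs  noauto,x-systemd.automount,_netdev,allow_other,reconnect,IdentityFile=/home/localuser/.ssh/id_ed25519  0  0
```

Key-based authentication has to work non-interactively for this: the system mounts it as root, so it is root's key (or the explicit IdentityFile above) that gets used, which is why the authorized_keys side question matters here.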
Della (131 rep)
May 21, 2019, 01:06 PM • Last activity: May 28, 2019, 03:51 AM
5 votes
2 answers
6376 views
Recommendations for replacing a GFS cluster?
I have a couple of CentOS GFS clusters (GFS as in Global File System) using a shared disk in a Fibre Channel SAN. They are mature now, and the time has come to start planning for their replacement. They are an odd number of nodes (3 or 5) with fencing of faulty nodes set up with APC (PDU) power switches. The nodes are all active and read and write simultaneously on the same shared filesystem. The filesystem is small, currently less than a TB, and will never grow larger than would fit on a commodity hard drive. I have two exclusive IP-address resources which relocate when a node is down (1 on the 3-node cluster). Everything works very well, but the performance is not very good when there is a lot of activity.

So, what could I do differently in my next-generation cluster? **What I need is service uptime and data availability.** Possibly scalability as well, but probably not; I don't expect the load to grow very much. I also need to be able to read and write the files like regular files on a regular filesystem. There is no need for quotas or ACLs. Just regular unix permissions, ownership, mtime, size in bytes, and the ability to use ln to make a lock file in a way that fails on all but 1 node, should they try it at the same time.

I don't want to increase the number of physical servers (which means that I want to use the storage on the actual servers themselves). It's not mandatory, but I think it would be good if I weren't dependent on the shared disk. I've been through two incidents with enterprise-class SAN storage being unavailable in the last 5 years, so however improbable that is, I'd like to be one step ahead. Since uptime is very important, 1 physical server with 1 running kernel is too little. Virtual machines are dependent on the SAN in our environment.

My thoughts so far:

* All nodes could be plain NFSv3 clients (Would ln work the way I expect? What would be the NFS server then?)
* [Ceph](http://ceph.com/) with CephFS (When will the FS be production ready?)
* [XtreemFS](http://www.xtreemfs.org/index.php) (Why is there so little written about it compared to Ceph?)

As you see, I'm interested in distributed storage, but need advice from experienced gurus. Especially recommendations or advice about Ceph or XtreemFS would be welcome. This is not an HPC setup with insane bandwidth demands. I just need the availability and reliability, and hopefully the flexibility, of my old solution, ideally in a "better" configuration than the current one.

**EDIT** (see Nils comment) The main reason I think about replacing this solution is that I want to see if it is possible to eliminate the single point of failure that the SAN storage cabinet is. Or should I instead use LVM mirroring to keep the data on two different storage systems in the same SAN fabric? Two FC-HBAs and double switches should be enough, I think.
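For concreteness, the ln-style locking the question depends on is roughly the following pattern (a sketch; the shared path is a placeholder). Any replacement filesystem would need symlink creation to stay atomic across nodes for this to keep working:

```sh
#!/bin/sh
# Atomic lock via symlink creation: exactly one node's ln succeeds, the rest fail.
lockfile=/shared/app/.lock            # hypothetical path on the shared filesystem
if ln -s "$(hostname).$$" "$lockfile" 2>/dev/null; then
    echo "lock acquired"
    # ... critical section ...
    rm -f "$lockfile"
else
    echo "lock held elsewhere: $(readlink "$lockfile")"
    exit 1
fi
```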
MattBianco (3806 rep)
Jan 14, 2014, 03:59 PM • Last activity: Mar 20, 2019, 12:29 PM
2 votes
1 answer
1624 views
Validate start-dfs.sh
I am trying to set up a Hadoop cluster, where the master is my laptop and the slave is a VirtualBox VM, following this guide. So, from the *master*, I did:

    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./start-dfs.sh
    Starting namenodes on [master]
    root@master's password:
    master: namenode running as process 2911. Stop it first.
    root@master's password: root@slave-1's password:
    master: datanode running as process 3057. Stop it first.
    slave-1: starting datanode, logging to /home/hadoopuser/hadoop/logs/hadoop-root-datanode-gsamaras-VirtualBox.out
    Starting secondary namenodes [0.0.0.0]
    root@0.0.0.0's password:
    0.0.0.0: secondarynamenode running as process 3234. Stop it first.
    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ su - hadoopuser
    Password:
    -su: /home/hduser/hadoop/sbin: No such file or directory
    hadoopuser@gsamaras:~$ jps
    15845 Jps

The guide states: "The output of this command should list NameNode, SecondaryNameNode, DataNode on master node, and DataNode on all slave nodes.", which doesn't seem to be the case here (*does it?*), so I checked the *slave*'s log:

    cat hadoop-root-datanode-gsamaras-VirtualBox.log
    ..rver: master/192.168.1.2:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
    2016-01-24 02:42:14,160 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.2:54310
    gsamaras@gsamaras-VirtualBox:/home/hadoopuser/hadoop/logs$ ssh master
    gsamaras@master's password:
    Welcome to Ubuntu 14.04.3..

The logs on the master node seem error-less. Notice that I can do password-less ssh from master to slave, but not vice versa; the guide doesn't mention anything like this. Any ideas, *please*?

---

When I execute stop-dfs.sh, I get the erroneous message:

    slave-1: no datanode to stop

---

Now, I did it again and got, on the *master*:

    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ sudo ./stop-dfs.sh
    Stopping namenodes on [master]
    root@master's password:
    master: no namenode to stop
    root@master's password: root@slave-1's password:
    master: no datanode to stop
    slave-1: stopping datanode
    Stopping secondary namenodes [0.0.0.0]
    root@0.0.0.0's password:
    0.0.0.0: stopping secondarynamenode
    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
    19048 Jps
    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ ps axww | grep hadoop
    19277 pts/1 S+ 0:00 grep --color=auto hadoop
    gsamaras@gsamaras:/home/hadoopuser/hadoop/sbin$ jps
    19278 Jps

and ps axww | grep hadoop on the *slave* gave a process with id 2553.
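One thing the transcript shows is the daemons being started via sudo (so as root) while the cluster user is hadoopuser. A hedged sketch of the procedure most guides of that era assume, taking HADOOP_HOME=/home/hadoopuser/hadoop as given here:

```sh
# run everything as the hadoop user, never via sudo, so processes and logs stay owned by hadoopuser
su - hadoopuser

# passwordless ssh must work from the master to every host in the slaves file (and to itself)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa    # only if no key exists yet
ssh-copy-id hadoopuser@master
ssh-copy-id hadoopuser@slave-1

/home/hadoopuser/hadoop/sbin/stop-dfs.sh     # clear out any half-started daemons first
/home/hadoopuser/hadoop/sbin/start-dfs.sh
jps                                          # expect NameNode, SecondaryNameNode, DataNode here
```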
gsamaras (191 rep)
Jan 24, 2016, 12:50 AM • Last activity: Jun 26, 2018, 10:05 AM
2 votes
1 answer
2842 views
Unable to mount gfs2 file system on Debian Stretch, probable dlm mis-config?
I am experimenting with gfs2 on Debian Stretch, and having some difficulties. I am a reasonably experienced Linux admin, but new to shared-disk and parallel file systems.

My immediate project is to mount a gfs2-formatted iscsi-exported device on multiple clients as a shared file system. For the moment, I am not interested in HA or fencing, although this may be important later on. The iscsi part is fine: I am able to log in to the target, format it as an xfs file system, and also mount it on multiple clients and verify that it shows up with the same blkid.

To do the gfs2 business, I am following the scheme on the Debian Stretch "gfs2" man page, modified for my config, and embellished slightly by various searches and so forth. The man page is here: https://manpages.debian.org/stretch/gfs2-utils/gfs2.5.en.html

The actual error is that when I attempt to mount my gfs2 file system, the mount command returns with

    mount: mount(2) failed: /mnt: No such file or directory

... where /mnt is the desired mount point, which certainly does exist. (If you attempt to mount to a nonexistent mount point, the error is "mount: mount point /wrong does not exist".) Relatedly, at each mount attempt, dmesg reports:

    gfs2: can't find protocol lock_dlm

I briefly went down the path of assuming the problem was that Debian packages do not provide "/sbin/mount.gfs2", and looked for that, but I think that was an incorrect guess.

I have a five-machine cluster (of Raspberry Pis, in case it matters), named, somewhat idiosyncratically, pio, pi, pj, pk, and pl. They all have fixed static IP addresses, and there's no domain. I have installed the Debian gfs2, corosync, and dlm-controld packages. For the corosync step, my corosync config is (e.g. for pio, intended to be the master of the cluster):

    totem {
        version: 2
        cluster_name: rpitest
        token: 3000
        token_retransmits_before_loss_const: 10
        clear_node_high_bit: yes
        crypto_cipher: none
        crypto_hash: none
        nodeid: 17
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.0.17
            mcastport: 5405
            ttl: 1
        }
    }

    nodelist {
        node {
            ring0_addr: 192.168.0.17
            nodeid: 17
        }
        node {
            ring0_addr: 192.168.0.11
            nodeid: 1
        }
        node {
            ring0_addr: 192.168.0.12
            nodeid: 2
        }
        node {
            ring0_addr: 192.168.0.13
            nodeid: 3
        }
        node {
            ring0_addr: 192.168.0.14
            nodeid: 4
        }
    }

    logging {
        fileline: off
        to_stderr: no
        to_logfile: no
        to_syslog: yes
        syslog_facility: daemon
        debug: off
        timestamp: on
        logger_subsys {
            subsys: QUORUM
            debug: off
        }
    }

    quorum {
        provider: corosync_votequorum
        expected_votes: 5
    }

This file is present on all the nodes, with appropriate node-specific changes to the nodeid and bindnetaddr fields in the totem section. The corosync tool starts without error on all nodes, and all the nodes also have sane-looking output from corosync-quorumtool, thus:

    root@pio:~# corosync-quorumtool
    Quorum information
    ------------------
    Date:             Sun Apr 22 11:04:13 2018
    Quorum provider:  corosync_votequorum
    Nodes:            5
    Node ID:          17
    Ring ID:          1/124
    Quorate:          Yes

    Votequorum information
    ----------------------
    Expected votes:   5
    Highest expected: 5
    Total votes:      5
    Quorum:           3
    Flags:            Quorate

    Membership information
    ----------------------
        Nodeid      Votes Name
             1          1 192.168.0.11
             2          1 192.168.0.12
             3          1 192.168.0.13
             4          1 192.168.0.14
            17          1 192.168.0.17 (local)

The dlm-controld package was installed, and /etc/dlm/dlm.conf created with the following simple config. Again, I am skipping fencing for now. The dlm.conf file is the same on all the nodes:

    enable_fencing=0
    lockspace rpitest nodir=1
    master rpitest node=17

I am unclear on whether or not the DLM "lockspace" name is supposed to match the corosync cluster name; I see the same behavior either way. The dlm-controld service starts without errors, and the output of "dlm_tool status" appears sane:

    root@pio:~# dlm_tool status
    cluster nodeid 17 quorate 1 ring seq 124 124
    daemon now 1367 fence_pid 0
    node 1 M add 31 rem 0 fail 0 fence 0 at 0 0
    node 2 M add 31 rem 0 fail 0 fence 0 at 0 0
    node 3 M add 31 rem 0 fail 0 fence 0 at 0 0
    node 4 M add 31 rem 0 fail 0 fence 0 at 0 0
    node 17 M add 7 rem 0 fail 0 fence 0 at 0 0

The gfs2 file system was created with:

    mkfs -t gfs2 -p lock_dlm -j 5 -t rpitest:one /path/to/device

Subsequent to this, "blkid /path/to/device" reports:

    /path/to/device: LABEL="rpitest:one" UUID= TYPE="gfs2"

It looks the same on all the iscsi clients.

At this point, I feel like I should be able to mount the gfs2 file system on any/all of the clients, but here is where I get the error above -- the mount command reports a "no such file or directory", and dmesg and syslog report "gfs2: can't find protocol lock_dlm".

There are several other gfs2 guides out there, but many of them seem to be RH/CentOS specific, and for other cluster-management schemes besides corosync, like cman or pacemaker. Those aren't necessarily deal-breakers, but it's high-value to me to have this work on nearly-stock Debian Stretch. It also seems likely to me that this is probably a pretty simple dlm misconfiguration, but I can't seem to nail it down.

Additional clues: when I try to "join" a lockspace via dlm_tool join ... I get a dmesg output:

    dlm cluster name 'rpitest' is being used without an application provided cluster name

This happens independently of whether the lockspace I am joining is "rpitest" or not. This suggests that lockspace names and cluster names are indeed the same thing, and/but that the dlm is evidently not aware of the corosync config?
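A few hedged sanity checks that fit this setup; the mount options shown are standard gfs2 ones, but whether they resolve this particular error is not certain:

```sh
# confirm dlm is up and which lockspaces exist before mounting
dlm_tool status
dlm_tool ls

# on mainline kernels the lock_dlm protocol is built into the gfs2 module itself
# (CONFIG_GFS2_FS_LOCKING_DLM), so check that the module is actually loaded
lsmod | grep gfs2

# the cluster table baked in at mkfs time (-t rpitest:one) must match corosync's
# cluster_name; lockproto can also be forced explicitly at mount time
mount -t gfs2 -o lockproto=lock_dlm /path/to/device /mnt
```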
Andrew Reid (53 rep)
Apr 22, 2018, 04:44 PM • Last activity: Apr 24, 2018, 06:09 AM
3 votes
0 answers
192 views
Is it crazy to consider keeping my home directory on OpenAFS?
I am a sysadmin by trade, and I do what I do at work at home as well for fun. I have a Gentoo Linux laptop, Raspberry Pis running Raspbian, a Gentoo server, ARM devices running Debian, and various Android devices. I'm always wrestling and worrying about backing up and synchronizing my own home directory among disparate devices, while keeping it reasonably safe from prying eyes. I had experience with Andrew in the '80s at CMU, and it was like magic. I would consider NFS if it had some mechanism to handle disconnected access and didn't presume a constant network connection. Would OpenAFS be something that admins out there might consider to handle synchronizing the data of "lightly connected" hosts of the modern user? I've also considered things like Lustre. I am looking for something that requires moderate maintenance after initial setup. It seems like OpenAFS might also be interesting in that I could divide my home directory into administratively different subdirectories, which might be distributed to different devices in different measure. (E.g. a ~/mobile for files which must reside on my phone and tablets, ~/pi for Raspberry Pi files, etc.) Is OpenAFS a dead end, or am I on a good track? :)
Jesse Adelman (246 rep)
Jul 9, 2017, 06:43 PM
3 votes
0 answers
1079 views
Meta Data Server (MDS) cluster for pNFS
TL;DR
-----

pNFS seems a great method for multiple concurrent access to centralised shared storage, but it has a problem: the single NFS server providing NFS metadata (meta data server, MDS) is a single point of failure. If the MDS becomes inaccessible, the shared storage will become inaccessible before long. To avoid this problem I am looking to set up an MDS cluster. How can this be achieved **without denying clients direct I/O with the centralised storage**? There are solutions for HA-NFS, but they all break the direct-I/O pNFS feature.

The long story
---------------

I am exploring (multiple) options to implement a shared "datastore" for an OpenStack environment. I am currently looking at scenarios where storage is served as block storage from a centralised storage system. In these scenarios problems arise, because most filesystems cannot be mounted multiple times concurrently safely, at least not when there is concurrent read and write I/O from multiple hosts.

One possible approach to the multi-mount problem can be NFS 4.1 in combination with XFS. Starting with this version, there is pNFS, which allows NFS **clients** direct block-based I/O with the storage system. According to the kernel documentation, currently only XFS supports the nfs block layout. With this approach the NFS server exports the share to clients as usual, but when a client actually does I/O, it directly interacts with the storage system, not the NFS server. Especially for the use case of virtual machines (which can be assumed to run only on one host at a time) this seems like a great fit.

> The NFS server which exports the share to the clients is a single
> point of failure. If it goes down, no additional clients can mount the
> share and no unopened files can be accessed. The solution is a cluster
> of NFS servers where another server can take over if a server fails.
> How can pNFS servers be clustered?

There are quite a few tutorials on clustering NFS with the help of gluster or drbd. Solutions based on these helpers imply that data I/O is performed between NFS client and NFS server. Yet the intriguing feature of pNFS is that data I/O is performed between the NFS client and the central storage system, which is not the NFS server. Therefore, using either of these approaches breaks the important feature of pNFS and thus cannot be considered a solution.

I would greatly appreciate existing work on this. But I also value comments on ...

my own theories
---------------

Even though pNFS has been around and stable for a couple of years, there is not a lot of noise around it. The little information I have tells me that:

- most/all state lies with the client. This means that if an NFS server goes down, the state is not lost immediately.
- as the client performs I/O directly with the storage system, no ongoing data flow is interrupted by the server going down.

With this information I'd guess that the following can be a viable approach:

1. Create the shared LUN and use one server to put an XFS on it.
2. Set up the kernel nfsd on every NFS server.
   1. Have the bunch of NFS servers all mount the same LUN read-only.
   2. Create a ramdisk on each server.
   3. Use DRBD to set up a synchronous mirror across all the ramdisks.
   4. Use the synchronously mirrored ramdisk to store the NFS server's state by mounting it on /var/lib/nfs (and restart the nfs service).
   5. Use keepalived to organise the bunch of servers into an active/passive cluster with a VIP where only one server is active at a time.
   6. Use keepalived's notify mechanism to have the active NFS server mount the LUN read-write and have the other servers mount the LUN read-only (a sketch of this step follows below).

The load on the "primary" NFS server would be fairly low and directly correlate with the changes in virtual machines. Therefore having only one NFS server at a time doesn't seem to be a problem in small to medium sized virtualization environments. With the way NFS 4.* works, the synchronously mirrored NFS server state should allow another server to take over after the primary server has failed. When using gluster for the shared server-state directory, I believe this should even allow for an active/active cluster, but I am unable to find any work on this. Even if the mirrored server state does not work, the bump when another server takes over in an active/passive cluster shouldn't be that big, as I/O is performed directly with the central storage system. Correct?
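A minimal sketch of the keepalived part of step 6, assuming the notify scripts do the remount; the interface, VIP, script paths, and export path are placeholders, not a tested configuration:

```
# /etc/keepalived/keepalived.conf (excerpt)
vrrp_instance pnfs_mds {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.10.50/24
    }
    notify_master /usr/local/sbin/mds-promote.sh
    notify_backup /usr/local/sbin/mds-demote.sh
}
```

The hypothetical mds-promote.sh would then do something like `mount -o remount,rw /export` followed by restarting the NFS service so it picks up the mirrored state under /var/lib/nfs, and mds-demote.sh would do the reverse (remount read-only).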
Bananguin (8140 rep)
Dec 5, 2016, 03:35 PM
2 votes
2 answers
3134 views
ZFS read-only mount on Linux + simultaneous read-write mount on Solaris
We have to regularly copy quite huge files from Solaris to Linux (over the network). It currently takes almost half a day for one file. The files on Solaris are on a ZFS filesystem. So I thought, what the heck - we could probably mount that ZFS on Linux. But ZFS is not a clustered (or clusterable) filesystem.

***Hypothesis***: since we're just copying *from* Solaris, we could mount that same ZFS filesystem read-only on Linux, so it wouldn't have to be clustered in this case? Writes will happen only on the Solaris side (we can't unmount it there).

That Solaris box is very busy and its network NICs are almost always very busy too, so by moving the file copy to FC it should be way faster. The Linux box is a virtual guest on a VMware host, so yes, it's possible to present the same FC fabric to that Linux guest.

Thoughts? The hypothesis piece is mostly what I am looking for feedback on. I'm not sure whether it's possible to do a ZFS read-only mount on Linux plus a simultaneous read-write mount on Solaris.
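For reference, the read-only import syntax on ZFS on Linux looks like the following; whether it is actually safe while Solaris keeps the pool imported read-write is exactly what the question is asking, the pool name is a placeholder, and the on-disk pool version would also have to be one the Linux implementation can read:

```sh
# import without writing to the pool; -f forces past the "pool appears in use elsewhere" check
zpool import -o readonly=on -f tank
zfs list -r tank
```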
Tagar (243 rep)
Sep 29, 2016, 02:28 AM • Last activity: Sep 29, 2016, 10:09 PM
0 votes
1 answer
4525 views
How to move a directory?
I have a directory in HDFS with subdirectories that contain part-xxxxx files, created by Spark. I want to move that directory (and everything inside it) into a new directory. How to?

---

My attempt:

    [gsamaras@gwta3000 ~]$ hadoop fs -mv  /projects/landmarks/ /projects/landmarks/all/
    mv: ` /projects/landmarks/': No such file or directory
    [gsamaras@gwta3000 ~]$ hadoop fs -mv  /projects/landmarks/* /projects/landmarks/all/
    mv: ` /projects/landmarks/*': No such file or directory
    [gsamaras@gwta3000 ~]$ hadoop fs -mv  /projects/landmarks/*/* /projects/landmarks/all/
    mv: ` /projects/landmarks/*/*': No such file or directory
    [gsamaras@gwta3000 ~]$ hadoop fs -ls /projects/landmarks/
    Found 116 items
    drwx------ - gsamaras edugr 0 2016-09-07 18:08 /projects/landmarks/Parthenon
    ...
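Two things stand out in the transcript, though neither is certain from the listing alone: the quoted source path in every error begins with a stray space, which hints that an invisible character rode along with the argument, and the target is a subdirectory of the directory being moved, which HDFS cannot do. A hedged sketch of a working variant, with a hypothetical sibling target name:

```sh
# retype the command by hand (single space after -mv) and move into a sibling
# directory rather than into the tree being moved
hadoop fs -mkdir /projects/landmarks_all
hadoop fs -mv /projects/landmarks/* /projects/landmarks_all/
```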
gsamaras (191 rep)
Sep 15, 2016, 01:38 AM • Last activity: Sep 15, 2016, 02:52 AM
2 votes
2 answers
2263 views
set up NFS two-way synchronization
I have two servers that I want to "share" the home folder of, such that when I make changes on Server A they appear on Server B, and when I make changes on Server B they appear on Server A. Right now I have NFS set up such that when I make changes on Server A (NFS server) they appear on Server B (NFS client), but not vice versa. Is there something I can do within the NFS config so that changes on B show up on A? My question is: is there a way to get NFS to do what I want without doing something convoluted, or should I be using another tool to achieve this? Thanks in advance!
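For context, a plain NFS mount is already bidirectional for anything written under the mount point, so if client-side changes are not showing up it is worth checking that the export is rw and not squashing the writing user. A minimal sketch, where the network range and paths are assumptions:

```sh
# /etc/exports on Server A
/home  192.168.1.0/24(rw,sync,no_subtree_check)

# on Server A, re-export after editing:
sudo exportfs -ra

# on Server B: mount it read-write; files B writes here live on A and are visible to both
sudo mount -t nfs serverA:/home /home
```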
Spencer Smolen (31 rep)
Jun 12, 2016, 10:08 PM • Last activity: Jun 13, 2016, 06:36 AM
13 votes
3 answers
9982 views
Small Distributed Computing Cluster
I'm a high school student trying to build a Linux cluster for a project. (I have a bunch of decent computers slated for re-imaging this summer, so the tech department basically says that as long as I don't physically break them I can do whatever.) Anyway, I don't really know anything about building a cluster, but I'm pretty good with Linux. I need to know these things:

- What distro should I use? Does it even matter?
- What software can configure the cluster?
- On-board or distributed FS?
- Any sites that offer decent guides or how-tos?
user6026
Mar 25, 2011, 02:08 AM • Last activity: May 18, 2016, 01:53 PM
3 votes
2 answers
1062 views
"Junctioned" symbolic links
Does Linux have the capability to use "junctioned" symbolic links? I'm not sure whether this is an actual term, so let me explain the concept.

I have a git repository containing all my configuration dot files in ~/dotfiles. I use symbolic links into this directory in order to "activate" them. For example, by executing

    ln -s ~/dotfiles/bash/bash_profile ~/.bash_profile

to produce the link:

    ~/.bash_profile -> ~/dotfiles/bash/bash_profile

However, I find myself in a situation where I want to combine the contents of multiple files. For example, I want the ~/.bash_profile symbolic link to point to two separate files, one for each project. E.g.:

    ~/.bash_profile (1) -> ~/dotfiles/bash/bash_profile
                    (2) -> ~/dotfiles/proj/bash_profile

I know I can simply concatenate the two files (e.g., cat ~/dotfiles/{bash,proj}/bash_profile > ~/.bash_profile), but if I could do the same thing with symbolic links, I would prefer to. I imagine that if such a feature exists (Nix is pretty huge), then under the hood it would have to map the two different files together, hiding all sorts of complexities (mapping file offsets of all non-first "mapped" files, locking all files when writing to the junctioned symbolic link, etc.). If such a feature doesn't exist, are there any plans to implement it?
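A symlink can only ever point at one target, so there is no kernel-level "one link, two files" feature; a common workaround under the question's own paths is to make ~/.bash_profile a small stub that sources both pieces:

```sh
# ~/.bash_profile -- a real file (it can itself live in ~/dotfiles and be symlinked)
for f in ~/dotfiles/bash/bash_profile ~/dotfiles/proj/bash_profile; do
    [ -r "$f" ] && . "$f"     # source each part that exists and is readable
done
```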
magnus (449 rep)
Feb 23, 2016, 10:39 PM • Last activity: May 10, 2016, 07:01 PM
1 vote
1 answer
174 views
Where would be the best location to mount a distributed filesystem?
We are looking at mounting a distributed file system on our RHEL machines and we think that the best location to mount the share is at /var/dfs. Where would the best location be to mount this share?
Spitfire19 (121 rep)
Mar 31, 2016, 03:36 PM • Last activity: Mar 31, 2016, 04:34 PM
1 vote
1 answer
860 views
distributed file system that works well with multiple small files
Hi, my use case is quite specific. I have 20 Windows 7 machines constantly creating files in my storage; around 98% of these files are 2.1 MB. On average we create 24 million files every 3 days, and this number may increase in the near future as we may need to add new clients to our system. I do not modify files (just create, read, copy and delete). I have seen Reiser4, which looks promising, but I also would like to have the capability to replicate the files across multiple storage nodes across the network, so I can have a fault-tolerant system in place. Any suggestions?
masber (149 rep)
Mar 29, 2016, 12:37 AM • Last activity: Mar 29, 2016, 01:41 AM
0 votes
1 answer
82 views
GNU/Linux clustering: which level? Which application?
I need to start GNU/* clustering, but I have a problem: at which level?

- DB level? (replication)
- File system? (distributed)
- Process level? (such as Intel Fortran for civilization software)
- New process? (such as the above, with some differences)

I know that at the DB level each DB backend has its own separate replication. Distributed file systems on Linux include Btrfs, and FreeBSD uses ZFS. But I have a serious problem with process clustering and its software. Anyway, the question is: what is the kernel side of process-level clustering? If I start with it, am I forced to use a distributed FS?
PersianGulf (11308 rep)
May 10, 2015, 02:25 AM • Last activity: May 10, 2015, 07:45 AM
1 vote
0 answers
2307 views
clvm (cman) has problems with dlm
I am trying to create a volume group which is accessible on two servers with clvm. I read about it and it looks like it is easiest to do this with cman. But after hours of testing I cannot achieve a setup that works. I cannot start clvmd, and I am frustrated because I cannot find a reason for it. Why does it say that it cannot connect to a local socket? Which socket does it mean? And after a few seconds there are problems with dlm (see below).

    # clvmd -fd2 -I cman
    local socket: connect failed: No such file or directory
    clvmd could not connect to cluster manager
    Consult syslog for more information

    # tail /var/log/syslog
    ... clvmd: CLVMD started
    ... clvmd: Connected to CMAN
    ... clvmd: CMAN initialisation complete
    ... kernel: [ 1069.787540] dlm: Using TCP for communications
    ... kernel: [ 1069.787874] dlm: c: joining the lockspace group...
    ... kernel: [ 1069.795626] dlm: c: group event done 0 0
    ... kernel: [ 1069.795628] dlm: c: dlm_recover 1
    ... kernel: [ 1069.795674] dlm: c: add member 1
    ... kernel: [ 1069.795676] dlm: c: dlm_recover_members 1 nodes
    ... kernel: [ 1069.795678] dlm: c: generation 1 slots 1 1:1
    ... kernel: [ 1069.795679] dlm: c: dlm_recover_directory
    ... kernel: [ 1069.795679] dlm: c: dlm_recover_directory 0 in 0 new
    ... kernel: [ 1069.795680] dlm: c: dlm_recover_directory 0 out 0 messages
    ... kernel: [ 1069.795705] dlm: c: dlm_recover 1 generation 1 done: 0 ms
    ... kernel: [ 1069.797183] dlm: c: join complete
    ... clvmd: Unable to create DLM lockspace for CLVM: No such file or directory
    ... clvmd: Can't initialise cluster interface

I read several internet resources that described problems with the dlm setup, but to me it does not look like there is a problem:

    # ls -l /dev/dlm*
    crw-rw---- 1 root root 10, 56 Apr 3 10:22 /dev/dlm_c
    crw-rw-rw- 1 root root 10, 59 Apr 3 10:20 /dev/dlm-control
    crw-rw-rw- 1 root root 10, 58 Apr 3 10:20 /dev/dlm-monitor
    crw-rw---- 1 root root 10, 57 Apr 3 10:20 /dev/dlm_plock

    # lsmod|grep dlm
    dlm                   157924  13
    sctp                  299454  3 dlm
    configfs               31664  2 dlm

Maybe I have forgotten to create an essential config entry; I am new to the topic. My test environment consists of two virtual Debian hosts (test1 and test2). cman_tool states that they are connected together properly. Here is my /etc/cluster/cluster.conf

Any hints that get me closer to the source of the problem are very welcome!
user2715068 (51 rep)
Apr 3, 2015, 10:41 AM
2 votes
1 answer
281 views
Shared home directories with local duplication
I have a couple of Linux computers which are all on the same network. Currently, I use rsync to copy my dotfiles and the like to each machine every couple of days from the master computer. Ideally, I would like this to happen automatically. The problem is that I would also like to have local duplication of the files, so a plain NFS- or SMB-mounted /home from my home server would give me really bad performance (like 3 MB/s). Duplication would serve as a physical backup as well.

So far, I have thought of the following:

- Using rsync at the clients so that they pull the latest data into their home directory. This overwrites local changes and is not ideal.
- Using unison in a similar setup, such that each client syncs all the files with the server in the background. Conflicts might happen, and I am not sure whether they are handled well.

In the end, I would like to have a local cache of the network drive, so writes are sent to the server and reads check whether the local version is the latest and use the local copy if so. Is there any software (or software stack) that does this?
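For the unison option, a minimal profile sketch; the server name, paths, and conflict policy here are assumptions, and prefer = newer silently resolves conflicts by modification time, which may or may not be acceptable:

```
# ~/.unison/home.prf on each client
root = /home/me
root = ssh://homeserver//home/me
batch = true           # run non-interactively, e.g. from cron or a systemd timer
prefer = newer         # conflict policy; drop this line to be prompted instead
ignore = Name .cache   # example exclusion
```

Running `unison home` from a cron job would then perform the two-way sync for each client-to-server pair.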
Martin Ueding (2812 rep)
Sep 28, 2014, 02:26 PM • Last activity: Oct 2, 2014, 07:10 PM