
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

3 votes
1 answer
675 views
Building redundancy by creating multiple nfs mounts to a single location
I have been managing a small local Linux cluster for my lab, consisting of about two dozen Ubuntu Linux servers - half of them headless servers (rack/tower), and the other half used as desktops by my students. I have set up an LDAP server for centralized account management, and nfs/autofs mounts for shared file systems - including everyone's home directory. This setup has been running well over the last 8 years, and I have been growing the cluster over time. A few times a year, when the nfs server exporting the home directories or the LDAP server went offline for various reasons, the entire cluster hung. I am trying to build some redundancy into the system so that when this happens I have a fallback plan. In a recent test, I noticed that my autofs-ldap auto.direct mount of the /homes directory, exported from server B, and the /etc/fstab-configured nfs mount of the same /homes directory, exported from server A, can both be mounted. When typing df, this is what I saw:
serverA:/local_mount/fstab/mount/export  ... 50% /homes
serverB:/local_mount/auto/direct/export  ... 50% /homes
serverA is configured in /etc/fstab, while serverB is configured in my LDAP system via auto.direct, both pointing to /homes. I found that when the system reboots, /etc/fstab first mounts serverA on /homes; then, when the autofs service starts, serverB's mount becomes active and shadows the fstab mount. My questions: 1. Is there any risk in using a folder that is double-mounted like this? 2. Does this configuration offer any redundancy for my /homes mounts? For example, if serverB is down but serverA is up, or vice versa, will my users still have a usable home directory without hanging?
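For reference, the two competing definitions look roughly like this (reconstructed from memory, so the exact fields are approximate):

# /etc/fstab entry (mounted first at boot):
serverA:/local_mount/fstab/mount/export  /homes  nfs  defaults  0  0

# auto.direct entry served from LDAP (shadows the fstab mount once autofs starts):
/homes  -fstype=nfs  serverB:/local_mount/auto/direct/export

Running findmnt /homes lists both stacked mounts, with the active (topmost) one shown last.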
FangQ (133 rep)
Feb 23, 2024, 03:04 AM • Last activity: Feb 23, 2024, 08:16 AM
3 votes
0 answers
912 views
Create redundant grub bootloader on Debian
I have a Debian-based server where every partition is RAIDed except the one that holds /boot/efi. This means that I will still have all my data after a disk failure, but the system may be unable to boot. Is there a good way to create a redundant grub install / EFI partition? I tried mdadm RAID with v0.90 metadata, but it's fragile and thus unsupported. I also found that grub supports syncing multiple EFI partitions on Ubuntu, but that's not available on Debian either.
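The workaround I am considering, in case it clarifies what I am after, is keeping a second ESP in sync by hand (device names and the loader path are placeholders for my layout):

# mount the spare ESP and mirror the primary one onto it
mount /dev/sdb1 /mnt/esp2
rsync -a --delete /boot/efi/ /mnt/esp2/
# register a fallback boot entry pointing at the spare disk
efibootmgr --create --disk /dev/sdb --part 1 --label "debian (fallback)" --loader '\EFI\debian\grubx64.efi'

but I would prefer something less ad hoc.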
sfphoton (56 rep)
Nov 12, 2021, 03:25 PM • Last activity: May 31, 2023, 01:12 PM
8 votes
2 answers
707 views
Archiving program that adds redundancy
I'm looking for an archiving program that adds redundancy to an archive. Example: I've got 500MB of data and a 700MB medium to burn it to. Rather than waste the remaining 200MB, I want to use it for redundancy, so that if some data is damaged, the archiving program can restore it from the redundant part. Does such a program exist? Which one would you recommend? If possible, FOSS software: if you don't have the archiver's source code, you don't know whether you'll be able to extract the archive in the future.
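For scale, one FOSS candidate I have seen mentioned is par2; if I understand its usage correctly, my 200MB-spare-for-500MB-of-data ratio would translate to roughly 40% redundancy:

par2 create -r40 data.par2 data.tar     # recovery files totalling ~40% of the input
par2 verify data.par2                   # check the archive later
par2 repair data.par2                   # reconstruct damaged blocks

But I don't know whether it is the recommended choice, hence this question.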
color2v (163 rep)
Apr 16, 2020, 10:43 AM • Last activity: May 25, 2023, 03:20 AM
10 votes
3 answers
6321 views
Protecting data against bit rot
I have realized that I need to protect all of my photographs against bit rot (file corruption occurring at random due to errors in hard drives or network transfer). I recently discovered par2, which seems like a great program for creating redundancy files, with the ability to detect and repair file corruption. I don't think journaling file systems are the right solution here, since I want the protection to follow along with the files into my backups and when migrating onto new laptops. So, what I think I need is a script that can be run as a cronjob, maybe once an hour. It would look through all of the files that need protection and update the redundancy files if files are added or changed (file has an edit timestamp newer than the redundancy archive), and it would repair files if any file has been corrupted (file has changed but the edit timestamp hasn't been updated). Is there any script or program that would do this? Or are there programs that solve the problem in another way? Or should I just write such a script myself (I would prefer not to; I want something robust and tested by a lot of users)?
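To make the intended logic concrete, the core of the script I have in mind would be something like this (untested, with placeholder paths):

#!/bin/bash
# hourly cron job: maintain one par2 recovery set per photo
cd /home/me/photos || exit 1
for f in *.jpg; do
    if [ ! -e "$f.par2" ] || [ "$f" -nt "$f.par2" ]; then
        # new or deliberately edited file: (re)create its recovery set
        rm -f -- "$f.par2" "$f".vol*.par2
        par2 create -q -r10 -- "$f.par2" "$f"
    else
        # unchanged file: verify it, repairing only if corrupted
        par2 repair -q -- "$f.par2" || echo "unrecoverable: $f" >&2
    fi
done

But again, I would rather use something that already exists and is widely tested.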
Jonatan Kallus (455 rep)
Jun 13, 2014, 12:39 PM • Last activity: Sep 20, 2020, 08:02 PM
8 votes
2 answers
1144 views
How to shard a file into n-out-of-m redundancy (erasure code, e.g. a kind of Reed-Solomon)?
How can I shard a file file into *m* files, so that it can be recovered from any *n* of them? It looks like an [Erasure Code](https://en.wikipedia.org/wiki/Erasure_code), preferably an "optimal erasure code". (Example of another application, and a proposed programming library: "You need erasure code" https://stackoverflow.com/a/28932095). It's a [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) style of redundancy (something more flexible than [RAID6](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_6)). Early findings: I've found rsbep so far, and [some modification](https://www.thanassis.space/rsbep.html) of it, but they seem to be designed for a different use case. I've also found the reed-solomon code from the Linux kernel ported to userspace [here](https://github.com/tierney/reed-solomon), but it's not a tool for my described purpose. Example of the interface I am after, for a 3-out-of-6 level of redundancy (the command names are made up):

split_with_redundancy -n 3 -m 6 input.dat

producing input.dat.0 .. input.dat.5, so that any three of those files are sufficient for recovery:

recover_using_redundancy input.dat.{0,2,4}

I do not care about errors within a given file, i.e. I do not need [Forward Error Correction](https://en.wikipedia.org/wiki/Forward_error_correction). I assume I can rely on the n-out-of-m parts I still have being fully correct.
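One thing I came across while searching: zfec ships command-line tools whose interface appears to match this shape (the share-file names below are my approximation of its output, so treat them as unverified):

zfec -m 6 -k 3 input.dat
zunfec -o recovered.dat input.dat.0_6.fec input.dat.2_6.fec input.dat.4_6.fec

I have not tested how it behaves in practice, so other suggestions are welcome.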
Grzegorz Wierzowiecki (14740 rep)
Apr 20, 2016, 07:28 PM • Last activity: Sep 16, 2020, 07:15 PM
4 votes
0 answers
419 views
LVM: on simple raid1 with two legs Cpy%Sync never reaches 100% and always stays at 99.99%
On a simple LVM raid1 with two legs, Cpy%Sync never reaches 100% and always stays at 99.99%:

lvm> lvs -a -o name,raid_sync_action,sync_percent
  LV              SyncAction  Cpy%Sync
  safe            idle        99,99
  [safe_rimage_0]
  [safe_rimage_1]
  [safe_rmeta_0]
  [safe_rmeta_1]

How can I troubleshoot and fix this? It is preventing me from issuing other commands, and the mirror probably doesn't handle failures properly either. One of the failing commands is:

lvm> lvchange --writemostly /dev/sdd1:y vg/safe
  Unable to change writemostly on vg/safe while it is not in-sync.
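For completeness, I am aware of the scrubbing interface documented in lvmraid(7):

lvchange --syncaction repair vg/safe                  # request a repair scrub
lvs -o name,raid_sync_action,sync_percent vg/safe     # watch its progress

but I am not sure whether a repair pass is the right lever for a sync percentage stuck just below 100%.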
Pedro Rolo (151 rep)
Jun 28, 2020, 06:33 AM • Last activity: Jun 28, 2020, 10:16 AM
11 votes
2 answers
14670 views
How to make systemd-resolved stop trying to use offline DNS servers?
I have configured my DHCP server to supply two nameservers for redundancy, so that if one is offline the other one can be used. I have configured my PCs with systemd-resolved, and according to resolvectl status it has picked up all the DNS servers (IPv4 and IPv6 addresses for both) and is using one as the current one. However, if the DNS server goes offline, systemd-resolved does not switch to the next server but instead keeps trying to connect to the offline one, causing all uncached name resolution to fail. If I run systemctl restart systemd-resolved then it will switch to another server and continue working, but it will randomly switch back to the offline server after a while and name resolution will again fail. How can I tell systemd-resolved to stop using an offline DNS server and quickly switch to one of the other ones it knows about?

journalctl only shows this when it switches to using the offline server:

systemd-resolved: Using degraded feature set (UDP) for DNS server fdxx::x.
systemd-resolved: Using degraded feature set (TCP) for DNS server fdxx::x.

The server in question is completely offline when this happens and does not respond to pings.
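For reference, the only related knobs I have found are in resolved.conf(5), where the server list can at least be pinned explicitly (addresses below are placeholders):

[Resolve]
DNS=192.0.2.1 192.0.2.2
FallbackDNS=

followed by systemctl restart systemd-resolved - but nothing I can see there controls how quickly an unresponsive server is abandoned.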
Malvineous (7395 rep)
Sep 2, 2018, 02:57 AM • Last activity: May 26, 2020, 08:03 PM
0 votes
1 answer
33 views
When I get a font package with OTFs, should I also keep the Web versions?
I've just downloaded the Comic Neue font family distribution zip (from here). It has an OTF/ subfolder with 12 .otf files, and a Web/ subfolder with 49 files - .ttf, .eot, .woff and .woff2 files (same 12 base-names as the OTFs). Is there any use for me - as a desktop user, and even perhaps as a person creating raster and vector graphics, but _not_ for the web / mobile world - to keep the Web/ font files?
einpoklum (10753 rep)
Mar 24, 2020, 10:07 PM • Last activity: Mar 25, 2020, 11:19 AM
1 vote
0 answers
310 views
OpenLDAP Cluster
Trying to implement an OpenLDAP cluster, I have already managed to set up the two backend LDAP servers in mirroring mode. The application (iRedMail) using the LDAP service runs on the same systems as the LDAP servers. This application needs the LDAP configuration in the old slapd.conf manner and not in the CONFIG-DB way, so I added the mirroring parameters to the slapd.conf file. The file looks like this on the first backend node:
include     /etc/openldap/schema/core.schema
include     /etc/openldap/schema/corba.schema
include     /etc/openldap/schema/cosine.schema
include     /etc/openldap/schema/inetorgperson.schema
include     /etc/openldap/schema/nis.schema
include     /etc/openldap/schema/calentry.schema
include     /etc/openldap/schema/calresource.schema
include     /etc/openldap/schema/amavisd-new.schema
include     /etc/openldap/schema/iredmail.schema

pidfile     /var/run/openldap/slapd.pid
argsfile    /var/run/openldap/slapd.args

# The syncprov overlay
moduleload syncprov.la

disallow    bind_anon
require     LDAPv3
loglevel    0

access to attrs="userPassword,mailForwardingAddress,employeeNumber"
    by anonymous    auth
    by self         write
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users        none

access to attrs="cn,sn,gn,givenName,telephoneNumber"
    by anonymous    auth
    by self         write
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users        read

access to attrs="objectclass,domainName,mtaTransport,enabledService,domainSenderBccAddress,domainRecipientBccAddress,domainBackupMX,domainMaxQuotaSize,domainMaxUserNumber,domainPendingAliasName"
    by anonymous    auth
    by self         read
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users        read

access to attrs="domainAdmin,domainGlobalAdmin,domainSenderBccAddress,domainRecipientBccAddress"
    by anonymous    auth
    by self         read
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users        none

access to attrs="mail,accountStatus,domainStatus,userSenderBccAddress,userRecipientBccAddress,mailQuota,backupMailAddress,shadowAddress,memberOfGroup,member,uniqueMember,storageBaseDirectory,homeDirectory,mailMessageStore,mailingListID"
    by anonymous    auth
    by self         read
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users        read

access to dn="cn=vmail,dc=myCompany,dc=de"
    by anonymous                    auth
    by self                         write
    by users                        none

access to dn="cn=vmailadmin,dc=myCompany,dc=de"
    by anonymous                    auth
    by self                         write
    by users                        none

access to dn.regex="domainName=([^,]+),o=domains,dc=myCompany,dc=de$"
    by anonymous                    auth
    by self                         write
    by dn.exact="cn=vmail,dc=myCompany,dc=de"   read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by dn.regex="mail=[^,]+@$1,o=domainAdmins,dc=myCompany,dc=de$" write
    by dn.regex="mail=[^,]+@$1,ou=Users,domainName=$1,o=domains,dc=myCompany,dc=de$" read
    by users                        none

access to dn.subtree="o=domains,dc=myCompany,dc=de"
    by anonymous                    auth
    by self                         write
    by dn.exact="cn=vmail,dc=myCompany,dc=de"    read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users                        read

access to dn.subtree="o=domainAdmins,dc=myCompany,dc=de"
    by anonymous                    auth
    by self                         write
    by dn.exact="cn=vmail,dc=myCompany,dc=de"    read
    by dn.exact="cn=vmailadmin,dc=myCompany,dc=de"  write
    by users                        none

access to dn.regex="cn=[^,]+,dc=myCompany,dc=de"
    by anonymous                    auth
    by self                         write
    by users                        none

access to *
    by anonymous                    auth
    by self                         write
    by users                        read

database monitor
access to dn="cn=monitor"
    by dn.exact="cn=Manager,dc=myCompany,dc=de" read
    by dn.exact="cn=vmail,dc=myCompany,dc=de" read
    by * none

database    mdb
suffix      dc=myCompany,dc=de
directory   /var/lib/ldap/myCompany.de
rootdn      cn=Manager,dc=myCompany,dc=de
rootpw      {SSHA}V5/UQXm9SmzRGjKK2zAKB79eFSaysc2wG9tPIg==
sizelimit   unlimited
maxsize     2147483648
checkpoint  128 3
mode        0700

index objectclass,entryCSN,entryUUID                eq
index uidNumber,gidNumber,uid,memberUid,loginShell  eq,pres
index homeDirectory,mailMessageStore                eq,pres
index ou,cn,mail,surname,givenname,telephoneNumber,displayName  eq,pres,sub
index nisMapName,nisMapEntry                        eq,pres,sub
index shadowLastChange                              eq,pres
index member,uniqueMember eq,pres

index domainName,mtaTransport,accountStatus,enabledService,disabledService  eq,pres,sub
index domainAliasName    eq,pres,sub
index domainMaxUserNumber eq,pres
index domainAdmin,domainGlobalAdmin,domainBackupMX    eq,pres,sub
index domainSenderBccAddress,domainRecipientBccAddress  eq,pres,sub

index accessPolicy,hasMember,listAllowedUser,mailingListID   eq,pres,sub

index mailForwardingAddress,shadowAddress   eq,pres,sub
index backupMailAddress,memberOfGroup   eq,pres,sub
index userRecipientBccAddress,userSenderBccAddress  eq,pres,sub
index mobile,departmentNumber eq,pres,sub

#Mirror Mode
serverID    001

# Consumer
syncrepl rid=001 \
provider=ldap://rm2.myCompany.de \
bindmethod=simple \
binddn="cn=vmail,dc=myCompany,dc=de" \
credentials="gtV9FwILIcp8Zw8YtGeB1AC9GbGfti" \
searchbase="dc=myCompany,dc=de" \
attrs="*,+" \
type=refreshAndPersist \
interval=00:00:01:00 \
retry="60 +"
# Provider
overlay syncprov
syncprov-checkpoint 50 1
syncprov-sessionlog 50

mirrormode on
There are only two differences in the second node's config file:
[...]
#Mirror Mode
serverID    002
[...]

# Consumer
[...]
provider=ldap://rm1.myCompany.de \
[...]
As mentioned before, the mirroring works perfectly. Now I need a single connection address for the LDAP clients, i.e. web applications using LDAP as their authentication mechanism. I read that you can use an OpenLDAP proxy for that purpose: the LDAP client (here: a web application) connects to the LDAP proxy, and the proxy retrieves the authentication data from multiple backend LDAP servers. I set up an OpenLDAP proxy; it uses the CONFIG-DB, not the ancient way. The slapd.conf file used to generate it looks like this:
include         /etc/openldap/schema/corba.schema
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/duaconf.schema
include         /etc/openldap/schema/dyngroup.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/java.schema
include         /etc/openldap/schema/misc.schema
include         /etc/openldap/schema/nis.schema
include         /etc/openldap/schema/openldap.schema
include         /etc/openldap/schema/ppolicy.schema
 
pidfile         /var/run/openldap/slapd.pid
argsfile        /var/run/openldap/slapd.args

modulepath  /usr/lib/openldap
modulepath  /usr/lib64/openldap
moduleload  back_ldap.la       
loglevel	0

database		ldap
readonly		yes            
protocol-version	3
rebind-as-user
uri			"ldap://rm1.myCompany.de:389"
suffix		        "dc=myCompany,dc=de"
uri                     "ldap://rm2.myCompany.de:389"
suffix		        "dc=myCompany,dc=de"
First issue: when creating the CONFIG-DB using slaptest, the command fails, claiming:
5dc44107 /etc/openldap/slapd.conf: line 48: suffix already served by this backend!.
slaptest: bad configuration directory!
The slaptest command looks like this:
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/
It is possible that I didn't completely understand the concept, because all the guides I found use subdomain-style prefixes in the suffixes for the different LDAP backend servers, i.e. instead of:
uri			"ldap://rm1.myCompany.de:389"
suffix		        "dc=myCompany,dc=de"
uri                     "ldap://rm2.myCompany.de:389"
suffix		        "dc=myCompany,dc=de"
they use:
uri            "ldap://rm1.myCompany.de:389"
suffix		   "dc=ou1,dc=myCompany,dc=de"
uri            "ldap://rm2.myCompany.de:389"
suffix		   "dc=ou2,dc=myCompany,dc=de"
What I don't understand: on the backend servers there is no ou1 or ou2. How can they expect to find anything in the backend LDAPs if the DNs do not match? I temporarily commented out the second uri in order to check whether, apart from this issue, LDAP queries to the LDAP proxy succeed, but ran into the second issue. Second issue: if I run an ldapsearch directly against the two backend LDAP servers (one after the other), all of the LDAP users are enumerated. If I run the same ldapsearch against the LDAP proxy, only the user "vmail" is enumerated. I would expect the same users to be listed as in the direct query. This is the ldapsearch command:
ldapsearch -D "cn=vmail,dc=myCompany,dc=de" -w gtV9FwILIcp8Zw8YtGeB1AC9GbGfti -p 389 -h 192.168.0.92 -b "dc=myCompany,dc=de" -s sub "(objectclass=person)"
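One idea I still want to try, based on my reading of slapd-ldap(5) (so the syntax below is unverified): a single back_ldap database can apparently take several space-separated URLs in one uri directive, which are tried in order, instead of one suffix directive per URL:

database            ldap
readonly            yes
protocol-version    3
rebind-as-user
suffix              "dc=myCompany,dc=de"
uri                 "ldap://rm1.myCompany.de:389 ldap://rm2.myCompany.de:389"

Would that be the correct way to get failover across both providers?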
Did I miss something? Thank you for your consideration! Best regards, Florian
arminV (11 rep)
Nov 8, 2019, 10:45 AM
1 vote
0 answers
119 views
Ucarp with 2 interfaces
I have a server with two interfaces that acts as both a router and a firewall, running Debian. I want to use ucarp for redundancy purposes. The question is: how do I configure ucarp for two interfaces?
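My current understanding (untested; addresses and interface names are placeholders) is that I would run one ucarp instance per interface, each with its own vhid, password and virtual IP, with the up/down scripts adding and removing the address:

ucarp -i eth0 -s 192.168.1.2 -v 1 -p secret1 -a 192.168.1.1 --upscript=/etc/ucarp/up-eth0.sh --downscript=/etc/ucarp/down-eth0.sh &
ucarp -i eth1 -s 10.0.0.2 -v 2 -p secret2 -a 10.0.0.1 --upscript=/etc/ucarp/up-eth1.sh --downscript=/etc/ucarp/down-eth1.sh &

Is that the right approach, or is there a way to tie the two instances together so both interfaces fail over at once?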
user19215 (111 rep)
Sep 5, 2019, 08:27 PM
1 vote
2 answers
1506 views
Most ironclad way to make root installation redundant and maximize uptime? RAID, ZFS or something else?
I would like to set up my desktop computer (which is actually a server for the KVM guests I do my actual work in) to have a redundant root installation. If one drive dies, I want to quickly get back to work without doing a full restore from backup, or a system reinstall and resetting all my settings and preferences. I thought that the way to do this would be RAID1, but the deeper I dig into it, the more I realize that RAID1 is not a 'set-it-and-forget-it' solution. Oh, and I want it to be UEFI boot.

Last time I tried a software RAID1 install (which I set up using the Ubuntu Server installer), something got corrupted and I ended up with a GRUB rescue screen, and could not for the life of me figure out how to get it to boot from the mirror drive. For all I know, the boot sector on both was corrupted due to the corruption replicating between drives. Obviously this defeats the purpose of having a RAID1 boot for the purpose of decreased downtime. I was thinking that maybe I should put the EFI partition on a USB drive and keep it backed up for quick and easy replacement (while having the root partition in RAID1), but I am worried that I might not always know when the EFI partition has changed and therefore will not know when to back it up.

I was also thinking of doing ZFS-on-root, on the theory that the bitrot protection and snapshotting might be more useful in preventing situations like the one above. But it seems that ZFS on root is not recommended for Ubuntu, and the status of ZFS on Linux in general seems to be in question now due to a certain Linux kernel programmer's stated lack of tolerance for ZFS. I wonder if this might be a good approach, but I know nothing about this whole MAAS thing and have no idea whether it is relevant to my use case.

The last thing I was thinking was to just do a regular one-drive install and then every week or so dd it to a spare drive, so that if disaster strikes I can at least recover my settings and installation from a week ago or less. But wouldn't dd'ing an SSD every week be really hard on it?

I have found countless tutorials about RAID and ZFS, but so far have not found anything that clearly explains the pros and cons of my options with respect to the goal stated above. Advice or links to explanations would be greatly appreciated!
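Regarding that last idea, the weekly clone I have in mind is nothing fancier than this (device names are placeholders, and I realize it overwrites the spare wholesale):

dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=progress

hence my worry about write wear on the spare SSD.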
Stonecraft (869 rep)
Feb 18, 2019, 04:48 AM • Last activity: Feb 19, 2019, 03:49 PM
1 vote
0 answers
344 views
ICMPv6 sends Multicast Listener Report Message and Neighbor Solicitation
I am configuring ucarp on two of my PC routers, i.e. a master and a slave. When the master is turned off and then turned back on, tshark on the slave records ICMPv6 Multicast Listener Report messages and Neighbor Solicitations. But I did not configure IPv6 in ucarp (which uses the Common Address Redundancy Protocol). Are these additional protocols that appear when the new master router comes up?
Reski Wahyuni (11 rep)
May 5, 2018, 12:50 PM • Last activity: May 5, 2018, 12:56 PM
1 vote
1 answer
273 views
How to use one path in a here-document
I wrote this here-document to source a few scripts under the path ~/own_scripts/, but I wrote it in a way that duplicates this path:

source <<-EOF
~/own_scripts/1.sh
~/own_scripts/2.sh
# More scripts under ~/own_scripts
EOF

Setting and later unsetting a variable holding the path would be nice, but would still involve redundancy. What's the best way to avoid path redundancy in such a source (or bash) here-document?
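For comparison, the plain-loop alternative I am trying to improve on is:

dir=~/own_scripts
for script in "$dir"/1.sh "$dir"/2.sh; do
    source "$script"
done

(I also note that source normally takes a filename argument, so I am not even sure my here-document form is valid - corrections welcome.)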
Arcticooling (1 rep)
Feb 3, 2018, 05:48 AM • Last activity: Feb 3, 2018, 09:05 AM
1 vote
0 answers
354 views
Proxmox hyper-convergence and storage redundancy
What kind of storage redundancy (data blocks on at least two different physical machines) does [Proxmox Hyper-Convergence](https://pve.proxmox.com/wiki/Hyper-converged_Infrastructure) provide? Nutanix does it with [a controller VM (CVM)](http://next.nutanix.com/t5/How-It-Works/CVM/td-p/12526#messageBodySimpleDisplay_0), VxRail with [VMware vSAN](https://www.vmware.com/products/vsan.html), and SimpliVity with [an I/O card (OVC)](https://www.google.com/search?q=OmniStack+Virtual+Controller).
haba713 (307 rep)
Jan 30, 2018, 12:51 PM
1 vote
0 answers
26 views
Redundancy across several web servers with the same IP for public access
We are using 2 (or even more) web servers for backup, so that if the main web server goes down, one of the others takes over. For this purpose, I think I should use a virtual IP address (the red triangle), right? If so, how should I set this up? And if not, what is best for this purpose, and how should I build it?

[network diagram: the virtual IP is marked with a red triangle]

OS: Debian 8 & CentOS 7
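What I have gathered so far is that keepalived (VRRP) is a common way to float such an address between servers; a minimal config, as far as I understand its syntax (addresses are placeholders), would be:

vrrp_instance VI_1 {
    state MASTER              # BACKUP on each standby server
    interface eth0
    virtual_router_id 51
    priority 100              # lower on each standby
    advert_int 1
    virtual_ipaddress {
        192.168.0.100
    }
}

Is that the recommended approach on Debian 8 / CentOS 7?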
Parsa Samet (777 rep)
Sep 14, 2016, 05:45 AM
4 votes
0 answers
237 views
Split file F into M parts, recover file F from N of those M parts
I want to split a file into *M* parts, such that I can recover the file from *N* of those *M* parts (where *M>N*, and I get to choose both *M* and *N*). For example:

1. I have FILE.IMG
2. I split FILE.IMG into *M=3* parts.
3. I set the split-time encoding to allow me to recover the file from any *N=2* of those parts.
4. The encoding/splitting is finished; I now have FILE.IMG.1, FILE.IMG.2 and FILE.IMG.3
5. I delete any one of those three new files, and yet I can still recover the original FILE.IMG

I use Ubuntu Linux, and hope for an answer using apt-get-able tools thereon.
Rick (141 rep)
Aug 18, 2016, 11:46 PM • Last activity: Aug 20, 2016, 01:12 AM
3 votes
1 answer
1437 views
Using WiFi port as redundant link
I want to use the WiFi port of my server as a redundant link for the copper connection to my home DSL router. My router is a SAGEM 2704 with very limited functionality, so practically the only possibility is to configure something on the server. Is it possible to use WiFi for redundancy? If so, what do I have to implement on my server?
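In case it helps to know what I have looked at: Linux active-backup bonding seems closest to what I want - something along these lines (untested; interface names are placeholders, and I gather many WiFi NICs need fail_over_mac=active since they cannot take over the bond's MAC):

ip link add bond0 type bond mode active-backup miimon 100 fail_over_mac active
ip link set eth0 down && ip link set eth0 master bond0
ip link set wlan0 down && ip link set wlan0 master bond0   # wlan0 already associated via wpa_supplicant
ip link set dev bond0 type bond primary eth0               # prefer the wired link while it is up
ip link set bond0 up

Would that work with a wireless slave, or is a routing-level failover more appropriate?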
mackowiakp (455 rep)
Apr 16, 2014, 08:21 PM • Last activity: Apr 17, 2014, 09:57 AM
0 votes
0 answers
149 views
Redundant and Load Balanced storage on FreeBSD?
Back from: https://unix.stackexchange.com/questions/81252/load-balanced-redundant-storage-on-freebsd

How do I create redundant and load-balanced storage based on FreeBSD?

- The storage should be located on 2 or more servers, providing redundancy and load balancing (round robin) at the same time.
- The storage needs to be accessible over the network for other BSD machines as one folder (i.e. /storage) (NFS?).

Any solutions are welcome, except for master-slave HAST and CARP. **Please don't post these: they are not related to this question.**
scorpio1441 (411 rep)
Aug 20, 2013, 11:07 PM • Last activity: Aug 20, 2013, 11:21 PM
3 votes
2 answers
259 views
What is a good way of layering ZFS filesystems to manage an unpredictable future workload without severely compromising performance?
I've been playing around with ZFS, using sparse files as virtual devices to learn how to use the zfs tools. It seems that there isn't much of a performance hit from creating a raid pool out of sparse files which sit on top of JBOD filesystems located on different devices. So I'm wondering if this is a good way to have a flexible system going forward, seeing as it's not possible to shrink ZFS filesystems. For example: I have four disks, each with a single-device filesystem on it. I create 4 sparse files, one on each filesystem and therefore one per disk. The load is spread across the devices, and they also provide redundancy. Until the raid pool fills, I still have the rest of the disk space for things which may not need raid, such as temporary bulky files. I expect there would be a major performance penalty when both kinds of filesystem are busy at the same time. Are there more pros and cons to such an approach?
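Concretely, the experiment I have been running looks like this (sizes and mountpoints are placeholders):

truncate -s 100G /disk1/zvdev /disk2/zvdev /disk3/zvdev /disk4/zvdev   # sparse backing files, one per disk
zpool create tank raidz /disk1/zvdev /disk2/zvdev /disk3/zvdev /disk4/zvdev
zpool status tank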
barrymac (1165 rep)
Aug 19, 2013, 07:54 PM • Last activity: Aug 20, 2013, 10:36 AM
2 votes
1 answer
536 views
Configure SSH keys for production and DR server
Here is the case: we have successfully configured SSH keys (and hence a password-less SFTP connection) between ProdServer-A and ProdServer-B. It worked, or so I thought until yesterday. Yesterday ProdServer-B failed over to DisasterServer-B. When it did, the SSH connection failed. On ProdServer-A we get an alert saying that ProdServer-B's host key has changed (known_hosts) and that it could be a "man-in-the-middle" attack (which is as expected - we know why this is happening). So, my question is: how can we add the keys of both servers (same hostname, but only one active at a time) to known_hosts? Or even better, how can we avoid SSH connection failures when a production server fails over to its disaster-recovery server? Suggestions are welcome. Thanks!
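From what I can tell, known_hosts can hold more than one key for the same name, so my tentative plan is to collect each server's key while it is the active one (the hostname is a placeholder) and let both entries coexist:

ssh-keyscan prodserver-b.example.com >> ~/.ssh/known_hosts    # run once per server while it is the active one

Is that safe, or is there a cleaner mechanism for this failover case?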
Guru (181 rep)
Mar 19, 2013, 09:43 PM • Last activity: Mar 19, 2013, 11:18 PM