Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3 votes • 1 answer • 2972 views
OpenAFS suddenly fails: a pioctl failed while obtaining tokens
My AFS client stopped working. I'm not sure why; maybe I ran `apt-get`? Anyway:
user@box ~ $ kinit
user@IES.AUC.DK's Password:
user@box ~ $ aklog
aklog: a pioctl failed while obtaining tokens for cell ies.auc.dk
Checking the status of the service:
user@box ~ $ sudo service openafs-client status
[sudo] password for user:
● openafs-client.service - OpenAFS client
Loaded: loaded (/lib/systemd/system/openafs-client.service; enabled; vendor p
Active: active (exited) since Mon 2017-11-13 08:17:40 CET; 3h 8min ago
Process: 1942 ExecStartPost=/usr/bin/fs sysname $AFS_SYSNAME (code=exited, sta
Process: 1934 ExecStartPost=/usr/bin/fs setcrypt $AFS_SETCRYPT (code=exited, s
Process: 1930 ExecStart=/sbin/afsd $AFSD_ARGS (code=exited, status=0/SUCCESS)
Process: 1918 ExecStartPre=/usr/share/openafs/openafs-client-precheck (code=ex
Tasks: 0 (limit: 512)
Memory: 0B
CPU: 0
Nov 13 08:17:40 box systemd: Starting OpenAFS client...
Nov 13 08:17:40 box openafs-client-precheck: modprobe: FATAL: Modul
Nov 13 08:17:40 box openafs-client-precheck: Failed to load openafs
Nov 13 08:17:40 box fs: Usage: /usr/bin/fs setcrypt -crypt
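Given that the precheck could not load the kernel module, one recovery path would be rebuilding the module for the running kernel. A minimal sketch, assuming the Debian packaging where the module is provided by openafs-modules-dkms (that package name is an assumption; it does not appear in the logs above):
sudo apt-get install --reinstall openafs-modules-dkms  # reruns the DKMS build
sudo modprobe openafs                                  # verify the module loads now
sudo service openafs-client restart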
kidmose (185 rep) • Nov 13, 2017, 01:27 PM • Last activity: Apr 27, 2025, 06:05 AM

0 votes • 0 answers • 124 views
modprobe: ERROR: could not insert 'openafs': Exec format error raspberry pi
Why am I getting this `Exec format error`, listed below, while attempting to install the OpenAFS client on a Raspberry Pi?
[rpi][dotfiles]$ sudo apt-get install -y linux-headers-6.1.0-21-arm64
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
linux-headers-6.1.0-21-arm64 is already the newest version (6.1.90-1).
0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up openafs-client (1.8.9-1) ...
modprobe: ERROR: could not insert 'openafs': Exec format error
Failed to load openafs.ko. Does it need to be built?
dpkg: error processing package openafs-client (--configure):
installed openafs-client package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
openafs-client
needrestart is being skipped since dpkg has failed
E: Sub-process /usr/bin/dpkg returned an error code (1)
[rpi][dotfiles]$
dmesg says:
[491845.462900] module openafs: unsupported RELA relocation: 311
This happened after installing some of the following packages:
apt-get install -y openafs-{modules-dkms,client,krb5} krb5-{config,user} libpam-krb5
Additional system info:
[rpi][dotfiles]$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
[rpi][dotfiles]$ uname -a
Linux rpi 6.1.0-21-arm64 #1 SMP Debian 6.1.90-1 (2024-05-03) aarch64 GNU/Linux
[rpi][dotfiles]$
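A few checks would distinguish a module built for the wrong kernel or architecture; a sketch, where the DKMS module name openafs is an assumption based on the openafs-modules-dkms package above:
uname -r                   # running kernel (6.1.0-21-arm64 per the output above)
dpkg --print-architecture  # userland architecture
sudo dkms status openafs   # which kernels the module has been built for
sudo dkms autoinstall      # rebuild against the running kernel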
ealfonso (993 rep) • Jun 20, 2024, 07:46 PM • Last activity: Jun 22, 2024, 10:24 AM

0 votes • 1 answer • 24 views
bos: unable to build security class (configuring connection security)
I'm having trouble setting up OpenAFS on Debian bookworm.
I've imported Kerberos keys into OpenAFS via `akeyconvert -all`:
sudo asetkey list
rxkad_krb5 kvno 4 enctype 17; key is: ????????????????????????????????
rxkad_krb5 kvno 4 enctype 18; key is: ????????????????????????????????????????????????????????????????
All done.
I'm now trying to use the `bos` command line, but this fails:
$ sudo bos listkeys -server asus.erjoalgo.com
bos: unable to build security class (configuring connection security)
I have tried building `bos` from source to better understand the context of the error message. I've only narrowed it down to:
function `afsconf_ClientAuthToken` in auth/authcon.c:
code = ktc_GetTokenEx(info->name, &tokenSet);
function `ktc_GetTokenEx` in auth/ktc.c:
code = PIOCTL(0, VIOC_GETTOK2, &iob, 0);
This returns a non-zero code, causing the command line to fail.
What could be the reason that the PIOCTL call is failing? Is there any way to get more information?
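One generic way to get more information, sketched here: on Linux, OpenAFS pioctls travel through an ioctl on /proc/fs/openafs/afs_ioctl, so tracing the failing command can expose the errno behind the message (the syscall filter is a guess at what is informative):
# Look for ioctl/openat failures just before bos prints its error.
sudo strace -f -e trace=ioctl,openat bos listkeys -server asus.erjoalgo.com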
I've tried rebuilding the kernel module as suggested here:
sudo dpkg-reconfigure openafs-modules-dkms
And restarting the openafs-client service, but this does not change anything.
I only noticed some benign-looking warnings in dmesg:
[ 20.377862] systemd-fstab-generator: Checking was requested for "/var/cache/openafs.img", but it is not a device.
[ 20.676946] systemd: /lib/systemd/system/openafs-client.service:22: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[ 49.217272] openafs: loading out-of-tree module taints kernel.
[ 49.217278] openafs: module license 'http://www.openafs.org/dl/license10.html ' taints kernel.
[ 49.217987] openafs: module verification failed: signature and/or required key missing - tainting kernel
I don't see anything interesting in the openafs-client service logs or in syslog:
$ sudo journalctl -feu openafs-client
May 28 09:03:43 asus systemd: Starting openafs-client.service - OpenAFS client...
May 28 09:03:50 asus afsd: afsd: All AFS daemons started.
May 28 09:03:50 asus afsd: afsd: All AFS daemons started.
May 28 09:03:50 asus systemd: Started openafs-client.service - OpenAFS client.
May 28 09:03:52 asus fs: Usage: /usr/bin/fs sysname [-newsys +] [-help]
May 28 21:11:53 asus systemd: Stopping openafs-client.service - OpenAFS client...
May 28 21:11:54 asus systemd: openafs-client.service: Deactivated successfully.
May 28 21:11:54 asus systemd: Stopped openafs-client.service - OpenAFS client.
May 28 21:11:54 asus systemd: openafs-client.service: Consumed 2.957s CPU time.
May 28 21:11:54 asus systemd: Starting openafs-client.service - OpenAFS client...
May 28 21:11:56 asus afsd: afsd: All AFS daemons started.
May 28 21:11:56 asus afsd: afsd: All AFS daemons started.
May 28 21:11:56 asus fs: Usage: /usr/bin/fs sysname [-newsys +] [-help]
May 28 21:11:56 asus systemd: Started openafs-client.service - OpenAFS client.
How can I further debug this bos error?
openafs 1.8.9-1-debian
$ sudo lsmod | grep openafs
openafs 2863104 2
$
ealfonso (993 rep) • May 29, 2024, 01:44 AM • Last activity: Jun 21, 2024, 09:47 AM

1 vote • 2 answers • 62 views
Help importing kerberos key into openafs
I'm having trouble exporting and importing Kerberos keys into OpenAFS.
My first problem is that when using the `addprinc` and `ktadd` commands in `kadmin.local`, the encryption key type (`-e`) option appears to be ignored. For example, when I try to add a key of type `des-cbc-crc:v4`, a key of type `aes256-cts-hmac-sha1-96` appears to be added instead:
kadmin.local: ktadd -e des-cbc-crc:v4 -k /tmp/afs.ktab afs
Entry for principal afs with kvno 4, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/afs.ktab.
Entry for principal afs with kvno 4, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/afs.ktab.
The same happens with `addprinc`: I try to specify `-e DES-CBC-CRC:md5` for the key type, but this appears to be ignored and I end up with an `aes128-cts-hmac-sha1-96` key:
$ kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin.local: addprinc -policy service -randkey -e DES-CBC-CRC:md5 afs
WARNING: policy "service" does not exist
Principal "afs@EXAMPLE.COM" created.
kadmin.local: getprinc afs
Principal: afs@EXAMPLE.COM
Expiration date: [never]
Last password change: Mon May 27 18:22:21 EDT 2024
Password expiration date: [never]
Maximum ticket life: 0 days 10:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Mon May 27 18:22:21 EDT 2024 (root/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, aes256-cts-hmac-sha1-96
Key: vno 1, aes128-cts-hmac-sha1-96
MKey: vno 1
Attributes: REQUIRES_PRE_AUTH
Policy: service [does not exist]
kadmin.local:
Additionally, when I try to import this key using `asetkey`, I get an unreadable error message:
sudo asetkey add 4 /tmp/afs.ktab afs
asetkey: unknown RPC error (-1765328203) for keytab entry with Principal afs@EXAMPLE.COM, kvno 4, DES-CBC-CRC/MD5/MD4
Reading the `asetkey` manpage, I see a strong recommendation against using the `des-cbc-crc` key type, in favor of the `rxkad-k5` extension:
> A modern AFS cell should be using the rxkad-k5 extension, or risks terribly insecure operation (complete cell compromise for $100 in 1 day). The keys used for rxkad-k5 operation are stored in the KeyFileExt. Cells not using the rxkad-k5 extension (i.e., stock rxkad) use keys of the des-cbc-crc encryption type, which are stored in the KeyFile.
Reading further, the `KeyFileExt` man page says that adding `rxkad-k5` keys requires specifying a "krb5 encryption type number", which is distinct from a string identifier:
> Using asetkey(8) to add rxkad-k5 keys to the KeyFileExt also requires specifying a krb5 encryption type number. Since the encryption type must be specified by its number (not a symbolic or string name), care must be taken to determine the correct encryption type to add.
I'm stuck with a lot of related questions:
1. Why does `kadmin` appear to ignore my specified encryption type?
2. How do I determine if my OpenAFS is using the `rxkad-k5` extension? I searched Debian packages via `apt-cache search rxkad-k5` and `rxkad` and found nothing.
3. Since `aes256-cts-hmac-sha1-96` looks like a string identifier, how can I determine the "krb5 encryption type number" for this encryption in order to import it via `asetkey`? (See the sketch after this list.)
4. I noticed `openafs-krb5` is a separate package from `openafs-{fileserver,dbserver,client}`. Is there a recommended way of managing OpenAFS authentication on Debian without setting up Kerberos?
5. I found that `akeyconvert` claims to help importing keys from the krb5 keytab format to the KeyFileExt format. Should I be using `akeyconvert` to convert my `afs.keytab` key into OpenAFS?
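On question 3, for reference: the numbers appear to come from the standard krb5 encryption type registry; RFC 3962 assigns 17 to aes128-cts-hmac-sha1-96 and 18 to aes256-cts-hmac-sha1-96. A sketch of the import, where the argument order is my best reading of the asetkey manpage and the keytab path is the one from above:
# 17 = aes128-cts-hmac-sha1-96, 18 = aes256-cts-hmac-sha1-96 (RFC 3962)
sudo asetkey add rxkad_krb5 4 18 /tmp/afs.ktab afs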
ealfonso (993 rep) • May 27, 2024, 11:17 PM • Last activity: Jun 9, 2024, 07:03 AM

0 votes • 2 answers • 45 views
/afs/ThisCell path resolution fails
When using AFS, you can specify paths as `/afs/ThisCell/...` instead of using the actual cell name and it will interpolate it for you. That's not working on my local machine, but all other AFS functionality seems to be working fine. I can still access files and directories if I specify the cell name instead of using `ThisCell`. If I log into one of our remote machines and attempt the same, then `ThisCell` works correctly.
I'm using OpenAFS 1.6.23 on RHEL 7.7. The remote machines are using the same version of OpenAFS and RHEL 7.5. I've verified that `/usr/vice/etc/ThisCell` exists on my machine and has the correct contents. `fs wscell` also returns the correct value. I restarted my machine, but that didn't help either.
Where should I start with this? What else should I try?
Paul Bunch (21 rep) • Jun 24, 2020, 05:04 PM • Last activity: Jul 2, 2020, 02:56 PM

1 vote • 2 answers • 431 views
screen session loses contact to AFS file system
Our $HOME file system is an OpenAFS system. I log on to my desktop machine from my laptop at home and want to run a long job. To protect it from a broken session, I open up `screen -S session_name`, run the script from there, and then disconnect the screen session. My problem is that after a relatively short time of a few hours, the session loses contact with the AFS file system, so I can't use any files there in the script stored on my $HOME. If I reconnect to the session later, I can't list any files there or change directory to my home; I simply get a permission denied error.
I tried the following commands to try and reconnect, which usually work if I have left my desktop logged in too long:
fs checkservers
fs checkvolumes
fs flush
but that doesn't help the screen session to reconnect. Does anyone know how I can keep access to AFS in a disconnected screen session, or place a command in my bash/python scripts to keep it alive?
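One thing I have considered is wrapping the job in krenew from the kstart package, which keeps renewing the Kerberos ticket and re-runs aklog so the AFS token stays fresh. A sketch, assuming my tickets are renewable and kstart is installed (./my_long_job.sh stands in for the actual script):
# -t re-runs aklog after each renewal; -K 60 re-checks every 60 minutes.
screen -S session_name krenew -t -K 60 ./my_long_job.sh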
ClimateUnboxed (133 rep) • Jun 8, 2020, 09:48 PM • Last activity: Jun 10, 2020, 06:01 PM

6 votes • 2 answers • 7135 views
Why do I get permission denied error when I log out of the SSH session?
I have to run some tests on a server at the University. I have ssh access to the server from the desktop in my office. I want to launch a python script on the server that will run several tests during the weekend.
The desktop in the office will go on standby during the weekend and as such it is essential that the process continues to run on the server even when the SSH session gets terminated.
I know about `nohup`, `screen`, and `tmux`, as described in questions like:
- [How to keep processes running after ending ssh session?](https://askubuntu.com/q/8653)
- [How can I close a terminal without killing the command running in it?](https://unix.stackexchange.com/questions/4004/how-can-i-close-a-terminal-without-killing-the-command-running-in-it)
What I am doing right now is:
- `ssh username@server`
- `tmux`
- `python3 run_my_tests.py` -> this script does a bunch of `subprocess.check_output` calls on another script, which itself launches some Java processes.
- Tests run fine.
- I use Ctrl+B, D and I detach the session.
- When doing `tmux attach` I reobtain the tmux session **which is still running fine, no errors whatsoever**. I kept checking this for minutes and the tests run fine.
- I close the ssh session
After this, if I log in to the server via SSH, I **am** able to reattach to the running `tmux` session; *however*, what I see is something like:
Traceback (most recent call last):
File "run_my_examples.py", line 70, in
File "run_my_examples.py", line 62, in run_cmd_aggr
File "run_my_examples.py", line 41, in run_cmd
File "/usr/lib64/python3.4/subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib64/python3.4/subprocess.py", line 858, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.4/subprocess.py", line 1456, in _execute_child
raise child_exception_type(errno_num, err_msg)
PermissionError: [Errno 13] Permission denied
I.e. the process that was spawning my running tests, *right after the end of the SSH session*, was completely unable to spawn other subprocesses. I have `chmod`ed the permissions of all files involved and nothing changes.
I believe the servers use Kerberos for login/permissions, the server is Scientific Linux 7.2.
Could it be possible that the permissions of spawning new processes get removed when I log off from the ssh sessions? Is there something I can do about it? I have to launch *several* tests, with no idea how much time or space they will take...
----
- The version of `systemd` is 219
- The file system is AFS; using `fs listacl` I can confirm that I do have permissions over the directories/files that are used by the script.
Bakuriu (817 rep) • Aug 5, 2016, 01:56 PM • Last activity: Dec 20, 2018, 04:17 AM

0 votes • 1 answer • 46 views
How can I set the access right to 'l' for every directory in a path (in an Andrew File System)?
I saved a Java program (demo.java) on a university's Linux server inside a directory called "Codes". The Codes directory itself is in the Desktop folder, so the path of the Java program is Home/Bob/Desktop/Codes/demo.java
Now, in order for the university to access the demo.java file, I'm apparently supposed to have the 'l' access right set on every directory from my home directory all the way down to the directory where my program/code resides (which is Codes). So the Home directory, the Desktop directory and the Codes directory need to have the 'l' access right.
Now unfortunately, I haven't found a lot of online tutorials for AFS, so I'm kinda in the dark here.
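From what I have gathered, AFS directory rights are granted with fs setacl; a sketch of what I think is needed, where the paths and the system:anyuser grantee are my guesses (the university may want a specific group instead):
# Grant lookup ('l') on every directory along the path; 'rl' on the
# final directory so the file itself can also be read (my assumption).
fs setacl -dir ~ -acl system:anyuser l
fs setacl -dir ~/Desktop -acl system:anyuser l
fs setacl -dir ~/Desktop/Codes -acl system:anyuser rl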
User95 (153 rep) • Oct 26, 2018, 09:32 AM • Last activity: Oct 26, 2018, 10:01 AM

2 votes • 2 answers • 1247 views
Screen loses permissions after ssh disconnection
I have a long-running bash script that I run on a remote host with `screen`, so I can log off SSH. When I come back after some time (after logging off), the screen terminal no longer has permissions to access my files and folders.
What's causing this, and is there a way to avoid it?
OS: Scientific Linux CERN SLC release 6.9 (Carbon), using the Andrew File System. I'm using the private directory in AFS, if that makes a difference.
**Edit:** The screen still has access to my public directory and other public files after disconnecting. So something about AFS is messing it up.
Miatrix (121 rep) • Jul 3, 2017, 04:14 PM • Last activity: May 15, 2018, 03:21 PM

3 votes • 1 answer • 1776 views
Where are AFS tokens stored, and how do I get them into running Screen session
I have the following situation: I have a running GNU screen session where I can't access AFS anymore; my token has expired. I can, however, access it from a new shell. The difference to this question is that I don't have a Kerberos ticket (well, not for the realm aklog is looking for), so I can't call aklog. I also can't get such a ticket. I have no idea how AFS is set up, but it works.
Now, Kerberos tickets are "stored" in `/tmp/krb5cc*`, and pointed to by a variable called `KRB5CCNAME`. If I have this problem with Kerberos and screen/tmux, I can either do `kinit`, or transplant the newer ticket to the old shell by setting `KRB5CCNAME`.
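For the Kerberos half, the transplant I mean looks like this (a sketch; the cache file name is an example, the real one is whatever a fresh login created under /tmp):
# In the old screen shell: adopt the newer credential cache, then
# aklog would normally mint a fresh AFS token from it.
export KRB5CCNAME=/tmp/krb5cc_1000
aklog   # fails in my case, since I have no ticket for the right realm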
I wonder how AFS credentials are pointed to, and if I can similarly transplant them from the outer shell (the one I ssh into, which has AFS access) to the inner shell (the one I get after `screen -r`, which has no more AFS access). There seems to be no relevant environment variable changed between both shells. `strace tokens` tells me that it just accesses `/proc/fs/openafs/afs_ioctl`, which suggests it is tied to the process and using a special kernel feature, which would make it pretty hard. Any ideas how I can get AFS access back in my shell without closing it and opening a new one?
jdm (599 rep) • Jul 31, 2015, 06:21 AM • Last activity: May 7, 2018, 05:55 PM

0 votes • 0 answers • 126 views
AFS not working after Upgrade to Debian 9
We are using Debian 8 and 9 with Gnome 3 / KDE 5, Kerberos authentication and an AFS file system.
Debian 8 works fine, but after upgrading some PCs to Debian 9 we have the following problem:
Gnome completely ignores the AFS tokens; this results in many errors when the user home is an AFS directory (but login to Gnome works). KDE cannot write some config after login, but starts. KDE programs (Dolphin and Konsole) have AFS rights; Gnome ones do not.
The KRB- and AFS-Tokens look fine:
Valid starting Expires Service principal
28.11.2017 10:10:10 29.11.2017 11:10:10 krbtgt/xyz.com@XYZ.COM renew until 28.12.2017 10:10:10
28.11.2017 10:10:10 29.11.2017 11:10:10 afs/xyz.com@XYZ.COM renew until 28.12.2017 10:10:10
Special case: gnome-terminal:
user@pc: klist #see foregoing results
user@pc: ls /afs/xyz.com/home/u/user
Permission denied
user@pc: aklog
user@pc: klist #no difference
user@pc: ls /afs/xyz.com/home/u/user #works!
[AFS Home Contents]
Other applications still do not have permission to view AFS directories.
I am not really sure where to search for the problem. Maybe something related to PAM? Any other ideas?
rbs (103 rep) • Nov 28, 2017, 10:43 AM • Last activity: Nov 28, 2017, 11:03 AM

3 votes • 0 answers • 192 views
Is it crazy to consider keeping my home directory on OpenAFS?
I am a sysadmin by trade, and I do what I do at work at home as well, for fun. I have a Gentoo Linux laptop, Raspberry Pis running Raspbian, a Gentoo server, ARM devices running Debian, and various Android devices. I'm always wrestling with and worrying about backing up and synchronizing my own home directory among disparate devices, while keeping it reasonably safe from prying eyes.
I had experience with Andrew in the '80s at CMU, and it was like magic. I would consider NFS if it had some mechanism to handle disconnected access and didn't presume a constant network connection.
Would OpenAFS be something that admins out there might consider to handle synchronizing the data of "lightly connected" hosts of the modern user? I've also considered things like Lustre. I am looking for something that requires moderate maintenance after initial setup. It seems like OpenAFS might also be interesting in that I could divide my home directory into administratively different subdirectories, which might be distributed to different devices in different measure. (E.g. a ~/mobile for files which must reside on my phone and tablets, ~/pi for Raspberry Pi files, etc.)
Is OpenAFS a dead end, or am I on a good track? :)
Jesse Adelman (246 rep) • Jul 9, 2017, 06:43 PM

8 votes • 1 answer • 5516 views
unpacking tarball with hard links on a file system that doesn't support hard links
I got a tarball (let's say `t.tar.gz`) that contains the following files:
./a/a.txt
./b/b.txt
where `./b/b.txt` is a hard link to `./a/a.txt`.
I want to unpack the tarball on a network file system (AFS) that only supports hard links in the same directory (see here). Therefore, just unpacking it via `tar -xzf t.tar.gz` raises an error that the hard link `./b/b.txt` cannot be created.
So far, my solution to the problem has been to unpack `t.tar.gz` on a file system that supports ordinary hard links, then pack it again with the option `--hard-dereference`, as the GNU tar manual proposes, and lastly unpack that new tarball into AFS.
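Spelled out, that workaround looks like this (a sketch; /tmp/unpack and the final AFS path are placeholders):
# 1) unpack where hard links work, 2) repack with hard links turned
# into regular copies, 3) unpack the dereferenced tarball into AFS.
mkdir /tmp/unpack && tar -xzf t.tar.gz -C /tmp/unpack
tar -czf /tmp/t-deref.tar.gz --hard-dereference -C /tmp/unpack .
tar -xzf /tmp/t-deref.tar.gz -C /afs/example.org/some/target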
As this is unsatisfactory for me, I'm asking if there is an easier way to get the content of the archive unpacked directly to its final destination. Is there an equivalent option to `--hard-dereference` for unpacking instead of archiving?
Denis (191 rep) • Feb 22, 2016, 01:08 PM • Last activity: Mar 1, 2016, 12:35 PM

2 votes • 1 answer • 79 views
log file for storing outputs on stdout and stderr isn't created
I am running a long-running script on a Scientific Linux server with Kerberos and the Andrew File System, by
myscript.sh >log 2>&1 &
Upon starting the command, I didn't see a file called `log` in the current directory, but saw a file called `.__afs063D`, which is logging the outputs on stdout and stderr.
The script is still running. Why is `log` not created? When will it be?
Tim (106430 rep) • Jan 11, 2016, 12:24 AM • Last activity: Jan 11, 2016, 02:32 AM

0 votes • 1 answer • 855 views
How to keep a long-running process run with Kerberos and Andrew file system?
I am running a bash script on a Scientific Linux server. The script has a loop for copying files and running some programs. Running the script will probably take a day or two. But I can never finish running the script, because of the following error:
> cp: cannot open `somefile' for reading: Permission denied
I suspect that the cause is due to Kerberos and/or Andrew file system on the server.
How can I make my long-running script run well to finish?
Thanks.
Tim (106430 rep) • Jan 10, 2016, 11:56 PM • Last activity: Jan 11, 2016, 02:32 AM

1 vote • 0 answers • 84 views
Get status of current AFS transfers and cache?
Is there a way to monitor OpenAFS transfers and cache status? I am using the OpenAFS client, and sometimes accessing a certain file (that has not been cached) takes a while. I get impatient because I don't know whether it will be ready soon, or if my connection is down and it will time out later, or if my program is trying to access a file which it needn't and I can just cancel it, etc. I would like to be able to see things like:
- Current transfer rate: X kb/s
- Currently downloading files: A, B, C
- Cache X MB / Y MB full
- No connection to host X while trying to download file Y
I am looking for a builtin command that I might have missed, or an API that I can use to write something myself (and put an indicator in my tmux or on my desktop).
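The closest builtins I have found only cover fragments of this; a sketch of what they report (both are standard fs subcommands, though neither shows per-file transfers):
fs getcacheparms   # 1K cache blocks in use vs. available
fs checkservers    # file servers the client currently believes are down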
jdm (599 rep) • Aug 13, 2015, 09:33 AM

0 votes • 2 answers • 121 views
Is the FreeBSD login program compatible with AFS?
Is FreeBSD's `login` program compatible with AFS, so you can just log in normally instead of using the `kinit` method (this is possible according to the [OpenAFS manual](http://docs.openafs.org/UserGuide/index.html#HDRWQ20.html#Header_33))?
rake (241 rep) • Jan 1, 2014, 11:42 PM • Last activity: Jan 26, 2015, 03:09 PM

2 votes • 1 answer • 1698 views
Can I get "Permission denied" when running out of space?
I am running smrtanalysis software, which is very demanding in terms of CPU, RAM and storage. After running for a couple of hours, I got the following error message:
IOError: [Errno 13] Permission denied: '/afs/bx.psu.edu/user/s/szr/smrtanalysis/tmpdir/tmpqNMh9s'
However, when I check the files, it seems that the permissions are set OK (I am starting the program as biomonika):
[biomonika@brubeck tmpdir]$ ls -l tmpqNMh9s
-rw------- 1 biomonika biomonika 639 Feb 6 01:13 tmpqNMh9s
Actually, tmpdir is full of similarly named folders and files created at very similar times. At the time of the error, there were only 131 GB of free space left.
I am wondering if "Permission denied" can mean something other than actually incorrectly set permissions, e.g. running out of space. However, the file system is AFS, which I don't have experience with; I am only experienced with the use of chmod, hence the question.
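For reference, one AFS-specific check that seems relevant here (a sketch using the path from the error above): AFS enforces per-volume quotas, so a volume can be full even while the partition still has free space:
# Shows usage vs. quota for the volume containing the given path.
fs listquota /afs/bx.psu.edu/user/s/szr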
Perlnika (657 rep) • Feb 6, 2014, 04:13 PM • Last activity: May 30, 2014, 01:04 AM

2 votes • 2 answers • 1602 views
How can I stop dolphin from reading my entire home directory tree in order to make it usable on AFS?
At work, I would like to use KDE's `dolphin` as a file manager. However, our home directories reside on an AFS share. When starting dolphin, it becomes unresponsive for dozens of minutes.
Stracing it reveals that it tries to open all the nodes in our AFS tree:
openat(AT_FDCWD, "/afs/somewhereElse.tld", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC
I need to stop dolphin from doing that; this behaviour makes the program completely unusable on AFS trees. Is there some setting that controls this?
----------
If you have never worked with AFS before, for the sake of this question, assume that there is a root directory that has subtrees from different universities, research institutes etc. mounted below it. The data in those subtrees really reside at the remote sites, so access is slow and resource-intensive.
jstarek (1742 rep) • Jul 5, 2012, 09:45 AM • Last activity: May 30, 2014, 12:44 AM

6 votes • 1 answer • 6303 views
Keep kerberos ticket across sudo invocation
On a regular Linux machine, when I use `sudo -s` as a normal user, I become root but `HOME` still points to `~user`, so every admin has his own environment etc. (this is without `env_reset` or `always_set_home` set).
On a system where the home directories live on an AFS file system, this also works, if the environment variable `KRB5CCNAME` is preserved, as root can read this file in `/tmp`.
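For completeness, that preservation can be sketched as a sudoers fragment (assuming stock sudo environment handling; whether it goes in /etc/sudoers or a file under /etc/sudoers.d is a local choice):
# Keep the caller's Kerberos cache variable across sudo so root can
# locate (and, being root, read) the cache file in /tmp.
Defaults env_keep += "KRB5CCNAME"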
But if I use `sudo` on such a system to change to a local non-root user (e.g. the dedicated user for a certain service), the new user cannot access the Kerberos cache (as it is owned by the old user and has mode 600). But if I run `unset KRB5CCNAME && kinit user && aklog && exec bash`, I have access to my environment again.
So the question is: is there a clean way to make sudo take the Kerberos tickets that I had before and add them to the Kerberos ticket cache of the new user?
Joachim Breitner (1407 rep) • Aug 10, 2012, 11:55 AM • Last activity: Mar 17, 2013, 07:35 AM
Showing page 1 of 20 total questions