Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3 votes, 1 answer, 1025 views
OpenIndiana: getting "excluded by installed incorporation" errors trying to install packages on a fresh install of hipster-20211031
I am trying out OpenIndiana for the first time. I used the live image `OI-hipster-gui-20211031.iso` (which seems to be the latest version) to install into a qemu-kvm virtual machine.
The very first thing I wanted to do was set up a development environment. According to [this page from the OpenIndiana documentation](http://docs.openindiana.org/contrib/git/#installing-git), "installing Git on OpenIndiana Hipster is simple." However, I found it to be not so simple:
ppelleti@illumos:~$ sudo pkg install git
Creating Plan (Solver setup): |
pkg install: No matching version of developer/versioning/git can be installed:
Reject: pkg://openindiana.org/developer/versioning/git@2.35.1-2022.0.0.0
to
pkg://openindiana.org/developer/versioning/git@2.36.1-2022.0.0.0
Reason: This version is excluded by installed incorporation consolidation/userland/userland-incorporation@0.5.11-2020.0.1.14595
ppelleti@illumos:~$
I tried installing a different package, and got the same error message:
ppelleti@illumos:~$ sudo pkg install build-essential
Creating Plan (Solver setup): |
pkg install: No matching version of metapackages/build-essential can be installed:
Reject: pkg://openindiana.org/metapackages/build-essential@1.0-2022.0.0.1
Reason: This version is excluded by installed incorporation consolidation/userland/userland-incorporation@0.5.11-2020.0.1.14595
ppelleti@illumos:~$
I searched the web for "excluded by installed incorporation", and it seems to be a common problem, but it doesn't seem to have a clear solution, especially not one that is applicable to my situation.
For example, [this question](https://unix.stackexchange.com/questions/486281/solaris-11-3-unable-to-install-system-header) seems to indicate it is a problem with the package publisher. Here is my package publisher:
ppelleti@illumos:~$ pkg publisher
PUBLISHER TYPE STATUS P LOCATION
openindiana.org origin online F http://pkg.openindiana.org/hipster/
ppelleti@illumos:~$
This isn't something I set up. This publisher is just what the fresh install of hipster-20211031 came with, right out of the box.
The question I linked to seemed to involve the publisher being out of date because of a Solaris support contract expiring, but that doesn't seem to be relevant to my situation, because OpenIndiana is open source, so there shouldn't be any licensing issue.
Is there an easy fix for this? Coming from Linux, I was not expecting this level of difficulty. `pkg` seems to be far more arcane than `apt-get` is.
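For what it's worth, here is a hedged diagnostic sketch (not a confirmed fix): this error usually means the pinned userland incorporation in the installed image is older than the packages in the catalog, so refreshing the catalog and updating the whole image before installing is one thing worth trying.

    # Sketch only: refresh the catalog, check which incorporation is pinned,
    # then try a full image update before installing git again.
    sudo pkg refresh --full
    pkg list -v userland-incorporation
    sudo pkg update -v
    sudo pkg install git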
user31708
(457 rep)
May 31, 2022, 08:19 AM
• Last activity: Apr 25, 2023, 01:12 AM
14 votes, 2 answers, 13860 views
What are the pros of using OmniOS over SmartOS or OpenIndiana?
I could not find a *good* comparison between OmniOS and SmartOS (or OpenIndiana). So what are the pros of using OmniOS over SmartOS?
(As some people are nitpickers, I had to phrase the question like this. But I would like to know the pros/cons...)
jirib
(1188 rep)
Oct 16, 2013, 11:10 AM
• Last activity: Feb 15, 2023, 09:51 AM
3 votes, 0 answers, 96 views
Illumos/Solaris equivalent of FreeBSD devd (daemon to run userland programs when kernel events happen)?
I have been searching the illumos man pages for an equivalent of FreeBSD's `devd` (a way to run programs based upon kernel events), but I could not find any.
Is there an Illumos/Solaris equivalent of FreeBSD's `devd` daemon, providing a way to have userland programs run when certain kernel events happen?
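One direction that might be worth checking (an assumption on my part, not a verified answer): illumos ships syseventd(1M) and syseventadm(1M), which run configured commands when kernel sysevents are posted. The sketch below uses a hypothetical event class/subclass and script path; the exact values should be verified against the syseventadm(1M) man page.

    # Hedged sketch: register a handler run by syseventd when a matching
    # sysevent is posted. EC_dev_add/disk and the script path are assumptions.
    syseventadm add -c EC_dev_add -s disk /opt/local/bin/on-disk-add.sh
    syseventadm restart   # reload the sysevent registry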
gsl
(298 rep)
Sep 27, 2020, 09:04 AM
• Last activity: Sep 27, 2020, 09:29 AM
0 votes, 0 answers, 648 views
How to create a union mount (bind two or more directories to one mountpoint) on illumos / solaris?
I have been using union mounts, such as `unionfs` on FreeBSD or `mergerfs` on Linux, and now I would need to do the same on illumos (SmartOS).
Reading [illumos](https://illumos.org/man/) or [SmartOS](https://smartos.org/man/) man pages is not helpful. Same with searching them [via external search engine](https://www.google.com/search?hl=en&q=site%3Ahttps%3A%2F%2Fillumos.org%2Fman%2F) .
If possible, how can one create a union mount (bind two or more directories to one mountpoint) on illumos / solaris?
If not possible, is there anything close to same effect?
## Why
In my humble opinion, union-like filesystems are used in relatively complex scenarios. Describing some here would be out of scope for this question. I would like to know if this is possible _a-priori_ in any case, as I would not want an answer solving only a subset of all possible `unionfs` use-cases.
As an example: I need to pick and choose _from the subdirectories_ of read-only zfs datasets, mounted DVD and readonly USB drives, overlaid with read-write datasets. The resulting mount point needs to be available as a cifs share. It would need to be done programmatically after boot, depending on various system factors.
For the interested, see referenced articles with some use cases. There are many such sysadmin scenarios.
---
https://whattheserver.com/what-are-unionfs-mounts-and-how-to-use-them/
https://medium.com/@knoldus/unionfs-a-file-system-of-a-container-2136cd11a779
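Not a union mount, but relevant to the "anything close to the same effect" part: the closest native facility I know of on illumos is a loopback (lofs) mount, which can re-bind a single directory (read-only if desired) under another path. It cannot merge several sources into one view, so this is only a partial sketch; the paths below are placeholders.

    # Sketch: lofs binds one directory elsewhere (optionally read-only),
    # but it does not merge multiple sources the way unionfs/mergerfs do.
    mount -F lofs -o ro /zones/ro-dataset/subdir /export/merged/subdir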
gsl
(298 rep)
Jul 11, 2020, 01:45 PM
• Last activity: Jul 12, 2020, 02:51 PM
2 votes, 0 answers, 319 views
Help: error 7 (RPC: Authentication error) when trying to mount FreeBSD 12.1-RELEASE NFS export on OpenIndiana Hipster GUI client
I have a FreeBSD 12.1-RELEASE machine, hostname `DellOptiPlex390`. I would like to export the folders `/usr/home/jdrch/KeePass` & `/usr/home/jdrch/Sync` and mount them via NFS on an OpenIndiana Hipster GUI machine with IP address 192.168.0.71. My username, jdrch, is the same on both machines. I therefore have the following:
My `/etc/rc.conf`:
hostname="DellOptiPlex390"
zfs_enable="YES"
kld_list="sysctlinfo"
ifconfig_re0="DHCP"
linux_enable="YES"
dbus_enable="YES"
dsbdriverd_enable="YES"
sddm_enable="YES"
sshd_enable="YES"
nfs_client_enable="YES"
webmin_enable="YES"
smartd_enable="YES"
ntpd_enable=YES
ntpd_sync_on_start=YES
rpcbind_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
mountd_flags="-r"
mountd_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
My `/etc/exports`:
# Export /usr/home as read-write to OpenIndiana
/usr/home -alldirs -rw -mapall=MyFreeBSDUsername 192.168.0.71
I'm exporting `/usr/home` because it's a ZFS filesystem and the [exports(5) manpage](https://www.freebsd.org/cgi/man.cgi?query=exports&sektion=5&manpath=freebsd-release-ports) seems to imply that's necessary. FTA:
> All ZFS file systems in the subtree below the NFSv4 tree root must be exported
After any update to either of those files I restart both nfsd & mountd on the FreeBSD server.
Unfortunately, I haven't had any luck getting the export to mount.
Trying to mount one of the export's subfolders fails:
# mount DellOptiPlex390:/usr/home/jdrch/KeePass /export/home/jdrch/KeePass
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
nfs mount: mount: /export/home/jdrch/KeePass: Permission denied
Trying to mount the exported filesystem also fails:
# mount DellOptiPlex390:/usr/home/ /export/home/jdrch/KeePass
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
nfs mount: mount: /export/home/jdrch/KeePass: Permission denied
Using `sec=sys` in the mount command doesn't work, either:
# mount -F nfs -o vers=4,sec=sys DellOptiPlex390:/usr/home/jdrch/KeePass /export/home/jdrch/KeePass
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
NFS compound failed for server DellOptiPlex390: error 7 (RPC: Authentication error)
nfs mount: mount: /export/home/jdrch/KeePass: Permission denied
Substituting the FreeBSD server's IP address for its hostname has no effect.
What am I doing wrong?
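One detail worth double-checking (an assumption, not a confirmed diagnosis): FreeBSD's exports(5) expects a separate `V4:` line naming the NFSv4 tree root, and without it NFSv4 mounts can be refused at the RPC/authentication stage. Falling back to `vers=3` from the client is another quick cross-check. The exact security flavors and host list below are assumptions to adapt.

    # Hedged sketch -- /etc/exports on the FreeBSD server:
    #   V4: /usr/home -sec=sys 192.168.0.71
    #   /usr/home -alldirs -mapall=MyFreeBSDUsername 192.168.0.71
    # then restart the NFS services:
    service nfsd restart && service mountd restart
    # and, from the OpenIndiana client, try an NFSv3 mount as a cross-check:
    mount -F nfs -o vers=3 DellOptiPlex390:/usr/home/jdrch/KeePass /export/home/jdrch/KeePass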
jdrch
(53 rep)
Jul 6, 2020, 02:18 AM
5 votes, 0 answers, 128 views
What dictates smf service maintenance mode?
I have SmartOS machines running a custom application as an SMF service (a Circonus monitoring agent). On some of these machines the agent errors when starting and gets stuck in a restart loop, eventually leading to the machine panicking. Every other SMF service I have worked with goes into "maintenance" mode after restarting a few times, but this particular service never seems to. I don't see any way to tweak these settings in the SMF manifest and I'm not finding much information about it in the Oracle docs. Does anyone know if this is a configurable setting, and if so, where can I find it?
The SMF manifest defines the following restart method:
jesse_b
(41447 rep)
May 21, 2020, 01:20 PM
• Last activity: Jun 5, 2020, 01:36 PM
2 votes, 1 answer, 237 views
OpenIndiana Hipster GUI (Illumos): How do I ensure my installed pkgsrc packages are always the latest?
I run OpenIndiana Hipster GUI and would like to ensure that my pkgsrc packages are always the absolute latest available in the repos.
What is required to do this (it's not particularly clear to me from the [setup instructions](https://pkgsrc.joyent.com/install-on-illumos/))? Is simply running
# pkgin -y full-upgrade
sufficient, or do I *also* have to upgrade from the previous quarterly pkgsrc release set every time a new quarterly release becomes available?
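A hedged sketch of what I believe the usual routine is (please verify against the pkgsrc documentation): refresh the pkgin repository database and then run the full upgrade. Moving to a newer quarterly branch is a separate step that involves re-pointing the repository URL, so `full-upgrade` alone tracks only the branch you are already on.

    # Sketch: refresh the pkgin database, then upgrade everything installed.
    pkgin update
    pkgin -y full-upgrade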
jdrch
(53 rep)
Apr 7, 2020, 09:13 PM
• Last activity: Apr 8, 2020, 05:56 AM
0 votes, 0 answers, 1453 views
/dev/tun vs /dev/net/tun
I am mounting the /dev/tun device of an illumos installation (actually OmniOS, but I don't think it makes a difference) inside an lx-brand zone (using add device, set match=/dev/tun, end). The problem is that the CentOS inside the zone expects the tun device to be at /dev/net/tun, not /dev/tun, so OpenVPN is not working. It complains that /dev/net/tun does not exist, which I guess makes sense.
What is the difference between having the tun device in /dev versus /dev/net? More importantly, how can I make this work? I have tried symlinking /dev/tun to /dev/net/tun both in illumos and in CentOS, but it's not letting me.
Any help is appreciated.
EDIT: Thanks to the comments I am now able to trick the system into believing that /dev/net/tun exists; however, even when trying `tunctl -t tun0 -f /dev/tun` I get `TUNSETIFF: Inappropriate ioctl for device`. The full strace is below:
execve("/sbin/tunctl", ["tunctl", "-t", "tun0", "-f", "/dev/tun"], [/* 20 vars */]) = 0
brk(NULL) = 0x6020e0
uname({sysname="Linux", nodename="centos-zerotier", ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fffef240000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26396, ...}) = 0
mmap(NULL, 26396, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fffef040000
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\35\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2127336, ...}) = 0
mmap(NULL, 3940800, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fffeec00000
mprotect(0x7fffeedb8000, 2097152, PROT_NONE) = 0
mmap(0x7fffeefb8000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b8000) = 0x7fffeefb8000
mmap(0x7fffeefbe000, 16832, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fffeefbe000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fffef030000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fffef020000
arch_prctl(ARCH_SET_FS, 0x7fffef020740) = 0
mprotect(0x7fffeefb8000, 16384, PROT_READ) = 0
mprotect(0x601000, 4096, PROT_READ) = 0
mprotect(0x7fffef421000, 4096, PROT_READ) = 0
munmap(0x7fffef040000, 26396) = 0
open("/dev/tun", O_RDWR) = 3
stat("/etc/sysconfig/64bit_strstr_via_64bit_strstr_sse2_unaligned", 0x7fffffefefb0) = -1 ENOENT (No such file or directory)
ioctl(3, TUNSETIFF, 0x7fffffeff460) = -1 ENOTTY (Inappropriate ioctl for device)
dup(2) = 4
fcntl(4, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)
brk(NULL) = 0x6020e0
brk(0x6230e0) = 0x6230e0
brk(NULL) = 0x6230e0
brk(0x624000) = 0x624000
fstat(4, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fffef010000
write(4, "TUNSETIFF: Inappropriate ioctl f"..., 42TUNSETIFF: Inappropriate ioctl for device
) = 42
close(4) = 0
munmap(0x7fffef010000, 8192) = 0
brk(NULL) = 0x624000
brk(NULL) = 0x624000
brk(0x623000) = 0x623000
brk(NULL) = 0x623000
exit_group(1) = ?
+++ exited with 1 +++
Francesco Carzaniga
(101 rep)
Mar 20, 2020, 07:19 PM
• Last activity: Mar 20, 2020, 09:08 PM
2 votes, 1 answer, 757 views
How To Read Variable Value Using mdb?
Let's say I set a parameter with the following command, how could I read it back later on?
mdb -kwe "spa_load_verify_metadata/W 0"
I am trying to read the man page, but I'm only in this OS temporarily and don't understand what it's talking about.
The search modifiers are:
l Search for the specified 2-byte value.
L Search for the specified 4-byte value.
M Search for the specified 8-byte value.
I would normally expect that value to be in `/sys/modules/zfs/parameters/spa_load_verify_metadata`, where I could just `cat` the value, but `/sys` doesn't even exist.
I tried finding the variable using `find`, but it wasn't in the filesystem. I don't understand the concept of where these values are...
I'm actually just trying to read the values of other parameters that I know to exist.
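For completeness, a sketch of how one might read the value back: the `/W` in the original command writes a 4-byte value, while a display format such as `D` or `X` reads it.

    # Sketch: print the current value of the kernel variable.
    # D = 4-byte signed decimal, X = 4-byte hexadecimal (see mdb's formats).
    echo "spa_load_verify_metadata/D" | mdb -k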
Louis Waweru
(195 rep)
Feb 29, 2020, 01:35 AM
• Last activity: Mar 1, 2020, 01:07 AM
0 votes, 4 answers, 213 views
Print line if following line is missing
This is related to "awk print 2 lines back if match", but since my command has buffering issues that I'm unable to resolve, I think a better approach would be to completely ignore stderr and look for output missing certain lines.
So my output will be:
Gathering drive descriptors ...
Gathering data for drive 0 ...
Drive Model: DataTraveler 2.0
Gathering data for drive 1 ...
Drive name: id1,sd@n5000cca17096
Drive Model: HUH721010AL4204
Drive Speed: 7200 RPMs
Drive Temp: 41 C
Gathering data for drive 2 ...
Drive name: id1,sd@n5000cca24156
Drive Model: HUH721010AL4204
Drive Speed: 7200 RPMs
Drive Temp: 41 C
Gathering data for drive 3 ...
Drive name: id1,sd@n5000cca8749
Drive Model: HUH721010AL4204
Gathering data for drive 4 ...
Drive name: id1,sd@n5000cca19183
Drive Model: HUH721010AL4204
Drive Speed: 7200 RPMs
Drive Temp: 41 C
Gathering data for drive 5 ...
Drive name: id1,sd@n5000cca4607
Drive Model: HUSMH8010BSS204
Gathering data for drive 6 ...
Drive name: id1,sd@n5000cca10152
Drive Model: HUH721010AL4204
Drive Speed: 7200 RPMs
Drive Temp: 41 C
And I would like to output the `Drive name` for any drive missing the `Drive Speed` and `Drive Temp` lines.
Output should be:
Drive name: id1,sd@n5000cca8749
Drive name: id1,sd@n5000cca4607
This is beyond my capabilities, but I'm sure `awk` can do it. I'm not set on using `awk`, though; anything that accomplishes the task will work (I don't have GNU tools). Thanks!
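For reference, a sketch in plain (POSIX) awk, under the assumption that each drive's block starts with a "Gathering data for drive" header: remember the Drive name line and print it when the next header (or end of input) arrives without a Drive Speed line having been seen. `drives.txt` is just a placeholder for however the output is captured.

    awk '
      /Gathering data for drive/ {
        # New drive block: flush the previous one if it lacked a speed line.
        if (name != "" && !speed) print name
        name = ""; speed = 0
        next
      }
      /Drive name:/  { name = $0 }
      /Drive Speed:/ { speed = 1 }
      END { if (name != "" && !speed) print name }
    ' drives.txt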
jesse_b
(41447 rep)
Jan 17, 2020, 05:32 PM
• Last activity: Jan 19, 2020, 05:27 PM
1 vote, 0 answers, 68 views
SmartOS issues after a minor incident
Last week I had an issue where SmartOS suddenly wouldn't boot; it just hung there after the "mDNSPlatformRawTime went backwards by 3684530 ticks..." warning. I think this warning always appears but I hadn't paid attention; it's just that now it would hang at this point. Since my zpool didn't contain that much important data, I just unplugged this disk and used another one to create a fresh pool, as I thought maybe the disk was the issue (it was on the cheaper end).
Long story short, because it's a dual-boot machine, I noticed in Windows 7 that out of 6 sticks of RAM (12 GB) it saw only 2 GB, one stick. Today I found the time, cleaned out the dust (plus there was some hair), and there we go, 12 GB back, so I definitely think this was the issue.
But when I get back to SmartOS, even on the new zpool/disk, when I try to install the latest bootstrap it says that the root (/) filesystem is full.
I have not yet mastered illumos/SmartOS, even though it's my plan to delve as far as possible, so I'm not aware of how metadata on a specific zpool works, or what part of the USB boot stick itself is changed after a kernel panic. I remember this message from the previous pool:
https://illumos.org/msg/FMD-8000-2K
Now, is it possible that on the new zpool I still have to locate the disabled module and enable it? (Assuming all this was related to the RAM incident when only 2 GB was connected.)
UPDATE: this is what worries me: why is ramdisk:a so small and 100% full?
# df -h
Filesystem Size Used Available Capacity Mounted on
/devices/ramdisk:a 289M 289M 66K 100% /
/devices 0 0 0 0% /devices
/dev 0 0 0 0% /dev
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 4.76G 980K 4.76G 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
bootfs 0 0 0 0% /system/boot
sharefs 0 0 0 0% /etc/dfs/sharetab
/devices/pseudo/lofi@1:disk
432M 360M 72.4M 84% /usr
/usr/lib/libc/libc_hwcap1.so.1
432M 360M 72.4M 84% /lib/libc.so.1
lidagon
(71 rep)
Jan 12, 2020, 07:36 PM
• Last activity: Jan 12, 2020, 10:12 PM
4 votes, 1 answer, 484 views
ZFS silent corruption
I am trying to find answers to some questions on how ZFS works:
* Does it detect silent corruption via checksums as soon as the data is changed (and differs from its checksum), automatically in a way (in which case, with RAIDZ1, it would repair by fetching the data from redundancy), or does this only work when the corrupted file is accessed (during a read, and of course during a scrub)?
* I am now confused about traditional hardware RAID: can it detect silent corruption with the same certainty as ZFS, and locate the corruption as well, and if so, can it repair it as ZFS does?
I just need a more precise explanation of how this works.
Thanks.
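For what it's worth, my understanding (hedged) is that checksums are verified when blocks are read, whether by an application or by a scrub, rather than continuously in the background, so a periodic scrub plus `zpool status` is the usual way to surface and repair latent corruption:

    # Sketch: read and verify every block in the pool, then report results.
    zpool scrub tank          # "tank" is a placeholder pool name
    zpool status -v tank      # shows scrub progress, repaired data, affected files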
lidagon
(71 rep)
Aug 29, 2019, 10:36 AM
• Last activity: Aug 29, 2019, 01:47 PM
-1 votes, 1 answer, 30 views
Is it possible to find boundaries of a Solaris2 partition after MBR corruption?
I had a disk with 4 primary partitions on it with an MBR (/dev/sda):
- the first had a NTFS filesystem, with data (/dev/sda1);
- the second was a Solaris2 partition, with openindiana (/dev/sda2 on GNU/Linux gparted);
- the third was a swap partition (/dev/sda3);
- the fourth a Linux partition with ext4 (/dev/sda4).
I made a mess because I "dd-ed" over the first 300MB of the disk with data, corrupting the MBR (I had no backup of it) and the first partition!
I was running on a GNU/Linux OS and I could save part of the data in the NTFS partition (via a symbolic link that still worked), and I could partially rebuild the MBR and the partition table (via /sys/ information and fdisk). But I could not read the Solaris2 partition under GNU/Linux (in /sys/... or fdisk); I only have a big unallocated space between the NTFS partition (/dev/sda1) and the swap (/dev/sda3). I tried to create a partition using all the free blocks in between, but I cannot boot openindiana anymore, because I see it was a really poor attempt!
Is there a way to find where the Solaris2 partition starts and ends, so I can try to rebuild a correct partition table? I also tried booting the openindiana LiveCD to use a Solaris fdisk, but I had no luck at all.
EnricoTh
(1 rep)
May 17, 2019, 10:13 AM
• Last activity: May 17, 2019, 11:25 AM
3 votes, 1 answer, 532 views
How to force OmniOS (illumos) "format" to properly recognize disk geometry?
I have a FreeBSD-initialised 8-disk vdev, all 10TB WD RED, now on a server with OmniOS r151026, connected via LSI 3008 HBA.
At POST, the card shows all disks with the right geometry (I can post a picture if necessary).
But `format` reports a wrong (~2 TB) geometry:
format
Searching for disks...done
c0t5000CCA26BD0CAFAd0: configured with capacity of 2047.71GB
c0t5000CCA26BD5AAC5d0: configured with capacity of 2047.71GB
c0t5000CCA26BD6B9CCd0: configured with capacity of 2047.71GB
c0t5000CCA26BD6C6D4d0: configured with capacity of 2047.71GB
c0t5000CCA26BD6E59Cd0: configured with capacity of 2047.71GB
c0t5000CCA26BD59F6Dd0: configured with capacity of 2047.71GB
c0t5000CCA26BD116ACd0: configured with capacity of 2047.71GB
c0t5000CCA26BD6960Ed0: configured with capacity of 2047.71GB
`format` should instead report something like (only the first drive listed):
AVAILABLE DISK SELECTIONS:
0. c0t5000CCA26BD0CAFAd0
/scsi_vhci/disk@g5000cca26bd0cafa
`diskinfo` correctly reports the size (showing only the first disk):
root@omniosce:~# diskinfo -p
TYPE DISK VID PID SIZE RMV SSD
SCSI c0t5000CCA26BD0CAFAd0 ATA WDC WD100EFAX-68 10000831348736 no no
How can I force OmniOS (illumos) `format` to properly recognize the disk geometry?
Thank you in advance.
Edit 2018-06-02: Added disk kind and expected result (thanks to @andrew-henle)
gsl
(298 rep)
Jun 1, 2018, 01:09 PM
• Last activity: Apr 18, 2019, 07:08 AM
4 votes, 2 answers, 1491 views
How to source correct startup scripts on interactive, non-login shell
I'm trying to set up a sane/usable environment in a barebones OpenSolaris-derivative (OmniOS, a distribution of Illumos/OpenIndiana). I have all the plumbing code I need in .profile, .inputrc, and .bashrc files ready to promote to system-wide use, but no system-wide scripts are being sourced for non-login shells. Bash attempts to load the user's .bashrc file on su, but $HOME (and any other environment variables) remains configured for the previous user.
Output from a direct (SSH) login:
login as: myuser
Using keyboard-interactive authentication.
Password:
/etc/profile run
myuser's .bashrc run
myuser's .profile run
myuser@Helios:~$ echo ~
/home/myuser
myuser@Helios:~$
Output switching user:
root@Helios:/etc# su myuser
bash: /root/.bashrc: Permission denied
bash-4.2$ id
uid=1001(myuser) gid=100(users) groups=100(users),27(sudo)
bash-4.2$ echo ~
/root
bash-4.2$
Note in particular the attempt to source root's .bashrc instead of myuser's .bashrc.
su (without additional arguments) has always worked seamlessly in Ubuntu, Fedora, etc. and I intend to replicate that experience, but what can I do when no system-wide scripts run and the user's scripts cannot be found? I'm inclined to blame OmniOS's version of bash and/or su for missing something, but what exactly is the correct behavior? Can I configure/access/script additional plumbing somewhere which addresses the failure to update $HOME and other envvars?
Further notes:
- there is no man bash in OmniOS (at least not using MANPATH=/opt/omni/share/man:/opt/mysql55/man:/opt/gcc-4.4.4/man:/usr/gnu/share/man:/usr/local/man:/usr/local/share/man:/usr/man:/usr/share/man)
- /etc/bashrc and /etc/bash.bashrc never get sourced (which is expected as this is apparently a distribution-specific convention, but Ubuntu does appear to be loading these without reference from .bashrc)
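One thing that may be worth trying (a sketch, not a claim about OmniOS internals): invoking su with a dash requests a login shell, which re-initializes $HOME and the rest of the environment for the target user, so the target user's own .profile/.bashrc are the ones sourced.

    # Sketch: "-" (or -l) makes su start a login shell for the target user,
    # resetting HOME, PATH, etc. instead of inheriting root's environment.
    su - myuser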
HonoredMule
(313 rep)
Jul 29, 2013, 08:09 AM
• Last activity: Apr 5, 2019, 10:59 PM
2 votes, 1 answer, 342 views
How to troubleshoot disk controller on Illumos based systems?
I am using OmniOS, which is based on illumos.
I have a ZFS pool of two SSDs that are mirrored; the pool, known as `data`, is reporting its `%b` as 100; below is `iostat -xn`:
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 8.0 0.0 61.5 8.7 4.5 1092.6 556.8 39 100 data
Unfortunately, there is not actually a lot of throughput going on; `iotop` reports about 23552 bytes a second.
I also ran `iostat -E` and it reported quite a few `Transport Errors`; we changed the port and they went away.
I figured there might be an issue with the drives; SMART reports no issues; I've run multiple `smartctl -t short` and `smartctl -t long` tests; no issues were reported.
I ran `fmadm faulty` and it reported the following:
--------------- ------------------------------------ -------------- ---------
TIME EVENT-ID MSG-ID SEVERITY
--------------- ------------------------------------ -------------- ---------
Jun 01 18:34:01 5fdf0c4c-5627-ccaa-d41e-fc5b2d282ab2 ZFS-8000-D3 Major
Host : sys1
Platform : xxxx-xxxx Chassis_id : xxxxxxx
Product_sn :
Fault class : fault.fs.zfs.device
Affects : zfs://pool=data/vdev=cad34c3e3be42919
faulted but still in service
Problem in : zfs://pool=data/vdev=cad34c3e3be42919
faulted but still in service
Description : A ZFS device failed. Refer to http://illumos.org/msg/ZFS-8000-D3
for more information.
Response : No automated response will occur.
Impact : Fault tolerance of the pool may be compromised.
Action : Run 'zpool status -x' and replace the bad device.
As it suggests, I ran `zpool status -x`, and it reports `all pools are healthy`.
I ran some DTraces and found that all the IO activity is from `` (for the file), which is metadata, so there actually isn't any file IO going on.
When I run `kstat -p zone_vfs`, it reports the following:
zone_vfs:0:global:100ms_ops 21412
zone_vfs:0:global:10ms_ops 95554
zone_vfs:0:global:10s_ops 1639
zone_vfs:0:global:1s_ops 20752
zone_vfs:0:global:class zone_vfs
zone_vfs:0:global:crtime 0
zone_vfs:0:global:delay_cnt 0
zone_vfs:0:global:delay_time 0
zone_vfs:0:global:nread 69700628762
zone_vfs:0:global:nwritten 42450222087
zone_vfs:0:global:reads 14837387
zone_vfs:0:global:rlentime 229340224122
zone_vfs:0:global:rtime 202749379182
zone_vfs:0:global:snaptime 168018.106250637
zone_vfs:0:global:wlentime 153502283827640
zone_vfs:0:global:writes 2599025
zone_vfs:0:global:wtime 113171882481275
zone_vfs:0:global:zonename global
The high counts of `1s_ops` and `10s_ops` are very concerning.
I'm thinking that it's the controller but I can't be sure; anyone have any ideas? Or where I can get more info?
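A couple of hedged diagnostic sketches that might help narrow it down between the drives and the controller: the FMA error telemetry log records transport/driver ereports even when no fault is declared, and the per-device error counters show whether errors are accumulating on one disk or on everything behind the controller.

    # Sketch: dump FMA error telemetry (ereports) for transport/driver events.
    fmdump -eV | less
    # Sketch: per-device soft/hard/transport error counters.
    iostat -En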
user26053
Jun 3, 2015, 07:29 PM
• Last activity: Feb 4, 2019, 12:50 PM
1 vote, 1 answer, 256 views
Problem in running vlc in Openindiana
Three days ago I upgraded VLC in OpenIndiana hipster 2018.04.
The problem is that VLC will open but cannot play anything.
I thought it was due to the ffmpeg package, so I removed my ffmpeg packages:
pkg://sfe-encumbered/video/ffmpeg
pkg://sfe-encumbered/library/video/ffmpeg
pkg://sfe-encumbered/media/mplayer2
smplayer
pkg://sfe-encumbered/media/vlc
and then reinstalled it
/usr/bin/pkg install pkg://sfe-encumbered/video/ffmpeg@0.8.5 mplayer2 smplayer media/vlc
But the problem is still the same.
user
(94 rep)
Jul 22, 2018, 07:47 AM
• Last activity: Oct 24, 2018, 01:24 AM
2 votes, 1 answer, 307 views
How to improve rsync execution time on OmniOS (illumos-based)?
I am testing illumos in some of its variants, currently OmniOS.
As I was benchmarking IO-bound processes, I saw that `rsync` was significantly slower with respect to my reference, FreeBSD 12-CURRENT.
Using same hardware, same command with same source and target disks:
In OmniOS r151026 I measured,
test@omniosce:~# time rsync -aPt /zarc/images /home/test/
real 17m25.428s
user 28m33.792s
sys 2m46.217s
In FreeBSD 12-CURRENT:
test@freebsd:~ % time rsync -aPt /zarc/images /home/test/
374.651u 464.028s 11:30.63 121.4% 567+210k 791583+780083io 2pf+0w
(Note that FreeBSD 12-CURRENT contains debug switches, so it runs slower than future upcoming RELEASE version).
- I noticed that, under FreeBSD, `rsync` was running as 3 processes, all with `nice=0`, two of them **consistently using 50% to 70% CPU time**.
- On OmniOS, `rsync` was also running as 3 processes, also with `nice=0`, but **each one never more than 3%**.
Is the CPU usage the reason execution time on same hardware is so different on FreeBSD and illumos?
If so, since `nice` was the same on both OSes, why does illumos not allow higher CPU usage?
How could one improve `rsync` execution time on an illumos-based OS?
Thank you in advance.
---
## 2018-06-02 edit:
- Clarified question to make it more specific. Thanks to @rui-f-ribeiro
- Answering to @roaima:
1. The source and destination filesystems are both local disks
2. This is not a one-off run for each OS, I have been testing this puzzling situation with many repetitions
3. At every test I am making sure the destination directory tree is completely empty of files matching those in the source
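As a follow-up diagnostic (a sketch, not a conclusion): per-thread microstate accounting would show whether the OmniOS rsync processes are actually blocked on I/O rather than being denied CPU. Run this while the rsync job is active.

    # Sketch: watch rsync's threads; the USR/SYS vs. LAT/SLP columns show
    # whether the processes are CPU-bound, waiting on I/O, or starved for CPU.
    prstat -mLc -p "$(pgrep -d, rsync)" 5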
gsl
(298 rep)
Jun 2, 2018, 09:07 AM
• Last activity: Jun 15, 2018, 12:47 PM
3 votes, 1 answer, 391 views
Omnios having problems using Zone with ZFS NFS dataset
I am experimenting with OmniOS, trying to create a shared ZFS dataset using ZFS's built-in NFS inside a zone, but every time I attempt to do so I get the following message:
zfs create -o casesensitivity=mixed -o nbmand=on -o mountpoint=/dat/share -o sharenfs=rw=@192.168.1.0/24 dat/share
cannot set property for 'dat': 'sharenfs' cannot be set in a non-global zone
So I take this as a sign that you can't use ZFS NFS inside a zone, so I attempted to create the ZFS NFS share outside of the zone, and I get the exact same error:
cannot create 'dat/share': 'mountpoint' cannot be set on dataset in a non-global zone
So now I am stumped, and after a couple of hours of fiddling and googling around, I am hoping someone can shed some light on what I am doing wrong here.
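For anyone hitting the same wall, a hedged sketch of the usual workaround I'm aware of (not a confirmed fix for sharing from inside the zone): set sharenfs from the global zone, or delegate the dataset to the zone with zonecfg so the zone can manage its children; whether sharenfs then works inside the zone depends on the OmniOS release. "myzone" is a placeholder zone name; "dat/share" is the dataset from the question.

    # Sketch: delegate the dataset to the zone from the global zone.
    zonecfg -z myzone <<'EOF'
    add dataset
    set name=dat/share
    end
    verify
    commit
    EOF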
user152044
Jan 3, 2017, 11:28 AM
• Last activity: Mar 2, 2018, 06:33 AM
3 votes, 0 answers, 202 views
Tribblix: cannot start xfce
I installed the illumos (OpenSolaris) distro Tribblix and can't start Xfce. I tried both with a display manager, `slim` (I know it's old, but it's the only one they offer), and from the console. The default Xorg WM, `twm`, however, works perfectly as long as I don't try to run an Xfce programme (e.g. Thunar or Mousepad). Then the entire X server crashes. I've linked to some logs I thought were relevant. Please help!
Xfce log: https://pastebin.com/wQfFBmaX
Xorg log: https://pastebin.com/fYX8NTmJ
spacelander
(193 rep)
Aug 14, 2017, 11:10 AM
Showing page 1 of 20 total questions