Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
30
views
How to implement a Linux Cloud Storage sync with placeholder files
On Windows, Microsoft offers their Cloud Filter API (parts of which are implemented in the Windows kernel as the CldFlt Windows Cloud Files Mini Filter Driver), which provides an interface for cloud storage integration with features similar to Microsoft's OneDrive. AFAIK, Linux doesn't offer anything quite like that (StackOverflow).
One of the most useful features of the Cloud Filter API is placeholder files, which allow the local file system to display files that will be downloaded from the cloud storage on access. This is a huge usability improvement over having to select specific folders to fully sync and download. It also enables space savings by removing unused files from local storage.
Are there any cloud storage clients on Linux that implement such a feature and if so, how do they do that? To be clear, I am not looking for a specific piece of software, but I would like to understand how such a mechanism is implemented on Linux.
Using libfuse might be an option, but it seems less than ideal, since it would require hiding the actual local storage folder from the user and serving a full 'fake' filesystem instead. (What if the daemon isn't running? Can the user access the locally stored files, or see the placeholders and be notified that the daemon is down?)
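For comparison, rclone's FUSE mount approximates placeholder-like behaviour with its VFS cache: directory listings are served immediately and file contents are only downloaded on first access. A minimal sketch, assuming a remote named gdrive has already been configured with rclone config:
# serve the remote at ~/CloudDrive; cache contents locally, evicting least-recently-used files
rclone mount gdrive: ~/CloudDrive --vfs-cache-mode full --vfs-cache-max-size 10G --daemon
It doesn't resolve the 'daemon is down' concern either, though: if the mount isn't running, the directory simply appears empty.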
TrayMan
(101 rep)
Jul 3, 2025, 07:08 PM
• Last activity: Jul 4, 2025, 07:31 AM
1
votes
1
answers
2930
views
Overriding apt-get security check in Debian 9 (stretch) for Cloudera upgrade
I am trying to update Cloudera packages on several servers that have recently been upgraded to Debian 9 (stretch). The latest updates for Cloudera were for Debian 8 (jessie). The update/upgrade fails because Debian 9 thinks that Cloudera's GPG signature is invalid (not secure enough, I think?).
Is there a way I can get around this issue and force Debian to update/upgrade the packages, whether or not it dislikes the GPG key?
Things I've tried that haven't worked:
Adding [trusted=yes] to /etc/apt/sources.list, e.g.:
deb [trusted=yes] http://archive.cloudera.com/cdh5/debian/jessie/amd64/cdh jessie-cdh5 contrib
Telling (I think) apt-get to not worry about the authentication, e.g.:
# apt-get --allow-unauthenticated update
# apt-get --allow-unauthenticated upgrade
Adding a file to /etc/apt/apt.conf.d with the following contents does not work.
APT{ Get { AllowUnauthenticated "1"; }; };
What to do?
EDITED: Here's the error I get from apt-get:
Err:4 http://archive.cloudera.com/cdh5/debian/jessie/amd64/cdh jessie-cdh5 InRelease
The following signatures were invalid: F36A89E33CC1BD0F71079007327574EE02A818DD
Error: GDBus.Error:org.freedesktop.DBus.Error.TimedOut: Failed to activate service 'org.freedesktop.PackageKit': timed out
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://archive.cloudera.com/cdh5/debian/jessie/amd64/cdh jessie-cdh5 InRelease: The following signatures were invalid: F36A89E33CC1BD0F71079007327574EE02A818DD
W: Failed to fetch http://archive.cloudera.com/cdh5/debian/jessie/amd64/cdh/dists/jessie-cdh5/InRelease The following signatures were invalid: F36A89E33CC1BD0F71079007327574EE02A818DD
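For reference, a sketch of how the per-repository override is usually spelled with apt ≥ 1.1 (which covers Debian 9); this is untested against the Cloudera archive:
deb [allow-insecure=yes allow-downgrade-to-insecure=yes] http://archive.cloudera.com/cdh5/debian/jessie/amd64/cdh jessie-cdh5 contrib
# then fetch the unverifiable index and upgrade from it
apt-get --allow-insecure-repositories update
apt-get --allow-unauthenticated upgrade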
Michael Greifenkamp
(11 rep)
Oct 25, 2018, 01:39 PM
• Last activity: Apr 27, 2025, 02:08 AM
0
votes
0
answers
55
views
Advice on Keeping Directories Synced Between Two Linux Laptops?
I need advice about syncing directories between two Linux laptops. Currently, I'm using Google Drive mounted with google-drive-ocamlfuse on both machines, but it's causing significant performance issues. Basic operations like cp, mv, and even compilation are running very slowly.
What I'm specifically looking for is a setup where each laptop maintains its own local copy of the data, with a master copy on a cloud/NFS server. The ideal tool would automatically sync changes made on either laptop with the master copy and update accordingly.
Any suggestions for a better solution would be appreciated.
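As one example of the "local copy on each laptop, master copy on a server" layout described above, a minimal Unison profile sketch (the host name nfs-server and the paths are illustrative):
# ~/.unison/work.prf on each laptop
# auto/batch let it run unattended, e.g. from cron or a systemd timer
root = /home/user/work
root = ssh://nfs-server//srv/master/work
auto = true
batch = true
Each laptop syncs against the master copy, so changes made on either machine propagate the next time the other runs the profile.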
Chuck Garcia
(1 rep)
Nov 30, 2024, 03:48 PM
• Last activity: Nov 30, 2024, 04:25 PM
0
votes
1
answers
128
views
How can I store Coding Projects Locally but sync related documents to iCloud Drive?
I have many Git repositories I will need to keep on my computer, which, after some research, I decided would stay only locally on the MacBook and not on iCloud Drive.
The problem, then, is that I want to sync documents related to these projects to iCloud Drive - including slideshows, text files, documents, spreadsheets, and images - that are not part of the Git repositories but are part of the projects. My first thought was to use the .nosync suffix for each of these, but the other problem is that iCloud is notoriously hard to navigate to, so ideally my entire "Computing" folder is located at ~/Computing.
My second thought is to store what I want synced on iCloud, then alias it to the ~/Computing folder, and this is what I am doing for now. But I would love a solution that I can set up once and never have to "maintain."
How do I prevent the Git repositories from getting synced while syncing the others to iCloud?
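A sketch of the alias approach described above using a plain symlink, so it only needs to be set up once (the folder names are illustrative):
# documents live in iCloud Drive and appear under ~/Computing via a symlink
ICLOUD="$HOME/Library/Mobile Documents/com~apple~CloudDocs"
mkdir -p "$ICLOUD/Computing-Docs" "$HOME/Computing"
ln -s "$ICLOUD/Computing-Docs" "$HOME/Computing/Docs"
# the Git repositories stay outside iCloud entirely, e.g. under ~/Computing/Repos
Since the symlink target never changes, there is nothing to maintain afterwards.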
Marvin
(101 rep)
Jul 9, 2023, 04:06 PM
• Last activity: Nov 13, 2024, 02:17 AM
0
votes
0
answers
144
views
Is it at all possible to use less than 50 GB for an instance boot volume?
I want to create an Oracle Cloud Compute Instance that uses less than 50 GB of space. I only need about 10 GB.
Most information I'm finding on the Internet states that the boot volume on OCI can not be less than 50 GB.
However, https://discuss.hashicorp.com/t/oracle-cloud-block-storage-size-question/9614/7
says:
> In OCI, the boot volume size is governed by the source image you use. If you use an Oracle-provided image, they are all smaller than 50 GB for historical reasons (until a recent OCI update, boot volumes could not exceed 50 GB in size).
I checked the Debian image I'm using, and from what I understand it only needs about 2 GB:
# wget https://cdimage.debian.org/cdimage/cloud/bookworm/latest/debian-12-generic-arm64.qcow2
# qemu-img info debian-12-generic-arm64.qcow2
image: debian-12-generic-arm64.qcow2
file format: qcow2
virtual size: 2.0G (2147483648 bytes)
disk size: 409M
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
Still, OCI creates 50 GB boot volumes. Is there any way to instruct the system to create a 10 GB boot volume?
(As a side note, my own instances are 47 GB if I don't change the default volume size suggested when I create them. So it is possible to have a boot volume smaller than 50 GB?)
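For reference, the OCI CLI exposes the boot volume size at launch time; whether it accepts a value below the documented 50 GB minimum is exactly what's in question here. A sketch with placeholder OCIDs:
# values in $VARIABLES are placeholders
oci compute instance launch \
    --availability-domain "$AD" \
    --compartment-id "$COMPARTMENT_OCID" \
    --image-id "$IMAGE_OCID" \
    --shape VM.Standard.E2.1.Micro \
    --subnet-id "$SUBNET_OCID" \
    --boot-volume-size-in-gbs 50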
ᴍᴇʜᴏᴠ
(818 rep)
Oct 26, 2024, 06:51 AM
1
votes
2
answers
163
views
sudo chown: permission denied even though I'm the owner
Could someone explain this?
john@john-pcRefs:~/pCloudDrive/someFolder$ ls -al
total 16
drwxr-xr-x 2 john john 4096 Jan 11 2022 .
drwxr-xr-x 4 john john 4096 Jan 11 2022 ..
-rw-r--r-- 1 john john 10439 Sep 22 18:48 EnvironmentSetup.sh
-rw-r--r-- 1 john john 3370 Mar 25 2023 GitInitialization.sh
-rw-r--r-- 1 john john 342 Jul 10 2023 InitDatabase.sh
john@john-pcRefs:~/pCloudDrive/someFolder$ echo $USER
john
john@john-pcRefs:~/pCloudDrive/someFolder$ sudo chmod +x GitInitialization.sh
chmod: cannot access 'GitInitialization.sh': Permission denied
If it helps, I'm working on a cloud drive and I've just upgraded from Ubuntu 22 to Ubuntu 24.
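For context, an assumption about the cause, since pCloudDrive is a FUSE mount: FUSE denies access to every user other than the one who performed the mount, including root, unless allow_other or allow_root was passed at mount time, which would explain sudo failing while the plain user works. A generic illustration with sshfs rather than pCloud's own client:
grep user_allow_other /etc/fuse.conf        # must be uncommented for non-root users to use these options
sshfs -o allow_root user@host:/data ~/mnt   # a FUSE mount that root is also allowed to traverse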
Zlotz
(123 rep)
Sep 22, 2024, 06:56 PM
• Last activity: Oct 4, 2024, 11:05 PM
0
votes
1
answers
1025
views
vgcreate: Cannot use /dev/vdb1: device has a signature
When I try to create a new Volume Group with my LVM partition
vgcreate vgdata /dev/vdb1
I've got a message
Cannot use /dev/vdb1: device has a signature
What does it mean?
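A sketch of the usual way to see which signature LVM is complaining about, and to remove it only if the old contents of the partition really are disposable:
wipefs /dev/vdb1        # list existing signatures (filesystem, RAID, etc.) without changing anything
wipefs -a /dev/vdb1     # erase them, then retry: vgcreate vgdata /dev/vdb1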
Irina
(139 rep)
Aug 19, 2024, 12:03 PM
• Last activity: Aug 20, 2024, 06:06 AM
0
votes
3
answers
742
views
Is there any way to upload large amounts of files to Google Drive with a decent speed besides Filezilla Pro and RClone?
I signed up for Google Drive's Premium 2 TB plan for my backups and needed a way to send more than 1 million files there. The web interface is very problematic for this purpose: if the transfer has any errors, identifying what's online and what's not would be very difficult.
So I started to look for some safe way to send files there.
First I found google-drive-ftp-adapter, but as I was trying to install it I got an error that I couldn't solve: *"This app is blocked. This app tried to access sensitive info in your Google Account. To keep your account safe Google blocked this access."* The project has no activity so I guessed that this was some move by Google that wasn't addressed by the maintainers.
Then I tried adding Google Drive to GNOME's Online Accounts. It did mount the drive, but when I tried to upload the files the speed was ridiculous: around 15 KB/s, which would take me something like 10,000 hours to upload the files (around 543 GB), and I got errors even on the first files.
After that I tried FileZilla Pro, which connects to Google Drive. So I bought it, and it worked fine. The speed is around 15 MB/s, which is not ideal; it will take about 10 hours to upload, but that's much better than 10,000 and it's doable. FileZilla also has the "Failed transfers" tab that lets me see the transfers that had an error, so I just redo them and that's it.
I wanted a GUI tool, but I also gave a chance to rclone, which proved to work really well; I could make it reach 32 MB/s, which brought the time down to 5 hours. The drawback is that if any transfer has an error I have to cherry-pick the failures out of a log file to retransfer them. This is more inconvenient than FileZilla.
But none of these solutions felt like my endgame. Each one has its drawbacks. I would really prefer a FOSS solution, preferably with a GUI, that simply lets me choose the directories I want to copy, makes it easy to redo the errors, and is fast enough. GUI tools are preferred, but I would consider a command-line tool if it's really efficient, lets me choose what I want, and has decent logging while copying, preferably visual.
Does anyone have any other solution besides FileZilla Pro and rclone? How would you send more than 1 million files from several different folders to Google Drive?
**Edit**
I really had underestimated the speed. It's been copying files for 17 hours and there's only 368GB copied. With RClone. 🙄
**Edit2**
Underestimated? That's quite an understatement. The upload finally concluded. And guess what: 3d 1h 13m 42.3s to upload 463.169 GiB*.
Now... OK, I know about small-file overhead and all that. But this level of overhead is not acceptable by any standard. I really have to find some way to speed things up to an acceptable level.
Searching for a solution I saw that TeraBox has a feature called "cloud decompression". Will give it a try.
*I know the value differs from what I stated above. But that's because I had already uploaded a part of the files in a previous run.
**Edit3**
TeraBox does have cloud decompression. They offer 1 TB free, their paid plan (2 TB) is as cheap as it gets (US$3.49/month), and it does have the ability to decompress online. Unfortunately, decompression of files larger than 12 GiB isn't supported, and to divide 530 GiB into 12 GiB packages I would have to cherry-pick files in the sub-dirs, which is a lot of work. Impractical. So this solution is only applicable if we either want to do that 3-day upload again or maintain sub-dirs as packages.
No FTP client, paid or free, supports TeraBox the way FileZilla supports Google Drive. But they do have desktop apps for all major OSes (Windows, Mac, Linux) and a mobile app for Android.
to quote U2: "And I still haven't found, what I'm looking for" :D
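For what it's worth, a sketch of the rclone flags that usually matter for huge numbers of small files (the remote name gdrive and the values are illustrative):
# more parallel transfers and checkers, bigger Drive upload chunks, cheaper listings
rclone copy /data gdrive:backup \
    --transfers 16 --checkers 32 \
    --drive-chunk-size 64M --fast-list \
    --log-file=upload.log --log-level INFO
Re-running the same command only copies what is still missing or changed, which covers the "redo the errors" case without cherry-picking the log.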
Nelson Teixeira
(470 rep)
May 9, 2024, 07:16 AM
• Last activity: May 15, 2024, 02:22 AM
0
votes
1
answers
1739
views
Kernel Panic on CentOS - Google Compute Engine Instance
I'm getting a kernel panic error in a CentOS instance on Google Compute Engine. I'm able to see the error and have already figured out how to solve it, but I can't get into the GRUB menu through the serial console.
dracut: Mounted root filesystem /dev/sda1
dracut: Loading SELinux policy
type=1404 audit(1479929075.614:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
dracut: SELinux: Could not open policy file
[] ? panic+0xa7/0x179
[] ? perf_event_exit_task+0xc0/0x340
[] ? do_exit+0x867/0x870
[] ? fput+0x25/0x30
[] ? do_group_exit+0x58/0xd0
[] ? sys_exit_group+0x17/0x20
[] ? system_call_fastpath+0x16/0x1b
The CentOS version is 6.7 and this happened after a yum update. I'm just trying to get into GRUB's menu to append "selinux=0" and boot into permissive mode, but it seems that's not possible through the serial console. I would appreciate any help.
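For reference, a sketch of the usual workaround on GCE when GRUB itself is unreachable: move the boot disk to a rescue instance and edit the kernel line there (the instance, disk, and device names are illustrative):
gcloud compute instances detach-disk broken-vm --disk broken-disk
gcloud compute instances attach-disk rescue-vm --disk broken-disk
# on rescue-vm: mount the root filesystem and append selinux=0 to the kernel line(s)
sudo mount /dev/sdb1 /mnt
sudo sed -i '/^[[:space:]]*kernel /s/$/ selinux=0/' /mnt/boot/grub/grub.conf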
João Felipe
(11 rep)
Nov 24, 2016, 01:09 PM
• Last activity: Nov 17, 2023, 01:02 PM
0
votes
1
answers
202
views
Cannot use SSH via internal IP address but can ping it easily
I have this setup in Oracle Cloud Infrastructure. I have two networks:
**Network 1**:
internal cidr: 192.168.1.0/24
public IP: 10.0.0.1
**Network 2**:
internal cidr: 192.168.2.0/24
public IP: 10.0.0.2
They are connected via the OCI "Local Peering Gateway" tool (without this, I couldn't even ping).
Both are running sshd and are accessible via SSH on their public IP addresses. This all works fine.
However, when I want to connect from Network 1 to Network 2 via the internal IP address, I cannot, even though it pings fine:
admin@network1:$ ping 192.168.2.10
PING 192.168.2.10 (192.168.2.10) 56(84) bytes of data.
64 bytes from 192.168.2.10: icmp_seq=1 ttl=54 time=154 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=54 time=152 ms
From the other network, though, I cannot even ping:
admin@network2:$ ping 192.168.1.10
PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data.
--- 155.0.10.38 ping statistics ---
347 packets transmitted, 0 received, 100% packet loss, time 354309ms
The traceroute shows a lot of asterisks along the way, and nmap from both machines does not show any open ports, which is weird because the ports are open publicly and I can connect via the public IP using SSH without a problem.
Any idea what it could be? I am kind of helpless, especially because OCI does not really provide support on this matter and many services do not provide logs :(
PS: Please don't advise changing the provider/cloud; I can't, that's the client's request.
----------- EDIT:
wireshark ping log
admin@network1$ sudo tshark -i ens3 | grep 192.168.2.10
Running as user "root" and group "root". This could be dangerous.
Capturing on 'ens3'
281 273 13.491209915 192.168.1.10 → 192.168.2.10 ICMP 98 Echo (ping) request id=0x000f, seq=1/256, ttl=64
300 279 13.644608152 192.168.2.10 → 192.168.1.10 ICMP 98 Echo (ping) reply id=0x000f, seq=1/256, ttl=54 (request in 273)
313 295 14.492540119 192.168.1.10 → 192.168.2.10 ICMP 98 Echo (ping) request id=0x000f, seq=2/512, ttl=64
298 14.645324623 192.168.2.10 → 192.168.1.10 ICMP 98 Echo (ping) reply id=0x000f, seq=2/512, ttl=54 (request in 295)
wireshark for ssh connection:
643 628 37.001410195 192.168.1.10 → 192.168.2.10 TCP 74 [TCP Retransmission] 36306 → 22 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1327517210 TSecr=0 WS=128
855 854 53.226989006 192.168.1.10 → 192.168.2.10 TCP 74 45260 → 22 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1327533435 TSecr=0 WS=128
945 912 54.281406442 192.168.1.10 → 192.168.2.10 TCP 74 [TCP Retransmission] 45260 → 22 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1327534490 TSecr=0 WS=128
963 948 56.329410437 192.168.1.10 → 192.168.2.10 TCP 74 [TCP Retransmission] 45260 → 22 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1327536538 TSecr=0 WS=128
1004 991 60.361413567 192.168.1.10 → 192.168.2.10 TCP 74 [TCP Retransmission] 45260 → 22 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1327540570 TSecr=0 WS=128
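For reference, a sketch of what is typically checked in this situation: SYN retransmissions with working ICMP usually point at a security list or host firewall rather than routing.
# on the unreachable instance (192.168.2.10) - OCI-provided images ship host firewall rules too
sudo iptables -L INPUT -n --line-numbers    # is 22/tcp accepted from 192.168.1.0/24?
sudo ss -tlnp | grep ':22'                  # is sshd listening on the private interface?
In the OCI console, both subnets' security lists also need a stateful ingress rule for TCP 22 from the peer VCN's CIDR, plus route rules pointing at the Local Peering Gateway in both directions.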
Mr.P
(101 rep)
Aug 25, 2023, 01:02 PM
• Last activity: Aug 25, 2023, 01:27 PM
0
votes
1
answers
2936
views
ping can resolve hostname, while dig is unable to resolve the same hostname?
I'm experiencing a strange issue where ping is able to resolve a DNS hostname while dig cannot.
I've tried dig +search to use the search entries in /etc/resolv.conf, and dig @<nameserver> to set the nameserver explicitly, but that didn't help.
How can I understand why ping resolves the hostname while dig is unable to?
kube@ctf-k8s-deploy-647d66b697-lxqkl:~$ cat /etc/resolv.conf
nameserver 100.64.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
kube@ctf-k8s-deploy-647d66b697-lxqkl:~$ dig +search ctf-k8s-deploy-647d66b697-lxqkl
; <<>> DiG 9.11.5-P4-5.1-Debian <<>> +search ctf-k8s-deploy-647d66b697-lxqkl
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN
kube@ctf-k8s-deploy-647d66b697-lxqkl:~$ dig @100.64.0.10 +search ctf-k8s-deploy-647d66b697-lxqkl
; <<>> DiG 9.11.5-P4-5.1-Debian <<>> @100.64.0.10 +search ctf-k8s-deploy-647d66b697-lxqkl
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47568
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ctf-k8s-deploy-647d66b697-lxqkl. IN A
;; Query time: 0 msec
;; SERVER: 100.64.0.10#53(100.64.0.10)
;; WHEN: Mon Mar 23 14:46:53 UTC 2020
;; MSG SIZE rcvd: 60
kube@ctf-k8s-deploy-647d66b697-lxqkl:~$ ping ctf-k8s-deploy-647d66b697-lxqkl
PING ctf-k8s-deploy-647d66b697-lxqkl (100.96.1.8) 56(84) bytes of data.
64 bytes from ctf-k8s-deploy-647d66b697-lxqkl (100.96.1.8): icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from ctf-k8s-deploy-647d66b697-lxqkl (100.96.1.8): icmp_seq=2 ttl=64 time=0.019 ms
64 bytes from ctf-k8s-deploy-647d66b697-lxqkl (100.96.1.8): icmp_seq=3 ttl=64 time=0.021 ms
^C
--- ctf-k8s-deploy-647d66b697-lxqkl ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 40ms
rtt min/avg/max/mdev = 0.019/0.019/0.021/0.005 ms
**Update**: /etc/hosts:
kube@ctf1-deploy1-89b48b46-zkqld:~$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
100.96.3.22 ctf1-deploy1-89b48b46-zkqld
/etc/resolv.conf:
kube@ctf1-deploy1-89b48b46-zkqld:~$ cat /etc/resolv.conf
nameserver 100.64.0.10
search ctf1-ns.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
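For reference, a likely source of the asymmetry given the /etc/hosts shown in the update: ping resolves through NSS (glibc's getaddrinfo), which consults /etc/hosts before DNS, while dig always queries the DNS server directly and never reads /etc/hosts. getent shows the NSS view:
getent hosts ctf1-deploy1-89b48b46-zkqld    # resolves via NSS, the same path ping uses
dig +search ctf1-deploy1-89b48b46-zkqld     # pure DNS query; NXDOMAIN if only /etc/hosts knows the name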
Shuzheng
(4931 rep)
Mar 23, 2020, 02:53 PM
• Last activity: Jul 8, 2023, 09:01 AM
80
votes
12
answers
93289
views
Mount Google Drive in Linux?
Now that Google Drive is available, how do we mount it to a Linux filesystem? Similar solutions exist for Amazon S3 and Rackspace Cloud Files.
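One FUSE-based sketch, assuming the google-drive-ocamlfuse package is installed and a browser is available for the OAuth step:
mkdir -p ~/GoogleDrive
google-drive-ocamlfuse ~/GoogleDrive    # the first run opens a browser to authorize access
fusermount -u ~/GoogleDrive             # unmount when done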
blee
(1352 rep)
Apr 24, 2012, 09:50 PM
• Last activity: Jun 25, 2023, 10:51 PM
1
votes
2
answers
3484
views
I am Unable to access to root user in oracle linux server
I was configuring Ansible on my servers, where I created an Ansible user on all servers and gave them sudo access, so that whenever I SSH to another server using the Ansible user it does not ask me for a password. I successfully configured it on two servers, but when I was doing it on another server by making changes to the /etc/sudoers file, the system became unresponsive. I then restarted my terminal, and now when I try to log in to that server as the root user it throws an error:
$ sudo su
>>> /etc/sudoers: syntax error near line 121 <<<
sudo: parse error in /etc/sudoers near line 121
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
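For context, a couple of recovery paths that are commonly available when /etc/sudoers is broken (a sketch; which of them applies depends on the image and on console access):
pkexec visudo    # works if polkit is installed and your user is allowed to authenticate as root
# otherwise, boot to single-user/rescue mode via the cloud provider's serial console,
# then check the syntax and locate the bad line:
visudo -c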
user21824488
(11 rep)
May 20, 2023, 11:24 AM
• Last activity: May 22, 2023, 11:21 AM
0
votes
3
answers
1765
views
Detect whether a Linux execution host is cloud based or not
Presently I'm checking by running dmidecode -s bios-version and grepping against major cloud vendors. Ex:
# From an amazon ec2 VM
$ sudo dmidecode -s bios-version
4.2.amazon
Is there a generic and more reliable approach for finding this?
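A sketch of broader checks than the BIOS version string alone (the exact labels vary with the systemd and cloud-init versions installed):
systemd-detect-virt                  # hypervisor type: kvm, xen, microsoft, amazon (newer systemd), ...
cat /sys/class/dmi/id/sys_vendor     # e.g. "Amazon EC2", "Google", "Microsoft Corporation"
command -v cloud-id && cloud-id      # cloud-init's own datasource guess, if cloud-init is present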
nithinj
(180 rep)
Apr 7, 2017, 12:04 PM
• Last activity: Mar 21, 2023, 03:06 AM
0
votes
1
answers
97
views
What is the free environment to deploy an application built with docker compose
Is there a free cloud service that allows you to deploy a web application developed with **JSF** and **MySQL**?
The application is built locally with **docker compose**, using the Docker images of WildFly and MySQL, and it works fine.
It remains to deploy it in the cloud to test the operation.
Thanks
Dev Learning
(11 rep)
Oct 11, 2022, 07:35 PM
• Last activity: Feb 4, 2023, 10:39 AM
1
votes
1
answers
328
views
Using Unison to sync my desktop computer with a container in the cloud
I'd like to use Unison to sync a directory between my desktop computer and a container that lives in the cloud. I have this kind of working with Unison, but the problem I'm seeing is that every time I spin up the container, it can have a different hostname and IP number, and sshd will be running on a different port number.
To make this mostly transparent to me, every time I spin up the container, I edit my ~/.ssh/config file like so:
Host pod
User root
Hostname 184.26.5.182
Port 10666
I replace the IP number and port number accordingly.
Once I've done this, I can log into the container using ssh pod, etc. I can run scripts that rsync stuff around with no problem, even though they have the host alias "pod" hardwired into them. (If I knew how to make rsync do a bidirectional sync with deletions, then I'd just use that, but I'm pretty sure that is not possible.)
My Unison config file looks like this:
root = /Users/me/path/to/dir
root = ssh://pod//path/to/dir
ignore = Path subdir1
ignore = Path subdir2
Unfortunately, this does not work well, because every time I spin up the container I get a different port number (the IP number doesn't usually change), and Unison considers this to be a fresh sync. Which is not at all what I want.
Is there a way to tell Unison to ignore the fact that the remote container seems to have changed to a different computer, and to just trust that my "pod" host alias can be relied on for host identity?
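One preference that exists for exactly this situation is Unison's rootalias, which registers one canonical root name as an alias for another; a sketch, untested against this setup, where CONTAINER-HOSTNAME stands for whatever name the freshly spun-up container reports:
root      = /Users/me/path/to/dir
root      = ssh://pod//path/to/dir
rootalias = //CONTAINER-HOSTNAME//path/to/dir -> //pod//path/to/dir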
Douglas
(121 rep)
Jan 16, 2023, 08:37 PM
• Last activity: Jan 17, 2023, 04:04 AM
2
votes
1
answers
720
views
How to Repair Grub in a Cloud VM (Azure)
I ran into some trouble doing do-release-upgrade and it killed my GRUB boot.
Because I couldn't find the steps for fixing this online, I'll answer with my own notes below.
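For comparison, the generic rescue-VM recipe looks roughly like this (device names are illustrative, and the Azure-specific notes below may differ):
# attach the broken OS disk to a working VM, then chroot and reinstall GRUB
sudo mount /dev/sdc1 /mnt
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt grub-install /dev/sdc
sudo chroot /mnt update-grub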
laktak
(6313 rep)
Sep 6, 2022, 10:30 PM
• Last activity: Sep 7, 2022, 06:55 AM
1
votes
1
answers
112
views
ssh to instances on IBM-cloud using private IP
I am creating virtual server instances (using my custom image) on IBM Cloud.
After the VMs are provisioned, they get a private IP (10.X.X.X), and they can get a floating IP (public IP) so the Internet can access them.
How can I freely access those VMs from my company network without opening them to the world?
Or, alternatively, how can I make the VMs get a private IP from my company subnet, so that every entity in the company has access to the machines and vice versa, but without a public IP?
To clarify: I need to SSH to those VMs using their private IPs.
Thanks for your answers!
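One common pattern, sketched below, is to give a single bastion VM the floating IP and reach the others on their private addresses through it with OpenSSH's ProxyJump (the host names and the floating-IP placeholder are illustrative):
# ~/.ssh/config on the company workstations
Host bastion
    HostName <floating-ip>
    User root
Host 10.*
    ProxyJump bastion
After that, ssh 10.x.x.x hops through the bastion transparently; a site-to-site VPN between the company network and the cloud subnet would avoid even the one public IP.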
Orly Orly
(123 rep)
Sep 5, 2022, 07:03 AM
• Last activity: Sep 5, 2022, 07:38 AM
2
votes
2
answers
1074
views
Installing man pages for Fedora cloud
I have installed a Fedora cloud image for education purposes. It has no man pages but I want them.
Gergely Gombos writes a solution at the fedora-cloud issues page:
> Workaround solution for later reference: comment out tsflags=nodocs in /etc/dnf/dnf.conf, then reinstall *everything* with
>
> sudo dnf reinstall $(sudo dnf list --installed | awk '{print $1}')
Is there an easier solution than reinstalling all packages?
"no man pages · Issue #9 · fedora-cloud/docker-brew-fedora · GitHub"
Gergely
(826 rep)
Mar 22, 2021, 12:18 PM
• Last activity: Sep 2, 2022, 11:46 AM
2
votes
1
answers
5244
views
Always Free: NO NETWORK after Upgrade from Ubuntu 20.04 LTS ARM to Ubuntu 22.04 LTS ARM
**Scenario is the following:**
I upgraded from Ubuntu 20.04 LTS to Ubuntu 22.04 LTS. However, after upgrading I completely lost connection to the host and I'm only able to access it through the Cloud Shell console.
**Observed behavior:**
- SSH is not accessible through the public IP; only Cloud Shell through the Serial Console works. I tested whether RSA key exchange was the issue (ECDSA is now recommended), but this doesn't seem to be it.
- Unable to reach any source when doing sudo apt update.
- nslookup google.es gives no output (unreachable).
- ip addr shows the IP interfaces are up:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:00:95:86 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.62/24 metric 100 brd 10.0.0.255 scope global dynamic ens3
valid_lft 62767sec preferred_lft 62767sec
inet6 fe80::17ff:fe00:9586/64 scope link
valid_lft forever preferred_lft forever
- ip route output:
default via 10.0.0.1 dev ens3 proto dhcp src 10.0.0.62 metric 100
10.0.0.0/24 dev ens3 proto kernel scope link src 10.0.0.62 metric 100
10.0.0.1 dev ens3 proto dhcp scope link src 10.0.0.62 metric 100
169.254.169.254 via 10.0.0.1 dev ens3 proto dhcp src 10.0.0.62 metric 100
- iptables -L output:
Chain INPUT (policy DROP)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
- The output of iptables-save -c is as follows:
:INPUT DROP [71948:4580785]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [59782:3909997]
COMMIT
# Completed on Tue Aug 16 08:10:43 2022
- Netplan has only one configuration file, called 50-cloud-init.yaml, which contains the following:
network:
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: 02:00:17:00:95:86
            set-name: ens3
    version: 2
- /etc/resolv.conf output:
nameserver 127.0.0.53
options edns0 trust-ad
search vcn09040100.oraclevcn.com
- I'm not sure if this is an Oracle Cloud configuration issue, although I haven't touched anything and it was working before, hosting a website. VCN vcn-20210904-0043 is assigned a subnet of 10.0.0.0/24 and the following ingress security list:
(screenshot of the subnet's ingress security list rules)
I'm running out of ideas and any help is GREATLY appreciated. I have other instances running on the same tenant, sharing the same VCN, with no problems whatsoever, so I'm inclined to think this is an OS issue.
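A quick test worth noting here (an assumption based on the iptables -L output above, where INPUT and FORWARD have a DROP policy and no rules at all, which would drop even the replies to outgoing traffic):
sudo iptables -P INPUT ACCEPT     # temporarily open the host firewall
sudo iptables -P FORWARD ACCEPT
sudo apt update                   # does connectivity come back?
# if so, restore and persist a proper ruleset, e.g. /etc/iptables/rules.v4 via the
# iptables-persistent / netfilter-persistent packages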

kiraitachi
(23 rep)
Aug 15, 2022, 05:19 PM
• Last activity: Aug 16, 2022, 11:16 AM
Showing page 1 of 20 total questions