Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
1
answers
2005
views
How to get the private IP address of an EC2 instance after spinning it up from a local machine/Jenkins outside AWS
Assume we have already spun up instances on AWS using the CloudFormation plugin from Jenkins, outside AWS. How can I then get their private IP addresses from my local machine/Jenkins using any API method? I tried the ruby aws-sdk and REST API calls to get the private IP from outside AWS (at my local machine), and the connection times out. Here are some examples that did not yield the IP address/EC2 objects and timed out:
# using the ruby aws-sdk (v1) -- requiring both aws-sdk-v1 and aws-sdk
# together is unnecessary; configure once and reuse
require 'rubygems'
require 'aws-sdk-v1'

AWS.config(:region            => "xxxxx",
           :access_key_id     => "xxxxx",
           :secret_access_key => "xxxxx")

ec2 = AWS::EC2.new
ec2.instances.each do |instance|
  puts instance.id
  puts instance.private_ip_address
end
# using a REST API client -- note that the EC2 Query API requires
# Signature Version 4 signed requests; access keys cannot be passed
# as plain query-string parameters like this
require 'rubygems'
require 'rest-client'

url = "https://ec2.amazonaws.com/?Action=DescribeNatGateways&ImageId=xxxxxxx&access_key_id=xxxxxx&secret_access_key=xxxxxxxx"
response = RestClient::Request.execute(:url => url, :method => :get, :verify_ssl => false)
puts response
I also tried uploading a text file containing the IP address to S3 and then reading it back:
-------------- the CloudFormation JSON contains the following ---------
"wget -qO- http://169.254.169.254/latest/meta-data/local-ipv4 >> dockeriseleniumgrid_ip_address\n", "aws s3 cp dockeriseleniumgrid_ip_address s3://xxxxx/dockeriseleniumgrid_ip_address\n"
---------- tried reading it from S3 and writing to the local machine ----------
require 'aws/s3'

S3ID  = "xxxxx"
S3KEY = "xxxx"

AWS::S3::Base.establish_connection!(
  :access_key_id     => S3ID,
  :secret_access_key => S3KEY
)

# "dockeriseleniumgrid_ip_address" is the object key, not the bucket;
# the bucket is the one used in the `aws s3 cp` above
object = AWS::S3::S3Object.find("dockeriseleniumgrid_ip_address", "xxxxx")
File.open("ip_address.txt", "w") do |f|
  f.write(object.value)
end
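In case it is useful for comparison, here is a minimal sketch using the AWS CLI instead of the SDK (assuming the `aws` CLI is installed and configured with working credentials; `my-stack` is a placeholder for the CloudFormation stack name):

```shell
# Helper: pull PrivateIpAddress values out of describe-instances JSON
# (pure text processing, so it can be tried without AWS access).
extract_private_ips() {
  grep -o '"PrivateIpAddress": *"[0-9.]*"' | sed 's/.*"\([0-9.]*\)"/\1/'
}

# Live usage (requires configured credentials and network access to AWS):
#   aws ec2 describe-instances \
#     --filters "Name=tag:aws:cloudformation:stack-name,Values=my-stack" \
#               "Name=instance-state-name,Values=running" \
#     | extract_private_ips
```

A timeout from outside AWS usually points to a client-side proxy/firewall problem rather than the API call itself, so the same check applies whichever client library is used.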
I'm new to AWS and would appreciate any help.
user176867
(41 rep)
Nov 18, 2016, 08:12 PM
• Last activity: Jul 3, 2025, 06:07 AM
5
votes
1
answers
3282
views
How to recover data from EBS volume showing no partition or filesystem?
I restored an EBS volume and attached it to a new EC2 instance. When I run `lsblk`, I can see it under the name `/dev/nvme1n1`. More specifically, the output of `lsblk` is:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 25M 1 loop /snap/amazon-ssm-agent/4046
loop1 7:1 0 55.4M 1 loop /snap/core18/2128
loop2 7:2 0 61.9M 1 loop /snap/core20/1169
loop3 7:3 0 67.3M 1 loop /snap/lxd/21545
loop4 7:4 0 32.5M 1 loop /snap/snapd/13640
loop5 7:5 0 55.5M 1 loop /snap/core18/2246
loop6 7:6 0 67.2M 1 loop /snap/lxd/21835
nvme0n1 259:0 0 8G 0 disk
└─nvme0n1p1 259:1 0 8G 0 part /
nvme1n1 259:2 0 100G 0 disk
As you can see, `nvme1n1` has no partitions. As a result, when I try to mount it on a folder with:
sudo mkdir mount_point
sudo mount /dev/nvme1n1 mount_point/
I get
mount: /home/ubuntu/mount_point: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
The volume has data inside:
ubuntu@ubuntu:~$ sudo file -s /dev/nvme1n1
/dev/nvme1n1: data
Using `sudo mkfs -t xfs /dev/nvme1n1` to create a filesystem is not an option, as Amazon states that:
> **Warning**
> Do not use this command if you're mounting a volume that already has data on it (for example, a volume that was created from a snapshot). Otherwise, you'll format the volume and delete the existing data.
Indeed, I tried it on a second dummy EBS snapshot that I restored, and all I got was an empty `lost+found` folder.
This recovered EBS snapshot has useful data on it. How can I mount it without destroying that data?
---
# parted -l /dev/nvme1n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme0n1: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 8590MB 8589MB primary ext4 boot
Error: /dev/nvme1n1: unrecognised disk label
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme1n1: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
dmesg | grep nvme1n1
[ 68.475368] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 96.604971] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 254.674651] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 256.438712] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
$ sudo fsck /dev/nvme1n1
fsck from util-linux 2.34
e2fsck 1.45.5 (07-Jan-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/nvme1n1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
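For anyone hitting the same wall, a read-only diagnostic sketch (the live commands are shown commented since they need the instance; `mount_point` is a placeholder directory, and nothing here writes to the volume):

```shell
# sectors -> bytes, for mount's offset= option
part_offset_bytes() { echo $(( $1 * 512 )); }

# Read-only checks, run on the instance:
#   sudo blkid /dev/nvme1n1             # probe for any filesystem signature
#   sudo wipefs --no-act /dev/nvme1n1   # list signatures without erasing them
#   sudo pvs                            # the volume may be an LVM physical volume
#
# If the snapshot holds a partition image whose filesystem starts at the
# common 2048-sector boundary, an offset mount may work (mount sets up a
# loop device when offset= is given):
#   sudo mount -o ro,offset=$(part_offset_bytes 2048) /dev/nvme1n1 mount_point/

part_offset_bytes 2048   # -> 1048576
```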
HelloWorld
(1785 rep)
Nov 8, 2021, 12:08 PM
• Last activity: May 19, 2025, 08:41 AM
0
votes
1
answers
91
views
etcd cluster healthcheck | aws ELB
I am setting up a 3-node etcd cluster behind an AWS ELB. I would like to know how I can set up a health check on the ELB for the 3 nodes, preferably using the v3 API endpoint. Thanks.
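A sketch of one common setup, assuming etcd 3.x without client TLS and a classic ELB (`etcd-node` and `my-etcd-elb` are placeholder names): etcd serves a plain HTTP `/health` endpoint on the client port (2379) that the ELB can poll.

```shell
# Quick manual check -- a healthy member typically answers {"health":"true"}:
#   curl -s http://etcd-node:2379/health

# Point a classic ELB health check at that endpoint:
#   aws elb configure-health-check --load-balancer-name my-etcd-elb \
#     --health-check Target=HTTP:2379/health,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Helper for a custom checker: does a /health reply look healthy?
etcd_healthy() { grep -q '"health" *: *"true"'; }
```

If client TLS is enabled, an `HTTP:` target will not work; switching the target to `SSL:2379` at least verifies that the port completes a TLS handshake.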
GMaster
(6837 rep)
Jun 12, 2019, 03:34 AM
• Last activity: Jun 15, 2019, 10:28 AM
3
votes
1
answers
1215
views
kops unable to deploy elb for ingress controller
I am unable to deploy the nginx ingress with kops 1.9 in AWS using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml
# kubectl describe svc ingress-nginx -n ingress-nginx
Warning CreatingLoadBalancerFailed 2s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service ingress-nginx/ingress-nginx: AccessDenied: User: arn:aws:sts::605051368824:assumed-role/masters.play.domain.org/i-0372932f001403e37 is not authorized to perform: iam:CreateServiceLinkedRole on resource: arn:aws:iam::605051368824:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing
status code: 403, request id: b41a558a-9668-11e8-9265-3b1bdc7d9e74
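For the record, a sketch of the usual fix, assuming admin credentials outside the cluster: the masters' instance role lacks `iam:CreateServiceLinkedRole`, so either create the ELB service-linked role once by hand, or grant the permission through the kops cluster spec.

```shell
# One-time creation of the ELB service-linked role, run with credentials
# that do have IAM permissions (not from the master's instance role):
#   aws iam create-service-linked-role \
#     --aws-service-name elasticloadbalancing.amazonaws.com

# Alternative: grant the permission to the masters via `kops edit cluster`:
#   spec:
#     additionalPolicies:
#       master: |
#         [{"Effect": "Allow", "Action": ["iam:CreateServiceLinkedRole"], "Resource": "*"}]

# The policy document, echoed so it can be pasted into the spec:
POLICY='[{"Effect": "Allow", "Action": ["iam:CreateServiceLinkedRole"], "Resource": "*"}]'
echo "$POLICY"
```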
Mohammed Ali
(691 rep)
Aug 2, 2018, 03:33 PM
• Last activity: Mar 1, 2019, 04:27 PM
0
votes
1
answers
1620
views
OpenVPN how to add route to AWS Load balancer
I have a load balancer in AWS, and I want to restrict access in its security group so it is accessible only from my OpenVPN server.
Now, I would like to use `push "route a.b.c.d 255.255.255.255"` in server.conf, in order to advertise the load balancer address to VPN clients as being accessible through the VPN.
The problem here is that AWS points to load balancers with CNAMEs of the form `MyDomainELB-918273645.us-east-1.elb.amazonaws.com` instead of IPs.
How can I advertise AWS load balancers to my clients as reachable through the VPN?
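One sketch, with the caveat that ELB addresses rotate (the helper below is hypothetical and `dig` is assumed to be installed): resolve the CNAME when generating the server config and emit one host-route push per address, regenerating periodically. For an internal ELB, pushing the whole VPC CIDR instead is simpler and survives address changes.

```shell
# Format one host-route push directive for an address:
format_push_route() { echo "push \"route $1 255.255.255.255\""; }

# Resolve the ELB name to its current addresses and emit push lines
# (rerun from cron and reload OpenVPN, since the addresses rotate):
#   dig +short MyDomainELB-918273645.us-east-1.elb.amazonaws.com \
#     | grep -E '^[0-9.]+$' \
#     | while read -r ip; do format_push_route "$ip"; done

format_push_route 203.0.113.10
# -> push "route 203.0.113.10 255.255.255.255"
```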
jmhostalet
(301 rep)
Oct 3, 2018, 07:36 AM
• Last activity: Oct 4, 2018, 11:00 AM
0
votes
2
answers
619
views
Unable to load the AWS ELB on 443 port but able to access ELB url on 80 port
I have 2 web servers behind an AWS ELB. Each web server has one virtual host file plus bundle.crt and .key files. When I load the ELB over http, it directs to the web servers fine, but when I use the https:// ELB URL I get the error below.
I tried various options to troubleshoot this issue: I changed the certificates on the web servers, changed the listener ports on the ELB, checked the security groups of the instances and the ELB, and verified the httpd.conf and ssl.conf files, but I didn't find any server-level error or misconfiguration. Everything seems fine at the server level, yet I still face the issue above. When I tested my web URL on an SSL test site, I got a "The secure protocol is not supported" error. I am not sure how to proceed further.
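To narrow this down from outside the servers, a sketch (the ELB hostname and name below are placeholders): first check whether the ELB completes a TLS handshake at all, then check whether a 443 listener with a certificate is actually attached.

```shell
# Does the ELB answer TLS on 443, and with which certificate?
#   openssl s_client -connect MyELB-123456.us-east-1.elb.amazonaws.com:443 \
#     </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

# Is a 443 listener configured on the ELB?
#   aws elb describe-load-balancers --load-balancer-names my-elb \
#     --query "LoadBalancerDescriptions[].ListenerDescriptions[].Listener"

# A classic-ELB HTTPS listener terminates TLS at the ELB, pairing the
# port with a certificate uploaded to IAM/ACM:
https_listener() {
  echo "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=$1"
}
https_listener "arn:aws:acm:us-east-1:123456789012:certificate/placeholder"
```

If the handshake itself fails, the 443 listener or its certificate is usually the problem, not the web servers behind the ELB.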

vil
(21 rep)
Sep 20, 2017, 07:20 PM
• Last activity: Sep 20, 2017, 08:00 PM
2
votes
0
answers
251
views
AWS ELB Latency issue
I have two c3.2xlarge EC2 machines running Ubuntu, both in the us-west-2a AZ. Both contain the same code and use a MySQL database on AWS RDS (db.r3.2xlarge). Both instances are added to an ELB. Each has one cron job scheduled to run twice a day.
The ELB has been configured to raise an alarm once the latency threshold crosses 5.0. The CPU utilization of both instances averages 30-50%; at peak hours it hits 100% for a minute or two and then returns to normal. But the ELB raises the alarm consistently three times a day. At those times, both instances show:
CPU: ~50%
Memory: 14979 MB total, ~6000 MB used, ~9000 MB free
RDS CPU: ~30%
RDS connections: 200-300 of 5,000
Following https://aws.amazon.com/premiumsupport/knowledge-center/elb-latency-troubleshooting/ , I could find nothing wrong with the instances. But latency still hits a peak and both instances fail to respond.
Until now, I have simply removed one instance from the load balancer, restarted Apache, put it back, and repeated the same for the other instance. This does the job: the instances and the ELB work fine for the next 6-10 hours. But it is not acceptable that someone has to tend to and restart the servers two or three times every day.
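The manual rotation described above can at least be scripted as a stopgap while the root cause is found; a dry-run sketch (`my-elb`, the instance ID, and the host below are placeholders):

```shell
# Print the rotation steps for one instance (dry run; pipe to sh to run):
rotate_instance() {
  echo "aws elb deregister-instances-from-load-balancer --load-balancer-name my-elb --instances $1"
  echo "ssh ubuntu@$2 'sudo service apache2 restart'"
  echo "aws elb register-instances-with-load-balancer --load-balancer-name my-elb --instances $1"
}
rotate_instance i-0123456789abcdef0 10.0.1.5
```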
I need to know whether anything is wrong, and what steps to take to resolve this problem.

Thamilhan
(121 rep)
Mar 7, 2016, 05:53 PM
• Last activity: Oct 19, 2016, 04:47 AM
Showing page 1 of 7 total questions