Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
1
answers
2005
views
How to get the private IP address of an EC2 instance after spinning it up from Jenkins / a local machine outside AWS
Assume we have already spun up the AWS stack using the CloudFormation plugin from Jenkins (outside AWS). How do I then get the private IP address of the instance from my local machine/Jenkins using any API method? I tried the Ruby aws-sdk and REST API calls to get the private IP from outside AWS (locally), and the connection times out. Here are some examples that did not return the IP address/EC2 objects and timed out:
# using ruby aws-sdk
require 'rubygems'
require 'aws-sdk-v1'
require 'aws-sdk'

AWS.config(:region            => "xxxxx",
           :access_key_id     => "xxxxx",
           :secret_access_key => "xxxxx")
ec2 = AWS::EC2.new(:region            => "xxxxxx",
                   :access_key_id     => "xxxx",
                   :secret_access_key => "xxxxxxx")
ec2.instances.each do |test|
  puts test.id
end
# using REST API client
require 'rubygems'
require 'rest-client'

url = "https://ec2.amazonaws.com/?Action=DescribeNatGateways&ImageId=xxxxxxx&access_key_id=xxxxxx&secret_access_key=xxxxxxxx"
response = RestClient::Request.execute(:url => url, :method => :get, :verify_ssl => false)
puts response
I also tried uploading a text file containing the IP address to S3 and then reading it back:
--------------cloudformation Json contains the following ---------
"wget -qO- http://169.254.169.254/latest/meta-data/local-pv4 >>dockeriseleniumgrid_ip_address\n", "aws s3 cp dockeriseleniumgrid_ip_address s3://xxxxx/dockeriseleniumgrid_ip_address\n"
----------tried reading it from s3 and writing to local machine --------
require 'aws/s3'
S3ID = "xxxxx"
S3KEY = "xxxx"
# include AWS::S3
AWS::S3::Base.establish_connection!(
:access_key_id => S3ID,
:secret_access_key => S3KEY
)
bucket = AWS::S3::Bucket.find("dockeriseleniumgrid_ip_address")
File.open("ip_address.txt", "w") do |f|
f.write(bucket.objects.read)
end
I'm new to AWS and would appreciate any help.
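For what it's worth, the instance-metadata address (169.254.169.254) only answers from inside the instance; from a local machine or Jenkins the usual route is the DescribeInstances API. A sketch using the AWS CLI, where the region and the CloudFormation stack name are placeholders:

# print the private IPs of instances created by a given CloudFormation stack
aws ec2 describe-instances --region us-east-1 \
    --filters "Name=tag:aws:cloudformation:stack-name,Values=my-stack" \
    --query 'Reservations[].Instances[].PrivateIpAddress' \
    --output text

A timeout from outside AWS usually points at a proxy or firewall blocking HTTPS to the EC2 endpoint rather than at the SDK itself.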
user176867
(41 rep)
Nov 18, 2016, 08:12 PM
• Last activity: Jul 3, 2025, 06:07 AM
0
votes
2
answers
5534
views
Find specific folder path in s3 bucket
I'm searching a folder in s3 bucket using this command
aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3'
It returns results like
dir1/dir2/dir3/1/aaa.txt
dir1/dir2/dir3/1/bbb.txt
dir1/dir2/dir3/1/ccc.txt
However, I need only the directory path, like
dir1/dir2/dir3
I can remove the unnecessary text to get the directory path with this:
aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir2' | head -n 1 | sed 's/1.*//'
But this does not work when searching for multiple strings with grep:
aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3\|folder3'
I need output like this
dir1/dir2/dir3
folder1/folder2/folder3
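One hedged way to get just the directory paths is to keep only the key column, let sed cut everything after the matched directory, and de-duplicate (this assumes the matched directory always has content beneath it, and that keys contain no spaces):

aws s3 ls s3://bucketname/ --recursive \
    | awk '{print $4}' \
    | grep -iE 'dir3|folder3' \
    | sed -E 's#/(dir3|folder3)/.*#/\1#' \
    | sort -u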
Jaimin Gosai
(11 rep)
Mar 28, 2019, 10:16 AM
• Last activity: May 22, 2025, 10:04 AM
8
votes
4
answers
18516
views
Search inside s3 bucket with logs
How can I search for a string inside a lot of .gz files in an Amazon S3 bucket subfolder? I tried to mount it via s3fs and zgrep, but it's painfully slow. Do you use any other methods?
Is there perhaps an Amazon service I could use to zgrep them quickly?
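Without mounting anything, the AWS CLI can stream each object to stdout, so the files can be searched without touching disk. A rough sketch; bucket, prefix and pattern are placeholders:

aws s3 ls s3://my-bucket/logs/ --recursive | awk '{print $4}' |
while read -r key; do
    aws s3 cp "s3://my-bucket/$key" - 2>/dev/null | zcat | grep -H --label="$key" 'pattern'
done

For the "Amazon service" angle, Athena (or S3 Select) can query gzip-compressed logs in place, which usually beats pulling every object.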
Michal_Szulc
(266 rep)
Sep 26, 2016, 02:04 PM
• Last activity: Jun 4, 2024, 11:24 AM
0
votes
0
answers
47
views
Is it impossible to overwrite a 0byte file on NFS?
I have set up NFS using AWS Storage Gateway and File Share in a Mac environment. When a specific program creates 0-byte files on this NFS share and a file with the same name already exists, the program does not overwrite it; the existing file is retained.
For example:
1. Create a.txt at 12:35 pm
2. Create a.txt at 12:40 pm
3. The system information for a.txt shows the last modification time as 12:35 pm.
Is it impossible to overwrite 0-byte files on NFS? My main question is: when I perform the same action locally, the mtime changes correctly, so why is there this difference?
I tried changing various NFS mount options and also made adjustments to the S3 settings, but I was not able to achieve the desired result.
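One way to narrow it down (a sketch, assuming the macOS client and a mount at /Volumes/share) is to reproduce the write by hand and watch whether the mtime the client reports ever moves:

: > /Volumes/share/a.txt
stat -f 'mtime: %Sm' /Volumes/share/a.txt    # BSD/macOS stat syntax
sleep 60
: > /Volumes/share/a.txt
stat -f 'mtime: %Sm' /Volumes/share/a.txt

If the second timestamp is stale only over NFS, client attribute caching (the ac*/noac-style mount options) or the gateway's own metadata cache would be the first suspects, since truncating an existing file to zero bytes writes no data and may only touch attributes.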
derick-park
(1 rep)
May 29, 2024, 01:38 AM
0
votes
0
answers
501
views
xargs cat a file with aws cli (amazon s3 move)
I have a file called file.csv, which is a list of the files (or more precisely, object keys in an S3 bucket) that need to be moved to another prefix in S3. This file lists 5,300,000 files. I have tried the following, but each move takes a long time.
cat file.csv | xargs -I {} aws s3 mv s3://Bucket1/{} s3://Bucket2/{}
I am trying to speed up the process with the following:
cat file.csv | xargs -P50 -I {} aws s3 mv --recursive s3://Bucket1/{} s3://Bucket2/{}
...but it doesn't seem to be working.
I also tried:
while read line; do
echo ${line} | \
xargs -n1 -P100 -I {} \
aws s3 mv s3://Bucket1/{} s3://Bucket2/{} --recursive
done < file.csv
But that doesn't seem to work either.
How can I run multiple aws cli commands in parallel with xargs by reading the input file?
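A sketch of a parallel variant, assuming each line of file.csv is a single object key (so --recursive is dropped) and an arbitrary concurrency of 16:

xargs -P 16 -I {} aws s3 mv "s3://Bucket1/{}" "s3://Bucket2/{}" < file.csv

Each invocation still pays the CLI's start-up cost per object, so for millions of keys something like S3 Batch Operations, or moving whole prefixes with one recursive command, is usually far faster.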
hanu
(1 rep)
Oct 25, 2023, 03:08 AM
• Last activity: Oct 29, 2023, 11:07 AM
1
votes
1
answers
2656
views
Limiting recursive 'aws s3 ls' searches by number of items in the folder
Here's the scenario:
I need to recursively search through tons and tons of folders and subfolders and write the results to a log file using the ls command, BUT I need to stop searching a folder once it has more than ~10 objects. The reason is that once I have a sample of 10 items from a folder, I know what's in it, and since some folders contain tens of thousands of objects, this will save a lot of time.
Why am I limited to 'ls'? Because I am searching S3, using the command aws s3 ls. The command aws s3 ls --summarize --recursive does what I need; I just need a way to limit the search based on the number of items in a folder.
I have tried using aws s3api list-buckets, list-objects and so forth, but even with the --max-items option it doesn't do what I need. Thanks for your help.
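A sketch of one way to cap the listing at roughly 10 keys per top-level prefix with the s3api commands (the bucket name and the single-level prefix split are assumptions):

aws s3api list-objects-v2 --bucket my-bucket --delimiter '/' \
    --query 'CommonPrefixes[].Prefix' --output text | tr '\t' '\n' |
while read -r prefix; do
    echo "== $prefix =="
    aws s3api list-objects-v2 --bucket my-bucket --prefix "$prefix" \
        --max-items 10 --query 'Contents[].Key' --output text
done

Recursing deeper would mean repeating the CommonPrefixes query per level, but each prefix here costs only one request.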
Sidereal
(111 rep)
Aug 3, 2018, 05:30 PM
• Last activity: Sep 1, 2023, 05:03 AM
0
votes
0
answers
6910
views
aws s3 copy failing with fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
I am copying a file from an S3 bucket to an EC2 instance. The EC2 instance is in the same region (us-east-2) as the S3 bucket. The bucket has ACLs disabled and public access allowed.
# /usr/local/bin/aws s3 cp s3://Bucket_name/Trail_SW/server_dec.tar.gz /tmp --source-region us-east-2
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
/usr/local/bin/aws --version
aws-cli/2.11.16 Python/3.11.3 Linux/4.18.0-240.1.1.el8_3.x86_64 exe/x86_64.rhel.8 prompt/off
Please share any suggestions.
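A 403 on HeadObject usually means the caller's credentials (or the bucket policy) deny s3:GetObject on that key, or the key doesn't exist and listing is denied, so S3 masks it as Forbidden. Two hedged checks, using the bucket/key names from the question:

aws sts get-caller-identity        # which role/user is the instance calling as?
aws s3api head-object --bucket Bucket_name --key Trail_SW/server_dec.tar.gz --region us-east-2

If the first command fails, the instance has no credentials at all and needs an instance profile (or configured keys) with s3:GetObject on the bucket; "public access allowed" on the bucket does not by itself grant that.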
dbadmin
(23 rep)
May 4, 2023, 05:57 AM
• Last activity: May 4, 2023, 06:42 AM
2
votes
1
answers
1739
views
s3fs complains about SSH key or SSL cert - how to fix?
I downloaded and installed [s3fs 1.73](https://code.google.com/p/s3fs/wiki/FuseOverAmazon) on my Debian Wheezy system. The specific steps I took were, all as root:
apt-get -u install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support
./configure --prefix=/usr/local
make
make install
The installation went well and I proceeded to create a file /usr/local/etc/passwd-s3fs with my credentials copied from past notes (I'm pretty sure those are correct). That file is mode 0600, owner 0:0. Piecing together from the example on the web page and the man page, I then try a simple mount as a proof of concept to make sure everything works:
$ sudo -i
# s3fs mybucketname /mnt -o url=https://s3.amazonaws.com -o passwd_file=/usr/local/etc/passwd-s3fs
In short: it doesn't.
The mount point exists with reasonable permissions, and I get no error output from s3fs. However, nothing gets mounted on /mnt, mount knows nothing about it, and if I try umount it says the directory is "not mounted". The system logs say s3fs: ###curlCode: 51 msg: SSL peer certificate or SSH remote key was not OK, but **how do I find out which SSL certificate it is talking about, or in what way it was not OK?** Firefox has no complaints when I connect to that URL, but also redirects me to https://aws.amazon.com/s3/ .
**How do I get s3fs to actually work?**
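curl code 51 is a host-name/certificate mismatch. S3's virtual-hosted endpoint is mybucketname.s3.amazonaws.com, and the wildcard certificate *.s3.amazonaws.com cannot match bucket names that contain dots. A sketch for seeing exactly which certificate is presented, with the bucket name from the question:

openssl s_client -connect mybucketname.s3.amazonaws.com:443 \
    -servername mybucketname.s3.amazonaws.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates

If the subject doesn't cover the bucket host name, the usual workarounds are a bucket name without dots or, in newer s3fs releases, the use_path_request_style option.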
user
(29991 rep)
Sep 20, 2013, 05:30 PM
• Last activity: Feb 1, 2023, 02:19 PM
3
votes
2
answers
2725
views
Get the size of a S3 Bucket's sub-folder through bash script
I am trying to write a bash script to get the total size of the sub-folders in an S3 bucket.
My bucket path: **s3://path1/path2/subfolders**
Inside the path2 folder I have many sub-folders, like
2019_06
2019_07
2019_08
2019_09
2019_10
2019_11
2019_12
I need to get the size of each subfolder in a bash script.
I wrote a script like
#!/bin/bash
FILES=$(mktemp)
aws s3 ls "s3://path1/path2/" >> "$FILES"
cat $FILES
echo
for file in $FILES
do
if [ ! -e "$file" ]
then
s3cmd du -r s3://path1/path2/$file
echo "$file"; echo
continue
fi
echo
done
The output of cat $FILES is as below:
2019_06
2019_07
2019_08
2019_09
2019_10
2019_11
2019_12
But I am getting an error when passing the variable into the for loop. Ideally, for each iteration of the loop, the command run inside do ... done should be:
**s3cmd du -r s3://path1/path2/2019_06**
**s3cmd du -r s3://path1/path2/2019_07**
**s3cmd du -r s3://path1/path2/2019_08**
etc...
so that I can get the total size of each folder. Kindly help!
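As a minimal sketch of what the loop could look like (paths from the question), iterating over the prefixes that aws s3 ls prints as "PRE name/" lines, rather than over the temp file's name:

#!/bin/bash
aws s3 ls "s3://path1/path2/" | awk '$1 == "PRE" {print $2}' |
while read -r prefix; do               # prefix looks like "2019_06/"
    s3cmd du -r "s3://path1/path2/${prefix}"
done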
clarie
(95 rep)
Apr 29, 2020, 06:12 AM
• Last activity: Dec 24, 2022, 06:24 AM
0
votes
1
answers
245
views
How to backup many large files to single compressed file on S3
I have an application that has many thousands of files totaling over 10 TB.
I need to back up this data somewhere (probably to AWS S3).
I'd like to:
1. compress data being backed up
2. save the backup as a single file
For example, as a gzipped tarfile.
Because of the size, I cannot create the gzipped tarfile locally; it would be too large.
How can I:
1. Stream all of these folders and files onto AWS S3 as a single compressed file?
2. Stream the compressed file from S3 back onto my disk to the original filesystem layout?
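The AWS CLI accepts "-" as the local file, meaning stdin/stdout, so both directions can be streamed without a local archive. A sketch with placeholder bucket and paths:

# backup: stream tar+gzip straight into S3; --expected-size (bytes) helps the
# CLI pick sensible multipart chunk sizes for a very large stream
tar -czf - /data | aws s3 cp - s3://my-backup-bucket/app-backup.tar.gz --expected-size 10995116277760

# restore: stream the object back and unpack to the original layout
aws s3 cp s3://my-backup-bucket/app-backup.tar.gz - | tar -xzf - -C /

Note that a single S3 object tops out at 5 TB, so if the compressed stream could exceed that it has to be split into parts (or handed to a deduplicating backup tool instead).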
nick314
(3 rep)
Nov 30, 2022, 06:57 PM
• Last activity: Nov 30, 2022, 07:22 PM
0
votes
1
answers
30
views
Flask systemd service not able to open video file under s3-bucket-mounted folder
My Flask app runs as a systemd service, but it is not able to read a video file from the S3-bucket-mounted folder. OpenCV does not throw an error when it fails to load; it just returns a 0-frame array, so we understand that it is not able to read from the S3 bucket path.
Service file:
[Unit]
Description=service
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/src/my-service
Environment=PATH=/home/ubuntu/src/my-service/venv/bin
ExecStart=/home/ubuntu/src/my-service/venv/bin/uwsgi --ini my-service.ini
[Install]
WantedBy=multi-user.target
The S3 bucket videos are under /home/ubuntu/src/s3Bucket/video. I am able to read from s3Bucket/video if I create a new Python file and run it manually, but the service application cannot. The service can read files under /home/ubuntu/src, but it cannot read files under /home/ubuntu/src/s3Bucket/video. I tried different service file options, but could not figure it out.
Thanks.
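FUSE mounts such as s3fs are visible only to the mounting user unless allow_other is set, and systemd sandbox options (ProtectHome=, PrivateTmp=, and friends) can also hide paths from a unit even when a login shell sees them. A sketch of the first things to check, assuming the folder is an s3fs/FUSE mount:

# can the unit's user see the mount at all?
sudo -u ubuntu ls -l /home/ubuntu/src/s3Bucket/video

# if not, remount with allow_other (requires user_allow_other in /etc/fuse.conf)
s3fs my-bucket /home/ubuntu/src/s3Bucket -o allow_other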
nerdicsapo
(101 rep)
Sep 26, 2022, 08:26 PM
• Last activity: Oct 5, 2022, 10:39 PM
0
votes
2
answers
1210
views
I want to download the latest 150 files from S3 Spaces
I want to download the latest 150 files from S3 Spaces. I used this command
s3cmd get s3://obs/site1/uploads/large/ /home/ankit -r | tail -n150
but it does not do what I want; instead, it starts downloading all the files
For example:
If I command:
**INPUT**
s3cmd ls s3://obs/site1/uploads/large/
**OUTPUT**
2020-04-30 20:04 0 s3://obs/site1/uploads/large/
2020-04-30 20:04 1401551 s3://obs/site1/uploads/large/501587671885rwk.jpg
2020-04-30 20:04 268417 s3://obs/site1/uploads/large/501587676002xe2.jpg
2020-04-30 20:04 268417 s3://obs/site1/uploads/large/501587677157ssj.jpg
2020-04-30 20:04 268417 s3://obs/site1/uploads/large/501587747245hea.jpg
2020-05-01 05:23 399636 s3://obs/site1/uploads/large/87429599_1412258992269430_5992557431891165184_o.jpg
And I want to download only the last file (it is the latest), that is:
2020-05-01 05:23 399636 s3://obs/site1/uploads/large/87429599_1412258992269430_5992557431891165184_o.jpg
I can list the latest file but cannot download it. I listed it with:
s3cmd ls s3://obs/site1/uploads/large/ | tail -n1
OUTPUT:
2020-05-01 05:23 399636
s3://obs/site1/uploads/large/87429599_1412258992269430_5992557431891165184_o.jpg
**So, please tell me the command to download only this latest file.**
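A sketch that sorts the listing by date and time, keeps the newest entries, and fetches them one by one (destination directory as in the question; use tail -n 1 for just the latest file, tail -n 150 for the latest 150):

s3cmd ls s3://obs/site1/uploads/large/ | sort -k1,1 -k2,2 | tail -n 150 |
awk '$3 > 0 {print $4}' |
while read -r url; do
    s3cmd get "$url" /home/ankit/
done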
Ankit JaiSwal
(101 rep)
May 1, 2020, 06:37 AM
• Last activity: Jul 7, 2022, 10:35 PM
2
votes
3
answers
2346
views
autofs : dynamic mounting rule for s3 bucket
I successfully implemented autofs to automatically mount S3 buckets on a server running Ubuntu Server 14.04.5 by following this tutorial. But the number of buckets that need to be mounted automatically is dynamic: it can increase or decrease. So far I have to add or remove a rule in the autofs config whenever the number of buckets changes.
The mount options for those buckets are the same; only the mount path and bucket name differ. Here is my configuration:
on /etc/auto.master
+auto.master
/- /etc/auto.s3bucket --timeout=30
on /etc/auto.s3bucket
[mount-point-bucket1] -fstype=fuse,uid,gid,etc,etc :[tool-mounting]#bucket1
[mount-point-bucket2] -fstype=fuse,uid,gid,etc,etc :[tool-mounting]#bucket2
.....
[mount-point-bucketX] -fstype=fuse,uid,gid,etc,etc :[tool-mounting]#bucketX
My question: is there a built-in script or function in autofs to dynamically add or remove rules in the configuration file, so that I don't need to re-configure whenever a bucket is added or removed?
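autofs supports executable (program) maps: if the map file is executable, automount runs it with the requested key as its argument and mounts whatever entry the script prints, so no per-bucket lines need maintaining. A sketch, assuming the mount-point name equals the bucket name and placeholder fuse options:

#!/bin/bash
# /etc/auto.s3bucket.sh -- marked executable; automount passes the lookup key as $1
key="$1"
# print "options location" for that key; '#' must be escaped in autofs maps
echo "-fstype=fuse,allow_other,uid=1000,gid=1000 :s3fs\#${key}"

With an auto.master line like /s3 /etc/auto.s3bucket.sh --timeout=30, accessing /s3/bucketX then mounts bucketX on demand.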
kahidna
(21 rep)
Aug 19, 2017, 10:53 AM
• Last activity: Jun 7, 2022, 06:30 PM
1
votes
0
answers
380
views
AWS libcrypto resolve messages seen when using a boto3 library, apparently after an update
I'm using the s4cmd package in Python, which in turn uses boto3 to communicate with a (non-Amazon) S3 service.
I've started seeing these warning messages on stderr. I believe this happened after an automatic update to OpenSSL, but that's just my best guess.
AWS libcrypto resolve: searching process and loaded modules
AWS libcrypto resolve: found static aws-lc HMAC symbols
AWS libcrypto resolve: found static aws-lc libcrypto 1.1.1 EVP_MD symbols
> openssl version
OpenSSL 1.1.1g 21 Apr 2020
> cat /etc/os-release | head -n6
NAME="Pop!_OS"
VERSION="20.10"
ID=pop
ID_LIKE="ubuntu debian"
PRETTY_NAME="Pop!_OS 20.10"
VERSION_ID="20.10"
Does anyone know what these messages are, whether they're ignorable, and if they are, how to suppress them?
The onset of these messages correlates with a lot of random SSL failures, both in Firefox and when using boto3. I commonly see errors like [Exception] Connection was closed before we received a valid response from endpoint URL now, but when I ssh into another server I have no problem. An hour later the problems are gone, only to reappear at some apparently random time later.
**Additional info:**
I recently noticed that inside a Docker container on my laptop my boto3 and s4cmd commands work, while they fail on my base OS. I checked openssl version on both:
# Base OS, failing
openssl version
OpenSSL 1.1.1g 21 Apr 2020
# Inside docker container, working
openssl version
OpenSSL 1.1.1 11 Sep 2018
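The "AWS libcrypto resolve" lines appear to come from the AWS Common Runtime (awscrt/aws-c-*) libraries that recent botocore builds can load, and are informational. Since the failures differ between the base OS and the container, one hedged first check is which TLS stack the Python running boto3/s4cmd is actually linked against, as it can differ from the system openssl binary:

python3 -c 'import ssl; print(ssl.OPENSSL_VERSION)'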
David Parks
(1190 rep)
Jun 23, 2021, 05:51 PM
• Last activity: Apr 14, 2022, 08:54 PM
0
votes
1
answers
890
views
Include Variable in S3 CP command for include block
I have a scenario where I want to include some file names in an S3 cp command, but it keeps failing.
Below is my command:
DATE=$(date '+%Y%m%d')
/usr/local/bin/aws s3 cp "/test/copy" "s3://test-bucket/test/copy/" --include "TEST_$DATE.REQHE" --include "TEST_$DATE.ERRMSG_REQHE"
I am getting the error below while running the command:
upload failed: ./ to s3://test-bucket/test/copy/ [Errno 21] Is a directory: '/test/copy'
Can someone help me modify the command so that it works?
Output of manual commands:
[root@server:/home/linux]$ /usr/local/bin/aws s3 cp /test/copy/ s3://test-bucket/test/copy/ --include "TEST_20211220.REQHE" --profile aws-s3 --sse aws:kms --sse-kms-key-id ab457654f-6562-43df-124l-4et75653bq12
upload failed: ../../test/copy to s3://test-bucket/test/copy/ [Errno 21] Is a directory: '/test/copy/'
[root@server:/home/linux]$ /usr/local/bin/aws s3 cp "/test/copy/" "s3://test-bucket/test/copy/" --include "TEST_20211220.REQHE" --profile aws-s3 --sse aws:kms --sse-kms-key-id ab457654f-6562-43df-124l-4et75653bq12
upload failed: ./../test/copy to s3://test-bucket/test/copy/ [Errno 21] Is a directory: '/test/copy/'
[root@server:/home/linux]$ /usr/local/bin/aws s3 cp "/test/copy" "s3://test-bucket/test/copy/" --include "TEST_20211220.REQHE" --profile aws-s3 --sse aws:kms --sse-kms-key-id ab457654f-6562-43df-124l-4et75653bq12
upload failed: ./../test/copy to s3://test-bucket/test/copy/ [Errno 21] Is a directory: '/test/copy/'
Thanks in Advance
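Two likely culprits, offered as a hedged reading: copying a directory with aws s3 cp requires --recursive, and --include only has an effect after an --exclude has filtered something out. A sketch with the same paths and file names:

DATE=$(date '+%Y%m%d')
/usr/local/bin/aws s3 cp "/test/copy/" "s3://test-bucket/test/copy/" --recursive \
    --exclude "*" \
    --include "TEST_${DATE}.REQHE" \
    --include "TEST_${DATE}.ERRMSG_REQHE"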
Ashmit
(1 rep)
Dec 20, 2021, 09:38 AM
• Last activity: Dec 21, 2021, 07:54 AM
0
votes
1
answers
194
views
Uploading to S3 from memory and benchmarking it
I am running some very basic time commands on an S3 read/write. The problem is, I don't want it to be affected by system I/O; I want to benchmark it from memory. A friend suggested using /dev/null as a pipe, but I have a folder of 1000 files which is about 1 GB in size.
My bash command looks like this right now:
time aws s3 cp folder s3://mybucket/folder
What do you suggest that will time only the write from memory?
Many thanks
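One approach (a rough sketch; the mount point and size are placeholders) is to stage the folder on a tmpfs mount so the timed upload reads from RAM rather than disk:

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
cp -r folder /mnt/ramdisk/
# note: copying a directory with "aws s3 cp" needs --recursive
time aws s3 cp /mnt/ramdisk/folder s3://mybucket/folder --recursive

For the read direction, copying from S3 back into the tmpfs keeps local disk out of both measurements.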
Stat.Enthus
(101 rep)
Oct 26, 2021, 08:26 PM
• Last activity: Oct 26, 2021, 09:19 PM
1
votes
0
answers
32
views
Are there commercial/freeware/script systems that could back up (daily/weekly/monthly) to S3?
I used to back up 10 important machines (CentOS/RHEL) using rsnap to physical HDDs.
I'd like to migrate this backup so its target would be an AWS/S3 bucket.
**Do I have to reinvent the wheel, or are there commercial/freeware/script systems that could back up (daily/weekly/monthly) to S3?**
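There are several established tools for this; purely as an illustrative sketch (not an endorsement, and the bucket name is a placeholder), restic can target an S3 bucket directly and handles retention itself:

export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...
restic -r s3:s3.amazonaws.com/my-backup-bucket init
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /etc /home
restic -r s3:s3.amazonaws.com/my-backup-bucket forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune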
boardrider
(262 rep)
Sep 27, 2021, 05:23 PM
1
votes
3
answers
1509
views
How to pass each line of an output file as an argument to a for loop in the same bash script?
I am trying to write a bash script to get the total size of the sub-folders in an S3 bucket. My bucket path:
s3://path1/path2/subfolders
Inside the path2 folder I have many sub-folders, like
2019_06
2019_07
2019_08
2019_09
2019_10
2019_11
2019_12
I need to get the size of each subfolder in a bash script.
I wrote a script like
#!/bin/bash
FILES=$(mktemp)
aws s3 ls "s3://path1/path2/" >> "$FILES"
cat $FILES
echo
for file in $FILES
do
if [ ! -e "$file" ]
then
s3cmd du -r s3://path1/path2/$file
echo "$file"; echo
continue
fi
echo
done
The output of cat $FILES is as below:
2019_06
2019_07
2019_08
2019_09
2019_10
2019_11
2019_12
But I am getting an error when passing the variable into the for loop. Ideally, for each iteration of the loop, the command run inside do ... done should be:
s3cmd du -r s3://path1/path2/2019_06
s3cmd du -r s3://path1/path2/2019_07
s3cmd du -r s3://path1/path2/2019_08
etc...
so that I can get the total size of each folder. Kindly help!
**Update**
I have edited the code as suggested
#!/bin/bash
FILES=$(mktemp)
aws s3 ls "s3://path1/path2/" >> "$FILES"
for file in cat $FILES
do
if [ -n "$file" ]
echo $file
done
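As a minimal sketch of the pattern being asked about (same paths as above): read the temp file line by line with while read instead of expanding its name, and take the last field of each line so the "PRE" column doesn't get in the way:

#!/bin/bash
FILES=$(mktemp)
aws s3 ls "s3://path1/path2/" > "$FILES"
while IFS= read -r line; do
    prefix=$(awk '{print $NF}' <<< "$line")    # last field, e.g. "2019_06/"
    [ -n "$prefix" ] && s3cmd du -r "s3://path1/path2/${prefix}"
done < "$FILES"
rm -f "$FILES"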
clarie
(95 rep)
Apr 29, 2020, 10:06 AM
• Last activity: May 26, 2021, 11:33 PM
1
votes
3
answers
16018
views
Only copy files from a particular date from s3 storage
I would like to only copy files from S3 that are from today out of a certain bucket with 100s of files. I tried the following:
$ aws s3 ls s3://cve-etherwan/ --recursive --region=us-west-2 | grep 2018-11-06 | awk '{system("aws s3 sync s3://cve-etherwan/$4 . --region=us-west-2") }'
but it doesn't quite work; I also get files from other dates.
How do I do this correctly?
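In the attempt above, the $4 sits inside the single-quoted awk program, so the shell that system() spawns sees a literal $4 and expands it to nothing, which would explain why files from other dates appear. A sketch that prints the key from awk and copies it in the shell instead (bucket and date from the question; keys containing spaces would need extra care):

aws s3 ls s3://cve-etherwan/ --recursive --region us-west-2 |
awk '$1 == "2018-11-06" {print $4}' |
while read -r key; do
    aws s3 cp "s3://cve-etherwan/$key" . --region us-west-2
done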
stdcerr
(2099 rep)
Nov 7, 2018, 04:08 AM
• Last activity: May 6, 2021, 01:44 PM
0
votes
2
answers
342
views
browse local S3 storage (not Amazon) with command line tool
I wish to browse, put and get files in local S3 storage from the vendor EMC Atmos Cloud Storage. I have RHEL 7 and wish to do it on the command line. We have no flat structure (like Amazon); we use directories.
With a search engine I found Amazon S3 Tools: Command Line S3 Client Software and S3 Backup.
How do I point s3cmd at a local S3 host?
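s3cmd will talk to any S3-compatible endpoint once host_base and host_bucket in ~/.s3cfg point at it. A sketch, assuming a hypothetical local endpoint s3.example.local:

# append endpoint settings to ~/.s3cfg (keys are set via "s3cmd --configure")
cat >> ~/.s3cfg <<'EOF'
host_base = s3.example.local
host_bucket = %(bucket)s.s3.example.local
use_https = True
EOF
s3cmd ls s3://mybucket/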
Sybil
(1983 rep)
Oct 5, 2015, 08:06 PM
• Last activity: Mar 9, 2021, 10:35 AM
Showing page 1 of 20 total questions