Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
2
answers
1520
views
Truncate a file that is opened in another process
If I try `truncate -s 0 log.log` (`:>log.log` has the same behavior), the displayed space on disk does become free, but the size of the file (`ls -l`) is still the same (though `du` shows less). As far as I understand, this happens because the writer's file pointer is still "old".
This behavior means that I cannot use a `cat ... | grep ...` command: the CLI says that the file is binary. So the only way is to use `less` or other commands.
So, how do I truncate a file that is opened in write mode in another process, and have the correct file size after truncation?
I need the data in `log.log` to be moved out to another file, or just to delete all the data in the file without deleting the file itself.
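The springback can be reproduced in a few lines. A minimal sketch (paths hypothetical) of why the apparent size returns: the writer's file descriptor keeps its old offset, so its next write lands beyond a hole of NUL bytes, which is also why `grep` then treats the file as binary:

```shell
# Writer holds the file open without O_APPEND, so its offset survives
# the truncation and the next write recreates the old length as a hole.
exec 3> /tmp/demo.log        # simulate a running writer
printf 'AAAAAAAAAA' >&3      # writer's offset is now 10
truncate -s 0 /tmp/demo.log  # file is empty for an instant...
printf 'B' >&3               # ...but this write lands at offset 10
wc -c < /tmp/demo.log        # reports 11: 10 NUL bytes (a hole) + 'B'
exec 3>&-                    # close the simulated writer
```

The usual fix is to have the writer open the log in append mode (`>>`), so every write goes to the current end of file and truncation sticks.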
Sova
(23 rep)
May 10, 2021, 04:09 PM
• Last activity: Feb 7, 2025, 05:18 AM
4
votes
2
answers
544
views
What is the "at most" modifier for in 'truncate'?
The manual of [`truncate`](https://linux.die.net/man/1/truncate) shows I can add `<` for "at most".
What is this for?
It sounds like the default to me.
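A quick sketch of the difference (file name hypothetical): a bare `-s SIZE` sets the size exactly, growing or shrinking as needed, while `<` is shrink-only, leaving files that are already small enough untouched:

```shell
printf 'hello' > /tmp/f2   # 5 bytes
truncate -s '<3' /tmp/f2   # larger than 3, so shrunk to 3 bytes
truncate -s '<10' /tmp/f2  # already at most 10: left at 3 bytes
truncate -s 10 /tmp/f2     # no modifier: grown (sparsely) to exactly 10
wc -c < /tmp/f2            # 10
```

So `<` is the safe choice for capping sizes: it never pads a small file out with a hole.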
rubo77
(30435 rep)
Jan 7, 2024, 10:07 PM
• Last activity: Jan 9, 2024, 01:52 AM
-1
votes
1
answers
148
views
Truncate command not creating a hole
I am trying to create a file with a hole using the truncate command. I read some posts, and one of the answers in this post says to use the truncate command. The filesystem used is btrfs. This is the command:
```
$ truncate -s 16K holes
$ du holes
16	holes
$ stat holes
  File: holes
  Size: 16384      Blocks: 32         IO Block: 4096   regular file
```
As can be seen, it is allocating blocks (`du` reports 16 KiB, and `stat` shows 32 512-byte blocks); my understanding was that it would allocate 0 blocks, as mentioned in that answer as well. Did I make a mistake in understanding what truncate is doing?
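For contrast, this is what the referenced answer describes on filesystems where a truncate-extended file is fully sparse; a minimal check (results can differ per filesystem — btrfs features such as compression or inline extents may report allocated blocks):

```shell
# Extending a brand-new file with truncate normally allocates nothing:
truncate -s 16K /tmp/holes2
stat -c 'size=%s blocks=%b' /tmp/holes2   # typically size=16384 blocks=0
du /tmp/holes2                            # typically 0	/tmp/holes2
```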
Shivanshu Arora
(11 rep)
Oct 5, 2023, 02:17 AM
• Last activity: Oct 5, 2023, 04:48 PM
0
votes
1
answers
1080
views
Truncate expand doesn't seem to work
I just expanded my 1 GB "homefile", which is mounted at `/home/user`, by another GB using `truncate -s +1G homefile`. While it changed the size of homefile shown with `df` to 2 GB, when mounted it is still only 1 GB. Am I missing something? I don't have to use mkfs.ext4 again, do I? That would wipe the data and defeat the purpose of using truncate over fallocate, with which I have to use resize2fs after expanding, but which doesn't change the data. I tried using resize2fs after truncate, but it gave an error about the filesystem type (I'll have to reboot my VM if that is pertinent).
If there is no way for me to expand with truncate, is there another way to mount a dynamically expanding file? I know qcow2 can do that, but it seems a little heavy-handed for this. I'm aware of squashfs in things like PuppyOS and PorteuOS, but it had more steps involved to set up when I looked. I like the simplicity of truncate if it will work.
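If it helps to separate the pieces: `truncate` resizes only the container file; the ext4 filesystem inside still records its old size until `resize2fs` updates it (run `e2fsck -f` first, with the image unmounted). A sketch with hypothetical smaller sizes; the filesystem commands are left commented since they need e2fsprogs and an actual filesystem inside the file:

```shell
truncate -s 64M /tmp/homefile    # stand-in for the original 1G file
# mkfs.ext4 -q /tmp/homefile     # done once, when the file was first set up
truncate -s +64M /tmp/homefile   # grow the container; existing data untouched
stat -c %s /tmp/homefile         # 134217728: the file is bigger...
# umount /home/user              # ...but the fs inside is not, until:
# e2fsck -f /tmp/homefile
# resize2fs /tmp/homefile        # grow the fs to fill the container
```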
alchemy
(757 rep)
Apr 10, 2022, 04:47 AM
• Last activity: Sep 9, 2023, 08:32 AM
29
votes
4
answers
36118
views
Is there any limit on line length when pasting to a terminal in Linux?
I am trying to send messages from `kafka-console-producer.sh`, which is:

```
#!/bin/bash
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
```
I am then pasting the messages via a PuTTY terminal. On the receiving side I see messages truncated to approximately 4096 bytes. I don't see anywhere in Kafka that this limit is set.
Can this limit come from bash, the terminal, or PuTTY?
Dims
(3425 rep)
Apr 6, 2021, 01:50 PM
• Last activity: Aug 16, 2023, 10:53 AM
0
votes
1
answers
594
views
Problem creating a disk image of an SD card
I have built a custom image of Armbian with a partition size of 3.1 GB, and I am now finished working with it. It is currently written to a bootable 64 GB SD card which is using a GUID partition table (GPT).
My problem is that when I want to make an image of the card using Ubuntu, I get an image file 63 GB in size, but I don't want an image file with 60 GB of empty space.
I looked for other ways of shortening the image file, by using the `truncate` command and by creating an image using `dd count=`, but it isn't working. When I use `dd`, it creates an image file that, when mounted, is all "free space" and PMBR, and `truncate` breaks a working image file.
So (unless I'm doing it wrong), how can I create a 3 GB image of my SD card that will contain the boot information?
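One common approach, sketched with hypothetical numbers: copy only up to the end of the last partition (`parted /dev/sdX unit s print` shows the end sector), then, because GPT keeps a backup header at the very end of the disk, relocate that backup to the new end with `sgdisk -e` so the shortened image stays self-consistent. The `dd` arithmetic can be tried on a scratch file:

```shell
# Suppose the last partition ends at sector 8191 (hypothetical number).
truncate -s 64M /tmp/card.bin                 # stand-in for /dev/sdX
dd if=/tmp/card.bin of=/tmp/card.img bs=512 count=8192 status=none
wc -c < /tmp/card.img                         # 4194304 = 8192 * 512
# sgdisk -e /tmp/card.img   # then move the GPT backup header (needs gdisk)
```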
Chris Hudlin
(33 rep)
May 27, 2022, 06:51 AM
• Last activity: May 27, 2022, 08:45 AM
0
votes
0
answers
697
views
"> $logfile" does not truncate, file size goes to 0 and a second later, is back to full size
I have a script that writes to a logfile like this:

```
$ nohup myscript.sh > myscript.out 2>&1 &
```

When the log file gets very large, I need to truncate it like this:

```
> myscript.out
```

I see the size go to 0 briefly, but it immediately jumps back to full size again.
```
$ ls -ald myscript.out
-rw-rw-r-- 1 vmware vmware 14285855 Apr 11 04:33 myscript.out
$ > myscript.out
$ ls -ald myscript.out
-rw-rw-r-- 1 vmware vmware 0 Apr 11 04:33 myscript.out
$ ls -ald myscript.out
-rw-rw-r-- 1 vmware vmware 14298778 Apr 11 04:33 myscript.out
```
How can I truncate it so the size goes to zero and starts growing again from zero?
I have tried many other alternatives, but nothing works; the same thing happens: size goes to 0, then back to full size.

```
true > myscript.out
: > myscript.out
echo -n > myscript.out
cp /dev/null myscript.out
truncate -s 0 myscript.out
dd if=/dev/null of=myscript.out
```
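The size springs back because `myscript.sh`'s stdout was opened with `>` (no O_APPEND): the writer's offset is unchanged by the truncation, so its next write recreates the old length as a hole. All of the listed commands do truncate correctly; it is the writer that undoes it. Starting the script with `>>` instead makes truncation stick, as this sketch (hypothetical paths) shows:

```shell
exec 3>> /tmp/append.log     # '>>' opens with O_APPEND, like nohup ... >> log
printf '0123456789' >&3      # 10 bytes written
: > /tmp/append.log          # truncate while the writer holds it open
printf 'X' >&3               # O_APPEND: the write lands at the new end
wc -c < /tmp/append.log      # 1, not 11
exec 3>&-
```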
chiwal
(1 rep)
Apr 11, 2022, 09:41 AM
• Last activity: Apr 11, 2022, 09:50 AM
292
votes
3
answers
477869
views
Most efficient method to empty the contents of a file
I am aware of three methods to delete all entries from a file.
They are:

- `>filename`
- `touch filename` 1
- `filename filename`

I use `>filename` the most, as that requires the least number of keystrokes.
However, I would like to know which is the most efficient of the three (if there are any more efficient methods) with respect to large log files and small files.
Also, how do the three commands operate and delete the contents?
----
1**Edit**: as discussed in [this answer](https://unix.stackexchange.com/a/88810/377345) , this actually _does not_ clear the file!
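On the "how do the three operate" part, a sketch of what each does at the syscall level (file name hypothetical): `>filename` empties the file via open(2) with O_TRUNC, `truncate -s 0` via ftruncate(2), and `touch` only updates timestamps, which is the footnote's point:

```shell
printf 'data' > /tmp/f3
: > /tmp/f3                # open(2) with O_TRUNC empties the file
wc -c < /tmp/f3            # 0
printf 'data' > /tmp/f3
touch /tmp/f3              # timestamps change; contents do not
wc -c < /tmp/f3            # 4
printf 'data' > /tmp/f3
truncate -s 0 /tmp/f3      # ftruncate(2) sets the length to 0
wc -c < /tmp/f3            # 0
```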
debal
(3754 rep)
Aug 30, 2013, 05:13 AM
• Last activity: Feb 25, 2021, 02:26 PM
1
votes
0
answers
244
views
truncate command to safely remove Celery log files?
My Ubuntu server running Python/Celery has about 9 log files totaling 9 GB of space.
Is it safe to reduce the size of these files by running the following command?
> truncate -s 0 *log
If not, are there any other suggestions to reduce the size of these files?
user1761583
(11 rep)
Dec 26, 2020, 08:03 PM
• Last activity: Dec 26, 2020, 08:20 PM
1
votes
1
answers
103
views
tee pipeline and pnmtools - truncated file
This sequence of commands works OK:

```
$ pngtopnm file.png 2> /dev/null > dump1
$ pngtopnm file.png 2> /dev/null | tee dump2 | pnmfile
stdin: PPM raw, 1920 by 1080 maxval 255
$ ls -l dump2
-rw-r----- 1 cmb 49152 Sep 15 14:34 dump2
```
I'm not clear on what difference it makes, where `tee` sends its input, to what gets saved in the dump file. Why is `dump2` truncated, and not identical to `dump1`?

```
$ cmp dump1 dump2
cmp: EOF on dump2 after byte 49152, in line 4
```
I suspect it's something to do with `pnmfile`, since putting something else at the end of the pipeline seems to work OK: `dump3` is the right size / same content as `dump1`, and the end of the pipe (`fmt`) is doing something to the file:

```
$ pngtopnm file.png 2> /dev/null | tee dump3 | fmt -10 > dump4
$ ls -l dump3 dump4
-rw-r----- 1 cmb 6220817 Sep 15 14:41 dump3
-rw-r----- 1 cmb 6224311 Sep 15 14:41 dump4
```
(XUbuntu 20.04, diffutils 3.7, Netpbm 10.0, coreutils 8.30)
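One mechanism consistent with the symptom (an assumption, not verified against pnmfile's source): `pnmfile` only reads the image header, then exits; the next time `tee` writes into the now-broken pipe it is killed by SIGPIPE, so its dump file stops at whatever had been flushed (here 48 KiB). The effect reproduces with any early-exiting reader:

```shell
# head exits after one line; tee then dies of SIGPIPE mid-copy,
# leaving the dump file truncated well short of the full ~1.3 MB.
seq 1 200000 | tee /tmp/dump2 | head -n 1 > /dev/null
wc -c < /tmp/dump2    # far below 1288895, the full size of seq's output
```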
ColinB
(113 rep)
Sep 15, 2020, 02:00 PM
• Last activity: Sep 15, 2020, 03:08 PM
0
votes
0
answers
63
views
truncate linux kernel
Firstly, sorry for my English.
I want to remove kernel options like Bluetooth, mouse, keyboard, etc., except for the ones I use.
But how can I know which kernel options relate to the devices that I want to remove?
I have been working with `make menuconfig`, reading the descriptions of the kernel options and unchecking them.
Does any summary exist on a website, or is there another way that can work?
Please tell me. Thank you
Jinwoo Bae
(13 rep)
Aug 4, 2020, 01:55 AM
-1
votes
1
answers
468
views
tail -n 10 is truncating my entire file
I have these 2 scripts. One is writing to the file:

```
#!/usr/bin/env bash
while true; do
  sleep 1;
  echo "$(uuidgen)" >> /tmp/cprev.stdout.log
done;
```
The other is reading the last 10 lines and overwriting the file with those 10 lines:

```
#!/usr/bin/env bash
while true; do
  sleep 5;
  inotifywait -e modify /tmp/cprev.stdout.log | tail /tmp/cprev.stdout.log > /tmp/cprev.stdout.log
done;
```
For some reason the tail command is truncating the file. What I want is to write to the file only when the tail command has finished getting all 10 lines from the file.
What actually happens:

1. tail truncates the file
2. tail reads 0 lines

But what I want:

1. tail reads 10 lines
2. tail truncates the file
3. tail writes the 10 lines from above

How can I do that?
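Strictly speaking, the truncation is not tail's doing: the shell opens (and truncates) the redirection target before `tail` ever runs, so `tail` reads an already-empty file. Writing to a temporary file first and renaming it over the original gives the read-then-replace ordering you want; a sketch (paths as in the question):

```shell
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 11 12 > /tmp/cprev.stdout.log
tail -n 10 /tmp/cprev.stdout.log > /tmp/cprev.stdout.log.tmp \
    && mv /tmp/cprev.stdout.log.tmp /tmp/cprev.stdout.log
wc -l < /tmp/cprev.stdout.log    # 10 (lines 3..12 survive)
head -n 1 /tmp/cprev.stdout.log  # 3
```

This works here because the writer reopens the log on every `echo ... >>` iteration; a writer that kept the file open would continue appending to the old, renamed inode. `sponge` from moreutils (`tail -n 10 file | sponge file`) is another common way to get the same ordering.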
Alexander Mills
(10734 rep)
Mar 26, 2020, 12:59 AM
• Last activity: Mar 26, 2020, 02:35 AM
11
votes
9
answers
7549
views
Is it possible to truncate a file (in place, same inode) at the beginning?
It is possible to remove trailing bytes of a `file` without writing to a new file (`> newfile`) and moving it back (`mv newfile file`). That is done with `truncate`:

```
truncate -s -1 file
```

It is possible to remove leading bytes, but only by moving the file around (which changes the inode) (for some versions of `tail`):

```
tail -c +2 file > newfile ; mv newfile file
```

So: how can this be done without moving files around?
Ideally, like `truncate`, only a few bytes would need to be changed even for very big files.
Note: `sed -i` will change the file's inode, so, even if it may be useful, it is not an answer to this question IMO.
user232326
Nov 30, 2019, 05:25 PM
• Last activity: Dec 11, 2019, 06:14 AM
0
votes
1
answers
1129
views
What could cause a log file to be truncated during writing?
In my workplace, I've inherited the responsibility of managing a web server. It's a CentOS Linux virtual machine, running on Amazon AWS EC2. Alongside serving web pages, there is a pile of scheduled tasks, background processing and database operations that happen.
Just now I was manually running a Bash script that calls Oracle SQL*Plus, which reads a SQL script with a load of UPDATE statements and calls to refresh materialized views. Maybe none of that is relevant, but I wanted to give some context.
The Bash script writes output to a log file in `/tmp`, and I was using the command `tail -f output.log` to monitor the output. It was running for a long time (maybe 20 minutes) with output slowly appearing in my terminal, but then I got the message `tail: output.log: file truncated` and the Bash script stopped running. The log file exists in `/tmp`, but it has size 0. I was hoping to go through the log file in detail to see what DB errors were reported, so I could fix things.
My question is: what could cause this file truncation to happen? I don't think the file itself was very big, only something like 200 lines. This is all kind of new to me, and I don't really know where to start or what to suspect as potentially problematic.
osullic
(235 rep)
Dec 9, 2019, 03:28 PM
• Last activity: Dec 9, 2019, 03:47 PM
2
votes
2
answers
787
views
/usr/bin/truncate: Argument list too long
I want to use the `truncate` command to create a huge number of small files for testing. I tried the command with a small number of files (100) and it worked. When I changed the number to 1000000, it reported an error:
```
root:[~/data]# truncate -s 1k {1..100}
root:[~/data]# rm -rf *
root:[~/data]# truncate -s 1k {1..1000000}
-bash: /usr/bin/truncate: Argument list too long
root:[~/data]#
```
How can I solve it? I have a sense that `xargs` could be used, but I can't make it work.
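For what it's worth, the brace expansion itself succeeds (it happens inside bash), and builtins and pipes are not subject to ARG_MAX; only the single exec of `/usr/bin/truncate` with a million arguments is. Piping the names to `xargs` lets it batch them into as many `truncate` invocations as fit. A sketch with a smaller count for speed (the same pipeline handles a million):

```shell
mkdir -p /tmp/xargs-demo && cd /tmp/xargs-demo
seq 1 10000 | xargs truncate -s 1k   # seq's output is safe for plain xargs
ls | wc -l                           # 10000
```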
Just a learner
(2022 rep)
Aug 25, 2019, 07:14 PM
• Last activity: Aug 26, 2019, 02:19 PM
4
votes
1
answers
2094
views
Difference between writing a file from /dev/zero and truncate
```
$ timeout 1 cat /dev/zero > file1
$ wc -c file1
270422016 file1
$ du file1
264084	file1
```
Question (1): How do 270422016 null characters come out to be 264084 (i.e. 258M) in `du`'s output?
```
$ truncate -s 270422016 file2
$ wc -c file2
270422016 file2
$ du file2
0	file2
```
Questions:
(2) `file2` has been created with the same number of null characters as `file1` has, but the size of `file2` reported by `du` is zero. Why?
(3) What did `/dev/zero` do that `truncate` didn't, or vice versa?
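Both observations are consistent once units and sparseness are accounted for: `du` reports 1 KiB blocks by default, and `truncate` only sets the inode's size field without allocating any data blocks. A quick check (file name as in the question; block counts can vary slightly by filesystem):

```shell
echo $((264084 * 1024))                  # 270422016: du's KiB count matches wc -c
truncate -s 270422016 /tmp/file2
stat -c 'size=%s blocks=%b' /tmp/file2   # size=270422016 blocks=0: one big hole
```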
GypsyCosmonaut
(4289 rep)
May 12, 2019, 05:25 AM
• Last activity: May 12, 2019, 02:32 PM
0
votes
2
answers
2679
views
wget not downloading streaming mp3 correctly
I am trying to download a streaming mp3 using wget. This is my basic command:

```
wget http://sj128.hnux.com/sj128.mp3 -c --timeout=1 --waitretry=0 --tries=0 -O "file.mp3"
```
I have been doing this in a script (which lets this run for 1 hour), but what I have been infuriatingly finding is that my file ends up truncated and incomplete. For example, I would expect the file to be around 30 MB, and it would only be something like 13 MB.
I didn't understand what was happening until I ran this command directly from the CLI and saw that eventually I'd always run into a "read timeout". This shouldn't be a showstopper: the `-c` and infinite retries should handle it fine.
But instead, after a "read timeout" and a new retry, my file would stop growing even though the download continued.
Why does the download continue but the file not continue to grow as expected?
I went so far as to create an elaborate script which started a completely new wget under a completely different file name to avoid a "file" type of conflict, and even though all output showed a completely different file name with a completely new process, it still didn't write a new file!
In this case, why does the download appear to commence while my new file doesn't even show up?
Low Information Voter
(3 rep)
Feb 7, 2019, 08:31 AM
• Last activity: Feb 7, 2019, 04:54 PM
1
votes
1
answers
703
views
ffmpeg output paths get truncated at dots when run from bash
If I have a path with dots in it, for instance:

```
/home/user/Documents/hello/test.testing_23-24.123/test.testing_23-24.124
```

`ffmpeg` can locate the file if you pass the file's path as an argument, but the path name gets truncated at the first dot when the output file is written.
For instance, I got this:

```
#!/bin/sh
src_folder=$(pwd)
for filename in "${src_folder}"/*.MP4
do
    ffmpeg -threads 0 -probesize 100M -analyzeduration 100M -i "$(unknown)" \
      -c:v libx265 -preset medium -pass 1 -tune grain -x265-params "crf=28:pmode=yes" \
      -c:a libmp3lame -q:a 9 -strict experimental "${filename%%.*}"_1stpass.mkv
done
```
The output states:

```
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/user/Documents/hello/test.testing_23-24.123/test.testing_23-24.124/GOPR2103.MP4':
[..]
Output #0, matroska, to '/home/user/Documents/hello/test_hevc.mkv':
```
We see that the output file's folder path, `/home/user/Documents/hello/test.testing_23-24.123/test.testing_23-24.124`, gets truncated after the first dot in the output path. This only happens to the output's filename, and the truncated path is used as the filename.
Can anyone point me in the right direction on how to solve this?
PS: I am aware that a bad workaround would be to avoid folders with dots :]
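For what it's worth, the truncation is done by the shell, not by ffmpeg: `"${filename%%.*}"` deletes the longest suffix matching `.*`, i.e. everything from the first dot in the whole path, so the dotted directories are eaten too. `"${filename%.*}"` (single `%`) deletes only the shortest such suffix, the final extension, which is what was intended:

```shell
filename='/home/user/Documents/hello/test.testing_23-24.123/GOPR2103.MP4'
echo "${filename%%.*}"   # /home/user/Documents/hello/test  (first dot wins)
echo "${filename%.*}"    # /home/user/Documents/hello/test.testing_23-24.123/GOPR2103
```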
sternumbeef
(11 rep)
Jul 27, 2018, 08:12 PM
• Last activity: Jul 28, 2018, 12:40 AM
1
votes
1
answers
793
views
How to print multiple columns without truncating?
I know that `pr -m -t file1 file2` will give me 2 columns like so:

file1:

```
a
abc
abcdefg
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
```
file2:

```
1
123
12345678
12345678901234567890
```
```
$ pr -m -t file1 file2
a                                   1
abc                                 123
abcdefg                             12345678
abcdefghijklmnopqrstuvwxyzabcdefghi 12345678901234567890
```

(Spacing shown as it really lines up in the terminal.)
Anyway, I don't need the line numbers to match up (though, to answer the general question, you could also explain how to do that), but the main property I want is for the lines to wrap instead of getting truncated. Do I have no choice but to preprocess each file to a certain width and pipe that in? If so, how would I even do that?
Update: I suppose if there were some command which restricted the width of a file and forced wrapping onto new lines, I'd do: `pr -m -t <(command file1) <(command file2)`
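`fold` is one such command: it hard-wraps lines at a fixed column, so each input can be pre-wrapped before `pr` lays out the columns, e.g. `pr -m -t <(fold -w 35 file1) <(fold -w 35 file2)`. A minimal check of the wrapping:

```shell
printf 'abcdefghijklmnopqrstuvwxyz\n' | fold -w 10
# abcdefghij
# klmnopqrst
# uvwxyz
```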
Timothy Swan
(141 rep)
Apr 18, 2018, 08:54 PM
• Last activity: Apr 18, 2018, 09:39 PM
0
votes
1
answers
569
views
How do I remove a directory on a remote system that has a quota?
I have an rsync.net account that has hit its quota, and I'm trying to remove (`rm -rf`) a directory to clean up space. However, all the remove commands I can think of to try (rm, truncate, find -delete, etc.) give me an error related to "Disc quota exceeded".
The only method I've found is to scp an empty file in and overwrite every file in the directory.
Is there any better way to approach this? Ideally a one-liner?
mpr
(1194 rep)
Jan 15, 2018, 06:53 PM
• Last activity: Jan 15, 2018, 07:46 PM