
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

25 votes
7 answers
20564 views
Suppress filename from output of sha512sum
Maybe it is a trivial question, but in the `man` page I didn't find anything useful. I am using Ubuntu and `bash`. The normal output of `sha512sum testfile` is the hash followed by the filename. How can I suppress the filename output? I would like to obtain just the hash.
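A minimal sketch of one common approach (not taken from the question itself): keep only the first whitespace-separated field of the output.

sha512sum testfile | awk '{print $1}'
# or equivalently
sha512sum testfile | cut -d ' ' -f 1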
BowPark (5155 rep)
Feb 25, 2016, 01:41 PM • Last activity: Jul 5, 2025, 09:57 PM
1 votes
1 answers
2746 views
Can tar verify integrity of untarred files on destination disk?
Example: I create a.tar.gz from the file "a.txt" (so I used the -z option). Let's say the checksum of the file a.txt before it's added to the archive is "abc123". When I untar and "a.txt" is written to disk, can I make it so tar checks that the checksum of a.txt on the destination disk is "abc123" and fail if it isn't the same?
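GNU tar's compare mode may be the closest built-in fit; a minimal sketch (note that --compare also reports metadata differences such as timestamps and ownership, not just content):

tar -xzf a.tar.gz     # extract to the destination
tar -dzf a.tar.gz     # -d / --diff / --compare; exits non-zero and prints a message if anything differs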
Kias (173 rep)
Nov 21, 2016, 08:11 PM • Last activity: May 6, 2025, 09:05 AM
1 votes
1 answers
47 views
How to calculate the checksum of an ext4 structure (e.g. superblock or inode)
I have an image file of an ext4 file system (filesystem.img) and am extracting sections of the filesystem. Is there a way to manually calculate the checksum of a section given only a byte dump of that section? For the superblock, the documentation says the checksum is calculated with the CRC32C algorithm over the superblock, ignoring the checksum field. I have tried to do it in Python with the function checksum = crc32c.crc32c(superblock) but with no success yet. Has anybody managed to do this, or does anyone know what I have to do here? Any help would be appreciated. Thanks
user722585 (11 rep)
Mar 18, 2025, 01:32 PM • Last activity: Mar 19, 2025, 04:08 AM
3 votes
1 answers
263 views
sha256 checksum for my dpkg intel-microcode_3.20241112.1~deb12u1_amd64.deb doesn't match Debian website checksum. Worry?
I believe this is the right way to confirm package integrity -
$ sha256sum /var/cache/apt/archives/intel-microcode_3.20241112.1~deb12u1_amd64.deb
5ae98379ad2ca170ab4808d2e78e86560a6976264557a3f26c8829ed45aa33bd  /var/cache/apt/archives/intel-microcode_3.20241112.1~deb12u1_amd64.deb
However, the Debian website page https://packages.debian.org/sid/amd64/intel-microcode/download with the title *"Download Page for intel-microcode_3.20241112.1_amd64.deb on AMD64 machines"* lists
Exact Size		7107380 Byte (6.8 MByte)
MD5 checksum		b132ba25e76a0362993eeacac0d26275
SHA1 checksum		Not Available
SHA256 checksum		6aaeef4e106a983b88c8ddec99d105e91064037ead83cc6b35dd1e6d675df485
Also, the size is different
-rw-r--r-- 1 root root 7109172 Dec 18 05:46 /var/cache/apt/archives/intel-microcode_3.20241112.1~deb12u1_amd64.deb
Obviously if the size is different then the checksum will be different, but I checked the checksum before noticing the size. Is there some innocent explanation for this?
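A likely innocent explanation (an assumption, not a confirmed diagnosis): the cached file is the bookworm build (3.20241112.1~deb12u1), while the linked page describes the sid build (3.20241112.1), which is a different .deb with a different size and hash. A sketch for checking the cached file against the hash your own configured repository advertises instead:

apt-cache show intel-microcode | grep -E '^(Version|Size|SHA256):'
sha256sum /var/cache/apt/archives/intel-microcode_3.20241112.1~deb12u1_amd64.deb

The SHA256 listed next to Version: 3.20241112.1~deb12u1 is the one that should match the locally computed hash.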
Craig Hicks (746 rep)
Feb 14, 2025, 04:53 PM • Last activity: Feb 14, 2025, 05:04 PM
0 votes
2 answers
3654 views
How to disable TCP checksum validation on Linux?
For some reason, I want to disable TCP checksum validation on my Linux host. Is it possible to disable it?
Hsien-Wen Hu (1 rep)
Apr 11, 2019, 04:57 AM • Last activity: Jan 21, 2025, 08:03 AM
0 votes
2 answers
152 views
sha256checksum - how to compare local file with checksum of the remote original
I have a local file file1 as well as the SHA-256 checksum of the original file. The remote is a RHEL 8 server; the local tool is sha256sum inside a Git Bash on a Windows 10 laptop. For the love of god I seem to be unable to compare those two in a reasonable manner. On the remote I get
~> sha256sum file1
55dcd0a7046a24e5ee296d1481d6f787b1956d138bbfb6e44ca8591006367f27  file1
So my question is: how do I compare that locally created sum with the checksum I took on the remote? I would assume there must be a way to do this with the sha256sum command itself, which then somehow reports that the checksums match or differ. Or do you really have to rely on your eyesight to compare the two, or create extra text files containing each checksum and run diff on them? The best way I can come up with is
echo [sha of the remote file] ; sha256sum file1
55dcd0a7046a24e5ee296d1481d6f787b1956d138bbfb6e44ca8591006367f27
55dcd0a7046a24e5ee296d1481d6f787b1956d138bbfb6e44ca8591006367f27 *file1
... and then compare those two lines using my bare eyesight. Is there nothing better? The manual says
-c, --check
              read SHA256 sums from the FILEs and check them
but what does that practically mean? sha256sum file1 --check [sha of the remote file] would sound logical to me but does not work. Neither does
:~> sha256sum file1 --check
sha256sum: file1: no properly formatted SHA256 checksum lines found
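For what it's worth, sha256sum -c can read the expected line from standard input, so a minimal sketch is to feed it one line in "<hash>  <filename>" format (two spaces) and let it do the comparison:

echo "55dcd0a7046a24e5ee296d1481d6f787b1956d138bbfb6e44ca8591006367f27  file1" | sha256sum -c -
# prints "file1: OK" and exits 0 on a match, non-zero otherwise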
vrms (287 rep)
Oct 28, 2024, 05:52 PM • Last activity: Oct 29, 2024, 10:55 AM
1 votes
2 answers
393 views
accelerate re-syncing for a large file (> 3 TB) with rsync
How can I accelerate re-syncing a large file (>3 TB) that has had only a few of its blocks changed (<1 GB) with rsync? AFAIK, rsync does a checksum comparison between source and destination blocks to find the differences and then syncs them. Is there a way to increase concurrency at the checksum-comparison stage to accelerate the sync?
LeoMan (111 rep)
Jul 29, 2024, 10:22 PM • Last activity: Oct 11, 2024, 08:58 PM
0 votes
0 answers
67 views
Rsync repeated copying with --checksum keeps finding differences
When I use the command

rsync -avhc "/mnt/remotes/remote_disk_dir/" "/mnt/user/local_disk_dir"

repeatedly, rsync seemingly keeps finding differences based on the checksum and therefore overwrites a few files. Normally, if no differences are found, the output would be something like:

sending incremental file list

sent 23 bytes  received 130 bytes  2.67M bytes/sec
total size is 61.88G  speedup is 75.66

Instead, I keep getting outputs like this:

sending incremental file list
file1.mp4
file7.mp4
file45.mp4
file68.mp4

sent 1.64G bytes  received 73 bytes  2.62M bytes/sec
total size is 61.88G  speedup is 31.94

It is *not* the same files that are overwritten each time. The total size is about 60 GB, with individual file sizes of either around 10 MB or 500 MB. I have run the command 10 times and it still happens. I run the command from the terminal in Unraid 6.12.13 and the remote share is on a Windows 10 machine. Why does this happen? I am a complete beginner when it comes to Linux and the command line.
Carsten (1 rep)
Sep 15, 2024, 09:10 AM • Last activity: Sep 15, 2024, 09:16 AM
11 votes
10 answers
53057 views
How to verify a checksum using one command line?
Suppose I type and run the following command:

sha256sum ubuntu-18.04.1-desktop-amd64.iso

After a delay, this outputs the following:

5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433  ubuntu-18.04.1-desktop-amd64.iso

Then, I realize that I should have typed the following command to more rapidly assess whether the SHA-256 hash matches:

sha256sum ubuntu-18.04.1-desktop-amd64.iso | grep 5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433

Is there a way to act on the first output without using the sha256sum command to verify the checksum a second time (i.e., to avoid the delay that would be caused by doing so)? Specifically:

1. I'd like to know how to do this using a command that does not require copy and pasting of the first output's checksum (if it's possible).
2. I'd like to know the simplest way to do this using a command that does require copy and pasting of the first output's checksum. (Simply attempting to use grep on a double-quoted pasted checksum (i.e., as a string) doesn't work.)
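One way to avoid hashing the file twice (a minimal sketch of the copy-and-paste variant): capture the output once in a shell variable, then test it.

sum=$(sha256sum ubuntu-18.04.1-desktop-amd64.iso)      # hash the file once
echo "$sum" | grep -q '^5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433 ' \
  && echo match || echo MISMATCH

Anchoring the pattern with ^ and a trailing space avoids accidental matches inside the filename.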
Patrick Dark (233 rep)
Aug 22, 2018, 12:58 AM • Last activity: Sep 13, 2024, 04:24 AM
4 votes
2 answers
7803 views
Command to verify CRC (CRC32) hashes recursively
With the commands md5sum, sha1sum and sha256sum I can take a text file with one hash and one path per line and verify the entire list of files in a single command, like sha1sum -c mydir.txt. (Such a text file is easy to produce with a loop using find or similar.) Is there a way to do the same with a list of CRC/CRC32 hashes? Such hashes are often stored inside zip-like archives, like ZIP itself or 7z. For instance:

$ unzip -v archive.zip
Archive:  archive.zip
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
 8617812  Stored  8617812   0% 12-03-2015 15:20 13fda20b  0001.tif

Or:

$ 7z l -slt archive.7z
Path = filename
Size = 8548096
Packed Size =
Modified = 2015-12-03 14:20:20
Attributes = A_ -rw-r--r--
CRC = B2F761E3
Encrypted = -
Method = LZMA2:24
Block = 0
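There seems to be no standard -c equivalent for CRC-32, but given a list in "<crc>  <path>" format a small loop can do the comparison. A sketch assuming the crc32 utility shipped with libarchive-zip-perl (Debian) / perl-Archive-Zip is installed and prints a lowercase hex CRC per file (also note that 7z t archive.7z already verifies the CRCs stored inside an archive):

# crclist.txt: one "<crc>  <path>" pair per line
while read -r expected path; do
    actual=$(crc32 "$path")
    expected_lc=$(printf '%s' "$expected" | tr 'A-F' 'a-f')   # zip lists lowercase, 7z uppercase
    [ "$actual" = "$expected_lc" ] || echo "FAILED: $path (expected $expected_lc, got $actual)"
done < crclist.txt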
Nemo (938 rep)
Dec 18, 2015, 12:47 PM • Last activity: Jul 29, 2024, 08:44 AM
0 votes
1 answers
190 views
How are EXT4 metadata checksums within group descriptors calculated?
I'm looking at parsing raw EXT4-formatted block devices, just using Python, primarily as a learning exercise, but am having trouble manually generating the expected Group Descriptor checksums - there appears to be some conflicting, missing or (seemingly) incorrect information when I search resources online. I am able to correctly calculate the expected block bitmap and inode bitmap checksums, using the following calculation:

- (crc32c(s_uuid + bitmap_block))


as opposed to:

(s_uuid + bitmap_block)


As most of the documentation suggests (though I don't understand why inverting it yields the expected checksum value). However, I am unable to calculate the expected group descriptor checksum. The documentation suggests this should be:

(s_uuid + bg_num + group_desc) & 0xffff


I have tried calculating it as the documentation suggests, inverting as before, with and without the block group number, using the full block as the group descriptor, using the 64 byte descriptor, using only 32 bytes as the descriptor. And I have tried all of these with zero-ing out the 16-bit checksum field, and skipping over that field in calculations. Nothing I try yields the expected checksum value.

For reference, both METADATA_CSUM and FLEX_BG feature flags are set, and maybe the
part of the calculation that I am using is incorrect as a result of this.

Can anyone provide more information on how to correctly calculate the group descriptor checksum within EXT4 group descriptors? Can anyone also advise why the bitmap checksums only yield the expected (correct) values when subtracted from
despite no documentation I've found suggesting that this is necessary?
genericuser99 (119 rep)
Jul 18, 2024, 10:40 AM • Last activity: Jul 23, 2024, 10:54 PM
0 votes
1 answers
290 views
How to verify downloaded Fedora installation file against the checksum files?
I have downloaded Fedora to my macOS Catalina machine. I then followed the steps outlined here (https://fedoraproject.org/security) to verify the downloaded file against the checksum files. Step 4 says to run the following command: sha256sum -c *-CHECKSUM. However, there doesn't seem to be a sha256sum on macOS Catalina, so I ran the following command instead: shasum -a 256 -c *-CHECKSUM. The output was:

shasum: Fedora-Workstation-40-1.14-x86_64-CHECKSUM: no properly formatted SHA1 checksum lines found

Does this mean the downloaded file is corrupt? How can I properly verify the downloaded installation file against the checksum files?
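One workaround that sidesteps the checksum-file format entirely (a sketch; the .iso filename is a placeholder for whatever was actually downloaded): hash the ISO yourself and look for that hash inside the CHECKSUM file.

iso=Fedora-Workstation-Live-x86_64-40-1.14.iso    # placeholder; use the actual name of the downloaded .iso
grep -F "$(shasum -a 256 "$iso" | awk '{print $1}')" ./*-CHECKSUM \
  && echo "hash found in CHECKSUM file" || echo "hash NOT found"

This only checks integrity; authenticity still depends on the GPG verification of the CHECKSUM file described on the Fedora page.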
Evan Aad (103 rep)
Jul 22, 2024, 11:24 AM • Last activity: Jul 22, 2024, 11:33 AM
1 votes
0 answers
48 views
OpenBSD's signify doesn't work on macOS (signify-osx) Sonoma - Could not verify signature
I wish to verify a GrapheneOS release I downloaded. OpenBSD's signify looks like a more secure sha512sum. I did not fully understand it. This is what I did:

$ brew install signify-osx
$ ssh-keygen -Y verify -f allowed_signers.sig -I contact@grapheneos.org -n "factory images" -s tangorpro-factory-2024071200.zip.sig < /Users/me/Downloads/tangorpro-factory-2024071200.zip
Could not verify signature.
$ ls -lrt
total 16
-rw-r--r-- 1 me staff 144 Jul 17 10:13 allowed_signers.sig
-rw-r--r--@ 1 me staff 310 Jul 17 10:15 tangorpro-factory-2024071200.zip.sig
$ cat allowed_signers.sig
untrusted comment: verify with factory.pub
RWQZW9NItOuQYMZY8ZMX9VX4hfy54df7Pt3Yh1qEWTyRlQKH4PdteqeKUk9jljywlcCl8nzKJAj75F70Y5FTsAK4cw2aV+CZcAA=
$ cat tangorpro-factory-2024071200.zip.sig
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAghSD+bkKg/zdvSt9ILNhJUDhzDi
Kvj2KjkY+jFuDF0kQAAAAOZmFjdG9yeSBpbWFnZXMAAAAAAAAABnNoYTUxMgAAAFMAAAAL
c3NoLWVkMjU1MTkAAABApKkue2E4mce5NMqalNZq4ERyZcte1rECYxy7KK9BqwwTX1y4gH
Yc0wwNOd9mTwGAMimJWWw2+23biUFWW5kpBg==
-----END SSH SIGNATURE-----

I keep getting the error "Could not verify signature.", and this use of signify is not yet well documented on the internet.
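One detail that stands out (an observation, not a confirmed diagnosis): the file passed to -f here is a signify public key, while ssh-keygen -Y verify expects an allowed_signers file whose lines have the form "principal [options] key-type base64-key". A hypothetical sketch of that layout, with a placeholder key:

# allowed_signers (hypothetical content; use the project's published ssh-ed25519 public key)
contact@grapheneos.org namespaces="factory images" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...placeholder...

$ ssh-keygen -Y verify -f allowed_signers -I contact@grapheneos.org \
      -n "factory images" -s tangorpro-factory-2024071200.zip.sig \
      < tangorpro-factory-2024071200.zip

OpenBSD's signify itself would be invoked as signify -V -p factory.pub -x <file>.sig -m <file>, but the downloaded .sig here is an SSH signature, so ssh-keygen -Y appears to be the applicable tool.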
Salinas (11 rep)
Jul 17, 2024, 08:36 AM • Last activity: Jul 17, 2024, 09:58 AM
0 votes
1 answers
165 views
In the AUR, what is the checksum a checksum of?
In the Arch Linux AUR there is a checksum field. It is easy to calculate the checksum of files or of archived directories. The thing I cannot find on the internet is: what is this AUR checksum a checksum of?
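For what it's worth, the checksums shown on an AUR page come from the package's PKGBUILD, where each entry in a checksum array corresponds, position by position, to an entry in the source array; makepkg verifies the downloaded sources against them. A hypothetical sketch:

# PKGBUILD excerpt (hypothetical package)
source=("https://example.org/foo-1.0.tar.gz")
sha256sums=('<sha256 of foo-1.0.tar.gz>')   # placeholder; regenerate with updpkgsums or makepkg -g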
Masoud Yousefvand (15 rep)
Jun 20, 2024, 01:43 PM • Last activity: Jun 20, 2024, 03:06 PM
2 votes
1 answers
107 views
Continue a directory tree checksum from a given file
I have:

- A checksum.txt file which contains many lines of checksums of single files from a mountpoint in a huge directory; the mount disconnected partway through, so checksum.txt was never finished (partial checksums)
- localchecksums.txt, a full checksum list containing thousands of lines of SHA256 checksums with filenames etc.

I would like to:

- Compare the remote-mount checksums and the local ones with sha256sum -c checksum.txt localchecksum.txt or similar, but:
  1. I don't want to go through gigabytes of data again to get the remaining hashes
  2. I don't want to restart the whole process for checksum.txt

I generated the list by using find to recursively find single files and exec sha256sum on them. It should be possible to get the remaining hashes either by comparing the two files, or by continuing the run: reading checksum.txt and only calculating checksums for files that have not been checked yet. The problem with the first approach is that the order differs between the two files. The second approach sounds good, but I have no idea how to start with that.

Sample of either of the checksum files:
8e2931cc1ad3adc07df115456b36b0dbd6f80f675e0a9813e20ad732ae5d4515  ./folder/8ggSHp5I7hNEl3vDCbWv6Q/wA-KzXIh1Ce3G93s20X24v_4vUeywBe3mXPhGjPt_Lg/cRf8KgbqIsqwbon3DX3PN1-oV6_Nr9Baeymaw-ZJw00
37d2dfe2315cc401536329e3fbe421384bbb50c656c3dbeb42798e5666822e6c  ./folder/8ggSHp7I7hHEl3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/V02s6HKhyJ9Nyd2jQtSjWg
d0e9b95065a264db0d372ccace5d3a72f38f74ca7b44da4794dae23c91e18e57  ./folder/8ggSHp7I7hNxl3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/U3fhBugX6pexYzh6qGKlW7lYWsFShWH7JwN9fmU8ay2lLZkciH2sXsiGbmIc97iJ
44a5fe29063e472857bb9a1929af06a32bb4b2394630f80c2dc732fd662620bc  ./folder/8ggSHp7I7hNEc3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/gTrqUL4ZjWTWMl6BcjfwUe5bBDatscwUoYY9IFQDztc
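One way to continue instead of restarting (a sketch, assuming the standard sha256sum output format of a 64-character hash, two spaces, then the path, and that find is run from the same directory as before): extract the paths already hashed, diff them against the current file list, and hash only the remainder.

cut -c67- checksum.txt | sort > hashed.txt      # the path starts at column 67 (64 hex chars + 2 spaces)
find . -type f | sort > all.txt
comm -23 all.txt hashed.txt > remaining.txt     # files present on disk but not yet in checksum.txt
while IFS= read -r f; do sha256sum "$f"; done < remaining.txt >> checksum.txt

This assumes no filenames contain newlines; the sample paths above are fine in that respect.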
Sir Muffington (1306 rep)
Apr 16, 2024, 11:14 AM • Last activity: Apr 18, 2024, 05:32 PM
1 votes
1 answers
78 views
Check SHA256SUMS and exit non-zero on unexpected file (file not present in digest)
I'm trying to check the integrity of a set of downloaded files using sha256sum. I cryptographically signed a digest file (named SHA256SUMS) with PGP. I create the file by recursively calculating the checksums of all the files in & under the current directory with
find . -type f -not -name SHA256SUMS -exec sha256sum '{}' \; >> SHA256SUMS
I can now verify the integrity of the files by (after checking the signature of the digest file, which is omitted from this question for simplicity) executing:
sha256sum -c SHA256SUMS
The above command will exit non-zero if any of the files listed in the digest has contents different from what is recorded there. However, it will *not* exit non-zero if there is some new file that is not listed in the digest. I couldn't find any option in sha256sum to fail on such an unexpected file. How can I verify the integrity of a directory recursively using sha256sum, including failing on files that the digest does not cover?
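sha256sum itself doesn't appear to offer such an option, but the extra check can be bolted on around it. A sketch (assuming bash, and that no filenames contain newlines) that fails when a file on disk is not covered by the digest:

sha256sum -c SHA256SUMS || exit 1
extra=$(comm -13 <(cut -c67- SHA256SUMS | sort) \
                 <(find . -type f -not -name SHA256SUMS | sort))
[ -z "$extra" ] || { printf 'files not covered by SHA256SUMS:\n%s\n' "$extra"; exit 1; }

cut -c67- works because each line is a 64-character hash followed by two spaces and the path, exactly as the find ... -exec sha256sum command above produces.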
Michael Altfield (382 rep)
Mar 22, 2024, 04:34 AM • Last activity: Mar 22, 2024, 01:51 PM
10 votes
3 answers
10815 views
Smarter filetransfers than rsync?
I have a large file (2-3 GB, binary, undocumented format) that I use on two different computers (normally I use it on a desktop system, but when I travel I put it on my laptop). I use rsync to transfer this file back and forth. I make small updates to this file from time to time, changing less than 100 kB. This happens on both systems.

The problem with rsync, as I understand it, is that if it thinks a file has changed between source and destination it transfers the complete file. In my situation that feels like a big waste of time when just a small part of a file has changed.

I envisage a protocol where the transfer agents on source and destination first checksum the whole file and then compare the result. When they realise that the checksum for the whole file differs, they split the file into two parts, A and B, and checksum them separately. Aha, B is identical on both machines, so let's ignore that half. Now split A into A1 and A2. OK, only A2 has changed. Split A2 into A2I and A2II and compare, etc. Do this recursively until it has found, e.g., three parts of 1 MB each that differ between source and destination, and then transfer just these parts and insert them at the right positions in the destination file. With today's fast SSDs and multicore CPUs such parallelisation should be very efficient.

So my question is: are there any tools that work like this (or in another manner I couldn't imagine but with a similar result) available today?

A request for clarification has been posted. I mostly use Mac, so the filesystem is HFS+. Typically I start rsync like this: rsync -av --delete --progress --stats. In these cases I sometimes use SSH and sometimes rsyncd. When I use rsyncd I start it like this: rsync --daemon --verbose --no-detach.

Second clarification: I am asking for either a tool that just transfers the delta for a file that exists in two locations with small changes, and/or whether rsync really offers this. My experience with rsync is that it transfers the file in full (but now there is an answer that explains this: rsync needs an rsync server to be able to transfer just the deltas; otherwise (e.g., using ssh-shell) it transfers the whole file however much has changed).
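For reference, a point related to the clarification above: when both paths look local to rsync (for example a destination on a mounted network share), it defaults to --whole-file. A minimal sketch of forcing the delta algorithm in that case (the file and mount names are placeholders):

# destination on a locally mounted share, where rsync would otherwise copy the whole file
rsync -av --no-whole-file --inplace bigfile.dat /Volumes/laptop-share/bigfile.dat

--inplace matters here because, without it, rsync reconstructs the destination into a temporary copy, which still rewrites the whole file locally even when only a small delta is transferred.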
d-b (2047 rep)
Jan 25, 2015, 04:37 PM • Last activity: Jan 15, 2024, 10:00 AM
0 votes
1 answers
92 views
How do you synchronize files with partially failed download using rsync?
I'm trying to synchronize literally thousands of files of various sizes and I would like to have a 1:1 copy of the files. That means that already-present files should be checked for their integrity, and if the checksum is wrong, the file needs to be overwritten. A so-called delta transfer is only necessary at this point because of the partially failed transfer. Apparently my mount is somewhat unstable and it fails after 300-400 GB of transfer using cp or rsync.

I did the following before this:

1. I mounted the storage and did cp -r src dest; it failed after roughly 300 GB because the mount dropped and it errored out (I don't have the error anymore, apparently).
2. I mounted the storage again and did rsync -aP src dest; it failed after roughly 400 GB with rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1338) [sender=3.2.7] because the mount failed again. Considering the transferred size it probably overwrote most of the files.
3. I checked my kernel log and found nothing (sudo dmesg).

I found a reconnect flag for my mount, but it would not be instant.

There's an rsync flag named -c which calculates the checksums, but does it do a so-called delta transfer too, or do I need to add more flags? How could I best fix this problem at hand?

UPDATE 1

Correct me if I'm wrong, but I think the issue at hand was that the already-copied files ended up with different owners and groups than rsync expected. To elaborate: cp -r copied the files and changed their ownership and group ownership to the copying user, whereas rsync seems to copy the files 1:1 with the same user and group ownership... That's probably why the transfer was overwriting old files...
Sir Muffington (1306 rep)
Dec 30, 2023, 03:26 PM • Last activity: Dec 30, 2023, 04:24 PM
20 votes
4 answers
88950 views
How to check if a file is corrupt or not?
Are there any general solutions to check if a file is corrupt or not? For example, whether a video file is bad, or a compressed file is corrupt, etc.
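There is no single universal check, but many formats ship their own integrity test, so a practical approach is per-format verification. A few illustrative sketches (filenames are placeholders):

gzip -t file.gz                    # test gzip integrity
unzip -t archive.zip               # test a zip archive
xz -t file.xz                      # test an xz archive
ffmpeg -v error -i video.mp4 -f null - 2> decode-errors.log   # fully decode a video, logging errors

Formats without built-in redundancy (plain text, raw data) can only be checked against a previously recorded checksum.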
LanceBaynes (41465 rep)
Jun 17, 2011, 08:07 AM • Last activity: Nov 22, 2023, 07:32 PM
0 votes
0 answers
125 views
BASH Get input from STDIN while using | (pipe)
I'm creating a new script and would like to implement a way to verify it, so I'm using the following commands to check the content of the file:
remote_file="$(curl -m2 -s "$1")"
checksum_remote="$(echo "$remote_file" | sha256sum | cut -d ' ' -f1)"
checksum_current="$(sha256sum < "$COMMAND_NAME" | cut -d ' ' -f1)"
But after further development I realized that I could run my script with the command curl -s $SCRIPT_URL | bash. In this case my check [ "$checksum_remote" != "$checksum_current" ] always succeeds, because $COMMAND_NAME is then bash and not the content of my script. Does anyone know how to retrieve the whole source code from within the bash script itself when it is executed through a pipe?
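A rough sketch of one way to cope with this (an assumption on my part, not a complete solution): detect whether the script is backed by a real file before hashing itself, and handle the piped case separately, since with curl ... | bash there is no local copy of the script left to read.

# if BASH_SOURCE[0] points at a readable file, we were started from a file on disk
if [ -f "${BASH_SOURCE[0]:-}" ]; then
    checksum_current="$(sha256sum < "${BASH_SOURCE[0]}" | cut -d ' ' -f1)"
else
    echo "running from a pipe; skipping the self-checksum" >&2
    checksum_current=""
fi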
olive007 (131 rep)
Nov 20, 2023, 03:15 PM