
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

32 votes
8 answers
14362 views
Simultaneously calculate multiple digests (md5, sha256)?
Under the assumption that disk I/O and free RAM are the bottleneck (while CPU time is not), does a tool exist that can calculate multiple message digests at once? I am particularly interested in calculating the MD5 and SHA-256 digests of large files (gigabytes in size), preferably in parallel. I have tried `openssl dgst -sha256 -md5`, but it only calculates the hash using one algorithm. Pseudo-code for the expected behavior:

```
for each block:
    for each algorithm:
        hash_state[algorithm].update(block)
for each algorithm:
    print algorithm, hash_state[algorithm].final_hash()
```
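One way to get this behavior with standard tools (a sketch, assuming GNU coreutils and a POSIX shell; `bigfile` is a placeholder for the real input) is to read the file once and duplicate the stream with `tee` and a named pipe, so each digest runs in its own process:

```shell
printf 'hello\n' > bigfile        # stand-in for a multi-gigabyte file

mkfifo p1
md5sum < p1 > bigfile.md5 &       # one hasher reads from the fifo
# tee duplicates the single read of the file: one copy to the fifo,
# one copy down the pipe to the second hasher
tee p1 < bigfile | sha256sum > bigfile.sha256
wait                              # let the background md5sum finish

cat bigfile.md5 bigfile.sha256
rm p1
```

The file is read from disk exactly once, which is what matters when I/O is the bottleneck.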
Lekensteyn (21600 rep)
Oct 23, 2014, 10:00 AM • Last activity: Aug 4, 2025, 06:51 AM
13 votes
3 answers
4773 views
Tar produces different files each time
I often have large directories that I want to transfer to a local computer from a server. Instead of using recursive `scp` or `rsync` on the directory itself, I'll often `tar` and `gzip` it first and then transfer it. Recently, I wanted to check that this is actually working, so I ran `md5sum` on two independently generated tar-and-gzip archives of the same source directory. To my surprise, the MD5 hashes were different. I did this two more times and it was always a new value. Why am I seeing this result? Are two tarred-and-gzipped directories, both generated in the exact same way with the same version of GNU tar, not supposed to be exactly the same? For clarity, I have a source directory and a destination directory. In the destination directory I have `dir1` and `dir2`. I'm running:

```
tar -zcvf /destination/dir1/source.tar.gz source && md5sum /destination/dir1/source.tar.gz >> md5.txt
tar -zcvf /destination/dir2/source.tar.gz source && md5sum /destination/dir2/source.tar.gz >> md5.txt
```

Each time I do this, I get a different result from `md5sum`. Tar produces no errors or warnings.
Alon Gelber (133 rep)
Apr 17, 2018, 02:55 PM • Last activity: Aug 1, 2025, 01:16 PM
25 votes
7 answers
20564 views
Suppress filename from output of sha512sum
Maybe it is a trivial question, but in the `man` page I didn't find anything useful. I am using Ubuntu and `bash`. The normal output of `sha512sum testfile` is the hash followed by the filename:

```
<hash>  testfile
```

How do I suppress the filename output? I would like to obtain just the hash.
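`sha512sum` itself has no suppress-filename flag I'm aware of, so the usual sketch trims the output in the shell:

```shell
printf 'hello\n' > testfile
# keep only the first field (the digest), dropping the filename
sha512sum testfile | awk '{print $1}'
```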
BowPark (5155 rep)
Feb 25, 2016, 01:41 PM • Last activity: Jul 5, 2025, 09:57 PM
3 votes
1 answer
1930 views
Why is DM-Integrity so slow compared to BTRFS?
I want to detect silent corruption of block devices, similar to how BTRFS does that for files. I'd even like to do that below BTRFS (and disable BTRFS's native checksumming) so that I can tweak more parameters than BTRFS allows. DM-Integrity seems like the best choice, and in principle it must be doing the same thing as BTRFS.

The problem is that it's incredibly, unusably slow. While sequential writes on BTRFS are 170+ MiB/s (with compression disabled), on DM-Integrity they're 8-12 MiB/s. I tried to match DM-Integrity parameters with BTRFS (sector size, hashing algorithm, etc.) and I tried lots of combinations of other parameters (data interleaving, bitmapping, native vs generic hashing drivers, etc.).

The writes were asynchronous, but the speed was calculated based on the time it took for writes to be committed (so I don't think the difference was due to memory caching). Everything was on top of a writethrough Bcache, which should be reordering writes (so I don't think it could be BTRFS reordering writes). I can't think of any other reason that could explain this drastic performance difference.

I'm using Debian 11 with a self-compiled 6.0.12 Linux kernel and sha256 as my hashing algorithm. My block layers are (dm-integrity or btrfs)/lvm/dm-crypt/bcache/dm-raid.

**Is there a flaw in my testing? Or some other explanation for this huge performance difference? Is there some parameter I can change with DM-Integrity to achieve comparable performance to BTRFS?**
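For reproducing such numbers, a minimal committed-write measurement looks like this (a sketch; the path and sizes are placeholders, and `conv=fsync` makes dd time the flush to stable storage rather than the page cache):

```shell
# rough sequential-write benchmark; run once per block layer under test
dd if=/dev/zero of=/tmp/seqtest.bin bs=1M count=32 conv=fsync
```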
ATLief (328 rep)
Dec 30, 2022, 12:56 PM • Last activity: Jul 5, 2025, 07:08 PM
0 votes
1 answer
4216 views
Problems in creating certificate with SHA256 / SHA512
I want to generate a self-signed certificate with SHA256 or SHA512, but I have problems with it. I have created a script which should do this automatically:

```
#!/bin/bash
set -e

echo "WORKSPACE: $WORKSPACE"

SSL_DIR=$(pwd)/httpd_ssl_certs
OPENSSL_CNF=$(pwd)/openssl.cnf

if [ -d "$SSL_DIR" ]; then
    rm -rvf "$SSL_DIR"
fi

mkdir -vp "$SSL_DIR"
pushd "$SSL_DIR"

# check if openssl.cnf exists
if [ ! -f "$OPENSSL_CNF" ]; then
    echo "Could not find $OPENSSL_CNF. Build will be exited."
    exit 1
fi

echo " - create private key"
openssl genrsa -out server.key.template 2048

echo " - create signing request"
openssl req -nodes -new -sha256 -config $OPENSSL_CNF -key server.key.template -out server.csr.template

echo " - create certificate"
openssl x509 -req -in server.csr.template -signkey server.key.template -out server.crt.template -extfile $OPENSSL_CNF
```

And I have an openssl.cnf file with the configuration for it:

```
[ ca ]
default_ca = CA_default

[ CA_default ]
# how long to certify
default_days = 365
# how long before next CRL
default_crl_days = 30
# use public key default MD
default_md = sha256
# keep passed DN ordering
preserve = no
policy = policy_anything

[ policy_anything ]
countryName            = optional
stateOrProvinceName    = optional
localityName           = optional
organizationName       = optional
organizationalUnitName = optional
commonName             = optional
emailAddress           = optional

[ req ]
default_bits = 2048
default_keyfile = server.key.template
distinguished_name = req_distinguished_name
prompt = no
encrypt_key = no
# add default_md to [ req ] for creating certificates with SHA256
default_md = sha256

[ req_distinguished_name ]
countryName            = "AB"
stateOrProvinceName    = "CD"
localityName           = "Some town"
organizationName       = "XXX Y"
organizationalUnitName = "XXX Y"
commonName             = "localhost"
emailAddress           = "somemail@some.org"
```

When I run the script with this openssl.cnf, I get a certificate, but it is always signed with SHA-1. I checked it with this command:

```
openssl x509 -in server.crt.template -text -noout | grep 'Signature'
```

I always get this output:

```
Signature Algorithm: sha1WithRSAEncryption
Signature Algorithm: sha1WithRSAEncryption
```

Can someone give me a hint what's wrong there?
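One plausible cause (an assumption; it depends on the OpenSSL version): older `openssl x509 -req` ignores `default_md` from the config file and falls back to SHA-1 unless a digest flag is passed explicitly on that command. A minimal sketch mirroring the script's flow, with the digest forced on the signing step:

```shell
openssl genrsa -out server.key 2048
openssl req -new -sha256 -key server.key -subj '/CN=localhost' -out server.csr
# -sha256 here overrides any SHA-1 default of the x509 signing step
openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.crt
openssl x509 -in server.crt -noout -text | grep 'Signature Algorithm'
```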
devopsfun (1447 rep)
Oct 17, 2016, 12:17 PM • Last activity: Jun 21, 2025, 03:01 AM
0 votes
1 answer
1956 views
Salt size in /etc/shadow
After a user password change, the size of the salt decreased on RHEL/CentOS 6, e.g.:
cat /etc/shadow

...
root:$6$FkMNsNxT$FW77....................nbL0......
bin:*:15422:0:99999:7:::
...
As you can see, `FkMNsNxT` is 8 characters. Why does this happen? In the beginning, right after installation, the size was 16 characters.
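The salt is the third `$`-separated field of the entry, so its length is easy to inspect; a quick sketch (with a shortened example hash):

```shell
entry='root:$6$FkMNsNxT$FW77nbL0'   # truncated example from above
salt=$(printf '%s' "$entry" | cut -d'$' -f3)
printf '%s -> %d characters\n' "$salt" "${#salt}"
```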
user13726895 (1 rep)
Aug 17, 2021, 04:54 PM • Last activity: May 24, 2025, 09:01 AM
0 votes
1 answer
137 views
/etc/shadow file and password storing algorithm in Linux
I don't know what algorithm Linux uses to store passwords in `/etc/shadow`.
I tested with the following Python script:

```
import hashlib

message = b"123"
md5_hash = hashlib.md5(message).hexdigest()
sha1_hash = hashlib.sha1(message).hexdigest()
sha256_hash = hashlib.sha256(message).hexdigest()
sha384_hash = hashlib.sha384(message).hexdigest()
sha512_hash = hashlib.sha512(message).hexdigest()

print(f"MD5: {md5_hash}")
print(f"SHA-1: {sha1_hash}")
print(f"SHA-256: {sha256_hash}")
print(f"SHA-384: {sha384_hash}")
print(f"SHA-512: {sha512_hash}")
```

But I didn't see my password's hash among them. My password is 123.
1. Does `/etc/shadow` store the password as a hash? If yes, I should give up on recovering the password.
2. If the password is not stored as a hash, how can I get it?
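The `$6$` prefix in a shadow entry marks the salted, iterated SHA-512 crypt(3) scheme, which is why a plain `hashlib` digest never matches: the result depends on the salt, not just the password. A sketch (assuming OpenSSL 1.1.1+, which provides `passwd -6`):

```shell
# same password "123", two salts -> two completely different shadow-style hashes
openssl passwd -6 -salt aaaaaaaa 123
openssl passwd -6 -salt bbbbbbbb 123
```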
PersianGulf (11308 rep)
Feb 22, 2025, 06:16 AM • Last activity: Feb 22, 2025, 04:19 PM
2 votes
2 answers
972 views
Why is the file changing before being written to?
On Kubuntu Linux, the Google Chrome browser adds a checksum to its custom-dictionary file, preventing one from simply editing the file by hand. So I'm writing a script to add the checksum.
$ cat .config/google-chrome/Default/Custom\ Dictionary.txt
AATEST
dotancohen
checksum_v1 = 2b7288da7c9556608de620e65308efa4$
No problem, I'll copy the entire file sans last line and check if its MD5 hash matches that checksum.
$ head -n -1 .config/google-chrome/Default/Custom\ Dictionary.txt > ~/chrome-dict
$ cat ~/chrome-dict
AATEST
dotancohen
$ md5sum ~/chrome-dict
2b7288da7c9556608de620e65308efa4  /home/dotancohen/chrome-dict
We got 2b7288da7c9556608de620e65308efa4, as expected. It matches! So let's add that to the end of the file.
$ { printf "checksum_v1 = " ; printf $(md5sum -z ~/chrome-dict | awk '{print $1}') ; } >> ~/chrome-dict
$ cat ~/chrome-dict
AATEST
dotancohen
checksum_v1 = 08f7dd79a17e12b178a1010057ef5e34$
No, wrong checksum! Let's try `cat`, to ensure that nothing is written to the file between the two `printf` statements.
$ head -n -1 .config/google-chrome/Default/Custom\ Dictionary.txt > ~/chrome-dict
$ cat ~/chrome-dict
AATEST
dotancohen
$ { printf "checksum_v1 = " ; printf $(md5sum -z ~/chrome-dict | awk '{print $1}') ; } | cat >> ~/chrome-dict
$ cat ~/chrome-dict
AATEST
dotancohen
checksum_v1 = 08f7dd79a17e12b178a1010057ef5e34$
Still wrong checksum! Let's try a tmp file.
$ head -n -1 .config/google-chrome/Default/Custom\ Dictionary.txt > ~/chrome-dict
$ cat ~/chrome-dict
AATEST
dotancohen
$ { printf "checksum_v1 = " ; printf $(md5sum -z ~/chrome-dict | awk '{print $1}') ; } >> ~/chrome-dict-tmp
$ cat ~/chrome-dict-tmp >> ~/chrome-dict && rm ~/chrome-dict-tmp
$ cat ~/chrome-dict 
AATEST
dotancohen
checksum_v1 = 2b7288da7c9556608de620e65308efa4$
That worked! **Why didn't the one-liners that redirect output to the end of the ~/chrome-dict file return the correct MD5 hash?**
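A plausible explanation (worth verifying rather than taking as given): inside `{ ...; } >> file`, the first `printf` appends `checksum_v1 = ` to the file *before* the command substitution runs `md5sum`, so `md5sum` hashes the already-extended file. The temp-file variant works because `md5sum` still reads the untouched original. A self-contained sketch reproducing the effect with placeholder content:

```shell
printf 'AATEST\ndotancohen\n' > dict
md5sum dict | awk '{print $1}'        # hash of the original two lines

# the first printf lands in dict before md5sum opens it...
{ printf 'checksum_v1 = '; printf '%s' "$(md5sum dict | awk '{print $1}')"; echo; } >> dict

tail -n 1 dict   # ...so this hash covers the file *including* "checksum_v1 = "
```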
dotancohen (16493 rep)
Jan 9, 2025, 09:00 PM • Last activity: Jan 9, 2025, 10:34 PM
10 votes
4 answers
4842 views
How to show progress when checking checksums using sha256sum
How do I show the progress when checking the SHA256 checksums of large files? When I do `sha256sum -c SHA256SUMS`, where the file `SHA256SUMS` contains the checksum for a large file, there is no indication of when the command might finish. Is there a way to show progress when doing `sha256sum -c ...`?
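A common sketch is to stream the file through a progress reporter and hash stdin: `pv bigfile | sha256sum` is the usual form when `pv` is installed, and with coreutils alone, `dd status=progress` gives a similar effect (the progress line goes to stderr; the resulting hash can then be compared against the SHA256SUMS entry by hand):

```shell
printf 'demo\n' > bigfile     # stand-in for a large file
# dd reports bytes-copied progress on stderr while the data streams into sha256sum
dd if=bigfile bs=1M status=progress | sha256sum
```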
Flux (3238 rep)
Jun 5, 2022, 03:15 AM • Last activity: Nov 9, 2024, 04:06 PM
0 votes
1 answer
120 views
What if I don't want to display the first directory in the find command?
I don't want to display the first directory component of the paths printed by the inner `find` command in the example below:
find . ! -name . -prune -type d -exec sh -c '
for dir do
echo "\n\n${dir#.*/}\n"
find "$dir" -exec md5sum {\} +
done' sh {} + >> ../md5
Output example:
dir1

md5sum value ./dir1/file1

dir2

md5sum value ./dir2/dir3/file2
I do not want to display the first directory component, i.e. `./dir1` and `./dir2` in the `./dir1/file1` and `./dir2/dir3/file2` paths; instead I want `find` to print `./file1` and `./dir3/file2`. Is there any way to do that? Thank you in advance.
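One way (a sketch, assuming GNU find and md5sum): `cd` into each top-level directory in the inner shell, so the inner `find` starts from `.` and prints paths relative to it:

```shell
# toy tree matching the example
mkdir -p dir1 dir2/dir3
printf 'a\n' > dir1/file1
printf 'b\n' > dir2/dir3/file2

# {\} keeps the inner {} hidden from the outer find, as in the question
find . ! -name . -prune -type d -exec sh -c '
for dir do
    printf "\n%s\n" "${dir#./}"
    (cd "$dir" && find . -type f -exec md5sum {\} +)
done' sh {} +
```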
user27072144 (1 rep)
Oct 7, 2024, 07:53 PM • Last activity: Oct 8, 2024, 06:12 AM
80 votes
8 answers
120604 views
Compute bcrypt hash from command line
I would like to compute the bcrypt hash of my password. Is there an open-source command-line tool that would do that? I would use this hash in the Syncthing configuration file (even though I know from here that I can reset the password by editing the config file to remove the user and password in the gui section, then restarting Syncthing).
Gabriel Devillers (1416 rep)
Sep 5, 2016, 03:30 PM • Last activity: Sep 9, 2024, 10:41 AM
4 votes
2 answers
7803 views
Command to verify CRC (CRC32) hashes recursively
With the commands `md5sum`, `sha1sum`, `sha256sum` I can take a text file with one hash and one path per line and verify the entire list of files in a single command, like `sha1sum -c mydir.txt`. (Such a text file is easy to produce with a `find` loop or similar.) Is there a way to do the same with a list of CRC/CRC32 hashes? Such hashes are often stored inside zip-like archives, like ZIP itself or 7z. For instance:

```
$ unzip -v archive.zip
Archive:  archive.zip
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
 8617812  Stored  8617812   0% 12-03-2015 15:20 13fda20b  0001.tif
```

Or:

```
$ 7z l -slt archive.7z
Path = filename
Size = 8548096
Packed Size =
Modified = 2015-12-03 14:20:20
Attributes = A_ -rw-r--r--
CRC = B2F761E3
Encrypted = -
Method = LZMA2:24
Block = 0
```
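I'm not aware of a coreutils equivalent of `crc32sum -c`, but the CRC-32 used by ZIP and 7z matches `zlib.crc32`, so a small sketch (assuming `python3` is available) can compute comparable values for a verification loop:

```shell
printf '123456789' > sample.bin    # standard CRC-32 check input
crc=$(python3 -c 'import sys, zlib; print("%08x" % zlib.crc32(open(sys.argv[1], "rb").read()))' sample.bin)
echo "$crc"    # cbf43926, the standard CRC-32 check value
```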
Nemo (938 rep)
Dec 18, 2015, 12:47 PM • Last activity: Jul 29, 2024, 08:44 AM
0 votes
0 answers
99 views
Verify sha256sums of files inside tarball using hashfile
Is it possible to check the tarball's files' sha256sums **without** first extracting to disk? I have this file:

```
$ cat cathy.sha256
\SHA256 (atypical\nfilename) = ee8b2587b199594ac439b9464e14ea72429bf6998c4fbfa941c1cf89244c0b3e
SHA256 (Documents/shopping list (old).txt) = 531ad778437bbec46c60ac4e3434bd1cab2834570e72cf1148db19e4c875ff50
```

I also have a tarball (pax format) with these files inside:

```
$ tar -tv --lzma -f cathy.tar.lzma
drwxr-xr-x cathy/staff       0 2024-07-25 01:07 Documents/
-rw-r--r-- cathy/staff     758 2020-01-16 13:02 Documents/shopping list (old).txt
-rw-r--r-- cathy/staff      16 2024-07-25 01:06 atypical\nfilename
```

I tried this command but there are errors:

```
$ tar -x --lzma -f cathy.tar.lzma --to-command='{
> printf '\''%s'\'' '\''^\\\?SHA256 ('\'';
> printf '\''%s'\'' "$(sed --posix -zE '\''s/\n$//; s/\\/\\\\/g; s/\n/\\n/g; s/\r/\\r/g'\'' sed '\''s/[][\.*^$]/\\&/g'\'';
> printf '\''%s'\'' '\'') = [0-9a-f]\{64\}$'\'';
> } | grep -f- /path/to/cathy.sha256 | sha256sum -c -'
/bin/sh: 3: Syntax error: redirection unexpected
tar: 217238: Child returned status 2
/bin/sh: 3: Syntax error: redirection unexpected
tar: 217239: Child returned status 2
tar: Exiting with failure status due to previous errors
```

It must be able to handle line feeds (`\n`), carriage returns (`\r`) and backslashes (`\\`) in filenames, converting only those as needed to match the filenames `sha256sum` saves in the .sha256 file (which is what the regex and printf are for). I expect filenames could contain all characters except NUL (`\0`) and forward slash (`/`).

----

Putting the above `--to-command` code into a script (undesired):

```
$ cat tarHashHelper.sh
#!/bin/bash
{
printf '%s' '^\\\?SHA256 (';
printf '%s' "$(sed --posix -zE 's/\n$//; s/\\/\\\\/g; s/\n/\\n/g; s/\r/\\r/g' <<< "${TAR_REALNAME}")" | sed 's/[][\.*^$]/\\&/g';
printf '%s' ') = [0-9a-f]\{64\}$';
} | grep -f- /path/to/cathy.sha256 | sha256sum -c -
```

reads copies from disk instead (copies I had before):

```
$ tar -x --lzma -f cathy.tar.lzma --to-command='/path/to/tarHashHelper.sh'
Documents/shopping list (old).txt: OK
\atypical\nfilename: OK
```

and deleting them first gives a different error:

```
$ tar -x --lzma -f cathy.tar.lzma --to-command='/path/to/tarHashHelper.sh'
sha256sum: 'Documents/shopping list (old).txt': No such file or directory
Documents/shopping list (old).txt: FAILED open or read
sha256sum: WARNING: 1 listed file could not be read
tar: 217318: Child returned status 1
sha256sum: 'atypical'$'\n''filename': No such file or directory
\atypical\nfilename: FAILED open or read
sha256sum: WARNING: 1 listed file could not be read
tar: 217327: Child returned status 1
tar: Exiting with failure status due to previous errors
```

(It still tries to read from disk and NOT from the tarball. I want it to read from the tarball.)
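For reference, a stripped-down sketch of hashing members straight from the stream (assuming GNU tar, which exports `TAR_FILENAME` to the `--to-command` child and feeds the member's bytes on stdin; the filename-escaping that the question is really about is deliberately ignored here):

```shell
mkdir -p demo && printf 'hello\n' > demo/f.txt
tar -cf demo.tar demo/f.txt

# each member is piped to the command; sha256sum reads the bytes from the
# stream, never from a file on disk
tar -xf demo.tar --to-command='printf "%s  %s\n" "$(sha256sum | cut -d" " -f1)" "$TAR_FILENAME"'
```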
leetbacoon (383 rep)
Jul 25, 2024, 01:48 AM • Last activity: Jul 25, 2024, 07:23 PM
36 votes
2 answers
124345 views
/etc/shadow : how to generate $6$ 's encrypted password?
In the `/etc/shadow` file there are encrypted passwords. The encrypted password is no longer in `crypt(3)` or MD5 "type 1" format (according to this previous answer). Now I have

```
$6$somesalt$someveryverylongencryptedpasswd
```

as an entry. I can no longer use

```
openssl passwd -1 -salt salt hello-world
$1$salt$pJUW3ztI6C1N/anHwD6MB0
```

to generate the encrypted passwd. Is there any equivalent, like the (non-existing)

```
openssl passwd -6 -salt salt hello-world
```

?
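For what it's worth, newer OpenSSL (1.1.1 and later) added exactly this option, so on a current system the "non-existing" command now works; a sketch:

```shell
# SHA-512 crypt(3) hash suitable for /etc/shadow; requires OpenSSL >= 1.1.1
openssl passwd -6 -salt somesalt hello-world
```

`mkpasswd -m sha-512` (from the whois package on Debian-family systems) is a common alternative.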
Archemar (32267 rep)
Sep 30, 2014, 11:25 AM • Last activity: Jul 10, 2024, 02:21 PM
7 votes
5 answers
4290 views
Echo hash only from shasum
Is there a way to get `shasum` to _only_ print the hash? I know this can be achieved by piping the output to another program, e.g.

```
shasum something | cut -d' ' -f1
```

Is there a way to achieve this using only `shasum`, without having to pipe the result somewhere else?
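`shasum` itself offers no such flag as far as I know, but the shell can split the line without piping to another program; a sketch using positional parameters (shown with `sha1sum`, which prints the same `hash filename` format):

```shell
printf 'hello\n' > something
# word-split the output: $1 becomes the digest, $2 the filename
set -- $(sha1sum something)
echo "$1"
```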
Armand (373 rep)
Oct 25, 2017, 08:01 AM • Last activity: Jun 4, 2024, 08:37 PM
0 votes
2 answers
326 views
How can I compute bcrypt hash with less than 16 (2^4) rounds on Linux?
How can I compute a bcrypt hash with fewer than 16 rounds (cost factor of 4 = 2^4 = 16 rounds) on Linux? This is the same question as https://unix.stackexchange.com/questions/307994/compute-bcrypt-hash-from-command-line , but requiring a cost factor of 1 to 3. All of the answers there only allow a cost factor of 4 or higher. This is what they say:
> htpasswd -bnBC 3 "" Y
htpasswd: Unable to encode with bcrypt: Invalid argument
For https://bcrypt-generator.com/ , setting the cost to 3 or less still computes a hash using a cost factor of 4 (`$2a$04$Gtm/m3uLxfxcezWRLcVLBuUTNbrSse/.XsBK16WBN5u37Cl88kaFy`). For https://www.browserling.com/tools/bcrypt , I get "Rounds exceded maximum (30)!".
wjwrpoyob (460 rep)
May 8, 2024, 01:06 AM • Last activity: May 8, 2024, 04:20 PM
2 votes
1 answer
107 views
Continue a directory tree checksum from a given file
I have:

- A `checksum.txt` file which contains many lines of checksums of single files from a mountpoint of a huge directory; the mount disconnected partway through, so `checksum.txt` was never finished (partial checksums)
- A `localchecksums.txt` full checksum list, containing thousands of lines of SHA256 checksums with filenames etc.

I would like to compare the remote-mount checksums and the local ones with `sha256sum -c checksum.txt localchecksum.txt` or similar, but:

1. I don't want to go through gigabytes of data again to get the remaining hashes
2. I don't want to restart the whole process for `checksum.txt`

I generated the list by using `find` to recursively find single files and exec `sha256sum` on them. It should be possible to get the remaining hashes by comparing the two files, or to somehow continue by reading `checksum.txt` and only calculating checksums for files it does not yet cover. The problem with the first approach is that the order differs between the files. The second approach sounds good, but I don't have any idea how to start with it.

#### Sample of either checksum file:
8e2931cc1ad3adc07df115456b36b0dbd6f80f675e0a9813e20ad732ae5d4515  ./folder/8ggSHp5I7hNEl3vDCbWv6Q/wA-KzXIh1Ce3G93s20X24v_4vUeywBe3mXPhGjPt_Lg/cRf8KgbqIsqwbon3DX3PN1-oV6_Nr9Baeymaw-ZJw00
37d2dfe2315cc401536329e3fbe421384bbb50c656c3dbeb42798e5666822e6c  ./folder/8ggSHp7I7hHEl3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/V02s6HKhyJ9Nyd2jQtSjWg
d0e9b95065a264db0d372ccace5d3a72f38f74ca7b44da4794dae23c91e18e57  ./folder/8ggSHp7I7hNxl3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/U3fhBugX6pexYzh6qGKlW7lYWsFShWH7JwN9fmU8ay2lLZkciH2sXsiGbmIc97iJ
44a5fe29063e472857bb9a1929af06a32bb4b2394630f80c2dc732fd662620bc  ./folder/8ggSHp7I7hNEc3vDCbWv6Q/wA-KzXIh1Ce3G93s2oX24v_4vUeywBe3mXPhGjPt_Lg/gTrqUL4ZjWTWMl6BcjfwUe5bBDatscwUoYY9IFQDztc
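The resume approach can be sketched like this (an assumption-laden toy: GNU coreutils, no newlines in filenames, and the fixed `sha256sum` layout where the filename starts at column 67 after the 64-digit hash and two spaces):

```shell
# toy tree standing in for the real data
mkdir -p folder
printf 'a\n' > folder/f1
printf 'b\n' > folder/f2
sha256sum ./folder/f1 > checksum.txt            # the interrupted partial run

# list the files already hashed (filename starts at column 67)
cut -c67- checksum.txt | sort > done.lst
# list everything that should be hashed
find . -type f ! -name '*.txt' ! -name '*.lst' | sort > all.lst
# hash only the files missing from checksum.txt
comm -23 all.lst done.lst | while IFS= read -r f; do sha256sum "$f"; done >> checksum.txt

sha256sum -c checksum.txt
```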
Sir Muffington (1306 rep)
Apr 16, 2024, 11:14 AM • Last activity: Apr 18, 2024, 05:32 PM
0 votes
1 answer
66 views
What is a good way to perform a SHA256d hash (double SHA256) on an OpenBSD fresh install?
What is a good way to perform a SHA256d hash (double SHA-256) in the default terminal of a network-isolated OpenBSD fresh install? Here's what I'm doing:

```
echo test > testfile
cat testfile | openssl dgst -binary | openssl dgst
```

It gives me a number ending in `0xe0b6`. Just wondering if there is a more concise or otherwise better way?
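One refinement worth considering (a sketch, not OpenBSD-specific): `openssl dgst` without a flag uses a version-dependent default digest, so spelling out `-sha256` on both passes, and dropping the `cat`, makes the intent explicit:

```shell
printf 'test\n' > testfile
# double SHA-256: the second pass hashes the raw 32-byte digest, not its hex form
openssl dgst -sha256 -binary < testfile | openssl dgst -sha256
```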
Lee (549 rep)
Apr 4, 2024, 02:47 PM • Last activity: Apr 4, 2024, 03:22 PM
98 votes
5 answers
117962 views
No sha256sum in MacOS
I tried to use `sha256sum` in High Sierra; I attempted to install it with MacPorts, as:

```
sudo port install sha256sum
```

It did not work. What to do?
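There is no port of that name because macOS already ships equivalent tools; two common sketches (both print the SHA-256 of the file, with slightly different output layouts):

```shell
printf 'hello\n' > file.txt
# the Perl shasum wrapper ships with macOS:  shasum -a 256 file.txt
# OpenSSL/LibreSSL works the same way on macOS and Linux:
openssl dgst -sha256 file.txt
```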
Rui F Ribeiro (57882 rep)
Feb 27, 2018, 03:05 AM • Last activity: Mar 22, 2024, 06:04 PM
0 votes
1 answer
161 views
apt hashes error in Debian 12 Bookworm with cdrom
I just installed Debian 12 Bookworm for the first time. apt will not install gparted; there is a hash mismatch error:

```
$ sudo apt install gparted
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  gparted-common
Suggested packages:
  dmraid gpart jfsutils kpartx mtools reiser4progs reiserfsprogs udftools xfsprogs
The following NEW packages will be installed:
  gparted gparted-common
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/2542 kB of archives.
After this operation, 8724 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 file:/media/debian-disc bookworm/main amd64 gparted-common all 1.3.1-1 [1711 kB]
Err:1 file:/media/debian-disc bookworm/main amd64 gparted-common all 1.3.1-1
  Hash Sum mismatch
  Hashes of expected file:
   - SHA256:448c2a1c439a2526123c661025fadc190b5a9c561d23d2e6c44264118fe184b7
   - MD5Sum:5fdcf4d3c2272fa5fc9affe8ccd77786 [weak]
   - Filesize:1711028 [weak]
  Hashes of received file:
  Last modification reported: Tue, 08 Feb 2022 22:12:15 +0000
Get:2 file:/media/debian-disc bookworm/main amd64 gparted amd64 1.3.1-1 [831 kB]
Err:2 file:/media/debian-disc bookworm/main amd64 gparted amd64 1.3.1-1
  Hash Sum mismatch
  Hashes of expected file:
   - SHA256:41d45a07fa0e79a9e135da38220bdd124bd09c370029e21ca1e4853b6579a538
   - MD5Sum:4977aac4bd411819c5d4ea6d979eaabc [weak]
   - Filesize:831132 [weak]
  Hashes of received file:
  Last modification reported: Tue, 08 Feb 2022 22:12:13 +0000
E: Read error - read (5: Input/output error)
E: Read error - read (5: Input/output error)
```

I was able to work around that by commenting out the first line in sources.list, the cdrom line. apt is then happy to fetch gparted from the net, but it does not like to take it from the CD. That seems a bit wasteful, especially for a legacy package like gparted. Any ideas how to fix this?
cardamom (662 rep)
Feb 12, 2024, 12:18 PM • Last activity: Feb 12, 2024, 12:28 PM