Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
6
votes
2
answers
1362
views
Deleting a file before it has finished downloading
Today I downloaded a large file using `wget`, but I accidentally deleted it before it had finished downloading. `wget` nevertheless continued downloading the file to completion, yet no file was saved at the end. I wonder what happened to the rest of the file that `wget` downloaded after I deleted it?
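What you observed is standard Unix unlink semantics: removing the name doesn't free the data while a process (here `wget`) still holds the file open, and on Linux the still-open data can even be copied back out through `/proc` before that process exits. A minimal sketch of the rescue, using the current shell as a stand-in for `wget` (all paths are throwaway):

```shell
tmp=$(mktemp -d)
echo "partial download" > "$tmp/file"

exec 3< "$tmp/file"               # the downloader's open descriptor
rm "$tmp/file"                    # the name is gone; the inode is not

# while the fd stays open, the data is reachable via /proc
cp "/proc/$$/fd/3" "$tmp/recovered"
exec 3<&-                         # only now are the blocks freed

cat "$tmp/recovered"              # prints: partial download
```

Once the last descriptor closes (here, when `wget` exits), the kernel frees the blocks and the data is gone for good.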
EmmaV
(4359 rep)
May 20, 2025, 02:30 PM
• Last activity: May 25, 2025, 08:04 PM
23
votes
4
answers
79243
views
What's the fastest way to remove all files & subfolders in a directory?
One program created lots of nested sub-folders. I tried to use the command `rm -fr *` to remove them all, but it's very slow. I'm wondering: is there any faster way to delete them all?
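Part of the cost of `rm -fr *` is that the shell must first expand `*` in a huge directory (and the glob also skips dotfiles); pointing a single tool at the parent directory avoids that, e.g. GNU `find`'s `-delete`, which removes the tree in one depth-first traversal. A sketch on a throwaway tree (the `junk/` path is invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/junk/a/b/c"
touch "$tmp/junk/a/b/c/f1" "$tmp/junk/f2" "$tmp/junk/.hidden"

# one depth-first traversal; no shell glob, dotfiles included
find "$tmp/junk" -mindepth 1 -delete

ls -A "$tmp/junk"                 # prints nothing: the directory is empty
```

The often-cited `rsync -a --delete emptydir/ target/` trick works on the same principle.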
Lei Hao
(341 rep)
Apr 18, 2016, 07:33 AM
• Last activity: May 21, 2025, 01:39 PM
0
votes
0
answers
29
views
Recover deleted txt file which was stored to btrfs right before another one that I still have
I used my terminal emulator to run a command and redirect its output to a text file on my btrfs filesystem. Right after, I did the same with another command. I have since deleted the first text file but not the second. I'd like to recover part of the first text: `photorec` seems like overkill, though I'm familiar with the tool; using `strings` seems more suitable, but I'm not familiar with that method. How about looking at the actual blocks on the filesystem: maybe the two files were written sequentially?
user324831
(113 rep)
May 20, 2025, 06:50 PM
• Last activity: May 20, 2025, 06:55 PM
10
votes
8
answers
9977
views
How to recover “deleted” files in Linux on an NTFS filesystem (files originally from macOS)
My girlfriend has an external hard disk with 10+ years of photos, documents and more. A lot of these files originate from her old iPhone 5 and her MacBook. The disk itself is NTFS-formatted. Since the disk is so old, it has become a data-loss hazard (what an irony).
As we tried to upload all the files to OneDrive to store them safely, we got thousands of errors because of invalid file names. I realized that many files started with `._`, e.g. `./pic/92 win new/iphone/._IMG_1604.JPG`. I don't understand macOS or why files should be named like that, but you can certainly never get them into OneDrive like that.
So I decided to hook it to my Raspberry Pi and rename all files with the wrong characters from the command line. After listing the nearly 10,000 files, I ran the following over the whole hard disk.
find . -name "._*" | sed -e "p;s/\._//" | xargs -d '\n' -n2 mv
Furthermore, I removed some leading whitespace in filenames with `zmv`.
I tried the command in a test environment first and it looked fine, but I didn't check the file sizes.
After my girlfriend connected the hard disk back to her Mac, all renamed files showed a file size of 4 KB (empty)! **I screwed it up** and I don't know how. I assume the data is still there, but I somehow damaged the filesystem.
Does anybody understand what I did wrong? More importantly, do you see a chance to recover the files? I would appreciate any advice.
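For context on what likely went wrong: `._*` files are AppleDouble sidecars that macOS writes next to each real file on non-HFS volumes to hold resource-fork and Finder metadata, typically around 4 KB, which matches the post-rename sizes. Stripping the `._` prefix makes `mv` rename each tiny sidecar onto its real file, clobbering it. A reconstruction of the failure on throwaway files:

```shell
tmp=$(mktemp -d)
echo "real image data" > "$tmp/IMG_1604.JPG"
: > "$tmp/._IMG_1604.JPG"         # the tiny AppleDouble sidecar (empty here)

# the pipeline from the question: sed prints each path twice, the
# second copy with "._" stripped, and xargs feeds the pairs to mv
find "$tmp" -name "._*" | sed -e 'p;s/\._//' | xargs -d '\n' -n2 mv

cat "$tmp/IMG_1604.JPG"           # prints nothing: the sidecar replaced it
```

The real `IMG_1604.JPG` data was overwritten by the rename, which is why recovery now means undelete/carving tools rather than another rename.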
LukasH
(109 rep)
Mar 6, 2021, 05:15 PM
• Last activity: Apr 26, 2025, 09:30 PM
4
votes
0
answers
2462
views
Recovering an overwritten file on ZFS
Is there any way to recover a deleted or overwritten file on ZFS?
I accidentally overwrote a JPG file with a scanned image. Unfortunately, I didn’t take a snapshot beforehand. However, since ZFS uses the Copy-on-Write (CoW) mechanism, I believe the overwritten data might still exist in some form.
Does anyone know if there is a way to restore the overwritten file on ZFS?
I tried using `photorec`, and it recovered some JPG files, but my target file was not among them. Strangely, `photorec` couldn't properly recover most of the JPG files that had not even been deleted. And then I remembered that, unfortunately, my pool had lz4 compression enabled.
kou
(41 rep)
Sep 25, 2020, 08:05 AM
• Last activity: Mar 26, 2025, 08:01 PM
200
votes
14
answers
1186950
views
Recover deleted files on Linux
Is there a command to recover/undelete files deleted by `rm`?
rm -rf /path/to/myfile
How can I recover `myfile`? If there is a tool to do this, how can I use it?
pylover
(3568 rep)
Jun 21, 2013, 01:41 PM
• Last activity: Dec 13, 2024, 02:23 PM
1
votes
2
answers
2017
views
How to safely delete a regular directory that contains several btrfs snapshots inside it?
I have a regular directory that contains, directly underneath it, several btrfs snapshots. Is it safe to do an `rm -rf` on the parent directory, or do I need to first do a `btrfs subvolume delete SUBVOL` on each of the snapshots before removing the parent directory?
user779159
(421 rep)
Jan 29, 2016, 12:47 PM
• Last activity: Dec 12, 2024, 10:49 PM
4
votes
1
answers
1186
views
File deletion bypasses trash bin on Btrfs subvolumes
Nautilus does not move files to the trash bin on a btrfs filesystem; instead it deletes them permanently. Thunar seems to do the same thing, and it seems related to the lower-level library implementation of the trash bin.
I added a new directory `~/.local/share/Trash/` that allows the file managers Nautilus and Thunar to use the trash bin; however, it works only for deletions that occur on the same subvolume. Trashing from any other btrfs subvolume or partition triggers permanent deletion.
Calibre, the ebook library manager, behaves differently. My Calibre library is on a different subvolume than my home. When I delete books from the Calibre manager, it moves them to a hidden directory `.Trash-1000` under the root of the subvolume where the Calibre library is stored. However, this hidden directory is not taken into account by the trash bin, so the books stayed hidden, occupying space, until I finally discovered them by chance. I don't think Calibre is wrong here, because it resembles the way trash bins are managed on external media with ext3/4 filesystems.
The Emacs function `(delete-file)` manages to use the trash bin across different subvolumes and, even more amazingly, it uses the trash bin `~/.local/share/Trash/` for other partitions on different disks. So it does the job perfectly.
- Do you know of a desktop file manager that can work with the trash bin across different btrfs subvolumes?
- Do you know a way to prevent some software, such as Calibre, from moving files to a hidden spot while you think they are deleted?
- It seems the libraries gvfs and glib/gio are used to manage the trash bin. The programs I mentioned all call different OS functions when it comes to file deletion, and the result is not consistent. Does anyone have leads on how trash bins could be managed correctly?
As far as my research goes, the problem is more than 5 years old: https://bugs.launchpad.net/glib/+bug/1442649 . I have been using btrfs for a long time with this downside, but I am curious how other people deal with this problem, especially since btrfs is becoming more popular and seems to be offered in some Linux distribution installers.
GNU/Linux 5.15.2-2 Manjaro Gnome: 41.1
Émilien
(81 rep)
Dec 8, 2021, 12:08 PM
• Last activity: Oct 16, 2024, 06:14 AM
0
votes
1
answers
147
views
ext4 and chattr (change attributes): the undo (u) option
This is a microSD ext4 card on a rooted Android device, where I want to use
busybox chattr -R +u /mypath
I know what `+u` means, but does anyone have an idea of how the undo option is implemented for ext4, and specifically, on my hardware, to what extent `+u` will help me with data recovery? If I use `extundelete` and `ext4magic`, would these get back my data in case it's deleted by the app?
Corollary to that:
-- If ext4 does honor `+u` and other extended attributes, is there any mount-level option that will give me a good chance of recovery in case of deletion?
-- What filesystem does `+u` work with, after all? `chattr` is a Linux/Unix command. There is a mention here:
> ext_attr
This feature enables the use of extended attributes. This feature is supported by ext2, ext3, and ext4.
I wondered whether that mount option activates the `+u` attribute.
user1874594
(133 rep)
Aug 30, 2024, 06:38 AM
• Last activity: Aug 30, 2024, 11:12 AM
0
votes
0
answers
80
views
Deleting incremental backups created with rsync and symbolic links: recursive effect?
I am doing incremental backups according to the following example I found:
```
#!/bin/bash

# A script to perform incremental backups using rsync

set -o errexit
set -o nounset
set -o pipefail

readonly SOURCE_DIR="${HOME}"
readonly BACKUP_DIR="/mnt/data/backups"
readonly DATETIME="$(date '+%Y-%m-%d_%H:%M:%S')"
readonly BACKUP_PATH="${BACKUP_DIR}/${DATETIME}"
readonly LATEST_LINK="${BACKUP_DIR}/latest"

mkdir -p "${BACKUP_DIR}"

rsync -av --delete \
  "${SOURCE_DIR}/" \
  --link-dest "${LATEST_LINK}" \
  --exclude=".cache" \
  "${BACKUP_PATH}"

rm -rf "${LATEST_LINK}"
ln -s "${BACKUP_PATH}" "${LATEST_LINK}"
```
After a few incremental backups, this gives me a list of folders like this:
dir
dir_2024_06_21T18_17_40
dir_2024_06_21T18_18_14
dir_2024_06_21T18_18_32
dir_2024_06_21T18_18_50
dir_latest
After a while, with enough changes, the disk will become full.
I have the following questions:
1. If a file `thefile` was created between `dir_2024_06_21T18_18_14` and `dir_2024_06_21T18_18_32`, and I delete `dir_2024_06_21T18_18_32`, is `thefile` still going to be found in `dir_2024_06_21T18_18_50`, or not (because there is some kind of recursive reference in time)? Or can I safely delete `dir_2024_06_21T18_18_32` and still find `thefile` in `dir_2024_06_21T18_18_50`?
2. More generally, is there a better strategy to erase the incremental backups when the backup disk becomes full?
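On question 1: `--link-dest` stores an unchanged file as an extra hard link rather than a copy, and a hard-linked file's data survives as long as at least one of its names does, so deleting any one snapshot directory (first, middle, or last) never damages the copies in the others. The property can be sketched directly (directory names are stand-ins for two of the dated snapshots):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/snap_old" "$tmp/snap_new"
echo "payload" > "$tmp/snap_old/thefile"

# what --link-dest does for an unchanged file: a second hard link,
# not a copy -- both names point at the same inode
ln "$tmp/snap_old/thefile" "$tmp/snap_new/thefile"

rm -r "$tmp/snap_old"             # delete the older snapshot wholesale
cat "$tmp/snap_new/thefile"       # prints: payload
```

Disk blocks are only freed once the last snapshot referencing a file is removed, which is also why deleting a single old snapshot often reclaims less space than expected.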
ecjb
(475 rep)
Jun 21, 2024, 04:41 PM
• Last activity: Jun 22, 2024, 10:07 AM
1
votes
1
answers
22130
views
find: cannot fork: Cannot allocate memory
I want to delete one file at a time from my directory, which contains very many files, just to avoid too many reads and "too many arguments" failures.
find ./Backup/ -name '*.csv' -maxdepth 1 -exec rm {} \;
find: warning: you have specified the -maxdepth option after a non-option argument -name, but options are not positional (-maxdepth affects tests specified before it as well as those specified after it). Please specify options before other arguments.
find: cannot fork: Cannot allocate memory
I don't want to delete recursively from child directories; that's why I use `-maxdepth 1`.
Any help and suggestions?
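The warning says that `-maxdepth` is a global option and should come before tests like `-name`; and since GNU `find` supports `-delete`, the per-file `rm` fork (the likely source of the "cannot fork" error) can be dropped entirely. A sketch on a scratch tree standing in for `./Backup/`:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/Backup/child"
touch "$tmp/Backup/a.csv" "$tmp/Backup/b.txt" "$tmp/Backup/child/keep.csv"

# option first (-maxdepth), then tests (-name); -delete forks nothing
find "$tmp/Backup" -maxdepth 1 -name '*.csv' -delete

ls -R "$tmp/Backup"               # b.txt and child/keep.csv survive
```

Only the top-level `*.csv` files are removed; the child directory is untouched because of `-maxdepth 1`.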
Aashu
(791 rep)
Oct 13, 2015, 12:25 PM
• Last activity: May 16, 2024, 06:06 AM
74
votes
4
answers
324570
views
undelete a just deleted file on ext4 with extundelete
Is there a simple option to `extundelete` with which I can try to undelete a file called `/var/tmp/test.iso` that I just deleted?
(It is not so important that I would start remounting the drive read-only or such things; I could also just re-download the file.)
I am looking for a simple command with which I could attempt a fast recovery.
I know it is possible by remounting the drive **read-only** (see https://unix.stackexchange.com/questions/90186/how-do-i-simply-recover-the-only-file-on-an-empty-disk-just-deleted?rq=1 ).
But is this also possible somehow **on the still-mounted disk?**
---
For info: if the deleted file is on an NTFS partition, it is easy with `ntfsundelete`. E.g., if you know the size was about 250MB, use
sudo ntfsundelete -S 240m-260m -p 100 /dev/hda2
and then undelete the file by **inode**, e.g. with
sudo ntfsundelete /dev/hda2 --undelete --inodes 8270
rubo77
(30435 rep)
Mar 30, 2014, 09:26 PM
• Last activity: Jan 24, 2024, 04:35 PM
0
votes
1
answers
4985
views
Disk still full after deleting some files
I have an AWS linux instance with a root drive of 32GB.
I have an EBS mount of 20GB. On my root I ran out of space, so I cleared out some files. However, my root drive is still full. I can't find out why, because when I look at the sizes of the directories using `du` and `ncdu`, they show the drive should have a lot of space.
When I run `df` I get the following results:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 32894812 31946092 848472 98% /
devtmpfs 2017224 60 2017164 1% /dev
tmpfs 2025364 0 2025364 0% /dev/shm
/dev/xvdh 20511356 4459276 15003504 23% /mnt/ebs
My /dev/xvda1 is still full
After some research I installed a great tool *ncdu* to display disk space and the results are:
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------
4.2GiB [##########] /mnt
1.5GiB [### ] /var
1.2GiB [## ] /usr
684.9MiB [# ] /opt
464.3MiB [# ] /home
141.7MiB [ ] /lib
53.5MiB [ ] /boot
21.2MiB [ ] /lib64
10.8MiB [ ] /sbin
8.1MiB [ ] /bin
7.1MiB [ ] /etc
2.7MiB [ ] /tmp
60.0KiB [ ] /dev
48.0KiB [ ] /root
e 16.0KiB [ ] /lost+found
e 4.0KiB [ ] /srv
e 4.0KiB [ ] /selinux
e 4.0KiB [ ] /media
e 4.0KiB [ ] /local
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] .autofsck
If I run `du -h`, my total is
8.3G /
So why would my disk be 95% full when it clearly has a lot of space? Am I missing something to do with the mounts, and is there any other tool I can run to find out why it is 95% full?
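A frequent cause of `df` reporting far more usage than `du` is files that were deleted while some process still holds them open: `du` walks directory entries, so it can no longer see them, but the blocks stay allocated until the last descriptor closes (`lsof +L1` lists such files). The effect in miniature:

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1024 count=200 2>/dev/null

exec 3< "$tmp/big"                # some process still has the file open
rm "$tmp/big"                     # the name is gone...

du -sk "$tmp"                     # ...so du sees (almost) no usage
fdinfo=$(ls -l "/proc/$$/fd/3")   # ...but the kernel still tracks the inode
echo "$fdinfo"                    # the symlink target ends in "(deleted)"
exec 3<&-                         # closing the fd finally frees the blocks
```

So after deleting logs or temp files, restarting (or signalling) the daemons that held them open is often what actually returns the space to `df`.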
Shawn Vader
(103 rep)
Jul 26, 2015, 10:12 AM
• Last activity: Dec 20, 2023, 11:09 PM
0
votes
1
answers
2018
views
Recovering deleted files on macOS
I have gone through a couple of previously answered questions but couldn't find anything that will work for me.
I accidentally used this command on the wrong folder, which deleted some important files and scripts, except the *.sh files:
find . -type f ! -name '*.sh' -delete
Is it possible to recover the files?
Grayrigel
(129 rep)
Sep 17, 2016, 10:32 AM
• Last activity: Dec 5, 2023, 06:03 AM
14
votes
4
answers
10845
views
How can I restore my default .bashrc file again?
I accidentally changed all the contents of the `.bashrc` file. I hadn't scripted much in it yet, so there's no big problem for now: I only added little scripts to it (just a few `alias` entries), so I can write them again one by one.
How can I restore my `.bashrc` file with the default settings? I use Linux Mint.
Ash
(185 rep)
Mar 17, 2023, 11:38 PM
• Last activity: Jun 23, 2023, 05:51 PM
39
votes
7
answers
199936
views
Best way to free disk space from deleted files that are held open
Hi, I have many files that have been deleted, but for some reason the disk space associated with the deleted files cannot be reclaimed until I explicitly kill the process holding the file open:
$ lsof /tmp/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted)
The disk space taken up by the deleted file above causes problems such as when trying to use the tab key to autocomplete a file path I get the error
bash: cannot create temp file for here-document: No space left on device
But after I run `kill -9 1623`, the space for that PID is freed and I no longer get the error.
My questions are:
- why is this space not immediately freed when the file is first deleted?
- what is the best way to get back the file space associated with the deleted files?
and please let me know any incorrect terminology I have used or any other relevant and pertinent info regarding this situation.
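On the second question: killing the process is not the only option. If the holder should stay alive (here `cron`), the space can usually be reclaimed by truncating the deleted file through its `/proc/<pid>/fd/<n>` entry; the process keeps a valid descriptor, but the blocks are released. A sketch with the current shell standing in for PID 1623 and its fd 5:

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/f" bs=1024 count=512 2>/dev/null

exec 3<> "$tmp/f"                 # stand-in for cron's open descriptor
rm "$tmp/f"                       # deleted, but still pinning 512K

stat -L -c %s "/proc/$$/fd/3"     # 524288: blocks still allocated

# reclaim the space without killing the holder:
: > "/proc/$$/fd/3"
stat -L -c %s "/proc/$$/fd/3"     # 0
```

The caveat is that the process may later be confused by its file shrinking underneath it, so this fits best for log- or temp-file holders that only append.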
BryanK
(803 rep)
Jan 30, 2015, 06:56 PM
• Last activity: May 22, 2023, 07:12 AM
3
votes
0
answers
3424
views
ext4magic segfault
I am trying to restore a directory I accidentally deleted which contained many files. Ironically, this was a fat-finger error while trying to set up a backup! I am following the instructions here:
The ext4magic command to generate a histogram is successful:
[root] /mnt/reos-storage-2 $ ext4magic /dev/sda2 -H -a $(date -d "-70minutes" +%s)
Filesystem in use: /dev/sda2
|-----------c_time Histogram----------------- after -------------------- Wed Jun 23 10:42:35 2021
1624441775 : 0 | | Wed Jun 23 10:49:35 2021
1624442195 : 43 |**************************************************| Wed Jun 23 10:56:35 2021
1624442615 : 0 | | Wed Jun 23 11:03:35 2021
1624443035 : 1 |** | Wed Jun 23 11:10:35 2021
1624443455 : 0 | | Wed Jun 23 11:17:35 2021
1624443875 : 3 |**** | Wed Jun 23 11:24:35 2021
1624444295 : 0 | | Wed Jun 23 11:31:35 2021
1624444715 : 0 | | Wed Jun 23 11:38:35 2021
1624445135 : 0 | | Wed Jun 23 11:45:35 2021
1624445555 : 0 | | Wed Jun 23 11:52:35 2021
|-----------d_time Histogram----------------- after -------------------- Wed Jun 23 10:42:35 2021
1624441775 : 0 | | Wed Jun 23 10:49:35 2021
1624442195 : 0 | | Wed Jun 23 10:56:35 2021
1624442615 : 1 |* | Wed Jun 23 11:03:35 2021
1624443035 : 9380 |**************************************************| Wed Jun 23 11:10:35 2021
1624443455 : 0 | | Wed Jun 23 11:17:35 2021
1624443875 : 0 | | Wed Jun 23 11:24:35 2021
1624444295 : 1 |* | Wed Jun 23 11:31:35 2021
1624444715 : 0 | | Wed Jun 23 11:38:35 2021
1624445135 : 0 | | Wed Jun 23 11:45:35 2021
1624445555 : 0 | | Wed Jun 23 11:52:35 2021
|-----------cr_time Histogram----------------- after -------------------- Wed Jun 23 10:42:35 2021
1624441775 : 0 | | Wed Jun 23 10:49:35 2021
1624442195 : 33 |**************************************************| Wed Jun 23 10:56:35 2021
1624442615 : 1 |** | Wed Jun 23 11:03:35 2021
1624443035 : 0 | | Wed Jun 23 11:10:35 2021
1624443455 : 0 | | Wed Jun 23 11:17:35 2021
1624443875 : 0 | | Wed Jun 23 11:24:35 2021
1624444295 : 0 | | Wed Jun 23 11:31:35 2021
1624444715 : 0 | | Wed Jun 23 11:38:35 2021
1624445135 : 0 | | Wed Jun 23 11:45:35 2021
1624445555 : 0 | | Wed Jun 23 11:52:35 2021
ext4magic : EXIT_SUCCESS
However, any further commands essentially end in a segfault
[root] /mnt/reos-storage-2 $ ext4magic /dev/sda2 -a 1624442615 -f r sftp_data -l
Filesystem in use: /dev/sda2
Using internal Journal at Inode 8
Activ Time after : Wed Jun 23 11:03:35 2021
Activ Time before : Wed Jun 23 11:56:09 2021
Segmentation fault
Is there anything I can do?
----------
**More Background**
As background, I got to this point with roughly the following steps.
I unmounted the file system as quickly as I could after frantic googling. Initially I did this with
umount -l /mnt/reos-storage-1
I used the `-l` option because `umount` could not unmount the partition, which was still in use. Next I did
fuser -cuk /mnt/reos-storage-1/
Next I ran
fsck /dev/sda2
which is actually not recommended by ext4magic.
Next I tried to recover the files with `extundelete`, which failed:
[root] /mnt/reos-storage-2 $ extundelete --restore-directory /mnt/reos-storage-1/sftp_data /dev/sda2
NOTICE: Extended attributes are not restored.
Loading filesystem metadata ... 29027 groups loaded.
Loading journal descriptors ... 0 descriptors loaded.
extundelete: Extent block checksum does not match extent block while finding inode for mnt
extundelete: Extent block checksum does not match extent block while finding inode for mnt
Failed to restore file /mnt/reos-storage-1/sftp_data
Could not find correct inode number past inode 2.
Try altering the filename to one of the entries listed below.
File name | Inode number | Deleted status
extundelete: Operation not permitted while restoring directory.
extundelete: Operation not permitted when trying to examine filesystem
[root] /mnt/reos-storage-2 $ man umount
[root] /mnt/reos-storage-2 $ extundelete --restore-all /dev/sda2
NOTICE: Extended attributes are not restored.
Loading filesystem metadata ... 29027 groups loaded.
Loading journal descriptors ... 0 descriptors loaded.
Searching for recoverable inodes in directory / ...
0 recoverable inodes found.
Looking through the directory structure for deleted files ...
0 recoverable inodes still lost.
No files were undeleted.
I then installed ext4magic using apt and ran the following command as in the instructions to make a backup of the journal
debugfs -R "dump /tmp/sda2.journal" /dev/sda2
crobar
(243 rep)
Jun 23, 2021, 11:38 AM
• Last activity: May 14, 2023, 01:46 PM
0
votes
0
answers
965
views
How to recover files accidentally deleted from LUKS encrypted device?
I just "Permanently Deleted" two pictures from a folder on my disk that is encrypted with LUKS. I ran `sudo testdisk` and navigated through the menus. It did identify the two now-missing files, but their entries indicated a size of zero.
Is LUKS designed to wipe the empty space so that deleted files are not recoverable? Or was testdisk not fit for the job? Are there other tools that may help save my photos?
xsn8853
(9 rep)
Apr 24, 2023, 04:15 PM
0
votes
1
answers
180
views
my system is almost useless after running this cmd sudo apt-get remove --purge '^nvidia-.*'
Hello. I'm very new to Linux and the command line and have been learning a lot through Google, Firefox, and Bing. I'm running Ubuntu Server Xenial 16.04.07 LTS with all security patches from Ubuntu (intrams enabled, FIPS, etc.) on a WS WRX80-E SE SAGE WIFI PRO Asus board with a Threadripper Pro 3975X, with my Ubuntu server on a FireCuda NVMe M.2 SSD, many more drives, and 132 GB of RAM.
After I ran this command, "***sudo apt-get remove --purge '^nvidia-.*' ...***", and followed through with the question it asked me, my system can't run any commands and is now practically useless and broken. I was trying to fix an issue with my two graphics cards not being detected, and I guess I did the wrong thing. I tried so many different approaches that I thought completely removing NVIDIA and then reinstalling it would help me, but the person who posted the command did me a very terrible disservice, as it was never my intent to take all of my permissions away, render my system useless, and halt my projects.
I hope someone on this site has working knowledge of the command I ran and what it did to my system and can help me fix these issues. I have one thing going for me: I'm still SSHed in on a terminal. If you have working knowledge and a clear understanding of what happens when the above command is run, please get back to me with a resolution. Thank you.
Edward Bullock
(1 rep)
Nov 16, 2022, 05:46 AM
• Last activity: Nov 16, 2022, 08:15 AM
1
votes
2
answers
3189
views
Restore deleted contents of a text file
Due to this bug, the content of one of my source files was deleted. I searched a lot and found that I can read the raw data on the storage device with the `dd` command, and I can find the address of the blocks that hold my file's data through the `hdparm` command; but the contents of my file were deleted and its size is zero, so `hdparm` shows nothing.
So I read the entire partition with `dd` and filtered the output with `grep`, and I found a line of my source file, so I am sure that my source file's contents are still alive on my storage device. Now I have some questions:
1. Is there an optimized way to restore the content of the file?
2. Is it a good idea to delete the file and then recover it with recovery tools?
3. Is there a way to find the block address after finding a line of my source file?
4. Is there a way to find the addresses of the old blocks?
5. Is there a way to force the `grep` command to print (for example) 1000 lines after finding the desired line?
6. I am not familiar with how files are stored in a directory. Does a directory have a physical boundary? I mean, can I find the start and end addresses of a directory in order to read all of the directory's contents? (I want to read the entirety of my `src` folder.)
Basically, how can I get my files back?
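Question 5 at least has a direct answer: GNU `grep`'s `-A NUM` option prints `NUM` lines of trailing context after each match (add `-a` when scanning a raw partition so the binary stream is treated as text). A toy demonstration on a throwaway file standing in for the partition dump:

```shell
tmp=$(mktemp -d)
printf 'junk\nmy source line\nnext1\nnext2\nnext3\n' > "$tmp/image"

# -A N: print N lines of context after each match (use -a on raw devices)
grep -A 2 'my source line' "$tmp/image"
```

On a real partition that would look like `grep -a -A 1000 'known line' /dev/sdXN > dump.txt`, run against the device rather than a file.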
Note:
1. I use Linux.
2. My partition's filesystem is ext4.
3. My storage device is an SSD.
Thanks.
ali tavakoli
(19 rep)
Mar 20, 2022, 11:25 AM
• Last activity: Nov 12, 2022, 09:15 PM
Showing page 1 of 20 total questions