How to repair/isolate hard drive bad blocks

During the last month Ubuntu has started having some problems: it shuts down suddenly without any apparent reason. I figured out that the problem is in the hard disk; if I run this command:
$ sudo badblocks -sv -b 512 /dev/sda
I get 24 bad blocks, all in the Linux partition (I have Windows on another one and it does not have the same problem). The question is whether there is a way (other than replacing the disk) to avoid these shutdowns. Maybe by isolating the bad blocks?

Software/filesystem bad-block marking is mostly a thing of the past; modern drives relocate bad blocks transparently and automatically.
If bad blocks start becoming "visible" to software, it probably means the drive is exhausting its reserve of spare replacement blocks, i.e. the drive is probably failing. Check the disk's SMART status to see whether the other SMART attributes confirm this, make a backup, and get ready to replace your drive.
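For example, with the smartmontools package installed, you can read the SMART data like this (a minimal sketch; /dev/sda is the disk from the question):
sudo apt-get install smartmontools   # Debian/Ubuntu package that provides smartctl
sudo smartctl -H /dev/sda            # quick overall health verdict
sudo smartctl -A /dev/sda            # attribute table; watch Reallocated_Sector_Ct,
                                     # Current_Pending_Sector and Offline_Uncorrectable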

I found a good tutorial that might help you: http://www.ehow.com/how_6864409_fix-bad-sectors-ubuntu.html
Open a terminal, run the mount command, and follow these steps:
Choose a filesystem to repair. For example, you might choose the filesystem named "/home" if the output from the "mount" command includes this line:
/dev/mapper/vg0-home on /home type ext3 (rw)
Type the "umount" command to unmount the filesystem. To unmount the "/home" filesystem, for example, issue the command "sudo umount /home".
Type the "fsck" command to repair the filesystem. The "fsck" command stands for "file system check"; it scans the disk for bad sectors and labels the ones that aren't working. To run fsck on the /home filesystem, issue the command "sudo fsck /dev/mapper/vg0-home". Replace "/dev/mapper/vg0-home" with the output from your "mount" command, as appropriate.
Type the "mount" command to remount the repaired filesystem. If you repaired the "/home" filesystem, then use the command "sudo mount /home".

Spinrite (grc.com) is the best tool I know of for recovering bad sectors and getting the drive to use backup sectors in their place. It's not cheap, but it works. If any of your friends owns a copy, you are allowed to borrow it. I've used it for 7 years now. It's good for periodic maintenance too.

Related

System Storage Taking Up Way Too Much Space in macOS Mojave

My Mac keeps sending me frequent alerts about low disk space. When I check the system storage, it shows that 170+ GB is occupied by "System". I am not sure where my space is getting used.
I tried a few cleaner tools as well, but they didn't help much.
Please help me resolve this.
After researching various Mac forums and Stack Exchange, I figured out that it's mostly because of the following:
Log files (might be crash logs/Docker files)
Your email messages stored in Outlook (in my case almost ~20 GB)
Logs related to cores when the system restarts (~10 GB)
Docker images (this had ~70 GB in my case)
Your non-system documents/downloads/iTunes
So the question is: how do I find which of these are unnecessary and safe to delete? These system files are not directly visible.
I tried a few tools like CleanMyMac, but they were all paid, so they didn't help much either.
To clean up unnecessary non-system files, you can use the Mac's built-in storage management tool. Just click on Optimize Storage and it will show all the non-system files.
To clean up unnecessary system files, use the command below:
sudo find -x / -type f -size +10G
This command will list all files occupying more than 10 GB. You can analyze the files and delete them as necessary.
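A variant that hides the permission errors such a scan inevitably prints, plus a quick look at the usual log locations (these paths are common defaults, not from the original post):
sudo find -x / -type f -size +10G 2>/dev/null               # same scan, without the noise
sudo du -sh /private/var/log /Library/Logs ~/Library/Logs   # typical log directories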
The highlighted cores are core dump files your Mac writes when processes crash, so they can be examined later; they are safe to delete.
The next step is to delete a hidden tmp folder.
It may show a size of 0 bytes because your user doesn't have permission to read it, but it can be occupying a huge amount of space. So delete its contents with root permission.
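For example, assuming the folder in question is /private/tmp, macOS's hidden temp directory (the post doesn't name the exact path, so treat this as an assumption):
sudo du -sh /private/tmp      # measure it with root permission (assumed path)
sudo rm -rf /private/tmp/*    # clear its contents; they are recreated as needed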
Now, look whether there are any Docker images present on your system and clean them all (Docker.raw); see the sketch below.
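If the docker CLI is available, one way to reclaim that space without deleting Docker.raw by hand is:
docker system prune -a --volumes   # remove unused containers, images, networks and volumes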
Using all these steps I was able to clean almost 100+ GB.
I recently found that this issue was caused by a memory leak in one of the Java applications I was running. I had to open Activity Monitor, search for Java processes, and Force Quit them. Rinse and repeat every time my space runs out. Also, fix your code where you can to get rid of memory leaks.
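A command-line equivalent of that Activity Monitor routine, in case you want to script it (a sketch; adjust the pattern if your app doesn't show up as a plain java process):
ps aux | grep -i '[j]ava'   # find the leaking Java processes and their PIDs
pkill -9 -i java            # force quit them, like Activity Monitor's Force Quit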

rm -rf ~$* on MacBook - what now?

So I recently, by complete mistake, ran the command:
rm -rf ~$*
So now my terminal shell looks crappy and all my files are gone. Great success.
My terminal shows the user as:
User%
How do I get it back to "User#Machine" format?
This isn't really an answer, just a bit to add onto the advice from @swa66.
I don't know what kind of Mac you have, but if yours is one where you can pull the hard drive out yourself, you might want to consider doing that. There are numerous tools on the market that can recover deleted files and directories as long as you have not written over the data. If you can put a new bare drive into your Mac, you can install a fresh copy of macOS and your third-party apps, etc., as swa66 advised. Then purchase one of the reputable disk recovery apps, attach your pulled-out drive via an external enclosure or dock (I like the docks best), and proceed to recover your important files.
It takes some work, and it requires a bit of expenditure if you don't have a suitable bare drive, an enclosure or dock, and the recovery software lying around. But depending on the value of your lost data, it may be worth it to you. As swa66 said, drive recovery services are extremely expensive, so if you have not overwritten your data with new data or repartitioned, you can have good success retrieving the most common file types yourself.
If you cannot pull your drive out but you have access to another Mac, there is the option of using target disk mode to access your drive from the other Mac, either to image the drive for later recovery attempts or for direct recovery; just make sure the recovery software supports target disk mode. If your lost data is important, be very careful what you do with your computer to avoid overwriting it. rm -rf does not actually overwrite or remove the data from the disk, so it is still there, but the locations of the files on the disk are now free to be overwritten by anything. Don't install recovery software onto the same drive that you are trying to recover from, for example.
Restore from backup
To get your files back, you have but one easy option: restore from backup.
Let's hope you made Time Machine backups on a regular basis.
rm -rf on the command line removes files and directories recursively, no mercy, no second guesses, no second chances.
The ~$*: I'm unsure what it expanded to. $* in bash expands to the arguments given to the script, but since it likely expanded to nothing, you probably nuked the home directory ~ of the user that executed this, and everything in it that you could erase recursively is gone. That's typically way too much to still have a stable environment.
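One habit that prevents this kind of accident: preview what the shell will expand a destructive command to by prefixing it with echo, e.g.:
echo rm -rf ~$*    # prints the expanded command instead of executing it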
So: restore from backup as your only simple option.
If you can't do that, there are two options left: Start over and Recover (some) data.
Start over
Myself, I'd just rebuild the system from scratch if I didn't want to, or was unable to, restore a backup. It is the only way to be sure of having a stable system again, where directories like Desktop, Downloads, Library, etc. still exist with their proper permissions and contents.
Recover (some) data
If you stop using the system ASAP, there's a chance that some services might find some valuable data on your hard disk. No guarantees at all, so consider it a last resort at best. It will not restore your system to working condition, but it might recover some valuable data.
What to do if you want to keep this option open:
Stop using the system NOW and shut it down. Every write your system makes to the hard disk is (potentially) overwriting the data you might want to recover.
If you have a system with removable hard disks: most modern Macs are not easy for end users to swap disks in themselves, nor is it recommended, and it's even likely to void warranties on the system, so take care!
-> Replace the disk in the machine with a new one and start rebuilding on that new disk. Use the old disk only as a target of the recovery; never boot from it or otherwise write to it.
If you have a system without an easily removable hard disk, you'll have to stop using the system until the valuable data has been recovered. If you're going for the DIY path below, you will have to bring the system up in "target disk mode"; see here how to do that: https://support.apple.com/en-us/HT201462
You now have two options:
DIY: I honestly have never had any success with this in real cases, but it is possible to find software that claims to do this for you. Obviously nothing is guaranteed, and the best you can hope for is to recover some of the valuable data files. This software is typically not cheap, but it is significantly cheaper than the next option.
Professional data recovery service: get the disk to the service of your choice. Expect this to be extremely expensive, with no guarantee of results.
Lessons learned
Every incident should allow for an after-the-fact moment where you learn from the experience. Without trying to preach too much:
Be careful with rm -rf ...; it is powerful.
Make backups regularly. On macOS, Time Machine is easy and painless and costs you next to nothing compared to this pain. Time Machine can back up to an external drive, an Apple Time Capsule, a partition on a NAS, ... If you leave the backup disk connected, you'll have hourly backups.
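If you prefer to set this up from the command line, macOS ships the tmutil utility; a minimal sketch, where /Volumes/BackupDrive is a hypothetical volume name:
sudo tmutil setdestination /Volumes/BackupDrive   # point Time Machine at the drive (hypothetical path)
sudo tmutil enable                                # turn automatic backups on
tmutil startbackup                                # start a backup immediately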

Ubuntu gone wrong because of using mv /*

After using this command
root@localhost:/var/www/google# mv /* ./
mv: cannot move ‘/dev’ to ‘./dev’: Device or resource busy
mv: cannot move ‘/proc’ to ‘./proc’: Device or resource busy
mv: cannot move ‘/run’ to ‘./run’: Device or resource busy
mv: cannot move ‘/sys’ to ‘./sys’: Device or resource busy
mv: cannot move ‘/var’ to a subdirectory of itself, ‘./var’
Now every command is failing.
After that, I wanted to zip my files as a backup, and that failed too.
Somebody help me, thank you.
I want to restore the system to normal.
If that's not possible, how can I zip my files with some zip tool?
Judging from the comments, you were running as root and the current directory was /var/www/google when you ran the command:
mv /* ./
This has moved everything movable from / to /var/www/google. One side effect is that the commands that normally live in /bin are now in /var/www/google/bin and those that live in /usr/bin are now in /var/www/google/usr/bin.
Do not reboot. Do not log out.
If you do, you will have to reinstall from scratch.
Temporarily, you can do:
PATH=/var/www/google/bin:/var/www/google/usr/bin:$PATH
cd /var/www/google
mv * /
These steps undo the primary damage (you should be able to reboot after this, but don't).
You then need to move the directories that are now in / but that belong in /var/www/google back to the correct place.
You should create a new terminal session and check that your system is working sanely (do not close the open terminal until you've demonstrated that all is OK again).
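A quick sanity check in that new terminal might look like this (a sketch; any commands that touch the restored directories will do):
echo $PATH        # should no longer need the /var/www/google entries
ls / /bin /etc    # top-level directories and core commands back in place?
sudo -v           # sudo still works, i.e. /etc/sudoers is back where it belongs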
Don't work as root unless you have to, and only for the minimum time necessary (one command at a time?).
If any of this fails, you should probably assume that a reinstall will be necessary. Or take the machine to someone who has the experience to help you fix the problems. There are endless things that could go wrong. Mercifully for you, the /dev directory was not moved; that avoids a lot of problems. However, the /etc directory was moved; commands could get upset about that.
Try to revert it (note that since mv itself was moved out of /bin, you may need to call it by its full path, as in the EDIT below):
cd /var/www/google
mv ./* /
Good luck!
PS, to zip:
zip archive.zip /path/to/zip/*
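Note that without -r, zip only archives the plain files matched by the glob; to include directory trees recursively:
zip -r archive.zip /path/to/zip    # -r recurses into subdirectories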
EDIT (if mv is no longer on your PATH because /bin was moved, call it by its full path):
/var/www/google/bin/mv /var/www/google/* /

Changing ulimit on Ubuntu 12.04 never works

I am working with Hadoop and need to change the number of open files, ulimit -n. I have seen similar questions on Stack Overflow and elsewhere and have tried everything in those answers, but it still does not work. I am working with Ubuntu 12.04 LTS. Here is what I have done:
I changed the limits in /etc/security/limits.conf, putting in settings for both * and root. I have also tried other limit values such as 10000 and unlimited.
* soft nofile 1513687
* hard nofile 1513687
root soft nofile 1513687
root hard nofile 1513687
I have also tried the above settings with - instead of soft and hard. After these changes, I made changes to /etc/pam.d/ files such as:
common-session
common-session-noninteractive
login
cron
sshd
su
sudo
I added session required pam_limits.so to the beginning of each file. I restarted the box in question, and the settings did not take effect.
I have also found that there were files inside the /etc/security/limits.d/ directory for the users hbase, mapred, and hdfs. I have tried changing the limits in these individual files as well, to no avail.
I have tried putting ulimit -S -n unlimited inside /etc/profile as well. It did not work.
Finally, I tried putting limit nofile unlimited unlimited as the first line inside the /etc/init.d/hadoop* files. That did not work either.
One interesting thing, though: I do not have HBase installed on the box, but I do have an hbase.conf file inside the /etc/security/limits.d/ directory, and the settings in that file are reflected by ulimit -n. The settings from hdfs.conf and mapred.conf are not, which suggests that something is overriding the settings for hdfs and mapred.
I guess I have tried everything people have suggested on several forums; is there anything else that I may have missed or done incorrectly?
I am using CDH 4.4.0 as my Hadoop distribution.
How are you checking ulimit?
I was experiencing a similar issue, where I would run sudo ulimit -n and still see 1024. This is because ulimit is a Bash built-in, not a separate binary, so sudo doesn't do what you'd expect with it. To see the changes from /etc/security/limits.conf reflected, I had to check ulimit as the actual user.
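For example, to check the limit that actually applies to the hdfs user (a sketch; substitute whichever user runs your daemons):
su - hdfs -c 'ulimit -n'           # login shell, so the pam_limits settings are applied
sudo -u hdfs bash -lc 'ulimit -n'  # the same idea via sudo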

How to stop NTFS volume auto-mounting on OS X?

I'm a bit newbie-ish when it comes to the deeper parts of OS X configuration, and I'm having to put up with a fairly irritating niggle which, while I can live with it, I know I could have sorted out in minutes under Windows.
Basically, I have an external disk with two volumes:
One is an HFS+ volume which I use for Time Machine backups.
The other is an NTFS volume that I use for general file copying etc. on Mac and Windows boxes.
So what happens is that whenever I plug the disk into my Mac's USB port, OS X goes off and mounts both volumes and shows an icon on the desktop for each. The thing is that to remove the disk you have to eject each volume, in this case both of them, which causes an annoying warning dialog to be shown every time.
What I'd prefer is some way to prevent the NTFS volume from auto-mounting altogether. I've done some hefty googling, and here's a list of things I've tried so far:
I've tried going through the options in Disk Utility.
I've tried setting AutoMount to No in /etc/hostconfig, but that is a bit too global for my liking.
I've also tried the suggested approach of putting settings in fstab, but it appeared that OS X (10.5) was ignoring those settings.
Any other suggestions would be welcome. I'm just a little disappointed that I can't simply tick (or untick) a box somewhere.
EDIT: Thanks heaps to hop for the answer; it worked a treat. For the record, it turns out that it wasn't OS X failing to pick up the settings: I actually had "msdos" instead of "ntfs" in the filesystem-type column.
The following entry in /etc/fstab will do what you want, even on 10.5 (Leopard):
LABEL=VolumeName none ntfs noauto
If the file is not already there, just create it. Do not use /etc/fstab.hd! No reloading of diskarbitrationd needed.
If this still doesn't work for you, maybe you can find a hint in the syslog.
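For example, using vifs, the safe way to edit /etc/fstab on OS X:
sudo vifs    # opens /etc/fstab in vi with proper locking (creates it if missing)
Then add this line, where the volume label "Shared" is a hypothetical example; use your NTFS volume's actual label:
LABEL=Shared none ntfs noauto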
This is not directly an answer, but
The thing is that to remove the disk you have to eject the volume and in this case do it for both volumes
I have a similar situation.
OS X remembers where you put your icons on the desktop, so I've moved the icons for both of my removable drives to just above where the trash can lives.
The eject procedure becomes:
Hit the top-left of the screen with the mouse to show the desktop.
Drag a small selection box around both removable drives.
Drag them the 2 cm onto the trash so they both get ejected.
Remove the FireWire cable.
