I am working with Hadoop and need to raise the open-files limit (ulimit -n). I have seen similar questions on Stack Overflow and elsewhere and have tried everything in those answers, but it still does not work. I am running Ubuntu 12.04 LTS. Here is what I have done:
Changed the limits in /etc/security/limits.conf, adding entries for both * and root. I have also tried other values such as 10000 and unlimited:
* soft nofile 1513687
* hard nofile 1513687
root soft nofile 1513687
root hard nofile 1513687
I have also tried the above settings with - in place of soft and hard. After these changes, I edited the following files under /etc/pam.d/:
common-session
common-session-noninteractive
login
cron
sshd
su
sudo
I added session required pam_limits.so to the beginning of each of these files. I then rebooted the box, but the settings did not take effect.
I also found files inside the /etc/security/limits.d/ directory for the users hbase, mapred, and hdfs. I tried changing the limits in these individual files as well, to no avail.
I have also tried putting ulimit -S -n unlimited inside /etc/profile. It did not work.
Finally, I tried putting limit nofile unlimited unlimited as the first line of the /etc/init.d/hadoop* files. That did not work either.
One interesting thing, though: I do not have HBase installed on this box, yet there is an hbase.conf file inside the /etc/security/limits.d/ directory, and the settings in that file are reflected by ulimit -n. The settings from hdfs.conf and mapred.conf, however, are not, which suggests that something is overriding the settings for hdfs and mapred.
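To see which limit the running daemons actually received (rather than what a login shell reports), the kernel's view can be read from /proc; a minimal sketch, where the datanode process name is only an example:

# PID of the oldest matching HDFS daemon process (process name is an assumption)
pgrep -f -o datanode
# the limits the kernel actually applied to that process
cat /proc/$(pgrep -f -o datanode)/limits | grep -i 'open files'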
I think I have tried everything people suggested on several forums. Is there anything else that I may have missed or done incorrectly?
I am using CDH 4.4.0 as my Hadoop distribution.
How are you checking ulimit?
I was experiencing a similar issue, where I would run sudo ulimit -n, and still see 1024. This is because ulimit is a Bash built-in. In order to see the changes reflected in /etc/security/limits.conf, I had to run ulimit as the actual user.
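If you need to check the limit for a service account rather than your own user, one option is to run the shell built-in under that account; a small sketch, assuming the daemons run as the hdfs user:

# print the soft and hard open-files limits as seen by the hdfs user
sudo -u hdfs bash -c 'ulimit -Sn; ulimit -Hn'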
My Mac keeps sending me frequent alerts about low disk space. When I check the storage overview, it shows 170+ GB occupied by "System", and I am not sure where that space is being used.
I tried a few cleaner tools as well, but they did not help much.
How can I resolve this?
After researching various Mac forums and Stack Exchange, I figured out that it is mostly down to the following:
Log files (possibly crash logs or Docker log files)
Email messages stored in Outlook (in my case almost 20 GB)
Core files written when the system restarts (~10 GB)
Docker images (~70 GB in my case)
Non-system documents, downloads, and iTunes content
So the question is: how do you find out which of these are unnecessary and safe to delete? These system files are not directly visible.
I tried a few tools like CleanMyMac, but they were all paid, so they were not much help either.
To clean up unnecessary non-system files, you can use macOS's built-in Storage Management tool. Just click Optimize Storage and it will show all the non-system files.
To clean up unnecessary system files, use the command below:
sudo find -x / -type f -size +10G
This command lists all files larger than 10 GB on the startup volume. You can review them and delete whatever is unnecessary.
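If the space is spread across many smaller files, a per-directory summary can be more useful than a file listing; a minimal sketch (the depth is just an example, and the error redirect hides permission warnings):

# human-readable sizes of the top-level directories on the startup volume
sudo du -x -h -d 1 / 2>/dev/null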
The highlighted cores are core-dump files that your Mac writes when something crashes or restarts; they are only useful for debugging, so they are safe to delete.
The next step is to clear the hidden tmp folder.
It shows a size of 0 bytes because your user does not have permission to read it, but it can be occupying a huge amount of space, so inspect and clear it with root permissions.
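A small sketch of checking and clearing it (on macOS, /tmp points at /private/tmp; close running apps first, since some of them may still be using files in there):

# show the real size as root
sudo du -sh /private/tmp
# remove the contents, not the directory itself
sudo rm -rf /private/tmp/*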
Now, check whether there are any Docker images present on your system and clean them all up (the space shows up in Docker.raw).
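If you use Docker from the command line, a sketch of reclaiming that space (this removes everything Docker considers unused, so make sure you no longer need those images or volumes):

# remove stopped containers, unused images, networks, and build cache
docker system prune -a
# optionally also remove unused volumes
docker system prune -a --volumes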
Using all these steps I was able to reclaim over 100 GB.
I recently found that this issue was caused by a memory leak in one of the Java applications I was running, presumably because the leaking process kept growing swap and temporary files on disk. I had to open Activity Monitor, search for the Java processes, and Force Quit them, then rinse and repeat every time my space ran out. Also fix your code where you can to get rid of memory leaks.
After running this command:
root@localhost:/var/www/google# mv /* ./
mv: cannot move '/dev' to './dev': Device or resource busy
mv: cannot move '/proc' to './proc': Device or resource busy
mv: cannot move '/run' to './run': Device or resource busy
mv: cannot move '/sys' to './sys': Device or resource busy
mv: cannot move '/var' to a subdirectory of itself, './var'
Now every command fails.
I wanted to zip my files as a backup afterwards, and that fails too.
Can somebody help me? Thank you.
I want to restore the system to normal.
If that is not possible, how can I at least zip my files with some zip tool?
Judging from the comments, you were running as root and the current directory was /var/www/google when you ran the command:
mv /* ./
This has moved everything movable from / to /var/www/google. One side effect is that the commands that normally live in /bin are now in /var/www/google/bin and those that live in /usr/bin are now in /var/www/google/usr/bin.
Do not reboot. Do not log out.
If you do, you will have to reinstall from scratch.
Temporarily, you can do:
PATH=/var/www/google/bin:/var/www/google/usr/bin:$PATH
cd /var/www/google
mv * /
These steps undo the primary damage (you should be able to reboot after this, but don't).
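At this point it is worth confirming that the shell is finding the core utilities in their original locations again; a quick bash-specific check:

# forget any command locations the shell remembered from before the move
hash -r
# these should now resolve to /bin and /usr/bin again
type mv ls cp tar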
You then need to move the directories that are now in / but that belong in /var/www/google back to the correct place.
You should create a new terminal session and check that your system is working sanely (do not close the open terminal until you've demonstrated that all is OK again).
Don't work as root unless you have to, and only for the minimum time necessary (one command at a time?).
If any of this fails, you should probably assume that a reinstall will be necessary. Or take the machine to someone who has the experience to help you fix the problems. There are endless things that could go wrong. Mercifully for you, the /dev directory was not moved; that avoids a lot of problems. However, the /etc directory was moved; commands could get upset about that.
Try to revert it:
cd /var/www/google
mv ./* /
Good luck.
P.S. To zip:
zip archive.zip /path/to/zip/*
EDIT: if the shell can no longer find mv on its PATH, call the moved binary by its full path:
/var/www/google/bin/mv /var/www/google/* /
Over the last month Ubuntu has started having problems: it shuts down suddenly without any apparent reason. I figured out that the problem is the hard disk; if I run this command:
$ sudo badblocks -sv -b 512 /dev/sda
I get 24 bad blocks, all in the Linux partition (I have Windows on another partition and it does not have the same problem). The question is whether there is a way, other than replacing the disk, to avoid these shutdowns. Maybe by isolating the bad blocks?
Software/filesystem bad-block marking is mostly a thing of the past; recent drives automatically relocate bad blocks transparently.
If you start getting bad blocks "visible" to software it probably means that the hard drive is exhausting the reserve of free replacement blocks, so it's probably failing. You should check the SMART status of the disk to see if this is actually confirmed by the other SMART attributes, do a backup and get ready to replace your drive.
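A quick way to do that check from the command line, assuming the smartmontools package is installed (sudo apt-get install smartmontools):

# overall health verdict plus the raw SMART attributes
# (look at Reallocated_Sector_Ct and Current_Pending_Sector)
sudo smartctl -H -A /dev/sda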
I found a good tutorial that might help you: http://www.ehow.com/how_6864409_fix-bad-sectors-ubuntu.html
Open the terminal, type the command mount, and follow these steps:
Choose a filesystem to repair. For example, you might choose the filesystem named "/home" if the output from the "mount" command includes this line:
/dev/mapper/vg0-home on /home type ext3 (rw)
Type the "umount" command to unmount the filesystem. To unmount the "/home" filesystem, for example, issue the command "sudo umount /home".
Type the "fsck" command to repair the filesystem. The "fsck" command stands for "file system check"; it scans the disk for bad sectors and labels the ones that aren't working. To run fsck on the /home filesystem, issue the command "sudo fsck /dev/mapper/vg0-home". Replace "/dev/mapper/vg0-home" with the output from your "mount" command, as appropriate.
Type the "mount" command to remount the repaired filesystem. If you repaired the "/home" filesystem, then use the command "sudo mount /home".
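Since the goal here is specifically to isolate bad blocks, e2fsck can also be told to scan for them and record them in the filesystem's bad-block list; a sketch for an ext3/ext4 filesystem (the device name is only an example, and the filesystem must stay unmounted while this runs):

sudo umount /home
# -c runs a read-only badblocks scan and marks what it finds; -cc does a slower non-destructive read-write test
sudo e2fsck -c /dev/mapper/vg0-home
sudo mount /home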
SpinRite (grc.com) is the best tool I know of for recovering bad sectors and getting the drive to use spare sectors in their place. It's not cheap, but it works. If any of your friends own a copy, you are allowed to borrow it. I've used it for 7 years now. It's good for periodic maintenance too.
I previously installed PostgreSQL 9.2 on my Mac using the EnterpriseDB installer. As such I had amended .bash_profile to read export PATH=/opt/local/lib/postgresql92/bin:$PATH, and everything was working just fine.
Then I had a hard drive corruption and had to reformat my computer and reinstall OSX. Initially I had to reinstall Snow Leopard (that's the version of the recovery discs I had), and then re-upgrade to Mountain Lion (which I was running prior to my crash). I then used a Time Machine backup with Migration Assistant to restore my Users, Applications, and "Other Files".
Looking around everything seemed to be back where it was before the crash. However, now when I try to do anything PostgreSQL-related, I get the error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Reading around online I found that this might be simply because my PostgreSQL server had not been started when I performed the system restoration. The official docs say to use the following command:
$ postgres -D /usr/local/pgsql/data
But I don't have a folder at /usr/local/pgsql; the only directory with data I can find is /Library/PostgreSQL/9.2/data. So I switched to postgres by doing sudo su postgres and tried postgres -D /Library/PostgreSQL/9.2/data again, which gave:
2013-08-18 11:38:09 SGT FATAL: could not create shared memory segment: Invalid argument
2013-08-18 11:38:09 SGT DETAIL: Failed system call was shmget(key=5432001, size=32374784, 03600).
2013-08-18 11:38:09 SGT HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 32374784 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.
Where do I go from here? This whole thing is a bit strange; I don't remember ever having to start the server when I initially installed PostgreSQL...
EDIT: I also tried initdb -D /Library/PostgreSQL/9.2/data in case the db cluster was missing, but got:
initdb: directory "/Library/PostgreSQL/9.2/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/Library/PostgreSQL/9.2/data" or run initdb
with an argument other than "/Library/PostgreSQL/9.2/data".
So it should still be there, restored along with most of the other stuff on my system, right?
Your kernel parameters need tweaking. See the PostgreSQL documentation on managing kernel resources. Configure those parameters according to your system's memory and you should be good.
As a footnote, I would like to add that from PostgreSQL 9.3 onwards this tweaking is no longer necessary, since 9.3 switched to a shared-memory implementation that needs far less System V shared memory.
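For reference, on OS X the System V shared-memory limits the error message refers to are set via sysctl, typically by editing /etc/sysctl.conf and rebooting. A sketch with illustrative values large enough for the ~32 MB request shown above (shmall is measured in 4 kB pages on this platform, so 16384 pages = 64 MB, matching shmmax):

kern.sysv.shmmax=67108864
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=16384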
Problem:
I keep getting Hash: Element not found errors.
Technical Details:
uTorrent 3.2.3 (latest as of this writing)
Running about 30 Torrents (all downloading)
Win 7 64 bit
Dell N5050 :sigh:
Symptoms:
Force recheck is disabled (sometimes)
The torrent halts when this happens; when I resume it, it proceeds smoothly until the next Hash: Element not found error
It doesn't happen at any particular percentage
Solutions Attempted:
I searched online a lot and found the ones below:
Re-download elsewhere. Set a different download folder and re-download the torrent. NO! DOESN'T WORK! And it's FRUSTRATING that I had to DELETE my 90%-downloaded torrent!!
Good ol' thump. Swear at the screen while making heavy fist thumps and hand gestures. Surprisingly, this doesn't work!
Force recheck. Doesn't help, and sometimes it isn't available.
Disk I/O errors. I came across an article which said this might be due to disk I/O errors.
Realized I was using a Dell laptop
Realized the HDD had failed on a previous Dell
Tried solution #2 again. Same results.
This seemed like the most likely explanation for the problem, so I read articles about HDD checking and downloaded a few of the suggested tools to check HDD health
Interestingly, the HDD was OK
None of these worked!
I got this error when my hard drive ran out of disk space, so I think it is related to some file/disk access issue, depending on where you are writing to.
I was trying to download some large files to a network drive (Windows XP to Samba) and I was getting the same Element Not Found error.
In my case, enabling the disk cache solved the issue. I had to uncheck the "Disable Windows caching of disk writes" and "Disable Windows caching of disk reads" options under uTorrent Options -> Preferences -> Advanced -> Disk Cache (thereby enabling the cache).
Source: http://forum.utorrent.com/topic/34159-error-element-not-found/page-2#entry251137
I really think this question belongs on Super User, though.
The working solution turned out to be pretty simple.
Check your Anti-Virus!
My antivirus was quietly quarantining a few suspected files.
I added those files to the exclusion list.
All is well again.