I was running some jobs with SLURM on my PC when the computer rebooted.
Once the computer was back on, squeue showed that the jobs that had been running before the reboot were no longer running, and the node was in a drain state. The jobs appeared to have been automatically requeued after the reboot.
I couldn't submit more jobs because the node was drained, so I ran scancel on the jobs that had been automatically requeued.
The problem is that I cannot free the node. I tried a few things:
Restarting slurmctld and slurmd
"undraining" the nodes as explained in another question, but no success. The commands ran without any output (I assume this is good), but the state of the node did not change.
I then tried manually rebooting the system to see if anything would change
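For reference, the "undrain" attempt was along these lines (a minimal sketch; neuropc is the node name shown in the scontrol output below):
sudo scontrol update nodename=neuropc state=undrain
# or, equivalently for a drained but otherwise healthy node:
sudo scontrol update nodename=neuropc state=resume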
Running scontrol show node neuropc gives
[...]
State=IDLE+DRAIN ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
[...]
Reason=Low RealMemory [slurm#2023-02-05T22:06:33]
Weirdly, the System Monitor shows that all 8 cores keep showing activity between 5% and 15%, whereas the Processes tab shows only one app (TeamViewer) using less than 4% of the processor.
So I suspect the jobs I was running were somehow kept running after the reboot, or are still being held by SLURM.
I use Ubuntu 20.04 and slurm 19.05.5.
To strictly answer the question: no, they cannot. The jobs might or might not be requeued depending on the Slurm configuration, and restarted either from scratch or from the latest checkpoint if the job is able to do checkpoint/restart. But there is no way a running process can survive a server reboot.
This answer solved my problem. Copying it here:
The issue could be that RealMemory=541008 in slurm.conf is too high for your system. Try lowering the value. Let's suppose you do indeed have 541 GB of RAM installed: change it to RealMemory=500000, do a scontrol reconfigure and then a scontrol update nodename=transgen-4 state=resume.
If that works, you could try to raise the value a bit.
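A minimal command-line sketch of that fix, adapted to the node in the question (neuropc); the config path and the example RealMemory value are assumptions, so check what slurmd actually detects first:
# Show the memory (and other resources) slurmd detects on this node
slurmd -C
# Set RealMemory for the node in slurm.conf (on Ubuntu typically /etc/slurm-llnl/slurm.conf)
# to a value at or below what slurmd -C reports, e.g.:
#   NodeName=neuropc ... RealMemory=7800
# Then push the new configuration and return the node to service
sudo scontrol reconfigure
sudo scontrol update nodename=neuropc state=resume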
Related
I have a Docker image which has issues with memory leaks. It's a known issue for this specific tool, and the authors recommend restarting nodes from time to time as a workaround.
However, a daily restart is not always enough, and some processes get killed by the Jelastic OOM killer. I don't want them merely killed; I want them completely restarted. If I had real Docker running on a machine, I could instruct it to restart the container after an OOM event or something similar, but in Jelastic I don't have that option.
A simple solution would be to add supervisord or something similar to my setup and have it take care of restarts (see the sketch below), but I'm wondering whether there is some out-of-the-box solution from Jelastic for this.
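For reference, the supervisord-style fallback boils down to a restart loop around the leaking process. A minimal sketch, where /opt/tool/run.sh is a placeholder for the real start command:
#!/bin/sh
# Keep restarting the process whenever it exits (e.g. after being OOM-killed)
while true; do
    /opt/tool/run.sh
    echo "$(date): process exited, restarting in 5s" >> /var/log/tool-restart.log
    sleep 5
done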
Recently the master node of my ICP 2.1.0.1 environment (built on an OpenStack VM) became very slow. Actually it is the Linux VM itself that is slow, not just the ICP product: even simple Linux commands (ls, pwd, cd, etc.) are slow.
The thing is, I wasn't even using this environment: no workload, the system was completely idle.
I used top to monitor CPU usage but didn't find any long-running processes consuming significant CPU time.
How can that happen?
Note that this same issue has occurred at least twice. I just set up the environment and left it there.
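For reference, a couple of checks beyond top that can distinguish a runaway process from I/O wait or hypervisor CPU steal, which are common causes of an "idle but slow" VM on OpenStack (a sketch; iostat comes from the sysstat package):
# Watch the wa (I/O wait) and st (steal) columns
vmstat 5
# Per-device I/O latency and utilisation
iostat -x 5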
Although the root cause has not yet been identified, simply restarting the cluster solved the issue:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/restart_cluster.html
Restarting Docker took several hours to finish, but finally the system became fast again.
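The linked page has the full procedure; the core of it on each node is restarting the Docker service, roughly like this sketch (follow the documentation for the exact ordering across master and worker nodes):
# Restart Docker on the node
sudo systemctl restart docker
# Check that the ICP containers come back up
docker ps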
I'm not sure if the same issue might happen again in the future. Monitoring..
Note that I didn't reboot the Linux VM, because I found that after rebooting Linux last time some Docker containers could not start up successfully, which led me to check the page below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/troubleshoot/restart_master_console.html
Since the ICP 2.1.0 release some new features have been added, so you need to ensure your VM hardware configuration matches the hardware requirements listed in the link below:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/hardware_reqs.html
I am running Cloudera Hadoop on my laptop in an Oracle VirtualBox VM.
I have given it 5.6 GB of my 8 GB of RAM, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load the services will not stay up, and when I try a query, at least Hive will be down within 20 minutes. Sometimes they go down like dominoes: one after another.
More memory seemed to help somewhat: with 3 GB and all services running, Hue was blinking red whenever Hue itself managed to come up. And after rebooting it would take 30-60 minutes before I managed to get the system up enough to even try running anything on it.
There have been two sensible notices (that I have managed to find):
- A warning about swapping.
- A crash notice when the system used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would go up to dozens of gigabytes, but whatever the reason was, it has passed: the system now runs more steadily within the 5.6 GB I have given it, after shutting down a few services (see my answer to myself below).
Still, it is only more stable, not stable: right after writing that I got a swapping warning and Hive went down again. What could be the reason for more or less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again the system was swapping 13 pages/second and using 5.9 GB of the 5.6 GB allotted. So basically my system starts crashing more or less right after it starts to swap. "428 pages were swapped to disk in the previous 15 minute(s)"
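For anyone who wants to watch this directly inside the VM rather than through Cloudera's alerts, swap activity can be checked with, for example:
# Current memory and swap usage
free -m
# si/so columns show pages swapped in/out per second; sustained non-zero values mean trouble
vmstat 5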
I have used the default installation options as far as the hard drive is concerned.
The only addition is a shared folder between Windows and the VM. It works somewhat strangely, locking files all the time, so I use it like FTP, only for passing files from one system to the other. I can go days without using it and the services still crash, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day: Service Monitor and Hive are about even in their crash frequency. After those come Activity Monitor and Event Server, which always seem to crash together. I believe YARN crashes as well, but it comes back up on its own. Last time Hive crashed first and was then followed by Service Monitor, Hive (a second time), Activity Monitor and Event Server, all of them.
As swap is on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
So there is nothing wrong with the swap disk, and I have not noticed any disk errors anywhere else either.
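One caveat worth spelling out: the read-write badblocks test (-w) overwrites the device, which is only safe here because swap was disabled first with swapoff -a. Afterwards the swap area has to be recreated and re-enabled, roughly:
# badblocks -w destroyed the swap signature, so recreate it
mkswap /dev/VolGroup/lv_swap
# re-enable all swap listed in /etc/fstab
swapon -a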
Note that you could also check the file system from the Windows side. But I expect that if you let Windows fix your Linux file system, you have a good chance of destroying your Linux installation, so I did my checks somewhat conservatively, sticking to commands that AFAIK are safe to execute.
About half of the services kept going down, so giving more specifics would be a long story.
I succeeded in getting the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of these helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I decreased its priority to the lowest possible.
- CcmExec.exe got into a loop on my DVD drive and kept reading it forever. I solved this by taking the DVD out of the drive. Later on I killed the process tree to keep it from bothering me for a while.
I found these using Windows resource manager.
The VM requires 4GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
It's not clear to me whether you are using the QuickStart VM, though. It's set up to run just the essential services and is tuned to conserve memory rather than exploit lots of it.
It sounds like you are running your own installation, on one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master, worker processes, monitoring processes, etc. You don't need most of them.
You have also probably left the memory settings at defaults suitable for a server-class machine with 16+ GB of RAM. Remember that these services usually run across many machines, not all on one.
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.
I have a Server 2008 scheduled job that does the following:
Tests whether the active/passive MSCS service is running on this node.
If so, maps a shared folder.
Moves a bunch of files from the shared folder to the clustered drive.
Unmaps the shared folder.
The job runs every 5 minutes. The files are produced by another system, and though the process is not completely time-critical, delaying it much more than this is not acceptable to the business.
So far so good.
What I'm seeing is that although the script works correctly, the job history reports, on the second attempt to run the job, that 'there is a previous copy of the job still running'.
Does anyone have any thoughts on:
Why is this happening?
And how to go about debugging it?
If this were Unix/Linux it would not be a problem, but here it is a complete mystery to me.
After several months of successful and uninterrupted continuous integration, my Hudson instance, running on Mac OS X 10.7.4 Lion, has decided it wants to enter shutdown mode after every 20-30 minutes of inactivity.
For those of you familiar with shutdown mode, the instance of course doesn't actually shut down, but it has the (in this case) undesirable effect of stopping new jobs from starting.
I know I haven't changed any settings, which makes me think the problem has been slowly growing and keeps triggering shutdown mode.
I know there is plenty of storage space on the machine, with 400+ GB free, so I'm wondering what else could trigger shutdown mode without someone manually doing it through the Hudson web portal.
As mentioned before, the problem also seems to be tied to inactivity. I tried a quick fix: a build job that does nothing, running every 5 minutes. It appeared to work at first, but after long periods of inactivity I still find the instance back in shutdown mode.
Any ideas what might be going on?
Solution: disable the thinBackup plugin
...
I figured this out by taking a look at the Hudson logs at http://localhost:8080/log/all
thinBackup was running every time the Hudson instance went into shutdown mode.
The fact that shutdown mode occurred during periods of inactivity is also consistent with the behavior of thinBackup.
I then disabled the plugin, and Hudson no longer enters shutdown mode. What's odd is that thinBackup had been installed for some time before this problem started occurring. I am seeking a solution from the thinBackup maintainers so that I can re-enable the plugin without the negative effects, and will update here if I get an answer.
According to this link, the thinBackup plugin puts Hudson into shutdown mode on purpose to do the backup activity. It is supposed to automatically come out of shutdown mode once it is done.
I saw this with some jobs that seemed to stall and never finish overnight, so Hudson never came out of shutdown mode because thinBackup must have been waiting on the jobs to finish.