iowait for while/sleep bash script on debian/64 - bash

I'm running a MySQL database server (Debian Squeeze, 64-bit) with 48GB RAM and 8TB of disk, lots of inserts, and quite a few CPU-intensive background processes.
Since some of these processes kept dying, I used a simple bash watchdog to restart them. That worked, but produced a lot of iowait. I simplified the problem down to:
#!/bin/bash
while true; do sleep 1; done
which still produces iowait of up to 90% for the bash(!) process, as seen in iotop. There's no disk read or write displayed, and the test script really is just this one line.
Note that everything is working fine and the server is still perfectly responsive. I'm just curious to know what's going on.
Does anyone have any idea?

I was not able to reproduce your results; possibly it's a bug.
I tested on Arch (VM) and Ubuntu (physical): running a while loop with a sleep of one second resulted in minimal I/O and essentially no iowait.
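If you want to cross-check what iotop is attributing to the bash process, something like the following might help. This is only a sketch: it assumes the sysstat package is installed for pidstat, iotop needs root, and test.sh / <PID> are placeholders for your script and its process ID.
# find the PID of the test script (test.sh is a placeholder name)
pgrep -f test.sh
# per-process disk I/O and I/O-delay statistics, sampled once per second
pidstat -d -p <PID> 1
# batch-mode iotop: only active processes, five one-second samples
sudo iotop -b -o -P -n 5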

Check status of a forked process?

I'm running a process that will take, optimistically, several hours, and in the worst case, probably a couple of days.
I've tried a couple of times to run it and it just never seems to complete (I should add that I didn't write the program; it's just a big dataset). I know my syntax for the command is correct, as I use it all the time on smaller data and it works properly (I'll spare you the details, as they are obscure for SO and I don't think they're relevant to the question).
Consequently, I'd like to leave the program running unattended, forked into the background with &.
Now, I'm not totally sure whether the process is just grinding to a halt or is running but taking much longer than expected.
Is there any way to check the progress of the process other than ps and top (with 1 pressed to show per-CPU usage)?
My only other thought was to get the process to output a logfile and periodically check to see if the logfile has grown in size/content.
As a sidebar, is it necessary to also use nohup with a forked command?
I would use screen for this purpose; see the man page for more details.
A brief summary of how to use it:
screen -S some_session_name - starts a new screen session named some_session_name
Ctrl-a d - detaches from the session, leaving it running
screen -r some_session_name - reattaches to the session
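For this particular job, a sketch might look like the following (long_job, input_data, bigjob and run.log are placeholder names). Because the program runs inside the screen session rather than your login shell, nohup isn't needed; the job survives you logging out, and the logfile gives you the progress check you mentioned:
# start a detached session that runs the job and captures all of its output
screen -S bigjob -d -m bash -c './long_job input_data > run.log 2>&1'
# reattach later with: screen -r bigjob   (Ctrl-a d to detach again)
# or watch progress without attaching at all
ls -lh run.log
tail -f run.log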

Rcpp in RStudio, can't cache in memory when running in parallel if I don't open the cpp file in RStudio

I ran into a weird problem, though I wonder if I'm asking the right question:
result = parLapply(cl, 1:4,
    function(j, rho_list_needed, delta0_needed, V_iter_s, Sigma_list_needed) {
        rhoj = rho_list_needed[[j]]
        delta0_in_cpp = delta0_needed
        v = as.vector(V_iter_s[, , , j])
        sigmaj = Sigma_list_needed[[j]]
        sourceCpp('sample_Z.cpp')  # first call compiles (slow), later calls are cached
        return(Sample_Z(rhoj, delta0_in_cpp, v, sigmaj, A, Cmatrix))
    },
    rho_list_needed, delta0_needed, V_iter[[s]], Sigma_list_needed)
When I was testing my sample_Z.cpp in parallel through parLapply, a single calculation takes around 1 sec. In parallel, my 4 iterations take around 1.2 secs, which is a big improvement compared to the unparallelized version, which takes 8 sec.
There was no problem at all when I ran my program yesterday. Just now I noticed a bug and revised my program. To give my PC a fresh environment, I restarted my computer. When I started to run my program, I only opened the .R file and ran it. But that parallel step took 9 sec, where it used to take 1.2 sec. The 9 sec was measured after warming up my cores, i.e., the cpp had already been sourced before I timed it.
I just don't know where the bug is. I then tried to source the cpp file directly in my global environment, and found that there was no caching at all: the second call took the same time as the first one.
But then I accidentally opened sample_Z.cpp in the RStudio editor, and now everything works correctly.
I don't know what keywords to use to search for this problem, and I don't know whether opening the cpp file is a must; I had never needed to before.
Can anyone tell me what the real issue is? Thanks!
After restarting your PC, you probably had extra processes running which would have competed for CPU cores and slowed down your algorithm. The fact that you're rebooting suggests to me that you're not using Linux... but if you are, watch with top while starting your code, or use the equivalent for your platform.
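For the Linux case, a minimal way to take that snapshot while the parLapply call is running (this assumes procps top and ps; adjust for your platform):
# overall load plus the busiest processes, one batch-mode sample
top -b -n 1 | head -n 15
# per-process view of the cluster workers (they may show up as R or Rscript)
ps -o pid,pcpu,pmem,etime,cmd -C R,Rscript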

How to check Matplotlib's speed in Xcode and increase performance?

I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing Python stuff in Xcode, just man up and use vim" --> I like the organizing ability and the built-in version control; they make elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py' does all the import stuff for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to deal with all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login to reconnect as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick png files and opening them with some other viewer, although I guess that would also take time to open up. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the entire day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime
start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print td.total_seconds() # requires python version >= 2.7
Run it in xcode and from the command line, see what the difference is.
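To compare whole runs rather than just the plotting section, you can also time the script from a shell. A rough sketch (main.py is the master file mentioned above; the last line assumes a Matplotlib recent enough to honour the MPLBACKEND environment variable, otherwise call matplotlib.use('Agg') before importing pyplot):
# time the full script when launched from a plain shell, to compare with Xcode
time python main.py
# rough peak-memory figures on macOS (BSD time); on Linux use /usr/bin/time -v
/usr/bin/time -l python main.py
# run with a non-GUI backend so plt.show() opens no windows and figures
# only appear where you explicitly savefig() them
MPLBACKEND=Agg python main.py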

Why does CUDA code run so much faster in NVIDIA Visual Profiler?

A piece of code that takes well over 1 minute on the command line finishes in a matter of seconds in NVIDIA Visual Profiler (running the same .exe). So the natural question is: why? Is there something wrong with the command line, or does Visual Profiler do something different and not really execute everything as it would on the command line?
I'm using CUBLAS, Thrust and cuRAND.
Incidentally, there's been a noticeable slowdown in compiled code on my machine very recently, even old code that previously ran quickly, hence I'm getting suspicious.
Update:
I have checked that the calculated output on command line and Visual Profiler is identical - i.e. all required code has been run in both cases.
GPU-shark indicated that my performance state was unchanged at P0 when I switched from command line to Visual Profiler.
However, GPU usage was reported at 0.0% when run with Visual Profiler, but went as high as 98% when run from the command line.
Moreover, far less memory is used with Visual Profiler. When run from the command line, Task Manager indicates usage of 650-700MB of memory (spiking at the first cudaFree(0) call). In Visual Profiler that figure goes down to ~100MB.
This is an old question, but I've just finished chasing the same issue (though the cause may not be the same).
Namely: my app achieved between 900 and 1100 frames (synchronous launches) per second when running under NVVP, but around 100-120 when running outside of the profiler.
The cause appears to be a status message I was printing to the console via cout. I had intended for this to only happen about once every 100-200 frames. Instead, it was printing the status message for every frame, and the console IO became the bottleneck.
By only printing the status message every 100 frames (though the optimal number here would depend on your application), the frame rate jumped back up to match what I was seeing in NVVP. Of course, this could also be handled in a separate CPU thread if that sort of overhead is unacceptable in your circumstances.
NVVP has to redirect stdout to its own internal buffer in order to capture the application's output (which it shows in its console tab). Its mechanism for buffering or processing that output appears to have significantly less overhead than letting the operating system handle it directly: it looks like NVVP buffers everything and displays it from a separate thread, or simply saves output until some threshold is reached before adding it to its console tab.
So, my advice would be to disable any console IO, and see if or how that affects things.
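A quick way to test that without touching the code is to redirect the program's stdout and compare timings. A sketch (app.exe stands in for your actual binary; cmd.exe equivalents are noted in the comments):
# baseline: console output goes to the terminal
time ./app.exe
# same run with stdout discarded (cmd.exe: app.exe > NUL)
time ./app.exe > /dev/null
# or keep the output, but send it to a file instead of the console
time ./app.exe > run.log 2>&1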
(It didn't help that VS2012 refused to profile my CUDA app. It would have been nice to see that 80% of the execution time was spent on console IO.)
Hope this helps!
This should not happen. I've never seen anything like it; it's probably something in your setup.
It could be that some JIT compilation step is skipped by the profiler, which could also explain the difference in memory usage. Try creating a fat binary?
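For reference, a fat binary is built by passing several -gencode options to nvcc, so the device code for your GPU is already embedded and no JIT step is needed at startup. A sketch (the file name and the compute capabilities are examples; match them to your card):
nvcc -O2 my_kernels.cu -o app -lcublas -lcurand \
     -gencode arch=compute_20,code=sm_20 \
     -gencode arch=compute_30,code=sm_30 \
     -gencode arch=compute_30,code=compute_30   # also embed PTX for newer GPUs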

bash protect HD from excessive use

How do I avoid wearing out the HD? I have a bash script running on an Ubuntu machine, with this meta code:
bash1.sh
while(true)
run bash2.sh
sleep 60 seconds
done
bash2.sh:
if(directory is empty): exit
process file
delete file
The directory is network-shared, and the computer is not doing anything else. Once per day a new file arrives and is processed. (I do know that bash1.sh can be replaced by watch.) My concerns:
- bash1.sh reads bash2.sh every time - presumably that could be avoided by having only one script!?
- bash2.sh reads the same directory every time. Is the directory really read from the HD, or does Ubuntu cache it in RAM so that it is only read from disk when something changes?
- Is it a problem that the same place on the HD is read every time, or does it not matter because the HD is already spinning?
- If the HD never sleeps, does it matter if I set the loop time down to only one second?
Maybe the directory could be a pure RAM dir - how do I do that? Or is there some simple way to check whether something has arrived over the network without reading the directory?
Reading a file or directory once every sixty seconds is not excessive use.
Seriously, don't worry about it.
If it's really worrying you, you can rethink your strategy for detecting the file.
For example, do you really need to know, within sixty seconds, that the file has arrived? Can it arrive any time during the day? Can some parts of the day be considered unlikely?
Using information like that, you can adjust the timing of checks to suit. If the file is supposed to be delivered after 4pm, don't check for it at all before then.
Check for it every sixty seconds between 4pm and 5pm, then every ten minutes after that.
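As a sketch of that schedule (the 4pm/5pm window and the intervals are just the example numbers from above, and bash2.sh is the checking script from the question):
#!/bin/bash
# poll only when the file is plausibly due: nothing before 4pm,
# every minute between 4pm and 5pm, every ten minutes after that
while true; do
    hour=$((10#$(date +%H)))    # force base-10 so "08"/"09" don't parse as octal
    if   [ "$hour" -lt 16 ]; then sleep 600
    elif [ "$hour" -lt 17 ]; then ./bash2.sh; sleep 60
    else                          ./bash2.sh; sleep 600
    fi
done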
These are all business-related decisions that can be made but I would still suggest that it's unnecessary. Provided you regularly back up your disks (and have standby hardware if you need to be back up in a hurry), you shouldn't lose anything.
In fact, if you were really paranoid, you could dedicate an entire machine for this, whose sole purpose is to receive the file via FTP and, when it arrives, send it across to your real processing box.
Put nothing else on that machine and have a warm standby (exactly the same software, IP address and so on, but powered down) so that, if it fails, the standby can be activated in minutes.
The real processing machine is then only written to once a day - that's unlikely to affect the disk lifetime.
That's probably too paranoid for my liking but it shows that there are ways to mitigate almost any problem.
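If you do want to go further, the two specific ideas from the question are both doable, with caveats. A hedged sketch (the mount point and size are placeholders; note that inotify generally cannot see changes made on the remote side of a network filesystem, so it mainly helps when the shared directory lives on this machine):
# a pure RAM directory: tmpfs lives in page cache, so reads never touch the disk
# (contents are lost on reboot and may be swapped out under memory pressure)
sudo mount -t tmpfs -o size=64m tmpfs /srv/incoming
# event-driven instead of polling: needs the inotify-tools package
inotifywait -m -e create -e moved_to /srv/incoming |
while read -r dir event name; do
    ./bash2.sh    # run the processing script only when something actually arrives
done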
