Following xv6/Linux forking and waitpid processes - fork

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int count = 0;
    int pid;
    if (!(pid = fork()))
    {
        while ((count < 2) && (pid = fork()))
        {
            count++;
            printf("%d", count);
        }
        if (count > 0)
        {
            printf("%d", count);
        }
    }
    if (pid)
    {
        waitpid(pid, NULL, 0);
        count = count << 1;
        printf("%d", count);
    }
    return 0;
}
I'm having trouble reading this piece of code. I can't figure out how many system calls are made, or how to even begin tracing them, so I can't understand how the output is produced. Assume the scheduler can interleave the processes in any order. What are all the possible outputs, and how many system calls are made?

Run the program under strace -f ./myprogram; the -f flag follows every forked child, so you can see each system call that every process makes.
From looking at the code, three additional processes are created (assuming every fork() succeeds): the outer fork() produces one child, and that child's while loop forks twice more before count reaches 2.
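If you want to watch the fork tree yourself, here is a minimal instrumented sketch of the same code (an illustration, not the original exercise): it tags every message with getpid() and writes to stderr so stdio buffering does not duplicate or reorder the output.

#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int count = 0;
    int pid;
    if (!(pid = fork()))                        // only the first child enters this block
    {
        while ((count < 2) && (pid = fork()))   // the first child forks twice here
        {
            count++;
            fprintf(stderr, "pid %d: count=%d\n", (int)getpid(), count);
        }
        if (count > 0)
        {
            fprintf(stderr, "pid %d: final count=%d\n", (int)getpid(), count);
        }
    }
    if (pid)                                    // original process and the first child
    {
        waitpid(pid, NULL, 0);
        count = count << 1;
        fprintf(stderr, "pid %d: shifted count=%d\n", (int)getpid(), count);
    }
    return 0;
}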

Related

Fork/execlp in #pragma omp parallel for stops arbitrarily

I am programming a Kalman-filter based realtime program in C that uses a large number of realizations. To generate the realization output I have to execute an external program (groundwater simulation software) about 100 times. I am therefore using OpenMP with fork and execlp to parallelize this block:
#pragma omp parallel for private(temp)
for (i = 1; i <= n_real; i++) {
    /* Start ensemble model run */
    pid_t pid = fork();
    int ffork = 0;
    int status;
    if (pid == 0) {
        // Child process
        log_info("Start Dadia for Ens %d on thread %d", i, omp_get_thread_num());
        sprintf(temp, "ens_%d", i);
        chdir(temp);
        execlp("../SPRING/dadia", "dadia", "-return", "-replace_kenn", "73", "ini.csv", "CSV", NULL);
        log_info("Could not execute function dadia - exit anyway");
        ffork = 1;
        _exit(1);
    }
    else {
        // Parent process
        wait(NULL);
        if (ffork == 0) {
            log_info("DADIA run ens_%d successfully finished", i);
        }
    }
}
In general the code runs smoothly for a small number of realizations (with 6 threads). However, sometimes the code hangs in the last cycle of parallel iterations. This only happens when the number of iterations is much larger than the number of threads. I tried scheduling the for loop with different options, but it didn't solve the problem. I know that fork is not the best thing to use together with OpenMP, but I am wondering why it sometimes hangs at arbitrary points.
Thanks a lot for any kind of feedback.
I tried different Ubuntu versions (including different compiler versions).
This version finally seems to run stably. Thanks for your hints:
#pragma omp parallel for private(temp) schedule(dynamic)
for (i = 1; i <= n_real; i++) {
    /* Start ensemble model run */
    /* Calling shell commands inside an OpenMP environment requires using fork(),
       otherwise the parallel process is terminated.
       Note exec automatically ends the child process after termination.
       If Sitra fails, the child process needs to be terminated manually. */
    pid_t pid = 0;
    int status;
    pid = fork();
    if (pid == 0) {
        log_info("Start SITRA for Ens %d on thread %d", i, omp_get_thread_num());
        sprintf(temp, "ens_%d", i);
        chdir(temp);
        execl("../SPRING/sitra", "sitra", "-return", (char *) 0);
        perror("In exec(): ");
    }
    if (pid > 0) {
        pid = wait(&status);
        log_info("End of sitra for ensemble %d", i);
        if (WIFEXITED(status)) {
            printf("Sitra ended with exit(%d).\n", WEXITSTATUS(status));
        }
        if (WIFSIGNALED(status)) {
            printf("Sitra ended with kill -%d.\n", WTERMSIG(status));
        }
    }
    if (pid < 0) {
        perror("In fork():");
    }
}
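One detail worth checking here (an observation, not part of the answer above): wait(NULL)/wait(&status) reaps any child of the whole process, so when several OpenMP threads fork concurrently, one thread can reap a child that another thread started. That alone does not explain a permanent hang, but it does mean exit statuses and log messages can be attributed to the wrong ensemble run. Waiting on the specific pid returned by fork() keeps each thread paired with its own child; a minimal sketch of that pattern, with a hypothetical ./some_tool standing in for the simulator:

#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>
#include <omp.h>

int main()
{
    #pragma omp parallel for schedule(dynamic)
    for (int i = 1; i <= 100; i++) {
        pid_t pid = fork();
        if (pid == 0) {                          // child: replace the image or bail out
            execl("./some_tool", "some_tool", "-return", (char *)0);
            perror("execl");
            _exit(127);
        } else if (pid > 0) {                    // parent thread: wait for *this* child only
            int status;
            if (waitpid(pid, &status, 0) == pid && WIFEXITED(status)) {
                printf("run %d on thread %d exited with %d\n",
                       i, omp_get_thread_num(), WEXITSTATUS(status));
            }
        } else {
            perror("fork");
        }
    }
    return 0;
}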

Detect "suspended" Windows 8/10 process

UWP (or "Metro") apps in Windows 8/10 can be suspended when they are not in the foreground. Apps in this state continue to exist but no longer consume CPU time. It looks like this change was introduced to improve performance on low-power/storage devices like tablets and phones.
What is the most elegant and simple method to detect a process in this state?
I can see 2 possible solutions at the moment:
Call NtQuerySystemInformation() and then enumerate each process and each thread. A process is "suspended" if all of its threads are in the suspended state. This approach requires a lot of code and, critically, NtQuerySystemInformation() is only semi-documented and could be removed in a future OS. NtQueryInformationProcess() may offer a similar solution, with the same problem.
Call GetProcessTimes() and record the counters for each process. Wait some longish time (minutes) and check again. If the process counters haven't changed, assume the process is suspended. I admit this is a hack, but it might work if the time period is long enough.
Is there a more elegant way?
For this there is PROCESS_EXTENDED_BASIC_INFORMATION; the meaning of its flags is described in this answer. You need the IsFrozen flag. So open the process with PROCESS_QUERY_LIMITED_INFORMATION access (to do this for all processes you will need SE_DEBUG_PRIVILEGE enabled in your token) and call NtQueryInformationProcess with ProcessBasicInformation, passing a PROCESS_EXTENDED_BASIC_INFORMATION as the output buffer. To enumerate all processes you can use NtQuerySystemInformation with SystemProcessInformation. Of course it is also possible to use CreateToolhelp32Snapshot + Process32First + Process32Next, but that API is much less efficient than a direct call to NtQuerySystemInformation.
It is also possible to enumerate all threads in a process and check each thread's state and, if the state is a wait, its wait reason. This is very easy, because all of this information is already returned by the single call to NtQuerySystemInformation with SystemProcessInformation, so no processes need to be opened. Usually both approaches give the same result for suspended/frozen processes, but using IsFrozen is the most correct solution.
void PrintSuspended()
{
    BOOLEAN b;
    // SE_DEBUG_PRIVILEGE lets us open every process with
    // PROCESS_QUERY_LIMITED_INFORMATION access.
    RtlAdjustPrivilege(SE_DEBUG_PRIVILEGE, TRUE, FALSE, &b);

    ULONG cb = 0x1000;
    NTSTATUS status;
    do
    {
        status = STATUS_INSUFFICIENT_RESOURCES;
        if (PBYTE buf = new BYTE[cb])
        {
            if (0 <= (status = NtQuerySystemInformation(SystemProcessInformation, buf, cb, &cb)))
            {
                union {
                    PBYTE pb;
                    SYSTEM_PROCESS_INFORMATION* spi;
                };
                pb = buf;
                ULONG NextEntryOffset = 0;
                do
                {
                    pb += NextEntryOffset;
                    if (!spi->UniqueProcessId)
                    {
                        continue; // skip the Idle process entry
                    }
                    // Approach 1: query the process for its IsFrozen flag.
                    if (HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE,
                            (ULONG)(ULONG_PTR)spi->UniqueProcessId))
                    {
                        PROCESS_EXTENDED_BASIC_INFORMATION pebi;
                        if (0 <= NtQueryInformationProcess(hProcess, ProcessBasicInformation, &pebi, sizeof(pebi), 0) &&
                            pebi.Size >= sizeof(pebi))
                        {
                            if (pebi.IsFrozen)
                            {
                                DbgPrint("f:%x %wZ\n", spi->UniqueProcessId, spi->ImageName);
                            }
                        }
                        CloseHandle(hProcess);
                    }
                    // Approach 2: a process counts as suspended if every thread
                    // is waiting with WaitReason == Suspended.
                    if (ULONG NumberOfThreads = spi->NumberOfThreads)
                    {
                        SYSTEM_THREAD_INFORMATION* TH = spi->TH;
                        do
                        {
                            if (TH->ThreadState != StateWait || TH->WaitReason != Suspended)
                            {
                                break;
                            }
                        } while (TH++, --NumberOfThreads);

                        if (!NumberOfThreads)
                        {
                            DbgPrint("s:%x %wZ\n", spi->UniqueProcessId, spi->ImageName);
                        }
                    }
                } while (NextEntryOffset = spi->NextEntryOffset);
            }
            delete [] buf;
        }
    } while (status == STATUS_INFO_LENGTH_MISMATCH);
}

C++11 std::condition_variable - notify_one() not behaving as expected?

I don't see this program having any practical use, but while experimenting with C++11 concurrency and condition variables I stumbled across something I don't fully understand.
At first I assumed that using notify_one() would allow the program below to work. However, in actuality the program just froze after printing 1. When I switched over to using notify_all(), the program did what I wanted it to do (print all natural numbers in order). I am sure this question has been asked in various forms already, but my specific question is: where in the documentation did I read wrong?
I assumed notify_one() should work because of the following statement:
If any threads are waiting on *this, calling notify_one unblocks one of the waiting threads.
Looking below, only one of the threads will be blocked at a given time, correct?
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

class natural_number_printer
{
public:
    void run()
    {
        m_odd_thread = std::thread(
            std::bind(&natural_number_printer::print_odd_natural_numbers, this));
        m_even_thread = std::thread(
            std::bind(&natural_number_printer::print_even_natural_numbers, this));
        m_odd_thread.join();
        m_even_thread.join();
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::thread m_even_thread;
    std::thread m_odd_thread;

private:
    void print_odd_natural_numbers()
    {
        for (unsigned int i = 1; i < 100; ++i) {
            if (i % 2 == 1) {
                std::cout << i << " ";
                m_condition.notify_all();
            } else {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_condition.wait(lock);
            }
        }
    }

    void print_even_natural_numbers()
    {
        for (unsigned int i = 1; i < 100; ++i) {
            if (i % 2 == 0) {
                std::cout << i << " ";
                m_condition.notify_all();
            } else {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_condition.wait(lock);
            }
        }
    }
};
The provided code "works" correctly and gets stuck by design. The cause is described in the documentation:
The effects of notify_one()/notify_all() and
wait()/wait_for()/wait_until() take place in a single total order, so
it's impossible for notify_one() to, for example, be delayed and
unblock a thread that started waiting just after the call to
notify_one() was made.
The step-by-step logic is:
The print_odd_natural_numbers thread is started.
The print_even_natural_numbers thread is also started.
The m_condition.notify_all(); line of print_even_natural_numbers is executed before the print_odd_natural_numbers thread reaches the m_condition.wait(lock); line, so that notification is lost.
The m_condition.wait(lock); line of print_odd_natural_numbers is executed and the thread gets stuck.
The m_condition.wait(lock); line of print_even_natural_numbers is executed and that thread gets stuck as well.
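For completeness, the usual way to make this kind of ping-pong robust (a sketch of the standard predicate idiom, not part of the original answer; the shared odd_turn flag is introduced here for illustration) is to keep a shared "whose turn is it" flag, change it under the mutex, and pass a predicate to wait() so that a notification which arrives before the other thread starts waiting is never lost.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    std::condition_variable cv;
    bool odd_turn = true;                        // whose turn it is to print

    auto printer = [&](bool odd) {
        for (unsigned int i = odd ? 1 : 2; i < 100; i += 2) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return odd_turn == odd; });  // no lost wakeups
            std::cout << i << " ";
            odd_turn = !odd;                     // hand the turn to the other thread
            cv.notify_all();                     // safe even if the other thread waits later
        }
    };

    std::thread odd_thread(printer, true);
    std::thread even_thread(printer, false);
    odd_thread.join();
    even_thread.join();
    std::cout << std::endl;
}

Because the waiting thread re-checks the predicate under the lock, it does not matter whether the notification happens before, during, or after the call to wait().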

Pipe output to the stdin of another process

I'm using a pipe to send an array of numbers to another process to sort them. So far, I'm able to read the result from the other process using fdopen. However, I can't figure out how to feed data from the pipe to the other process as its stdin.
Here is my code:
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main ()
{
    int fd[2], i, val;
    pid_t child;
    char file[10];
    FILE *f;

    pipe(fd);
    child = fork();
    if (child == 0)
    {
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        execl("sort", "sort", NULL);
    }
    else
    {
        close(fd[0]);
        printf("BEFORE\n");
        for (i = 100; i < 110; i++)
        {
            write(fd[1], &i, sizeof(int));
            printf("%d\n", i);
        }
        close(fd[1]);
        wait(NULL);
    }
}
By the way, how can the other process get input? scanf?
I think your pipe is set up correctly. The problems start at execl(). For this call you should specify an absolute path, which is probably /bin/sort if you mean the Unix utility. There is also a variant of the call, execlp(), which searches the PATH automatically.
The next problem is that sort is text based, more specifically line based, and you're sending binary garbage to its stdin.
In the parent process you should write formatted text into the pipe:
FILE *wpipe = fdopen(fd[1], "w");
for (i = 100; i < 110; i++) {
    fprintf(wpipe, "%d\n", i);
    ...
}
fclose(wpipe);
A reverse loop from 110 down to 100 may test your sorting a bit better.
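Putting both fixes together, a hedged sketch of the whole thing (using execlp so the standard sort utility is found on the PATH, and sort -n for numeric ordering) might look like this:

#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t child = fork();
    if (child == 0) {
        close(fd[1]);                   // the child only reads
        dup2(fd[0], STDIN_FILENO);      // the pipe's read end becomes the child's stdin
        close(fd[0]);
        execlp("sort", "sort", "-n", (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    close(fd[0]);                       // the parent only writes
    FILE *wpipe = fdopen(fd[1], "w");   // write text lines, which is what sort expects
    for (int i = 109; i >= 100; i--)    // reverse order, per the hint above
        fprintf(wpipe, "%d\n", i);
    fclose(wpipe);                      // also closes fd[1], so sort sees EOF
    wait(NULL);
    return 0;
}

This also answers the side question: sort simply reads its stdin, which is now the pipe, so no scanf is needed on the other side. If you wrote the child program yourself, scanf("%d", ...) or fgets() on stdin would see exactly what the parent wrote.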

STXXL: limited parallelism during sorting?

I populate a very large array using a stxxl::VECTOR_GENERATOR<MyData>::result::bufwriter_type (something like 100M entries) which I need to sort in parallel.
I use the stxxl::sort(vector->begin(), vector->end(), cmp(), memoryAmount) method, which in theory should do what I need: sort the elements very efficiently.
However, during the execution of this method I noticed that only one processor is fully utilised, while all the other cores are mostly idle (I suspect they do a little work to fetch the input, but in practice they don't do anything).
This is my question: is it possible to exploit more cores during the sorting phase, or is the parallelism only used to fetch the input asynchronously? If so, are there documents that explain how to enable it? (I looked extensively at the documentation on the website, but I couldn't find anything.)
Thanks very much!
EDIT
Thanks for the suggestion. I provide below some more information.
First of all, I use macOS for my experiments. I launch the following program and study its behaviour.
#include <cstdlib>
#include <functional>
#include <iostream>
#include <limits>
#include <stxxl/vector>
#include <stxxl/sort>

typedef struct Triple {
    long t1, t2, t3;
    Triple(long s, long p, long o) {
        this->t1 = s;
        this->t2 = p;
        this->t3 = o;
    }
    Triple() {
        t1 = t2 = t3 = 0;
    }
} Triple;

const Triple minv(std::numeric_limits<long>::min(),
                  std::numeric_limits<long>::min(), std::numeric_limits<long>::min());
const Triple maxv(std::numeric_limits<long>::max(),
                  std::numeric_limits<long>::max(), std::numeric_limits<long>::max());

struct cmp: std::less<Triple> {
    bool operator ()(const Triple& a, const Triple& b) const {
        if (a.t1 < b.t1) {
            return true;
        } else if (a.t1 == b.t1) {
            if (a.t2 < b.t2) {
                return true;
            } else if (a.t2 == b.t2) {
                return a.t3 < b.t3;
            }
        }
        return false;
    }
    Triple min_value() const {
        return minv;
    }
    Triple max_value() const {
        return maxv;
    }
};

typedef stxxl::VECTOR_GENERATOR<Triple>::result vector_type;

int main(int argc, const char** argv) {
    vector_type vector;
    vector_type::bufwriter_type writer(vector);
    for (int i = 0; i < 1000000000; ++i) {
        if (i % 10000000 == 0)
            std::cout << "Inserting element " << i << std::endl;
        Triple t;
        t.t1 = rand();
        t.t2 = rand();
        t.t3 = rand();
        writer << t;
    }
    writer.finish();

    // Sort the vector
    stxxl::sort(vector.begin(), vector.end(), cmp(), 1024*1024*1024);
    std::cout << vector.size() << std::endl;
}
Indeed there seem to be only one or at most two threads working during the execution of this program. Note that the machine has only a single disk.
Can you please confirm whether the parallelism works on macOS? If not, I will try Linux to see what happens. Or is it perhaps because there is only one disk?
In principle what you are doing should work out of the box. With everything working you should see all cores doing processing.
Since it doesn't work, we'll have to find the error, and debugging why there is no parallel speedup is still tricky business these days.
The main idea is to go from small to large examples:
What platform is this? There is no parallelism on MSVC, only on Linux/gcc.
By default STXXL builds on Linux/gcc with USE_GNU_PARALLEL. You can turn it off to see if it has an effect.
Try reproducing the example values shown in http://stxxl.sourceforge.net/tags/master/stxxl_tool.html - with and without USE_GNU_PARALLEL.
See if plain in-memory parallel sorting scales on your processor/system (see the sketch below).
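For that last point, one quick way to check whether in-memory parallel sorting scales at all on the machine (a sketch assuming GCC's libstdc++ parallel mode, which is what STXXL's USE_GNU_PARALLEL option uses for its internal sorting; compile with g++ -fopenmp) is to time __gnu_parallel::sort on random keys:

#include <parallel/algorithm>   // GCC/libstdc++ parallel mode only
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    std::vector<std::uint64_t> v(50000000);      // roughly 400 MB of random 64-bit keys
    std::mt19937_64 rng(42);
    for (auto &x : v) x = rng();

    auto t0 = std::chrono::steady_clock::now();
    __gnu_parallel::sort(v.begin(), v.end());    // should load all cores
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "sorted " << v.size() << " keys in "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n";
}

If this does not use all cores either, the problem is the toolchain (for example, Apple's clang on macOS does not ship libstdc++ parallel mode) rather than STXXL; if it does, the bottleneck during stxxl::sort is more likely the single disk.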
