Why is the output for the id variable 1? - fork

#include <stdio.h>
#include <unistd.h>
int main()
{
int id;
printf("here comes the date.\n");
if (id = fork() == 0) {
printf("%d", id);
printf ("PID is %d and ID is %d\n", getpid (),id);
execl ("/bin/date", "date", 0);
}
printf ("that was the date.\n");
}
OUTPUT:
here comes the date.
that was the date.
PID is 1414 and ID is 1
Tue Feb 10 14:03:02 PST 2015

Because == binds tighter than =, the condition id = fork() == 0 is parsed as id = (fork() == 0): you are assigning id the result of a logical test, not the return value of fork().
fork() returns 0 in the child process, so in the child the test is true and id becomes 1. In the parent, fork() returns the child's PID, the test is false, and the if-branch is skipped.
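For reference, here is a minimal sketch of what was presumably intended, with the assignment and the comparison separated and the execl() argument list terminated by a null pointer:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("here comes the date.\n");
    pid_t id = fork();                      /* 0 in the child, child's PID in the parent */
    if (id == 0) {
        printf("PID is %d and ID is %d\n", getpid(), (int) id);
        execl("/bin/date", "date", (char *) NULL);
    }
    printf("that was the date.\n");
    return 0;
}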

Related

Different results of program in different execution environments

I am using Ubuntu 18.04. I am currently doing a course on Operating Systems and was just getting used to fork() and exec() calls.
I am running the following C program
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int
main(int argc, char *argv[])
{
printf("hello world (pid:%d)\n", (int) getpid());
int rc = fork();
if (rc < 0) {
// fork failed; exit
fprintf(stderr, "fork failed\n");
exit(1);
} else if (rc == 0) {
// child (new process)
printf("hello, I am child (pid:%d)\n", (int) getpid());
} else {
// parent goes down this path (original process)
printf("hello, I am parent of %d (pid:%d)\n",
rc, (int) getpid());
}
return 0;
}
On running the code in Sublime Text Editor with the build file
{
"cmd" : ["gcc $file_name -o ${file_base_name} && ./${file_base_name}"],
"selector" : "source.c",
"shell": true,
"working_dir" : "$file_path"
}
I get the result
hello world (pid:16449)
hello, I am parent of 16450 (pid:16449)
hello world (pid:16449)
hello, I am child (pid:16450)
Whereas if I use the terminal and run the same code using gcc,
I get
hello world (pid:17531)
hello, I am parent of 17532 (pid:17531)
hello, I am child (pid:17532)
Now I know that the latter is correct and the output I get in Sublime is wrong. How can the outputs be different when the compiler I am using remains the same?
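A likely explanation (not stated in the question, so treat it as an assumption): Sublime's build system captures stdout through a pipe, and when stdout is not a terminal, stdio switches to full buffering. The "hello world" line is still sitting in the buffer when fork() duplicates the process, so both parent and child flush their own copy at exit and the line appears twice. A minimal sketch that flushes before forking avoids the duplicate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    printf("hello world (pid:%d)\n", (int) getpid());
    fflush(stdout);                         /* drain the stdio buffer before fork() copies it */
    int rc = fork();
    if (rc < 0) {
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {
        printf("hello, I am child (pid:%d)\n", (int) getpid());
    } else {
        printf("hello, I am parent of %d (pid:%d)\n", rc, (int) getpid());
    }
    return 0;
}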

The exit code echoed by the shell is different from the exit code reported by my program. Why?

I wrote a program that runs a command in a child process and reports its exit code.
If I give it the false command as input, the result it reports is 255.
However the command
false; echo $?
returns 1.
Why does this happen?
This is on Solaris (Unix).
I found the file false.c in the source code; it returns 255 (not sure if this is the right file):
https://github.com/illumos/illumos-gate/blob/master/usr/src/cmd/false/false.c
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    int status;
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork error");
        exit(-1);
    }
    if (pid == 0) {
        /* child: run the command named on the command line */
        execvp(argv[1], &argv[1]);
        perror(argv[1]);
        exit(-5);
    }
    /* parent: wait for the child and report how it exited */
    if (wait(&status) == -1) {
        perror("wait");
        exit(-1);
    }
    if (WIFEXITED(status))
        printf("exit status: %d\n", WEXITSTATUS(status));
    exit(0);
}
UNIX (Linux, Solaris, BSD, etc.) exit codes may only be 0 to 255, where 0 means success and non-zero means an error. Exit codes are not signed, and only the low 8 bits of the value passed to exit() are kept, so -1 is converted to another (non-zero) value, as are values over 255.
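As a quick check of that truncation (a separate toy program, not the one above), the child below calls exit(-1) and the parent prints what wait() actually reports, which comes out as 255:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(1);
    }
    if (pid == 0)
        exit(-1);                           /* child: only the low 8 bits survive */

    int status;
    if (wait(&status) == -1) {
        perror("wait");
        exit(1);
    }
    if (WIFEXITED(status))
        printf("exit status: %d\n", WEXITSTATUS(status));   /* prints 255 */
    return 0;
}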

Why are time_points created from different durations (std::chrono::milliseconds and std::chrono::nanoseconds) so different?

I have created std::chrono::milliseconds ms and std::chrono::nanoseconds ns
from std::chrono::system_clock::now().time_since_epoch(). From those durations I created time_points, converted them to time_t using system_clock::to_time_t, and printed them with the ctime function. But the times printed are not the same. As I understand it, a time_point holds a duration, and a duration has a rep and a period (ratio), so both time_points should have the same value up to millisecond precision. Why is the output different?
Here is my code
#include <ctime>
#include <ratio>
#include <chrono>
#include <iostream>
using namespace std::chrono;
int main ()
{
std::chrono::milliseconds ms = std::chrono::duration_cast < std::chrono::milliseconds > (std::chrono::system_clock::now().time_since_epoch());
std::chrono::nanoseconds ns = std::chrono::duration_cast< std::chrono::nanoseconds > (std::chrono::system_clock::now().time_since_epoch());
std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());
std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());
system_clock::time_point abc(today_day);
system_clock::time_point abc1(same_day);
std::time_t tt;
tt = system_clock::to_time_t ( abc );
std::cout << "today is: " << ctime(&tt);
tt = system_clock::to_time_t ( abc1 );
std::cout << "today is: " << ctime(&tt);
return 0;
}
This line:
std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());
is overflowing. The number of milliseconds since 1970 is on the order of 1.5 trillion. But unsigned int (on your platform) overflows at about 4 billion.
Also, depending on your platform, this line:
std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());
may introduce a conversion error. If you are using gcc, system_clock::duration is nanoseconds, and there will be no error.
However, if you're using llvm's libc++, system_clock::duration is microseconds and you will be silently multiplying your duration by 1000.
And if you are using Visual Studio, system_clock::duration is 100 nanoseconds and you will be silently multiplying your duration by 100.
Here is a video tutorial for <chrono> which may help, and contains warnings about the use of .count() and .time_since_epoch().
The conversions you do manually do not look right.
You should use duration_cast for conversions because they are type-safe:
auto today_day = duration_cast<duration<unsigned, std::ratio<86400>>>(ms);
auto same_day = duration_cast<system_clock::duration>(ns);
Outputs:
today is: Thu Jul 26 01:00:00 2018
today is: Thu Jul 26 13:01:08 2018
Because you throw away the duration information and then interpret a bare integer value as a different duration type:
std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());
milliseconds -> dimensionless -> 1/1000 seconds (i.e. milliseconds)
std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());
nanoseconds -> dimensionless -> system_clock ticks
You should instead just duration_cast again:
#include <ctime>
#include <ratio>
#include <chrono>
#include <iostream>
using namespace std::chrono;
int main ()
{
milliseconds ms = duration_cast<milliseconds>(system_clock::now().time_since_epoch());
nanoseconds ns = duration_cast<nanoseconds>(system_clock::now().time_since_epoch());
system_clock::time_point abc(duration_cast<system_clock::duration>(ms));
system_clock::time_point abc1(duration_cast<system_clock::duration>(ns));
std::time_t tt;
tt = system_clock::to_time_t ( abc );
std::cout << "today is: " << ctime(&tt);
tt = system_clock::to_time_t ( abc1 );
std::cout << "today is: " << ctime(&tt);
return 0;
}

Why does wait() return -1 on Xcode Version 7.2.1 (7C1002)?

Guys, I have the following C code:
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <errno.h>
int main (int argc, char *argv[])
{
int exit;
pid_t tc_pid, ret_pid;
tc_pid = fork();
if(tc_pid != 0){
ret_pid = wait(&exit);
printf("parent process done, tc_pid = %d, ret_pid = %d, errno = %d\n", tc_pid, ret_pid, errno);
fflush(stdout);
}
printf("parent process done, tcpid = %d, my_pid = %d\n", tc_pid, getpid());
fflush(stdout);
return 0;
}
The output in Xcode is:
parent process done, tcpid = 0, my_pid = 74377
parent process done, tc_pid = 74377, ret_pid = -1, errno = 4
parent process done, tcpid = 74377, my_pid = 74374
where the return value of wait() is -1 (it should be 74377 if correct), and errno is 4.
However, when I run the same code in the terminal (I use zsh), the output is:
parent process done, tcpid = 0, my_pid = 74419
parent process done, tc_pid = 74419, ret_pid = 74419, errno = 0
parent process done, tcpid = 74419, my_pid = 74418
which is what I want. Does anyone know why this would happen? Thanks, guys.
My OS X is 10.11.3 and my machine is an MBPR (early 2015), Xcode 7.2.1,
gcc 4.2.1, Apple LLVM version 7.0.2 (clang-700.1.81), Target: x86_64-apple-darwin15.3.0, Thread model: posix
According to errno.h, an errno of 4 is EINTR, which the man page for wait describes as:
The call is interrupted by a caught signal or the signal does not have the SA_RESTART flag set.
You apparently are using a signal to get wait to exit. Perhaps you need to rethink what you are ultimately trying to do here, and whether there is a way to do it without using wait.
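If the interruption itself is expected (for example, the debugger attached by Xcode delivering a signal), one common idiom is simply to retry wait() while errno is EINTR. A small sketch, with a helper name of my own choosing:

#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Restart wait() whenever it is interrupted by a signal (EINTR). */
static pid_t wait_noeintr(int *status)
{
    pid_t ret;
    do {
        ret = wait(status);
    } while (ret == -1 && errno == EINTR);
    return ret;
}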

macOS 10 setuid failing for no reason

I'm running this code to change the real UID of a process:
#include <cstdlib>
#include <cstdio>
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>
void printstat()
{
printf("uid: %d, euid: %d\n",getuid(),geteuid());
}
int main(int argc, char** argv)
{
if (argc < 2)
{
return -1;
}
int m_targetUid = atoi(argv[1]);
printstat();
uid_t realUID = getuid();
printf("Setting effective uid to %d\n",m_targetUid);
seteuid(m_targetUid);
printstat();
if (m_targetUid != realUID)
{
printf("Setting real uid to %d\n",m_targetUid);
int res = setuid(m_targetUid);
printf("setuid(%d) returned: %d\n",m_targetUid,res);
if (0 > setuid(m_targetUid))
{
printf("setuid(%d) failed: %d, getuid() returned %d, geteuid returned %d\n",m_targetUid,errno,realUID,geteuid());
exit(-1);
}
}
}
According to the man page, the setuid function shouldn't fail if the effective user ID is equal to the specified UID, but for some reason it fails. Any ideas?
Man page:
The setuid() function sets the real and effective user IDs and the saved set-user-ID of the current process to the specified value. The setuid() function is permitted if the effective user ID is that of the super user, or if the specified user ID is the same as the effective user ID. If not, but the specified user ID is the same as the real user ID, setuid() will set the effective user ID to the real user ID.
And this is the output when I run it as root:
nnlnb-mm-041: root# /tmp/setuidBug 70
uid: 0, euid: 0
Setting effective uid to 70
uid: 0, euid: 70
Setting real uid to 70
setuid(70) returned: -1
setuid(70) failed: 1, getuid() returned 0, geteuid returned 70
I finally managed to solve it: apparently on macOS you have to set the effective UID back to root before calling setuid() for it to work. Code below.
#include <cstdlib>
#include <cstdio>
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>
void printstat()
{
printf("uid: %d, euid: %d\n",getuid(),geteuid());
}
int main(int argc, char** argv)
{
if (argc < 2)
{
return -1;
}
int m_targetUid = atoi(argv[1]);
printstat();
uid_t realUID = getuid();
printf("Setting effective uid to %d\n",m_targetUid);
seteuid(m_targetUid);
printstat();
printf("Setting effective uid to 0\n");
seteuid(0);
printstat();
if (m_targetUid != realUID)
{
printf("Setting real uid to %d\n",m_targetUid);
int res = setuid(m_targetUid);
printf("setuid(%d) returned: %d\n",m_targetUid,res);
if (0 > setuid(m_targetUid))
{
printf("setuid(%d) failed: %d, getuid() returned %d, geteuid returned %d\n",m_targetUid,errno,realUID,geteuid());
exit(-1);
}
}
printstat();
}
and the output now is:
uid: 0, euid: 0
Setting effective uid to 70
uid: 0, euid: 70
Setting effective uid to 0
uid: 0, euid: 0
Setting real uid to 70
setuid(70) returned: 0
uid: 70, euid: 70
