I want to execute the command ls -a using execv() on a Linux machine as follows:
char *const ptr[] = {"/bin/sh", "-c", "ls", "-a", NULL};
execv("/bin/sh",ptr);
However, this command does not list hidden files. What am I doing wrong?
I'm not sure why you're passing this via /bin/sh... but since you are, you need to pass everything after -c as a single argument, because it is then interpreted by /bin/sh itself.
To see why, compare the shell syntax of
/bin/sh -c ls -a
to
/bin/sh -c 'ls -a'
The second works, but the first doesn't.
So your ptr should be defined as
char *const ptr[] = {"/bin/sh", "-c", "ls -a", NULL};
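For completeness, a minimal compilable sketch of the corrected call (still going through the shell, as in the question):

#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Everything after -c is passed as one string so the shell parses it. */
    char *const ptr[] = {"/bin/sh", "-c", "ls -a", NULL};
    execv("/bin/sh", ptr);
    perror("execv");   /* only reached if execv fails */
    return 1;
}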
If you need to get the contents of a directory from a C program, this is not the best way: you would effectively have to parse the output of ls, which is generally considered a bad idea.
Instead you can use the libc functions opendir() and readdir() to achieve this.
Here is a small example program that will iterate over (and list) all files in the current directory:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <dirent.h>

int main(int argc, char **argv) {
    DIR *dirp;
    struct dirent *dp;

    dirp = opendir(".");
    if (!dirp) {
        perror("opendir()");
        exit(1);
    }

    errno = 0;   /* so end-of-directory can be told apart from an error */
    while ((dp = readdir(dirp))) {
        puts(dp->d_name);
    }
    if (errno) {
        perror("readdir()");
        exit(1);
    }

    closedir(dirp);
    return 0;
}
Note the listing will not be sorted, unlike the default ls -a output.
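If you do want the sorted output that ls -a gives by default, scandir() with alphasort() is one option; a minimal sketch:

#define _DEFAULT_SOURCE
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct dirent **namelist;
    int n = scandir(".", &namelist, NULL, alphasort);  /* sorted listing */
    if (n == -1) {
        perror("scandir()");
        exit(1);
    }
    for (int i = 0; i < n; i++) {
        puts(namelist[i]->d_name);
        free(namelist[i]);   /* scandir() allocates each entry */
    }
    free(namelist);
    return 0;
}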
Assume that I have a tar.gz archive that contains one shared library.
My intention is to untar it "on-the-fly", put the extracted .so into LD_PRELOAD, and then run my code.
So, I made a script:
#!/bin/bash
myTarLib=$1
tar -zxf $myTarLib --to-command "export LD_PRELOAD="
./run_the_func
The execution of run_the_func didn't use the .so from the tar.
I have the impression that the "--to-command" option creates another shell; is that correct?
Do you have any suggestion on how I could do it? The important part is that I don't want to have the .so on disk.
Thanks in advance!
I found a solution to the problem...
The use of memfd_create
memfd_create() creates an anonymous, memory-backed file and returns a file descriptor referring to it; any data can then be stored in it.
See the memfd_create(2) manpage.
In order to use it, you need a small C wrapper that takes care of the untar step (in my case). The code is:
#define _GNU_SOURCE
#include <sys/mman.h>   /* memfd_create(), MFD_CLOEXEC (glibc >= 2.27) */
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main()
{
    int fd = memfd_create("my_test", MFD_CLOEXEC);
    if (fd == -1)
    {
        fprintf(stderr, "Creation failed\n");
        return 1;
    }
    char command_1[128];
    char *path = "hiddenLibrary/libmy_func_real.tgz";
    // feel free to modify it to the path of your encrypted library
    snprintf(command_1, sizeof command_1,
             "tar -zxf %s --to-stdout > /proc/%ld/fd/%d", path, (long) getpid(), fd);
    printf("Running decrypt command\n");
    system(command_1);
    printf("The untar-ed library is located at: /proc/%ld/fd/%d\n"
           "Once you have finished, type a number and hit enter\n", (long) getpid(), fd);
    float temp;
    scanf("%f", &temp);
    return 0;
}
Now the idea is that the C code above will run the untar and store the result in the fd. Once you have finished using it, you simply type a number and the C code exits.
During the exit, all of its fds are released, so the untar-ed library is "gone".
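A variant of the same idea, as an untested sketch: drop MFD_CLOEXEC so the descriptor survives exec(), skip the interactive pause, and exec the target directly with LD_PRELOAD pointing at /proc/self/fd/<n> (the tarball path and ./run_the_func are the names from the question):

#define _GNU_SOURCE
#include <sys/mman.h>   /* memfd_create() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* no MFD_CLOEXEC: the fd must stay open across the exec below */
    int fd = memfd_create("preload_lib", 0);
    if (fd == -1) { perror("memfd_create"); return 1; }

    /* untar straight into the memfd, nothing touches the disk */
    char cmd[256];
    snprintf(cmd, sizeof cmd,
             "tar -zxf hiddenLibrary/libmy_func_real.tgz --to-stdout > /proc/self/fd/%d", fd);
    if (system(cmd) != 0) { fprintf(stderr, "untar failed\n"); return 1; }

    /* point the dynamic loader at the in-memory library and run the target */
    char sopath[64];
    snprintf(sopath, sizeof sopath, "/proc/self/fd/%d", fd);
    setenv("LD_PRELOAD", sopath, 1);

    execl("./run_the_func", "./run_the_func", (char *)NULL);
    perror("execl");   /* only reached if the exec fails */
    return 1;
}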
When I spawn a process in ruby and try to get its resource limits, it fails:
io = IO.popen("/usr/bin/cat")
puts Process.getrlimit(io.pid)
this throws
-:2:in `getrlimit': Invalid argument - getrlimit (Errno::EINVAL)
It works for Process.getrlimit(1), returning [18446744073709551615, 18446744073709551615].
When I attempt the same getrlimit(2) system call in C, it works!
I modified the Ruby to output the pid and stay running:
io = IO.popen("/usr/bin/cat")
puts io.pid
while 1; end
Then I ran it in the background with ruby cat.rb &, used ps to get its pid, and I can get the resource limits using the syscall in C:
#include <sys/resource.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    int pid = 8657; // from the cat.rb program output
    struct rlimit rlim;
    getrlimit(pid, &rlim);
    printf("Soft limit: %llu, ", (unsigned long long) rlim.rlim_cur);
    printf("Hard limit: %llu\n", (unsigned long long) rlim.rlim_max);
    return 0;
}
Compiling and running this works, why doesn't ruby let me do the getrlimit(2) system call in the same way?
The first argument to the C system call getrlimit() is not a pid. It is instead an integer specifying the resource, for example RLIMIT_CPU or RLIMIT_MSGQUEUE. If you pass some random pid as the first argument, the C system call will likely fail in the same way, returning -1 and setting errno to EINVAL. getrlimit() always returns the values for the current process only.
To get the limits of an arbitrary process on Linux, you need to use the non-portable prlimit() system call. That call does not seem to be exposed by plain Ruby, and there is no portable way to do this on other Unix systems.
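For reference, a minimal C sketch (Linux, glibc 2.13 or later) showing both the correct getrlimit() usage for the current process and prlimit() for another process; the pid is just the placeholder value from the question:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>

int main(void)
{
    struct rlimit rl;

    /* getrlimit(): the first argument names a resource, and the values
     * returned are always those of the calling process */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("my RLIMIT_NOFILE: soft=%llu hard=%llu\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);

    /* prlimit() (Linux-specific) can query another process; passing NULL
     * as the new limit makes it a pure read */
    pid_t target = 8657;   /* placeholder pid from the question */
    if (prlimit(target, RLIMIT_NOFILE, NULL, &rl) == 0)
        printf("pid %d RLIMIT_NOFILE: soft=%llu hard=%llu\n", (int) target,
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);
    else
        perror("prlimit");

    return 0;
}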
My Code
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main()
{
    char *arg_list[3];
    arg_list[0] = "ls";
    arg_list[1] = "-l";
    arg_list[2] = 0;

    char *arg_list2[3];
    arg_list2[0] = " ps";
    arg_list2[1] = "-ef";
    arg_list2[2] = 0;

    for (int i = 0; i < 5; i++) { // loop will run n times (n=5)
        if (fork() == 0) {
            if (i == 0) {
                execvp("ls", arg_list);
            } else if (i == 1) {
                execvp("ps", arg_list2);
            } else if (i > 1) {
                printf("[son] pid %d from [parent] pid %d\n", getpid(), getppid());
                exit(0);
            }
        }
    }
    for (int i = 0; i < 5; i++) // loop will run n times (n=5)
        wait(NULL);
}
ME trying to modify it
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main()
{
    for (int i = 0; i < 5; i++) { // loop will run n times (n=5)
        if (fork() == 0) {
            printf("[son] pid %d from [parent] pid %d\n", getpid(), getppid());
            execlp(argv[i], argv[i], argv[i + 1], (char *)NULL);
            exit(0);
        }
    }
    for (int i = 0; i < 5; i++) // loop will run n times (n=5)
        wait(NULL);
}
-- NEED GUIDANCE AND UNDERSTANDING
I am trying to make my own tiny little shell program. When I run it, my first code works fine and runs all commands on the command line. But I cannot know and define all the commands the user might enter. So I am trying to get a base code which could run any command, single or multiple, entered by the user. I tried using execlp, but it does not compile, saying argv is not defined, which is true as I don't want to specifically define it.
I am trying to make my own tiny little shell program. When I run it, my first code works fine and runs all commands on the command line. But I cannot know and define all the commands the user might enter.
For sure... A shell program's purpose is basically to:
Read user input
Execute user input
Return result of execution.
There's nothing in your code that reads user input...
So I am trying to get a base code which could run any command, single or multiple, entered by the user.
So read user input ;-)
I tried using execlp, but it does not compile, saying argv is not defined, which is true as I don't want to specifically define it.
For sure... but how would GCC have guessed that argv[] must be automatically filled with user input?
There's nothing automatic when coding in C language. You have to manage this manually.
Also, note that argc, argv and envp are usually reserved for the main() function:
main(int argc, char **argv, char **envp)
So you may use something else to build your command array.
In pseudo code, what you must implement is:
quit = 0
while (quit == 0) {
    command_to_run = read_user_input();
    if (command_to_run == "exit") {
        quit = 1;
    } else {
        execute(command_to_run);
    }
}
Some advice:
Try to use more functions. For example, implement a fork_and_run(char **cmd) function to fork and then execute the command provided by the user. It will make your code more readable and easier to maintain.
Read manpages carefully: everything you should know (like, for example, the fact that the array provided to execvp() must be NULL-terminated) is written in them.
Your debugging messages should be printed to stderr. The result of the command run must be printed to stdout, so use fprintf() instead of printf() to write to the correct stream.
I would use a #define debug(x) fprintf(stderr, x) or something similar for debugging output so that you can easily disable it later ;-)
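Putting the pseudo code and the advice together, a minimal sketch in C (assuming whitespace-separated tokens and no quoting or pipes; fork_and_run and the buffer sizes are just illustrative choices):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_ARGS 64

static void fork_and_run(char **cmd)
{
    pid_t pid = fork();
    if (pid == 0) {
        execvp(cmd[0], cmd);   /* only returns on failure */
        perror("execvp");
        exit(127);
    }
    waitpid(pid, NULL, 0);     /* parent waits for the command to finish */
}

int main(void)
{
    char line[1024];
    char *cmd[MAX_ARGS];

    while (1) {
        fputs("> ", stdout);
        fflush(stdout);
        if (!fgets(line, sizeof line, stdin))
            break;                         /* EOF */

        int n = 0;
        for (char *tok = strtok(line, " \t\n");
             tok && n < MAX_ARGS - 1;
             tok = strtok(NULL, " \t\n"))
            cmd[n++] = tok;
        cmd[n] = NULL;                     /* execvp() needs a NULL-terminated array */

        if (n == 0)
            continue;
        if (strcmp(cmd[0], "exit") == 0)
            break;
        fork_and_run(cmd);
    }
    return 0;
}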
I would like to find out if there's a syscall that, given a remote process id, returns its command line on Mac OS X (the equivalent in Linux is /proc/PID/cmdline).
I could read the output of 'ps ax PID' as follows, but I believe there's a cleaner way.
char sys_cmd[PATH_MAX];
char res[256];
FILE *fp;

snprintf(sys_cmd, PATH_MAX, "ps ax %d", pid);
fp = popen(sys_cmd, "r");
while (fgets(res, sizeof(res) - 1, fp) != NULL) {
    printf("%s", res);
}
pclose(fp);
Depending on exactly what you want to do, you could do something like the following with proc_pidinfo() (source code for the kernel implementation is here and header file with struct definitions is here):
$ cat procname.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/proc_info.h>
extern int proc_pidinfo(int pid, int flavor, uint64_t arg, user_addr_t buffer,
uint32_t buffersize);
#define SHOW_ZOMBIES 0
int main(int argc, char **argv) {
    if (argc != 2) {
        puts("Usage: procname <pid>");
        return 1;
    }
    struct proc_taskallinfo info;
    int ret = proc_pidinfo(atoi(argv[1]), PROC_PIDTASKALLINFO, SHOW_ZOMBIES,
                           (user_addr_t) &info, sizeof(struct proc_taskallinfo));
    printf("ret=%d, result=%s\n", ret, (char *) info.pbsd.pbi_comm);
    return 0;
}
$ clang procname.c -o procname 2>/dev/null
$ sudo ./procname 29079
ret=232, result=Google Chrome
I would have used dtruss on ps -p ... -o args to get an exact syscall you could use to get the right information, but unfortunately on El Capitan dtruss doesn't seem to work with some binaries (including ps) because of the following error:
$ sudo dtruss ps -p 29079 -o args
dtrace: failed to execute ps: dtrace cannot control executables signed with restricted entitlements
Instead what I did was run sudo nm $(which ps) to see what library calls were happening from ps, then I looked through those to see what looked like the most likely candidates and Googled for their implementations in the xnu (Mac OS X kernel) source code.
The correct API to do this is the KERN_PROCARGS2 sysctl; however, it is very hard to use correctly (I've checked every use of this API in public code and they're all wrong), so I wrote a library to wrap its use: https://getargv.narzt.cam
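As a rough illustration only (the hard part, walking the argv/envp strings after the executable path, is omitted; that is exactly what the library above handles), the sysctl call itself looks roughly like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(int argc, char **argv) {
    if (argc != 2) { fputs("Usage: procargs <pid>\n", stderr); return 1; }
    pid_t pid = (pid_t) atoi(argv[1]);

    /* ask the kernel how large the argument area can be */
    int argmax_mib[2] = { CTL_KERN, KERN_ARGMAX };
    int argmax = 0;
    size_t size = sizeof(argmax);
    if (sysctl(argmax_mib, 2, &argmax, &size, NULL, 0) == -1) {
        perror("sysctl KERN_ARGMAX");
        return 1;
    }

    char *buf = malloc(argmax);
    if (!buf) return 1;

    /* KERN_PROCARGS2 returns: an int argc, the executable path, NUL padding,
     * then the argv and envp strings */
    int mib[3] = { CTL_KERN, KERN_PROCARGS2, pid };
    size = (size_t) argmax;
    if (sysctl(mib, 3, buf, &size, NULL, 0) == -1) {
        perror("sysctl KERN_PROCARGS2");
        free(buf);
        return 1;
    }

    int nargs;
    memcpy(&nargs, buf, sizeof(nargs));
    printf("argc=%d, executable=%s\n", nargs, buf + sizeof(nargs));
    free(buf);
    return 0;
}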
I'm trying to simulate pipe behavior in Ubuntu's terminal, for example the command:
"echo hello | wc".
Please assume I got the tokens from stdin, handled everything correctly, and these are now the commands I "received" from the user who typed them in the shell for me to handle.
I'm trying to create two processes connected by a pipe. In the first process, I point the write end of the pipe at stdout. The second process should have its stdin redirected to the read end of the pipe, so it reads whatever the first command (run with execvp(..)) writes.
Here is the code I wrote:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
    char* fcmd[] = {"echo", "hello", NULL};
    char* scmd[] = {"wc", NULL};
    pid_t pid;
    int pipe_des[2];
    int i;

    pipe(pipe_des);

    for (i = 0; i < 2; i++)
    {
        pid = fork();
        if (pid == 0)
        {
            switch (i)
            {
                case 0: // FIRST CHILD
                {
                    dup2(pipe_des[1], STDOUT_FILENO);
                    close(pipe_des[0]);
                    execvp(fcmd[0], fcmd);
                    exit(0);
                }
                case 1: // SECOND CHILD
                {
                    dup2(pipe_des[0], STDIN_FILENO);
                    close(pipe_des[1]);
                    execvp(scmd[0], scmd);
                    exit(0);
                }
            }
        }
        else if (pid < 0)
            exit(EXIT_FAILURE);
    }

    return EXIT_SUCCESS;
}
I get: "amirla@ubuntu:~/Desktop/os/class/ex4$ 1 1 6"
Like it should, but why is it printing the bash prompt first? The pipe seems to work, because I get what I should according to the length of the word I'm sending with the echo command (in main()). After that the cursor just waits on the line below for another command without showing me the bash prompt. (Maybe stdin is waiting?)
I've looked in many posts on here as well as on other websites and I still can't seem to find a solution to my problem. Any help would be appreciated. Thanks in advance.
Note: Please ignore the lack of error checking; I've deleted it to make the code shorter, so assume it exists.
Why do I get a prompt before the output?
Your main process doesn't wait for the children to finish. What you see is:
Main starts
Main creates children
Main exits
BASH prints prompt
Children start their work
To prevent this, you need to wait for the children. See How to wait until all child processes called by fork() complete?
In your case, it's enough to add
waitpid(-1, NULL, 0);
once per child after the loop. Note that the parent must also close both of its pipe descriptors before waiting; otherwise wc never sees end-of-file and the wait blocks forever.
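A sketch of how the tail of main() could look with that fix applied (this fragment reuses the variables from the question's code and additionally needs #include <sys/wait.h>):

    // parent: close the unused pipe ends, otherwise wc never gets EOF
    close(pipe_des[0]);
    close(pipe_des[1]);

    // reap both children before returning control to the shell
    for (i = 0; i < 2; i++)
        waitpid(-1, NULL, 0);

    return EXIT_SUCCESS;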