Untar and load to LD_PRELOAD - bash

Assume I have a tar.gz archive that contains one shared library.
My intention is to untar it "on-the-fly", put the extracted .so on LD_PRELOAD, and then run my code.
So, I made a script:
#!/bin/bash
myTarLib=$1
tar -zxf $myTarLib --to-command "export LD_PRELOAD="
./run_the_func
The execution of run_the_func didn't use the .so from the tar.
I have the impression that the "--to-command" option creates another shell; is that correct?
Do you have any suggestion on how I could do this? The important part is that I don't want the .so to touch the disk.
Thanks in advance!

I found a solution to the problem...
The use of memfd_create
memfd_create creates an anonymous, memory-backed file and returns a file descriptor for it; any data can then be written to it.
The man page is here.
To use it, you need a C wrapper that takes care of the untarring (in my case). The code is:
#define _GNU_SOURCE
#include <sys/mman.h>   /* memfd_create (glibc >= 2.27) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Create an anonymous, memory-backed file. */
    int fd = memfd_create("my_test", MFD_CLOEXEC);
    if (fd == -1)
    {
        fprintf(stderr, "Creation failed\n");
        return 1;
    }

    char command_1[128];
    char *path = "hiddenLibrary/libmy_func_real.tgz";
    /* feel free to modify it to the path of your compressed library */
    snprintf(command_1, sizeof(command_1),
             "tar -zxf %s --to-stdout > /proc/%ld/fd/%d",
             path, (long) getpid(), fd);

    printf("Running extraction command\n");
    system(command_1);

    printf("The untarred library is located at: /proc/%ld/fd/%d\n"
           "Once you have finished, type a number and hit enter\n",
           (long) getpid(), fd);
    float temp;
    scanf("%f", &temp);
    return 0;
}
The idea is that the C code above runs the untar and stores the result in the memory-backed file descriptor. While the program waits for input, the extracted library is reachable at /proc/<pid>/fd/<fd>, so LD_PRELOAD can be pointed at that path while the real program is run. Once you have finished using it, you simply type a number and the C program exits.
On exit all of its fds are released, so the untarred library is "gone".
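Alternatively, instead of pausing and launching the target by hand, the wrapper itself can set LD_PRELOAD and exec the program. Below is a minimal sketch of that idea, reusing the archive path and the ./run_the_func name from the question; the memfd name "preload_so" and the lack of error handling are my own simplifications:
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* No MFD_CLOEXEC here: the child must inherit the descriptor so that
       /proc/self/fd/<fd> is still valid after the exec. */
    int fd = memfd_create("preload_so", 0);
    if (fd == -1) { perror("memfd_create"); return 1; }

    char cmd[160];
    snprintf(cmd, sizeof(cmd),
             "tar -zxf hiddenLibrary/libmy_func_real.tgz --to-stdout > /proc/%ld/fd/%d",
             (long) getpid(), fd);
    if (system(cmd) != 0) { fprintf(stderr, "untar failed\n"); return 1; }

    /* Point LD_PRELOAD at the in-memory file and run the real program. */
    char sopath[64];
    snprintf(sopath, sizeof(sopath), "/proc/self/fd/%d", fd);
    setenv("LD_PRELOAD", sopath, 1);
    execl("./run_the_func", "run_the_func", (char *) NULL);
    perror("execl");   /* reached only if the exec fails */
    return 1;
}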

Related

How can I run 'ls' with options from a C program?

I want to execute the command ls -a using execv() on a Linux machine as follows:
char *const ptr={"/bin/sh","-c","ls","-a" ,NULL};
execv("/bin/sh",ptr);
However, this command does not list hidden files. What am I doing wrong?
I'm not sure why you're passing this via /bin/sh... but since you are, you need to pass all the arguments after -c as a single value because these are now to be interpreted by /bin/sh.
The example is to compare the shell syntax of
/bin/sh -c ls -a
to
/bin/sh -c 'ls -a'
The second works, but the first doesn't.
So your ptr should be defined as
char * const ptr[]={"/bin/sh","-c","ls -a" ,NULL};
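If the shell is not actually needed, you can also exec ls directly; a minimal sketch, assuming the binary lives at /bin/ls:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Run "ls -a" directly, without going through /bin/sh. */
    char *const argv[] = {"ls", "-a", NULL};
    execv("/bin/ls", argv);
    perror("execv");   /* reached only if execv fails */
    return 1;
}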
If you need to get the contents of a directory from a C program, then this is not the best way: you would effectively have to parse the output of ls, which is generally considered a bad idea.
Instead you can use the libc functions opendir() and readdir() to achieve this.
Here is a small example program that will iterate over (and list) all files in the current directory:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <dirent.h>

int main (int argc, char **argv) {
    DIR *dirp;
    struct dirent *dp;

    dirp = opendir(".");
    if (!dirp) {
        perror("opendir()");
        exit(1);
    }

    errno = 0;   /* distinguish end-of-directory from a readdir error */
    while ((dp = readdir(dirp))) {
        puts(dp->d_name);
    }
    if (errno) {
        perror("readdir()");
        exit(1);
    }

    closedir(dirp);
    return 0;
}
Note the listing will not be sorted, unlike the default ls -a output.
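If sorted output matters, one option (my own suggestion, not part of the original answer) is scandir(3) with the alphasort comparator; a minimal sketch:
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

int main(void)
{
    struct dirent **namelist;

    /* scandir() reads the directory, sorts the entries with alphasort(),
       and returns how many it found. */
    int n = scandir(".", &namelist, NULL, alphasort);
    if (n == -1) {
        perror("scandir()");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        puts(namelist[i]->d_name);
        free(namelist[i]);
    }
    free(namelist);
    return 0;
}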

Could not compile when using FreeBSD-based tcphdr on Centos

I am writing a simple program that uses struct tcphdr from netinet/tcp.h as follows:
#define _BSD_SOURCE
#include <netinet/tcp.h>
#include <stdio.h>

int main()
{
    struct tcphdr t;
    t.th_sport = 0;
    printf("\n%d", t.th_sport);
    return 1;
}
Because I want this program to work on both FreeBSD and CentOS, I am using the BSD-style field names, and I defined _BSD_SOURCE at the beginning of the file. But it does not compile with -std=c++11 when I save the source as a *.cpp file: there is no member named th_sport. However, it compiles perfectly with -std=c99 as a *.c file.
What's the problem here? Can anyone explain this? Thanks so much.
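For reference, glibc's netinet/tcp.h also exposes GNU-style field names (source, dest, ...) that do not depend on __FAVOR_BSD; a minimal Linux-only sketch using those names (my own illustration, not from the question):
#define _DEFAULT_SOURCE   /* expose struct tcphdr under strict -std= modes (glibc 2.19+) */
#include <netinet/tcp.h>
#include <stdio.h>

int main(void)
{
    struct tcphdr t;
    /* GNU/Linux-style field name; the BSD-style equivalent is th_sport. */
    t.source = 0;
    printf("%d\n", t.source);
    return 0;
}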

fanotify gremlin---hard no-return fail (under gdb)

this is almost the same example as in the man page. everything is updated to recent versions. gcc is 4.9.2. gdb is 7.8.1. linux kernel is 3.17.6-1 (64bit). the install is a recent arch bootstrap. here is the whittled down case:
#define _GNU_SOURCE /* Needed to get O_LARGEFILE definition */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/fanotify.h>

int main(int argc, char *argv[]) {
    int fd;

    fd = fanotify_init(FAN_CLOEXEC | FAN_CLASS_CONTENT | FAN_NONBLOCK,
                       O_RDONLY | O_LARGEFILE);
    if (fd == -1) exit(1);

    fprintf(stderr, "calling fanotify_mark: fd=%d\n", fd);
    if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_OPEN_PERM | FAN_CLOSE_WRITE, -1, "/") == -1) exit(2);

    fprintf(stderr, "in gdb step through with 'n' for repeat.\n");
    fprintf(stderr, " (and sometimes otherwise), a ^C works, but a ^Z and then ^C does not.\n");
}
most of the time, this works fine, but sometimes it does not; I think this is when fanotify_mark never returns. on trying to debug this, I found that I can replicate it: if I use gdb and try to step through with 'n', fanotify_mark() never returns and is uninterruptible (^C, ^Z).
is this replicable elsewhere, or am I doing something wrong?
/iaw
this happens because FAN_OPEN_PERM requires another program to grant permission. this is almost verbatim from the example in the fanotify man page---and this makes it rather unfortunate, because whittling down the program can induce a hard OS block. so watch it.
my actual intent was to monitor file accesses. for this, one uses FAN_OPEN, not FAN_OPEN_PERM.
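for completeness, here is a notification-only sketch of what the monitoring variant could look like (my own illustration, not from the original post): it uses FAN_CLASS_NOTIF and FAN_OPEN, so events are only reported and no process has to write permission responses. it still needs to run as root.
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/fanotify.h>

int main(void)
{
    /* Notification-only class: events are reported, never blocked. */
    int fd = fanotify_init(FAN_CLOEXEC | FAN_CLASS_NOTIF, O_RDONLY | O_LARGEFILE);
    if (fd == -1) { perror("fanotify_init"); exit(1); }

    if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_OPEN | FAN_CLOSE_WRITE, AT_FDCWD, "/") == -1) {
        perror("fanotify_mark");
        exit(2);
    }

    struct fanotify_event_metadata buf[200];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0) break;
        for (struct fanotify_event_metadata *m = buf; FAN_EVENT_OK(m, len);
             m = FAN_EVENT_NEXT(m, len)) {
            printf("event mask 0x%llx on fd %d\n",
                   (unsigned long long) m->mask, m->fd);
            if (m->fd >= 0) close(m->fd);   /* each event carries an open fd */
        }
    }
    return 0;
}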

Save list of files in array

I am making a C++ program which should be able to list the files from a particular directory and save each file name as a string (each name will be processed further for conversion). Do I need an array of strings? Which functionality should I use? The number of files is not fixed.
The main thing is that I can't enter the names manually; I must accept the names from the generated list.
In this case you want to use a vector:
#include <vector>
#include <string>

using namespace std;

int main()
{
    vector<string> file_names;
    file_names.push_back("file1.txt");
    file_names.push_back("file2.txt");
    file_names.push_back("file3.txt");
    file_names.push_back("file4.txt");
    return 0;
}
Have you thought about using some command line tools to deal with this? Even simple shell globbing piped into the program will work. Example:
echo somedir/* | ./Cpp
where Cpp is the name of your compiled binary, and somedir is the directory you want to read from.
Then in your C++ program, you simply use std::cin to read each filename from standard input.
#include <vector>
#include <string>
#include <iterator>  // std::istream_iterator, std::ostream_iterator, std::back_inserter
#include <algorithm> // std::copy
#include <iostream>  // std::cin, std::cout

int main()
{
    std::vector<std::string> file_names;

    // read the filenames from stdin
    std::copy(std::istream_iterator<std::string>(std::cin),
              std::istream_iterator<std::string>(),
              std::back_inserter(file_names));

    // print the filenames
    std::copy(file_names.begin(), file_names.end(),
              std::ostream_iterator<std::string>(std::cout, "\n"));
    return 0;
}

scoped_lock doesn't work on file?

According to the link below, I wrote a small test case, but it doesn't work. Any ideas are appreciated!
Reference:
http://www.cppprog.com/boost_doc/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.file_lock.file_lock_careful_iostream
#include <iostream>
#include <fstream>
#include <boost/interprocess/sync/file_lock.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

using namespace std;
using namespace boost::interprocess;

int main()
{
    ofstream file_out("fileLock.txt");
    file_lock f_lock("fileLock.txt");
    {
        scoped_lock<file_lock> e_lock(f_lock); // it works if I comment this out
        file_out << 10;
        file_out.flush();
        file_out.close();
    }
    return 0;
}
Running the test on Linux produces your desired output. I notice these two warnings:
The page you reference has this warning: "If you are using a std::fstream/native file handle to write to the file while using file locks on that file, don't close the file before releasing all the locks of the file."
Boost::file_lock apparently uses LockFileEx on Windows. MSDN has this to say: "If the locking process opens the file a second time, it cannot access the specified region through this second handle until it unlocks the region."
It seems like, on Windows at least, the file lock is per-handle, not per-file. As near as I can tell, that means that your program is guaranteed to fail under Windows.
Your code appears to be susceptible to this long-standing bug on the boost trac site: https://svn.boost.org/trac/boost/ticket/2796
The title of that bug is "interprocess::file_lock has incorrect behavior when win32 api is enabled".
Here is a workaround, based on Boost 1.44, for appending to a file with file locking.
#include "boost/format.hpp"
#include "boost/interprocess/detail/os_file_functions.hpp"
namespace ip = boost::interprocess;
namespace ipc = boost::interprocess::detail;
void fileLocking_withHandle()
{
static const string filename = "fileLocking_withHandle.txt";
// Get file handle
boost::interprocess::file_handle_t pFile = ipc::create_or_open_file(filename.c_str(), ip::read_write);
if ((pFile == 0 || pFile == ipc::invalid_file()))
{
throw runtime_error(boost::str(boost::format("File Writer fail to open output file: %1%") % filename).c_str());
}
// Lock file
ipc::acquire_file_lock(pFile);
// Move writing pointer to the end of the file
ipc::set_file_pointer(pFile, 0, ip::file_end);
// Write in file
ipc::write_file(pFile, (const void*)("bla"), 3);
// Unlock file
ipc::release_file_lock(pFile);
// Close file
ipc::close_file(pFile);
}
