How to initialize a POSIX semaphore from a static library during process startup?

I have a static library that defines a semaphore. The semaphore needs to be initialized before any calls into the library are made (so that it can be safely used by multiple threads of the same process).
Therefore, I would like to initialize the library's semaphore (e.g. by calling sem_init) during process start-up. How can I do that?

// This seems to solve the problem: the GCC constructor attribute makes Init() run before main()
#include <stdio.h>
#include <semaphore.h>

void Init(void) __attribute__((constructor));
void Init(void)
{
    printf("HSA LIB Init\n");
    sem_init(&HSA_lib.semaphore, 0, 1); // HSA_lib is the library's global state holding the sem_t
}
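If the library should also clean up at shutdown, the matching GCC destructor attribute can be used. This is a minimal sketch, not from the original question; it assumes HSA_lib is the library's global state holding the sem_t, as in the snippet above.

// Fini(void) will run after main() returns (or when the library is unloaded)
void Fini(void) __attribute__((destructor));
void Fini(void)
{
    sem_destroy(&HSA_lib.semaphore);
}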

Related

Why doesn't the Linux real-time scheduler remove the task from the queue for group scheduling?

static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
{
    struct rq *rq = rq_of_rt_se(rt_se);

    update_stats_dequeue_rt(rt_rq_of_se(rt_se), rt_se, flags);

    dequeue_rt_stack(rt_se, flags);
    for_each_sched_rt_entity(rt_se) {
        struct rt_rq *rt_rq = group_rt_rq(rt_se);

        if (rt_rq && rt_rq->rt_nr_running)
            __enqueue_rt_entity(rt_se, flags);
    }
    enqueue_top_rt_rq(&rq->rt);
}
Why does it need to call __enqueue_rt_entity(rt_se, flags) for a group-scheduled entity?
My understanding is that for a group-scheduled process, after its time on the current CPU runs out, there may still be runtime available on other CPUs, so it is placed back on the queue to wait for the next scheduling decision. I don't know whether this understanding is correct.

Could an interrupt be serviced while the scheduler is changing the current task to another (i.e. while the scheduler is doing a thread context switch)?

Interrupts can be serviced while a process is running at the moment the interrupt occurs. But could an interrupt be serviced while the scheduler is switching the current task to another one (i.e. while the scheduler is doing a thread context switch)?
I would be grateful for any hint on this question.
It's impossible: the schedule() function disables interrupts before doing the context switch.
static void __sched notrace __schedule(bool preempt)
{
    struct task_struct *prev, *next;
    unsigned long *switch_count;
    struct rq_flags rf;
    struct rq *rq;
    int cpu;

    cpu = smp_processor_id();
    rq = cpu_rq(cpu);
    prev = rq->curr;

    schedule_debug(prev);

    if (sched_feat(HRTICK))
        hrtick_clear(rq);

    // look at here
    local_irq_disable();
    ......
}
Here is the source code, and I have written a blog post about this topic, which may be helpful for you.

Boost process continuously read output

I'm trying to read outputs/logs from different processes and display them in a GUI. The processes will be running for a long time and produce huge output. I'm planning to stream the output from those processes and display it according to my needs, all the while allowing my GUI application to take user input and perform other actions.
What I've done here is, from the main thread, launch two threads for each process: one for launching the process and another for reading output from it.
This is the solution I've come up with thus far.
// Process Class
namespace bp = boost::process;

class MyProcess {
    boost::asio::io_service mService;           // member variable of the class
    bp::ipstream mStream;                       // member variable of the class
    std::thread mProcessThread, mReaderThread;  // member variables of the class
public:
    void launch();
};

void MyProcess::launch()
{
    mReaderThread = std::thread([&]() {
        std::string line;
        while (getline(mStream, line)) {
            std::cout << line << std::endl;
        }
    });

    mProcessThread = std::thread([&]() {
        auto c = bp::child("/path/of/executable", bp::std_out > mStream, mService);
        mService.run();
        mStream.pipe().close();
    });
}

// Main Gui class
class MyGui {
    MyProcess process;
public:
    void launchProcess();
};

void MyGui::launchProcess()
{
    process.launch();
    doSomethingElse();
}
The program is working as expected so far, but I'm not sure if this is the correct solution. Please let me know if there's an alternative/better/correct solution.
Thanks,
Surya
The most striking conceptual issues I see are:
Processes are asynchronous; there is no need to add a thread to run them.¹
You prematurely close the pipe:
mService.run();
mStream.pipe().close();
run() is not "blocking" in the sense that it does not wait for the child to exit; you could use wait() to achieve that. Other than that, you can just remove the close() call.
With the close in place you will lose all or part of the output; you might not see any of it if the child process takes a while before it outputs the first data.
You are accessing the mStream from multiple threads without synchronization. This invokes Undefined Behaviour because it opens a Data Race.
In this case you can remove the immediate problem by removing the mStream.close() call mentioned before, but you must take care to start the reader-thread only after the child has been initialized.
Strictly speaking the same caution should be taken for std::cout.
You are passing the io_service reference, but it's not being used. Just dropping it seems like a good idea.
The destructor of MyProcess needs to detach or join the threads. To prevent Zombies, it needs to detach or reap the child pid too.
In combination with the lifetime of mStream, detaching the reader thread is not really an option, as mStream is being used from that thread.
Let's put in the first fixes first, and after that I'll show some more simplifications that make sense in the scope of your sample.
First Fixes
I used a simple bash command to emulate a command generating 1000 lines of ping:
Live On Coliru
#include <boost/process.hpp>
#include <thread>
#include <iostream>

namespace bp = boost::process;

/////////////////////////
class MyProcess {
    bp::ipstream mStream;
    bp::child mChild;
    std::thread mReaderThread;

  public:
    ~MyProcess();
    void launch();
};

void MyProcess::launch() {
    mChild = bp::child("/bin/bash", std::vector<std::string> {"-c", "yes ping | head -n 1000"}, bp::std_out > mStream);

    mReaderThread = std::thread([&]() {
        std::string line;
        while (getline(mStream, line)) {
            std::cout << line << std::endl;
        }
    });
}

MyProcess::~MyProcess() {
    if (mReaderThread.joinable()) mReaderThread.join();
    if (mChild.running()) mChild.wait();
}

/////////////////////////
class MyGui {
    MyProcess _process;

  public:
    void launchProcess();
};

void MyGui::launchProcess() {
    _process.launch();
    // doSomethingElse();
}

int main() {
    MyGui gui;
    gui.launchProcess();
}
Simplify!
In the current model, the thread doesn't pull its weight.
If you'd use io_service with asynchronous IO instead, you could even do away with the whole thread to begin with, by polling the service from inside your GUI event loop² (a sketch of this appears below the next snippet).
If you're going to have it anyway, and since child processes naturally execute asynchronously,³ you could simply do:
Live On Coliru
#include <boost/process.hpp>
#include <thread>
#include <iostream>
std::thread launch(std::string const& command, std::vector<std::string> args = {}) {
    namespace bp = boost::process;

    return std::thread([=] {
        bp::ipstream stream;
        bp::child c(command, args, bp::std_out > stream);

        std::string line;
        while (getline(stream, line)) {
            // TODO likely post to some kind of queue for processing
            std::cout << line << std::endl;
        }

        c.wait(); // reap PID
    });
}
The demo displays exactly the same output as earlier.
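For completeness, here is a minimal sketch of the thread-free variant mentioned above (footnote ²). It is an illustration under assumptions, not part of the original answer: it reuses the same bash demo command, reads the child's output through a bp::async_pipe with chained async_read_until calls, and simply runs the io_service; a GUI would instead call poll() from its event loop.

#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <functional>
#include <iostream>

namespace bp = boost::process;

int main() {
    boost::asio::io_service ios;
    bp::async_pipe ap(ios);

    // assumed demo command, same as in the answer above
    bp::child c("/bin/bash", std::vector<std::string>{"-c", "yes ping | head -n 1000"},
                bp::std_out > ap, ios);

    boost::asio::streambuf buf;
    std::function<void()> do_read = [&] {
        boost::asio::async_read_until(ap, buf, '\n',
            [&](boost::system::error_code ec, std::size_t /*bytes*/) {
                if (ec) return;                 // pipe closed: child is done writing
                std::istream is(&buf);
                std::string line;
                std::getline(is, line);
                std::cout << line << std::endl; // TODO: post to the GUI instead
                do_read();                      // chain the next read
            });
    };
    do_read();

    ios.run(); // in a GUI, call ios.poll() from the event loop instead of run()
    c.wait();  // reap the child PID
}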
¹ In fact, adding threads is asking for trouble with fork
² or perhaps idle tick or similar idea. Qt has a ready-made integration (How to integrate Boost.Asio main loop in GUI framework like Qt4 or GTK)
³ on all platforms supported by Boost Process

Is there a better way in C++ for one-time execution of a set of code instead of using a static variable check?

In many places I have code for one-time initialization as below:
int callback_method(void *userData)
{
    /* This piece of code should run one time only */
    static int init_flag = 0;
    if (0 == init_flag)
    {
        /* do initialization stuff here */
        init_flag = 1;
    }
    /* Do regular stuff here */
    return 0;
}
I have just now started using C++11. Is there a better way to replace the one-time code, using a C++11 lambda, std::call_once functionality, or anything else?
You can encapsulate your action in a static function call, or use something like an immediately invoked function expression with C++11 lambdas:
int action() { /*...*/ }
...
static int temp = action();
and
static auto temp = [](){ /*...*/ }();
respectively.
And yes, the most common solution is to use std::call_once, but sometimes it's a little bit of overkill (you have to use a special flag with it, which brings you back to the initial question).
With the C++11 standard all of these approaches are thread-safe (the variables will be initialized once and "atomically"), but you must still avoid races inside action if it touches shared resources.
Yes - std::call_once, see the docs on how to use this: http://en.cppreference.com/w/cpp/thread/call_once
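For reference, a minimal sketch of how std::call_once could replace the flag in the callback from the question; the lambda body stands in for whatever the real initialization does.

#include <mutex>

int callback_method(void *userData)
{
    static std::once_flag init_flag;
    std::call_once(init_flag, [] {
        /* do initialization stuff here; runs exactly once, even with concurrent callers */
    });

    /* Do regular stuff here */
    return 0;
}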

Using ThreadSafe variable macros in LabWindows/CVI

I am using the thread-safe variable macros in the LabWindows/CVI environment and have observed that it is possible to get a pointer to a thread-safe variable before it has been released (from a previous request).
Because the data I am interested in protecting is a struct, I am unable to explicitly set the nesting level, so I assume that the nesting level stays at 0, i.e. that once a single thread-safe pointer has been issued, a request for a second will be denied until the first has been released. However, I have observed this not to be true while stepping through a debug session. Execution continues through the DefineThreadSafeVar(CLI, SafeCli); statement when I keep using the F8 step-into key, and subsequent requests for a pointer to the thread-safe variable are granted without the original ever having been released.
My expectations are that these macros should prevent access to a thread-safe variable once a pointer to it has been issued and not yet released.
Are my expectations incorrect?
Or have I implemented the calls incorrectly?
Here is my source code:
#include <utility.h>

typedef struct {
    int hndl;
    int connect;
    int sock;
} CLI;

DefineThreadSafeVar(CLI, SafeCli);

void func1(void);
void func2(void);

int main(void)
{
    InitializeSafeCli();
    func1();
    return 0;
}

void func1(void)
{
    CLI *safe;
    safe = GetPointerToSafeCli(); // original issue
    safe->connect = 2;
    safe->hndl = 3;
    safe->sock = 4;
    func2();
    safe->connect;
    safe->hndl;
    safe->sock;
    ReleasePointerToSafeCli();
}

void func2(void)
{
    CLI *safe;
    safe = GetPointerToSafeCli(); // request is granted; the previous issue had not been released.
                                  // shouldn't the request have been denied?
    safe->connect = 5;            // variable is modified
    safe->hndl = 6;
    safe->sock = 7;
}
In your case, you are calling func2() within func1(), so it runs within the same call stack and thus the same thread. You're granted access because you are requesting the pointer from within the same thread that already has access to it.
GetPointerToSafeCli() is a waiting call. Had it been called from thread A and then again in thread B before ReleasePointerToSafeCli() was called back in thread A, thread B would wait until the pointer was released before granting access.
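Not shown in the original answer, but as a sketch of the fix it implies: have func2() balance its nested request with its own release, so that every GetPointerToSafeCli() is paired with a ReleasePointerToSafeCli().

void func2(void)
{
    CLI *safe;
    safe = GetPointerToSafeCli();  /* nested request from the same thread succeeds */
    safe->connect = 5;
    safe->hndl = 6;
    safe->sock = 7;
    ReleasePointerToSafeCli();     /* balance every Get with a Release */
}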
LabWindows/CVI - Programming with DefineThreadSafeScalarVar
