I have the following system architecture (it cannot be changed - legacy code): one main application invokes one or more other applications, and these applications communicate over an IP-based protocol.
All applications write to one console window. Unfortunately the console output can get garbled (one character from app 1, the next from app 2, the next from app 4, etc.).
All applications write to the console via one Logger.dll (which provides static logging functions) using cout/cerr.
Is there a way to prevent these mixed-up log messages in this setup?
Thanks in advance.
EDIT: code added:
void Logger::Log(const std::string & componentName, const std::string & Text, LogLevel logLevel, bool logToConsole, bool beep)
{
    std::ostringstream stream;
    switch (logLevel)
    {
    case LOG_INFO:
        if (logToConsole)
        {
            stream << componentName << ": INFO " << Text;
            mx_console.lock();   // this is a static boost::mutex
            std::cout << stream.str() << std::endl;
            std::cout.flush();
            mx_console.unlock();
        }
        break;
    case LOG_STATUS:
        stream << componentName << ": STATUS " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    case LOG_WARNING:
        stream << componentName << ": WARNING " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    default:
        ;
    }
    if (beep)
        Beep(500, 50);
}
Since you have a separate logging component, you can at minimum use some kind of locking (a global, cross-process mutex, etc.) to avoid interspersing messages from different applications. To make the output more readable and greppable, add some identifying information such as the process name or PID. Wrapping an existing logging library inside your Logger.dll sounds like an option as well.
Alternatively, you could have the logging functions simply forward messages to your main application and let it sort out the synchronization and ordering.
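Note that the boost::mutex in the code above only serializes threads inside one process; since the applications are separate processes, a cross-process lock is needed. A minimal sketch, assuming Windows (the code already calls Beep) and a hypothetical mutex name, could look like this:

#include <windows.h>
#include <iostream>
#include <string>

// Hypothetical name; all processes must use the same one.
static const wchar_t* kMutexName = L"MyApp_ConsoleLogMutex";

void LogLine(const std::string& line)
{
    // Creates the named mutex on first use, opens the existing one otherwise.
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, kMutexName);
    if (hMutex != nullptr)
    {
        WaitForSingleObject(hMutex, INFINITE);   // serialize across processes
        std::cout << line << std::endl;
        ReleaseMutex(hMutex);
        CloseHandle(hMutex);
    }
    else
    {
        std::cout << line << std::endl;          // fall back to unsynchronized output
    }
}

In a real logger you would keep the mutex handle open for the lifetime of the process instead of creating and closing it per line; this sketch only illustrates the idea of one lock shared by all writers.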
Syslog might be a solution for you, as it is designed to collect logs from various places. Syslog was developed for Unix, but this answer lists versions for Windows.
You can change your logger to log to syslog instead of the console.
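As a rough illustration, syslog messages are usually plain UDP datagrams sent to port 514, so forwarding a log line could be as small as the sketch below (Winsock, RFC 3164-style framing; the priority value and the collector address 127.0.0.1:514 are assumptions):

#include <winsock2.h>
#include <string>
#pragma comment(lib, "ws2_32.lib")

// Send one syslog datagram to a collector assumed to listen on 127.0.0.1:514.
void SyslogSend(const std::string& componentName, const std::string& text)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s != INVALID_SOCKET)
    {
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(514);
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");

        // <14> = facility "user", severity "info" (RFC 3164 priority value).
        std::string msg = "<14>" + componentName + ": " + text;
        sendto(s, msg.c_str(), static_cast<int>(msg.size()), 0,
               reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        closesocket(s);
    }
    WSACleanup();
}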
I have now replaced all the
std::cout << stream.str();
statements with
std::string str = stream.str();
printf(str.c_str());   // note: passing a non-literal format string is risky if the text ever contains '%'
and now the output is no longer garbled character by character.
But I don't have a good explanation for this behavior; does anybody know why?
I'm thinking of coding something up that will change a laptop's refresh rate based on whether or not the device is plugged in.
From my research, these are two links I came across. One is 20 years old and the other is from Microsoft, but I don't see any mention of refresh rate specifically.
https://www.codeproject.com/Articles/558/Changing-your-monitor-s-refresh-rate
https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsa?redirectedfrom=MSDN
Does anyone have any insight into how to do this? I'm not too particular about which language it would have to be written in, so let me know whatever would be most viable. Of course I'd also have to be able to detect a change in the plugged-in/unplugged state, but I haven't gotten to that point yet.
I'm mostly targeting Windows 10 since that's what my device is on.
You can use EnumDisplaySettings to retrieve the current display mode, and then apply a new mode with ChangeDisplaySettingsEx.
If you want to modify the refresh rate, you only need to change the dmDisplayFrequency member of the DEVMODE structure.
Here is the sample:
#include <Windows.h>
#include <iostream>
using namespace std;

int main(int argc, const char* argv[])
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);

    if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
    {
        cout << "DisplayFrequency before setting = " << dm.dmDisplayFrequency << endl;

        dm.dmDisplayFrequency = 60; // set the desired refresh rate in Hz
        LONG ret = ChangeDisplaySettingsEx(NULL, &dm, NULL, 0, NULL);
        std::cout << "ChangeDisplaySettingsEx returned " << ret << '\n';

        if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
        {
            cout << "DisplayFrequency after setting = " << dm.dmDisplayFrequency << endl;
        }

        switch (ret)
        {
        case DISP_CHANGE_SUCCESSFUL:
            std::cout << "display successfully changed\n";
            break;
        case DISP_CHANGE_BADDUALVIEW:
            std::cout << "The settings change was unsuccessful because the system is DualView capable\n";
            break;
        case DISP_CHANGE_BADFLAGS:
            std::cout << "An invalid set of flags was passed in.\n";
            break;
        case DISP_CHANGE_BADMODE:
            std::cout << "The graphics mode is not supported.\n";
            break;
        case DISP_CHANGE_BADPARAM:
            std::cout << "An invalid parameter was passed in. This can include an invalid flag or combination of flags.\n";
            break;
        case DISP_CHANGE_FAILED:
            std::cout << "The display driver failed the specified graphics mode.\n";
            break;
        case DISP_CHANGE_NOTUPDATED:
            std::cout << "Unable to write settings to the registry.\n";
            break;
        case DISP_CHANGE_RESTART:
            std::cout << "The computer must be restarted for the graphics mode to work.\n";
            break;
        }
    }
    system("pause");
}
This example is not always successful; whether you can change the refresh rate depends on whether your monitor supports the requested mode. Compare the frequency reported before and after the call to verify the change took effect.
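The question also asks about reacting to the plugged-in/unplugged state, which the sample above doesn't cover. A minimal polling sketch using the documented GetSystemPowerStatus API might look like this; the polling interval and the chosen refresh rates (60 Hz on battery, 120 Hz on AC) are assumptions:

#include <Windows.h>
#include <iostream>

int main()
{
    BYTE lastState = 255; // 255 = unknown, per SYSTEM_POWER_STATUS docs
    for (;;)
    {
        SYSTEM_POWER_STATUS sps;
        if (GetSystemPowerStatus(&sps) && sps.ACLineStatus != lastState)
        {
            lastState = sps.ACLineStatus;   // 0 = battery, 1 = AC power

            DEVMODE dm;
            ZeroMemory(&dm, sizeof(dm));
            dm.dmSize = sizeof(dm);
            if (EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
            {
                // Assumed policy: 60 Hz on battery, 120 Hz when plugged in.
                dm.dmDisplayFrequency = (lastState == 1) ? 120 : 60;
                dm.dmFields |= DM_DISPLAYFREQUENCY;
                LONG ret = ChangeDisplaySettingsEx(NULL, &dm, NULL, 0, NULL);
                std::cout << "ACLineStatus=" << (int)lastState
                          << ", ChangeDisplaySettingsEx returned " << ret << '\n';
            }
        }
        Sleep(2000); // poll every 2 seconds
    }
}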
Versions used: OMNeT++ 5.0 with INET 3.4.0
I wrote some code that gives me reliable results in 'step-by-step' or 'animated' simulation mode. The moment I switch to 'fast' or 'express' mode, it misbehaves. The following simplified example illustrates the problem:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg == CheckAck) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    if (msg == transmissionAnnouncement) {
        std::cout << "transmissionAnncouncement: " << msg << std::endl;
    }
    if (msg == transmissionEvent) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}
This function is called for handling self-messages. Depending on which self-message was received, a different if branch should be taken.
I get this correct output in step-by-step or animated mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionEvent: (omnetpp::cMessage)transmissionEvent
And this is the strange output I get using fast or express mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionAnncouncement: (omnetpp::cMessage)transmissionEvent
transmissionEvent: (omnetpp::cMessage)transmissionEvent
The third output line shows that the self-message is 'transmissionEvent', but 'if (msg == transmissionAnnouncement)' is mistakenly evaluated as true as well.
As shown above I get different simulation results, depending on the simulation mode I am using. What is the reason for the different output? Why is there even a difference?
As Christoph and Rudi mentioned, there was something wrong with the memory handling: the messages are deleted at the end of handleSelfMessage(), and when a new object is later allocated at the same address, the pointer comparisons can mistakenly match. The difference between the running modes is just a symptom that makes these errors visible.
In my case it was better to check the message kind, e.g.:
if (msg->getKind() == checkAckAckType) {
instead of the pointer comparisons used in the original question. I defined the message kinds using a simple enum.
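A minimal sketch of that approach, with hypothetical enum and message names (cMessage::setKind()/getKind() are part of the OMNeT++ API):

// Hypothetical message kinds for this module.
enum SelfMsgKind { checkAckType = 1, announcementType, transmissionType };

// When creating/scheduling the self-messages, tag them with a kind, e.g.:
//   cMessage *checkAck = new cMessage("CheckAck", checkAckType);
//   scheduleAt(simTime() + delay, checkAck);

void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    switch (msg->getKind()) {
        case checkAckType:
            std::cout << "CheckAck: " << msg << std::endl;
            break;
        case announcementType:
            std::cout << "transmissionAnnouncement: " << msg << std::endl;
            break;
        case transmissionType:
            std::cout << "transmissionEvent: " << msg << std::endl;
            break;
    }
    delete msg;
}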
Say I have a PUB server that zmq_send()'s realtime messages to a SUB client. If the client is busy and cannot zmq_recv() messages quickly enough, messages will be buffered in the client (and/or the server).
If the buffer grows too large (high water mark) then NEW messages will be dropped. For realtime messages this is the opposite of what one wants. OLD messages should be dropped to make place for NEW ones.
Is there some way to do this?
Ideally I would like the SUB client's receive queue to be either empty or to contain only the most recent message: when a new message arrives it would replace the old one. (I guess the problem here would be that the client would block on zmq_recv() when the queue is empty, wasting time doing so.)
So how are realtime feeds usually implemented in ZeroMQ?
I'll answer my own question here. The ZMQ_CONFLATE socket option ("keep only last message") seemed promising, but it doesn't play well with subscription filters: it only ever keeps one message in the queue, so if you subscribe to more than one filter, both old and new messages of the other filters get thrown away.
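For reference, this is roughly how ZMQ_CONFLATE would be set on the SUB socket; it must be applied before connect(), and per the ZeroMQ docs it also doesn't support multi-part messages, which additionally rules it out for the multi-frame format used in the code below (endpoint and filter are hypothetical):

zmq::context_t context(1);
zmq::socket_t socket_sub(context, ZMQ_SUB);

int conflate = 1;
socket_sub.setsockopt(ZMQ_CONFLATE, &conflate, sizeof(conflate)); // must precede connect()
socket_sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);                      // subscribe to everything
socket_sub.connect("tcp://localhost:5556");                       // hypothetical endpoint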
Likewise, the ZeroMQ guide recommends simply killing slow subscribers (the Suicidal Snail pattern), but that doesn't seem like a realistic solution. Having subscribers with different read speeds, subscribed to the same fast publisher, should be a normal use case: some of these subscribers might live on slow computers, others on fast ones, and so on. ZeroMQ should be able to handle that somehow.
http://zguide.zeromq.org/page:all#Slow-Subscriber-Detection-Suicidal-Snail-Pattern
I ended up manually dropping old queued-up messages on the client side. It seems to work fine: I get subscribed messages to the client that are less than 3 ms old (over TCP on localhost), even in cases where there are five thousand 10-second-old messages in the queue in front of the few realtime messages at the back. This is good enough for me.
I can't help but think this is something the library should provide; it could probably do a better job of it.
Anyway, here is the client-side code that drops old messages:
bool Empty(zmq::socket_t& socket) {
    bool ret = true;
    zmq::pollitem_t poll_item = { socket, 0, ZMQ_POLLIN, 0 };
    zmq::poll(&poll_item, 1, 0); // 0 = no wait
    if (poll_item.revents & ZMQ_POLLIN) {
        ret = false;
    }
    return ret;
}

std::vector<std::string> GetRealtimeSubscribedMessageVec(zmq::socket_t& socket_sub, int timeout_ms)
{
    std::vector<std::string> ret;

    struct MessageTmp {
        int id_ = 0;
        std::string data_;
        boost::posix_time::ptime timestamp_;
    };
    std::map<int, MessageTmp> msg_map;

    int read_msg_count = 0;
    int time_in_loop = 0;
    auto start_of_loop = boost::posix_time::microsec_clock::universal_time();
    do {
        read_msg_count++;

        // msg format sent by publisher is: filter, timestamp, data
        MessageTmp msg;
        msg.id_ = boost::lexical_cast<int>(s_recv(socket_sub));
        msg.timestamp_ = boost::posix_time::time_from_string(s_recv(socket_sub));
        msg.data_ = s_recv(socket_sub);
        msg_map[msg.id_] = msg;

        auto now = boost::posix_time::microsec_clock::universal_time();
        time_in_loop = (now - start_of_loop).total_milliseconds();
        if (time_in_loop > timeout_ms) {
            std::cerr << "Timeout reached. Publisher is probably sending messages quicker than we can drop them." << std::endl;
            break;
        }
    } while (Empty(socket_sub) == false);

    if (read_msg_count > 1) {
        std::cout << "num of old queued up messages dropped: " << (read_msg_count - 1) << std::endl;
    }

    for (const auto& pair : msg_map) {
        const auto& msg_tmp = pair.second;

        auto now = boost::posix_time::microsec_clock::universal_time();
        auto message_age_ms = (now - msg_tmp.timestamp_).total_milliseconds();

        if (message_age_ms > timeout_ms) {
            std::cerr << "[SUB] Newest message too old. f:" << msg_tmp.id_ << ", age: " << message_age_ms << "ms, s:" << msg_tmp.data_.size() << std::endl;
        }
        else {
            std::cout << "[SUB] f:" << msg_tmp.id_ << ", age: " << message_age_ms << "ms, s:" << msg_tmp.data_.size() << std::endl;
            ret.push_back(msg_tmp.data_);
        }
    }
    return ret;
}
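Usage is roughly like this; the endpoint, the filter ids, and the timeout value are assumptions, and s_recv is the same helper used in the function above:

zmq::context_t context(1);
zmq::socket_t socket_sub(context, ZMQ_SUB);
socket_sub.connect("tcp://localhost:5556");      // hypothetical publisher endpoint
socket_sub.setsockopt(ZMQ_SUBSCRIBE, "1", 1);    // hypothetical filter id "1"
socket_sub.setsockopt(ZMQ_SUBSCRIBE, "2", 1);    // hypothetical filter id "2"

while (true) {
    // Drains the queue and returns only the newest message per filter.
    std::vector<std::string> messages = GetRealtimeSubscribedMessageVec(socket_sub, 100);
    for (const auto& data : messages) {
        // process the most recent payloads here
    }
}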
I am creating a large pintool and I have two questions:
The tool (abridged below to the relevant part only) sometimes cannot identify the image/routine for particular executed instructions. Does anybody know when/why that can happen?
The tool (when instrumenting a Barnes-Hut benchmark) always terminates with an out-of-memory (OOM) error after running for a while, although the benchmark completes successfully when run standalone. Which tools can I use to debug/trace the OOM error of Pin-instrumented applications?
int main(int argc, char *argv[])
{
    PIN_InitSymbols();
    if (PIN_Init(argc, argv))
    {
        return 0;
    }
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();
    return 0;
}

VOID Instruction(INS ins, VOID *v)
{
    INS_InsertPredicatedCall( ins,
                              IPOINT_BEFORE,
                              (AFUNPTR) handle_ins_execution,
                              IARG_INST_PTR,
                              .....);
}

VOID handle_ins_execution(ADDRINT addr, ...)
{
    PIN_LockClient();
    IMG img = IMG_FindByAddress(addr);
    RTN rtn = RTN_FindByAddress(addr);
    PIN_UnlockClient();

    if (IMG_Valid(img)) {
        std::cerr << "From Image   : " << IMG_Name(img) << std::endl;
    } else {
        std::cerr << "From Image   : " << "(UNKNOWN)" << std::endl;
    }
    if (RTN_Valid(rtn)) {
        std::cerr << "From Routine : " << RTN_Name(rtn) << std::endl;
    } else {
        std::cerr << "From Routine : " << "(UNKNOWN)" << std::endl;
    }
}
I recently asked this on the PinHeads forum and I'm awaiting a response. What I have read in the documentation is that IMG_FindByAddress works by checking, for each image, whether the address lies within the mapped memory region of one of its segments. It may be that instructions are executed from addresses that do not fall within the ranges of any loaded image.
The best way to know which image such an address belongs to is to look at the context. My pintool (based on DebugTrace) continues to run even when it cannot determine the image; you can look at the log entries before and after this occurs. I see this all the time in dyld on OSX.
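Regarding the OOM issue, one thing worth trying (a sketch only, not a confirmed fix) is caching the image/routine lookup per address, so the analysis routine doesn't repeat the symbol queries and string construction for every executed instruction:

#include <map>
#include <string>
#include <iostream>

// Cache of address -> "image!routine" strings. Assumptions: code addresses are
// stable (no JIT/self-modifying code) and the target is single-threaded;
// a lock around the map would be needed for multithreaded applications.
static std::map<ADDRINT, std::string> locationCache;

VOID handle_ins_execution(ADDRINT addr)
{
    std::map<ADDRINT, std::string>::iterator it = locationCache.find(addr);
    if (it == locationCache.end()) {
        PIN_LockClient();
        IMG img = IMG_FindByAddress(addr);
        RTN rtn = RTN_FindByAddress(addr);
        PIN_UnlockClient();

        std::string loc = (IMG_Valid(img) ? IMG_Name(img) : "(UNKNOWN)") + "!" +
                          (RTN_Valid(rtn) ? RTN_Name(rtn) : "(UNKNOWN)");
        it = locationCache.insert(std::make_pair(addr, loc)).first;
    }
    std::cerr << it->second << std::endl;
}

This only reduces per-instruction overhead and allocations; if the OOM comes from somewhere else in the full tool, it won't make it go away, but it narrows down the suspects.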
I'm trying to write a custom 'driver' for a keyboard (HID, if it matters), under Windows 7. The final goal is having two keyboards connected to the computer, but mapping all of the keys of one of them to special (custom) functions.
My idea is to use libusb-win32 as the second keyboard's driver and write a small program to read data from the keyboard and act upon it. I've successfully installed the driver, and the device is recognized by my program, but all transfers time out, even though I'm pressing keys.
Here's my code:
struct usb_bus *busses;
struct usb_device *dev;
char buf[1024];

usb_init();
usb_find_busses();
usb_find_devices();

busses = usb_get_busses();
dev = busses->devices;   // first (and only) device on the first bus
cout << dev->descriptor.idVendor << '\n' << dev->descriptor.idProduct << '\n';

usb_dev_handle *h = usb_open(dev);
cout << usb_set_configuration(h, 1) << '\n';
cout << usb_claim_interface(h, 0) << '\n';
cout << usb_interrupt_read(h, 129, buf, 1024, 5000) << '\n';   // 129 = endpoint 0x81 (interrupt IN)
cout << usb_strerror();
cout << usb_release_interface(h, 0) << '\n';
cout << usb_close(h) << '\n';
and it returns:
1133
49941
0
0
-116
libusb0-dll:err [_usb_reap_async] timeout error
0
0
(I'm pressing lots of keys in those 5 seconds)
There's only one bus, one device, one configuration, one interface and one endpoint.
The endpoint has bmAttributes = 3, which implies I should use interrupt transfers (right?).
So why am I not getting anything? Am I misusing libusb? Do you know a way to do this without libusb?
It's pretty simple actually: when reading from the USB device, you must read exactly the right number of bytes. You know what that number is by reading wMaxPacketSize from the endpoint descriptor.
Apparently a read request with any other size simply results in a timeout.
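In other words, with the libusb-0.1 API the read size should come from the endpoint descriptor rather than being hard-coded. A sketch of that, using the descriptor fields as exposed by libusb-0.1 and the same handle and endpoint (0x81) as in the question:

// Read the interrupt IN endpoint using its reported packet size.
struct usb_interface_descriptor *alt = &dev->config[0].interface[0].altsetting[0];
struct usb_endpoint_descriptor *ep = &alt->endpoint[0];

int packet_size = ep->wMaxPacketSize;   // e.g. 8 bytes for a HID boot keyboard
char report[64];

int n = usb_interrupt_read(h, ep->bEndpointAddress /* 0x81 */,
                           report, packet_size, 5000);
cout << "read " << n << " bytes\n";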