Versions used: OMNeT++ 5.0 with INET 3.4.0
I wrote some code that gives me reliable results in "step-by-step" or "animated" simulation mode. The moment I switch to "fast" or "express" mode, it gets buggy. The following simplified example illustrates my problem:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg == CheckAck) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    if (msg == transmissionAnnouncement) {
        std::cout << "transmissionAnncouncement: " << msg << std::endl;
    }
    if (msg == transmissionEvent) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}
This function is called to handle self-messages. Depending on which self-message I received, I need to run a different if check.
I get this correct output in step-by-step or animated mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionEvent: (omnetpp::cMessage)transmissionEvent
And this is the strange output I get using fast or express mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionAnncouncement: (omnetpp::cMessage)transmissionEvent
transmissionEvent: (omnetpp::cMessage)transmissionEvent
The third output line shows that the self-message is 'transmissionEvent', yet the 'if (msg == transmissionAnnouncement)' check is mistakenly evaluated as true as well.
As shown above, I get different simulation results depending on the simulation mode I am using. What is the reason for the different output? Why is there a difference at all?
As Christoph and Rudi mentioned, the problem is related to memory allocation: when a message is deleted and a new one happens to be allocated at the same address, the pointer comparison matches even though it is a different message. The fact that the behavior changes between run modes is just a symptom of such an error.
In my case it was better to check for the message kind instead:
if (msg->getKind() == checkAckAckType) {
rather than the pointer comparison used in the original question. I defined the message kinds using simple enums.
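A minimal sketch of that approach (the enum values and the scheduling line are assumptions for illustration, not my actual code):

// Hypothetical message kinds, defined as a simple enum.
enum SelfMessageKind { checkAckAckType = 1, announcementType, eventType };

// When creating the self-messages, tag each one with its kind, e.g.:
//   CheckAck = new cMessage("CheckAck", checkAckAckType);

void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg->getKind() == checkAckAckType) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    else if (msg->getKind() == announcementType) {
        std::cout << "transmissionAnnouncement: " << msg << std::endl;
    }
    else if (msg->getKind() == eventType) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}

Because the check uses the message kind rather than the pointer value, it stays correct even if a new message happens to be allocated at the same address as a previously deleted one.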
I'm thinking of coding something up that will change a laptop's refresh rate based on whether or not the device is plugged in.
From my research, these are two links I came across. One is 20 years old and the other is from Microsoft, but I don't see any mentions of refresh rate specifically.
https://www.codeproject.com/Articles/558/Changing-your-monitor-s-refresh-rate
https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsa?redirectedfrom=MSDN
Does anyone have any insight into how to do this? I'm not too particular about what language would have to be used for it, so let me know whatever would be most viable. Of course I'd also have to be able to check a change in state for plugged in/unplugged, but I haven't gotten to that point yet.
I'm mostly targeting Windows 10 since that's what my device is on.
You can use EnumDisplaySettings to retrieve the current display mode, and then apply a new mode with ChangeDisplaySettingsA.
If you want to modify the refresh rate, you only need to change the dmDisplayFrequency member of the DEVMODEA structure.
Here is the sample:
#include <Windows.h>
#include <iostream>
using namespace std;

int main(int argc, const char* argv[])
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);

    if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
    {
        cout << "DisplayFrequency before setting = " << dm.dmDisplayFrequency << endl;

        dm.dmDisplayFrequency = 60; // set the desired DisplayFrequency
        LONG ret = ChangeDisplaySettingsEx(NULL, &dm, NULL, 0, NULL);
        std::cout << "ChangeDisplaySettingsEx returned " << ret << '\n';

        if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
        {
            cout << "DisplayFrequency after setting = " << dm.dmDisplayFrequency << endl;
        }

        switch (ret)
        {
        case DISP_CHANGE_SUCCESSFUL:
            std::cout << "display successfully changed\n";
            break;
        case DISP_CHANGE_BADDUALVIEW:
            std::cout << "The settings change was unsuccessful because the system is DualView capable\n";
            break;
        case DISP_CHANGE_BADFLAGS:
            std::cout << "An invalid set of flags was passed in.\n";
            break;
        case DISP_CHANGE_BADMODE:
            std::cout << "The graphics mode is not supported.\n";
            break;
        case DISP_CHANGE_BADPARAM:
            std::cout << "An invalid parameter was passed in. This can include an invalid flag or combination of flags.\n";
            break;
        case DISP_CHANGE_FAILED:
            std::cout << "The display driver failed the specified graphics mode.\n";
            break;
        case DISP_CHANGE_NOTUPDATED:
            std::cout << "Unable to write settings to the registry.\n";
            break;
        case DISP_CHANGE_RESTART:
            std::cout << "The computer must be restarted for the graphics mode to work.\n";
            break;
        }
    }
    system("pause");
}
This example does not always succeed: whether you can change the refresh rate depends on whether your monitor supports the requested mode. On a successful run, the program prints the display frequency before and after the change.
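If you want to check which refresh rates your display reports as supported, you can enumerate all display modes first; here is a small sketch added for illustration (not part of the original answer):

// Sketch: list every display mode (resolution and refresh rate) the
// current display device reports. Added for illustration only.
#include <Windows.h>
#include <iostream>

int main()
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);

    // iModeNum counts up from 0 until EnumDisplaySettings returns FALSE.
    for (DWORD i = 0; EnumDisplaySettings(NULL, i, &dm); ++i)
    {
        std::cout << dm.dmPelsWidth << "x" << dm.dmPelsHeight
                  << " @ " << dm.dmDisplayFrequency << " Hz\n";
    }
}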
I am a novice trying to use Google protobuf for a work project. I want to find the difference between protobuf messages, so I am trying to use the MessageDifferencer APIs. I get a SEGV while running the code below; commenting out the line "reporter->ReportModified(*Obj1, *Obj2, field_path);" results in no SEGV.
Any help with using the differencer is appreciated!
google::protobuf::util::MessageDifferencer diff;
diff.set_report_matches(false);
diff.set_report_moves(false);

std::string reportDiff;
google::protobuf::io::StringOutputStream* opstream = new google::protobuf::io::StringOutputStream(&reportDiff);
google::protobuf::util::MessageDifferencer::StreamReporter* reporter = new google::protobuf::util::MessageDifferencer::StreamReporter(opstream);
diff.ReportDifferencesTo(reporter);

std::vector<google::protobuf::util::MessageDifferencer::SpecificField> field_path;
try
{
    reporter->ReportModified(*Obj1, *Obj2, field_path);
}
catch (const std::exception& e)
{
    std::cout << e.what() << "\n";
}
cout << __func__ << " Report added " << field_path.size();

// Cleanup objects
delete Obj1;
delete Obj2;
delete reporter;
Thanks,
Maddy
You shouldn't be calling the ReportModified method directly; the MessageDifferencer class calls it when it finds a difference.
MessageDifferencer::Compare is the correct method to call, according to the docs. Assuming all else is correct, changing the code inside your try block to call that should work.
Moving your code to a function, you could have something like:
std::string CompareMessages(
    const google::protobuf::Message& m1,
    const google::protobuf::Message& m2) {
  using google::protobuf::util::MessageDifferencer;

  MessageDifferencer diff;
  diff.set_report_matches(false);
  diff.set_report_moves(false);

  std::string reportDiff;
  {
    google::protobuf::io::StringOutputStream opstream(&reportDiff);
    MessageDifferencer::StreamReporter reporter(&opstream);
    diff.ReportDifferencesTo(&reporter);
    diff.Compare(m1, m2);
  }
  return reportDiff;
}
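For example, it could be called like this (MyMessage and its set_id field are hypothetical stand-ins for your actual generated type):

MyMessage a, b;      // placeholder for your generated protobuf type
a.set_id(1);         // assumed field, for illustration only
b.set_id(2);

std::string report = CompareMessages(a, b);
std::cout << report; // the StreamReporter lists the differing fields, e.g. "modified: id: 1 -> 2"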
I tried this program out of curiosity, to understand the behavior of shared_ptr compared to raw pointers. I expected the problem to be a double delete, but I am facing something else:
MyClass *raw_ptr = new MyClass();
shared_ptr<MyClass> sptr1(raw_ptr);
shared_ptr<MyClass> sptr2 = sptr1;
cout << sptr1.use_count() << endl; // prints 2
sptr1.reset(); // occurs Segmentation Fault here
Expected behavior: the count is reduced to 1 and control moves to the next line.
Solved: The actual issue is on the next line, where sptr1 accesses the public class member MyClass::a. That access is invalid after the reset, hence the segfault. I was confused because it didn't print the cout messages.
cout << "count: "<< sptr1.use_count()
<< "value: "<< sptr1->a;
There are no issues in the code shown in the question; it is perfectly fine.
So either your compiler is broken, your development environment is broken, or there is other code you're not showing us that is responsible for the crash.
I am creating a large pintool and I have two questions:
The tool (abridged below to the relevant part only) sometimes cannot identify the image/routine for particular executed instructions. Does anybody know when/why that can happen?
The tool (when instrumenting a Barnes-Hut benchmark) always terminates with an out-of-memory (OOM) error after running for a while, although the benchmark completes successfully when run standalone. Which tools can I use to debug/trace the OOM error of a Pin-instrumented application?
int main(int argc, char *argv[])
{
    PIN_InitSymbols();
    if (PIN_Init(argc, argv))
    {
        return 0;
    }
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();
    return 0;
}

VOID Instruction(INS ins, VOID *v)
{
    INS_InsertPredicatedCall(ins,
                             IPOINT_BEFORE,
                             (AFUNPTR) handle_ins_execution,
                             IARG_INST_PTR,
                             .....);
}

VOID handle_ins_execution(ADDRINT addr, ...)
{
    PIN_LockClient();
    IMG img = IMG_FindByAddress(addr);
    RTN rtn = RTN_FindByAddress(addr);
    PIN_UnlockClient();

    if (IMG_Valid(img)) {
        std::cerr << "From Image : " << IMG_Name(img) << std::endl;
    } else {
        std::cerr << "From Image : " << "(UNKNOWN)" << std::endl;
    }
    if (RTN_Valid(rtn)) {
        std::cerr << "From Routine : " << RTN_Name(rtn) << std::endl;
    } else {
        std::cerr << "From Routine : " << "(UNKNOWN)" << std::endl;
    }
}
I recently asked this on the PinHeads forum, and I'm awaiting a response. From what I have read in the documentation, IMG_FindByAddress works by looking at each image and checking whether the address falls within the mapped memory region of one of its segments. It is possible for instructions to be executed from addresses that are not within any of those valid ranges.
The best way to know which image such a case belongs to is to look at the context. My pintool (based on DebugTrace) continues to run even without knowing which image the instruction is in; you can look at the log entries before and after the point where this occurs. I see this all the time in dyld on OS X.
I have the following system architecture (it cannot be changed - legacy code): one main application invokes one or more other applications, and these applications interact over an IP protocol.
All applications write to one console window. Unfortunately the console output can get garbled (one character from app 1, the next character from app 2, the next from app 4, etc.).
All applications write to the console via one Logger.dll (which provides static logging functions) using cout/cerr.
Is there a way how I can prevent mixed logging messages in this setup?
Thanks in advance.
EDIT: code added:
void Logger::Log(const std::string & componentName, const std::string & Text, LogLevel logLevel, bool logToConsole, bool beep)
{
    std::ostringstream stream;
    switch (logLevel)
    {
    case LOG_INFO:
        if (logToConsole)
        {
            stream << componentName << ": INFO " << Text;
            mx_console.lock(); // this is a static boost::mutex
            std::cout << stream.str() << std::endl;
            std::cout.flush();
            mx_console.unlock();
        }
        break;
    case LOG_STATUS:
        stream << componentName << ": STATUS " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    case LOG_WARNING:
        stream << componentName << ": WARNING " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    default:;
    }
    if (beep)
        Beep(500, 50);
}
Since you have separate logging functionality, you can at minimum use some kind of locking (a global mutex, etc.) to avoid interspersing messages from different applications too much. To make the output more readable and grepable, add some identifying information, such as the process name or PID. Wrapping your Logger.dll around an existing logging library sounds like an option as well.
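For illustration, a minimal sketch of the cross-process locking idea on Windows, using a named kernel mutex (the name "Global\MyAppConsoleLog" is just an assumption; depending on your session setup you may need the "Local\" namespace instead):

#include <windows.h>
#include <iostream>
#include <string>

void LogLine(const std::string& line)
{
    // The same name in every process refers to the same kernel object,
    // so this serializes console writes across all participating processes.
    HANDLE hMutex = CreateMutexA(NULL, FALSE, "Global\\MyAppConsoleLog");
    if (hMutex != NULL)
    {
        WaitForSingleObject(hMutex, INFINITE);
        std::cout << line << std::endl;   // only one process writes at a time
        ReleaseMutex(hMutex);
        CloseHandle(hMutex);
    }
}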
Alternatively, you could have the logging functions simply forward messages to your main application and let it sort out the synchronization and interleaving.
Syslog might be a solution for you, as it is intended to handle logs from various sources. Syslog was developed for Unix, but this answer shows versions for Windows.
You can change your logger to log to syslog instead of the console.
I have now replaced all of the
std::cout << stream.str();
statements with
std::string str = stream.str();
printf(str.c_str());
and the output is no longer garbled character by character.
But I don't have a good explanation for this behavior. Does anybody know why?