I have been using likwid (link) for accessing performance counters on my dual-socket Intel Xeon E5-2660 v4 processors. I was able to use the tool (likwid-perfctr) successfully until last December. When I got back to the tool today after almost a month, I am getting the following warnings:
WARN: Counter PMC0 is only available with deactivated HyperThreading. Counter results defaults to 0.
WARN: Counter PMC1 is only available with deactivated HyperThreading. Counter results defaults to 0.
WARN: Counter PMC2 is only available with deactivated HyperThreading. Counter results defaults to 0.
The problem persists whether HyperThreading is enabled or disabled in the BIOS. Additionally, I get these warnings even when I run the perfctr command as root.
Has anybody run into this issue? Was there a recent kernel update that makes it difficult to read the MSR registers (which could explain the warning appearing within the last month)?
System information: Debian Stretch, kernel 3.16, likwid version 4.3. The command I am trying to run:
likwid-perfctr -C N:0-27 -g L3CACHE -m executable
The above problem has been fixed in commit 03422ed of likwid. The problem was due to incorrect ifdefs, which were causing likwid to read the number of performance counters incorrectly.
Link to the answer in the likwid-users Google group: https://groups.google.com/forum/#!topic/likwid-users/oe2ch0aHONY
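If you are stuck on an affected 4.3 build, a minimal sketch of updating from source (assuming the standard make-based build of the GitHub repository, whose current head contains the fixing commit):
$ git clone https://github.com/RRZE-HPC/likwid.git
$ cd likwid
$ make
$ sudo make install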
I have a problem with Cuckoo Sandbox and the memory dump it should generate so that I can analyse it with Volatility.
My issue is:
Cuckoo's log files tell me that a memory dump has been generated successfully, but Cuckoo cannot access it because it cannot be found. Manually looking for it in the directory confirms that it does not exist. Cuckoo tells me to enable memory_dump in cuckoo.conf, which is already enabled.
My Cuckoo version and operating system are:
Cuckoo: 2.0.6
Host: Ubuntu 18.04.1 LTS
Guest: Win7 Ultimate, Service Pack 1, 32-bit
These are my config files:
cuckoo.conf
memory_dump = yes
memory.conf
guest_profile = Win7SP1x86
delete_memdump = no
processing.conf
[memory]
enabled = yes
This is the output of the cuckoo.log:
INFO: Successfully generated memory dump for virtual machine with label Win7 to path /home/test/.cuckoo/storage/analyses/1/memory.dmp
[...]
ERROR: VM memory dump not found: to create VM memory dumps you have to enable memory_dump in cuckoo.conf!
Any kind of help is appreciated. If you need any more information from me, please let me know.
Edit: Only the memory dump of the full machine is not being generated. If the malware is injected into a new process, then a memory dump of that process is generated, as shown in report.json:
INFO: injected into process with pid 3844 and name 'iexplorer.exe'
INFO: memory dump of process with pid 3844 completed
and I can also find the 3844-1.dmp file in the directory
I had a similar issue some time back where memory dump creation was a little inconsistent. However, that was with an older version of Cuckoo Sandbox.
In processing.conf, check to see if you have set
[procmemory]
enabled = yes
I do remember having issues where I would sometimes get full memory dumps if I submitted a sample via the web GUI but not if I submitted it via the command line, or vice versa. Sometimes I would only get memory dumps after the first sample failed. I found that a good place to start was with something like a 32-bit putty.exe. Once the memory dumps started to work, though, I never had an issue after that, so I never documented what I did. I do remember playing around with the memory settings, so it may be worth experimenting with the processing.conf settings, turning them on and off to see what works:
[memory]
enabled = yes
[procmemory]
enabled = yes
and cuckoo.conf
memory_dump = yes
I know it may sound odd, but I sometimes saw different behaviour when submitting samples through the terminal versus the web GUI. I no longer have my setup, so I have nothing to compare it to.
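For a quick command-line test, something along these lines should work (a sketch assuming the Cuckoo 2.x CLI; the --memory flag, where your build supports it, requests a full memory dump for that single analysis, and the sample path is a placeholder):
$ cuckoo submit --memory /path/to/putty.exe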
[Edit]
Also make sure you have the correct dependencies installed:
https://github.com/volatilityfoundation/volatility/wiki/Linux
I am trying to start a twincat project on my pc in order to debug it. I've disabled the EtherCAT device and isolated a CPU on my windows 10 with an 8-core ADM processor. After trying to start the run mode, I get a fatal error on the target system. With following message:
'TwinCat System' (10000): Sending ams command >> Init4
RTime: Start Interrupt: Ticker started >> AdsWarning: 4131 (0x1023, RTIME: Intel CPU required) << failed!
I've searched the internet and am not able to find a solution to this problem; there seems to be little information about it. Does anyone have an idea?
Answering my own question, just in case anyone else comes across the same problem: when you isolate a CPU core to run the PLC on, make sure you also mark it as the default one to be used. Just isolating it isn't enough; you have to explicitly indicate which one to use.
Your answer appears to be right there in the message: Intel CPU required, but you stated you're trying to run it on an ADM (I assume AMD) processor.
After updating macOS to High Sierra and Xcode to 9.2.0, project build times for bigger projects got out of hand: they went from ~10 minutes up to ~120 minutes.
While researching, I noticed that Xcode spawns xcexec child processes which account for most of the CPU usage. xcexec spends almost all of its time in the close system call; each xcexec process makes about 2 million close calls per minute.
Upon inspecting the xcexec binary, this seems to be a wrapper tool for launching other build actions (e.g. clang).
I have fully reinstalled Xcode with no change. The build system is set to default.
What causes this behaviour?
The installation instructions for watchman instruct you to set kern.maxfiles like this:
$ sudo sysctl -w kern.maxfiles=10485760
$ sudo sysctl -w kern.maxfilesperproc=1048576
The default setting for both of these values is 131072 on macOS High Sierra, so watchman's suggestion is an 80x increase in kern.maxfiles, a performance-critical kernel setting. Adjusting these values can result in different performance characteristics, especially for file-heavy operations like compilation.
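One way to inspect the current values (the output shown reflects the High Sierra defaults mentioned above):
$ sysctl kern.maxfiles kern.maxfilesperproc
kern.maxfiles: 131072
kern.maxfilesperproc: 131072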
Watchman changes the limit so that it is allowed to watch more files at the same time.
Xcode, however, will start indexing your project and open as many files as allowed (via kern.maxfiles). During the compilation phase, Xcode launches xcexec, which closes any file descriptors left open by indexing and only then launches the build-step sub-process. That operation should take almost no time, but after raising kern.maxfiles it suddenly does.
I benchmarked on a mid-2015 MacBook Pro with macOS 10.13.3 and Xcode 9.2.0.
According to my benchmarking, kern.maxfilesperproc has no influence on Xcode's build performance.
The performance of Xcode builds is heavily impacted as soon as kern.maxfiles is above 327680.
I recommend setting kern.maxfiles to no more than 327680 if you need to support watchman with bigger projects.
Note that setting kern.maxfiles with sysctl is not persistent across reboots; to make the change permanent, adjust the values in /Library/LaunchDaemons/limit.maxfiles.plist.
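A sketch of what that LaunchDaemon could look like, written from the shell (the structure follows the commonly circulated limit.maxfiles.plist from watchman's install docs; the 327680 soft/hard values reflect the recommendation above, so adjust them as needed):
$ sudo tee /Library/LaunchDaemons/limit.maxfiles.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>327680</string>
      <string>327680</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF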
I'm trying to debug an issue on a server that I suspect is related to a buffer overflow, so I compiled my code with -fsanitize=address to enable address sanitizing.
It compiled, and the resulting software runs. However, I'm trying to get a core dump when the address sanitizer detects an error, since that is pretty much the only way I can get information out of the system due to the setup.
I am calling the software with ASAN_OPTIONS=abort_on_error=1 prepended on the command line (using a shell script to do that), and have checked that ulimit -c gives unlimited as result, but it just won't produce a core dump.
What am I missing?
This is on an Ubuntu 14.04 server with GCC version 4.8.4.
EDIT: sysctl kernel.core_pattern gives back kernel.core_pattern = |/usr/share/apport/apport %p %s %c %P, which probably means that apport is enabled (at least in some form). However, I have been able to get proper core files on this system from asserts and SIGFPEs in the software (which is where the suspicion of array overruns comes from).
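One way to take apport out of the picture while testing is to temporarily point core_pattern at a plain file (this does not persist across reboots, so it is only a debugging aid):
$ sudo sysctl -w kernel.core_pattern=core.%p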
Let me guess, is this an x64 target? Coredumps are disabled there to avoid dumping the 16 TB of shadow memory (see the docs for disable_coredump here for details).
Newer versions of GCC/Clang exclude the shadow memory from the core file by default, so that one can do something like
export ASAN_OPTIONS=abort_on_error=1:disable_coredump=0
but I'm afraid 4.8 is too old for this.
As an alternative suggestion, why are backtraces not enough for you? You could use log_path or log_to_syslog to preserve them if you do not have access to the program's stderr.
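For example (ASan appends the PID to the file name, yielding something like /tmp/asan.log.12345; the path and the ./server binary name are placeholders):
$ ASAN_OPTIONS=abort_on_error=1:log_path=/tmp/asan.log ./server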
NB: I posted a suggestion to enable coredumps on all platforms.
On Linux, FreeBSD and other systems I have valgrind for checking for memory errors like invalid reads and similar. I really love valgrind. Now I have to test code on Solaris/OpenSolaris and can't find a way to get information about invalid reads/writes there in as nice a way (or better ;-)) as valgrind.
When searching for this on the net I find references to libumem, but there I only get reports about memory leaks, not invalid accesses. What am I missing?
The dbx included with the Sun Studio compilers includes memory access checking support in its "Run Time Checking" feature (the check subcommand). See:
Solaris Studio 12.4 dbx manual: Chapter 9: Using Runtime Checking
Debugging Applications with Sun Studio dbx, dbxtool, and the Thread Analyzer
Leonard Li's Weblog: Runtime Memory Checking
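A minimal session sketch (assuming a build with -g; ./myprog is a placeholder for your binary, and check -access enables access checking before the program runs):
$ dbx ./myprog
(dbx) check -access
(dbx) run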
The related "Sun Memory Error Discovery Tool" is also available from
http://cooltools.sunsource.net/discover/
Since version 3.11.0, Valgrind does run on Solaris.
See Release Notes and Supported Platforms.
More precisely, x86/Solaris and amd64/Solaris are now supported.
Support for sparc/Solaris is still in the works.
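Once installed, the usual invocation applies (memcheck is the default tool; ./myprog stands in for your binary):
$ valgrind ./myprog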
watchmalloc is a quite useful library that can be dynamically loaded into your program (usually no recompiling needed) and then sets watchpoints at all the usually problematic memory locations, like freed areas or the space just past an allocated memory block.
If your program accesses one of these invalid areas it gets a signal and you can inspect it in the debugger.
Depending on the configuration, problematic areas can be watched for writes only, or also for reads.
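Usage is typically along these lines (per the watchmalloc(3MALLOC) man page; MALLOC_DEBUG=WATCH enables the watchpoints, and ./myprog is a placeholder):
$ LD_PRELOAD=watchmalloc.so.1 MALLOC_DEBUG=WATCH ./myprog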