I'm using Azul Zulu JDK 8 (v1.8.0_202) with Azul ZMC 7.1.1 on Windows 10, and I'm creating a 20-second Flight Recording of a running JVM. The resulting data displays various captured metrics (e.g. CPU Usage, Heap Usage, Allocation, etc.) but no Method Profiling data. I've tried both a time-fixed recording and a continuous recording - both have the same issue. The highlighted section in the following screenshot indicates where I'd expect to see method profiling data:
Is there some trick to enable this? I can't find any reference to it in the somewhat sparse documentation.
According to the Azul Zulu documentation, there was a bug with Method Profiling in Zulu 8 that was fixed in Zulu 8.38 (8u212).
Please try a newer Zulu 8 version: https://www.azul.com/downloads/zulu-community/?version=java-8-lts&architecture=x86-64-bit&package=jdk
Also, make sure Method Profiling is enabled when starting the recording.
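For reference, on a JFR-capable Zulu 8 build something along these lines starts a 20-second recording with the profiling template from the command line; the exact flags can differ between Zulu versions, and the file and jar names are only examples:

    java -XX:+FlightRecorder -XX:StartFlightRecording=duration=20s,settings=profile,filename=recording.jfr -jar yourapp.jar

In the ZMC Start Flight Recording dialog the rough equivalent is choosing the profiling event-settings template, or explicitly turning Method Sampling up in the event options, rather than accepting the continuous defaults.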
I am currently reviewing a JMeter framework setup. I wanted to get some feedback on whether there are any specific advantages of choosing a Linux server over a Windows server to run JMeter as a load generator.
Are there any specific advantages in terms of cost or efficiency if I choose Linux over Windows to run JMeter?
As per the JMeter project website:
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance
This means that JMeter will run wherever Java runs, and Java will run wherever it is possible to install a Java Virtual Machine. So for Java/JMeter there is no inherent difference between Linux and Windows.
I can think of the following "advantages":
Linux can have a "minimal" installation with only the base system, an SSH server and a JVM, while on Windows in the majority of cases you will have a GUI causing increased resource consumption (see the non-GUI JMeter example at the end of this answer)
Pricing. As of now, the Home version of Windows costs $139 and the Pro version $199; for extremely high loads you may even have to go for Windows Server options. Theoretically you can use trial versions for assessment, but if you plan to have load generators up and running longer than the allowed trial period, you will have to pay. The majority of Linux distributions are free as long as you don't need "support" from the distribution vendors
When it comes to "efficiency" I cannot state anything definite, as it mainly depends on the hardware, the specific Windows version or Linux distribution, and above all the nature of your test; both Windows and Linux might need to be properly tuned. You need to make sure that JMeter itself isn't the bottleneck: if JMeter isn't capable of sending requests fast enough you will get false-negative results, so be extremely careful when calculating how many users you can run on a given load generator.
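For completeness, on a headless Linux load generator you would normally run JMeter in non-GUI mode from the shell; the test plan and results file names below are just placeholders:

    jmeter -n -t test_plan.jmx -l results.jtl

The -n switch suppresses the GUI, -t points at the test plan and -l writes the results file, which is what makes the minimal-installation setup described above practical.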
I am using 4 projects together in a single workspace. Currently I am using Eclipse Mars (32-bit) and Java 7. When I tried to change from Java 7 to Java 8, I ran into a Java heap out-of-memory issue. Is there any other configuration that needs to be updated when upgrading from Java 7 to Java 8?
When your source code makes heavy use of generics, the increased demand for heap space may be due to the intrinsic complexity of the new type inference in Java 8.
Under Preferences > General you can enable [x] Show heap status to watch heap usage over time.
Finally, have a look at FAQ How do I increase the heap size available to Eclipse?
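As a rough sketch of what that FAQ boils down to: the heap is raised in the -vmargs section at the end of eclipse.ini (the values below are only examples; note that a 32-bit Eclipse/JVM cannot use much more than roughly 1.5 GB, so a 64-bit Eclipse may be necessary for a large multi-project workspace):

    -vmargs
    -Xms256m
    -Xmx1024m

Eclipse has to be restarted for the change to take effect, and everything after -vmargs is passed to the JVM, so it must remain the last section of the file.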
I have been wanting to work with Vulkan, the new graphics API, and have gotten it up and running with no problems on Windows 7. However, I can't get Vulkan to work on Linux. When I try running any of the LunarG samples, or even my own code, vkEnumeratePhysicalDevices always says that there are no physical devices. Here is my setup:
OS: Ubuntu 16.04 (LTS) [x64]
GPU: Nvidia Geforce GT 730 2GB GDDR5
Driver: NVIDIA Binary driver - version 364.19 from nvidia-364 (open source)
Vulkan SDK: LunarG v1.0.17.0 [latest version]
I was wondering if maybe there's a file for my GPU that I need to set an environment variable for, but I really don't know. As I said before, this worked on Windows 7 perfectly, but I can't seem to get this to work with the above configuration. I am able to create an instance with the LunarG standard validation layer and the correct extensions, but vkEnumeratePhysicalDevices doesn't find any physical devices. It doesn't give an error, it just says it can't find any physical devices. This has really got me stumped and I would really appreciate the help. Thanks!
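For reference, a minimal check along these lines (assuming instance is a valid VkInstance created beforehand) shows the symptom: vkEnumeratePhysicalDevices returns VK_SUCCESS but reports a device count of zero.

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    /* instance is assumed to be a valid VkInstance created earlier */
    void checkPhysicalDevices(VkInstance instance)
    {
        uint32_t count = 0;
        VkResult res = vkEnumeratePhysicalDevices(instance, &count, NULL);
        printf("vkEnumeratePhysicalDevices: result=%d, count=%u\n", (int)res, count);

        if (res == VK_SUCCESS && count > 0) {
            VkPhysicalDevice devices[8];
            uint32_t i;
            if (count > 8) count = 8;
            vkEnumeratePhysicalDevices(instance, &count, devices);
            for (i = 0; i < count; ++i) {
                VkPhysicalDeviceProperties props;
                vkGetPhysicalDeviceProperties(devices[i], &props);
                printf("  device %u: %s\n", i, props.deviceName);
            }
        }
    }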
Depending on your distribution you may have to install the nvidia-utils package. See this issue on my Vulkan repo for details.
If this isn't the case for you, check the directories Karl mentioned and see whether there is another ICD (maybe one from Intel) that may cause trouble. If you're on an Optimus system with dual GPUs you may need to explicitly activate the NVIDIA GPU.
The 730 should work fine on Linux, at least judging from the Linux hardware reports submitted to my database, like this one.
You shouldn't have to set an environment variable if the driver installed properly.
One way to check for a proper installation is to look for the JSON file that identifies the driver. For example, an nvidia driver will place a file called nvidia_icd.json in /etc/vulkan/icd.d/. /usr/share/vulkan/icd.d/ is another standard, but less common location.
It may also be the case that your GPU does not support Vulkan. Be sure to check your GPU vendor's web pages to confirm support. You may want to download the driver straight from the vendor's site in order to get one that they say has Vulkan support.
And are you sure that using the "Additional Drivers" page is supposed to give you a Vulkan driver?
You can refer to the loader documentation in the docs section at https://vulkan.lunarg.com for more info.
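As a quick sanity check from a shell (these are the typical default paths mentioned above and may differ per distribution; the export line is only a workaround if the loader really cannot find the manifest on its own):

    ls /etc/vulkan/icd.d/ /usr/share/vulkan/icd.d/
    cat /etc/vulkan/icd.d/nvidia_icd.json
    export VK_ICD_FILENAMES=/etc/vulkan/icd.d/nvidia_icd.json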
I'm developing a WinCE 5.0 application that uses two commercial libraries. When the application starts calling the second library it gets slower, and then after some use it hangs and the whole OS freezes. It has to be rebooted to work again. The thing is that I'm developing this without a physical device (a testing person installs each release and runs the tests) and without an emulator (the device provider is not providing an OS image).
My intuition tells me that the second library is using up all the available resources (basically handles and memory) for a WinCE 5.0 process. I have to prove this to the library vendor, so I wish to add some general process and system information to my logs. Could you recommend which APIs to call to get this information in CE?
I would really appreciate any hints. Thanks in advance!
Windows CE provides a very robust set of APIs for a subsystem called CeLog. CeLog is what Kernel Tracker uses to collect and display its information. You can get all the way down to scheduler calls and thread migrations if you want. The real danger with using CeLog is collecting too much data, making it difficult to make sense of, but if you filter the collection to just your process, that should help. You could collect the data to a log file, then use Kernel Tracker to open and view that data.
Here are some good starting points for info:
Introduction to Remote Kernel Tracker
More on Remote Kernel Tracker
CeLogFlush.exe (particularly the section 'Collecting Data on a Standalone Device with CeLogFlush')
Implementing a Custom Event Tracking Library
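If all you need in the short term is a few coarse memory numbers in your own application log rather than a full CeLog trace, a minimal sketch along these lines should work on CE 5.0; the helper name is made up and the final logging call is left to whatever facility the application already uses:

    #include <windows.h>

    /* Hypothetical helper: capture a coarse memory snapshot for the process log.
       GlobalMemoryStatus and wsprintf are available on Windows CE 5.0. */
    void LogMemorySnapshot(void)
    {
        MEMORYSTATUS ms = { 0 };
        TCHAR line[128];

        ms.dwLength = sizeof(ms);
        GlobalMemoryStatus(&ms);

        wsprintf(line, TEXT("mem load %lu%%, avail phys %lu KB, avail virt %lu KB"),
                 ms.dwMemoryLoad, ms.dwAvailPhys / 1024, ms.dwAvailVirtual / 1024);
        /* hand "line" to the application's existing logging code here */
    }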
Embedded Visual C++ 4 contained a "Remote Performance Monitor" that could do just that. Microsoft retracted EVC4 as a free download some time ago, but it can still be downloaded from MSDN or found on the internet.
With Service Pack 4 it should work for WinCE 5.0. It does not appear to work with Windows Embedded CE 6.0 and newer, though.
I am developing an OpenGL application and I am seeing some strange things happen. The machine I am testing with is equipped with an NVidia Quadro FX 4600 and it is running RHEL WS 4.3 x86_64 (kernel 2.6.9-34.ELsmp).
I've stepped through the application with a debugger and I've noticed that it is hanging on OpenGL calls that retrieve information from the OpenGL API, e.g. glGetError, glIsEnabled, etc. Each time it hangs, the system is unresponsive for 3-4 seconds.
Another thing that is interesting is that if this same code is run on RHEL 4.5 (Kernel 2.6.9-67.ELsmp), it runs completely fine. The same code also runs perfectly on Windows XP. All machines are using the exact same hardware:
PNY NVIDIA Quadro FX 4600 768 MB PCI Express
Dual Intel Xeon DP Quad Core E5345 2.33 GHz
4096 MB 667 MHz Fully Buffered DDR2
Super Micro X7DAL-E Intel 5000X Chipset Dual Xeon Motherboard
Enermax Liberty 620 watt Power Supply
I have upgraded to the latest 64-bit drivers: Version 177.82, Release Date: Nov 12, 2008, and the result is exactly the same.
Does anyone have any idea what could be causing the system to hang on these OpenGL calls?
It appears that this is an issue with less-than-perfect NVIDIA drivers for Linux. Upgrading to a newer kernel appears to resolve it. If I am forced to use this dated kernel, there are a few things I've tried that seem to help.
Setting the __GL_YIELD environment variable to "NOTHING" prior to starting X seems to increase stability with this older kernel (a one-line example follows these notes).
http://us.download.nvidia.com/XFree86/Linux-x86_64/177.82/README/chapter-11.html
I've also tried disabling Triple Buffering and Flipping.
I've also found that these forums are very helpful for Linux/NVidia problems. Just do a search for "linux crash"
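For reference, the workaround from the first point is a single environment variable exported before X starts (where exactly you export it, e.g. in the display manager's or X session's startup script, depends on the system):

    export __GL_YIELD="NOTHING"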
You may be able to dig deeper by using a system profiler like Sysprof or OProfile. Do other OpenGL applications using these calls exhibit similar behavior?