Is there a CAPL function to access the hardware configuration?

I am setting up a test environment using CANoe. Depending on the test that I want to run, I need to configure the hardware properly just before running the test.
I was able to use the panel to configure the hardware and export the settings for later import, but I am not able to trigger this from the test itself, so a manual action is required before the tests attached to a different HW configuration can be enabled.
I would like to be able to restore the HW configuration directly from the test (CAPL).

Depending on what kind of hardware you are using (VNs, CAN cards, FlexRay, CAN, Ethernet, LIN, etc.), you may or may not have access to such settings.
For example, for CAN FD there are canFdGetConfiguration() and canFdSetConfiguration(), and for CAN there is canGetConfiguration().
There is similar access for FlexRay networks as well: frGetConfiguration().
Press F1 on these functions and the Help will explain their usage.

Related

Why is it recommended to run load tests in non-GUI mode in JMeter?

I'm monitoring the connect time and latency from the JMeter machine while running in GUI mode, and they are within acceptable limits.
Should we strictly use non-GUI mode even though I am able to perform the load test in GUI mode?
I'm targeting 250 TPS and am able to achieve that. I have increased the memory, and the CPU and memory usage of the load generator stays below 60%.
Should I go for non-GUI mode?
The main limitation is that each event in the queue is handled by a single event dispatch thread, which acts as the bottleneck on the JMeter side.
My expectation is that your "250 TPS" actually looks like a saw-tooth ("chainsaw") pattern, while it should look like a steady line at the target rate.
So check what your load pattern really looks like using, for example, the Transactions per Second listener (installable via the JMeter Plugins Manager).
Also check how your JVM behaves, especially when it comes to garbage collection; this can be done via, for example, JVisualVM. Most probably you will see the same "chainsaw" pattern.
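If attaching JVisualVM is not convenient, GC behaviour can also be captured to a log file instead; a rough example (the -Xlog syntax below applies to Java 9+, and the file names are placeholders):
JVM_ARGS="-Xlog:gc*:file=gc.log" jmeter -n -t test_plan.jmx -l results.jtl   # test_plan.jmx and results.jtl are placeholders
The resulting gc.log can then be checked for the same saw-tooth allocation/collection pattern.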
You don't need to follow the JMeter best practices, but:
you may encounter issues achieving specific goals (such as a target TPS),
your machine may not be able to run a GUI or may have limited resources,
you may be executing JMeter from a script or a build tool such as Jenkins.
Also, it's better to be familiar with the JMeter CLI (non-GUI) mode and its reporting capabilities.
JMeter supports dashboard report generation to get graphs and statistics from a test plan.
CLI mode will also be needed for distributed testing:
consider running multiple CLI JMeter instances on multiple machines, using distributed mode (or not).
The CLI is also useful for parameterising tests.
The "loops" property can then be defined on the JMeter command-line:
jmeter … -Jloops=12
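For reference, a complete non-GUI invocation that also produces the HTML dashboard report typically looks something like this (the file and folder names are placeholders):
jmeter -n -t test_plan.jmx -l results.jtl -e -o report_dir -Jloops=12
Here -n selects non-GUI mode, -t points to the test plan, -l writes the raw sample results, and -e/-o generate the dashboard report into report_dir once the run finishes. For -Jloops to have any effect, the Thread Group loop count must reference the property, e.g. ${__P(loops,10)}, where 10 is the default used when the property is not set.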

Which local machine components could affect an RDP session performance-wise?

I've got the following totally reproducible scenario, which I'm unable to understand:
There is a very simple application which does nothing other than call CreateObject("Word.Application"), i.e. it creates an instance of MS Word for COM interop. This application is located on a Windows Terminal Server. The test case is to connect via RDP and execute the application, which then outputs the time taken for the CreateObject call.
The problem is that the execution time is significantly longer if I connect from one specific notebook (an HP Spectre): it takes 1.7 s (±0.1 s).
If I connect from any other machine (notebook or desktop computer), the execution time is between 0.2 and 0.4 s.
The execution times don't depend on the RDP account used, the screen resolution, or local printers. I even did a fresh install of Windows on the HP notebook to rule out other side effects. It doesn't matter whether the HP notebook is connected via WLAN or a USB network card. I'm at a loss to understand the 4x to 8x execution time difference compared to any other machine.
Which reason (component/setting) could explain this big difference in execution time?
Some additional information: I tried debugging the process using an API monitor and could see that >90% of the execution time is actually being spent between a call to RpcSend and RpcReceive. Unfortunately I can't make sense of this information.
It could be that credential management is somehow getting in the way.
Open the .rdp file with notepad and add
enablecredsspsupport:i:0
This setting determines whether RDP will use the Credential Security Support Provider (CredSSP) for authentication, if it is available.
Related documentation:
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff393716%28v%3dws.10%29
Given your information about the time spent between RpcSend and RpcReceive, it could be that some service is stopped on your client machine, such as the DCOM server or another COM-related service (they usually have "COM" or "transaction" in their names).
Some of these services may be started by the system on demand (if their startup type is set to Manual) to handle your request and stopped again afterwards, but starting a service adds a delay.
I suggest you open Computer Management > Services (or Run > services.msc), compare the COM-related services running on your "slow" client and on your "fast" clients, and try setting their startup type to Automatic instead of Manual or Disabled.
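To make that comparison quicker, a PowerShell snippet along these lines (the DisplayName filter is just a heuristic, not an exhaustive list of COM-related services) can be run on both the "slow" and a "fast" client and the outputs compared:
# heuristic filter; adjust the pattern as needed
Get-CimInstance Win32_Service |
    Where-Object { $_.DisplayName -match 'COM|Transaction' } |
    Select-Object Name, State, StartMode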
Also, try running API Monitor on those processes to pinpoint the time-consuming call more precisely.

Tuning kernel module parameters

I am quite new to Linux. While going through an online tutorial I came across two ways of altering kernel parameters:
create a file /lib/modprobe.d/XYZ.conf and put in it, for example, something like options cdrom lockdoor=0
navigate to /etc/sysctl.d, create a file like mnq.conf, and put in it something like aaa.bbb.ccc=0
What is the difference between these two ways?
To understand the difference, let's look at when these parameters are applied during the boot process:
The kernel (vmlinuz) is loaded with default sysctl parameters (there is nowhere to read the actual parameters from yet). No modules (drivers) are present at this point (they are separate entities!).
The kernel reads the initial ram disk containing modules.
The kernel loads modules for disk drivers and other modules needed for booting. These drivers might need some tuning (e.g. certain SCSI devices might cause initialization timeouts without the proper setting). Such settings have to be applied before driver initialization starts, hence modprobe and module parameters in general.
...
System services start, including the sysctl service, which applies kernel (and some module) parameters dynamically at runtime. Those parameters don't affect whether the kernel works at all; they are more a matter of stability/performance tuning.
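To make the two mechanisms concrete, here is a minimal example using the placeholders from the question (paths and names are illustrative; for local changes most distributions prefer /etc/modprobe.d over /lib/modprobe.d):
# /lib/modprobe.d/XYZ.conf - read by modprobe when the module is loaded
options cdrom lockdoor=0
# verify after the module has been loaded:
cat /sys/module/cdrom/parameters/lockdoor

# /etc/sysctl.d/mnq.conf - applied at runtime by the sysctl service
aaa.bbb.ccc=0
# apply all sysctl.d files immediately, then read the value back
# (aaa.bbb.ccc is the question's placeholder; substitute a real key such as vm.swappiness):
sysctl --system
sysctl aaa.bbb.ccc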

LoadRunner 11.03 performance issue?

Recently, I received from my client a PC with LoadRunner 11.03 (perhaps patch 3) installed, and I used it to watch a web application's performance in a long-running test.
In the multi-user test it does not seem to generate the expected load: my web server's performance monitor never reached any limit in CPU usage, network bandwidth, disk usage per minute, or memory usage. Only the number of waiting threads looked slightly off, and even that was not obvious.
It looks like sequential behavior rather than parallel access.
(No errors occurred.)
So I thought it was not a problem with the servers, but that the client machine has some problem preventing it from generating parallel access for some reason.
I don't have a proper HP Passport ID, so I can't access the LoadRunner patches website.
Please let me know whether LoadRunner patches, especially patch 4 or higher, would change the behavior described above, or whether the cause lies elsewhere.
OK, based on what you wrote (correct me if I'm wrong), I'm guessing you are just running the script in VuGen, the Virtual User Generator, and not in the Controller. LoadRunner is actually a suite of multiple applications. The Virtual User Generator is the script development application, a development environment like Eclipse. It is single-threaded, and running a script there is meant only to test the script individually.
To run a multi-threaded test you need to use the Controller app: develop a test scenario, assign multiple virtual users (the LoadRunner term for concurrent threads) to each script you want to run, and execute the test from the Controller. You can configure machines to be Load Generators (another app set up to run as a process or service) and push the test out from the Controller to the Generators.

Testing network interrupts in software

I have a network C++ program in Windows that I'd like to test for network disconnects at various times. What are my options?
Currently I am:
Actually disconnecting the network wire from the back of my computer
Using ipconfig /release
Using the cports program to close out the socket completely
None of these methods is ideal for me, though, and I'd like to emulate network problems more easily.
I would like connects to sometimes fail, socket reads to sometimes fail, and socket writes to sometimes fail. It would be great if there were some utility I could use to emulate these types of problems.
It would also be nice to be able to build some automated unit tests while this emulated bad network is up.
You might want to abstract the network layer, and then you can have unit tests that inject interesting failure events at appropriate points.
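For example (this is only a sketch of the idea; the interface and class names are made up for illustration): put a thin interface over the socket operations the application uses, let the production implementation forward to the real socket calls, and provide a test double that fails on demand.

#include <cstddef>
#include <stdexcept>

// Minimal abstraction over the socket operations the application performs.
struct INetTransport {
    virtual ~INetTransport() = default;
    virtual void connect(const char* host, int port) = 0;
    virtual std::size_t read(char* buf, std::size_t len) = 0;
    virtual std::size_t write(const char* buf, std::size_t len) = 0;
};

// Test double that fails on demand; the production implementation would
// wrap the real socket calls the program currently makes directly.
struct FaultInjectingTransport : INetTransport {
    bool failConnect = false;
    bool failRead = false;
    bool failWrite = false;

    void connect(const char*, int) override {
        if (failConnect) throw std::runtime_error("simulated connect failure");
    }
    std::size_t read(char*, std::size_t) override {
        if (failRead) throw std::runtime_error("simulated read failure");
        return 0;  // pretend the peer closed the connection
    }
    std::size_t write(const char*, std::size_t len) override {
        if (failWrite) throw std::runtime_error("simulated write failure");
        return len;
    }
};

Unit tests can then drive the application code against FaultInjectingTransport with failConnect/failRead/failWrite toggled, exercising each error path deterministically without any real network involved.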
The closest I can think of is doing something similar with VE Desktop from Shunra:
Simulating High Latency and Low Bandwidth in Testing of Database Applications
Shunra VE Desktop Standard is a Windows-based client software solution that simulates a wide area network link so that you can test applications under a variety of current and potential network conditions – directly from your desktop.
You can subclass whatever library class you are using to manage your sockets (presumably CAsyncSocket or CSocket if you are using MFC), override the methods whose failure you want to test, and insert appropriate test code in your overrides.
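If the code is indeed MFC-based, a hedged sketch of that suggestion could look like the following (it assumes CAsyncSocket as the base class; the error code returned is just an arbitrary example):

// Assumes an MFC project; CAsyncSocket comes from <afxsock.h>.
class CFlakySocket : public CAsyncSocket
{
public:
    bool simulateReadFailure = false;
    bool simulateWriteFailure = false;

    int Receive(void* lpBuf, int nBufLen, int nFlags = 0) override
    {
        if (simulateReadFailure) {
            WSASetLastError(WSAECONNRESET);  // so GetLastError() looks plausible
            return SOCKET_ERROR;
        }
        return CAsyncSocket::Receive(lpBuf, nBufLen, nFlags);
    }

    int Send(const void* lpBuf, int nBufLen, int nFlags = 0) override
    {
        if (simulateWriteFailure) {
            WSASetLastError(WSAECONNRESET);
            return SOCKET_ERROR;
        }
        return CAsyncSocket::Send(lpBuf, nBufLen, nFlags);
    }
};

The application then creates a CFlakySocket where it previously created a CAsyncSocket, and the test code flips the flags at the points where a failure should be simulated.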
There are several approaches you can use, depending on which level you want to test. At the function level, you can use an xUnit testing framework to mock a response. At the software level, you can use a local proxy server to control the connection.
