LoadRunner: measure Siebel UI rendering performance

Is there a way to create a LoadRunner script that also measures the rendering of the Siebel UI?
I'm using Siebel Open UI 8.1, LR 12, and the IE11, Chrome 39, and Firefox 24 browsers.
If it's not possible to execute this performance scenario with LoadRunner, is there another option to cover it?

Yes: use a GUI Virtual User.
GUI Vuser login time - Siebel Web Vuser login time = time in the GUI for login.
Continue this for any named timed event (save, page, etc.). This is a well-established protocol dating back as far as 1994.
Time in the GUI consists of the time required to run JavaScript plus the time required to draw information on the screen (rendering). Check your browser's developer tools to measure time in JavaScript and time in rendering; a sketch of the relevant browser API is shown below.
FYI: GUI Virtual Users are QTP-based clients. You could achieve substantial similarity, without the actual drawing on the screen but including the JavaScript execution, with TruClient. But since you indicate that rendering is what you are after, that points to the GUI virtual user.
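As a cross-check on the GUI-minus-Web-Vuser arithmetic, modern browsers expose this breakdown directly through the standard Performance API (Navigation Timing and Paint Timing). Here is a minimal sketch you could paste into the DevTools console after a page finishes loading; note these APIs postdate the IE11/Chrome 39/Firefox 24 generation named in the question, and the scripting/rendering split below is an approximation, not a LoadRunner metric:

```typescript
// Rough client-side time breakdown via the standard Performance API.
// Run in the DevTools console after the page has fully loaded.
const [nav] = performance.getEntriesByType(
  "navigation",
) as PerformanceNavigationTiming[];
const paints = performance.getEntriesByType("paint");

// Last byte received -> DOMContentLoaded: dominated by HTML parsing
// and synchronous JavaScript execution.
const parseAndScriptMs = nav.domContentLoadedEventEnd - nav.responseEnd;

// First Contentful Paint: when the browser first drew page content.
const fcp = paints.find((p) => p.name === "first-contentful-paint");

console.table({
  "network (ms)": nav.responseEnd - nav.startTime,
  "parse + JS to DOMContentLoaded (ms)": parseAndScriptMs,
  "first contentful paint (ms)": fcp ? fcp.startTime : NaN,
  "full load (ms)": nav.loadEventEnd - nav.startTime,
});
```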

Related

WebAuthn on Chrome on Windows: Skip Windows dialog in favor of Chrome dialog

In developing our passkey integration I'm encountering unusual behavior in Chrome on Windows.
On my PC, when I register a new physical key I see this Windows dialog.
When I enable the virtual authenticator environment in the Chrome Dev Tools I get this Chrome dialog instead.
However, someone testing the application for me on another PC, without using the virtual authenticator environment, gets the Windows dialog first. If they click Cancel in the Windows dialog, then they get the Chrome dialog.
Is there anything I can do to nudge the browser towards delivering a more consistent experience? I'd rather always show the Chrome dialog if possible.
For reference, this is the virtual authenticator environment in the Chrome Dev Tools:
The problem is that lots of enterprise users have to use a physical security key one or more times a day. So there's a strong desire not to put extra clicks in their way and thus to jump directly to the Windows system UI. But the Windows UI doesn't support using phones as authenticators, so sometimes the browser UI is needed as hitting escape is quite non-discoverable.
Quite how that balance is struck has varied over time and might change again in the future. You can see the current logic here if you want to craft requests that trigger the browser UI. But the intent is that sites should do the obvious thing and the UI should be fairly reasonable.
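To make "craft requests" concrete: since, per the above, the Windows dialog cannot drive phone authenticators, a get() call whose allowed credential advertises the "hybrid" (phone) transport is the kind of request that tends to stay in Chrome's own UI. Below is a hedged sketch using the standard WebAuthn API; example.com and the two server-supplied buffers are placeholders, and whether this actually selects the browser UI depends on Chrome's current, changeable heuristics:

```typescript
// Sketch: an assertion request that advertises transports the Windows
// system dialog cannot serve (a phone over "hybrid"), which may keep
// the request in Chrome's own UI. This is a heuristic observed from
// Chrome's routing logic, not a documented contract.
async function requestAssertion(
  challengeFromServer: ArrayBuffer, // placeholder: supplied by your relying party
  credentialIdFromServer: ArrayBuffer, // placeholder: the registered credential id
): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,
      rpId: "example.com", // placeholder relying-party id
      allowCredentials: [
        {
          type: "public-key",
          id: credentialIdFromServer,
          // Advertise a phone ("hybrid") alongside the platform
          // authenticator, rather than usb/nfc only.
          transports: ["hybrid", "internal"],
        },
      ],
      userVerification: "preferred",
    },
  });
}
```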

Power Automate: Using "Wait for image" and "Extract text with OCR" in unattended mode possible?

I want to automate a Webswing session in unattended mode. Webswing is a web server that allows applications to run within the web browser, so there are no UI elements that the bot can access directly.
Because of that, I initially worked with image recognition (e.g. the "Wait for image" and "Extract text with OCR" actions) in attended mode. Now I would like to switch to unattended mode. Does anyone have experience with this and know whether the image-recognition actions can be applied to unattended robots in a session like Webswing, or are there other actions I could use for this use case?
Yes, although it's worth keeping in mind that the unattended session's screen size needs to be set to the size you used when running the flow attended.
See: how to set screen resolution unattended mode

Slow page loads in all browsers, with a long idle time at the start

When browsing the internet (all browsers) I experience a significant delay before the page load is initiated. I've used Chrome's Developer Tools to analyze the issue and in looking at the Performance tab, there is significant Idle time before any activities are started (see image). In addition, if I look at the network timeline, I see a gap in the waterfall timeline with no activity after the initial page request is made. Any suggestions as to the root cause or ideas for further troubleshooting?
Performance Tab in Chrome Developer Tools - page load for Firefox.com
Network Tab in Chrome Developer Tools - page load for Google.com

Run DOH robot tests in a Java program in the background

I would like to embed dojo/robot tests in a Java application.
The Java application would use a Java web engine to load web pages and to inject the dojo script into those pages. The web engine makes it possible to run JavaScript.
I understand that DOH uses system mouse and keyboard events. The user of my application does not see the web browser page (the browser runs in the background via the web engine).
I have a couple of questions:
1. What happens to the mouse pointer during DOH test execution?
2. Is it possible to run DOH tests inside my application (in the background)?
3. What happens if the user types on the keyboard or moves the mouse during test execution? (For instance, the user may switch to another application, e.g. Microsoft Word.)
Thanks!
A few things --
Dojo tests can be run from the command-line using node.js or Rhino.
I have created a DOH test suite that is backed with a Java web server and that works well, BUT...
To clarify, not all of the DOH robots use system mouse & keyboard events, only 1 particular robot (robotx) simulates actual user input. When using robotx, the mouse & keys behave as directed by the tests. If you mouse off the browser, the tests will be aborted (an alert comes up notifying you of this). Therefore, robotx cannot be run in the background because it is actually interacting with the browser.
You may have some luck using the other robots coupled with node.js or Rhino. The key concept is that you should be looking for some "headless" browser testing scenario, which is generally what Rhino handles (I believe Node can do this as well) while avoiding use of robotx.
Basically, as long as you are not using robotx (the one that actually takes control of the UI), you should be able to start the tests and minimize the browser, or use a headless browser engine.
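To illustrate the distinction, the non-robotx robots simulate input with synthetic DOM events rather than the system pointer, so nothing visible has to move on the user's screen. A minimal sketch of that idea in plain DOM terms (not DOH-specific code; the "saveButton" id is a made-up example):

```typescript
// Dispatch a synthetic click without moving the real mouse pointer.
// This is the style of input simulation that works in the background
// or in a headless engine; only robotx needs the actual system cursor.
function syntheticClick(target: HTMLElement): void {
  const event = new MouseEvent("click", {
    bubbles: true, // propagate through the DOM like a real click
    cancelable: true,
    view: window,
  });
  target.dispatchEvent(event);
}

// Hypothetical usage: click a button by id inside the loaded page.
const button = document.getElementById("saveButton");
if (button) {
  syntheticClick(button);
}
```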

How can I simulate the hardware Back key on a real Windows Phone 7 device?

I want to simulate the hardware Back key from a PC, for automated testing. I use the Windows Phone Test Framework by Expensify, but it does not support real devices.
This can't be done on the real device unless you use robots (I am talking with LessPainful and this may happen one day!)
If this is required for your real-device automation tests now, then the only things I can suggest are:
- hook a new custom command into the automation stack, and respond to that command by calling Back on the RootFrame's navigation stack;
- note that this might not be a perfect simulation of what would happen for a real Back button press (e.g. if a modal dialog is up), in which case you'll need to engineer code into your app to simulate the flow of the back press;
- ask around on XDA Developers; someone might have a solution for you (there are ways to hack the OS on your test phone...).
