How to auto-start Selenium nodes under Windows

I am trying to automate the startup of my Selenium Grid.
I have the Hub registered as a service, so that starts when the machine starts, but
the documentation tells me I can't do the same with a node, because it won't run in a user context, and so I would not be able to get screenshots etc.
I've seen vague hints that you can add something to the registry to start a program, but I'm not really convinced that's what I want.
IT pulls down the servers for upgrades at intervals, and sessions are set to time out after X amount of inactivity, so it's a tedious and silly process to open remote desktops to all 6 nodes, log in, and start the process every time.
How do you best manage this?
- Configure the machines to auto-login, and place startSeleniumNode.bat in that users start-folder?
- Add some kind of command-line entry to the Jenkins build script that launches the test, to call each of the 6 nodes in turn to start the Selenium node (and how would you do that?)

Take a look at AlwaysUp - it allows you to run almost any application as a Windows service, including Selenium Grid hubs and nodes.
I've previously created a fairly large Grid infrastructure using AlwaysUp for node management. It's very useful for starting the Grid on boot, and it lets you specify a user account to run as, schedule restarts at regular intervals, and a lot more.
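AlwaysUp still needs a command to wrap, so you keep a start script along the lines of the startSeleniumNode.bat mentioned in the question. A minimal sketch, assuming a Selenium 2.x standalone jar and example paths, hub host, and ports (adjust all of these to your install):
@echo off
rem Start a Selenium Grid node and register it with the hub.
rem The jar name, hub URL, and port are assumptions - change them to match your setup.
cd /d "C:\selenium"
java -jar selenium-server-standalone-2.53.1.jar -role node -hub http://gridhub.example.com:4444/grid/register -port 5555
If you'd rather not use a third-party tool at all, pointing a logon-triggered scheduled task at the same script (schtasks /create /sc onlogon /tn SeleniumNode /tr "C:\selenium\startSeleniumNode.bat"), combined with auto-login, gives you the first option from the question.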


Cross-machine UI automation frameworks?

Good time of day,
I tried to google it but haven't found an answer - are there any free or paid UI automation frameworks that give me the ability to automate applications that are installed on different computers but communicate with each other?
Ideally what I want is:
1. Do something on Machine A
2. Wait for an event on Machine B
3. Do something on Machine B after the event occurred
4. Wait for an event on A
I'm a bit too lazy to write and run different tests on both machines (e.g. test1 with steps 1 and 4, and test2 with steps 2 and 3), so I'm looking for another solution.
Perhaps you could set something like this up using Jenkins: http://jenkins-ci.org/
One idea for how it might be done:
The Jenkins master node launches a job on Machine A
The program running on Machine A contacts the master node (via the Jenkins REST API) to launch a job on Machine B
Machine A then starts polling the master node, waiting for Machine B's job to go into a completed state
Machine A continues with its work once the Machine B job is complete
Note that you might be able to dispense with the need for a third machine on which to run the Jenkins server software, and instead make your A or B machine serve a double role as the Jenkins master node as well as a job runner.
This approach means you'd end up with Jenkins-specific API code in the processes you're launching on A and B, but nonetheless it might be fairly quick to implement.
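For illustration, the trigger-and-poll part might look like this from Machine A, assuming curl is available, an unsecured Jenkins at an example URL, and a hypothetical job named jobB (a secured Jenkins would additionally need an API token and crumb):
@echo off
rem Queue a build of Machine B's job on the Jenkins master (URL and job name are examples).
curl -s -X POST "http://jenkins.example.com:8080/job/jobB/build"
rem Poll the last build of jobB until it reports it is no longer running.
rem (A robust version would track the queue item / build number instead of lastBuild.)
:poll
timeout /t 15 /nobreak >nul
curl -s "http://jenkins.example.com:8080/job/jobB/lastBuild/api/xml" | find "<building>false</building>" >nul
if errorlevel 1 goto poll
echo jobB finished - continuing with Machine A's work.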

100% uptime for web application

Our requirement for our next web app is that we be able to deploy a new version of the web app without downtime.
How is it possible to achieve such a task?
Does it mean we need to run 2 different servers (Tomcats) and redirect users to each one when needed?
Are there tools that do this specific task? What category are these tools in?
Thanks
Just use Tomcat's parallel deployment feature. It is available from Tomcat 7 onwards.
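In practice, you deploy the new version next to the old one under the same context path, distinguishing them with a ##<version> suffix in the WAR file name; Tomcat keeps existing sessions on the old version and sends new sessions to the newest one. A sketch with example names and paths:
rem Version 001 is already deployed and serving sessions as /shop:
copy shop-v1.war "C:\tomcat\webapps\shop##001.war"
rem Drop in version 002 under the same /shop context path; Tomcat 7+
rem keeps existing sessions on ##001 and routes new sessions to ##002:
copy shop-v2.war "C:\tomcat\webapps\shop##002.war"
rem Once no sessions remain on ##001, undeploy it by deleting its WAR:
del "C:\tomcat\webapps\shop##001.war"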
Don't forget, 100% availability is impossible - it may happen for a certain period, but no one can guarantee it, no matter what setup you have.
But since you're looking for a smooth change from one version to another, the best you can do is update one node and then switch nodes. Of course, since you likely have sessions which shouldn't be disconnected, you'll need to make sure that an instance (e.g. a load balancer) directs all new requests to the new node, whereas old session requests stay on the old node until no one uses it anymore, after which you can upgrade the second node and, finally, balance load across both nodes again.

Windows "system service", not "web service" performance

We have an image processing workflow product. Typically 10,000->100,000 images can be run though our processing in a job. More than one job may be pending.
Currently, all the image processing is performed in our home-grown imaging library, a managed C++ library, .NET compatible. It runs in the user’s application space. What I mean by that is that if you log on as “PeteSmith”, the images will be processed under Pete Smith’s account.
Currently, we only allow one instance of this image processing at a time. Customers are asking us for a new version, one that allows more than one instance to run at the same time, so the question of how we do this is now something we are examining.
The idea of getting processing off the “user’s account” and using a “system account” to do the processing in the background is appealing. It is appealing because of the way Windows services are naturally managed by OS events like logging in and logging out, and by other system resource utilization events and alarms.
It appears to me that all we would need to do is manage a small number of well-defined events, well documented by Microsoft.
That’s all nice and wonderful. But what I need to understand is what going to a service implementation for our image processing code means for performance, from our customers’ point of view.
In their view, they need more processed, faster.
QUESTION: How should I think about the trade-offs between:
1) Using a service to run a job vs. running N different “instances” of the software on Pete Smith’s (the user’s) account?
2) Allowing N services to run N different jobs (no cross-talk needed) vs. running N different “instances” of the software on Pete Smith’s (the user’s) account?
Well, the image processing requires a certain amount of CPU and IO resources for the processing. That amount does not change by fiddling with how and where you start your process.
The difference between service or not should be governed by the required usage pattern. If you want the application to go on processing images automatically regardless if anyone is logged on you should run as a service, but if the usage is more like "choose an image, start processing and wait for the result" style you should go for a client app.
It is not entirely clear why your customer wants to run multiple instances. Is it because they want to have one instance do the heavy processing work while they configure the processing for the other? Or do they want to run multiple instances because the processing is heavy and they want to run multiple in parallel?
In both those cases I would consider running the calculation(s) on a background thread in the application. If it is not possible to use threads (maybe due to some global shared state in the library) my second best bet would be to start each processing in a new process and wait for the result on the main process.
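As a rough sketch of that second approach, a launcher could start one process per job and wait for them all to exit; imageproc.exe and the job files below are hypothetical placeholders:
@echo off
rem Launch one processing instance per job, in parallel.
for %%J in (job1.cfg job2.cfg job3.cfg) do (
    start "imageproc %%J" imageproc.exe %%J
)
rem Crude wait: poll until no imageproc.exe processes remain.
:wait
timeout /t 30 /nobreak >nul
tasklist /fi "imagename eq imageproc.exe" | find /i "imageproc.exe" >nul && goto wait
echo All processing instances have finished.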

How to prevent WebSphere from starting before files from an application update have been unpacked

Using the WebSphere Integrated Solutions Console, a large (18,400 file) web application is updated by specifying a war file name and going through the update screens and finally saving the configuration. The Solutions Console web UI spins a while, then it returns, at which point the user is able to start the web application.
If the application is started after the "successful update", it fails because the files that make up the web application have not yet been exploded out to the deployment directory.
Experimentation indicates that it takes on the order of 12 minutes for the files to appear!
One more bit of background that may be significant: There are 19 application servers on this one WebSphere instance. WebSphere insists that there be a lot of chatter between them, even though they don't need anything from each other. I wondered if this might be slowing things down when it comes to deployment. Or if there's some timer in the bowels of WebSphere that is just set wrong (usual disclaimers apply...I'm just showing up and finding this situation...I didn't configure this installation).
Additional Information:
This is a Network Deployment configuration, and it's all on one physical host.
* ND 6.1.0.23
Is this a standalone or an ND setup? I am guessing it is an ND setup, considering you have stated that there are 19 app servers. The nodes need to be synchronized with the deployment manager so that the updated files are available to the respective nodes.
After you update and save the changes, try to synchronize the nodes with the dmgr (or, alternatively, as part of the update process, click Review and check the box that says "Synchronize changes with Nodes"), and this will distribute the changes to the various nodes.
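If automatic synchronization is disabled or too slow, a manual full resync of one node looks roughly like this (the profile path, dmgr host, and SOAP port are examples; adjust them to your cell):
@echo off
rem Run from the node's profile bin directory (example path below).
cd /d "C:\WebSphere\AppServer\profiles\AppSrv01\bin"
rem Stop the node agent, force a full resync against the dmgr, restart it.
call stopNode.bat
call syncNode.bat dmgrhost.example.com 8879
call startNode.bat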
The default automatic synchronization interval, I believe, is 1 minute.
12 minutes certainly sounds a lot. Is there any possibility of network being an issue here?
HTH
Manglu

How to identify users which are connected to a windows server via remote desktop

At my workplace, we have lab machines that we use to do our testing.
The standard procedure to reserve a machine for testing was to walk around the office to make sure that no one was using the machine.
This is highly inefficient and time consuming.
At first, I set up a web page where people could reserve the lab machines, but nobody was keeping the page updated, so that turned out to be useless.
I finally found a solution using Microsoft Log Parser and wanted to share it with the Stack Overflow community.
It is a batch file that runs on the machine, so the user can identify the last users that used the machine and easily IM them to ask if the machine is free.
Is there a better solution to do this?
Use the built-in command qwinsta (Query Win Station) to figure out which sessions (including the console) are active or inactive (disconnected), and then act on the given information (credit to krusty.ar, btw, for linking this already).
If you feel people are abusing the machine in question, refer to rwinsta to nuke their sessions into oblivion...
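For example (LABMACHINE01 and the session ID are placeholders, and the output shown is only an illustration of the format):
rem List sessions on a remote machine:
qwinsta /server:LABMACHINE01
rem  SESSIONNAME       USERNAME       ID  STATE   TYPE   DEVICE
rem  console                           0  Conn    wdcon
rem  rdp-tcp#1         psmith          1  Active  rdpwd
rem  rdp-tcp                       65536  Listen  rdpwd
rem Forcibly end a session by its ID if it is being abused:
rem rwinsta 1 /server:LABMACHINE01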
You will need to install the Microsoft Log Parser
Then create the following 2 files
TSLoginsDetails.sql
SELECT
  timegenerated,
  EXTRACT_TOKEN(Strings,1,'|') AS Domain,
  EXTRACT_TOKEN(Strings,0,'|') AS User,
  EXTRACT_TOKEN(Strings,3,'|') AS SessionName,
  EXTRACT_TOKEN(Strings,4,'|') AS ClientName,
  EXTRACT_TOKEN(Strings,5,'|') AS ClientAddress,
  EventID
FROM Security
WHERE EventID=682
ORDER BY timegenerated DESC
TSLogins.bat
@echo off
cls
rem Run the Log Parser query above and display the results in a grid window.
c:
cd "c:\Program Files\Log Parser 2.2\"
logparser.exe file:TSLoginsDetails.sql -o:DATAGRID
Now, by placing this batch file on the desktop, the user can see who the last people to log in were and contact them by IM to verify that they are done.
How about also posting the information from the log file to the web page that tells who is currently using the machine?
- Check and notify when they log in.
- Update the "who is using the machine" page you made prior.
- Run an AT job that checks every couple of hours who is on it (a sketch follows below).
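For the AT part, a rough sketch that appends the current session list to a shared log every couple of hours (the times and log path are examples; AT runs the command as SYSTEM, and schtasks is its modern replacement):
rem Log who is on the machine every two hours on weekdays.
at 08:00 /every:M,T,W,Th,F cmd /c "qwinsta >> c:\labstatus\usage.log"
at 10:00 /every:M,T,W,Th,F cmd /c "qwinsta >> c:\labstatus\usage.log"
at 12:00 /every:M,T,W,Th,F cmd /c "qwinsta >> c:\labstatus\usage.log"
rem ...and so on through the day; the web page can then render usage.log.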
Totally out of the box:
You can install the Software Testing Automation Framework (STAF) on your servers and desktops to manage your tests. It's written in Java, so you can use it on Windows and Unix/Linux desktops and servers.
Using STAF, you can create a resource pool of test servers on which you conduct tests, then write STAX jobs (STAX is a STAF execution framework) to conduct the tests. The job can grab the first available server from the resource pool, run the test, monitor the test status, log results, notify the submitter, then release the server back into the pool when done. If you have multiple people submitting jobs for tests, STAF will manage the queue of requests and satisfy them as they come in. Users can either monitor the job from their desktop, or you can set up email alerts to notify them when the test is complete.
I'm not sure if I understand you, but there is a set of command-line tools to deal with Terminal Server sessions, and there's also a Windows API to do the same if you need to do this from a program.
Since it sounds like you're a Microsoft shop, you can set up the machines as resources in Outlook/Exchange and reserve them that way.
