Recommended hardware configuration for Cypress CI test machines - continuous-integration

Is there any recommended/suggested hardware configuration for machines set up as CI test executors for Cypress UI scripts? (Information for all OS platforms would help - Windows, Linux & Mac.)
I tried to find this information in the official docs here, but no luck (yet)!
So, going by your past experience, what would be the best hardware configuration in the following cases:
when planning to run in parallel testing mode with 4 machines
when just using a single machine
Scenario: let's assume there are 50 scenarios (end-to-end web UI validation scripts), and each script takes roughly 3 minutes to finish.
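Serially that is about 50 × 3 = 150 minutes of test time; spread evenly over 4 machines it drops to roughly 38 minutes plus per-machine overhead, so the machine count usually matters more than per-machine horsepower. For the parallel case, Cypress does the load balancing itself when every machine runs the same command with a shared build id; a minimal sketch of what each of the 4 CI machines would run (the record key and build-id variables below are assumptions - any id that is unique per pipeline works):

```powershell
# Each of the 4 CI machines runs the same command; the Cypress Dashboard assigns
# the 50 specs to whichever machine is free, based on the shared --ci-build-id.
# CYPRESS_RECORD_KEY and CI_PIPELINE_ID are placeholder environment variables.
npx cypress run `
    --record --key $env:CYPRESS_RECORD_KEY `
    --parallel `
    --ci-build-id $env:CI_PIPELINE_ID `
    --group "e2e-suite"
```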

Related

Issues with GitLab Runner on 32-bit Windows

I have a problem with GitLab Runner on 32-bit Windows. The runners are at version 14.4.0 and our GitLab instance is at version 14.4.1-ee. The runners are tied to specific machines running 32-bit Windows 10 Pro (10.0.19043), use shell executors (PowerShell), and run with full administrative privileges (i.e., as the local system user). This is outside my control.
Sporadically, and for no discernible reason, the runners stop sending log traffic to our GitLab instance. They should be uploading several MB worth of logs. I don't see failed attempts to upload logs in debug mode. I don't see any of the network traffic I expect in Wireshark. This might correlate with issues loading a custom driver, but I can't say for sure.
The workaround is even more perplexing. The following protocol fixes the issue: remove all the runners using the GitLab CI interface; uninstall the malfunctioning runner; download a new runner binary, register and install it. If I repeat the same steps, except without downloading a new binary, the issue persists. The files are identical when I run a binary diff on them.
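For reference, the binary comparison can be reproduced with a simple hash check; a minimal sketch, with both file paths invented:

```powershell
# Compare the re-downloaded runner binary against the old one by hash
# instead of a byte-wise diff (both paths are hypothetical).
$old = Get-FileHash -Algorithm SHA256 -Path 'C:\GitLab-Runner\gitlab-runner.exe.bak'
$new = Get-FileHash -Algorithm SHA256 -Path 'C:\GitLab-Runner\gitlab-runner.exe'
if ($old.Hash -eq $new.Hash) { 'binaries are identical' } else { 'binaries differ' }
```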
I haven't been able to extract any relevant information from the system event logs or network traffic. The issue only affects our runners on 32-bit Windows. It doesn't affect 64-bit Windows or runners on Linux, regardless of architecture. It seems to happen sporadically, in the sense that I can't correlate it with anything interesting happening on the affected machines.
Clearly, something about our 32-bit Windows environments is different and causing the runners to malfunction. I just don't know what it is. I would appreciate any direction figuring out the source of this problem. The fact that downloading new binaries makes the difference has me worried, but I don't have any reason to suspect our machines have been compromised.
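One way to dig further, given the debug-mode observation above, is to run the runner in the foreground with debug logging and capture everything to a file covering a period when the log uploads stall; a rough sketch, assuming the default install path and service name:

```powershell
# Stop the installed service so the foreground process owns the runner config
# (the service name and install path below are the defaults; adjust if yours differ).
Stop-Service -Name gitlab-runner

# Run the runner interactively with debug logging and keep a copy of the output
# to compare against the periods when log uploads stop arriving in GitLab.
& 'C:\GitLab-Runner\gitlab-runner.exe' --debug run 2>&1 |
    Tee-Object -FilePath 'C:\GitLab-Runner\runner-debug.log'
```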
This problem was resolved by running tests remotely over SSH. It's almost certainly a bug with the 32-bit Windows distribution of gitlab-runner.

Automated integration testing of a client/server Windows desktop application

My team is developing a desktop application (mixed C++/Tcl) that is used in a client-server setup. Currently it is Windows-only, but soon we will need to port it to Linux. CruiseControl.NET builds it every night from the source code in SVN and packages it into NSIS installer, but we have no automated tests to run.
It is nearly impossible to add any unit tests, but integration testing of the application is easy, because it is already heavily script-based.
The main task is to install the app onto 3 PCs, configure it (that involves copying some files around), run it, monitor for a possible crash, wait till the integration testing is done, collect a summary and send emails. It could be done with a bunch of custom PowerShell scripts, but:
In the future we will want to add more features and more testing, and what used to be a simple script soon blows up (as usual), so I want to minimize custom scripting; if I do need to script something, I prefer bash/cygwin (I am not familiar with Python or Ruby).
I want a web dashboard that will report current progress and, if something failed, show the logs.
I need some supervisor that will monitor the app under test and report if it hangs or crashes.
We will need to test it on Linux as well.
Ideally I would like to orchestrate some test steps between the PCs (e.g. run test X on PC1 and test Y on PC2 in parallel, wait till they both finish, then run test Z on PC1 while monitoring that nothing crashes on PC2, etc.); a rough sketch of this follows below.
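Purely to illustrate that last point (not a tool recommendation), the distributed step could be prototyped with plain PowerShell remoting; the computer names and script paths below are made up:

```powershell
# Run test X on PC1 and test Y on PC2 in parallel (names and paths are hypothetical).
$jobX = Invoke-Command -ComputerName PC1 -ScriptBlock { & 'C:\tests\run-test-x.ps1' } -AsJob
$jobY = Invoke-Command -ComputerName PC2 -ScriptBlock { & 'C:\tests\run-test-y.ps1' } -AsJob

# Wait for both to finish, then collect their output.
Wait-Job -Job $jobX, $jobY | Out-Null
$resultX = Receive-Job -Job $jobX
$resultY = Receive-Job -Job $jobY

# Run test Z on PC1 while keeping an eye on PC2; a simple reachability check
# stands in here for real crash monitoring.
$jobZ = Invoke-Command -ComputerName PC1 -ScriptBlock { & 'C:\tests\run-test-z.ps1' } -AsJob
while ($jobZ.State -eq 'Running') {
    if (-not (Test-Connection -ComputerName PC2 -Count 1 -Quiet)) {
        Write-Warning 'PC2 stopped responding while test Z was running'
    }
    Start-Sleep -Seconds 10
}
```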
So, I am looking for a COTS tool (or set of tools) that will help me do this and that doesn't have a steep learning curve. Ideally for free, but if it is really good and has fair pricing, my company may purchase a license.
The process should be triggered from CruiseControl.NET when the NSIS installer is ready, and then perform everything described above. Basically, it should allow at least remote installation of software, running custom scripts and have a web dashboard.
Apparently, configuration management tools like Chef could be used, but so far they only support Windows nodes, not a Windows server. I would like to avoid setting up a Linux VM just for that, although I can do it if I have no other choice. Also, Chef seems to be a bit of an overkill - good for 10k machines, but I have only 3... maybe 5 in the future. And I am particularly curious about the chances of orchestrating a distributed test.
Most of the similar questions here on Stack Overflow and elsewhere on the internet are about web apps, Java containers, Maven, etc., and there are just so many tools and plugins for those tools to evaluate.
Thanks in advance.
Install ccnet on your test machines. Have those ccnet projects listen to a file that gets edited when a new installer is ready. Have the test machines install that new installer and run tests. There you go. ccnet sends emails so there's your basic reporting.
Have the test results reported into a database via web services using gSOAP (that's what we did). For Linux you can run the Java CruiseControl if you must. Write a gSOAP-enabled test controller program to report the test results from the test machines; a little C++ app will do. Then write a website (we use ASP.NET) to query the database (PostgreSQL) and show the results. Have the test machines auto-update themselves via SVN to get the latest changes to the configuration. Use NAnt; NAnt is far superior to just using ccnet to run tasks, and it works through ccnet. Use XML, XSL and CSS with ccnet so the test emails have the information you want (new passes, new failures, SVN differences to the code bases, etc.).
Our latest development is putting a big TV in the kitchen with a summary of test results so people can know more readily what they broke!
The first thing I'd get working is a test machine listening for the new installer, installing it, running some basic tests and emailing the results back. Put the ccnet and NAnt configuration in version control and have it auto-update on the test machines so you don't have to log into every test machine and do an update every time you make a change.
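If you want to prototype the "listen for a new installer" step before wiring it into ccnet, a small polling loop is enough; this is only a sketch, and the drop-share path, installer name pattern and test script are invented:

```powershell
# Poll a drop folder for a new NSIS installer, install it silently, then run the tests.
# The share path, file pattern and test script below are hypothetical placeholders.
$drop     = '\\buildserver\drop'
$lastSeen = $null

while ($true) {
    $installer = Get-ChildItem -Path $drop -Filter 'MyApp-*.exe' |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 1

    if ($installer -and $installer.FullName -ne $lastSeen) {
        $lastSeen = $installer.FullName
        Start-Process -FilePath $installer.FullName -ArgumentList '/S' -Wait   # NSIS silent install
        & 'C:\tests\run-integration-tests.ps1'
    }

    Start-Sleep -Seconds 60
}
```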
This is hugely broad and pretty close to opinion-based. Chef can handle steps like deploying the application to the test machines, but it isn't a GUI test framework, so you would need something else to handle that. Jenkins supports distributing tests to Windows hosts, so that seems like a good choice on that side of things, but it isn't that great at multi-node tests or orchestration between them. I suspect you'll need to write most of this yourself given the requirements.

Apache Cassandra and Windows

What is the fine-tuning configuration for Apache Cassandra on a Windows machine? I have seen "Unable to create new native thread" errors caused by a low "max user processes" limit on Linux, and one of the solutions is [1].
[1]http://vanjikumaran.blogspot.com/2014/01/unable-to-create-new-native-thread-and.html
Therefore, what are the best practices for Apache Cassandra configuration and OS settings on Windows?
The best practice for "Cassandra on Windows" right now is "don't". There are a bunch of edge case issues that crop up on Windows because things like file handles behave slightly differently and do not have the same guarantees they do on Linux.
It works well enough to run a dev/test instance on your Windows box for development purposes. But for anything other than that you should really use Linux, as that is what everyone else uses, and it has the most testing.
Here is a blog post with the current status of Cassandra on Windows:
http://www.datastax.com/dev/blog/cassandra-and-windows-past-present-and-future

Code, Build and Run on separate Machines. Possible?

I know that I can code on one machine and have it build on a different machine (i.e. a build server). Now I have also heard that you can have Visual Studio run a build on a virtual machine (I think it requires Virtual PC). My question is whether anyone has been able to code on machine A, have it compile on machine B and run a debugging session on machine C.
This is pretty common in enterprise development and just about the de facto standard way of doing things.
Typically, a dev works locally. Once they're happy with their changes, they check them into a source control system.
From that point there are a couple of options ranging from automated building to having someone push the button to cause the remote build.
Once the build is complete there are a host of options available for deploying the app to one or more other servers. And yet other options for kicking off automated test suites.
Concerning remote debugging, you can do that independently of whether you are using a build/deployment/automated testing. It's just a matter of getting the right stuff installed and configured (see ho1's answer for a link).
All of that said, I highly recommend you never enable remote debugging on a production server. Some people might disagree with me but I personally think it's dangerous for security reasons and can certainly lead to site outages.
Finally, the only reasons you would need a virtual machine are if the servers aren't available or if you just want to sandbox everything.
You can do remote debugging, so if you had an automated process to copy the compiled code from B to C, I suppose you could do what you're asking.
See this MSDN article for more details: How to: Set Up Remote Debugging
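A rough sketch of the copy-from-B-to-C step, assuming the build output lands on a known share and the Visual Studio remote debugging monitor (msvsmon.exe, part of the Remote Tools) is installed on machine C; all paths and share names below are invented:

```powershell
# Copy the freshly built binaries from the build machine (B) to the test machine (C).
# Share names and paths are hypothetical placeholders.
Copy-Item -Path '\\machineB\builds\latest\*' `
          -Destination '\\machineC\apps\MyApp' `
          -Recurse -Force

# On machine C, start the app and msvsmon.exe (the Visual Studio remote debugging
# monitor); then attach from Visual Studio on machine A via Debug > Attach to Process,
# pointing the qualifier at machineC.
```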

How can I integrate a virtual machine into my automated unit tests in Visual Studio?

I've got some legacy software that I'd like to involve in an automated unit test (for testing network protocol compatibility) and because this software is old and runs in an outdated environment I'd like to encapsulate it in a virtual machine. What is the best way to control a virtual machine from a Visual Studio unit test? Once I have the vm configured and have saved the state appropriately, I will need to be able to start and stop the vm and possibly launch some programs inside the vm on command.
One consideration I do have is that I'd like for developers not to have to download the vm image if they aren't planning to run this test. The unit test may therefore have to also handle downloading the latest vm image from some location. Our convention is to tag long running tests with a special description so developers will be able to exclude this test during active development.
The virtual machine platforms provide scripting APIs that let you control VMs from the command line. The VMware Server docs and a video on Hyper-V scripting are available.
You will need to include some logic in your build scripts to decide if you should execute the VM code, or just check for the presence of the VM on developers machines.
You may want to check out some of the NAnt and MSBuild task repositories for VM-related tasks to make this easier.
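For instance, if the legacy environment ends up on Hyper-V, the test fixture (or a build task) can drive the VM through the Hyper-V PowerShell cmdlets; a minimal sketch, with the VM and checkpoint names invented:

```powershell
# Names below are hypothetical; the "saved state" from the question maps to a
# Hyper-V checkpoint/snapshot that the test restores before each run.
$vmName   = 'LegacyProtocolHost'
$snapshot = 'clean-test-state'

# Roll the VM back to its known-good state and boot it before the test runs.
Restore-VMSnapshot -VMName $vmName -Name $snapshot -Confirm:$false
Start-VM -Name $vmName

# ... run the network protocol compatibility test against the VM here ...

# Shut the VM down again once the test has finished.
Stop-VM -Name $vmName -Force
```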
