In my day-to-day work I use MS Visual Studio, Docker for Windows and VS’s Docker tooling.
I’m working on a multi-container solution that depends on containers with long startup times (such as elasticsearch). I’ve experimented with the docker-compose 2.4 format and successfully used the depends_on + healthcheck combination to ensure the startup order of my projects. However, this negatively impacts my development experience, as it takes a long time to start my solution (using the VS / .dcproj / docker-compose integration) or to attach the debugger to it.
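Concretely, the working (but slow to start) setup looks roughly like this (the image tag and healthcheck command are illustrative, not my exact config):

```yaml
version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    healthcheck:
      # wait until the cluster actually answers, not just until the container runs
      test: ["CMD-SHELL", "curl -fs http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
  myapp:
    build: .
    depends_on:
      elasticsearch:
        condition: service_healthy   # long form of depends_on, 2.x format only
```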
I’m looking to optimize my development experience while retaining the guarantee that my dependencies are up and running before my under-development containers are launched.
I was wondering if it would be a good solution to start the dependencies such as elasticsearch, consul etc. as a separate docker-compose solution and have it run side by side with the solution under development.
Is it possible to bridge two docker-compose solutions using a Docker network, and preferably to retain the Docker network DNS?
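For illustration, here is roughly what I have in mind; the file layout, network name and image tag are made up:

```yaml
# deps/docker-compose.yml: long-lived dependencies, started once and left running
version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    networks: [devdeps]
networks:
  devdeps:
    external: true   # created once beforehand with: docker network create devdeps
```

```yaml
# app/docker-compose.yml: the projects under development, started from VS
version: "2.4"
services:
  myapp:
    build: .
    networks: [devdeps]   # same external network, so "elasticsearch" resolves via Docker's DNS
networks:
  devdeps:
    external: true
```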
Is there a better conceptual solution than this?
Best regards
Currently we run some of our small Ruby scripts through CI machines like TeamCity. The problem is that TeamCity is only free up to a certain point, and we are reaching that cap. What I like about TeamCity is that I can define how to run the scripts in it and then have the logs shown in each of the "build" processes, so if something goes wrong or I want to verify something I don't have to log onto the server and inspect individual files.
The problem is that I need to be able to run the same process at about 4x my current capacity, which means I need about four times the build agents, and that takes us out of the free licensing tier. Obviously I could just spin up more TeamCity servers, but that becomes a pain.
So my question is: what is another way I could basically set up cron jobs on Linux machines (I have a lot of those "for free"), but still get the visibility and easy access to logs that I get from TeamCity? Obviously setting up cron jobs isn't hard, but I really want to avoid having to log onto each machine to check whether my automated processes are running correctly or struggling.
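For context, each job today would just be a crontab entry along these lines (the script path and log location are placeholders), and it's exactly this part that leaves the logs stranded on each box:

```
# m  h  dom mon dow  run every 15 minutes, append stdout+stderr to a log file
*/15 *  *   *   *    /usr/bin/ruby /opt/scripts/sync_orders.rb >> /var/log/jobs/sync_orders.log 2>&1
```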
Thanks in advance!
p.s. I also have access to windows machines if there is an easier way to do it there.
Install an open-source CI server like Jenkins if you want to host it yourself. You can also run it on your own machine, though it's usually better to have it always on in the cloud.
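If you just want to try it quickly, the official Docker image is an easy way to stand up an instance (the ports and volume below are the image's documented defaults):

```
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```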
My team is developing a desktop application (mixed C++/Tcl) that is used in a client-server setup. Currently it is Windows-only, but soon we will need to port it to Linux. CruiseControl.NET builds it every night from the source code in SVN and packages it into an NSIS installer, but we have no automated tests to run.
It is nearly impossible to add any unit tests, but integration testing of the application is easy, because it is already heavily script-based.
The main task is to install the app onto 3 PCs, configure it (that involves copying some files around), run it, monitor for a possible crash, wait till integration testing is done, collect a summary and send emails. It could be done with a bunch of custom PowerShell scripts, but:
- In the future we will want to add more features and more testing, and what used to be a simple script soon blows up (as usual), so I want to minimize custom scripting; if I do need to script something, I prefer bash/cygwin (I am not familiar with Python or Ruby).
- I want a web dashboard that reports current progress and, if something failed, shows the logs.
- I need some supervisor that will monitor the app under test and report if it hangs or crashes.
- We will need to test it on Linux as well.
- Ideally I would like to orchestrate some test steps between the PCs (e.g. run test X on PC1 and test Y on PC2 in parallel, wait till they both finish, then run test Z on PC1, while monitoring that nothing crashes on PC2, etc.).
So, I am looking for a COTS tool or set of tools that will help me do this and doesn't have a steep learning curve. Ideally it would be free, but if it is really good and fairly priced, my company may purchase a license.
The process should be triggered from CruiseControl.NET when the NSIS installer is ready, and then perform everything described above. Basically, it should at least support remote installation of software, running custom scripts, and a web dashboard.
Apparently, configuration-management tools like Chef could be used, but so far they don't support a Windows server, only Windows nodes. I would like to avoid setting up a Linux VM just for that, although I can do it if I have no other choice. Also, Chef seems to be a bit of an overkill: good for 10k machines, but I have only 3... maybe 5 in the future. And I am particularly curious about the chances of orchestrating a distributed test.
Most of the similar questions here on StackOverflow and around the internet are about web apps, Java containers, Maven etc., and there are just so many tools and plugins for those tools to evaluate.
Thanks in advance.
Install ccnet on your test machines. Have those ccnet projects listen to a file that gets edited when a new installer is ready. Have the test machines install that new installer and run the tests. There you go: ccnet sends emails, so there's your basic reporting.
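A minimal sketch of such a ccnet project might look like this (element names written from memory, so double-check them against the ccnet docs; all paths and names are placeholders):

```xml
<cruisecontrol>
  <project name="InstallAndTest">
    <!-- poll every minute; a change in the drop folder triggers a run -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="filesystem">
      <repositoryRoot>\\buildserver\drops\installer</repositoryRoot>
    </sourcecontrol>
    <tasks>
      <!-- hand off to NAnt for the actual install/run/collect steps -->
      <exec>
        <executable>C:\nant\bin\nant.exe</executable>
        <buildArgs>-buildfile:C:\tests\install-and-test.build run-tests</buildArgs>
      </exec>
    </tasks>
    <publishers>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>
```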
Have the test results reported into a database via web services using gSOAP (that's what we did). For Linux you can run the Java CruiseControl if you must. Write a gSOAP-enabled test controller program to report the test results from the test machines; a little C++ app will do. Then write a website (we use ASP.NET) to query the database (PostgreSQL) and show the results. Have the test machines auto-update themselves via SVN to get the latest changes to the configuration. Use NAnt; NAnt is far superior to using ccnet alone to run tasks, and it works through ccnet. Use XML, XSL and CSS with ccnet to make the test emails contain the information you want (new passes, new failures, SVN differences to the code bases, etc.).
Our latest development is putting a big TV in the kitchen with a summary of test results so people can know more readily what they broke!
The first thing I'd get working is a test machine that listens for the new installer, installs it, runs some basic tests and emails the results back. Put the ccnet and NAnt configuration in version control and have it auto-update on the test machines, so you don't have to log into every test machine and update it by hand every time you make a change.
This is hugely broad and pretty close to opinion-based. Chef can handle steps like deploying the application to the test machines, but it isn't a GUI test framework, so you would need something else for that. Jenkins supports distributing tests to Windows hosts, so it seems like a good choice on that side of things, but it isn't great at multi-node tests or orchestration between them. I suspect you'll need to write most of this yourself, given the requirements.
We just started using Jenkins for continuous integration. The code is pulled from Perforce. We have one Jenkins master (a Windows VM) and 3 slaves (Windows VMs). I am more of a VMware admin than a programmer.
I have been tweaking the Jenkins slave setup more and more. The slaves are now configured with 16 vCPUs + 48 GB of RAM each. During every build, the CPU spikes to 100%. We currently finish the build in 2h20m, but the goal is to get it down to 1 hour.
What is the best way to do that? What kind of tweaks can be made in VMware? How can we push the build through faster?
Thanks!
Try to make the build parallel if possible. Have a look at the Build Pipeline Plugin. The beauty of it is that you can trigger downstream jobs either parameterized or regular, so your master can start the process and the three slaves can work on isolated steps. The pipeline view also gives you a much better overview of the whole build.
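On a recent Jenkins you can express the same idea with the newer Pipeline plugin instead of chained freestyle jobs. A rough sketch (agent labels and build commands are placeholders):

```groovy
// Run independent pieces on two slaves at once, then join for the final step.
pipeline {
    agent none
    stages {
        stage('Build in parallel') {
            parallel {
                stage('Module A') {
                    agent { label 'slave1' }
                    steps { bat 'build_module_a.cmd' }   // placeholder build command
                }
                stage('Module B') {
                    agent { label 'slave2' }
                    steps { bat 'build_module_b.cmd' }
                }
            }
        }
        stage('Package') {
            agent { label 'slave3' }
            steps { bat 'package.cmd' }
        }
    }
}
```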
I would like to run TeamCity (with a build agent) in a Linux VM to handle our non-.NET projects, and at the same time have a build agent set up on a Windows server to handle all of the .NET projects.
I can't think of any reason why this wouldn't work, but does anyone have experience with this, or ideas about the problems I might encounter before I spend too much real time on it?
Ta
It's fully supported. TeamCity also knows which agents to route builds to.
This is a very normal scenario, and many projects I know of do this without any problems. Just make sure that in each build's Agent Requirements you direct the job to the appropriate agent. One criterion can be that agent.os.name should contain Windows or Linux, etc.
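For example, a Windows-only build configuration can carry an explicit agent requirement like this (shown the way the TeamCity UI presents it; verify the exact property name under the agent's Agent Parameters):

```
Parameter Name:  teamcity.agent.jvm.os.name
Condition:       contains
Value:           Windows
```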