One used to be able to download Vagrant boxes to debug Travis builds (for GitHub projects for instance). Apparently, this is no longer possible, so how do people currently debug complex Travis build chains locally?
One way to inspect the build (not to debug, sorry) is to send the build logs to another server on failure.
Here is an example:
after_failure:
  - sudo tar -czf /tmp/build-${TRAVIS_BUILD_NUMBER}-logs.tgz your-application-logs/
  - scp /tmp/build-${TRAVIS_BUILD_NUMBER}-logs.tgz travis@your-server.com:~/logs
You could send them via email, store them on a storage server or whatever.
These logs would be most useful if you run your tests in a debug mode and include your own logs in the tarball as well.
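As an alternative sketch, you could push the tarball to an HTTP storage endpoint with curl instead of scp (the URL below is a placeholder for whatever server you actually use):
after_failure:
  - tar -czf /tmp/build-${TRAVIS_BUILD_NUMBER}-logs.tgz your-application-logs/
  - curl --upload-file /tmp/build-${TRAVIS_BUILD_NUMBER}-logs.tgz https://storage.example.com/build-${TRAVIS_BUILD_NUMBER}-logs.tgz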
Unfortunately, there is no good solution for testing Travis CI builds locally at the moment. The closest thing I can recommend is an Ubuntu 12.04 Vagrant VM provisioned with the Travis chef cookbooks from here. This covers most use cases, since test failures usually come not from the Travis software itself (though when they do, you're out of luck, as most Travis components depend on the other pieces of Travis, making it fairly difficult to set up) but from the underlying OS (Ubuntu) and software such as Ruby and Ruby gems.
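If you go that route, a minimal Vagrantfile sketch could look like the following (the box and recipe names are assumptions; use whatever the cookbooks you checked out actually provide):
# Vagrantfile -- sketch only; assumes the Travis cookbooks are checked out into ./cookbooks
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"          # an Ubuntu 12.04 base box
  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "travis_build_environment"   # hypothetical recipe name
  end
end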
I had a crack at making a Dockerfile for JVM builds here, which works well for me. It is based on the Travis published containers and utilises the Travis CLI and Travis Build projects to run your .travis.yml file within a Docker container.
A built JVM image is on Docker Hub.
I am looking to play with apache-superset on a cloud-based IDE. I have it running on my local machine. I tried unsuccessfully to set it up on Gitpod. I wanted suggestions on where I can set it up (open-source preferably, but not necessarily). I believe Cloud9 is one such place, but I am looking for other options before I settle. If you've ever set it up on any such platform, even if it is on Gitpod, and can help me, kindly do so.
[Disclaimer: Gitpod staff]
You can indeed use Gitpod to work on apache-superset, and for that you'll just need a working configuration.
From what I can see in apache-superset's requirements, you'll need to get:
PostgreSQL (e.g. by using Gitpod's official gitpod/workspace-full-postgres Docker base image)
Redis (e.g. by installing it in a Dockerfile via sudo apt-get install)
Various Python dependencies (e.g. by running pip install . after cloning)
Various Node.js dependencies for the front-end (e.g. by running npm install)
Here is a basic configuration I wrote to achieve this:
https://github.com/jankeromnes/incubator-superset/commit/0d345a76ec8126fd1f8b9bc7b6ce4961bf3b593d
What it does is:
Create a Docker image with PostgreSQL and Redis
Once the repository is cloned, open 4 separate Terminals ("tasks"):
Redis server
Superset backend
Superset worker
Superset front-end
All dependencies will be installed automatically, and once the front-end is ready, it will automatically open in a web preview IDE side panel.
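For orientation, here is a rough sketch of the shape of that .gitpod.yml; the exact commands live in the linked commit, and the ones below are only illustrative:
image:
  file: .gitpod.Dockerfile   # based on gitpod/workspace-full-postgres, plus Redis
tasks:
  - name: Redis server
    command: redis-server
  - name: Superset backend
    init: pip install -e .
    command: superset run
  - name: Superset worker
    command: celery worker --app=superset.tasks.celery_app:app
  - name: Superset front-end
    init: cd superset-frontend && npm install
    command: cd superset-frontend && npm run dev-server
ports:
  - port: 8088
    onOpen: open-preview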
You can try it out by opening my personal fork of the apache-superset repository in Gitpod, e.g. by following this link:
https://gitpod.io/#https://github.com/jankeromnes/incubator-superset
So I am having some issues with Vagrant. I had initially tried to report this as an issue on the Vagrant GitHub issue board, but they kept closing the issues without responding to them. I guess they decided I wasn't worth their time, or they were just behaving unprofessionally. Anyway, here is the problem: I use Vagrant with VirtualBox, and a new version of VirtualBox was recently released that is, unfortunately, not compatible with the latest Vagrant installation.
However, the people at HashiCorp have already updated the source code so that it is compatible with the new version of VirtualBox, but you have to build the Vagrant executable from the source repo (instructions here). So I followed the instructions, and Vagrant works just like it used to... when the only command I need to run is vagrant up. I should also mention ahead of time that, in order to run the Vagrant dev build, the current working directory needs to be the root of the source code repo, and the dev build can only be run through Ruby with the following command:
bundle exec vagrant
With that being said, I needed to update one of my custom boxes, so I built a VM in the updated version of VirtualBox and ran the command below:
bundle exec vagrant package --base go --vagrantfile ../../vagrant/vagrantfile
After an extended period of time, Vagrant spat back the following error:
The executable 'bsdtar' Vagrant is trying to run was not found in the %PATH% variable. This is an error. Please verify this software is installed and on the path.
I should also note that I use a Windows machine and that this error never occurred when using the installed version of Vagrant. At this point, I posted the issue on GitHub to get some input from the devs, but they (very unprofessionally) decided to ignore my requests for help and close the issues without providing any response. I used the GnuWin32 project to make numerous Unix commands available in my Windows environment and added the folder to my PATH environment variable. I then ran the same command again to create my new box, and it worked!! So then I uploaded it to the Vagrant Cloud and attempted to update the Vagrant box stored on my system by running the following command:
bundle exec vagrant box update
Then, after waiting for a while, Vagrant spat this error out at me:
The box failed to unpackage properly. Please verify that the box
file you're trying to add is not corrupted and that enough disk space
is available and then try again.
The output from attempting to unpackage (if any):
C:\gnuwin32\bin/bsdtar.EXE: invalid option -- s
Usage:
List: bsdtar.EXE -tf <archive-filename>
Extract: bsdtar.EXE -xf <archive-filename>
Create: bsdtar.EXE -cf <archive-filename> [filenames...]
Help: bsdtar.EXE --help
Another error, still involving this bsdtar tool. It does not appear that anyone else is reporting the issue I am running into; I think they are just waiting for HashiCorp to release the new official installation. But, just to give you a look into their priorities: the version of VirtualBox that broke compatibility with Vagrant was released back on December 10. It has been over a month since then, and there is still no updated release.
So, I am hoping that someone out there might be able to figure out why I keep running into these errors when trying to use Vagrant's dev build and provide a solution. If not, then perhaps someone else can reproduce the issue and report it to HashiCorp; maybe they will listen to someone else.
If you are on Ubuntu 20.04, bsdtar was removed as a standalone package. Try installing the libarchive-tools package instead:
$ sudo apt-get install libarchive-tools
I figured it out. My original hypothesis was correct: since Vagrant is a tool built primarily to run on Linux machines, when Vagrant runs on Windows the installation includes a MinGW environment with all of the dependencies Vagrant needs to function, which the installed Vagrant executable imports into the console session when run. This is why the dev build kept failing: it was not importing this MinGW environment. So, in order to fix the issue, I first cloned the Vagrant source code repo from GitHub and followed the instructions I linked to above to build the executable from the source repo. I then copied all of the files in the source repo into the following folder:
<hashicorp install folder root>\Vagrant\embedded\gems\2.2.6\gems\vagrant-<version num>
So, for me, the destination directory is C:\HashiCorp\Vagrant\embedded\gems\2.2.6\gems\vagrant-2.2.6
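For the copy itself, a sketch from a Windows command prompt (run from the root of the cloned source repo; the destination assumes a default install path like mine):
xcopy /E /Y /I . C:\HashiCorp\Vagrant\embedded\gems\2.2.6\gems\vagrant-2.2.6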
This directory is identical to the source code repo, and copying the source code repo into the above folder replaces the installed version of Vagrant with the dev build. After I did this, the Vagrant commands which had previously failed worked when run normally (as in, without using Ruby or bundle). I hope this helps someone else out there whom HashiCorp has decided is not worth their time.
I'm trying to set up CI on my local machine, which runs on a Mac. To do so I use a Xubuntu virtual machine, Jenkins, and some simple Selenium tests (tests on github).
I did a fresh install of Xubuntu, where I installed Jenkins using the official manual.
In Jenkins I installed some plugins (git, ruby, rake, rbenv).
In the job config I use the rbenv wrapper (2.1.0), set to ignore the OS Ruby versions, with this gem list:
bundler,rake,rspec,selenium-webdriver,capybara
and I run the job with
rspec spec
And when I run this job I receive something like this for every test:
Selenium::WebDriver::Error::WebDriverError:
unable to obtain stable firefox connection in 60 seconds (127.0.0.1:7055)
full output is here
It looks like the jenkins user has no access to the display to run/see Firefox.
Does anyone know how to make it work?
We ran into this at work recently and actually opted for Capybara with the driver set to Poltergeist. This seemed better than trying to figure out how to run Firefox on our VMs.
That said, we were able to get a small test suite running by following the instructions here
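For reference, the driver switch itself is only a couple of lines; this sketch assumes the poltergeist gem and PhantomJS are installed and that your suite loads a spec_helper.rb:
# spec_helper.rb -- use the headless Poltergeist/PhantomJS driver instead of Firefox
require 'capybara/poltergeist'
Capybara.default_driver    = :poltergeist
Capybara.javascript_driver = :poltergeist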
The answer was not as simple as I thought.
The problem is that the jenkins service has no access to displays (when it is installed via the native package). That's why Firefox throws an error when I try to start it. Try:
$ sudo su - jenkins
$ firefox
So it needs access to a display to start the browser successfully.
This is how I did it:
First of all, I used the answer from here, where I changed the service to run as my local user.
Then I installed the xvfb plugin for Jenkins and, in my build job, preset the display to '0', which is my actual user's display. With that option all my tests run 'headless', but on an actual display.
This may not be the best way to solve my problem, but it definitely works for me.
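If you would rather keep it fully headless, here is a rough shell equivalent of what the xvfb plugin does (the display number :99 is arbitrary; it requires the xvfb package to be installed):
# start a virtual framebuffer and point the tests at it
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
rspec spec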
I want to use Jenkins to create RPM packages that deploy code and scripts onto Linux Red Hat machine(s).
The applications are a mix of technologies (no compiling needed); I just need to package up the applications, deploy them to the correct location, and restart Apache.
Would anybody have some instructions on how to do these steps for a total newbie?
Some questions:
Do I need to install Jenkins on a local Linux machine if I'm going to be creating RPMs that will be deployed onto Linux Red Hat machines? (I was hoping to install Jenkins on Windows.)
Does anybody have an example of creating a package out of a local folder (no source control for the moment)?
I want to just specify the directory to take the code from, and where to deploy the code on the machine the RPM is installed on.
On the destination machine I want to run something like
yum install mypackage-version12.rpm
and have it install the code/scripts to the specified directory and restart Apache.
I need an example of this also.
Thanks
You can install Jenkins on a different machine, but you generally must have a Jenkins node ("slave"/"agent") installed on a machine that can generate RPM packages.
Scripting each step of the RPM package setup inside Jenkins puts the whole build process into Jenkins. It works much better if you extend your build system to build the RPM and have Jenkins do what it does best: manage the build (scheduling, etc.), not micro-manage it (performing the individual steps).
Depending on what you currently have as your build system, this might include Ant directives to set up the rpmbuild tree, copy in the .spec file, and an executable call to rpmbuild.
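As a plain-shell sketch of those steps (the package name and paths are placeholders; in Ant these would be <mkdir>, <copy>, and <exec> tasks):
# set up a private rpmbuild tree, drop in the spec, and build a binary RPM
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
cp mypackage.spec ~/rpmbuild/SPECS/
tar -czf ~/rpmbuild/SOURCES/mypackage-1.0.tar.gz myapp/
rpmbuild --define "_topdir $HOME/rpmbuild" -bb ~/rpmbuild/SPECS/mypackage.spec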
Jenkins can easily call a post-build task to do this, or you might want to configure a mini "fake" project that does the update, depending on tastes.
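And since the question asks for an example of a package that just drops files in place and restarts Apache, here is a minimal hypothetical .spec sketch (every name and path below is a placeholder, not taken from your actual project):
# mypackage.spec -- illustrative only
Name:      mypackage
Version:   1.0
Release:   1
Summary:   Pre-built application scripts for myapp
License:   Proprietary
BuildArch: noarch

%description
Copies pre-built application files into place; nothing is compiled.

%install
mkdir -p %{buildroot}/var/www/myapp
cp -r %{_sourcedir}/myapp/* %{buildroot}/var/www/myapp/

%files
/var/www/myapp

%post
# restart apache once the files are on disk
service httpd restart || :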
As an aside, for a plain yum install command to work without resorting to yum localinstall, you will need a web server set up, the new RPM copied to the right folder on that server, and the repository index files rebuilt (createrepo is the tool that does this).
On the client machine (where the package will be installed), you will need to have a yum configuration that directs the client machine to include the web server as one of the known yum repositories.
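On that client, the repository definition is a small file under /etc/yum.repos.d/ (the name and URL here are hypothetical):
# /etc/yum.repos.d/internal.repo
[internal]
name=Internal RPM repository
baseurl=http://yumserver.example.com/repo/
enabled=1
gpgcheck=0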
Why not use a Docker image to build the RPM inside it, through a dedicated stage?
Your code needs to provide the rpm/SPECS files, and inside the Docker container (driven by Jenkins) you can have a Jenkinsfile step like:
mkdir -p ./rpm/BUILD && cd ./rpm/ && for f in ./SPECS/*; do rpmbuild --define "_topdir $(pwd)/" --define "_builddir $(pwd)/BUILD" -bb "$f"; done
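Expanded into a (hypothetical) declarative pipeline stage, that could look like this; the image name is a placeholder for whatever has rpmbuild installed:
// Jenkinsfile -- sketch only
pipeline {
  agent { docker { image 'centos:7' } }
  stages {
    stage('Build RPM') {
      steps {
        sh '''
          mkdir -p ./rpm/BUILD
          cd ./rpm
          for f in ./SPECS/*; do
            rpmbuild --define "_topdir $(pwd)/" --define "_builddir $(pwd)/BUILD" -bb "$f"
          done
        '''
      }
    }
  }
}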
And you are done.
We have several build machines, each running a single TeamCity build agent. Each machine is very strong, and we'd like to run several build agents on the same machine.
Is this possible, without using virtualization? Are there quality alternatives to TeamCity that support this?
Yes, it's possible:
Several agents can be installed on a single machine. They function as separate agents, and TeamCity treats them as different agents, not utilizing the fact that they share the same machine.
After installing one agent you can install additional ones, provided the following conditions are met:
the agents are installed in separate directories
they have distinct work and temp directories
buildAgent.properties is configured to have different values for the name and ownPort properties
Make sure there are no build configurations that have an absolute checkout directory specified (alternatively, make sure such build configurations have the "clean checkout" option enabled and cannot be run in parallel).
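As a concrete illustration, a second agent's conf/buildAgent.properties might contain the following (the name, port, and paths are examples, not required values):
name=build-agent-2
ownPort=9091
workDir=../work2
tempDir=../temp2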
Under Windows, to install additional agents as services, modify [agent dir]\launcher\conf\wrapper.conf to change the following properties so they have distinct names within the computer:
wrapper.console.title
wrapper.ntservice.name
wrapper.ntservice.displayname
wrapper.ntservice.description
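For example (the values are arbitrary, as long as they are unique per agent on the machine):
wrapper.console.title=TeamCity Build Agent 2
wrapper.ntservice.name=TCBuildAgent2
wrapper.ntservice.displayname=TeamCity Build Agent 2
wrapper.ntservice.description=TeamCity Build Agent 2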
You could also take a look at this blog post for a step-by-step guide:
http://handcraftsman.wordpress.com/2010/07/20/multiple-teamcity-build-agents-on-one-server/
The top answer is the correct method, but if you want to complete this more easily you can use the TeamCityAgent Chocolatey package and supply the agent name, the agent folder and the port as --params and it will handle setting up the config files as well as pulling in the required version of Java via the server-jre package.
The one caveat to this is you need to use --force on any installs after the first agent as Chocolatey doesn't currently understand installing the same application with a different configuration as a "new" installation.
You will also need to use --version 2.0.1-beta-05 since this is still in a testing phase, but should make it out of beta soon.
Full install example for a second agent:
choco install teamcityagent --force -y --params 'serverUrl=http://teamcity.local:8111 agentName=AgentUno agentDir=C:\buildAgentUno ownPort=9091' --version 2.0.1-beta-05