I don't know how to collect the data from each build machine on CI. (I use TeamCity for CI, and this is the first time I'm using CI myself.)
After building the code and running the .exe file, an output file is generated. It is a .csv file, less than 1 KB in size and very simple. I want to collect the data in one place and do some statistics.
The build and the .exe run work fine, but I don't know what the next step should be. I have two ideas.
(Idea 1) Set up a log database server (e.g. Kibana/Elasticsearch) and send the output to it. However, that seems like overkill.
(Idea 2) Create a batch file and just copy the log somewhere.
However, I don't know what the usual way is to collect data with CI, and I suspect there is a better solution. Is there a way to collect the data using CI?
I can suggest using build artifacts: you can configure your builds to produce files and make them available to users of TeamCity. You can then download and analyze them as you need. Given that the files are pretty small, I think this is an ideal fit.
If you need to collect the artifacts from every build, you can configure another build that runs a small Python script, which in turn uses the TeamCity REST API to collect the artifacts from the relevant builds and zip them into a complete set of your files.
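For illustration, here is a minimal Python sketch of that second approach. The server URL, build configuration id (MyProject_Build), artifact name (results.csv) and token are all placeholders you would replace; check the REST API documentation for your TeamCity version for the exact endpoints and authentication options.

    # Sketch: pull a small CSV artifact from recent builds of one configuration
    # and merge the rows into a single file. Server URL, build configuration id,
    # artifact path and token are placeholders.
    import csv
    import io
    import requests

    SERVER = "https://teamcity.example.com"        # your TeamCity server
    BUILD_TYPE = "MyProject_Build"                 # build configuration id
    ARTIFACT = "results.csv"                       # artifact path inside each build
    HEADERS = {"Authorization": "Bearer <token>",  # or use basic auth instead
               "Accept": "application/json"}

    # List recent successful builds of the configuration.
    resp = requests.get(f"{SERVER}/app/rest/builds",
                        params={"locator": f"buildType:(id:{BUILD_TYPE}),status:SUCCESS,count:50"},
                        headers=HEADERS)
    resp.raise_for_status()
    builds = resp.json().get("build", [])

    rows = []
    for build in builds:
        # Download the artifact content for each build; skip builds that lack it.
        art = requests.get(f"{SERVER}/app/rest/builds/id:{build['id']}/artifacts/content/{ARTIFACT}",
                           headers=HEADERS)
        if art.status_code != 200:
            continue
        for row in csv.reader(io.StringIO(art.text)):
            rows.append([build["id"]] + row)  # tag each row with its build id

    with open("all_results.csv", "w", newline="") as out:
        csv.writer(out).writerows(rows)
    print(f"Collected {len(rows)} rows from {len(builds)} builds")

From there you can run whatever statistics you like over the merged all_results.csv.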
As an example, you can check a build on the JetBrains test server: just select a finished build and navigate to the Artifacts tab.
Please ask more questions if my answer is not clear enough.
We have a VSTS build setup. Currently we have a single repository hosting multiple services, with one build definition per service; each definition is triggered only when its service is touched, via a trigger path pattern.
The issue is that, because every build definition points at that single repository, the Get Sources step downloads the whole repo, and we also do a clean each time.
I have been looking for something like the trigger pattern that would let us download only part of the repository, to reduce build/download time.
A workaround might be to skip the clean each time, or to split into multiple repositories. At the moment we would like to avoid the latter.
Let me hear if anyone knows of a good solution.
There is no way to specify which files are downloaded in the Get Sources step; VSTS always downloads the whole repo.
The workaround is to set the Clean option to false, so that only modified and newly added files are updated during the Get Sources step.
When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on the steps to skip the ones that aren't needed. This lets you keep the standard steps, though it may require a PowerShell (or other) script to figure out which steps to run and which to skip, setting variables via the special logging commands (see the sketch after these options).
Disable the Get Sources step and add a step that fetches the sources manually. You'll need to clean the working directory, check out the right commit, and basically replicate the actions of the Get Sources step yourself. It may take some fiddling to get the behavior right for normal builds, pull request builds, etc., but that way you take full control over where the sources are checked out.
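As a sketch of the first option: an early script step inspects which parts of the monorepo changed and emits the logging commands, and later steps are gated with custom conditions such as eq(variables['BuildServiceA'], 'true'). The folder and variable names below are made up for illustration, and the change detection is deliberately simplified to the last commit. The logging command works from any script that writes to stdout, PowerShell or otherwise; here it is shown from Python.

    # Sketch: decide which services' steps should run and publish the decision
    # as build variables via VSTS logging commands. Folder and variable names
    # are placeholders; real change detection would be more robust than HEAD~1.
    import subprocess

    # Files touched by the last commit (assumes a git repo in the build workspace).
    changed = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True).stdout.splitlines()

    # Map sub-folders of the monorepo to the variables that gate later steps.
    service_dirs = {"ServiceA/": "BuildServiceA", "ServiceB/": "BuildServiceB"}

    for prefix, variable in service_dirs.items():
        needed = any(path.startswith(prefix) for path in changed)
        # Special logging command: sets a variable for the rest of the build.
        print(f"##vso[task.setvariable variable={variable}]{str(needed).lower()}")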
I'd also recommend investigating the 2017 project format, which uses the new <PackageReference> element in the project files to fetch packages. The new system supports version ranges, so it can always fetch the latest available version of a package. It's a better long-term solution.
No, it isn’t available in VSTS build system.
You can change the agent's working directory (C:\vstsagent_work) by re-configuring the agent and specifying another work folder, but it won't use the same source folder for different build definitions; the subfolders will still be 1, 2, 3, and so on.
I want to write a bash script that grabs only the output jars for the modules in my project that have changed (after a build), so that I can copy just those up to a server; I don't want to copy every single module jar every time, as you would after a full clean build. It's a Gradle project using Git. I know Gradle can do an incremental build based only on the modules whose code has changed, but is there a way this mechanism (assuming it's a plugin) can be called? I have done some searching online but can't find any info.
Gradle has the notion of inputs and outputs associated with a task. Gradle takes snapshots of a task's inputs and outputs the first time the task runs and on each subsequent execution; these snapshots contain hashes of the contents of each file. This lets Gradle check, on subsequent executions, whether the inputs and/or outputs have changed and decide whether the task needs to run again.
This feature is also available to custom Gradle tasks (those you write yourself) and is one way you could implement the behaviour you are looking for. You could invoke the corresponding task from a bash script if needed. More details can be found here:
Gradle User Guide, Chapter 14.
Otherwise, your bash script would need to compare the modified timestamps of the files in question, or compute and compare hashes itself.
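If you end up doing the comparison yourself, a small script can snapshot the hashes of the module jars after each build and report only the ones that changed. Here is a minimal sketch in Python rather than bash (the logic translates directly); the */build/libs/*.jar glob and the state-file name are assumptions to adjust for your layout.

    # Sketch: compare SHA-256 hashes of the module jars against the previous
    # build's snapshot and print only the jars that changed. The glob pattern
    # and state-file name are assumptions; adjust them to your project layout.
    import hashlib
    import json
    import pathlib

    PROJECT_ROOT = pathlib.Path(".")               # root of the multi-module build
    STATE_FILE = pathlib.Path(".jar-hashes.json")  # snapshot from the previous run

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {str(p): sha256(p) for p in PROJECT_ROOT.glob("*/build/libs/*.jar")}

    changed = [p for p, digest in current.items() if previous.get(p) != digest]
    STATE_FILE.write_text(json.dumps(current, indent=2))

    for jar in changed:
        print(jar)  # pipe this list into scp/rsync to upload only what changed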
The venerable rsync exists to do exactly this kind of thing: find differences between an origin and a (possibly remote) destination, and synchronize them, with lots of options to choose how to detect the differences and how to transfer them.
Or you could use find to search for .jar files modified in the last N minutes ...
Or you could use inotifywait to detect filesystem changes as they happen...
I understand that having Gradle tell you directly what was built would be the most logical approach, but for that I'd say you have to think more Java/Groovy than Bash, and fight your way through the manual.
I'm dealing with an enormous amount of data (say, video) and most integration tests require at least a decent subset of this data.
These test files (subsets) can range from 200MB to 2GB.
Where would be a good place to put these files? Ideally they would not go directly into our version control system, because people shouldn't have to download 5 GB+ of test data every time they want to check out the project.
The test data needs to be updated by Jenkins whenever a schema change occurs (we already have this part figured out), so either Maven or SVN would need to download the latest version whenever anybody wants to run the integration tests.
It would be great if it could be on-demand since we never run all the tests at once locally (e.g., if we are running TestX, then download the files required for this test before running).
Does anybody have any suggestion(s) on how to approach this?
Edit: for the sake of simplicity, let's say the test files are incompressible.
In this case I would set up a file server share that contains all the test data, nicely organized, and let each test download the test data it needs itself. The advantage is that you can update the test data in the central place without touching the tests themselves; the next time the tests run, the new test data is downloaded.
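As a sketch of that idea, assuming the share is reachable over HTTP and each test asks for exactly the files it needs (the server URL and cache directory are placeholders):

    # Sketch: a test helper that downloads a required test file on demand and
    # caches it locally, so only the data a given test needs is transferred.
    # Server URL and cache location are placeholders.
    import pathlib
    import urllib.request

    TESTDATA_SERVER = "http://testdata.example.com/current"  # central share
    CACHE_DIR = pathlib.Path.home() / ".testdata-cache"

    def require_test_file(name: str) -> pathlib.Path:
        """Return a local path to the named test file, downloading it if missing."""
        local = CACHE_DIR / name
        if not local.exists():
            local.parent.mkdir(parents=True, exist_ok=True)
            urllib.request.urlretrieve(f"{TESTDATA_SERVER}/{name}", str(local))
        return local

    # Usage inside an integration test:
    # video = require_test_file("subset_x/video_200mb.mp4")

A staleness check (for example, comparing against a small checksum file published next to the data) can be layered on top if schema updates need to invalidate the cache.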
If you need versioning, you could use a repository manager like Nexus instead of a plain file share. If you need auditability, I would suggest a version control system like Subversion; however, make sure you use a separate repository just for your test data, so you can easily clean it out by replacing it with an empty repository loaded with only the newest test data.
I am a Jenkins newbie and need a little hand-holding, because we only maintain parts of our app in SVN. I have a basic Jenkins install set up.
This is what I do to set up a local DEV environment, and I need it translated into Jenkins in order to make a build:
1. Do an SVN checkout (and get the 2 folders that are under SVN)
2. Delete the folders
3. Copy over the full app from the FTP location
4. Do an SVN restore
5. Download the SQL file
6. Import it into MySQL
How would I do the above steps in Jenkins? I know there are some post-build steps I can use; I'm just not sure how to put it all together. Any help will be much appreciated.
Tell Jenkins about the SVN repository and it will check it out automatically when a new build starts; that takes care of step 1. Steps 2-6 would be build steps (i.e. execute shell commands). Basically, you can set up Jenkins to do exactly what you do on the command line, except that the first step is taken care of automatically once Jenkins knows about the repository.
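For example, steps 2-6 could live in a single script run by an "Execute shell" build step. A rough Python sketch, where every host, path and credential is a placeholder and "SVN restore" is assumed to mean svn revert:

    # Rough sketch of steps 2-6 as one Jenkins build step. Hosts, paths and
    # credentials are placeholders (in practice, use Jenkins credentials rather
    # than hard-coding them).
    import subprocess
    import urllib.request

    WORKSPACE = "."                                       # Jenkins checked out the repo here (step 1)
    FTP_APP = "ftp://user:password@ftp.example.com/app/"  # full app location
    SQL_URL = "http://files.example.com/latest.sql"       # SQL dump to import

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Steps 2-3: replace the two SVN-managed folders with the full app from FTP.
    run("rm", "-rf", "folder_a", "folder_b")              # the 2 folders under SVN
    run("wget", "--mirror", "--no-host-directories", FTP_APP)

    # Step 4: restore the SVN-managed files (revert local modifications/deletions).
    run("svn", "revert", "--recursive", WORKSPACE)

    # Step 5: download the SQL file.
    urllib.request.urlretrieve(SQL_URL, "latest.sql")

    # Step 6: import it into MySQL.
    with open("latest.sql") as dump:
        subprocess.run(["mysql", "-u", "jenkins", "-psecret", "mydb"],
                       stdin=dump, check=True)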
Rather than trying to do these sorts of things in Jenkins itself, you'll likely save yourself some trouble by using something like Ant or NAnt to handle the complexities of your build.
I've found that doing my builds this way gives me added flexibility (if it can be done via the command line, I can use it in my build rather than needing a Jenkins plugin to support it) and makes maintenance easier as well (since my NAnt scripts become part of the project and are checked into the VCS, I can go back if I make a change that doesn't work out).
Jenkins has some build-history plugins, but over time I've found it easier to keep the majority of my 'build' logic and complexity outside of the CI environment and just call into it instead.