I am currently using a binary from a 3rd-party source and making changes to its configs in order to integrate it with our master branch. Every time I do this, we increment the version number of our build. As of now, this is done manually in the C file that comes with the binary. I am planning to automate it so that every time I merge the binary with master, the build number increments. We also use a makefile to build master. Is there a way I could do that via the makefile, or in the C file itself? Is this possible? Are there any posts or links someone can refer me to?
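One way this is commonly done is to keep the build number as a `#define` in a header and have the makefile bump it before compiling. A minimal sketch, assuming the number lives in a macro called `BUILD_NUMBER` in `version.h` (both names are assumptions; point the script at the C file that ships with the binary instead):

```shell
#!/bin/sh
# Bump a build number stored as a #define in a C header. The file
# name version.h and the macro name BUILD_NUMBER are assumptions,
# adapt them to the C file that comes with the binary.
VERSION_HEADER="version.h"

# Create the header on first run so the script is self-contained.
[ -f "$VERSION_HEADER" ] || echo "#define BUILD_NUMBER 0" > "$VERSION_HEADER"

# Read the current number, add one, and rewrite the line in place.
current=$(sed -n 's/.*#define BUILD_NUMBER \([0-9]*\).*/\1/p' "$VERSION_HEADER")
next=$((current + 1))
sed -i.bak "s/#define BUILD_NUMBER $current/#define BUILD_NUMBER $next/" "$VERSION_HEADER"
echo "Build number is now $next"
```

You could then have the makefile's main build (or merge) target run this script first, so the number increments automatically on every build of master.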
I don't know how to collect the data from each build machine on CI. (I use TeamCity for CI, and this is my first time using CI myself.)
After building the code and running the .exe, an output file is generated. It is a .csv file, less than 1 KB in size and very simple. I want to collect the data in one place and compute some statistics.
Building and running the .exe works fine. However, I don't know what the next step should be. I have two ideas.
(Idea 1) Set up a log database server (e.g. Kibana/Elasticsearch) and send the output to it. However, it seems like overkill.
(Idea 2) Create a batch file and just copy the log somewhere.
However, I don't know what the usual way is to collect such data with CI. I guess there is a better solution. Is there any way to collect the data using CI?
I can suggest using build artifacts: you can configure your builds so that they produce files and make them available to TeamCity users. You can then download and analyze them as needed. Given that the files are quite small, I think it's an ideal fit.
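For concreteness, artifacts are declared in the build configuration's General Settings under "Artifact paths", using `source => target` rules. Assuming your .exe writes its result into an `output` folder (the path here is an example):

```
output/*.csv => results
```

This would publish every generated .csv under the build's Artifacts tab, inside a `results` directory.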
If you need to collect the artifacts from every build, you can configure another build which runs a script (e.g. in Python) that uses the TeamCity REST API to collect all artifacts from the relevant builds and zip them into a complete set of your files.
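As a rough sketch of that collection step in shell: the server URL and build configuration ID below are placeholders, the canned XML stands in for a real API response, and the actual download calls are commented out. A real script should use proper authentication and an XML/JSON parser rather than grep:

```shell
#!/bin/sh
# Sketch: download every finished build's artifacts via the TeamCity
# REST API. SERVER and BUILDTYPE are placeholders.
SERVER="https://teamcity.example.com"
BUILDTYPE="MyProject_MyBuild"

# The real listing call would be something like (with credentials):
#   curl -u user:pass "$SERVER/app/rest/builds?locator=buildType:$BUILDTYPE,state:finished"
# and returns XML along the lines of this canned sample:
response='<builds count="2"><build id="101"/><build id="102"/></builds>'

# Crude extraction of the numeric build ids (use a real parser in practice):
ids=$(printf '%s' "$response" | grep -o 'build id="[0-9]*"' | grep -oE '[0-9]+')

for id in $ids; do
  echo "would fetch: $SERVER/app/rest/builds/id:$id/artifacts/archived"
  # curl -u user:pass -o "artifacts-$id.zip" \
  #   "$SERVER/app/rest/builds/id:$id/artifacts/archived"
done
```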
As an example, you can check a build on the JetBrains test server: just select a finished build and navigate to the Artifacts tab.
Please ask more questions if my answer is not clear enough.
When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on steps to skip the ones that aren't needed, so only the relevant work runs. This lets you keep the standard steps, though it may require a PowerShell script to figure out which steps to run and which to skip. Set variables from PowerShell using the special logging commands.
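As a sketch of those logging commands: a script step just writes a specially formatted line to standard output and the agent turns it into a build variable. The variable name, module folder, and last-commit change detection below are examples, not part of the standard steps:

```shell
#!/bin/sh
# Expose a "should this part build?" decision as a VSTS build variable.
# The agent converts any stdout line of the form
#   ##vso[task.setvariable variable=NAME]VALUE
# into a variable that later steps can read. ModuleA/ and the
# last-commit diff are stand-ins for your real change detection.
if git diff --name-only HEAD~1 HEAD 2>/dev/null | grep -q '^ModuleA/'; then
  build_module_a=true
else
  build_module_a=false
fi
echo "##vso[task.setvariable variable=BuildModuleA]$build_module_a"
```

Later steps can then skip themselves with a custom condition such as `eq(variables['BuildModuleA'], 'true')`.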
Disable the Get Sources step and add a step that fetches sources manually. You'll need to clean the working directory and check out the right commit, essentially replicating the actions of the Get Sources step by hand. It may require some fiddling to get the behavior right for normal builds, pull request builds, etc., but that way you take full control over the location where sources are checked out.
I'd also recommend investigating the Visual Studio 2017 project format, which uses the new <packageReference> element in project files to fetch packages. The new system supports configuring a version range, which can always fetch the latest available version of a package. It's a better long-term solution.
No, a custom workspace path isn't available in the VSTS build system.
You can change the working directory of the agent (C:\vstsagent_work) by re-configuring it and specifying another working folder, but it won't use the same source folder for different build definitions; the subfolders will still be 1, 2, 3, ….
I did my build numbers as 1, and then 2.
Does this matter, or is it just a matter of preference as to how you do them?
The Build Number (or CFBundleVersion) is not shown in the App Store so for the user it does not really matter.
The purpose of the Build Number is that developers can distinguish different builds using the same Version (CFBundleShortVersionString).
Consider you are working towards a version 2.1.0. Before you publish this version on the App Store, you probably want to distribute Beta builds to testers. If they report any issues and you fix them, you will need to create and upload a new build, but probably still use the version 2.1.0. In that case you would use the Build Number to distinguish the two builds.
You can use whatever you like as a Build Number. Apple provides a tool named agvtool to increase the Build Number in Xcode projects.
Another way (and what I personally do) is to use the git commit count as the Build Number. This can be automated via a Build Phase. That way, every change that you make (and commit) automatically increases your Build Number.
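A sketch of such a Build Phase script: run outside Xcode it only prints the candidate number (falling back to 0 when not in a git repository), and the commented PlistBuddy line shows how an Xcode "Run Script" phase would apply it; `$TARGET_BUILD_DIR` and `$INFOPLIST_PATH` are standard Xcode build settings:

```shell
#!/bin/sh
# Use the git commit count as the Build Number (CFBundleVersion).
# Falls back to 0 if not run inside a git repository.
count=$(git rev-list --count HEAD 2>/dev/null || echo 0)
echo "Candidate CFBundleVersion: $count"

# Inside an Xcode "Run Script" build phase you would then apply it,
# e.g. with PlistBuddy:
# /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $count" \
#   "$TARGET_BUILD_DIR/$INFOPLIST_PATH"
```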
I want to write a bash script that grabs only the output jars for the modules in my project that changed after a build, so that I can copy them up to a server. I don't want to have to copy every single module jar every time, as you would after a full clean build. It's a Gradle project using git. I know that Gradle can do an incremental build based only on the modules whose code has changed, but is there a way this mechanism (assuming it's a plugin) can be called? I have done some searching online but can't find any info.
Gradle has the notion of inputs and outputs associated with a task. Gradle takes snapshots of a task's inputs and outputs the first time it runs and on each subsequent execution. These snapshots contain hashes of the contents of each file, which enables Gradle to check on subsequent executions whether the inputs and/or outputs have changed, and to decide whether the task needs to be executed again.
This feature is also available to custom gradle tasks (those that you write yourself) and is one way in which you could implement the behaviour you are looking for. You could invoke the corresponding task from a bash script, if needed. More details can be found here:
Gradle User Guide, Chapter 14.
Otherwise, I imagine your bash script might need to compare the modified timestamps of the files in question or to compute and compare hashes itself.
The venerable rsync exists to do exactly this kind of thing: find differences between an origin and a (possibly remote) destination, and synchronize them, with lots of options to choose how to detect the differences and how to transfer them.
Or you could use find to search for .jar files modified in the last N minutes ...
Or you could use inotifywait to detect filesystem changes as they happen...
I get that having Gradle tell you directly what has been built would be the most logical approach, but for that I'd say you have to think more in Java/Groovy than in Bash... and fight your way through the manual.
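For the find-based route, a minimal sketch: `build/libs` is Gradle's default jar output location, while the demo module created at the top and the rsync destination are placeholders:

```shell
#!/bin/sh
# Demo setup: fake one module's gradle output so the find below has
# something to match (in a real project the jars come from the build).
mkdir -p demo-module/build/libs staging
touch demo-module/build/libs/demo-module.jar

# Collect only the jars written in the last 10 minutes, i.e. by the
# build that just ran, into a staging directory...
find . -path '*/build/libs/*.jar' -mmin -10 -exec cp {} staging/ \;

# ...then push just those to the server (destination is a placeholder):
# rsync -av staging/ user@server:/deploy/jars/
ls staging
```

Since rsync already skips unchanged files, the staging step is mostly about keeping a clean record of exactly what each deployment shipped.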
I'm deploying my build remotely to a file server and clients are downloading it. My build is mostly a loose collection of binary files with some text files. In total the build is around 1 GiB.
However, most of the time I'm making small changes to the executable or small binary data changes so the delta between builds is small.
I'd like to be able to push a build and anyone that downloads the build only downloads the delta. If a new user downloads it for the first time they would get the full build and not the delta. It would also be nice for users to pick a build from the past and grab that build.
I was thinking something like git would work because it meets all the requirements I listed, but git requires users to download the entire history of the repository.
I could write something like this myself that has delta patching and compression but I imagined someone has written this before. Does anyone have any recommendations that meet my requirements?