Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
How can I get the last build date from Hudson in a Maven context? I need to pass that date back to Maven to generate a *changelog* report. Is there a better way to achieve that?
I am using Hudson 2.2.1 and Maven 3.x.
Although it would help to understand exactly what you are trying to achieve, from what I understand:
Could you not maintain a properties file with the timestamp of the last build? The timestamp would be updated with each build.
You could read the last build time from this properties file and use it in whatever Maven plugin you are using.
You may choose to check the properties file back into SCM to ensure that the last build date is the same on all individual machines. If you only need this on the Hudson machine, you are fine without putting the file into SCM.
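A rough sketch of that idea, assuming the maven-antrun-plugin is available (the file name, property key, phase, and date pattern here are illustrative, not part of the question):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>record-build-time</id>
      <phase>package</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- overwrite the file with the timestamp of the build that just ran -->
          <propertyfile file="${basedir}/last-build.properties">
            <entry key="last.build.time" type="date" value="now" pattern="yyyy-MM-dd HH:mm:ss"/>
          </propertyfile>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>

The previous build's value could then be read back at the start of the next build, for example with the properties-maven-plugin's read-project-properties goal, and handed to the changelog report.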
Use the environment variable BUILD_ID to get the timestamp of the latest build.
The timestamp you get will be in YYYY-MM-DD_hh-mm-ss format. Use the ZenTimestamp plugin to convert it to the format you need.
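For illustration, the value could then be fed to the changelog report roughly like this (parameter names are from the maven-changelog-plugin documentation as I recall it; verify them against the plugin version you use, and make sure dateFormat matches the actual BUILD_ID format):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-changelog-plugin</artifactId>
  <configuration>
    <type>date</type>
    <dates>
      <!-- assumes the job exposes BUILD_ID to Maven as an environment variable -->
      <date implementation="java.lang.String">${env.BUILD_ID}</date>
    </dates>
    <dateFormat>yyyy-MM-dd_HH-mm-ss</dateFormat>
  </configuration>
</plugin>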
or
Install the Hudson Groovy builder plugin and execute a Groovy script to get the last build information:
myjob = hudson.model.Hudson.instance.getItem("job_name")
lastbuild = myjob.getLastBuild()
println lastbuild.getTime()
I solved this a while ago. I thought I should share it here, as I originally asked this question. There is a REST call you can make to Hudson which gives you the last successful build. Combining that REST call with an XPath expression, you get the last build date in the YYYY-MM-DD_hh-mm-ss format.
Maven Antrun task:
<loadresource property="build_start_date">
<url url="${JOB_URL}/lastSuccessfulBuild/api/xml?xpath=/*/id/text()"/>
</loadresource>
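For context, a sketch of how that snippet might be embedded in the maven-antrun-plugin so the value becomes visible to the rest of the Maven build (exportAntProperties requires maven-antrun-plugin 1.7+ as far as I recall; the phase and execution id are arbitrary, and depending on how the job is launched you may need ${env.JOB_URL} instead of ${JOB_URL}):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>fetch-last-build-date</id>
      <phase>initialize</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <!-- expose Ant properties such as build_start_date as Maven properties -->
        <exportAntProperties>true</exportAntProperties>
        <target>
          <loadresource property="build_start_date">
            <url url="${JOB_URL}/lastSuccessfulBuild/api/xml?xpath=/*/id/text()"/>
          </loadresource>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>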
This question already has an answer here:
karate-gatling report aggregation
(1 answer)
Closed 1 year ago.
I am executing my karate-gatling tests from TeamCity. My report folder structure looks like this:
target/gatling/xmltest-201506234/index.html
The folder name includes the current timestamp. Is there a way to use a wildcard so it shows up in the reports tab for TeamCity?
Or is there a way I can remove the timestamp from the folder name in Gatling?
I have seen the outputDirectoryBaseName option for Gatling, but it still appends the timestamp to the base name.
For Cucumber reports I have a path like target/cucumber-html-reports/cucumber-feature.html.
So in the report configuration in TeamCity I passed the base path as cucumber-html-reports/cucumber-feature.html and the artifact path as target, and I am able to integrate the reports with the build.
Is there any way I can achieve the same for target/gatling/xmltest-201506234/index.html?
I tried gatling/xml%/index.html but it's not working. Any help is appreciated.
Can't say for karate-gatling, but FrontLine, Gatling's Enterprise version, does have a plugin for TeamCity.
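That said, one possible workaround (a sketch only; the paths are taken from the question and the fixed folder name is an assumption) is to copy the newest timestamped folder to a stable name in a final build step, and point the TeamCity report tab at that:

# copy the most recent timestamped Gatling results folder to a fixed path
latest=$(ls -td target/gatling/*/ | head -n 1)
cp -r "$latest" target/gatling/latest

The report tab could then reference gatling/latest/index.html with target as the artifact path, mirroring the Cucumber setup described in the question.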
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have never used the Gradle wrapper, but the IntelliJ documentation recommends it. Are there any drawbacks to using it?
In particular, if I use 'too new' a version, will I have any trouble porting to older OSs?
The wrapper makes your projects more self-contained and more predictable to build. You can set the version to whatever you want, so 'too new' a version is not an issue. The wrapper is also cross-platform.
The main upside is that you can set the version in your build file, and anyone pulling down the code will build with your specified Gradle version, regardless of what they have installed on their machine, or whether they have Gradle installed at all.
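A minimal sketch of pinning the version in build.gradle (the version number here is just an example):

// configures the built-in wrapper task
wrapper {
    gradleVersion = '7.6'
}

Running ./gradlew wrapper (or gradle wrapper --gradle-version 7.6 if no wrapper exists yet) regenerates the gradlew scripts and the gradle/wrapper files, which you then commit alongside the code.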
The downsides are:
Uses more drive space.
Harder to change the Gradle version across multiple projects at once, unless you have a commonly defined version somewhere, and that creates a dependency between projects. If you have many projects and upgrade to the latest and greatest Gradle, each project will have to download it. Then again, the downloaded Gradle distributions are cached locally, so that will speed things up.
Now for the opinion part...
I find that unless I need to lock a project to a specific Gradle version for compatibility, or have many others building a project that may be using different Gradle versions, I am happy linking to a local version from the IJ project, and not using the wrapper. That way I can change the version for all my project’s modules in one place.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Suppose a deployment pipeline is running: SVN tagging and the development version change are in progress. At that moment a developer commits his changes, so there is a chance that the CI server releases the newly committed, untested changes to production, or that some other conflict occurs. How can I handle this situation? Do I need to lock the entire trunk until the build pipeline completes, or is there another workaround?
If I understand correctly, you assume the following steps:
after a commit, the build server checks out the current trunk (let's say revision A),
performs the build,
executes some tests,
tags the trunk if the tests are successful,
and deploys to production (still only if tests are successful).
The "crazy" developer commits between steps 3 and 4 and thus creates revision B. Now you assume that the build server will again check out the latest revision (which would be revision B). This behaviour could indeed cause some trouble.
However, the build server should do all the steps based on a specific revision, which is not a problem in common setups. E.g. Jenkins usually has a check-out step at the beginning of the job. If there is a tagging step at the end, you will usually not want Jenkins to blindly tag the current trunk (causing the problem you describe) but instead to tag the revision that is checked out in Jenkins' workspace.
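A rough sketch of what that tagging step could look like in a shell build step (assuming Subversion 1.9+ for --show-item, and that REPO and BUILD_NUMBER are available in the job's environment):

# tag the exact revision sitting in the workspace, not whatever trunk's HEAD is now
REV=$(svn info --show-item revision)
svn copy -r "$REV" "$REPO/trunk" "$REPO/tags/build-$BUILD_NUMBER" \
    -m "Tag revision $REV for build $BUILD_NUMBER"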
Additionally, please consider that there should be at least some manual approval step before anything gets deployed automatically to production. This is usually mentioned in the context of continuous delivery as far as I can see.
The key to continuous delivery is, IMHO, that you are able to deploy the current version of your source code at any time at the push of a button. It does not mean that every commit should be deployed automatically.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am interested in knowing what other teams are doing about limiting internal artifact storage.
So, how long is an internal artifact stored in Artifactory?
Sonatype (people behind Maven and Nexus) published a blog article on this issue:
http://www.sonatype.com/people/2012/01/releases-are-forever/
The vast majority of files published into our Maven repository are snapshot releases. Both Nexus and Artifactory have functionality for periodically purging old snapshots (useful for keeping disk usage under control).
It's the management of release builds that becomes the issue. In my opinion this falls into a couple of categories:
Not every release is used
During QA some releases are rejected, which means it makes sense to publish these into a temporary "staging" repository prior to full release.
I call these "release candidates" and Nexus Professional has functionality to manage these for me. (I assume artifactory also supports staging)
Not every release is needed
Sonatype's blog addresses this point. Applications in production rarely need to roll back to a version older than 6 months. Applications in your Maven repository are unlikely to be used as dependencies in a 3rd-party build, so it calls into question the need for continued storage.
Deleting such artifacts remains a judgement call.
We store every released artifact, and I'm pretty sure you should do that too unless you have a really strong reason not to. We limit our snapshots to just the last one per artifact (and only if there is no corresponding release version); however, we ensure that every snapshot lives for at least 3 days. It's easy to set this up in Nexus, and AFAIR it's more or less its default policy for snapshots.
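For reference, a rough sketch of how such a policy maps onto a Nexus scheduled task (the field names below are from memory and may differ between Nexus versions):

Task type:                 Remove Snapshots From Repository
Minimum snapshot count:    1     (keep only the latest snapshot of each artifact)
Snapshot retention (days): 3     (never delete snapshots younger than 3 days)
Remove if released:        true  (drop snapshots once a release version exists)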
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 11 years ago.
Our team has some controversy over which tasks should be handled by the CI tool and which should be in a build script (we use Ant for building and FinalBuilder for CI).
My thought is that all the tasks which are useful not only on a build server but also on developers'/QA machines should be placed in the Ant build script (but I'm not sure about the actual best practices).
For now we have the following list of tasks:
update directory (svn update)
compile
run tests
make coverage report
run static analyzers and generate reports
package (make war-file)
deploy to a web-server
send email notifications (with linked reports and build status)
run DB update tool
put a build result (war file and reports) to a special place
(any other CI-common tasks?)
Which tasks would you do by means of your CI tool and which would you place to the build script?
My approach is as follows:
Ant tasks: compile, tests, coverage reports, analyzers, package, deploy, DB update.
CI tool: svn update, email notifications, putting build result to a special place.
(The Ant task set is partly inspired by the default Maven set of tasks.) A rough Ant skeleton of this split is sketched below.
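A minimal sketch of what the Ant side of that split might look like (tool availability such as JUnit on Ant's classpath, and the exact paths, are assumptions rather than the actual project layout); coverage reports, analyzers, deployment, and the DB update would be further targets in the same file, while svn update, notifications, and archiving stay in the CI tool's job configuration:

<project name="app" default="package" basedir=".">
  <property name="src.dir"   value="src"/>
  <property name="build.dir" value="build"/>
  <property name="dist.dir"  value="dist"/>

  <!-- compile -->
  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes" includeantruntime="false"/>
  </target>

  <!-- run tests (assumes the JUnit jars are on Ant's classpath) -->
  <target name="test" depends="compile">
    <mkdir dir="${build.dir}/test-reports"/>
    <junit haltonfailure="true">
      <classpath path="${build.dir}/classes"/>
      <formatter type="xml"/>
      <batchtest todir="${build.dir}/test-reports">
        <fileset dir="${build.dir}/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>

  <!-- package (make the war file) -->
  <target name="package" depends="test">
    <mkdir dir="${dist.dir}"/>
    <war destfile="${dist.dir}/app.war" webxml="web/WEB-INF/web.xml">
      <classes dir="${build.dir}/classes"/>
    </war>
  </target>
</project>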
Good question.
I do think that anything you want to do routinely outside of the build server should be in scripts, but not necessarily in your "build" script.
For instance, your deployment and database upgrade steps I would put into a separate script (and yes I disagree with David W and think you absolutely should automate these). We've used Ant for deployment tasks in the past and done ok with it. But I've also heard that Ant is a bad build scripting language because it's not as good with procedural deployment tasks. That's backwards. Use Ant for build, and if it's not a good fit for deployments, script that with something else.
The core role of your build server is to consistently and automatically run these processes and report on the results. For unit tests, etc, this may mean invoking the script that runs the tests, but having the intelligence to parse the results in a meaningful way for things like trending and analysis.
All of the above advice is framed by "within reason". If you only occasionally do something outside of the build server, scripting it is hard, and the integration at the build-server level is easy, then by all means save yourself the work and just do it there.
Let's see the tasks you want to do...
update directory (svn update)
Well, Jenkins will do that anyway.
compile
And, that too...
run tests
Why not? Jenkins can display the JUnit test results right on the build page. If the tests take a really long time to complete, you can set up a second job to do the tests. Have Jenkins copy the working files from the old job to the new job, and run the second job. There's a Copy Artifacts plugin that will help you do this.
make coverage report
Jenkins can do that too. And, just like the JUnit tests, Jenkins can display the results on the build page.
run static analyzers and generate reports.
Jenkins can do that too.
package (make war-file)
Jenkins can do that. You can even store the war file on Jenkins. People will be able to copy it and deploy it on their systems. Or, you can have Jenkins store it in your Maven repository. Heck, you can do both.
deploy to a web-server
Jenkins can do this, but I prefer to do this manually -- unless there's some testing I want to do as part of the build process. When it comes to deployment, I'd rather do things myself.
send email notifications (with linked reports and build status)
Standard Jenkins behaviour is to send out notifications on bad and unstable builds (builds that built, but where tests failed), then send an email once the build is good again. Do you really want an email sent out with each and every build? If so, use the email-ext plugin.
run DB update tool
Again, this is something I prefer to do manually -- unless this is part of my testing.
put a build result (war file and reports) to a special place
No need to do that. The Jenkins individual build webpage itself can store the war file, the testing results, who started the build, and what was changed. The changes can be linked to Fisheye, Sventon, or another source repository web browser which allows a user to click on a file and see exactly the lines changed.
Jenkins also has a permanent link to the last good build, the last bad build, and the last build. I use iframes (Bad David! Using obsolete HTML code) to embed these pages in the official corporate web pages.
In short, Jenkins can do all of that stuff for you, so why not let it?