Build/deploy - complex scenario and application - Maven

Management has tasked our team with creating an automated build/deploy script for the production servers.
The script requirements are:
fetch the latest release source code of the web application from Git
compile it into a WAR
connect to a remote server (production/test)
shut down the Tomcat server on the remote machine
execute schema updates on the remote DB server (for the new release)
deploy the new WAR to Tomcat and start it
My questions are:
can all three major players in the build/deploy area (Ant/Maven/Gradle) do this?
is building a small Java application that performs these exact steps good practice? (Writing a Java app would probably be much faster than learning to do this in Maven/Ant/Gradle.)
are there any alternative tools for this kind of work?
are there any better alternatives to the whole "build machine" idea?
Thanks!

Can all three major players in the build/deploy area (Ant/Maven/Gradle) do this?
With enough customization of targets/goals, any of these can do what you want.
Is building a small Java application that performs these exact steps good practice?
You certainly could (or just use a shell script); this is essentially the same thing as customizing the build tools listed above. A rough sketch of that approach follows.
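For illustration, a minimal PowerShell sketch of the script approach might look like the following; the host name, paths, and database details are hypothetical, and the Tomcat/SSH specifics will vary with your setup:
# Hypothetical sketch: fetch, build, and deploy over SSH (adjust names and paths).
git clone --branch release --depth 1 https://example.com/your/webapp.git build
Set-Location build
mvn clean package    # produces target/webapp.war

ssh tomcat@prod.example.com "/opt/tomcat/bin/shutdown.sh"    # stop Tomcat
Get-Content .\db\schema-updates.sql | ssh tomcat@prod.example.com "mysql appdb"    # apply schema updates
scp .\target\webapp.war tomcat@prod.example.com:/opt/tomcat/webapps/
ssh tomcat@prod.example.com "/opt/tomcat/bin/startup.sh"    # start Tomcat again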
Are there any alternative tools for this kind of work?
My company has created BuildMaster specifically to solve these and additional problems related to deployment, and it sounds like the free version may suit your scenario.
The basic solution would be to:
Connect to Git by adding a Git source control provider (or if you're using GitHub, the GitHub provider)
Connect an agent to the target server and add it to BuildMaster (requires installation for Windows, but if deploying to Linux it just uses an SSH connection)
In your deployment plan, you'd use the following actions:
"Build Ant Project" or "Execute Maven" to perform the actual build process
"Create Build Artifact" to associate the build output (whether a WAR file or its contents) with the BuildMaster build
"Stop Service" to stop Tomcat
"Execute Database Scripts" to execute scripts on disk (whether you've pulled them from source or whatever) or "Execute Database Change Scripts" which works automatically if you've uploaded them to BuildMaster
"Deploy Build Artifact" to deploy the previously captured artifact to the remote server
"Start Service" Tomcat
What's neat about this approach is that when you create this deployment plan, it reads very much like the steps written out above.
Additional benefits that may or may not pertain to your exact scenario, and that can be trivially added, include:
Approvals & Signoff - workflows can specify these to ensure QA signoffs occur before promotion
Release Management and Auditing - know what build is in what environment and when it went there
Variable Deployments - you can add branching logic to deployment plans to make it easy to select whether the "Debug" or "Release" build goes to test, for example.
Notifications - users can subscribe to certain events (deployment, release, etc.) and receive email notifications when those events occur

Related

SpecFlow tests running on a local web server

I am trying to use SpecFlow with Playwright in order to do BDD on a portal app we developed, but I am facing a small problem.
The SpecFlow project is separate from the ASP.NET Core project that hosts the API of the portal app (the front end is in Vue). Since the tests point to a specific URL (currently localhost), I need to run the ASP.NET Core & Vue project locally before running the tests; otherwise SpecFlow & Playwright cannot run them (nothing will be listening on localhost).
Is there any way I can force the web server project to run? I tried running it from outside Visual Studio with the dotnet build and dotnet run commands, but somehow they are missing parameters that exist when running from inside VS, and apart from that, these commands must somehow be triggered when the tests are run.
I have seen solutions like creating a Docker image from a Docker Compose file in order to pack the .NET project & server into it before running the SpecFlow tests, then spinning up the server in the BeforeTestRun hook using FluentDocker, but I am not quite sure that is the easiest (or best) solution.
Does anyone know how I can trigger running the .NET Core project (with the Vue pages)?
This is actually a pretty big question with a pretty big answer; however, this is well-trodden ground. The issue isn't so much a SpecFlow issue as a general automated-testing issue. Development practices like continuous integration and continuous delivery can help. Each one is too big for a single answer, but I can address this in more general terms.
In its simplest form, running automated tests locally involves these steps:
Build the application
Deploy the application to a real web server
Run tests
I'm going to assume you are developing in a Windows environment, however every operating system has some sort of command line scripting solution available. The scripting language might change, but the overall idea will not.
Configure a web server. In Windows, this would be Internet Information Services (IIS).
Add a new "application" (or "IIS app", as some people call it) to your localhost web server, pointing the physical directory to the root directory of the web project. Repeat this for each web site or web app your system requires. (A PowerShell sketch of this step follows the list.)
Write a PowerShell script that gives you an easy way to build and deploy the applications to your local web server.
This script should use publish profiles set up in Visual Studio, which allows you to publish directly from Visual Studio before invoking tests manually through Test Explorer.
Write a PowerShell script used as a "harness" script to coordinate building, deploying locally, and then invoking dotnet test.
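For the IIS step above, a minimal sketch using the WebAdministration module (the site name and physical path are hypothetical):
# Hypothetical: register the web project folder as an IIS application on localhost.
Import-Module WebAdministration
New-WebApplication -Site "Default Web Site" -Name "MyPortal" -PhysicalPath "C:\src\MyPortal\src\MyPortal.Web"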
Running tests locally just requires a single line of PowerShell to invoke your test harness script:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create
# Skip deploying in case web apps haven't changed:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create -deploy:False
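The harness script itself might look something like this minimal sketch; the project paths, publish profile, and test-filter property are assumptions to adapt to your solution:
# Scripts\Run-Tests.ps1 -- hypothetical minimal harness (adjust paths and names).
param(
    [string]$solutionDir = ".",
    [string]$tags = "",
    [string]$deploy = "True"    # pass -deploy:False to skip publishing
)

if ($deploy -ne "False") {
    # Publish the web project to the local IIS physical directory,
    # using a publish profile set up in Visual Studio.
    dotnet publish "$solutionDir\src\MyPortal.Web" -c Debug /p:PublishProfile=LocalIIS
}

if ($tags) {
    # Turn the comma-separated tags into an OR'ed test filter, e.g.
    # "BlogPosts,Create" -> "Category=BlogPosts|Category=Create".
    # (The filter property name depends on your test framework.)
    $filter = ($tags -split "," | ForEach-Object { "Category=$_" }) -join "|"
    dotnet test "$solutionDir\tests\MyPortal.Specs" --filter $filter
} else {
    dotnet test "$solutionDir\tests\MyPortal.Specs"
}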

Run release tasks selectively based on project code changes

We are using VSTS for build and release management, and using CI/CD. Typically, our solutions consist of a web application project, and a database project.
Our current release tasks take the application offline (using app_offline.htm), publish the database, then publish the web application. Publishing the database project often results in no changes, since with CI/CD we update the web app code much more frequently than we change the db schema.
Is there a way to only run the database publish task (using WinRM) when it detects a change in the database project code, in our git repository?
EDIT: This in itself isn't a problem, as typically when the DACPAC gets published there will be no activity. HOWEVER, I've been requesting that the database be backed up using the /p:BackupDatabaseBeforeChanges=true flag, which seems to back up the database even if there are no changes. This is an issue for large databases.
The simple way is to separate the web project and the database project into two build definitions:
Create a new build definition
Enable Continuous Integration in the Triggers tab
Specify a Path filter that includes the database project
Modify the Visual Studio Build task, specifying the /t:[database project name] argument in the MSBuild Arguments box so that just the database project is built
Repeat the same steps for the web project
Then create a new release definition:
Add artifacts for the previous two build definitions and enable the Continuous deployment trigger
Add two environments (e.g. database, web)
Open the Pre-deployment conditions of an environment (e.g. database)
Enable Artifact filters and select the corresponding artifact (e.g. the database build artifact), specifying the build branch (you can specify *, which means all branches)
Add tasks that deploy just the database in this environment
Repeat the same steps for the web environment
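If you have to stay with a single definition, a rough workaround (not a built-in VSTS feature, so treat this as a hedged sketch) is a script step that checks whether the database project changed and publishes a variable that the DACPAC publish task can use as a custom run condition; the path and variable name here are hypothetical:
# Hypothetical script step: flag whether the database project changed in the last
# commit (adjust the commit range and path to your branching flow).
$changed = git diff --name-only HEAD~1 HEAD -- "src/Database"
if ($changed) {
    Write-Host "##vso[task.setvariable variable=DbChanged]true"
} else {
    Write-Host "##vso[task.setvariable variable=DbChanged]false"
}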
The answer is that exactly what I want isn't possible.

SonarQube - how is it used

I have a simple problem, probably with a simple answer, but I can't find what it is. We want to deploy SonarQube along with Checkstyle and some other tools, but we can't figure out whether it is meant for a centralized server deployment or for each developer's machine. All the tutorials show installations on separate machines being used from localhost, while there is a public instance example, and the requirements and specs certainly look service-like.
On the other hand, I don't get how developers submit their code for checks if it runs on a server.
So, in short, how is it deployed? Any checklist or something similar would be of great help.
The SonarQube "runtime" architecture has several elements:
SonarQube server. It contains a database (e.g., MySQL) and an embedded web server (Tomcat). The SonarQube server stores the results of analyses (the metrics) but does not execute the code analyses itself. It provides a web UI with project dashboards, various metrics, code drill-down, and admin options, and it uses a pluggable architecture: you can add/remove functionality via plug-ins.
Program that runs code analysis on the developer machine. There are several options: (a) developers using Eclipse or IntelliJ can use the respective SonarLint plug-in, which provides configuration properties, menu options to run analysis, a view to show violations, etc.; (b) developers can also run code analysis via Maven (mvn sonar:sonar) or Gradle (gradlew sonarqube); (c) developers can execute the various code analyses through a program called SonarQube Runner.
All these options of programs that run the analysis on the developer machine need to be configured to communicate with a SonarQube server. For example, when you run code analysis in IntelliJ using SonarLint, the metrics will be uploaded to the server. This server is typically shared by all developers, but it can also be localhost.
Program that runs code analysis on the CI/CD server. The job/pipeline that builds a software project can be configured to run SonarQube code analysis. It can be done via maven or gradle just like on the developer's machine, or via a plug-in. There are SonarQube CI plug-ins for Jenkins, Hudson, Bamboo, and others. Depending on the size of your project, you may want to configure the code analysis to run once a day only, and not upon each code commit or changes to dependencies. The SonarQube code analysis executed on the CI server will likewise send the generated metrics to the SonarQube server.
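As a concrete illustration, pointing a local or CI analysis at the shared server is usually just a matter of passing its URL; the URL below is hypothetical, and your server may additionally require an authentication token (sonar.login):
# Run the analysis and upload the resulting metrics to the central SonarQube server.
mvn sonar:sonar "-Dsonar.host.url=http://sonar.example.com:9000"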
The SonarQube architecture documentation is very poor (not to say absent), so it's hard to get the big picture. I hope this helps.
SonarQube (formerly just "Sonar") is a server-based system. Of course you can install it on your local machine (the hardware requirements are minimal). But it is a central server with a database.
Analyses are performed by some Sonar "client" software, which could be the sonar runner, the sonar ant task, the sonar Eclipse plugin etc. The analysis results can be automatically uploaded to the server, where they can be accessed via the sonar Web application.
In an environment with many developers, you should run a build server (e.g. Hudson or Jenkins), which performs automatic sonar analyses as part of the nightly build. Other schedules are possible, but the developers should know when they can expect updates of the server-side analysis results. The results of the automated analysis can be displayed in the individual developer's Eclipse editor by way of the sonar Eclipse plugin.
The architectural documentation on Sonar is quite sparse. I've looked for a picture to visualize what I just described, but could not find one ...

Maven, switching to a different profile

I have a problem with the proper Maven profile configuration of a project that is deployed to a continuous integration server.
In my project, some resources need to be included only during the tests of the daily build phase and others need to be included during nightly builds, and they can never both be included at the same time, because the build process would fail. I can achieve this locally by activating one profile at a time.
The continuous integration server runs the following Maven commands:
during daily builds:
mvn clean package -Pci -Dci
during nightly builds:
mvn clean install -Dmaven.test.failure.ignore -Pci,nightly -Dci -Dnightly
As you can see, the nightly build command includes the Maven variables and profiles defined in the daily build command, which causes trouble for me, because I want only one profile activated at a time.
Specifically, what I want is to have 3 separate profiles:
my-profile (activated by default, not used on the CI server)
ci-profile (activated only on daily builds, used on the CI server)
nightly-profile (activated only on nightly builds, used on the CI server)
How can I achieve that? I have tried almost everything. Reconfiguring the CI server is not an option.
When I have to configure the same build with different profiles, using Jenkins as a CI server, I usually create as many builds as there are profiles, so each build uses the correct configuration.
If adding a new build is not an option, you can probably create a workaround using something like the exec plugin (http://mojo.codehaus.org/exec-maven-plugin/) to download the resources from an FTP server (or somewhere similar). You will also have to create a cron job (or equivalent) to swap in the correct resources between the builds: in the evening you put the resources for the night in place, and in the morning the ones for the day; a rough sketch of such a swap script follows.
But considering how cumbersome this process would be, it is probably better to try to add a new build.
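For illustration only, the swap script that the cron job (or scheduled task) would run could be as small as this; the folder names are hypothetical:
# Hypothetical scheduled script: stage the right resources for the upcoming builds.
param([ValidateSet("day", "night")][string]$phase = "day")

Remove-Item .\src\main\resources\* -Recurse -Force
Copy-Item ".\resources-$phase\*" .\src\main\resources\ -Recurse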

TeamCity and PHP

We are considering TeamCity for continuous integration but have projects in both Rails (Rake tests) and PHP (PHPUnit tests).
I'm a bit new to CI - has anyone set up TeamCity for PHP projects? If so, is it straightforward?
Thanks,
Chad
To get the question answered:
Just use Ant build scripts, and it'll work with TeamCity.
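If you'd rather not maintain an Ant script, TeamCity can also run the test suite from a plain command-line build step; a hedged example (the paths are hypothetical, and the JUnit-style report is what TeamCity's XML report processing picks up as test results):
# Hypothetical command-line build step: run PHPUnit and write a JUnit-style report.
php .\vendor\bin\phpunit --configuration phpunit.xml --log-junit reports\junit.xml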
In the high-demand world of web development, using CI is very beneficial and almost a requirement nowadays.
We use TeamCity, YouTrack, Perforce and PHP Maven to build, package and deploy our web applications. The setup is as follows:
Once developed, code is committed to the Perforce repository's main folder for the app
TeamCity is configured to check this folder for changes and build each time changes are found (see configuring TeamCity)
Once development has reached a point where it's ready to be deployed, we integrate the main branch with the release branch
TeamCity is configured to check the release branch for changes and deploy via FTP to the server
Cron jobs running on the app deploy new releases to a QA branch
Once the changes and functionality are verified, the status of the QA deployment is set to "deploy"
Another cron job looks for new QA releases that are ready to be deployed; once found, it extracts the package into the live folder
In this case, our PROD and QA folders are on the same server. Alternatively, you can have multiple TeamCity build configurations that push the app to different servers (or use TeamCity to define an environment variable).
Also, when we close tickets/issues in YouTrack, we can pull the build info from TeamCity, as the two integrate with each other.
Links:
Configuring TeamCity, Maven for PHP for Joomla continuous build:
http://www.waltercedric.com/joomla-mainmenu-247/continuous-build/1552-configuring-teamcity-maven-for-php-for-joomla-continuous-build.html
We are using TeamCity to deploy a number of PHP sites -- static, Wordpress and Drupal shortly.
We use the Deployer plugin to sftp files to the appropriate server and then a script to rsync the files to the right place and to setup apache. Works very, very well.
Here is a fresh article from JetBrains on how to setup TeamCity with PHP:
http://blog.jetbrains.com/webide/2013/01/continuous-integration-for-php-using-teamcity/
