We are testing an upgrade to SonarQube 5.3 and are running into some of the issues identified in other Stack Overflow posts, such as the Cobertura interactions. The problem I have is that I am not an administrator on the test server, so I can't look at the background tasks to find out why they are failing. Is there a way the background task results can be printed in the Sonar analysis logs? Is there a way to keep the analysis in the build from throwing an exception when the background task fails, so that it just prints an error? And can the permission to "view background tasks" be extended beyond administrators?
The presumption is that non-admins don't care about background tasks except whether the most recent one has succeeded/failed.
Unfortunately, you're not going to be able to see server-side processing errors on the client side. The scanner compiles an analysis report and submits that to the server for processing, where it's queued and handled asynchronously.
What might work for you is requesting admin permissions, not on the SonarQube instance, but only on the project you're trying to analyze. That would give you access to the list of that project's background tasks (as well as the ability to administer the project more generally).
Related
I run a SonarQube scan on a Java Maven project with my SonarQube server passed as a parameter.
But when the command line says it's done, the results do not appear immediately in the SonarQube web UI or via the API. I need to wait one to three minutes before the number of vulnerabilities increases.
I think there is some sort of post-analysis processing once the report is pushed to the server.
How can I know when this processing is done?
I ask because I have another service that queries the SonarQube API as soon as the analysis finishes on the machine, but it always returns 0 vulnerabilities, since the post-processing does not seem to be finished yet.
When the scanning work is done, the report link is provided, but right after the scan completes, SonarQube starts a "background task" on the server that does some fairly opaque processing. Only after that background task completes are your statistics likely to be up to date.
Depending on how you run the scan, this "wait" can be managed for you automatically. For instance, if you use Jenkins pipelines, you should be using the "withSonarQubeEnv" and "waitForQualityGate" pipeline steps; the latter goes into a wait loop, checking for the background task to be complete. It is also possible to call the SonarQube Web API (REST) directly to get the status of the background task. I would provide more info, but you haven't said much about the environment your scan runs in.
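For illustration, here is a rough Python sketch of polling that Web API outside of Jenkins. It assumes the scanner wrote a report-task.txt file containing a ceTaskId (for a Maven build this is typically under target/sonar/) and that the server exposes api/ce/task; the server URL, token and paths are placeholders you would need to adapt to your setup:

```python
# Sketch: poll SonarQube's Compute Engine API until the background task finishes.
# The server URL, token and report-task.txt location below are placeholders.
import time
import requests

SONAR_URL = "https://sonar.example.com"   # placeholder server URL
TOKEN = "my-sonar-token"                  # placeholder auth token

def read_ce_task_id(path="target/sonar/report-task.txt"):
    """Pull the ceTaskId the scanner recorded after submitting the report."""
    with open(path) as f:
        props = dict(line.strip().split("=", 1) for line in f if "=" in line)
    return props["ceTaskId"]

def wait_for_background_task(task_id, timeout=300, interval=5):
    """Poll api/ce/task until the task leaves the PENDING/IN_PROGRESS states."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{SONAR_URL}/api/ce/task",
                            params={"id": task_id},
                            auth=(TOKEN, ""))
        resp.raise_for_status()
        status = resp.json()["task"]["status"]
        if status not in ("PENDING", "IN_PROGRESS"):
            return status  # SUCCESS, FAILED or CANCELED
        time.sleep(interval)
    raise TimeoutError("background task did not finish in time")

if __name__ == "__main__":
    print(wait_for_background_task(read_ce_task_id()))
```

Once the task reports SUCCESS, the project's measures should be safe to query from your other service.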
I've got a pipeline that logs in and out of a web application every 5 minutes to ensure that the app's backend works, the database is up, and so on.
A problem occurred that wasn't even directly related to the app, and my boss was bombarded with email notifications. Is it possible to limit the emails that report a series of broken pipelines to a single one, and suppress all subsequent emails until the pipeline has been fixed?
It seems that the editor for "Pipelines emails" is rather limited and doesn't support this directly. However, the option exists in Jenkins, and I'm wondering whether someone has figured out a solution or a workaround to achieve this in GitLab CI. (Is it possible to script something like this in the .gitlab-ci.yml file?)
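To illustrate the kind of workaround I am imagining: a small script, run from a failure-only job, that asks the GitLab API whether the previous pipeline on the same ref already failed, and only sends the alert on the first failure. The GITLAB_API_TOKEN variable name below is a placeholder, and I have not verified all the API details:

```python
# Sketch of a "notify only on the first failure" check, meant to run from a
# failure-only job in .gitlab-ci.yml. Assumes an access token is exposed as
# the CI variable GITLAB_API_TOKEN (placeholder name) and that the usual
# predefined variables (CI_API_V4_URL, CI_PROJECT_ID, ...) are available.
import os
import requests

def previous_pipeline_failed():
    """Return True if the most recent earlier pipeline on this ref also failed."""
    resp = requests.get(
        f"{os.environ['CI_API_V4_URL']}/projects/{os.environ['CI_PROJECT_ID']}/pipelines",
        params={"ref": os.environ["CI_COMMIT_REF_NAME"], "per_page": 20},
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_API_TOKEN"]},
    )
    resp.raise_for_status()
    current_id = int(os.environ["CI_PIPELINE_ID"])
    earlier = [p for p in resp.json() if p["id"] < current_id]
    earlier.sort(key=lambda p: p["id"], reverse=True)
    return bool(earlier) and earlier[0]["status"] == "failed"

if __name__ == "__main__":
    if previous_pipeline_failed():
        print("Previous pipeline already failed; suppressing notification.")
    else:
        print("First failure in this series; send the alert email here.")
```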
I have some long running integration tests that are automatically run by the TeamCity server when I commit to source control.
TeamCity allows me to prevent these tests from taking up all the build agents at once by limiting the number of simultaneous builds, but I wonder whether it's possible to have TeamCity cancel any currently running builds of this configuration when a new one starts?
In this environment, as soon as there is a new commit to source control, old runs of the integration tests become irrelevant, so I don't want the server to waste time running tests against old versions.
I don't think this is possible, and I would say this is by design.
Imagine a world where this is allowed: you would never know which commit caused a test to start failing. With enough overlapping commits you could have 50 builds cancelled before you learn that the final test run fails, and you would have no idea whether it was the last commit or the one 49 before it that caused the failure.
IMHO you would be better off focusing your efforts on making it possible for multiple runs to happen simultaneously on different servers, to get the speed-up you want, rather than throwing the baby out with the bathwater.
UPDATE
Whilst I don't think this is supported out of the box, if I had to do this I would look at getting a notification when a build starts (there seem to be no notifications for builds being queued, so you'll have to allow multiple builds to run concurrently for this to work), and then use the API to cancel the other builds.
You can get a list of builds using the API as well, so you should be able to cancel all the ones that are not the most recent.
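For example, here is a rough Python sketch of that idea, assuming the usual TeamCity REST endpoints (/app/rest/builds with a buildCancelRequest payload). The server URL, credentials and build configuration ID are placeholders, so double-check the endpoints against your TeamCity version:

```python
# Sketch: cancel every running build of a configuration except the newest one,
# via the TeamCity REST API. URL, credentials and build config ID are placeholders.
import requests

TEAMCITY = "https://teamcity.example.com"     # placeholder server
AUTH = ("ci-user", "ci-password")             # placeholder credentials
BUILD_TYPE = "MyProject_IntegrationTests"     # placeholder build configuration ID

def cancel_stale_running_builds():
    # List currently running builds of the configuration.
    resp = requests.get(
        f"{TEAMCITY}/app/rest/builds",
        params={"locator": f"buildType:(id:{BUILD_TYPE}),running:true"},
        headers={"Accept": "application/json"},
        auth=AUTH,
    )
    resp.raise_for_status()
    builds = resp.json().get("build", [])
    # Keep the most recent build (highest id), cancel the rest.
    builds.sort(key=lambda b: b["id"], reverse=True)
    for build in builds[1:]:
        requests.post(
            f"{TEAMCITY}/app/rest/builds/id:{build['id']}",
            data="<buildCancelRequest comment='superseded by a newer build' "
                 "readdIntoQueue='false'/>",
            headers={"Content-Type": "application/xml"},
            auth=AUTH,
        ).raise_for_status()

if __name__ == "__main__":
    cancel_stale_running_builds()
```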
No, it's not possible. You can cancel it manually. Or you can add a quiet period (the default is 60 seconds) so a build doesn't start immediately when something is pushed. Then, if more commits arrive a few seconds or minutes later, they will be included in the same TeamCity build.
My solution is similar to Sam's update, but I would use a "preamble" configuration that is triggered by commits to source control. This job is solely responsible for checking whether any of your integration-test jobs are already running and stopping them with a REST API call as needed.
The main integration tests are run from a dedicated build configuration that uses a build-finish trigger associated with the preamble configuration.
This setup makes it quite straightforward to query which jobs are running and may need to be cancelled if there is newer work to do. So the steps become:
Preamble - cancel any running integration test job, triggered by VCS
Integration test - triggered by completion of a "Preamble" build
I'm using System.Runtime.Caching.MemoryCache to simulate a repeated task on a running .NET MVC application deployed on AppHarbor.
Entries in the cache are added using a CacheItemPolicy that contains an AbsoluteExpiration offset and a RemovedCallback that calls a method and re-adds the item to the cache (as described here).
The MemoryCache is populated for the first time in Application_Start. It works fine locally, but doesn't seem to work once deployed on AppHarbor (I also tried HttpRuntime.Cache, with the same result).
My application is running under a CANOE (free) account on AppHarbor that only has one worker. Does this mean that I won't be able to simulate the background task until I upgrade to some paid plan?
Thanks!
Your application has to have visitors every once in a while for this to work. Other than StillAlive, Pingdom is also a good bet for generating requests to your app. You should also take a look at MomentApp. We expect to have background tasks ready shortly.
I don't think upgrading will help; they are working on adding background jobs to AppHarbor, but to my knowledge they aren't available yet.
What about using a service like https://stillalive.com/ to periodically hit a page on your site that then spins up a new thread and starts running your background task? It's available as a free add-on.
I was thinking of doing something like this while waiting for the background task functionality to be available.
I am wondering if there is a way to monitor these automatically. Right now, in our production/QA/dev environments we have a bunch of services running that are critical to the application. We also have automated ETLs running in Windows Task Scheduler at a set time each day. Currently, I have to log into each server and check whether all the services are running fine, check the event logs for errors, check Task Scheduler to see whether the ETLs ran, and so on, and I have to do all of this manually. I am wondering if there is a tool out there that will do the monitoring for me and send emails only when something needs attention (ETLs fail to run, a service stops for whatever reason, errors appear in the event log, etc.). Thanks for the help.
Paessler PRTG Network Monitor can do all of that. We have had very good experience with it.
http://www.paessler.com/prtg/features
Nagios is the best tool for monitoring. It checks the status of the server as well as the services defined on it, and if any service or the system goes down, it sends an email to the specified address.
See: http://nagios.org/
Thanks for the above information. I looked at those options, but they come with a price, so here is an inexpensive way I addressed my concerns.
For my Windows Task Scheduler jobs that run every night, I installed this tool/service from CodePlex, which is working great.
http://motash.codeplex.com/documentation#CommentsAnchor
For Windows services, I just configure the "Recovery" tab in each service's properties with the actions to take when it fails (such as restarting the service, rebooting the machine, or running a program, which could send a notification email).
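For the "run a program" action, any small script that sends a mail will do. Here is a minimal sketch of what such a program could look like; the SMTP host and addresses are placeholders for your own mail setup:

```python
# Minimal notification program that the service Recovery tab can launch via
# "Run a program" when a service fails. SMTP host and addresses are placeholders.
import smtplib
import sys
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"      # placeholder mail relay
FROM_ADDR = "monitor@example.com"   # placeholder sender
TO_ADDR = "ops@example.com"         # placeholder recipient

def send_failure_mail(service_name):
    msg = EmailMessage()
    msg["Subject"] = f"Windows service failed: {service_name}"
    msg["From"] = FROM_ADDR
    msg["To"] = TO_ADDR
    msg.set_content(f"The service '{service_name}' reported a failure; "
                    "check the server's event log for details.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    # Pass the service name as a command-line argument from the Recovery tab.
    send_failure_mail(sys.argv[1] if len(sys.argv) > 1 else "unknown service")
```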
I built a simple tool (https://cronitor.io) for monitoring periodic/scheduled tasks. The name is a play on "cron" from the Unix world, but it is system- and task-agnostic. All you have to do is make an HTTP request to a unique tracking URL whenever your job runs. If your job doesn't check in according to the rules you define, it will send you an email/SMS message.
It also lets you track the duration of your jobs by making calls at the beginning and end of your task. This can be really useful for long-running jobs, since you can be alerted if they start taking too long to run. For example, I once had a backup task that was scheduled every hour; about six months after I set it up, it started taking longer than an hour to run!
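The check-in pattern around a scheduled job ends up looking roughly like this (the URLs below are placeholders, not real tracking URLs; the service only needs to receive a plain HTTP request):

```python
# Sketch of the begin/end "check-in" pattern around a scheduled job.
# The tracking URLs are placeholders for whatever your monitoring service issues.
import requests

RUN_URL = "https://example-monitor.invalid/my-job/run"            # placeholder
COMPLETE_URL = "https://example-monitor.invalid/my-job/complete"  # placeholder
FAIL_URL = "https://example-monitor.invalid/my-job/fail"          # placeholder

def run_backup():
    """Stand-in for the real scheduled task (e.g. the hourly backup)."""
    print("backing up...")

if __name__ == "__main__":
    requests.get(RUN_URL, timeout=10)           # check in: job started
    try:
        run_backup()
    except Exception:
        requests.get(FAIL_URL, timeout=10)      # check in: job failed
        raise
    else:
        requests.get(COMPLETE_URL, timeout=10)  # check in: job finished
```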
There is https://eyewitness.io, which monitors server cron tasks, queues, and websites. It makes sure each of your cron jobs runs when it is supposed to, and alerts you if it fails to run.