Spring Cloud Data Flow Task execution monitoring - spring-cloud-task

We have been looking into Spring Cloud Task. It looks very promising, but we are unsure how monitoring is supposed to work, especially for tasks that are executed from a stream.
For tasks executed manually from the dashboard there is an executions tab, but there does not seem to be a page where you can find an overview of the tasks executed from within a stream.
What is the way to monitor the executions, exit codes, progress and other things for such tasks?

The tasks that are executed from your stream will create a TaskExecution entry in the TASK_EXECUTION table, just like tasks that are executed from the dashboard. So in that case the executions tab will fill this need.
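If you want to pull the same information programmatically rather than through the dashboard, the Data Flow server also exposes task executions over its REST API. Here is a minimal Java sketch, assuming a Data Flow server on the default local port 9393 and the `/tasks/executions` endpoint (adjust the URL for your deployment):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaskExecutionCheck {
    public static void main(String[] args) throws Exception {
        // Assumes a local Data Flow server; adjust host/port for your environment.
        String dataFlowUrl = "http://localhost:9393/tasks/executions";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(dataFlowUrl))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The response lists the recorded TaskExecution entries, including
        // exit codes and start/end times, whether the task was launched
        // from the dashboard or from a stream.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```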

Related

How to know if SonarQube has done post-analysis via API

I run a SonarQube scan on a Java Maven project with my Sonar server set via parameters.
But when the command line says it's done, the results do not appear immediately in the SonarQube web UI or via the API. I need to wait 1 to 3 minutes before the number of vulnerabilities increases.
I think there is some sort of post-analysis step performed once the report is pushed to the server.
How can I know when this processing is done?
I ask because I have another service that queries the SonarQube API immediately once the analysis is done on the machine, but it always returns 0 vulnerabilities since the post-analysis processing does not seem to be finished yet.
When the scanning work is done, the report link is provided, but right after the scan completes SonarQube initiates a "background task" on the server that does some processing that is somewhat opaque. Once that background task is complete, your statistics should be up to date.
Depending on how you run the scan, this "wait" is automatically managed for you. For instance, if you use Jenkins pipelines, you should be using the "withSonarQubeEnv" and "waitForQualityGate" pipeline steps. The latter goes into a wait loop, checking for the background task to be complete. It is also possible to use the SonarQube Web API (REST service) to get the status of the background task. I would provide more info, but you haven't provided much detail about the environment your scan is running in.
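For example, the scanner writes a report-task.txt file (typically under .scannerwork/) containing a ceTaskUrl that points at the background task. A rough Java sketch of polling that URL until the task finishes; the server address and task id below are placeholders, and your server may additionally require an authentication token:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WaitForBackgroundTask {
    public static void main(String[] args) throws Exception {
        // Placeholder: normally read from the scanner's report-task.txt (ceTaskUrl).
        String ceTaskUrl = "https://sonar.example.com/api/ce/task?id=AXz123abc";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ceTaskUrl)).GET().build();

        // Poll until the background task leaves the PENDING/IN_PROGRESS states.
        while (true) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            if (body.contains("\"status\":\"SUCCESS\"")) {
                System.out.println("Background task finished; metrics should now be up to date.");
                break;
            }
            if (body.contains("\"status\":\"FAILED\"") || body.contains("\"status\":\"CANCELED\"")) {
                throw new IllegalStateException("Background task did not complete: " + body);
            }
            Thread.sleep(5_000); // wait a bit before polling again
        }
    }
}
```

Only once this loop sees SUCCESS should the other service query the vulnerability counts, since that is the point at which the server-side processing is done.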

Streaming metrics for Flink application

I have set up the Flink UI for an application running in IntelliJ IDEA. I would like to get some streaming metrics such as scheduling delay and processing time. However, I cannot find them anywhere in the UI. Is there some specific setup required for that, or should I explicitly submit the application jar?
Currently, the Flink UI for the job looks like this:
All of the task metrics are exposed in the web UI, as Dominik mentioned, but for other metric scopes (e.g., job metrics) only some selected metrics are displayed. You can access all of the metrics via the REST API or by connecting a metrics reporter to send the metrics to an external metrics system.
I don't think any attempt has been made to measure scheduling delay, but among the job metrics you will find things like restarting time and uptime.
In the UI there should be a Task Metrics tab when you select the currently running job. This tab lets you choose a task and see all the metrics that are available, although I am not sure whether scheduling delay is one of the currently available metrics.
Probably the better idea is to expose the metrics to a collector of your choice. You can find more info in the documentation: https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html.
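As a concrete illustration of the REST API route, here is a small Java sketch that pulls a couple of job-scope metrics (uptime, restartingTime) for a running job; the host, port, and job id are placeholders assuming a default local setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FlinkJobMetrics {
    public static void main(String[] args) throws Exception {
        // Placeholders: default local REST port and a made-up job id.
        String jobId = "d34db33fd34db33fd34db33fd34db33f";
        String url = "http://localhost:8081/jobs/" + jobId
                + "/metrics?get=uptime,restartingTime";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Returns a JSON array such as [{"id":"uptime","value":"12345"}, ...]
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```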

sonarqube: why are background tasks only viewable by admins?

We are testing an upgrade to SonarQube 5.3 and are having some of the issues identified in other StackOverflow posts, such as the Cobertura interactions. The problem I have is that I am not an administrator for the test server, so I can't look at the background tasks to find out why they are failing. Is there a way that the background task results can be printed in the Sonar analysis logs? And is there a way to get the analysis in the build not to generate an exception when the background task fails, so that it just prints an error? Can the permission to view background tasks be granted to anyone other than administrators?
The presumption is that non-admins don't care about background tasks except whether the most recent one has succeeded/failed.
Unfortunately, you're not going to be able to see server-side processing errors on the client side. The scanner compiles an analysis report and submits that to the server for processing, where it's queued and handled asynchronously.
What might work for you is requesting admin permissions, not on the SonarQube instance, but only on the project you're trying to analyze. That would give you access to the list of that project's background tasks (as well as the ability to administer the project more generally).

Monitor server, process, services, Task scheduler status

I am wondering if there is a way to monitor these automatically. Right now, in our production/QA/dev environments we have a bunch of services running that are critical to the application. We also have automatic ETLs running on the Windows Task Scheduler at a set time of day. Currently, I have to log into each server and check whether all the services are running fine, check the event logs for any errors, check Task Scheduler to see whether the ETLs ran well, and so on. I have to do all of this manually. I am wondering if there is a tool out there that will do the monitoring for me and send emails only when something needs attention (like an ETL failing to run, a service stopping for whatever reason, or errors in the event log). Thanks for the help.
Paessler PRTG Network Monitor can do all that. We have had very good experience with it.
http://www.paessler.com/prtg/features
Nagios is the best tool for monitoring. It checks the server status as well as the services defined on it, and if any service or the system goes down it sends an email to the specified address.
See: http://nagios.org/
Thanks for the above information. I looked at the above options, but they come at a price. What I did instead is an inexpensive way to address my concerns.
For my Windows Task Scheduler jobs that run every night, I installed this tool/service from CodePlex, which is working great.
http://motash.codeplex.com/documentation#CommentsAnchor
For Windows services, I am just configuring the "Recovery" tab in each service's properties with actions to take when it fails (like restart, reboot, or run a program, which could send a notification email).
I built a simple tool (https://cronitor.io) for monitoring periodic/scheduled tasks. The name is a play on "cron" from the unix world, but it is system/task agnostic. All you have to do is make an HTTP request to a unique tracking URL whenever your job runs. If your job doesn't check in according to the rules you define, it will send you an email/SMS message.
It also allows you to track the duration of your jobs by making calls at the beginning and end of your task. This can be really useful for long running jobs since you can be alerted if they start taking too long to run. For example, I once had a backup task that was scheduled every hour. About six months after I set it up it started taking longer than an hour to run!
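To make the check-in idea concrete, here is a Java sketch of wrapping a job with pings at the start, on success, and on failure. The tracking URL and the state parameter are illustrative placeholders; consult the monitoring service's docs for the exact format it expects:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MonitoredJob {
    // Placeholder tracking URL; use the unique URL your monitor gives you.
    private static final String TRACKING_URL = "https://cronitor.link/p/your-key/nightly-backup";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        ping("run");        // check in at the start so duration can be measured
        try {
            runBackup();    // your actual job
            ping("complete");
        } catch (Exception e) {
            ping("fail");   // report the failure so an alert goes out
            throw e;
        }
    }

    private static void ping(String state) throws Exception {
        // "state" query parameter is assumed here; adjust to the service's API.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(TRACKING_URL + "?state=" + state)).GET().build();
        CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
    }

    private static void runBackup() {
        // ... job logic goes here ...
    }
}
```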
There is https://eyewitness.io - which is for monitoring server cron tasks, queues and websites. It makes sure each of your cron jobs runs when it is supposed to, and alerts you if it fails to run.

Windows Workflow - Is there a way to guarantee only one workflow running?

The workflow is being published as a wcf service, and I need to guarantee that workflows execute sequentially. Is there a way--in code or in the config--to guarantee the runtime doesn't launch two workflows concurrently?
There is no way to configure the runtime to limit the number of workflows in progress.
Consider, though, that it is the responsibility of the workflow itself to control flow. Hence the workflow itself should have a means to determine whether another instance of itself is currently in progress.
I would consider creating an Activity that would transactionally attempt to update a DB record to the effect that an instance of this workflow is in progress. If it finds that another is currently in progress it could take the appropriate action. It could fail or it could queue itself using an EventActivity to be alerted when the previous workflow has completed.
You probably will need to check at workflow start for another running instance.
If found, cancel it.
I don't agree that this needs to be handled at the WorkflowRuntime level. I like the idea of a custom Activity, sort of a MutexActivity that would be a CompositeActivity that has a DB backend. The first execution would log to the database it has a hold of the mutex. Subsequent calls would queue up their workflow IDs and then go idle. When the MutexActivity completes, it would release the Mutex, load up the next workflow in the queue and invoke the contained child activities.
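The acquire step of such a MutexActivity comes down to a single conditional update that only one caller can win. Here is a rough sketch of that check, shown in Java/JDBC purely to illustrate the pattern (in Windows Workflow you would wrap the equivalent .NET logic in a custom Activity; the WORKFLOW_MUTEX table and its columns are made-up names):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class WorkflowMutex {

    /**
     * Transactionally marks the workflow as "in progress" with a single
     * conditional update; only one concurrent caller can win the race.
     * Assumes a WORKFLOW_MUTEX table with (NAME, IN_PROGRESS) columns
     * created up front.
     */
    public static boolean tryAcquire(Connection conn, String workflowName) throws Exception {
        String sql = "UPDATE WORKFLOW_MUTEX SET IN_PROGRESS = 1 " +
                     "WHERE NAME = ? AND IN_PROGRESS = 0";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, workflowName);
            return ps.executeUpdate() == 1; // exactly one row updated = mutex acquired
        }
    }

    /** Releases the mutex when the workflow (or the MutexActivity) completes. */
    public static void release(Connection conn, String workflowName) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE WORKFLOW_MUTEX SET IN_PROGRESS = 0 WHERE NAME = ?")) {
            ps.setString(1, workflowName);
            ps.executeUpdate();
        }
    }
}
```

A caller that fails to acquire the row can either fail fast or park itself (in WF terms, queue up its workflow ID and go idle) until the holder releases the mutex.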
