I am trying to kill a background task that is in progress in my SonarQube instance.
The task is running a scan on a project.
I connected to the SonarQube database and executed this query to retrieve the "in progress" job:
select * from ce_queue where status<>'PENDING';
Then I deleted the record: delete from ce_queue where id=2325;
In-progress tasks cannot be canceled for the moment
I also continue to have this problem with Enterprise Edition Version 7.2.1 (build 14109). It's ridiculous that this problem has persisted for years, and it's a big deal because it takes one of your workers out of the pool.
The only way I have found to kill the task is to delete the entire project associated with it. Restarting the service and rebooting (to catch other processes, etc.) did not help.
Deleting the project kills the task.
You're welcome.
I am using SchedulerLock in Spring Boot, and I am running 2 servers.
What I'm curious about is why the "lockAtMostFor" option exists.
Take an example: on one of my 2 servers, the scheduled task runs first and acquires the lock.
But something goes wrong while it is running, and that server goes down.
At that moment, my scheduled task ends incomplete.
Every guide I read gives vague answers about "lock time in case a node dies".
When a node dies, it can no longer execute scheduled tasks.
But why keep holding a LOCK for a dead node?
Even if I urgently try to run the scheduled task manually on the 2nd server, the lock above makes that impossible.
What do these options exist for?
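For context, here is roughly how the task is wired up. This is a minimal sketch, assuming ShedLock 4.x with Spring's @Scheduled; the task name, cron expression, and durations are illustrative rather than my real values, and a LockProvider bean (e.g. JdbcTemplateLockProvider) is assumed to be configured elsewhere.

import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT10M") // default upper bound on how long a lock may be held
class SchedulingConfig {
}

@Component
class DailyJob {

    // Only one of the two servers should run this at any given time.
    @Scheduled(cron = "0 0 12 * * *")
    @SchedulerLock(name = "dailyJob",
            lockAtMostFor = "PT15M", // upper bound: the lock is released at most 15 minutes after
                                     // it was acquired, even if this node never finishes
            lockAtLeastFor = "PT1M") // hold the lock at least 1 minute so a very fast run on one node
                                     // does not let the other node fire again immediately
    public void run() {
        // ... the actual work ...
    }
}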
I am using Advanced Scheduler to schedule a cron job once a day at 12 PM. But the code executes automatically even though I have disabled the trigger and disconnected my GitHub repository. So far the code has executed three times within an interval of 2 hours.
I don't understand why it is still executing automatically.
Any idea?
Sometimes it is preferred and/or required to host dozens of applications on a single server. I'm not saying this is "right" or "wrong"; I'm only saying that it happens.
A downside to this configuration is that the error message "Waiting for the script in task [TASK ID] to finish as this script requires that no other Octopus scripts are executing on this target at the same time" appears whenever more than one deployment to the same machine is running. It seems like Octopus Deploy is fighting itself.
How can I configure Octopus Deploy to wait for one deployment to completely finish before the next one is started?
Before diving into the answer, it is important to understand why that message appears in the first place. Each time a step is run on a deployment target, the Tentacle creates a mutex to prevent other projects from interfering with it. An early use case for this was updating the IIS metabase during a deployment; in certain cases, concurrent updates would cause random errors.
Option 1: Disable the Mutex
We've seen cases where the mutex is the cause of the delay. The mutex is applied per step, not per deployment. It is common to see a situation where it looks like Octopus is "jumping" between deployments. Depending on the number of concurrent deployments, that can slow down the deployment. The natural thought is to disable the mutex altogether.
It is possible to disable the mutex by adding the variable OctopusBypassDeploymentMutex and setting it to True. That variable can exist in either a specific project or in a variable set.
More details on what that variable does can be found in this document. If you do disable the mutex, please test it and monitor for any failures. For the most part, we don't see issues when the mutex is disabled, but problems have come up from time to time; it depends on a host of other factors such as application type and Windows version.
Option 2: Leverage Deploy a Release Step
Another option is to coordinate the projects using the Deploy a Release step. Typically this works best when the projects being deployed are part of the same application suite. In the example below I have five "deployment" projects:
Azure Worker IaC
Database Worker IaC
Kubernetes Worker IaC
Script Worker IaC
OctoStudy
The project Unleash the Kraken coordinates deployments for those projects.
It does this by using the Deploy a Release step. First it spins up all the infrastructure, then it deploys the application.
This won't work as well if the server is hosting 50 disparate applications.
Option 3: Leverage the API to check for running deployments
The final option is to include a step at the start of each project which hits the API to check for active deployments to the same deployment targets. If an active deployment is found, wait until it is done.
You can do this by hitting the endpoint https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false. That will tell you all the active tasks being run for a specific machine.
You can get Machine Id by pulling the value from Octopus.Deployment.Machines. You can get Space Id by pulling the value from Octopus.Space.Id.
The pseudo code for this approach could look like this (I'm not including the actual code as your requirements could be very different).
activeDeployments = true
while (activeDeployments)
{
    activeDeployments = false
    foreach (machineId in Octopus.Deployment.Machines)
    {
        activeTasks = GET https://[YOUR URL]/api/[Octopus.Space.Id]/machines/[machineId]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[Octopus.Space.Id]&includeSystem=false
        if (activeTasks.Count > 0)
        {
            activeDeployments = true
        }
    }
    if (activeDeployments == true)
    {
        Sleep for 5 seconds
    }
}
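If it helps to see that loop as real code, below is a rough standalone sketch of the same idea, written here in Java purely to make the logic concrete (in practice you would more likely use a PowerShell or Python script step). It assumes the endpoint returns Octopus's usual paged JSON with a TotalResults field, that the API key is supplied via an environment variable, and placeholder values for the server URL, space ID, and machine IDs.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WaitForQuietTarget {
    public static void main(String[] args) throws Exception {
        String serverUrl = "https://your-octopus-url";      // placeholder for [YOUR URL]
        String spaceId = "Spaces-1";                         // placeholder for Octopus.Space.Id
        String apiKey = System.getenv("OCTOPUS_API_KEY");    // assumption: API key passed in via environment
        List<String> machineIds = List.of("Machines-123");   // parsed from Octopus.Deployment.Machines

        HttpClient client = HttpClient.newHttpClient();
        Pattern totalResults = Pattern.compile("\"TotalResults\"\\s*:\\s*(\\d+)");

        boolean activeDeployments = true;
        while (activeDeployments) {
            activeDeployments = false;
            for (String machineId : machineIds) {
                String url = serverUrl + "/api/" + spaceId + "/machines/" + machineId
                        + "/tasks?skip=0&name=Deploy&states=Executing%2CCancelling"
                        + "&spaces=" + spaceId + "&includeSystem=false";
                HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                        .header("X-Octopus-ApiKey", apiKey)
                        .GET()
                        .build();
                String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                Matcher m = totalResults.matcher(body);
                // Any executing or cancelling Deploy task against this machine means we should wait.
                if (m.find() && Integer.parseInt(m.group(1)) > 0) {
                    activeDeployments = true;
                }
            }
            if (activeDeployments) {
                Thread.sleep(5000); // poll again in 5 seconds
            }
        }
    }
}

Note that a real step script would probably also want to exclude the current deployment's own task (Octopus.Task.Id) from the count, otherwise the check can end up waiting on itself.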
This message also hit me because I had reached the task cap on the Octopus Server.
In Octopus\Configuration\Nodes, change the task cap to 1 to allow only one deployment at a time, even with agents on different servers; the message will then display constantly.
Or simply increase this value to prevent the message from occurring at all.
I have scheduled a Windows Task Scheduler job to upload different text files to an Oracle database. To date it has been working fine, but I have a concern: what if an upload is disrupted for one of the following reasons?
1) Database Machine Shutdown
2) Internet Connectivity Issues
3) Miscellaneous
How will I find out about the failure of the scheduled task?
Windows itself (in the event log; a filtered view for each scheduled task is available in the Task Scheduler application) will record if the task failed.
For anything more you'll need to add diagnostic logging to your task's implementation.
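If the upload itself is code you control, a thin wrapper that logs the outcome and returns a proper exit code is usually enough; Task Scheduler records the exit code as the Last Run Result. Below is a minimal sketch in Java (the log path and method names are made up; the same idea works in whatever language your upload job is written in).

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.LocalDateTime;

public class UploadJobWrapper {
    private static final Path LOG = Path.of("C:\\jobs\\upload-job.log"); // made-up log location

    public static void main(String[] args) {
        try {
            runUpload();                       // whatever actually pushes the text files to Oracle
            log("SUCCESS: upload finished");
            System.exit(0);                    // shows up in Task Scheduler as Last Run Result 0x0
        } catch (Exception e) {
            log("FAILURE: " + e);              // database shutdown, connectivity issues, etc.
            System.exit(1);                    // non-zero exit code shows up as a failed run
        }
    }

    private static void runUpload() throws Exception {
        // placeholder for the existing upload logic
    }

    private static void log(String message) {
        try {
            Files.writeString(LOG, LocalDateTime.now() + " " + message + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException ignored) {
            // logging problems should never mask the real outcome of the job
        }
    }
}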
In a workflow, there are sessions connected in parallel and in sequence. Suppose some of the sessions, both parallel and sequential, fail. How do I restart the workflow with only the failed sessions? How can I design this in Informatica?
Turn on 'suspend on error' for the workflow.
Turn on 'restart on recovery' for each session in the workflow.
Now if any session fails, the workflow will be suspended until you fix the problem and hit Recover on the workflow in the Monitor. When you do so, only the failed sessions are restarted.
A large publishing client asked us to implement something similar to what you asked. We created a database table to keep track of successful sessions within a workflow. Each session has a mapping at the end that adds an entry to the database saying whether it passed or failed. When we run in recovery mode, we query the database at the beginning of each session to find out whether we need to run that session or not.
We also provided a web interface to this table where business users can manually choose which sessions to run or skip based on their needs.
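To make that bookkeeping concrete, the contract of the table boils down to two statements per session: ask "did this session already pass?" before running it, and record the outcome afterwards. Below is a rough sketch in plain JDBC purely to illustrate the logic; in our case it lived inside Informatica mappings and pre-session queries, and the table and column names here are made up.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SessionRunTracker {
    // Illustrative table:
    //   CREATE TABLE wf_session_status (
    //       workflow_run_id VARCHAR(50),
    //       session_name    VARCHAR(100),
    //       status          VARCHAR(10));

    private final Connection conn;

    public SessionRunTracker(Connection conn) {
        this.conn = conn;
    }

    // Checked at the start of a session in recovery mode: skip the session if it already passed.
    public boolean alreadyPassed(String workflowRunId, String sessionName) throws Exception {
        String sql = "SELECT COUNT(*) FROM wf_session_status "
                   + "WHERE workflow_run_id = ? AND session_name = ? AND status = 'PASSED'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, workflowRunId);
            ps.setString(2, sessionName);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1) > 0;
            }
        }
    }

    // Written by the mapping at the end of each session.
    public void recordResult(String workflowRunId, String sessionName, boolean passed) throws Exception {
        String sql = "INSERT INTO wf_session_status (workflow_run_id, session_name, status) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, workflowRunId);
            ps.setString(2, sessionName);
            ps.setString(3, passed ? "PASSED" : "FAILED");
            ps.executeUpdate();
        }
    }
}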
The recovery option will work only if you have "workflow recovery" turned on in the repository. If you don't, you can check the "fail workflow if task fails" option at the individual session level and create conditions on the links that connect the sessions to each other. The disadvantage of this method is that your workflow will appear failed and won't execute the next sessions until the failed ones are fixed.
thanks.