Start wercker job hourly

I've just started using wercker and I'd like a job to run regularly (e.g. daily, hourly). I realize this may be an anti-pattern, but is it possible? My intent is not to keep the container running indefinitely, just that my workflow is executed on a particular interval.

You can use a call to the Wercker API to trigger a build for any project that is already set up in Wercker.
So maybe set up a cron job somewhere that uses curl to make the right API call?
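For example, a minimal crontab sketch. The endpoint, payload, and token below are placeholders, so check the Wercker API documentation for the exact trigger call for your application:

# Token made available to the command's environment by cron (placeholder value).
WERCKER_TOKEN=your-api-token
# Run at minute 0 of every hour (crontab entries must stay on one line).
0 * * * * curl -s -X POST -H "Authorization: Bearer $WERCKER_TOKEN" -H "Content-Type: application/json" -d '{"pipelineId":"<pipeline-id>"}' https://app.wercker.com/api/v3/runs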

Related

Laravel: How to detect if code is being executed from within a queued job, as opposed to manually run from the CLI

I found this similar question: How to check If the current app process is running within a queue environment in Laravel.
But actually this is the opposite of what I want. I want to be able to distinguish between code executed manually from an artisan command launched on the CLI, and a job run as the result of a POST trigger via a controller or of a scheduled run.
Basically, I want to distinguish between a job being run via the sync driver, manually triggered by the developer with eyes on the CLI output, and everything else.
app()->runningInConsole() returns true in both cases, so it is not useful to me.
Is there another way to detect this? For example, is there a way to detect the currently used queue connection? Keep in mind that it's possible to change the queue connection at runtime, so just checking the value in the .env file is not enough.

Auto-trigger job B after triggering job A in TeamCity

Is there a way that I can auto-trigger job B exactly one hour after triggering job A? The issue is that job A will not have finished its work by then; in the middle of its run it has to trigger job B, exactly one hour in. The other option would be to skip to build script 2 exactly one hour after the start of script 1. Is there any way to do this?
Thanks in advance
I cannot offer a good practice as a solution, but I can suggest two possible workarounds:
1. Build Pause
You can add a 'Command Line' shell pause as the last build step of project A or the first build step of project B. That pause must be set to one hour:
sleep 1h
You need to reconfigure the default build timeout for this, or the job will fail.
2. Strict Scheduling
If you have some flexibility about when A can or should be triggered, you can use the 'Schedule Trigger' to schedule both A and B; e.g. if you schedule project A at 1 pm and project B at 2 pm, you make sure that there is at least one hour between the two. This can be scheduled as often as necessary.
I don't think what you are proposing is a good way to go about setting up the deployment, but I can think of a few workarounds that might help if you are forced in this direction.
In configuration A, add a build step which adds a scheduled build trigger to configuration B for an hour's time (using the API). In configuration B, add a build step at the end of the configuration to remove this scheduled trigger. This feels like a really horrible hack which should be avoided, but there are more details here.
Outside of TeamCity make use of any pub/sub mechanism so the deployment to the VM can create an event when it has completed. Subscribe to this event and trigger a follow on build using the TeamCity API. For example, if you are using AWS you could have an SNS topic with a lambda function as a subscriber. This lambda function would call the API to queue configuration B when the environment is in a suitable state.
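For the "queue a follow-on build" part, a sketch of the REST call; the server URL, the credentials, and the ConfigB build configuration ID are placeholders:

# Queue a build of configuration B through the TeamCity REST API.
curl -u user:password \
    -X POST \
    -H "Content-Type: application/xml" \
    -d '<build><buildType id="ConfigB"/></build>' \
    https://teamcity.example.com/app/rest/buildQueue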
There are probably much nicer solutions if you share what deployment software you are using.

Can I specify a timeout for a GCP ai-platform training job?

I recently submitted a training job with a command that looked like:
gcloud ai-platform jobs submit training foo --region us-west2 --master-image-uri us.gcr.io/bar:latest -- baz qux
(more on how this command works here: https://cloud.google.com/ml-engine/docs/training-jobs)
There was a bug in my code which caused the job to just keep running rather than terminate. Two weeks and $61 later, I discovered my error and cancelled the job. I want to make sure I don't make that kind of mistake again.
I'm considering using the timeout command within the training container to kill the process if it takes too long (typical runtime is about 2 or 3 hours), but rather than trust the container to kill itself, I would prefer to configure GCP to kill it externally.
Is there a way to achieve this?
As a workaround, you could write a small script that runs your command, sleeps for however long you want to allow, and then runs a cancel-job command.
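A minimal sketch of that workaround, reusing the job from the question; the four-hour budget is an illustrative value:

#!/bin/bash
# Submit the training job, wait for a generous upper bound on the
# expected 2-3 hour runtime, then cancel it unconditionally.
JOB=foo
gcloud ai-platform jobs submit training "$JOB" \
    --region us-west2 \
    --master-image-uri us.gcr.io/bar:latest \
    -- baz qux
sleep 4h
# Cancelling may return an error if the job has already finished,
# which is harmless here.
gcloud ai-platform jobs cancel "$JOB"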
As a timeout definition is not available in the AI Platform training service, I took the liberty of opening a Public Issue with a Feature Request to record the lack of this option. You can track the PI's progress here.
Besides the script mentioned above, you can also try:
a TimeOut Keras callback, or Optuna's timeout= parameter (depending on which library you actually use)
a cron-triggered Lambda (Cloud Function), as sketched below
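For the cron-triggered option on GCP, a minimal sketch using Cloud Scheduler; the job name, the schedule, and the Cloud Function URL (and the function itself, which would look up and cancel over-running jobs) are all hypothetical:

# Hypothetical: every hour, call a Cloud Function that cancels
# training jobs exceeding their time budget.
gcloud scheduler jobs create http cancel-stale-training \
    --schedule="0 * * * *" \
    --uri="https://REGION-PROJECT.cloudfunctions.net/cancel-stale-jobs"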

gcloud cron jobs and laravel

I am trying to execute an API in Laravel every minute.
The API's method is GET; however, I could not specify the method in the cron.yaml file. Could I use the DELETE method here, and how? The code should be deployed on Google Cloud.
I have created a cron.yaml file that has the following format:
cron:
- description: "every minutes job"
  url: /deletestories
  schedule: every 1 mins
  retry_parameters:
    min_backoff_seconds: 2.5
    max_doublings: 5
I also created the deletestories API endpoint, which deletes rows under specific conditions.
However, this isn't working: when I open the Google Cloud console I cannot find any error or any executed cron job.
This cron.yaml file appears to be a Google App Engine cron configuration. If this is correct, then only the GET method is supported; you cannot use DELETE.
The GAE cron service itself consists simply of scheduled GET requests that your app needs to handle. From Scheduling Tasks With Cron for Python (the same applies to other languages and to the flexible environment cron as well):
A cron job makes an HTTP GET request to a URL as scheduled. The handler for that URL executes the logic when it is called.
You also need to deploy your cron.yaml file for it to be effective. You should be able to see the deployed cron configuration in the developer console's Cron Jobs tab under the Task Queues Menu (where you can also manually trigger any of the cron jobs). The performed GET requests for the respective cron jobs should appear in your app's request logs as well, when executed.
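A minimal sketch of the deploy step and a manual check; replace your-project with your App Engine project ID (real cron requests also carry the X-Appengine-Cron: true header, which your handler can check to reject outside callers):

# Deploy the cron configuration (separate from deploying the app itself).
gcloud app deploy cron.yaml

# Exercise the handler by hand; note that it must accept a plain GET.
curl "https://your-project.appspot.com/deletestories"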

Running CakePHP Shells on Background

Is it possible for CakePHP to execute a shell task in the background, e.g. for running long reports? I would also want to report the current status back to the user by updating a table during the report generation and querying it using Ajax.
Yes, you can run shells in the background via normal system calls like
/path/to/cake/console/cake -app /path/to/app/ <shell> <task>
The tricky part is to start one asynchronously from PHP; the best option would be to put jobs in a queue and run the shell as a cron job every so often, which then processes the queue. You can then also update the status of the job in the queue and poll that information via AJAX.
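For example, the command string handed to PHP's exec() can detach the shell like this (a sketch reusing the placeholder paths from above):

# Run the shell detached from the web request so exec() returns
# immediately; output goes to a log file instead of the terminal.
nohup /path/to/cake/console/cake -app /path/to/app/ <shell> <task> > /tmp/report.log 2>&1 &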
Consider implementing it as a daemon: http://pear.php.net/package/System_Daemon
