I have a Nuxt project where I scrape data on the server and send it back to the client on request.
The app is SSG, so the scraping happens at build time. The data changes once every 12 hours.
I deployed it on Vercel and it's working correctly, but I don't know how to set up an automation
that triggers Vercel deploy hooks to redeploy the app with the new data.
I'd prefer to do it with GitHub Actions, if possible, so that the whole project is in one place.
You can use crontab.guru to work out the cron expression for a job that runs every 12 hours: 0 */12 * * *.
Then you can schedule it in your GitHub Actions workflow like the following:
on:
  # Triggers the workflow every 12 hours
  schedule:
    - cron: "0 */12 * * *"
Check this dev article or the official documentation.
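Putting it together, here is a minimal workflow sketch that hits a Vercel deploy hook on that schedule. The secret name VERCEL_DEPLOY_HOOK_URL is my own choice, not something Vercel mandates: create a deploy hook in your Vercel project settings and store its URL as a GitHub Actions repository secret.

```yaml
name: scheduled-redeploy

on:
  schedule:
    - cron: "0 */12 * * *"   # every 12 hours (evaluated in UTC)
  workflow_dispatch:          # also allow manual runs for testing

jobs:
  redeploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Vercel deploy hook
        # POSTing to the hook URL makes Vercel rebuild the SSG site,
        # re-running the scrape at build time with fresh data.
        run: curl -fsS -X POST "$DEPLOY_HOOK_URL"
        env:
          DEPLOY_HOOK_URL: ${{ secrets.VERCEL_DEPLOY_HOOK_URL }}
```

Note that GitHub evaluates schedules in UTC and scheduled runs can be delayed by a few minutes under load.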
Related
I have a cloudbuild.yaml file in my code. I have configured it so that for each pull request, a Cloud Run URL is hosted with the PR_NUMBER from GitHub as a prefix, like
https://PR_NUMBER---app-abcd123-uc.a.run.app
Once the URL is tested, the code is merged into dev.
Problem: after a pull request, a build trigger is initiated in Cloud Build, and it finished in 7 minutes. But in GitHub, the check shows as queued and still running after more than 12 hours. It's costing money in GitHub while it runs. I am trying to cancel it, but I can't see any option in GitHub to do so.
Any idea what could be the reason the build stays queued in GitHub even though it finished inside Google Cloud Build?
I have some code in my app that purges our cache (using Cloudflare's API) every time it starts up, so that whenever a change to the website is deployed, it shows up instantly for everyone instead of the old version remaining in Cloudflare's cache indefinitely.
Heroku restarts my dyno every 24 hours. This purges Cloudflare for no reason, causes a large spike in traffic, and messes with analytics.
Is there a way to detect on startup if this app restart is occurring due to an actual Heroku deploy, or just due to their daily restart?
One way I've considered is using GitHub's public API to check on startup if a commit has been pushed to master in the last hour, but that seems like a hack and there is probably a better way.
This is a classic use case for a release phase task (bold mine):
Release phase enables you to run certain tasks before a new release of your app is deployed. Release phase can be useful for tasks such as:
Sending CSS, JS, and other assets from your app’s slug to a CDN or S3 bucket
Priming or invalidating cache stores
Running database schema migrations
If a release phase task fails, the new release is not deployed, leaving your current release unaffected.
Move your "clear cache" logic to a separate script and add it to your Procfile, e.g.:
web: python some_main_command.py
release: python clear_cache.py
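As a sketch of what that release-phase script could contain (not the poster's actual code), clear_cache.py could call Cloudflare's purge-everything endpoint. The config-var names CF_ZONE_ID and CF_API_TOKEN are assumptions:

```python
# clear_cache.py - sketch of a release-phase script that purges Cloudflare.
# CF_ZONE_ID / CF_API_TOKEN are hypothetical config-var names; set them
# with `heroku config:set`. Uses Cloudflare's v4 purge_cache endpoint.
import json
import urllib.request


def build_purge_request(zone_id: str, api_token: str) -> urllib.request.Request:
    """Build a purge-everything request for the given Cloudflare zone."""
    url = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache"
    body = json.dumps({"purge_everything": True}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )


# In the actual script you would then send it, e.g.:
#   import os
#   urllib.request.urlopen(
#       build_purge_request(os.environ["CF_ZONE_ID"], os.environ["CF_API_TOKEN"])
#   )
```

Because the release phase runs before the new release goes live, a failed purge here blocks the deploy, which is usually what you want for cache consistency.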
I gave my app access (via environment variables) to the Heroku API for itself. This allows it to query the API and ask when its own most recent release was. If the most recent release was more than 24 hours ago, we don't purge Cloudflare. Code is here: https://github.com/ImpactDevelopment/ImpactServer/commit/db1cced1ed298b933cee87457eaa844f60974f60#diff-12a774f9437b88d4b4ebbd4e2ab726abR25
This detects anything that causes the app to be rereleased, including code changes, env variable changes, add-on changes, etc.
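The age check itself reduces to a small predicate. Here is a hedged sketch; the Heroku Platform API call that fetches the newest release's created_at (from GET /apps/{app}/releases) is left out:

```python
# Sketch: decide whether a startup looks like a fresh deploy.
# `latest_release_at` is assumed to come from the Heroku Platform API
# (the created_at field of the newest entry in /apps/{app}/releases).
from datetime import datetime, timedelta, timezone
from typing import Optional


def should_purge(latest_release_at: datetime, now: Optional[datetime] = None) -> bool:
    """Purge only when the newest release is under 24 hours old, i.e. this
    restart is likely a deploy rather than Heroku's daily dyno cycling."""
    now = now or datetime.now(timezone.utc)
    return now - latest_release_at < timedelta(hours=24)
```

Keeping the predicate separate from the API call makes the cutoff easy to test and tune.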
I have two repositories residing in Bitbucket: Backend (a Laravel app as the API and entry point) and Frontend (the main application front end, a Vue.js app). My goal is to set up continuous deployment so that whenever something is pushed to master (or another branch selected by me) in either repo, it triggers something so that the whole app builds and reaches the AWS EC2 server.
I have considered/tried the following:
AWS CodePipeline and/or CodeDeploy. This looked like a great option since the servers are in AWS as well. However, there is no support for Bitbucket out of the box, so it would have to go Bitbucket Pipelines -> AWS Lambda -> AWS S3 -> AWS CodePipeline/CodeDeploy -> AWS EC2. This seems like a very lengthy journey, and I am not sure it's good practice at all.
Using Laravel Forge to deploy the Laravel app, with additional steps to build the Vue.js app. This seemed like a very simple solution; however, the build process fails there: it just takes a long time and crashes with no errors (whereas I can run the exact same process on my local machine or on a different server hosted elsewhere). I am not sure whether this is an issue with the way the server is provisioned, the way Forge runs the deployment script, or whether the server is too weak to handle it.
My main question is: what are the best practices for deploying an app with such components? I have read many tutorials/articles about deploying a Node.js app or a Laravel app, but haven't found good information about a scenario like this.
Would it be better to build the front-end app locally and version-control the built JS file? Or should I create a Pipeline in Bitbucket that builds the app and then deploys it? Or is it best to version-control and deploy just the source files and leave the whole build as the last step of the deployment process, done by the server hosting the app itself? There are also some articles suggesting hosting the whole front-end app in an S3 bucket - would that be bad practice as well?
Appreciate any help and resources that would help!
It sounds like you have two types of deployments you might want to run.
Laravel API: if you're using Laravel Forge already, then this is a great way to deploy your Laravel app; it takes care of most of the process and makes server management easy.
Vue.js app: there are a few things you can do here. I personally prefer a provider like Vercel or Netlify, which let you deploy your static sites/front ends for free or at low cost. You can write custom build steps, but they have great presets that should work out of the box.
If you really want to keep everything on AWS, then look into how to host static sites on AWS.
We have an issue when we try to deploy a WebJob to a Web App via Visual Studio.
If we try to set a 10-minute interval, it returns a 409 response:
http://grab.by/Rirs
If we try a one-hour interval, it succeeds.
We have the Standard App Service plan, so it should be supported:
http://grab.by/Rirw
Always On is activated for the web app.
We have also tried another approach that is described here.
Publishing succeeds, but if we look at the WebJob in VS Cloud Explorer it has a Failed status.
Any ideas how we can solve this?
After some research, I found the issue, though it was actually in the old Azure portal.
It was necessary to set the Standard tier for the job collection that was created for the jobs under the web apps.
The new portal doesn't offer a simple way to get to that setting (or I didn't find it as easily as I did in the old portal). Also, it's possible to configure the job schedule in the old portal, while it doesn't seem possible in the new one... or, again, I didn't find it.
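For what it's worth, a triggered WebJob can also carry its own schedule in a settings.job file deployed next to the WebJob's executable, which sidesteps the portal's job-collection UI entirely (Always On must stay enabled, which you already have). The WebJobs CRON format has six fields with seconds first, so a sketch for a 10-minute interval would be:

```json
{
  "schedule": "0 */10 * * * *"
}
```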
So I have this unique issue.
I have a Laravel app that is like Alexa, and it uses several APIs, including DMOZ, Google, and JSON, to fetch data about domains.
It has a feature where the admin can bulk-upload websites and start the cron, and it keeps updating the websites by itself.
Now, after reaching approximately 1000 websites, the app simply stopped.
I have to run chown -R user /path/to/directory again to get it working.
However, after doing this, my cron stopped working.
I flushed the cronjob and cron manager tables in the database, deleted the cron.lock file, then resubmitted the websites and started the cron.
Now the cron seems to be working, because rows started to appear in the cron manager and cronjob tables, and this is also confirmed by the cron output log; however, the results are not appearing on the website.
Following are my Laravel logs:
http://pastebin.com/iYuFmD4p
Any ideas?