Can a server use two DRONE_RPC_HOST values?

Hello, fellow developers!
I use Drone as my team's CI/CD tool.
My server is running two Drone containers: one for Gogs, the other for GitHub.
I now need to get my pipelines working in exec runner mode, but I wonder how I can connect to two Git servers, namely Gogs and GitHub.
The exec runner installation tutorial shows a config file that can only hold one DRONE_RPC_HOST, but I have two hosts:
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.company.com
DRONE_RPC_SECRET=super-duper-secret
How can I solve this problem? I would be glad to get your answers.
Thank you for reading.
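For what it's worth, each exec runner process appears to read exactly one DRONE_RPC_HOST, so the usual pattern seems to be one runner instance per Drone server rather than one runner with two hosts. A minimal sketch, with hypothetical hostnames, secrets, and config paths:

# Runner instance 1: talks to the Drone server paired with Gogs
cat > /etc/drone-runner-exec/config-gogs <<'EOF'
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone-gogs.company.com
DRONE_RPC_SECRET=gogs-super-duper-secret
EOF
# Runner instance 2: talks to the Drone server paired with GitHub
cat > /etc/drone-runner-exec/config-github <<'EOF'
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone-github.company.com
DRONE_RPC_SECRET=github-super-duper-secret
EOF
# Each runner process is then started against its own config file,
# e.g. as two separate system services.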

Related

Using both Apache Airflow and GitHub Actions for dbt

I'm trying to get my head around whether replacing our current CI/CD tool with GitHub Actions would be a pro or a con.
My team currently uses GitHub and TeamCity for CI/CD. I find this limiting, as a separate team configures the different stages. We use Airflow to schedule our dbt production runs, so we don't use dbt Cloud.
I'm wondering whether there is any benefit to replacing TeamCity with GitHub Actions, since we could then write the configuration of the stages (compile, run, test, lint) ourselves. I'm interested in implementing Slim CI, which I don't believe is possible with TeamCity.
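For context, Slim CI in dbt boils down to building only the models that changed relative to production, using dbt's state selectors. A minimal sketch of the commands a GitHub Actions job might run, assuming dbt 1.x and that an earlier step has downloaded the production manifest.json into a hypothetical ./prod-artifacts directory:

# Install project dependencies (dbt packages)
dbt deps
# Build only models modified relative to the production state, plus their
# downstream dependents; --defer resolves unchanged upstream refs against
# the production manifest instead of rebuilding them.
dbt build --select state:modified+ --defer --state ./prod-artifacts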
Would anyone be able to help with the pros and cons of using GitHub Actions alongside Airflow? Is it possible? I'm struggling to find any documentation on using GitHub Actions whilst running dbt production on Airflow.
Right now, my source of information comes from this: https://towardsdatascience.com/how-to-deploy-dbt-to-production-using-github-action-778bf6a1dff6
It mentions "Well, it depends. If you don’t have Airflow running in productions already, you will probably not need it now. There are more simple/elegant solutions than this (dbt Cloud, GitHub Actions, GitLab CI). Also, this approach shares many disadvantages with using a compute instance, such as waste of resources and no easy way for CI/CD."
My main goal is to get rid of TeamCity and implement Slim CI with GitHub Actions.
Thank you in advance!

How should I deploy the result of the CI/CD pipeline on my production server

I have a GitLab CI/CD pipeline that successfully builds, tests, and pushes my project's container to the GitLab container registry. Now I am wondering how I can automate the deployment stage too. Currently I do it manually: after each successful pipeline, I SSH into my server and run several commands to pull the latest images from the gitlab.com container registry and then run them. I would like to make this step automatic as well, yet I don't know how.
I have actually seen some examples of opening an SSH session from the CI/CD pipeline, but it doesn't feel secure enough, so I was wondering whether there is a better way, or whether I actually have to do it that way.
Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.
There are many ways to achieve this, depending on your setup, other requirements, scale etc.
I'll just give you two options.
I. Kubernetes
create a cluster (i.e. a control plane) somewhere
add your cluster to GitLab (GitLab can now even create a cluster for you in AWS and GCP, check this page)
attach your target machine as a worker node to the cluster
create Kubernetes YAML files / a Helm chart for your application and deploy via the usual means, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you (see the sketch below)
This is quite complex, but it is sort of the "right" way of doing things.
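As an illustration of that last deploy step, a minimal sketch, assuming the manifests live in a hypothetical k8s/ directory and the Deployment is named my-app:

# Apply the manifests to the attached cluster
kubectl apply -f k8s/deployment.yaml
# Wait until the rollout completes (fails if it never becomes ready)
kubectl rollout status deployment/my-app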
II. Private GitLab runner
go to Settings > CI/CD > Runners of your GitLab project or group
obtain the registration token
install your own GitLab runner right on the target machine and register it with the GitLab server using the registration token, see example
give the runner a specific tag
use that tag in your .gitlab-ci.yml file, see documentation
the deployment process is then just a local docker pull ... and docker run ... for your image, as in the sketch below
This is a lot simpler, but it is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
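A minimal sketch of what the deploy job's script could run on that private runner; the image path and container name are hypothetical, while CI_REGISTRY_* are GitLab's predefined CI variables:

# Authenticate against the GitLab container registry
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
# Pull the image built earlier in the pipeline (hypothetical path/tag)
docker pull registry.gitlab.com/mygroup/myproject:latest
# Replace the running container with the new image
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp -p 80:8080 registry.gitlab.com/mygroup/myproject:latest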

What’s the best way to deploy multiple lambda functions from a single github repo onto AWS?

I have a single repository on GitHub that hosts my Lambda functions. I would like to be able to deploy new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel is the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (CircleCI is an online service, so you don't need to maintain anything yourself, unlike, say, a Jenkins server).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script that uploads the .zip file from your GitHub repo to an S3 bucket and then invokes the create-function command; a sketch of that script approach follows below. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
Other options include TravisCI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools though: commit -> build -> deploy (using one of the tools you've chosen).
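To make the script option concrete, a rough sketch using the AWS CLI; the bucket, key, and function names are hypothetical, and this assumes the function already exists (create-function would only be needed the first time):

# Package the function code
zip -r function.zip src/
# Stage the artifact in S3
aws s3 cp function.zip s3://my-deploy-bucket/function.zip
# Point the existing Lambda at the new artifact
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket my-deploy-bucket \
  --s3-key function.zip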
EDIT: After @Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow assumed he was already using it, which is why I pointed to tutorials based on it). I have since updated my answer with a few other options for serverless deployment.
I know this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't run deployments when I push to my repo; instead, I push to my repo after I've deployed. I like this flow because a deployment can fail for so many reasons, while pushing to GitHub is much less likely to fail. This way, I avoid pushing code to my master branch that failed to deploy.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
# => http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be eating into your limits, etc.
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
I had the same problem: I wanted to manage 12 Lambdas with one Git repository. I solved it by introducing Travis CI, which saved time and is really useful in many ways. We can check the logs whenever we want, and you can share the logs with anyone via the URL. Sample documentation of all the steps can be found here. 👍

How to setup Bitbucket Pipelines for Laravel test and SSH Agent to deploy on a remote server

I am developing a project in Laravel 5.4, with the repository on Bitbucket. Now I want to set up Bitbucket Pipelines so the tests run whenever anybody commits. I think I need a suitable Docker image for Laravel; I tried some images, but they produced errors. I also want to set up an SSH key/agent so that whenever I commit to the production branch, it deploys to the server.
I searched a lot on the internet for these things but could not find anything. If you can share a detailed tutorial or something similar, it would be helpful.
Thanks in advance.
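For reference, a rough sketch of the commands such a pipeline typically runs; the PHP image choice, server host, and paths are all assumptions:

# Test step (run inside a PHP image with composer available)
composer install --no-interaction --prefer-dist
cp .env.example .env
php artisan key:generate
vendor/bin/phpunit
# Deploy step for the production branch, assuming an SSH key has been
# configured for the pipeline (hypothetical host and path):
ssh deploy@example.com 'cd /var/www/app && git pull origin production && composer install --no-dev && php artisan migrate --force'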

How to push from Gitlab to Github with webhooks

My Google-fu is failing me for what seems obvious if I can only find the right manual.
I have a GitLab server which was installed by our hosting provider.
The GitLab server has many projects.
For some of these projects, I want GitLab to automatically push to a remote repository (in this case GitHub) every time there is a push from a local client to GitLab.
Like this: client --> gitlab --> github
Any tags and branches should also be pushed.
AFAICT I have 3 options:
Configure the local client with two remotes and push simultaneously to GitLab and GitHub. I want to avoid this because, well, developers.
Add a git post-receive hook in the repository on the GitLab server. This would be the most flexible option (I have sufficient Linux experience to write shell scripts as git hooks) and I have found documentation on how to do it, but I want to avoid it too, because the hosting provider would then need to give me shell access.
Use webhooks in GitLab. I am unfamiliar with even the basics of webhooks, and I am unable to locate understandable documentation or a simple step-by-step example. This is the GitLab documentation I found, and I do not understand it: http://demo.gitlab.com/help/web_hooks/web_hooks
I would appreciate good pointers, and I will summarize and document a solution when I find it.
EDIT
I'm using this Ruby code for a web hook:
require 'sinatra/base'
require 'json'

# Minimal Sinatra app that receives GitLab's push webhook as JSON.
class PewPewPew < Sinatra::Base
  post '/pew' do
    push = JSON.parse(request.body.read)
    puts "I got some JSON: #{push.inspect}"
  end
end
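A quick way to exercise this endpoint locally (Sinatra listens on port 4567 by default; the payload file is a hypothetical sample of GitLab's push JSON):

curl -X POST http://localhost:4567/pew \
  -H 'Content-Type: application/json' \
  --data @sample-push-payload.json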
Next: find out how to tell the GitLab server that it has to push a repository. I am going back to the GitLab API.
EDIT
I think I have an idea. On the server where I run the webhook, I pull from GitLab and then push to GitHub. I can even do some "magic" (running tests, building jars, deploying to Artifactory, ...) before I push to GitHub. In fact, it would be great if Jenkins were able to push to a remote repository after a successful build; then I wouldn't need to write my own webhook, because I'm pretty sure Jenkins already provides a webhook for GitLab, either natively or via a plugin. But I don't know. Yet.
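For what it's worth, the pull-then-push step itself is only a few git commands; a minimal sketch with hypothetical remote URLs:

# One-time setup: a bare mirror clone of the GitLab repository
git clone --mirror git@gitlab.example.com:group/project.git
cd project.git
git remote add github git@github.com:myorg/project.git
# On every webhook: refresh from GitLab, then mirror everything
# (--mirror pushes all branches and tags, as required above)
git fetch --prune origin
git push --mirror github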
EDIT
I solved it in Jenkins.
You can set more than one git remote in a Jenkins job. I used Git Publisher as a Post-Build Action and it worked like a charm, exactly what I wanted.
Option 1 would work, of course.
Option 2 is possible but dangerous, because GitLab Shell automatically symlinks hooks into repositories for you, and those are necessary for permission checks: https://github.com/gitlabhq/gitlab-shell/tree/823aba63e444afa2f45477819770fec3cb5f0159/hooks so I'd rather stay away from it.
Option 3, web hooks, is not suitable directly: on certain events (push, in your case) they make an HTTP request with a fixed format, not Git protocol requests.
Of course, you could write a server that consumes the hook, clones, and pushes, but a service (single push and no deployment) or GitLab CI (which already implements hook management) would be strictly better solutions.
A service would be the best option if someone implemented it: services live in the source tree, would do a single push, and require no extra deployment overhead.
GitLab CI, or other CIs like Jenkins, are the best option currently available. They are essentially already-implemented servers for the webhooks, which automatically clone for you; all you have to do then is push from them.
The keywords you want to Google for are "gitlab mirror github". That led me to Gitlab repository mirroring, for instance. There seems to be no perfect, easy solution today.
This has also already been proposed at the feature request forum: http://feedback.gitlab.com/forums/176466-general/suggestions/4614663-automatic-push-to-remote-mirror-repo-after-push-to Always check there ;) Go and upvote the request.
The key difficulty now is how to store the push credentials.
I solved it in Jenkins, as described in the edit above: you can set more than one git remote in a Jenkins job, and Git Publisher as a Post-Build Action worked like a charm, exactly what I wanted.
I added "-publisher" jobs that run after "" is built successfully. I could have done it in one job, but I decided to split it up. The build jobs are triggered by a webhook in GitLab; the publisher jobs use a @daily schedule from the BuildResultTrigger plugin.
