This is my GitHub Actions workflow:
---
name: build and push image to aws ecr
on:
  push:
    branches: [ main ]
jobs:
  build-and-push:
    name: Build and push to ecr
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      ...
  deploy:
    needs: build-and-push
    name: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login ec2
        env:
          PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
          HOSTNAME: ${{ secrets.HOST_DNS }}
          USER_NAME: ${{ secrets.USERNAME }}
        run: |
          echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${USER_NAME}@${HOSTNAME}
          ls
          touch helloworld.txt
I wonder whether I'm connected to the EC2 instance or not.
This is my result after the action runs; it lists all the files of my project. I think I didn't SSH to EC2 successfully, because I just created the EC2 instance and it's empty.
Run echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '***,18.181.220.91' (ECDSA) to the list of known hosts.
Dockerfile
README.md
docker-compose.yml
nest-cli.json
package-lock.json
package.json
private_key
src
test
tsconfig.build.json
tsconfig.json
For reference, this is the command I use to SSH to EC2 from my local machine: ssh -i "home-key.pem" ec2-user@ec2-18-181-220-91.ap-northeast-1.compute.amazonaws.com
If I didn't connect to the EC2 instance, how do I do it with GitHub Actions?
Thanks for your attention.
I wonder whether I'm connected to the EC2 instance or not.
In your SSH commands, add a hostname -a command: it will display the name of the machine you are on.
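Incidentally, your output already hints at what happened: the listing includes private_key, the file the run step created on the GitHub runner, so ls and touch most likely ran on the runner rather than on the instance. The ssh line opens a session, but since nothing is passed to it, the following lines execute locally. Here is a minimal sketch of the same step, assuming the same secrets, that feeds the commands to ssh so they run on the EC2 host:

- name: Login ec2
  env:
    PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
    HOSTNAME: ${{ secrets.HOST_DNS }}
    USER_NAME: ${{ secrets.USERNAME }}
  run: |
    echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
    # Everything between <<'EOF' and EOF is sent to the remote shell,
    # so hostname/ls report on the EC2 instance rather than on the runner.
    ssh -o StrictHostKeyChecking=no -i private_key ${USER_NAME}@${HOSTNAME} <<'EOF'
    hostname -a
    ls
    touch helloworld.txt
    EOF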
That being said, you could use an action that executes SSH commands, such as JimCronqvist/action-ssh, to run those same commands:
- name: Execute SSH commands on remote server
  uses: JimCronqvist/action-ssh@master
  with:
    hosts: '${{ secrets.USERNAME }}@${{ secrets.HOST_DNS }}'
    privateKey: ${{ secrets.EC2_SSH_KEY }}
    debug: false
    command: |
      ls -lah
      hostname -a
      whoami
      touch helloworld.txt
Related
I have a dockerised FastAPI app which depends on MySQL and Redis, all configured in docker-compose.yml. I want to implement CI/CD using GitHub Actions and an AWS EC2 instance. My EC2 instance has docker and docker-compose installed. Here are my questions.
What do I do to run the tests that depend on the test DB?
How do I implement CD from GitHub Actions to the AWS EC2 instance?
I might not be clear, so please ask questions for clarification. Thank you.
name: backend-api
on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Create .env file for configuration settings.
        uses: SpicyPizza/create-envfile@v1.3
        with:
          envkey_APP_ENV: ${{secrets.APP_ENV}}
          envkey_APP_HOST: ${{secrets.APP_HOST}}
          envkey_MYSQL_USER: ${{secrets.APP_ENV}}
          envkey_PROD_BASE_URL: ${{secrets.MYSQL_USER}}
          envkey_DEV_BASE_URL: ${{secrets.APP_ENV}}
          envkey_MYSQL_ROOT_PASSWORD: ${{secrets.MYSQL_ROOT_PASSWORD}}
          envkey_MYSQL_DATABASE: ${{secrets.MYSQL_DATABASE}}
          envkey_PRODUCTION_DB_URI: ${{secrets.PRODUCTION_DB_URI}}
          envkey_TEST_DB_URI: ${{secrets.TEST_DB_URI}}
          envkey_BASE_URL: ${{secrets.BASE_URL}}
          envkey_WALLET_PROVIDER_ACCESS_TOKEN: ${{secrets.WALLET_PROVIDER_ACCESS_TOKEN}}
          envkey_S3_BUCKET_NAME: ${{secrets.S3_BUCKET_NAME}}
          envkey_S3_ACCESS_SECRET: ${{secrets.S3_ACCESS_SECRET}}
          envkey_S3_ACCESS_KEY: ${{secrets.S3_ACCESS_KEY}}
          envkey_S3_BUCKET_REGION: ${{secrets.S3_BUCKET_REGION}}
          envkey_JWT_SECRET_KEY: ${{secrets.JWT_SECRET_KEY}}
          envkey_ETHERSCAN_API_URL: ${{secrets.ETHERSCAN_API_URL}}
          envkey_BLOCKCHAIN_API_URL: ${{secrets.BLOCKCHAIN_API_URL}}
          envkey_WALLET_PROVIDER_BASE_URL: ${{secrets.WALLET_PROVIDER_BASE_URL}}
          envkey_STRATEGY_PROVIDER_BASE_URL: ${{secrets.STRATEGY_PROVIDER_BASE_URL}}
          envkey_INDEX_PROVIDER_BASE_URL: ${{secrets.INDEX_PROVIDER_BASE_URL}}
      - name: Running Tests with pytest
        run: |
          pytest
  Deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Git pull
        env:
          AWS_EC2_PEM: ${{ secrets.AWS_EC2_PEM }}
          AWS_EC2_PUBLIC_IP: ${{ secrets.AWS_EC2_PUBLIC_IP }}
          AWS_EC2_USERNAME: ${{ secrets.AWS_EC2_USERNAME }}
        run: |
          pwd
          echo "$AWS_EC2_PEM" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${AWS_EC2_USERNAME}@${AWS_EC2_PUBLIC_IP}
          git checkout main &&
          git fetch --all &&
          git reset --hard origin/main &&
          git pull origin main &&
          touch .env
          docker-compose up -d --build
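Note that this Deploy job has the same issue as the workflow in the first question above: nothing is passed to the ssh command itself, so the git and docker-compose lines after it run on the GitHub runner, not on the EC2 instance. They would need to be fed to ssh (for example via a heredoc, as sketched earlier) or run through an SSH action for the deployment to actually happen on the server.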
I'm trying to automate deploys to an EC2 instance with GitHub Actions, but ssh-keyscan seems to fail for no reason. On my local machine it works totally fine.
Here is my workflow file:
name: Deploy
on:
  push:
    branches:
      - 'main'
env:
  SERVER_HOST: x.x.x.xx
  SERVER_USER: username
  SERVER_PATH: ~/folder-name/
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install SSH Key
        uses: shimataro/ssh-key-action@v2.3.1
        with:
          key: "${{ secrets.SSH_PRIVATE_KEY }}"
          known_hosts: "just-a-placeholder-so-we-dont-get-errors"
      - name: Generate auth hosts
        run: ssh-keyscan -H ${{ env.SERVER_HOST }} >> ~/.ssh/known_hosts
      # Deploy
      - run: rsync -rv --delete . ${{ env.SERVER_USER }}@${{ env.SERVER_HOST }}:${{ env.SERVER_PATH }}
Notes:
secrets.SSH_PRIVATE_KEY contains my private OpenSSH key, generated with ssh-keygen -t rsa -b 4096 -C "dummyemail@host.com", where dummyemail@host.com is the actual email of the GitHub account from which the workflow is triggered.
Yes, I have added the .pub key to ~/.ssh/authorized_keys on my server machine.
The problem was that I mistakenly added inbound rules only for the IP addresses listed here.
So the solution was to add an inbound SSH rule for 0.0.0.0/0 and to use the private key created along with the EC2 instance.
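For reference, an inbound rule like that can also be added from the AWS CLI; the security group ID below is a placeholder, and opening port 22 to 0.0.0.0/0 is convenient for CI runners but you may want to restrict it again later:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0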
I have a Chalice (AWS Lambda Python framework) project with the following CI/CD GitHub Actions workflow:
name: Production Workflow
on:
  push:
    branches:
      - "main"
env:
  REPO: ${{ github.repository }}
  GITHUB_REF_NAME: ${{ github.ref_name }}
  GITHUB_SHA: ${{ github.sha }}
jobs:
  production:
    name: Deploy production
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v1
        with:
          python-version: "3.9"
      - name: Install requirements
        run: pip3 install -r requirements.txt
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ secrets.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Check branch
        run: echo "${{ env.GITHUB_REF_NAME }}"
      - name: Check branch
        run: echo "${{ env.GITHUB_SHA }}"
      - name: Run tests
        run: python3 -m unittest discover -s tests
      - name: Deploy with Chalice
        run: chalice deploy --stage=production
However, from inside the project, the env variables REPO, GITHUB_REF_NAME and GITHUB_SHA are not accessible (e.g. os.environ.get("GITHUB_REF_NAME", None) returns None). Why?
I also tried setting the env variables not globally but in the "Deploy with Chalice" step only, with the same result. Also, I can successfully see the branch and the commit ID printed in GitHub Actions by the two "Check branch" steps.
Other env variables that are set in the Chalice config file .chalice/config.json are accessible.
You need to set up the environment and explicitly list all ENV variables you want to use, like this:
- name: Deploy with Chalice
  run: chalice deploy --stage=production
  env:
    GITHUB_REF_NAME: ${{ github.ref_name }}
    GITHUB_SHA: ${{ github.sha }}
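One caveat, in case the lookup happens inside the deployed Lambda rather than on the GitHub runner: env entries in the workflow only exist for the duration of the Actions job, so any value the function needs at runtime still has to be passed through environment_variables in .chalice/config.json, which would explain why the values set there are accessible while the workflow ones are not.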
I have the following deploy.yml:
name: Deploy
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Copy repository contents via scp
        uses: appleboy/scp-action@master
        env:
          HOST: ${{ secrets.HOST }}
          USERNAME: ${{ secrets.USERNAME }}
          PORT: ${{ secrets.PORT }}
          PASSWORD: ${{ secrets.PASSWORD }}
        with:
          source: "."
          target: "/var/www/html/cnaiapp"
          rm: true
      - name: Executing remote command
        uses: appleboy/ssh-action@develop
        with:
          host: ${{ secrets.HOST }}
          USERNAME: ${{ secrets.USERNAME }}
          PORT: ${{ secrets.PORT }}
          PASSWORD: ${{ secrets.PASSWORD }}
          script: cd /var/www/html/cnaiapp && npm run deploy
However, the master branch has unminified and testing code that I don't want to have on my VPS. Do you know how I could achieve this? BTW, in order to remove this unnecessary code, I'd need to run the npm run build command.
PS: The npm run deploy command just builds the code and starts the server.
Just add a new step with the run command after checkout:
…
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: build
        run: npm run build
      - name: Copy repository contents via scp
…
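Depending on how much you want to keep off the VPS, you may also want to narrow what the scp step copies, since source: "." still ships the whole repository, tests included. For example, if the build output lands in a dist/ directory (hypothetical; adjust to your project):

  with:
    source: "dist/"
    target: "/var/www/html/cnaiapp"
    rm: true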
Is it possible to access a GitHub secret in a YAML file that's not a workflow or an action YAML file?
For example, I've saved the environment secret INFURA_RINKEBY_WSS in GitHub, and I attempt to access it in the following YAML config file for my program.
type: EndpointList
endpoints:
  - type: RPCEndpoint
    chain_id: 1
    network: rinkeby
    provider: Infura
    url: ${{ secrets.INFURA_RINKEBY_WSS}}
    explorer: https://etherscan.io
However, the INFURA_RINKEBY_WSS secret I've set in GitHub is not being picked up by my YAML config file.
The following is my main.yaml GitHub workflow:
name: Report to eth/usd on rinkeby w/ pytelliot
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install telliot-feed-examples
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Move pre-made pytelliot configs to home directory
        run: |
          cp -r ./config ~/
      - name: report :)
        run: telliot-examples --legacy-id 1 report --submit-once
        env:
          PK: ${{ secrets.PK }}
          INFURA_RINKEBY_WSS: ${{ secrets.INFURA_RINKEBY_WSS }}
Thanks!
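In general, the ${{ secrets.* }} expression syntax is only evaluated inside workflow and action YAML files, so it will never be substituted into an ordinary config file like the one above. One workaround is to put a plain placeholder in the config file and replace it in a workflow step before the configs are copied. A minimal sketch, assuming a hypothetical placeholder __INFURA_RINKEBY_WSS__ and a hypothetical file name config/endpoints.yaml:

- name: Inject secret into endpoint config
  env:
    INFURA_RINKEBY_WSS: ${{ secrets.INFURA_RINKEBY_WSS }}
  run: |
    # Replace the literal placeholder token with the secret value
    # (the file name and placeholder are illustrative, not from the project).
    sed -i "s|__INFURA_RINKEBY_WSS__|${INFURA_RINKEBY_WSS}|g" ./config/endpoints.yaml

Such a step would go before the "Move pre-made pytelliot configs to home directory" step, so the substituted file is what ends up in the home directory.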