Deploy to EC2 instance from Github Actions

Good night guys, I've been trying to figure this out, but it's the first time I'm doing it.
With GitHub OpenID Connect I was able to deploy the code to an S3 bucket, but now I need to copy those files from S3 to an EC2 instance in the deploy.yml file. The problem is that I can't use
  access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
  secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
because the client doesn't want to give access to secrets.
Is there any other way to do this?
I'm completely lost, and I don't have administrator access or enough permissions in AWS to work with.
Thanks.
I already have the files in an S3 bucket with this code:
- name: Copy to S3 bucket
  run: |
    aws s3 sync --delete ./my_path s3://${{ env.BUCKET_NAME }}/${{ steps.extract_branch.outputs.branch }}/${{ steps.extract_hash.outputs.commit_hash }}
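For reference, since OIDC already works for the S3 sync, the workflow presumably contains a credentials step roughly like the sketch below somewhere before it (plus id-token: write in the workflow's permissions); the role ARN, region and action version here are placeholders, not taken from the original deploy.yml:

- name: Configure AWS credentials (OIDC)
  uses: aws-actions/configure-aws-credentials@v2
  with:
    # Placeholder role ARN and region - both would come from the client's AWS setup
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
    aws-region: us-east-1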
And I've been trying to do something like this:
- name: Deploy to EC2 instance
  uses: easingthemes/ssh-deploy@v2.1.5
  env:
    SSH_PRIVATE_KEY: "ALL PRIVATE KEY CODE PASTED"
    SOURCE: "./"
    REMOTE_HOST: "my host"
    REMOTE_USER: "my user"
    TARGET: "/path/to/copy/"
    EXCLUDE: "/dist/, /node_modules/, /venv/"
This is the error:
Run easingthemes/ssh-deploy@v2.1.5
[general] GITHUB_WORKSPACE: /home/runner/work/project
[SSH] Creating /home/runner/.ssh dir in /home/runner/work/project
✅ [SSH] dir created.
[SSH] Creating /home/runner/.ssh/known_hosts file in /home/runner/work/project
✅ [SSH] file created.
✅ Ssh key added to `.ssh` dir /home/runner/.ssh/deploy_key
[Rsync] Starting Rsync Action: /home/runner/work/project/docker/ to ubuntu@ec2-server.com:/home/ubuntu/
⚠️ [Rsync] error: rsync exited with code 255
⚠️ [Rsync] stderr: Warning: Permanently added 'ec2-server.com' (ED25519) to the list of known hosts.
Load key "/home/runner/.ssh/deploy_key": error in libcrypto
ubuntu@ec2-server.com: Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(228) [sender=3.2.3]
⚠️ [Rsync] stdout:
⚠️ [Rsync] cmd: rsync /home/runner/work/project/docker/ ubuntu@ec2-server.com:/home/ubuntu/ --rsh "ssh -p 22 -i /home/runner/.ssh/deploy_key -o StrictHostKeyChecking=no" --recursive -rltgoDzvO
1: 0xa1a640 node::Abort() [/home/runner/runners/2.299.1/externals/node12/bin/node]
2: 0xa90649 [/home/runner/runners/2.299.1/externals/node12/bin/node]
3: 0xc06599 [/home/runner/runners/2.299.1/externals/node12/bin/node]
4: 0xc08387 v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/home/runner/runners/2.299.1/externals/node12/bin/node]
5: 0x140dd19 [/home/runner/runners/2.299.1/externals/node12/bin/node]
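For what it's worth, the "Load key ... error in libcrypto" line usually means the private key file the action wrote is not valid PEM text (a key pasted straight into the YAML tends to lose its line breaks), rather than an AWS permissions issue. If SSH is acceptable at all, the key would normally come from a repository secret instead of being pasted inline; a minimal sketch, assuming a hypothetical secret named EC2_SSH_KEY that holds the full key including the BEGIN/END lines and a trailing newline:

- name: Deploy to EC2 instance
  uses: easingthemes/ssh-deploy@v2.1.5
  env:
    SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}   # hypothetical secret, full PEM contents
    SOURCE: "./"
    REMOTE_HOST: "my host"
    REMOTE_USER: "my user"
    TARGET: "/path/to/copy/"
    EXCLUDE: "/dist/, /node_modules/, /venv/"

Whether that fits here depends on the client's stance on secrets, which the question rules out for the AWS keys.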

Related

Ansible having problem authenticating with Google Cloud Platform

We are using Ansible to deploy an image to Google Kubernetes Cluster (GKE).
We have setup Ubuntu 20.04 and Python 3.8.5.
playbook.main.yml:
---
- hosts: localhost
  vars:
    k8s_file_path: /home/pesinn/Documents/...
  become: yes
  become_method: sudo
  roles:
    - k8s
main.yml:
- name: First Deployment
  k8s:
    kubeconfig: /home/pesinn/.kube/config
    src: "{{k8s_file_path}}/deployment.yml"
When trying to deploy the image defined in deployment.yml file, by running the playbook, we get this error:
kubernetes.config.config_exception.ConfigException: cmd-path: process returned 1
Cmd: /home/pesinn/y/google-cloud-sdk/bin/gcloud config config-helper --format=json
Stderr: WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:
$ gcloud auth login
What we've already done
Initialized the cloud: gcloud init
Logged in and chose a project: gcloud auth login
Run export GOOGLE_APPLICATION_CREDENTIALS="path_to_service_account_key.json"
Run gcloud container clusters get-credentials {gke_project} --region {region}
Run the playbook sudo ansible-playbook playbook.main.yml -vvv
Run gcloud config config-helper --format=json on the local machine without any problems
What is very strange here is that we're logged in for sure. We can access the GKE cluster through kubectl command on the local machine. However, Ansible complains about us not being logged in. Also, in the error logs, we see that it is trying to open /root/.config/gcloud/configurations/config_default. Our default config file is, on the other hand, located in the home folder.
This error occurs randomly. Sometimes Ansible can detect our login and deploys the image, but sometimes it gives us this error. Both scenarios can happen without any code changes being made.
For some reason, ansible does not use GCP's default environment variables for authentication.
You can set
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
GCP_SERVICE_ACCOUNT_FILE is the equivalent of GOOGLE_APPLICATION_CREDENTIALS
Reference: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
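A minimal sketch of how those variables might be wired into the existing play, assuming service-account authentication (the account email and key path below are placeholders): setting them at play level keeps them visible to the modules even when the play runs with become:

- hosts: localhost
  become: yes
  become_method: sudo
  environment:
    GCP_AUTH_KIND: serviceaccount
    GCP_SERVICE_ACCOUNT_EMAIL: deployer@my-project.iam.gserviceaccount.com   # placeholder
    GCP_SERVICE_ACCOUNT_FILE: /home/pesinn/keys/service_account_key.json     # placeholder path
    GCP_SCOPES: https://www.googleapis.com/auth/cloud-platform
  roles:
    - k8s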

Deploying to FTP server in Github Actions does not work

As the title says, deploying to FTP server isn't working for me from a Github Action. I've tried using a couple of actions to accomplish this (FTP-Deploy and ftp-action), but FTP-Deploy just kept running with sporadic
curl: (7) Failed to connect to ftpservername.com port 21: Connection timed out
messages and ftp-action kept running without any output. Note: The server is available, I connected and transferred some files using Filezilla without any issues.
After that I tried using lftp, this is the command I used on a local Ubuntu machine
lftp -c "open -u username,password ftpservername.com; mirror -R locfolder remote/remotefolder"
and the file transfer worked, but when used in a Github Action it produced this output:
---- Connecting to ftpservername.com (123.456.789.123) port 21
mkdir `remote/remotefolder' [Connecting...]
**** Socket error (Connection timed out) - reconnecting
---- Closing control socket
---- Connecting to ftpservername.com (123.456.789.123) port 21
I tried setting both ftp:ssl-allow and ssl:verify-certificate to false, but this did not produce any results. Also, I do not have access to the server, so I can't check the server logs.
This is the workflow file:
name: Test
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pip
        run: python -m pip install --upgrade pip
      - name: Install packages
        run: |
          sudo apt install lftp
          sudo apt install expect
      .
      .
      .
      - name: FTP Deploy
        run: |
          echo Starting...
          unbuffer lftp -c "debug; set ftp:ssl-allow false; set ssl:verify-certificate false; open -u username,${{ secrets.PASSWORD }} ftpservername.com; mirror -R -v locfolder remote/remotefolder"
          echo Done transferring files.
Any help is appreciated, thank you!
Found the issue: the hosting service was blocking the IP address (as it was an IP address from outside the country). After setting up a self-hosted runner and whitelisting the runner's IP, everything works fine.
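In workflow terms, switching over would just mean pointing the job at that runner instead of the hosted one, roughly:

jobs:
  build:
    runs-on: self-hosted   # or a label list such as [self-hosted, linux]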

Bitbucket fatal: Can't access remote

I tried to set up an sftp connection between Bitbucket and a Runcloud server. Runcloud only supports sftp connections. Bitbucket config:
image: php:7.3
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init --user $SFTP_username --passwd $FTP_password sftp://runcloud@1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
The connection always fails with the error fatal: Can't access remote 'sftp://1.111.111.11', exiting...
I tried different sftp path combinations but the result is always the same.
sftp://1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
sftp://mywebsite/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
My website
Root Path: /home/runcloud/webapps/mywebsite
Public Path: /home/runcloud/webapps/mywebsite
Runcloud has a different setup than "normal" FTP. For example, to connect with FileZilla the HOST is my server IP, and to get to my website I have to navigate to /webapps/mywebsite.
Not sure what I'm doing wrong. Is my sftp path incorrect?

SSH access to github repo on codeship

I am attempting to push to github from a container on Codeship. After getting a Permission denied (publickey) error, I followed the suggestion here:
https://documentation.codeship.com/pro/builds-and-configuration/setting-ssh-private-key/
I created a service called publish and some steps to try to recreate the article's suggestion.
My codeship_services.yml file:
# codeship_services.yml
publish:
  build:
    image: codeship/setting-ssh-key-test
    dockerfile: Dockerfile.publish
  encrypted_env_file: codeship.env.encrypted
  volumes:
    - ./.ssh:/root/.ssh
My codeship_steps.yml file:
- name: temp publish service
  service: publish
  command: /bin/bash -c "echo -e $PRIVATE_SSH_KEY >> /root/.ssh/id_rsa"
- name: chmod id_rsa
  service: publish
  command: chmod 600 /root/.ssh/id_rsa
- name: add server to list of known hosts
  service: publish
  command: /bin/bash -c "ssh-keyscan -H github.com >> /root/.ssh/known_hosts"
- name: confirm ssh connection to server, authenticating with generated public ssh key
  service: publish
  command: /bin/bash -c "ssh -T git@github.com"
When running jet steps, however, I still get the Permission denied (publickey) error:
(step: temp_publish_service) success ✔
(step: chmod_id_rsa)
(step: chmod_id_rsa) success ✔
(step: add_server_to_list_of_known_hosts)
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(step: add_server_to_list_of_known_hosts) success ✔
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key)
(service: publish) (step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) Permission denied (publickey).
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) error ✗
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) container exited with a 255 code
I have generated the keys as instructed in the article and added the encrypted private key to codeship.env.encrypted.
Is there something I am missing?
The only missing step would be to register the public key on your GitHub account itself.
Only then would an SSH connection using that same public key have a chance to succeed.
If that still fails, try at least an ssh -Tvv git@github.com in your last step, in order to get more clues.
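As a concrete extra step in codeship_steps.yml, mirroring the existing ones, that verbose check could look like:

- name: debug ssh connection to github
  service: publish
  command: /bin/bash -c "ssh -Tvv git@github.com"

The -vv output shows which keys the client offers and why GitHub rejects them.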

Capistrano deploy - Permission denied

I'm trying to deploy my application with Capistrano but I'm having some problems. My machine is an Amazon EC2 instance and I have the .pem locally. I can ssh in and run commands with no problem, but for cap production deploy I get the following error:
DEBUG [4f4633f7] Command: ( export GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/git-ssh-hybrazil-production-ronanlopes.sh" ; /usr/bin/env git ls-remote --heads git@git@github.com:fneto/hybrazil.git )
DEBUG [4f4633f7] Permission denied (publickey).
DEBUG [4f4633f7]
DEBUG [4f4633f7] fatal: Could not read from remote repository.
DEBUG [4f4633f7]
DEBUG [4f4633f7]
DEBUG [4f4633f7] Please make sure you have the correct access rights
DEBUG [4f4633f7]
and the repository exists.
DEBUG [4f4633f7]
On my production/deploy.rb, I have the config like this:
set :ssh_options, {
  keys: %w(/home/ronanlopes/Pems/hybrazil-impulso.pem ~/.ssh/id_rsa),
  forward_agent: true,
  auth_methods: %w(publickey)
}
any ideas? Thanks in advance!
You can add your key to the agent with this command:
ssh-add ~/.ssh/id_rsa
In your code you should use the full path to the SSH key, without the pem:
keys: %w(/home/user_name/.ssh/id_rsa)
