I'm using GitlabCI to deploy my Laravel applications.
I'm wondering how I should manage the .env file. As far as I understand, I only need to put .env.example under version control, not the file with the real values.
I've set all the keys my app needs in GitLab Settings -> CI/CD -> Environment Variables, and I can use them on the runner, for example to retrieve the SSH private key used to connect to the remote host. But how should I get these variables onto the remote host as well? Should I write them with bash into a "runtime generated" .env file and then copy it over? Should I export them via SSH on the remote host? What is the correct way to manage this?
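For concreteness, the "runtime generated" option mentioned above could look roughly like this in a deploy job (a rough sketch only; APP_KEY, DB_PASSWORD, DEPLOY_USER, DEPLOY_HOST and DEPLOY_PATH are placeholder names, not variables from the question):

# build a .env on the runner from GitLab CI/CD variables, then copy it to the host
cat > .env <<EOF
APP_KEY=${APP_KEY}
DB_PASSWORD=${DB_PASSWORD}
EOF
scp .env "${DEPLOY_USER}@${DEPLOY_HOST}:${DEPLOY_PATH}/.env"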
If you're open to another solution, I propose using Fabric (a fabfile). Here is an example:
Create a .env.default with variables like:
DB_CONNECTION=mysql
DB_HOST=%(HOST)s
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=%(USER)s
DB_PASSWORD=%(PASSWORD)s
After installing Fabric, add a fabfile to your project directory:
from fabric.api import env, run, put

prod_env = {
    'name': 'prod',
    'user': 'user_ssh',
    'deploy_to': '/path_to_project',
    'hosts': ['ip_server'],
}

def set_config(env_config):
    # copy the selected environment's settings onto Fabric's env
    for key in env_config:
        env[key] = env_config[key]

def prod():
    set_config(prod_env)

def deploy(password, host, user):
    # update the code on the server
    run("cd %s && git pull -r" % env.deploy_to)
    # render .env.default into .env with the values passed on the command line
    process_template(".env.default", ".env", {'PASSWORD': password, 'HOST': host, 'USER': user})
    # upload the rendered file to the server
    put(".env", "/path_to_project/.env")

def process_template(template, output, context):
    # read the template, fill in the %(KEY)s placeholders, and write the result
    with open(template) as inputfile:
        text = inputfile.read()
    if context:
        text = text % context
    with open(output, "w") as outputfile:
        outputfile.write(text)
Now you can run it from your local machine to test the script:
fab prod deploy:password="pass",user="user",host="host"
It will deploy the project to your server; check that the .env file was processed correctly.
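To double-check that the placeholders were filled in, you can look at the rendered file on the server (user, host and path taken from the fabfile above):

ssh user_ssh@ip_server 'cat /path_to_project/.env'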
If that works, it's time for GitLab CI. Here is an example .gitlab-ci.yml:
image: python:2.7

before_script:
  - pip install 'fabric<2.0'
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_staging:
  type: deploy
  script:
    - fab prod deploy:password="$PASSWORD",user="$USER",host="$HOST"
  only:
    - master
$SSH_PRIVATE_KEY, $PASSWORD, $USER and $HOST are GitLab CI/CD environment variables; $SSH_PRIVATE_KEY should be a private key that has access to the server.
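If you still need to create that key, a typical approach (not part of the original answer; file names are placeholders) is to generate a dedicated deploy key, install its public half on the server, and paste the private half into the $SSH_PRIVATE_KEY variable:

# generate a dedicated deploy key without a passphrase
ssh-keygen -t ed25519 -f deploy_key -N ""
# install the public half on the target server (user/host from the fabfile above)
ssh-copy-id -i deploy_key.pub user_ssh@ip_server
# then paste the contents of ./deploy_key into the $SSH_PRIVATE_KEY CI/CD variable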
Hope I didn't miss a step.
Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories in a GitHub repo. Before looping through the directory path $TF_ROOT_DIR, I use a bash if statement to check whether the GitHub branch name $BRANCH_NAME is within the env variable $LIVE_BRANCHES. As shown in the build log below, the if statement fails with: syntax error: bad substitution. When I reproduce the if statement in a local bash script, it works as expected.
The env variables defined in the CodeBuild project are TF_ROOT_DIR and LIVE_BRANCHES (see the project JSON below).
Here's a relevant snippet from the buildspec.yml:
version: 0.2

env:
  shell: bash

phases:
  build:
    commands:
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
Here's the error from the build log: syntax error: bad substitution.
Here's the AWS CodeBuild project JSON to reproduce the CodeBuild project:
{
  "projects": [
    {
      "name": "terraform_validate_plan",
      "arn": "arn:aws:codebuild:us-west-2:xxxxx:project/terraform_validate_plan",
      "description": "Perform terraform plan and terraform validator",
      "source": {
        "type": "GITHUB",
        "location": "https://github.com/marshall7m/sparkify_end_to_end.git",
        "gitCloneDepth": 1,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "buildspec": "deployment/CI/dev/cfg/buildspec_terraform_validate_plan.yml",
        "reportBuildStatus": false,
        "insecureSsl": false
      },
      "secondarySources": [],
      "secondarySourceVersions": [],
      "artifacts": {
        "type": "NO_ARTIFACTS",
        "overrideArtifactName": false
      },
      "cache": {
        "type": "NO_CACHE"
      },
      "environment": {
        "type": "LINUX_CONTAINER",
        "image": "hashicorp/terraform:0.12.28",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
          {
            "name": "TF_ROOT_DIR",
            "value": "deployment",
            "type": "PLAINTEXT"
          },
          {
            "name": "LIVE_BRANCHES",
            "value": "(dev, prod)",
            "type": "PLAINTEXT"
          }
Here's the associated buildspec file content: (buildspec_terraform_validate_plan.yml)
version: 0.2

env:
  shell: bash
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: TF_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: TF_AWS_SECRET_ACCESS_KEY_ID

phases:
  install:
    commands:
      # install/incorporate terraform validator?
  pre_build:
    commands:
      # CodeBuild environment variables
      # BRANCH_NAME -- GitHub branch that triggered the CodeBuild project
      # TF_ROOT_DIR -- Directory within branch ($BRANCH_NAME) that will be iterated through for terraform planning and testing
      # LIVE_BRANCHES -- Branches that represent a live cloud environment
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      - bash -version || echo "${BASH_VERSION}" || bash --version
      - |
        if [[ -z "${BRANCH_NAME}" ]]; then
          # extract branch from github webhook
          BRANCH_NAME=$(echo $CODEBUILD_WEBHOOK_HEAD_REF | cut -d'/' -f 3)
        fi
      - "echo Triggered Branch: $BRANCH_NAME"
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - "echo Terraform root directory: $TF_ROOT_DIR"
  build:
    commands:
      - |
        for dir in $TF_ROOT_DIR; do
          # get list of non-hidden directories within $dir/
          service_dir_list=$(find "${dir}" -type d | grep -v '/\.')
          for sub_dir in $service_dir_list; do
            # if $sub_dir contains .tf or .tfvars files
            if (ls ${sub_dir}/*.tf) > /dev/null 2>&1 || (ls ${sub_dir}/*.tfvars) > /dev/null 2>&1; then
              cd $sub_dir
              echo ""
              echo "*************** terraform init ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform init
              echo ""
              echo "*************** terraform plan ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform plan
              cd - > /dev/null
            fi
          done
        done
Given this is just a side project, all files that could be relevant to this problem are within a public repo here.
UPDATES
I tried adding a #!/bin/bash shebang line, but it resulted in the CodeBuild error:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: #!/bin/bash
version: 0.2

env:
  shell: bash

phases:
  build:
    commands:
      - |
        #!/bin/bash
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
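For what it's worth, a likely explanation for the error (my reading, not confirmed in the thread): the hashicorp/terraform image is Alpine-based and ships only /bin/sh, so env: shell: bash has no bash to switch to, and plain sh rejects the array-style expansion ${LIVE_BRANCHES[*]} with "bad substitution". If you wanted to stay on that image, a POSIX-sh-compatible version of the check could look like this (LIVE_BRANCHES is just the string "(dev, prod)", so substring matching is enough):

case " ${LIVE_BRANCHES} " in
  *"$BRANCH_NAME"*)
    # Iterate only through the BRANCH_NAME directory
    TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/ ;;
  *)
    # Iterate through both dev and prod directories
    TF_ROOT_DIR=${TF_ROOT_DIR}/*/ ;;
esac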
Solution
As mentioned by @Marcin, I used an AWS managed image within CodeBuild (aws/codebuild/standard:4.0) and downloaded Terraform within the install phase.
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -q
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && mv terraform /usr/local/bin/
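Note that ${TERRAFORM_VERSION} has to be defined for those commands to work; one assumed way is to export it at the top of the install phase, or set it as a CodeBuild project environment variable:

# assumed pin; use whichever released Terraform version you need
export TERRAFORM_VERSION=0.12.28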
I tried to reproduce your issue, but it all works fine for me.
The only thing I've noticed is that you are using $BRANCH_NAME, but it's not defined anywhere. Even with $BRANCH_NAME missing, the buildspec.yml you've posted runs fine.
Update: re-tested using the hashicorp/terraform:0.12.28 image.
In a Jupyter notebook, the following gcloud commands work with a bang (!) but not with %%bash.
import os
PROJECT = 'mle-1234'
REGION = 'us-central1'
BUCKET = 'mle-1234'
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.14.0' # Tensorflow version
# Set GCP Project and Region
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
gcloud config list
I get this error message when I execute the last snippet above with %%bash
File "<ipython-input-16-f93912dbcc34>", line 3
gcloud config set project $[PROJECT]
^
SyntaxError: invalid syntax
However, the project and region values do get set with the same lines of code if I remove %%bash and prefix all gcloud commands with (!).
# Set GCP Project and Region
!gcloud config set project $PROJECT
!gcloud config set compute/region $REGION
!gcloud config list
Result with using (!)
Updated property [core/project].
Updated property [compute/region].
[compute]
region = us-central1
zone = us-central1-a
[core]
account = my-service-account#mle-1234.iam.gserviceaccount.com
disable_usage_reporting = False
project = mle-1234
What could be the reason for this behavior?
%%bash
%%bash is one of the built-in magic commands. It runs the cell with bash in a subprocess and is a shortcut for %%script bash. You can combine code from multiple kernels into one notebook, for example:
%%HTML
%%python2
%%python3
%%ruby
%%perl
implementation:
%%bash
factorial()
{
    if [ "$1" -gt "1" ]
    then
        i=`expr $1 - 1`
        j=`factorial $i`
        k=`expr $1 \* $j`
        echo $k
    else
        echo 1
    fi
}
input=5
val=$(factorial $input)
echo "Factorial of $input is : "$val
! command
Starting a code cell with a bang character (!) instructs Jupyter to treat the code on that line as an OS shell command.
!cat /etc/os-release | grep VERSION
Output:
VERSION="16.04 LTS (Xenial Xerus)"
VERSION_ID="16.04"
Answer: since you are using gcloud commands, Jupyter will interpret them as OS shell commands; therefore, using !gcloud will work.
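One extra detail worth noting (an observation about IPython cell magics, not something stated in the thread): a cell magic such as %%bash must be the very first line of the cell, and in the failing snippet a comment sits above it, so the cell gets parsed as Python. Moving the magic to the top should therefore also make the %%bash version work:

%%bash
# Set GCP Project and Region
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
gcloud config list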
I'm trying to generate an env.ts file in my CircleCI config file with my CircleCI environment variables. I tried the code below:
steps:
  - run:
      name: Setup Environment Variables
      command:
        echo "export const env = {
        jwtSecret: ${jwtSecret},
        gqlUrl: ${gqlUrl},
        engineAPIToken: ${engineAPIToken},
        mongodb: ${mongodb},
        mandrill: ${mandrill},
        gcpKeyFilename: ${gcpKeyFilename},
        demo: ${demo},
        nats: ${NATS},
        usernats: ${usernats},
        passnats: ${passnats} };" | base64 --wrap=0 > dist/env.ts
but it outputs this:
#!/bin/sh -eo pipefail
# Unable to parse YAML
# mapping values are not allowed here
# in 'string', line 34, column 24:
# jwtSecret: ${jwtSecret},
# ^
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
I had forgotten to write a pipe character after command:, i.e. command: |
steps:
  - run:
      name: Setup Environment Variables
      command: |
        echo "export const env = {
          jwtSecret: '${jwtSecret}',
          gqlUrl: '${gqlUrl}',
          engineAPIToken: '${engineAPIToken}',
          mongodb: '${mongodb}',
          mandrill: '${mandrill}',
          gcpKeyFilename: '${gcpKeyFilename}',
          demo: ${demo}, // it's a boolean
          nats: '${NATS}',
          usernats: '${usernats}',
          passnats: '${passnats}'
        };" > dist/env.ts
Note: I had also forgotten to add quotes ('') around my variables.
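As a side note, the shell part of the command can also be written with a heredoc, which avoids most of the quoting pitfalls of a long echo string. A rough sketch using the same CircleCI variables, abbreviated to a few keys:

# write dist/env.ts from a template; the unquoted EOF delimiter lets ${...} expand
cat > dist/env.ts <<EOF
export const env = {
  jwtSecret: '${jwtSecret}',
  gqlUrl: '${gqlUrl}',
  demo: ${demo} // boolean, so no quotes
};
EOF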
In Laravel 5.8, using Envoy, I want to set a user's password from the console command, like:
envoy run Deploy --serveruser_password=mypass1112233
having this in the Envoy file:
@setup
    $server_login_user = 'serveruser';
    $user_password = isset($serveruser_password) ? $serveruser_password : "Not Defined";
@endsetup

@task('clone_project', ['on' => $on])
    echo '$user_password password ::';
    echo $user_password;
@endtask
But $user_password comes out empty in both cases:
1) when serveruser_password is set in the command:
envoy run Deploy --serveruser_password=mypass1112233
2) or when it is left out:
envoy run Deploy
But I expected "Not Defined" to be output...
Why does this happen, and how do I correct it?
Try the following.
@setup
    $server_login_user = 'serveruser';
    $user_password = isset($serveruser_password) ? $serveruser_password : 'Not Defined';
@endsetup

@servers(['local' => '127.0.0.1'])

@macro('deploy')
    clone_project
@endmacro

@task('clone_project')
    echo 'The password is: {{ $user_password }}.';
@endtask
Please make sure your macro is named "deploy" and not "Deploy". Also, in your echo statement, use curly braces to echo out the variable you set. The output will be as follows:
$ envoy run deploy --serveruser_password=mypass1112233
[127.0.0.1]: The password is: mypass1112233.
$ envoy run deploy
[127.0.0.1]: The password is: Not Defined.
I am building a simple proxy to point to another server. Everything works, but I need a way to set the hosts in a ClientBuilder externally, most likely using Docker or maybe some sort of configuration file. Here is what I have:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._
object Proxy extends App {
  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts("localhost:8888")
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
If you know of a way to do this or have any ideas about it please let me know!
If you want to run this simple proxy in a Docker container and manage the target host IP dynamically, you can pass the target host through an environment variable and change your code like this:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._
object Proxy extends App {
  val target_host = sys.env.get("TARGET_HOST")

  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts(target_host.getOrElse("127.0.0.1:8888"))
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
This will let your code read the system environment variable TARGET_HOST. Once that part is done, you can start your Docker container by adding the following parameter to your docker run command:
-e "TARGET_HOST=127.0.0.1:8090"
For example: docker run -e "TARGET_HOST=127.0.0.1:8090" <docker image> <docker command>
Note that you can change 127.0.0.1:8090 to your target host.
You need a server.properties file with your configuration inside it:
HOST=host:8888
Now get Docker to write that configuration on every startup with a docker-entrypoint bash script. Add the script and define the environment variables in your Dockerfile:
ENV HOST=myhost
ENV PORT=myport
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["proxy"]
Write out your docker-entrypoint.sh:
#!/bin/bash -x
set -o errexit

# render the configuration from the container's environment on every start
cat > server.properties << EOF
HOST=${HOST}:${PORT}
EOF

if [ "$1" = 'proxy' ]; then
    launch server   # placeholder for the command that actually starts the proxy
fi

exec "$@"
Launch Docker with your configuration and the command "proxy":
$ docker run -e "HOST=host" -e "PORT=port" image proxy
You can also use linking when you're not sure of your server container's IP address:
$ docker run -e "HOST=mylinkhost" -e "PORT=port" --link myservercontainer:mylinkhost image proxy