I am looking for help to containerize a Laravel application with Docker, run it locally, and make it deployable to Google Cloud Run, connected to a Google Cloud database.
My application is an API, built with Laravel, and so far I have only used the docker-compose/Sail package that comes with Laravel 8 during development.
Here is what I want to achieve:
Laravel app running on Cloud Run.
Database in Google Cloud: MySQL, PostgreSQL or SQL Server (MySQL preferred).
Environment stored in Google Cloud.
My problem is that I can't find any info on whether or how to use/rewrite the docker-compose file from Laravel 8, create a Dockerfile or cloudbuild file, and build it for Cloud Run.
Maybe I could add something like this in a cloudbuild.yml file:
#cloudbuild.yml
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
Any help/guidance is appreciated.
As mentioned in the comments to this question, you can check this video that explains, with a step-by-step tutorial, how to use docker-compose and Laravel to deploy an app to Cloud Run.
As for the database connection to said app, the Connecting from Cloud Run (fully managed) to Cloud SQL documentation is quite complete on that matter, and for secret management I found this article that explains how to implement Secret Manager in Cloud Run.
I know this answer is basically just links to the documentation and articles, but I believe all the information you need to get your app onto Cloud Run is in those.
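That said, a minimal Dockerfile for serving a Laravel API on Cloud Run could look roughly like the sketch below. Treat it as a starting point, not a drop-in solution: the PHP version, the pdo_mysql extension, and the sed-based Apache port rewrite are assumptions you may need to adapt.
# Sketch of a minimal Dockerfile for a Laravel API on Cloud Run
# (assumes PHP 8 with Apache; add whatever extensions your app needs)
FROM php:8.0-apache

# MySQL driver for a Cloud SQL (MySQL) database
RUN docker-php-ext-install pdo_mysql

# Serve Laravel's public/ directory and listen on Cloud Run's $PORT
RUN sed -i 's!/var/www/html!/var/www/html/public!g' /etc/apache2/sites-available/000-default.conf \
    && sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf \
    && a2enmod rewrite

# Install production dependencies with Composer copied from the official image
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer install --no-dev --optimize-autoloader
With a Dockerfile like this you can test locally with docker build/docker run, then build and deploy with gcloud builds submit --tag gcr.io/PROJECT-ID/myapp followed by gcloud run deploy, instead of driving docker-compose from Cloud Build.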
Looking for a solution to this use case:
A Docker image is pushed to the Oracle Cloud Infrastructure container registry (OCIR).
Jenkins has a webhook on OCIR, and a Jenkins pipeline gets triggered when a new image becomes available in OCIR.
How is it possible to have a webhook or some other mechanism for letting Jenkins know there is a new push to OCIR?
This blog post walks you through how to set up a continuous integration pipeline that may be usable, in full or in part, to accomplish this:
https://blogs.oracle.com/cloud-infrastructure/build-a-continuous-integration-pipeline-using-github,-docker-and-jenkins-on-oracle-cloud-infrastucture
We can listen to OCI Container Registry events via Service Connector. You can configure Service Connector to invoke your custom functions on the specific event 'Container Image - Upload' under the service name 'Registry'.
You can find a sample illustration of performing custom tasks during an image upload to the OCI Container Registry here:
Ref: https://github.com/RahulMR42/oci-devops-deploy-on-imageupload
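To close the loop to Jenkins, one option is to enable "Trigger builds remotely" on the pipeline job and have the invoked function call the job's remote trigger endpoint. A sketch of that call, where the URL, job name, credentials and token are all illustrative:
# Call Jenkins' remote build trigger from the function once the
# 'Container Image - Upload' event fires (all values illustrative)
curl -X POST \
  --user "jenkins-user:api-token" \
  "https://jenkins.example.com/job/my-pipeline/buildWithParameters?token=MY_TRIGGER_TOKEN&IMAGE_TAG=latest"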
I feel that I'm going around in circles here, so please bear with me. I want to deploy my Spring Boot application to App Engine, but unlike the simple sample Google provides, mine requires a database, and that means credentials. I'm running Java 11 on the Google App Engine Standard environment.
I managed to make my app successfully connect by having this in the application.properties:
spring.datasource.url=jdbc:postgresql://google/recruiters_wtf?cloudSqlInstance=recruiters-wtf:europe-west2:recruiters-wtf&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=the_user&password=monkey123
The problem is that I don't want to commit any credentials to the repository, so, this is not acceptable. I could use an environment variable, but then I'll have to define them in the app.yaml file. I either keep a non-committed app.yaml file that is needed to deploy, which is cumbersome, or I commit it and I'm back at square one, committing credentials to the repository.
Since apparently Google App Engine cannot have environment variables defined in any other way (unlike Heroku), does this mean it's impossible to deploy a Spring Boot app to App Engine and have it connect to the database without using some unsafe/cumbersome practices? I feel I'm missing something here.
Based on my understanding of what you have described, you would essentially like to connect your Spring Boot application running on Google App Engine to a database without exposing the sensitive information. If that is the case, Cloud KMS offers secret management capabilities. Small pieces of sensitive data that an application requires at build or runtime are referred to as secrets. These secrets can be encrypted and decrypted with a symmetric key. In your case you can store the database credentials as secrets. You may find further details of the process for encrypting/decrypting a secret here.
There are currently three ways to manage secrets:
Storing secrets in code, encrypted with a key from Cloud KMS. This solution implements secrets at the application layer.
Storing secrets in a storage bucket in Cloud Storage, encrypted at rest. You can use a Cloud Storage bucket to store your database credentials and grant access to that bucket to a specific Service Account. This solution allows for separation of systems: in case the code repository is breached, your secrets themselves may still be protected.
Using a third-party secret management system.
In terms of storing the secrets themselves, I found the steps outlined here quite useful. That guide walks users through setting up and storing secrets within a Cloud Storage bucket. The secret is encrypted at the application layer with an encryption key from Cloud KMS. Given your use case, this would be a great option, as your secret would be stored in a bucket instead of your app.yaml file. Also, the secret being stored in a bucket lets you restrict access to it with service account roles.
Essentially, your app will need to perform an API call to Google Cloud Storage in order to download the KMS-encrypted file that contains the secret. It would then use the KMS key to decrypt the file so that it can read out the password and use it to make a manual connection to the database. Adding these extra steps implements more security layers, which is the entire idea behind the note in the Google example repository for Cloud SQL: "Saving credentials in environment variables is convenient, but not secure - consider a more secure solution such as Cloud KMS to help keep secrets safe."
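As a rough illustration of those two steps (download, then decrypt), here is a sketch assuming the google-cloud-storage and google-cloud-kms client libraries are on the classpath; the bucket, object and key names are purely illustrative:
import com.google.cloud.kms.v1.CryptoKeyName;
import com.google.cloud.kms.v1.DecryptResponse;
import com.google.cloud.kms.v1.KeyManagementServiceClient;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.protobuf.ByteString;

public class SecretFetcher {

    // Downloads the KMS-encrypted secret from Cloud Storage and decrypts it.
    static String fetchDbPassword() throws Exception {
        // 1. Download the encrypted file from the bucket
        Storage storage = StorageOptions.getDefaultInstance().getService();
        Blob blob = storage.get("my-secrets-bucket", "db-password.enc");
        byte[] ciphertext = blob.getContent();

        // 2. Decrypt it with the Cloud KMS key
        try (KeyManagementServiceClient kms = KeyManagementServiceClient.create()) {
            CryptoKeyName keyName =
                    CryptoKeyName.of("my-project", "global", "my-keyring", "my-key");
            DecryptResponse response = kms.decrypt(keyName, ByteString.copyFrom(ciphertext));
            return response.getPlaintext().toStringUtf8();
        }
    }
}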
I hope this helps!
Assuming you are OK with using KMS or GCS to get the credentials, you can programmatically set them in Spring Boot. See this post:
Configure DataSource programmatically in Spring Boot
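A minimal sketch of that approach, assuming a hypothetical fetchDecryptedPassword() helper that retrieves and decrypts the secret at startup (e.g. via GCS and KMS); the connection values are copied from the question:
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    // Build the DataSource in code so no credentials live in
    // application.properties or the repository.
    @Bean
    public DataSource dataSource() {
        // hypothetical helper: fetch and decrypt the password at startup
        String password = fetchDecryptedPassword();

        return DataSourceBuilder.create()
                .url("jdbc:postgresql://google/recruiters_wtf"
                        + "?cloudSqlInstance=recruiters-wtf:europe-west2:recruiters-wtf"
                        + "&socketFactory=com.google.cloud.sql.postgres.SocketFactory")
                .username("the_user")
                .password(password)
                .build();
    }

    private String fetchDecryptedPassword() {
        // e.g. download the encrypted file from GCS and decrypt it with KMS
        throw new UnsupportedOperationException("implement secret retrieval");
    }
}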
As you pointed out, there's no built-in way to set environment variables in App Engine other than with the app.yaml file. I'm not an expert in Spring Boot, but unless you can set/override some hook to initialize env vars from Java code prior to application.properties evaluation, you'll need to set these at build time.
Option 1: Using Cloud Build
I know you're not really keen on using Cloud Build, but it would be something like this.
First, following the instructions here (after creating a KeyRing and CryptoKey in KMS and granting access to the Cloud Build service account), encrypt your environment variable using KMS from your terminal and get back its base64 representation:
echo -n "$DB_PASSWORD" | gcloud kms encrypt \
  --plaintext-file=- \
  --ciphertext-file=- \
  --location=global \
  --keyring=[KEYRING-NAME] \
  --key=[KEY-NAME] | base64
# --plaintext-file=- reads from stdin; --ciphertext-file=- writes to stdout
Next, let's say you have an app.yaml file like this:
runtime: java11
instance_class: F1
env_variables:
  USER: db_user
  PASSWORD: db_passwd
create a cloudbuild.yaml file to define your build steps:
steps:
  # replace the password placeholder in app.yaml by its decrypted value from KMS
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: ['-c', 'sed -i "s/db_passwd/$$PASSWORD/g" src/main/appengine/app.yaml']
    secretEnv: ['PASSWORD']
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['clean']
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['package']
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['appengine:deploy']
    timeout: '1600s'
secrets:
  - kmsKeyName: projects/<PROJECT-ID>/locations/global/keyRings/<KEYRING_NAME>/cryptoKeys/<KEY_NAME>
    secretEnv:
      PASSWORD: <base64-encoded encrypted password>
timeout: '1600s'
You could then deploy your app by running the following command:
gcloud builds submit .
The advantage of this method is that your local app.yaml file only contains placeholder values and can be safely committed. You can even set this build to trigger automatically every time you commit to a remote repository.
Option 2: Locally with a bash script
Instead of running mvn appengine:deploy to deploy your app, you could create a bash script that replaces the values in app.yaml, deploys the app, and removes the values straight away. Something like:
#!/bin/bash
sed -i "s/db_passwd/$PASSWORD/g" src/main/appengine/app.yaml
mvn appengine:deploy
sed -i "s/$PASSWORD/db_passwd/g" src/main/appengine/app.yaml
and execute that bash script instead of running the maven command.
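A variation that avoids ever writing the real password back into a committed file is to keep a committed template and generate app.yaml only at deploy time (the template name is my own convention, not a Maven or App Engine requirement):
#!/bin/bash
# Render app.yaml from a committed template, deploy, then delete it
sed "s/db_passwd/$PASSWORD/g" src/main/appengine/app.yaml.template > src/main/appengine/app.yaml
mvn appengine:deploy
rm src/main/appengine/app.yaml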
I would probably suggest the combination of Spring Cloud Config and the Google Runtime Configuration API with your Spring Boot app.
Spring Cloud Config is a component which is responsible for retrieving configuration from remote locations and serving that configuration to your Spring Boot app during initialization/boot-up. The remote location can be anything: a Git repository is widely used, for example, but for your use case you can store the configuration in the Google Runtime Configuration API.
So a sample flow will be like this.
Your Spring Boot App(with Config Client) --> Spring Cloud Config Server --> Google Runtime Configuration API
This requires you to bring up a Spring Cloud Config Server as another app in GCP and enable communication from your Spring Boot apps to this centralized Config Server, which interacts with the Google Runtime Configuration API.
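The Config Server itself can be a very small Spring Boot app. A minimal sketch, assuming the spring-cloud-config-server dependency is on the classpath (the backend pointing at the Google Runtime Configuration API still needs to be configured separately):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Minimal Spring Cloud Config Server; @EnableConfigServer exposes the
// configuration-serving endpoints.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}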
Some documentation links.
https://cloud.spring.io/spring-cloud-config/reference/html/
https://docs.spring.io/spring-cloud-gcp/docs/1.1.0.M1/reference/html/_spring_cloud_config.html
https://cloud.google.com/deployment-manager/runtime-configurator/reference/rest/
Sample Spring Cloud GCP Config Example.
https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-config-sample
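On the client side, the linked sample configures the Spring Boot app roughly like this in bootstrap.properties, assuming the spring-cloud-gcp-starter-config dependency; the name and profile values are illustrative:
# bootstrap.properties (sketch, values illustrative)
spring.cloud.gcp.config.enabled=true
spring.cloud.gcp.config.name=myapp
spring.cloud.gcp.config.profile=prod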
You can try to encrypt the password in the application properties. Take a look at http://mbcoder.com/spring-boot-how-to-encrypt-properties-in-application-properties/
You should use GCP's Key Management Service: https://cloud.google.com/kms/
We use a few options:
1 - Without Docker
You can use this approach via the environment or the console.
We use (Spring Boot defined) environment variables. This is the default way of doing this:
SPRING_DATASOURCE_USERNAME=myusername
SPRING_DATASOURCE_PASSWORD=mypassword
According to the Spring Boot spec, these will override any values from application.properties. So you can specify a default username and password for development and have them overridden at (test or) production deploy time.
Another way is documented in this post:
spring.datasource.url = ${OPENSHIFT_MYSQL_DB_HOST}:${OPENSHIFT_MYSQL_DB_PORT}/"nameofDB"
spring.datasource.username = ${OPENSHIFT_MYSQL_DB_USERNAME}
spring.datasource.password = ${OPENSHIFT_MYSQL_DB_PASSWORD}
2 - A Docker-like approach via a console
Your question is described in this post. The default solution is to work with 'secrets', which are made specifically for this purpose. You can expose any secret (stored as a file) as an environment variable in your build and deployment process. This is a simple action that is described in many posts; look for newer approaches.
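A minimal docker-compose sketch of that idea (names and paths are illustrative); the container's entrypoint can then export SPRING_DATASOURCE_PASSWORD from the mounted file before starting the app:
# docker-compose.yml sketch: mount a secret into the container at
# /run/secrets/db_password instead of baking it into the image
version: "3.1"
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
secrets:
  db_password:
    file: ./db_password.txt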
I feel like this is easy to do. I used the properties below in my Spring Boot app, which connects to a Google Cloud SQL instance with a public IP:
spring.datasource.url=jdbc:postgresql://IP:5432/yourdatabasename
spring.datasource.platform=postgres
spring.datasource.username=youruser
spring.datasource.password=yourpass
The rest will be taken care of by the JpaRepository. I hope this helps.
Let me know if any help is needed.
I have created a Zend Expressive application that basically exposes a few APIs. I now want to deploy this to AWS Lambda. What is the best way to refactor the code quickly and easily (or are there any alternatives) to deploy it? I am fairly new to AWS.
I assume that you have found the answer already, since the question is more than five months old, but I am posting what I found in my recent research along the same lines. Please note that you need to have at least some idea of how AWS IAM, Lambda and API Gateway work in order to follow the steps I have described below. Also please note that I have only deployed the laminas/mezzio skeleton app during this research; you'll need much more work to deploy a real app, because it might need database and storage support in the AWS environment, which might require adapting your application accordingly.
PHP applications can be executed using the support for custom runtimes in AWS. You could check this AWS blog article on how to get it done, but it doesn't cover any specific PHP framework.
Then I found this project, which provides all the necessary tools for running a PHP application in a serverless environment. You can go through its documentation to get an understanding of how things work.
In order to get the laminas/mezzio (the new name of the Zend Expressive project) skeleton app working, I followed the Laravel tutorial given in the bref documentation. First I installed the bref package using
composer require bref/bref
Then I created the serverless.yml file in the root folder of the project according to the documentation and made a few tweaks to it, so it looked as follows:
service: myapp-serverless

provider:
  name: aws
  region: eu-west-1 # Change according to the AWS region you use
  runtime: provided

plugins:
  - ./vendor/bref/bref

package:
  exclude:
    - node_modules/**
    - data/**
    - test/**

functions:
  api:
    handler: public/index.php
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    memorySize: 512 # Memory size for the AWS Lambda function. Default is 1024MB
    layers:
      - ${bref:layer.php-73-fpm}
    events:
      - http: 'ANY /'
      - http: 'ANY /{proxy+}'
Then I followed the deployment guidelines given in the bref documentation, which is to use the Serverless Framework to deploy the app. You can check here how to install the Serverless Framework on your system and here how it needs to be configured.
To install Serverless I used npm install -g serverless.
To configure the tool I used serverless config credentials --provider aws --key <key> --secret <secret>. Please note that the key used here needs Administrator Access to the AWS environment.
Then the serverless deploy command will deploy your application to the AWS environment.
The result of the above command will give you an API Gateway endpoint through which your application/API will work. This is intended as a starting point for a PHP serverless application, and there might be a lot of other work needed to get a real application working there.
After my app is successfully pushed via cf I usually need to manually log in to the container via SSH and execute a couple of PHP scripts to clear and warm up my cache, potentially execute some DB schema updates, etc.
Today I found out about Cloud Foundry tasks, which seem to offer a neat way to do exactly this kind of thing, and I wanted to test whether I can integrate them into my build & deploy script.
So I used cf login, got successfully connected to the right org and space, the app was pushed and running, and I tried this command:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
(tried it with a couple of folder changes like app/bin/console etc.)
and this was the output:
Creating task for app MYAPP in org MYORG / space MYSPACE as me#myemail...
Unexpected Response
Response Code: 404
FAILED
Uses CF CLI: 6.32.0
cf logs ArcticTenTestBackend --recent does not output anything (this might be because I have enabled an ELK instance for logging; as I wanted to service-connect to ELK to look up the logs, I found out that the service-connector cf plugin is gone, for which I will open a new ticket).
Created new Issue for that: https://github.com/cloudfoundry/cli/issues/1242
This is not a CF CLI issue. Swisscom Application Cloud does not yet support Cloud Foundry tasks, which explains the 404 you are currently receiving. We will expose this feature of Cloud Foundry in an upcoming release of Swisscom Application Cloud.
In the meantime, maybe you can find a way to execute your one-off tasks (cache warming, DB migrations) at application startup.
As mentioned by @Mathis Kretz, Swisscom has since gotten around to enabling cf run-task. They sent out e-mails on 22 November 2018 to announce the feature.
As described in the linked documentation, you use the following commands to manage tasks:
cf tasks [APP_NAME]
cf run-task [APP_NAME] [COMMAND]
cf terminate-task [APP_NAME] [TASK_ID]
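With the feature enabled, the command from the question should now work as-is, for example:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
cf tasks MYAPP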