Why is it recommended to use a cloud builder for yarn? - google-cloud-build

Looking at the source code for the yarn builder for Google Cloud Build, I was wondering why it is recommended to use the builder rather than specifying the entrypoint yourself.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/yarn
Basically:
steps:
- name: 'gcr.io/cloud-builders/yarn'
  args:
  - install
vs
steps:
- name: node:10
  entrypoint: yarn
  args:
  - install
Is it because the cloud builder is registered in Google Container Registry, which is faster to read from within Google Cloud Build?

Yes, you are correct. It's recommended because reads from Container Registry are faster from within Cloud Build, so pulling the builder image is quicker.
As the code indicates, you reference the yarn container image directly, which makes access faster than pulling a public image and overriding its entrypoint.
Let me know if this information helped you!

Related

Create Argo CD application not from repo but from CRD

I want to create an ArgoCD Application (app#1) that will contain an operator (for instance, the Postgres operator), and I want to create another ArgoCD Application (app#2), but this time the application (app#2) should be the instance of the Postgres DB itself, managed by the operator installed with app#1. Is it possible, using the ArgoCD source code, to create app#2 with the CRD of the Postgres DB (this CRD is likely part of the Helm chart of the Postgres operator)?
I'm not entirely sure if I got your question right; however, you can use the "app of apps" pattern to have one ArgoCD application install several other applications on your behalf. You can read the official docs here: https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/
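As a rough sketch of that pattern (the repo URL, path, and names below are placeholders, not from your setup): a parent Application points at a Git directory that itself contains further Application manifests, e.g. one for the Postgres operator and one for the DB instance.

```yaml
# Parent "app of apps": syncing this Application makes Argo CD apply
# every manifest under apps/, which can itself contain Application CRs
# (e.g. the Postgres operator and a Postgres instance managed by it).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parent-apps          # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/my-org/gitops-repo.git  # placeholder repo
    targetRevision: HEAD
    path: apps               # directory holding the child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}            # sync children automatically
```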

How to deploy Laravel 8 google cloud run with google cloud database

I am looking for help to containerize a Laravel application with Docker, run it locally, and make it deployable to Cloud Run, connected to a Google Cloud database.
My application is an API built with Laravel, and so far I have just used the docker-compose/Sail package that comes with Laravel 8 in development.
Here is what I want to achieve:
Laravel app running on Cloud Run.
Database in Google Cloud: MySQL, PostgreSQL or SQL Server (MySQL preferred).
Environment stored in Google Cloud.
My problem is I can't find any info on if or how to use/rewrite the docker-compose file in Laravel 8, create a Dockerfile or cloudbuild file, and build it for Google Cloud.
Maybe I could add something like this in a cloudbuild.yml file:
#cloudbuild.yml
steps:
# running docker-compose
- name: 'docker/compose:1.26.2'
  args: ['up', '-d']
Any help/guidance is appreciated.
As mentioned in the comments to this question, you can check this video that explains how you can use docker-compose and Laravel to deploy an app to Cloud Run with a step-by-step tutorial.
As for connecting a database to said app, the Connecting from Cloud Run (fully managed) to Cloud SQL documentation is quite complete on that matter, and for secret management I found this article that explains how to implement Secret Manager in Cloud Run.
I know this answer is basically just links to the documentation and articles, but I believe all the information you need to implement your app into Cloud Run is in those.
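To make the shape of such a pipeline concrete, here is a minimal sketch of a Cloud Build config that builds the Laravel image from a Dockerfile and deploys it to Cloud Run. The image name, service name, and region are placeholder assumptions, and it presumes a working Dockerfile in the repo root:

```yaml
# cloudbuild.yaml - sketch: build the container, push it, deploy to Cloud Run.
steps:
# Build the image from the repo's Dockerfile.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/laravel-api', '.']
# Push it to Container Registry so Cloud Run can pull it.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/laravel-api']
# Deploy the pushed image to Cloud Run (service name and region are placeholders).
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'laravel-api',
         '--image', 'gcr.io/$PROJECT_ID/laravel-api',
         '--region', 'europe-west1', '--platform', 'managed']
images:
- 'gcr.io/$PROJECT_ID/laravel-api'
```

Note that this replaces the docker-compose step from the question: Cloud Run runs a single container image, so the compose file is mainly useful for local development while the Dockerfile drives the deployment.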

How to setup multiple visual studio solutions working together using docker compose (with debugging)

Lots of questions seem to have been asked about getting multiple projects inside a single solution working with docker compose but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and a MongoDB image (see image below - sorry I couldn't get it to paste correctly as text/code):
Now when I launch, all the new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution and unsurprisingly it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ image).
I kind of expected this but don't know the best/right way to properly configure this and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project you end up with something like:
services:
  my-api-project:
    <as you have it currently>
  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/ # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      - <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit
  some-rabbit:
    <as you have it currently>
  some-mongo:
    <as you have it currently>
So I thought I would answer my own question as I think I eventually found a good (not perfect) solution. I did the following steps:
1. Created a custom docker network.
2. Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network).
3. Set up docker-compose container orchestration support for each service (right-click on the API project and choose add container orchestration support).
4. The above step creates the docker-compose project in the solution with docker-compose.yml and docker-compose.override.yml.
5. I then edit the docker-compose.yml so that the containers use my custom docker network and also explicitly specify the port numbers (so they're consistently the same).
6. I edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB). No more need to worry about IPs, and when I set the solution to start up using the docker-compose project in debug mode, my debug containers can access those services.
7. Now I can close the VS solution, go to the command line, navigate to the solution folder and run 'docker-compose up' to start the containers.
8. I set up each VS solution as per steps 3-7 and can start any/all services locally without the need to open VS any more (provided I don't need to debug).
9. When I need to debug/change a service, I stop the specific container (i.e. 'docker container stop <containerId>'), then open the solution in VS and start it in debug mode/make changes etc.
10. If I pull down changes by anyone else, I rebuild the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
As a nice bonus, I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment of 10 services is up and running.
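The custom-network part of the steps above can be sketched as follows (the network and service names are placeholders, not from the original setup). The network is created once on the host and then referenced as external from each solution's compose file, so containers from different solutions can resolve each other by name:

```yaml
# First, create the shared network once on the host:
#   docker network create my-shared-network
#
# Then, in each solution's docker-compose.yml, attach the services
# to that network as an external one:
services:
  my-api-project:
    networks:
      - my-shared-network
networks:
  my-shared-network:
    external: true   # the network already exists; compose won't try to create it
```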
For the most part this works great but with some caveats:
I use https and dev-certs, and sometimes things don't play well and I have to clean the certs and re-trust them, because Kestrel throws errors and expects the certificate to be trusted and to have a certain name. I'm working on improving this, but you could always not use https locally in dev.
If you're using your own NuGet server like me, you'll need a NuGet.config file and to copy it in as part of your Dockerfiles.

Using Oracle pre-built Docker image as a service for GitLab CI runner

I am currently in the process of creating an integration-tests pipeline for a few projects we have that require the presence of an Oracle database to work. In order to do so, I have gone through the process of creating a dockerized pre-built Oracle database using the instructions mentioned in this document:
https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance/samples/prebuiltdb
I have successfully built the image and I'm able to verify that it works indeed correctly. I have pushed the image in question to one of our custom docker repositories and I am also able to successfully fetch from the context of the runner.
My main problem is that when the application attempts to connect to the database, it fails with a connection refused error, as if the database is not running (mind you, I'm running the runner locally in order to test this). My questions are the following:
1. When using a custom image, what is the name the runner exposes it under? For example, the documentation states that when I use mysql:latest then the exposed service name would be mysql. Is this the case for custom images as well? Should I name it with an alias?
2. Do I need to expose ports/bridge docker networks in order to get this to work correctly? My reasoning behind the failure leads me to believe that the image that runs the application is not able to properly communicate with the Oracle service.
For reference my gitlab-ci.yml for the job in question is the following:
integration_test:
  stage: test
  before_script:
    - echo 127.0.0.1 inttests.myapp.com >> /etc/hosts
  services:
    - <repository>/devops/fts-ora-inttests-db:latest
  script:
    - ./gradlew -x:test integration-test:test
  cache:
    key: "$CI_COMMIT_REF_NAME"
    paths:
      - build
      - .gradle
  only:
    - master
    - develop
    - merge_requests
    - tags
  except:
    - api
Can anyone please lend a hand in getting this to work correctly?
I recently configured a gitlab-ci.yml to use an Oracle docker image as a service and used our custom docker repository to fetch the image.
I'll answer your question one by one below:
1.a When using a custom image, what is the name the runner exposes it under? For example, the documentation states that when I use mysql:latest then the exposed service name would be mysql. Is this the case for custom images as well?
By default, the Gitlab runner will deduce the name of the service based from the following convention [1]:
The default aliases for the service’s hostname are created from its image name following these rules:
Everything after the colon (:) is stripped.
Slash (/) is replaced with double underscores (__) and the primary alias is created.
Slash (/) is replaced with a single dash (-) and the secondary alias is created (requires GitLab Runner v1.1.0 or higher).
So in your case, since the service image you used is <repository>/devops/fts-ora-inttests-db:latest, the GitLab runner will by default generate two (2) aliases:
<repository>__devops__fts-ora-inttests-db
<repository>-devops-fts-ora-inttests-db
In order to connect to your oracle database service, you need to refer to the alias in your configuration file or source code.
e.g.
database.url=jdbc:oracle:thin:@<repository>__devops__fts-ora-inttests-db:1521:xe
# or
database.url=jdbc:oracle:thin:@<repository>-devops-fts-ora-inttests-db:1521:xe
1.b Should I name it with an alias?
In my opinion, you should just declare an alias for the service in order to keep the name simple; the runner will automatically use it and you can refer to it the same way.
e.g.
# .gitlab-ci.yml
services:
  - name: <repository>/devops/fts-ora-inttests-db:latest
    alias: oracle-db

# my-app.properties
database.url=jdbc:oracle:thin:@oracle-db:1521:xe
2. Do I need to expose ports/bridge docker networks in order to get this to work correctly?
Yes, the Oracle DB docker image used for the service must have ports 1521 and 5500 declared as exposed in its Dockerfile in order for you to access it.
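For instance, a minimal sketch of what the service image's Dockerfile should declare (the FROM line is a placeholder for your pre-built Oracle image, following the document's <repository> convention):

```dockerfile
# Placeholder base: your pre-built Oracle DB image from the custom repository.
FROM <repository>/devops/fts-ora-inttests-db:latest
# 1521 = Oracle listener port, 5500 = EM Express.
# Exposing them lets the GitLab job container reach the database service.
EXPOSE 1521 5500
```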
Sources:
GitLab documentation: Accessing the services

How can I deploy APIs created in Zend Expressive to AWS Lambda?

I have created a Zend expressive application that basically exposes a few APIs. I want to deploy this now to AWS Lambda. What is the best way to refactor the code quickly and easily (or is there any other alternatives) to deploy it? I am fairly new in AWS.
I assume that you have found the answer already, since the question is more than five months old, but I am posting what I have found in my recent research into the same area. Please note that you need to have at least some idea of how AWS IAM, Lambda and API Gateway work in order to follow the steps I have described below. Also please note that I have only deployed the laminas/mezzio skeleton app during this research; you'll need much more work to deploy a real app, because it might need database and storage support in the AWS environment, which might require you to adapt your application accordingly.
A PHP application can be executed using the support for custom runtimes in AWS Lambda. You could check this AWS blog article on how to get it done, but it doesn't cover any specific PHP framework.
Then I found this project, which provides all the necessary tools for running a PHP application in a serverless environment. You can go through their documentation to get an understanding of how things work.
In order to get the laminas/mezzio (the new name of the Zend Expressive project) skeleton app working, I followed the Laravel tutorial given in the Bref documentation. First I installed the bref package using:
composer require bref/bref
Then I created the serverless.yml file in the root folder of the project according to the documentation, made a few tweaks, and it ended up looking as follows:
service: myapp-serverless

provider:
  name: aws
  region: eu-west-1 # Change according to the AWS region you use
  runtime: provided

plugins:
  - ./vendor/bref/bref

package:
  exclude:
    - node_modules/**
    - data/**
    - test/**

functions:
  api:
    handler: public/index.php
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    memorySize: 512 # memory size for the AWS Lambda function; default is 1024 MB
    layers:
      - ${bref:layer.php-73-fpm}
    events:
      - http: 'ANY /'
      - http: 'ANY /{proxy+}'
Then I followed the deployment guidelines given in the Bref documentation, which is to use the Serverless Framework to deploy the app. You can check here how to install the Serverless Framework on your system and here to see how it needs to be configured.
To install Serverless I used npm install -g serverless.
To configure the tool I used serverless config credentials --provider aws --key <key> --secret <secret>. Please note that the key used here needs Administrator Access to the AWS environment.
Then the serverless deploy command will deploy your application to the AWS environment.
The result of the above command will give you an API Gateway endpoint with which your application/API will work. This is intended as a starting point for a PHP serverless application; there might be lots of other work needed to get a real application working there.