I want to create an ArgoCD Application (app#1) that will contain an operator (for instance, the Postgres operator), and I want to create another ArgoCD Application (app#2), but this time the application (app#2) should be an instance of the Postgres DB itself, managed by the operator installed with app#1. Is it possible with ArgoCD to create this app#2 using the CRD of the Postgres DB (this CRD is likely part of the Helm chart of the Postgres operator)?
I'm not entirely sure I got your question right; however, you can use the "app of apps" pattern to have one ArgoCD Application install several other Applications on your behalf. You can read the official docs here: https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/
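For the operator-plus-instance setup you describe, a minimal sketch with two Applications might look like the following; this is only an illustration, and the repository URLs, chart name, namespaces, and the kind of the Postgres custom resource are assumptions that depend on the operator you actually use:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postgres-operator            # app#1: installs the operator and its CRDs
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/helm-charts   # illustrative Helm repository
    chart: postgres-operator
    targetRevision: 1.0.0
  destination:
    server: https://kubernetes.default.svc
    namespace: operators
  syncPolicy:
    automated: {}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-postgres                  # app#2: points at manifests holding only the Postgres custom resource
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/gitops-repo.git   # illustrative Git repository
    path: databases/my-postgres
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: databases
  syncPolicy:
    automated: {}

The path referenced by app#2 would hold the custom resource that the operator reconciles (its apiVersion and kind depend on the operator). Note that app#2 can only sync successfully after app#1 has installed the CRD, so some ordering (for example sync waves, or simply syncing app#1 first) may be needed.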
We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files, one specific to each of our 5 environments. These properties files are in a simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access application.properties files that live outside of the chart's ./deploy directory.
From Helm, I'd like to reference these 2 files from within my configmap.yaml's data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without the need to create a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm-managed environment variables. I know you mentioned you don't want to do this, but this is considered a good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave of changing a command-line parameter to pick the config file will probably work well. At the same time, keep the 12-factor-app approach in mind in case you find out you do need it in the future.
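To make that concrete, here is a minimal sketch of a ConfigMap template, assuming the properties file is copied into the chart so that .Files.Get can reach it (Helm can only read files that live inside the chart directory) and that the per-environment parameters live in values.yaml; the paths and key names below are illustrative, not taken from the question:

# deploy/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.properties: |-
{{ .Files.Get "config/application.properties" | indent 4 }}
  logback.xml: |-
{{ .Files.Get "config/logback.xml" | indent 4 }}

Alternatively, Helm 3's --set-file flag can pass a file that sits outside the chart at install time (for example helm install ./deploy --set-file appProperties=src/main/resources/dev/application.properties), and the template would then embed {{ .Values.appProperties }} instead of using .Files.Get.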
I am currently in the process of creating an integration-tests pipeline for a few projects we have that require the presence of an Oracle database to work. In order to do so, I have gone through the process of creating a dockerized pre-built Oracle database using the instructions mentioned in this document:
https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance/samples/prebuiltdb
I have successfully built the image and I'm able to verify that it indeed works correctly. I have pushed the image in question to one of our custom Docker repositories and I am also able to successfully fetch it from the context of the runner.
My main problem is that when the application attempts to connect to the database, it fails with a connection refused error, as if the database is not running (mind you, I'm running the runner locally in order to test this). My questions are the following:
1. When using a custom image, what is the name under which the runner exposes it? For example, the documentation states that when I use mysql:latest, the exposed service name would be mysql. Is this the case for custom images as well? Should I name it with an alias?
2. Do I need to expose ports/bridge Docker networks in order to get this to work correctly? My reasoning about the failure leads me to believe that the image that runs the application is not able to properly communicate with the Oracle service.
For reference my gitlab-ci.yml for the job in question is the following:
integration_test:
  stage: test
  before_script:
    - echo 127.0.0.1 inttests.myapp.com >> /etc/hosts
  services:
    - <repository>/devops/fts-ora-inttests-db:latest
  script:
    - ./gradlew -x:test integration-test:test
  cache:
    key: "$CI_COMMIT_REF_NAME"
    paths:
      - build
      - .gradle
  only:
    - master
    - develop
    - merge_requests
    - tags
  except:
    - api
Can anyone please lend a hand in getting this to work correctly?
I recently configured a gitlab-ci.yml to use an Oracle Docker image as a service and used our custom Docker repository to fetch the image.
I'll answer your questions one by one below:
1.a When using a custom image, what is the name under which the runner exposes it? For example, the documentation states that when I use mysql:latest, the exposed service name would be mysql. Is this the case for custom images as well?
By default, the GitLab runner will deduce the name of the service based on the following convention [1]:
The default aliases for the service’s hostname are created from its image name following these rules:
Everything after the colon (:) is stripped.
Slash (/) is replaced with double underscores (__) and the primary alias is created.
Slash (/) is replaced with a single dash (-) and the secondary alias is created (requires GitLab Runner v1.1.0 or higher).
So in your case, since the service name you used is <repository>/devops/fts-ora-inttests-db:latest, the GitLab runner will by default generate two (2) aliases:
<repository>__devops__fts-ora-inttests-db
<repository>-devops-fts-ora-inttests-db
In order to connect to your Oracle database service, you need to refer to the alias in your configuration file or source code, e.g.:
database.url=jdbc:oracle:thin:@<repository>__devops__fts-ora-inttests-db:1521:xe
# or
database.url=jdbc:oracle:thin:@<repository>-devops-fts-ora-inttests-db:1521:xe
1.b Should I name it with an alias?
In my opinion, you should declare an alias for the service in order to keep the name simple; the runner will automatically use it and you can refer to it the same way.
e.g.
# .gitlab-ci.yml
services:
  - name: <repository>/devops/fts-ora-inttests-db:latest
    alias: oracle-db

# my-app.properties
database.url=jdbc:oracle:thin:@oracle-db:1521:xe
2. Do I need to expose ports/bridge docker networks in order to get this to work correctly?
Yes, the Oracle DB Docker image used for the service must have ports 1521 and 5500 declared as exposed in its Dockerfile in order for you to access it.
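Putting both points together, a sketch of how your job could look with an alias follows; the alias name oracle-db, the variable name, and the connection string are illustrative assumptions rather than values taken from your setup:

integration_test:
  stage: test
  services:
    - name: <repository>/devops/fts-ora-inttests-db:latest
      alias: oracle-db
  variables:
    DB_URL: jdbc:oracle:thin:@oracle-db:1521:xe   # the job reaches the service via its alias, not 127.0.0.1
  script:
    - ./gradlew -x:test integration-test:test

Note that from the job container the database is reachable via the service alias rather than via 127.0.0.1, so the before_script entry that points inttests.myapp.com at 127.0.0.1 would need to be dropped or repointed at the alias.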
Sources:
[1] GitLab documentation: Accessing the services
Looking at the source code for the yarn builder for Google Cloud Build, I was wondering why it is recommended to use the builder rather than specifying the entrypoint.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/yarn
Basically
steps:
- name: 'gcr.io/cloud-builders/yarn'
  args:
  - install

vs

steps:
- name: node:10
  entrypoint: yarn
  args:
  - install
Is it because the cloud builder is registered with the Google Cloud Container Registry which is faster to read from within Google Cloud build?
Yes, you are correct. It's recommended because reading the image from Container Registry is faster when using the builder.
As the code indicates, you are referencing the yarn container directly, which makes access faster than using an entrypoint.
Let me know if the information helped you!
I've got a local Strapi setup with SQLite. Sadly, I didn't think ahead that I would need to use Postgres to deploy to Heroku later.
After struggling to deploy the project using SQLite, I decided to create a new project using Postgres and successfully deployed it to Heroku. Now, in the local project, I've already set up content types, pages and everything. Instead of having to recreate what I have done locally, I was wondering how I can copy what I've done over to the new project on Heroku, including the database (SQLite --> Postgres).
Has anyone done this before or maybe could point me to the right direction?
thank you in advance!
According to this:
https://github.com/strapi/strapi/issues/205#issuecomment-490813115
Database migration (content types and relations) is no longer an issue, but moving existing data entries from one database to another is.
To change the database provider, I suppose you just need to edit config/environments/**/database.json according to your Postgres setup.
I faced the same issue; my solution is to use a new project to generate a core for PostgreSQL and then run your existing code base against the freshly created PostgreSQL setup:
1. npx create-strapi-app my-project and then choose custom -> PostgreSQL (Link)
2. Manually create the collections that exist in SQLite, without fields
3. Run your old codebase with the new database config pointing at PostgreSQL (that will create the fields you have in your data models)
It requires a little bit of manual work, but it works for me. Good luck!
I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda, and by doing so saves the generated API Gateway URL to its remote state file. Then when I install my Helm chart, I'd like it if Helm were smart enough to automatically pull from the Terraform remote state file to get the URL it needs of the API Gateway endpoint, to use as a variable within my chart.
Currently, I either have to copy and paste, or use Bash. I can get away with doing it with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any functionality to look up values in external files or remote state from its templates.
It needs you to tell it exactly what to inject.
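In practice, that means the chart only declares a value and the caller supplies it at install time, exactly as your script already does with --set. A minimal sketch, assuming a values.yaml key and template layout that match your chart (the names below are illustrative):

# values.yaml
webhook:
  url: ""          # supplied at install time, e.g. --set webhook.url=...

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-webhook
data:
  WEBHOOK_URL: {{ .Values.webhook.url | quote }}

The value itself still has to come from whatever reads the Terraform state (your script, a CI job, or some wrapper tooling), because Helm will not fetch it for you.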