Conditionally manage Helm chart dependencies without keeping the child charts inside the 'charts' directory

I currently have 3 Helm repositories with the following structure:
repoA/
├── templates/
├── Chart.yaml
├── values.yaml
repoB/
├── templates/
├── Chart.yaml
├── values.yaml
masterRepo/
├── templates/
├── Chart.yaml
├── values.yaml
├── requirements.yaml
The requirements.yaml file from masterRepo is something like below:
dependencies:
  - name: repoA
    version: "1.0"
    repository: "file://../repoA"
    condition: repoA.enabled
  - name: repoB
    version: "1.0"
    repository: "file://../repoB"
    condition: repoB.enabled
I would like to only use masterRepo to deploy the dependent Helm charts.
I know I can manually put all the child repositories in masterRepo/charts and it will work, but I want to keep these repositories independent so that other master repositories can use any of them.
What can I do to make the parent Helm chart detect all the required Helm charts and install them conditionally (based on the repoX.enabled variable) without keeping the dependent repositories inside the charts directory of the master Helm chart?

If you have multiple Helm charts at different locations in the system, you can create dependencies without changing their location.
With the structure specified in the question, we can add dependencies in requirements.yaml (for Helm 2.x.x) or Chart.yaml (for Helm 3.x.x). I am currently using Helm v2.16.1.
Now simply run helm dependency update (or helm dep up) from inside the masterRepo directory, and a charts directory gets created. The updated structure of masterRepo looks like:
masterRepo/
├── charts/
│   ├── chartA-1.tgz
│   └── chartB-1.tgz
├── templates/
├── Chart.yaml
├── requirements.lock
├── requirements.yaml
└── values.yaml
The new files/directories added are:
chartA-1.tgz and chartB-1.tgz: tar archives, which are nothing but the packaged chartA and chartB charts.
requirements.lock: used to rebuild the charts/ directory.
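For example, if the charts/ directory is ever deleted, it can be rebuilt from the lock file:
cd masterRepo
helm dependency build   # rebuilds charts/ from requirements.lock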
To install the child charts conditionally, you can add the following to the values.yaml file of the masterRepo:
repoA:
  enabled: true
repoB:
  enabled: true
Now a simple helm install from inside the masterRepo directory will deploy masterRepo as well as its dependencies (chartA and chartB).
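As an illustration, a dependency can also be toggled at install time by overriding its condition flag (the release name below is an assumption):
cd masterRepo
helm dependency update
# skip repoB for this release by overriding its condition flag
helm install . --name master-release --set repoB.enabled=false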
Hope this helps. Happy Helming!

Related

Make gradle point to subdirectory and treat it as a rootProject

I encountered a problem with my Gradle project structure. I have a task that needs to be implemented, and some tests are meant to be executed to check whether my project structure is correct and whether the Gradle tasks execute correctly. However, I think I misunderstood the instructions a bit, and I'm wondering whether I can do something with my current folder structure or if I will have to rewrite the whole project. My current project structure looks like this:
main-repo-folder/
├── docker-related-file
├── rootProject
│ ├── sub-project-1
│ ├── build(output from tasks is created here)
│ ├── build.gradle
│ ├── sub-project-2
│ ├── gradle
│ ├── gradlew
│ ├── gradlew.bat
│ ├── settings.gradle
│ └── src
As you can see, the root project is a directory inside the repo. In order for my tests to execute, I think the repo itself must be the root folder (or act as one), because that is where the tests seem to run. And here is my question: is it possible to add e.g. a settings.gradle file in main-repo-folder (at the same level as the rootProject folder) to "point" Gradle at rootProject and treat that folder as the root? (I mean, if I call gradle clean build task_name in main-repo-folder, I want Gradle to execute it as if I were in the rootProject folder.)
I've tried to find some information, but I'm still learning Gradle and I don't know if this is even possible :/
Rename main-repo-folder/rootProject to main-repo-folder.
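In practice that means moving the contents of rootProject up one level so the repo root becomes the Gradle root project; a minimal shell sketch, assuming no filename collisions at the repo root:
cd main-repo-folder
# move the Gradle build up one level
mv rootProject/* .
mv rootProject/.??* . 2>/dev/null   # hidden files such as .gradle, if any
rmdir rootProject
After this, gradle clean build task_name run from main-repo-folder behaves as it previously did from inside rootProject.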

Bash: automatically add a file to an Xcode project?

I am creating a script.sh file that creates a Test.swift file and adds it to an Xcode project. However, I would like to know if there is a way to add this file to the Xcode project (in the project.pbxproj file) from this script, instead of doing it manually in Xcode (Add Files to Project...).
Thank you
3/05 Update
I tried @Johnykutty's answer; here is my current Xcode project before executing the Ruby script:
I have already generated an A folder with a Sample.swift file located in test, but these files are not linked to my Xcode project yet:
Now here is the script that I'm executing:
require 'xcodeproj'
project_path = '../TestCodeProjTest.xcodeproj'
project = Xcodeproj::Project.open(project_path)
file_group = project["TestCodeProjTest"]["test"]
file_group.new_file("#{project.project_dir}/TestCodeProjTest/test/A")
project.save()
This almost works fine, except that it creates a folder reference instead of a group, and it doesn't link it to my target:
Hence the content of Sample.swift is unreachable.
It's hard to achieve with bash, but really easy if you use Ruby and the xcodeproj gem from CocoaPods.
Suppose you have a file structure like this:
├── GeneratedFiles
│   └── Sample1.swift
├── MyProject
│   ├── AppDelegate.swift
│   ├── ... all other files
│   ├── SceneDelegate.swift
│   └── ViewController.swift
├── MyProject.xcodeproj
│   ├── project.pbxproj
│   ├── .....
└── add_file.rb
Then you can add files like this:
require 'xcodeproj'
project_path = 'MyProject.xcodeproj'
project = Xcodeproj::Project.open(project_path)
file_group = project["MyProject"]
file_group.new_file("../GeneratedFiles/Sample1.swift")
project.save()
UPDATE:
project["MyProject"] returns a file group which is a group named MyProject in the root of the project, you can select another group inside MyProject by file_group = project["MyProject"]["MyGroup"]
Then the generated file path should be either related to that group like file_group.new_file("../../GeneratedFiles/Sample1.swift") or full path like file_group.new_file("#{project.project_dir}/GeneratedFiles/Sample1.swift")
More details about Xcodeproj are available in its documentation.
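To also address the follow-up issue above (the new file not being linked to a target), the file reference returned by new_file can be attached to a target's build phase; a minimal sketch, assuming the first target is the one you want:
require 'xcodeproj'

project = Xcodeproj::Project.open('MyProject.xcodeproj')
file_group = project['MyProject']
file_ref = file_group.new_file("#{project.project_dir}/GeneratedFiles/Sample1.swift")

# add the file reference to the target's "Compile Sources" build phase
target = project.targets.first
target.add_file_references([file_ref])
project.save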

golang unknown revision module/vX.Y.Z and importing package properly

I have a golang application structure like this:
.
├── calc
│   ├── go.mod
│   ├── main.go
│   └── Makefile
├── go.mod
├── LICENSE
├── num
│   ├── go.mod
│   └── num.go
└── README.md
Where calc is an "application" where I'm importing the num package to add 2 numbers.
calc/go.mod
go 1.15
require github.com/github_username/goapp/num v0.2.1
num/go.mod
module github.com/github_username/goapp/num/v0.2.1
go 1.15
go.mod
module github.com/github_username/goapp/v0.2.1
go 1.15
When in /calc, and I run go run main.go, I get the following:
go: github.com/github_username/goapp/num@v0.2.1: reading github.com/github_username/goapp/num/num/go.mod at revision num/v0.2.1: unknown revision num/v0.2.1
What am I doing wrong? The github repo has the annotated tags.
For further context, I'm mimicking a production setup where we have six different mini golang services in folders such as calc, calc2, etc. where each "calc" service has a go.mod file.
module github.com/github_username/goapp/num/v0.2.1
is nonsense: the semver version tag "v0.2.1" does not belong in the module name.
(Note that for major versions > 1, e.g. 4.3.1, the major version becomes part of the name, as in module github.com/user/proj/folder/v4.)
And one more thing: there is no source code belonging to the root go.mod, so that module makes no sense whatsoever.
You really should not create that many modules.
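A sketch of what the corrected files could look like (the calc module path is an assumption, since the question elides it):
// num/go.mod
module github.com/github_username/goapp/num

go 1.15

// calc/go.mod
module github.com/github_username/goapp/calc

go 1.15

require github.com/github_username/goapp/num v0.2.1
Note that in a multi-module repository the tag for the num module must be prefixed with its directory, i.e. git tag num/v0.2.1, which is exactly the revision the error message above was looking for.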
Are you working with private repositories?
If yes, then you need to configure OAuth authentication:
export GITHUB_TOKEN=MY_GITHUB_TOKEN
git config --global url."https://${GITHUB_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
If you don't need private repositories, making them public will solve the problem too.
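One more setting that often matters here (an addition for completeness): with Go 1.13+ the go command consults the public module proxy by default, so private module paths should bypass it:
export GOPRIVATE=github.com/github_username/*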

Automatic Ansible custom modules installation with Ansible Galaxy

Is there any nice way to use Ansible Galaxy in order to install and enable Ansible (2.7.9) custom modules?
My requirements file lets Ansible Galaxy download the right Ansible role, which embeds my custom module. Once I run ansible-galaxy install --roles-path ansible/roles/ -r roles/requirements.yml, I get the following structure (non-exhaustive):
├── ansible
│   ├── roles
│   │   ├── mymodule (being imported by Galaxy)
│   │   │   ├── library
│   │   │   │   └── mymodule.py
Looking at this part of the documentation, it seems like my module is in the right place and does not require any further configuration: https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html?highlight=library#directory-layout
But when I found this part of the documentation I got confused. Is ANSIBLE_LIBRARY related to the custom modules?
DEFAULT_MODULE_PATH
Description: Colon separated paths in which Ansible will search for Modules.
Type: pathspec
Default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
Ini Section: defaults
Ini Key: library
Environment: ANSIBLE_LIBRARY
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-module-path
When calling my module:
- name: Test of my Module
  mymodule:
I get the following error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
I expected not to have to configure ANSIBLE_LIBRARY, with the module being automatically callable. Am I understanding this correctly, or should I also set this variable?
If your custom module is in a role, you need to include the role in your playbook, so at the very least:
---
- hosts: myhosts
  roles:
    - role: mymodule
  tasks:
    - name: Test of my Module
      mymodule:
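Alternatively, if you want the module resolvable outside that role, ANSIBLE_LIBRARY (the library ini key quoted above) is exactly the knob for custom modules; a minimal sketch pointing it at the role's library directory:
# ansible.cfg
[defaults]
library = ./ansible/roles/mymodule/library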

Dockerize a multi maven project (not multi-module)

In my Maven application I have multiple projects:
Core
Application 1
Application 2
Application 1 and Application 2 are two projects that use the core (for example, applications built for two different customers).
In order to dockerize all of them, the simplest way would be to create a multi-module project, but the downside is that I would have everything inside a single project (core + Application 1 + Application 2).
I would like to keep the core separated from them.
The main problem with this configuration is that the core project needs to be built before the other two, and App 1 and App 2 use it as a Maven dependency:
App 1
<dependency>
    <groupId>it.myorg</groupId>
    <artifactId>core-project</artifactId>
    <version>1.12.0-SNAPSHOT</version>
</dependency>
If I try to dockerize App 1, it fails when I package it, because core-project 1.12.0-SNAPSHOT does not exist inside the Docker container.
I was thinking of setting up a local Maven repository, pushing the core there so that App 1 pulls the jar from that repository rather than from the .m2 folder, but I don't like this solution.
I can provide more information; sorry that I don't provide examples, but I'm starting from a blank page right now :(
Folder structure
+- Core
--- pom.xml
--- src
+- Application1
--- pom.xml
--- src
The solution I'm trying now is to create a Dockerfile for the core project (FROM maven:latest), build that image with a tag, and use this image in App 1's Dockerfile (so, a multi-stage build, but in two separate steps).
The best would be
FROM maven:latest as core-builder
## build the core
FROM maven:latest
## Copy jar from builder
Because the projects are in separate folders, I can't build the core this way. I need to build the core BEFORE (running docker build -t) and copy from it later.
UPDATE
After the correct answer from @mihai, I'm asking whether a structure like this is possible:
-- myapp-docker
- Dockerfile
- docker-compose.yml
-- core-app
-- application_1
Having the Dockerfile at the same level as core-app and application_1 is totally fine and 100% working. The only "problem" is that I would have to put all the projects in the same repo.
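For what it's worth, a sketch of how that layout could work without moving the projects (service and file names are assumptions): docker-compose can use the repository root as the build context while the Dockerfile lives in myapp-docker:
# myapp-docker/docker-compose.yml
version: "3"
services:
  application1:
    build:
      context: ..                          # repo root, so core-app and application_1 are visible
      dockerfile: myapp-docker/Dockerfile  # path is relative to the context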
This is the proposed solution with multi-stage builds.
To replicate your setup I created this structure:
.
├── Dockerfile-app1
├── application1
│ ├── pom.xml
│ └── src
│ └── main
│ ├── resources
│ └── webapp
│ ├── WEB-INF
│ │ └── web.xml
│ └── index.jsp
├── core
│ ├── pom.xml
│ └── src
│ ├── main
│ │ └── java
│ │ └── com
│ │ └── test
│ │ └── App.java
│ └── test
│ └── java
│ └── com
│ └── test
│ └── AppTest.java
In the pom.xml file of Application 1, I added the dependency on core:
<dependency>
    <groupId>com.test</groupId>
    <artifactId>core</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
I named the Dockerfile Dockerfile-app1, this way you can have more than 1 of them.
This is the Dockerfile-app1:
FROM maven:3.6.0-jdk-8 as build
WORKDIR /apps
COPY ./core .
RUN mvn clean install
FROM maven:3.6.0-jdk-8
# If you comment this out then the build fails because it cannot find the dependency to 'core'
COPY --from=build /root/.m2 /root/.m2
COPY ./application1 ./
RUN mvn clean install
You should probably add an entrypoint at the end to run your project, or even better, add a 3rd stage that only copies the generated artefacts and runs your project (this way the final image will not contain your sources).
The first stage only builds the core submodule.
The second stage uses the results of the first stage, copies only the source for application1, and builds it.
You can easily replicate this for application2 by creating a similar file Dockerfile-app2.
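A sketch of that suggested third stage (the artefact name and paths are assumptions, not taken from the answer):
FROM maven:3.6.0-jdk-8 as build
WORKDIR /apps
COPY ./core .
RUN mvn clean install

FROM maven:3.6.0-jdk-8 as app-build
COPY --from=build /root/.m2 /root/.m2
WORKDIR /apps
COPY ./application1 .
RUN mvn clean package

# final stage: ship only the built artefact, without sources or the Maven toolchain
FROM openjdk:8-jre
COPY --from=app-build /apps/target/application1.jar /app/application1.jar
ENTRYPOINT ["java", "-jar", "/app/application1.jar"]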
Since you're using Maven, try dockerfile-maven to build the image. You don't want any of your build information inside your image (like what the dependencies are); you should just add the jar at the end. I usually use it together with spring-boot-maven-plugin and repackage, to get a fully self-contained jar.
