I have a Gradle build file with the following Jib definition:
def baseImage = 'ghcr.io/tobias-neubert/eclipse-temurin:17.0.2_8-jre'
jib {
    from {
        image = baseImage
        auth {
            username = githubUser
            password = githubPassword
        }
    }
    to {
        image = 'ghcr.io/tobias-neubert/motd-service:0.0.1'
        auth {
            username = githubUser
            password = githubPassword
        }
    }
}
and the following skaffold.yaml:
apiVersion: skaffold/v4beta1
kind: Config
metadata:
  name: motd-service
build:
  artifacts:
    - image: ghcr.io/tobias-neubert/motd-service
      jib:
        args:
          - "-PgithubUser=tobias-neubert"
          - "-PgithubPassword=secret"
manifests:
  rawYaml:
    - k8s/deployment.yaml
    - k8s/istio.yaml
It seems that the arguments are not passed to Gradle, because I get the error:
Could not get unknown property 'githubPassword'
Why? What am I doing wrong and/or what have I misunderstood?
If I define the property like so:
ext {
    githubPassword = System.getProperty('githubPassword', '')
}
I have to pass that property as a system property via -DgithubPassword, not as a project property with -P.
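A minimal sketch of one way to read a -P project property instead, using Gradle's findProperty with an empty-string fallback (the fallback value is an assumption):
// Read the values passed via -PgithubUser=... / -PgithubPassword=...,
// falling back to an empty string so evaluation does not fail when they are absent.
def githubUser = project.findProperty('githubUser') ?: ''
def githubPassword = project.findProperty('githubPassword') ?: ''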
I am working on creating a CI pipeline using GitHub Actions, Terraform and Heroku. My example application is a Jmix application from Mario David (rent-your-stuff) that I am building according to his YouTube videos. Unfortunately, the regular GitHub integration he suggests has been turned off due to a security issue. If you attempt to use Heroku's "Connect to GitHub" button, you get an Internal Service Error.
So, as an alternative, I have changed my private repo to public and I'm trying to download it directly via the Terraform heroku_build source URL (see the "heroku_build" section):
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 5.0"
    }
    herokux = {
      source  = "davidji99/herokux"
      version = "0.33.0"
    }
  }

  backend "remote" {
    organization = "eraskin-rent-your-stuff"

    workspaces {
      name = "rent-your-stuff"
    }
  }

  required_version = ">=1.1.3"
}

provider "heroku" {
  email   = var.HEROKU_EMAIL
  api_key = var.HEROKU_API_KEY
}

provider "herokux" {
  api_key = var.HEROKU_API_KEY
}

resource "heroku_app" "eraskin-rys-staging" {
  name   = "eraskin-rys-staging"
  region = "us"
}

resource "heroku_addon" "eraskin-rys-staging-db" {
  app_id = heroku_app.eraskin-rys-staging.id
  plan   = "heroku-postgresql:hobby-dev"
}

resource "heroku_build" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  buildpacks = ["heroku/gradle"]

  source {
    url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.zip"
  }
}

resource "heroku_formation" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  type       = "web"
  quantity   = 1
  size       = "Standard-1x"
  depends_on = [heroku_build.eraskin-rsys-staging]
}
Whenever I try to execute this, I get the following build error:
-----> Building on the Heroku-20 stack
! Push rejected, Failed decompressing source code.
Source archive detected as: Zip archive data, at least v1.0 to extract
More information: https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
My assumption is that Heroku cannot download the archive, but I can successfully download it without any authentication using wget.
How do I debug this? Is there a way to ask Heroku to show the commands that the build stack is executing?
For that matter, is there a better approach given that the normal GitHub integration pipeline is broken?
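One possibility worth testing, assuming the failure is about the archive format rather than the download itself: the error reports "Zip archive data", and GitHub also publishes a .tar.gz variant of the same branch archive, which may match what Heroku's build source expects (a gzipped tarball). A sketch of the heroku_build source pointing at that URL:
resource "heroku_build" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  buildpacks = ["heroku/gradle"]

  source {
    # GitHub's .tar.gz archive of the same branch (assumption: Heroku expects a gzipped tarball)
    url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.tar.gz"
  }
}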
I have found a workaround for this issue, based on the notes from Heroku. They suggest using the third-party GitHub Action "Deploy to Heroku" instead of Terraform. To use it, I removed heroku_build and heroku_formation from my main.tf file, so it just contains this:
resource "heroku_app" "eraskin-rys-staging" {
name = "eraskin-rys-staging"
region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
app_id = heroku_app.eraskin-rys-staging.id
plan = "heroku-postgresql:hobby-dev"
}
My GitHub workflow now contains:
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - name: Terraform Format
        id: fmt
        working-directory: ./infrastructure
        run: terraform fmt
      - name: Terraform Init
        id: init
        working-directory: ./infrastructure
        run: terraform init
      - name: Terraform Validate
        id: validate
        working-directory: ./infrastructure
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        working-directory: ./infrastructure
        run: terraform plan -no-color -input=false
        continue-on-error: true
      - name: Update Pull Request
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ️⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
            #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
            <details><summary>Show Plan</summary>
            \`\`\`\n
            ${process.env.PLAN}
            \`\`\`
            </details>
            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })
      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1
      - name: Terraform Apply
        if: github.ref == 'refs/heads/master' && github.event_name == 'push'
        working-directory: ./infrastructure
        run: terraform apply -auto-approve -input=false
  heroku-deploy:
    name: 'Heroku-Deploy'
    if: github.ref == 'refs/heads/master' && github.event_name == 'push'
    runs-on: ubuntu-latest
    needs: terraform
    steps:
      - name: Checkout App
        uses: actions/checkout@v3
      - name: Deploy to Heroku
        uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: ${{secrets.HEROKU_APP_NAME}}
          heroku_email: ${{secrets.HEROKU_EMAIL}}
          buildpack: https://github.com/heroku/heroku-buildpack-gradle.git
          branch: master
          dontautocreate: true
The workflow has two "phases". On a pull request, it runs the tests in my application, followed by terraform fmt, terraform init and terraform plan. On a merge to my master branch, it runs terraform apply. When that completes, it runs the second job, which runs the akhileshns/heroku-deploy@v3.12.12 GitHub Action.
As far as I can tell, it works. YMMV, of course. ;-)
I am using Terraform to publish a Lambda function to AWS. It works fine when I deploy to AWS, but it gets stuck on "Refreshing state..." when running against LocalStack.
Below is my .tf config file; as you can see, I configured the Lambda endpoint to be http://localhost:4567.
provider "aws" {
profile = "default"
region = "ap-southeast-2"
endpoints {
lambda = "http://localhost:4567"
}
}
variable "runtime" {
default = "python3.6"
}
data "archive_file" "zipit" {
type = "zip"
source_dir = "crawler/dist"
output_path = "crawler/dist/deploy.zip"
}
resource "aws_lambda_function" "test_lambda" {
filename = "crawler/dist/deploy.zip"
function_name = "quote-crawler"
role = "arn:aws:iam::773592622512:role/LambdaRole"
handler = "handler.handler"
source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
runtime = "${var.runtime}"
}
Below is the Docker Compose file for LocalStack:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - '8055:8080'
    environment:
      - SERVICES=${SERVICES-lambda }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker-reuse }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
Does anyone know how to fix the issue?
This is how I fixed a similar issue:
Set export TF_LOG=TRACE, which is the most verbose logging level.
Run terraform plan .... (see the example below)
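For example (the log file name is an assumption):
export TF_LOG=TRACE                      # most verbose Terraform logging
export TF_LOG_PATH=./terraform-trace.log # optional: write the trace to a file instead of stderr
terraform plan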
In the log, I got the root cause of the issue, which was:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"
From the logs, I identified the resource whose state was causing the issue: module.kubernetes_apps.helmfile_release_set.metrics_server.
I deleted its state:
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server
Now running terraform plan again should fix the issue.
This is not the best solution, which is why I contacted the owner of the provider to fix the issue without this workaround.
The reason it failed is that Terraform tries to validate credentials against real AWS. Adding the two lines below to the provider block in your .tf configuration file solves the issue (placement sketched after them):
skip_credentials_validation = true
skip_metadata_api_check = true
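For context, a sketch of where those lines go, using the provider block from the question above:
provider "aws" {
  profile = "default"
  region  = "ap-southeast-2"

  # Skip the AWS credential and metadata checks so Terraform does not call real AWS
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    lambda = "http://localhost:4567"
  }
}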
I ran into the same issue and fixed it by logging into the AWS dev profile from the console. So don't forget to log in.
provider "aws" {
region = "ap-southeast-2"
profile = "dev"
}
We are trying to connect to AWS DocumentDB from Errbit, but with no luck. This is the connection string from DocDB:
mongodb://user:<insertYourPassword>@dev-docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
We are able to connect to an Atlas DB, though; the connection string format we are using for Atlas is something like this:
mongodb://user:pass@cluster-shard-00-00-xxx.mongodb.net:27017,cluster-shard-00-01-xxx.mongodb.net:27017,cluster-shard-00-02-xxx.mongodb.net:27017/errbit?ssl=true&replicaSet=Cluster-shard-0&authSource=admin&w=majority
You will need to change the config/mongo.rb file to be as follows:
log_level = Logger.const_get Errbit::Config.log_level.upcase
Mongoid.logger.level = log_level
Mongo::Logger.level = log_level
Mongoid.configure do |config|
  uri = if Errbit::Config.mongo_url == 'mongodb://localhost'
          "mongodb://localhost/errbit_#{Rails.env}"
        else
          Errbit::Config.mongo_url
        end

  config.load_configuration(
    clients: {
      default: {
        uri: uri,
        options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
      }
    },
    options: {
      use_activesupport_time_zone: true
    }
  )
end
You will notice that this is exactly the same as the current one, except that I added:
options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
It did work for me after doing this :) Of course, you need the rds-combined-ca-bundle.pem file to be present in your Rails root folder.
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Yes, I had to create a Docker image with the following Dockerfile:
FROM errbit/errbit:latest
LABEL maintainer="Tarek N. Elsamni <tarek.samni+stackoverflow@gmail.com>"
WORKDIR /app
RUN wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
COPY ["mongo.rb", "/app/config/"]
I'm trying to deploy SonarQube on Kubernetes using ConfigMaps.
The latest 7.1 image I use has its configuration in sonar.properties, embedded in $SONARQUBE_HOME/conf/. The directory is not empty and also contains a wrapper.conf file.
I would like to mount the ConfigMap inside my container at a location other than /opt/sonar/conf/ and tell SonarQube the new path from which to read the properties.
Is there a way to do that? (An environment variable? A JVM argument? ...)
It is not recommended to modify this standard configuration in any way, but we can have a look at the SonarQube source code. In AppSettingsLoaderImpl you can find this code for reading the configuration file:
private static Properties loadPropertiesFile(File homeDir) {
  Properties p = new Properties();
  File propsFile = new File(homeDir, "conf/sonar.properties");
  if (propsFile.exists()) {
    ...
  } else {
    LoggerFactory.getLogger(AppSettingsLoaderImpl.class).warn("Configuration file not found: {}", propsFile);
  }
  return p;
}
So the conf path and file name are hard-coded, and you get a warning if the file does not exist. The home directory is found this way:
private static File detectHomeDir() {
  try {
    File appJar = new File(Class.forName("org.sonar.application.App").getProtectionDomain().getCodeSource().getLocation().toURI());
    return appJar.getParentFile().getParentFile();
  } catch (...) {
    ...
  }
So this cannot be changed either. The code above is used here:
@Override
public AppSettings load() {
  Properties p = loadPropertiesFile(homeDir);
  p.putAll(CommandLineParser.parseArguments(cliArguments));
  p.setProperty(PATH_HOME.getKey(), homeDir.getAbsolutePath());
  p = ConfigurationUtils.interpolateVariables(p, System.getenv());
  ....
}
This suggests that you can use command-line parameters or environment variables in order to change your settings.
For my problem, I defined environment variables to configure the database settings in my Kubernetes deployment:
env:
  - name: SONARQUBE_JDBC_URL
    value: jdbc:sqlserver://mydb:1433;databaseName=sonarqube
  - name: SONARQUBE_JDBC_USERNAME
    value: sonarqube
  - name: SONARQUBE_JDBC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sonarsecret
        key: dbpassword
I also needed to use the LDAP plugin, but it was not possible to configure it with environment variables in this case. As /opt/sonarqube/conf/ is not empty, I can't use a ConfigMap to decouple the configuration from the image content. So I built my own SonarQube image, adding the LDAP plugin jar and the LDAP settings in sonar.properties (a sketch of such an image follows after the settings):
# General Configuration
sonar.security.realm=LDAP
ldap.url=ldap://myldap:389
ldap.bindDn=CN=mysa=_ServicesAccounts,OU=Users,OU=SVC,DC=net
ldap.bindPassword=****
# User Configuration
ldap.user.baseDn=OU=Users,OU=SVC,DC=net
ldap.user.request=(&(sAMAccountName={0})(objectclass=user))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=OU=Users,OU=SVC,DC=net
ldap.group.request=(&(objectClass=group)(member={dn}))
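For reference, a minimal sketch of such a custom image, assuming the stock sonarqube:7.1 base image and its usual /opt/sonarqube layout; the plugin jar name and version are assumptions:
FROM sonarqube:7.1
# Add the LDAP plugin (jar file name/version is an assumption)
COPY sonar-ldap-plugin-2.2.0.601.jar /opt/sonarqube/extensions/plugins/
# Ship the sonar.properties containing the LDAP settings shown above
COPY sonar.properties /opt/sonarqube/conf/sonar.properties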
A bit new to Gradle and Groovy; I'm trying to use the following task: http://bmuschko.github.io/gradle-docker-plugin/docs/groovydoc/com/bmuschko/gradle/docker/tasks/image/DockerPushImage.html
as follows:
task pushImageDev(type: DockerPushImage) {
    imageName "xxxxxx:5000/${project.name}-${appEnviroment}:${version}"
    registryCredentials {
        email = 'none@your.business'
        url = 'xxxxxx:5000'
        username = 'xxxxxx'
        password = 'xxxxxx'
    }
}
But I keep getting...
Could not find method registryCredentials() for arguments [build_21ymvy7kfomjn3daqwpuika10$_run_closure8$_closure18@dd69c19] on task ':pushImageDev' of type com.bmuschko.gradle.docker.tasks.image.DockerPushImage
I believe you can only use the registryCredentials method within the docker extension configuration, and not in a custom task, for example:
docker {
    registryCredentials {
        url = 'https://gcr.io'
        username = '_json_key'
        password = file('keyfile.json').text
    }
}
If you want to configure a custom task, you probably have to create an actual instance of DockerRegistryCredentials to pass, for example:
task pushImageDev(type: DockerPushImage) {
    imageName "xxxxxx:5000/${project.name}-${appEnviroment}:${version}"
    registryCredentials(new DockerRegistryCredentials(...));
}
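For illustration, a sketch of what that could look like with an older (3.x-era) version of the plugin, where DockerRegistryCredentials is a plain bean with settable fields; the exact class and property names are version-dependent, so treat this as an assumption:
import com.bmuschko.gradle.docker.DockerRegistryCredentials
import com.bmuschko.gradle.docker.tasks.image.DockerPushImage

// Assumption: older plugin versions expose a no-arg constructor and plain setters
def registryCreds = new DockerRegistryCredentials()
registryCreds.url = 'xxxxxx:5000'
registryCreds.username = 'xxxxxx'
registryCreds.password = 'xxxxxx'
registryCreds.email = 'none@your.business'

task pushImageDev(type: DockerPushImage) {
    imageName "xxxxxx:5000/${project.name}-${appEnviroment}:${version}"
    registryCredentials = registryCreds
}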
The reason is that registryCredentials {...} is an extension defined in DockerExtension.groovy, which does not work for custom tasks; it is not a setter for the registryCredentials field inside the DockerPushImage class.
What also works is to nest a registryCredentials call inside a docker call inside the custom task, though I am not sure why:
task pushImageDev(type: DockerPushImage) {
    appEnviroment = 'dev'
    imageName "xxxxxx/${project.name}-${appEnviroment}:${version}"
    docker {
        registryCredentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
    }
}