Executing Maven unit tests on a Google Cloud SQL environment

I have a Jenkins pod running in GCP's Kubernetes Engine, and I'm trying to run a Maven unit test that connects to a Google Cloud SQL database. My application.yaml for this project looks like this:
spring:
  cloud:
    gcp:
      project-id: <my_project_id>
      sql:
        database-name: <my_database_name>
        instance-connection-name: <my_instance_connection_name>
  jpa:
    database-platform: org.hibernate.dialect.MySQL55Dialect
    hibernate:
      ddl-auto: create-drop
  datasource:
    continue-on-error: true
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: <my_cloud_sql_username>
    password: <my_cloud_sql_password>
The current Jenkinsfile associated with this project is:
pipeline {
    agent any
    tools {
        maven 'Maven 3.5.2'
        jdk 'jdk8'
    }
    environment {
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
        DEV_DB_USER = "${env.DEV_DB_USER}"
        DEV_DB_PASSWORD = "${env.DEV_DB_PASSWORD}"
    }
    stages {
        stage('Build docker image') {
            steps {
                sh 'mvn -Dmaven.test.skip=true clean package'
                script {
                    docker.build '$IMAGE:$VERSION'
                }
            }
        }
        stage('Run unit tests') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}
My problem is that when the pipeline actually runs mvn test with the above configuration (in my application.yaml), I get this error:
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Insufficient Permission: Request had insufficient authentication scopes.",
    "reason" : "insufficientPermissions"
  } ],
  "message" : "Insufficient Permission: Request had insufficient authentication scopes."
}
I have two Google Cloud projects:
One that has the Kubernetes Cluster where the Jenkins pod is running.
Another project where the K8s Cluster contains my actual Spring Boot Application and the Cloud SQL database that I'm trying to access.
I also created a service account, in the Spring Boot project only, for Jenkins to use, with three roles: Cloud SQL Editor, Kubernetes Engine Cluster Admin, and Project Owner (the last to rule out the service account being at fault).
I enabled the Cloud SQL, Cloud SQL Admin, and Kubernetes APIs in both projects, and I double-checked my Cloud SQL credentials; they are OK. In addition, I authenticated the Jenkins pipeline using the JSON key file generated when I created the service account, following the recommendations discussed here:
Jenkinsfile (extract):
...
withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
    sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
    sh 'mvn test'
}
...

I don't believe the GCP Java SDK relies on the gcloud CLI at all. Instead, it looks for the GOOGLE_APPLICATION_CREDENTIALS environment variable, which should point to your service account key file, and GCLOUD_PROJECT (see https://cloud.google.com/docs/authentication/getting-started).
Try adding the following:
sh("export GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}")
sh("export GCLOUD_PROJECT=<project_id>")

There are a couple of different things you should verify to get this working. I'm assuming you are using the Cloud SQL JDBC SocketFactory for Cloud SQL.
You should create a testing service account and give it whatever permissions are needed to execute the tests. To connect to Cloud SQL, it needs at a minimum the "Cloud SQL Client" role for the same project as the Cloud SQL instance.
The Cloud SQL socket factory uses the Application Default Credentials (ADC) strategy to determine which credentials to use. This means the first place it looks is the GOOGLE_APPLICATION_CREDENTIALS env var, which should be a path to the key for the testing service account.
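For reference, a minimal sketch of what the JDBC URL looks like when the MySQL socket factory is in play (placeholders mirror the application.yaml above; adjust to your driver and database):

jdbc:mysql:///<my_database_name>?cloudSqlInstance=<my_instance_connection_name>&socketFactory=com.google.cloud.sql.mysql.SocketFactory

With this URL the driver tunnels through the Cloud SQL API using ADC, so the Jenkins pod's IP does not need to be allowlisted on the instance.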

Related

Why can't I get my github action to run under the proper project when the workload identity is in another pool

I have 2 GCP projects, pool-infra and pool-dev. I use a GitHub Action to run an mvn command; the configuration looks like this...
- name: Authenticate with pure-infra project
  uses: 'google-github-actions/auth@v0.8.1'
  with:
    service_account: my@pool-infra.iam.gserviceaccount.com
    workload_identity_provider: projects/<pool-infra-id>/locations/global/workloadIdentityPools/....
    token_format: 'access_token'
    project_id: pure-platform-dev
- name: 'Set up Cloud SDK'
  uses: 'google-github-actions/setup-gcloud@v1'
  with:
    project_id: pool-app
- name: Run Package
  working-directory: my-service
  run: |
    gcloud config set project pool-app
    gcloud config get project
    mvn clean package jacoco:report
But I see an error that suggests the project ID is incorrect...
"errors": [
{
"domain": "usageLimits",
"message": "Cloud SQL Admin API has not been used in project <pool-infra-num> before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=<pool-infra-num> then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.",
"reason": "accessNotConfigured",
"extendedHelp": "https://console.developers.google.com"
}
],
I would expect those project numbers to refer to pool-app, not pool-infra. What am I missing? How do I properly set the project for the mvn build?
This is coming from the JDBC connection pool when it tries to connect.

Spring Boot app in Docker container not starting in Cloud Run after building successfully - cannot access jarfile

I've set up continuous deployment to Cloud Run from GitHub for my Spring Boot project, and while it's successfully building in Cloud Build, when I go over to Cloud Run, I get the following error under Creating Revision:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
When I go over to the Logs, I see the following errors:
2022-09-23 09:42:47.881 BST
Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar
(textPayload from projects/educity-manager/logs/run.googleapis.com%2Fstderr)

2022-09-23 09:43:48.800 BST
run.googleapis.com …ager/revisions/educity-manager-00011-fod
Ready condition status changed to False for Revision educity-manager-00011-fod with message: Deploying Revision.
(audit-log system event from run.googleapis.com, severity ERROR)
Dockerfile is as follows (and looking at the build log all of the commands in it completed successfully):
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]
I've read that Cloud Run defaults to exposing Port 8080, but just to be on the safe side I've put server.port=${PORT:8080} in my application.properties file (but it seems to make no difference one way or the other).
I have run into similar issues in the past. Usually, I am able to resolve this issue by:
specifying the port in the application itself (as you indicated in your post), and
exposing the required port in my Dockerfile, e.g. EXPOSE 8080
Oh my good god I have done it. After two full days of digging, I realised that because I was deploying through GitHub, my .gitignore file was excluding the /target folder containing the jar file, so Cloud Build never got the jar file mentioned in the Dockerfile.
I am going to have a cry and then go to the pub.
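For what it's worth, one way to sidestep the committed-target/ problem entirely is to build the jar inside the image with a multi-stage Dockerfile, so Cloud Build never needs a pre-built jar from the repository. A sketch, assuming a Maven base image such as maven:3.8-openjdk-17 suits your build:

# Stage 1: build the jar inside the image, so /target need not be committed.
FROM maven:3.8-openjdk-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -DskipTests package

# Stage 2: slim runtime image, as in the original Dockerfile.
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY --from=build /app/target/educity-manager-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]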

How to push a docker image to remote registry with gradle in a spring boot project

I want to use the ./gradlew bootBuildImage command to build a Docker image.
That command works perfectly on my local machine.
I have a remote Docker registry on my server, and I want to push my images from my local machine directly into that registry using bootBuildImage.
To achieve that I added this to my build.gradle:
tasks.named("bootBuildImage") {
    docker {
        builderRegistry {
            username = "admin"
            password = "secret-password"
            url = "https://registry.myserver.com"
        }
    }
}
On ./gradlew bootBuildImage I got this error:

FAILURE: Build failed with an exception.

What went wrong: Execution failed for task ':bootBuildImage'.
Docker API call to 'localhost/v1.24/images/create?fromImage=docker.io%2Fpaketobuildpacks%2Fbuilder%3Abase' failed with status code 500 "Internal Server Error" and message "Head "https://registry-1.docker.io/v2/paketobuildpacks/builder/manifests/base": unauthorized: incorrect username or password"
Username & Password are 100% correct.
Probably it should be publishRegistry { ... }, not builderRegistry: builderRegistry supplies credentials for pulling the builder image, while publishRegistry supplies credentials for the registry the built image is pushed to.
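A corrected sketch of the block, reusing the registry details from the question (note the publish flag, which makes the task push the image after building; the image name itself is hypothetical, and its registry prefix decides the push destination):

tasks.named("bootBuildImage") {
    // The registry prefix of the image name determines where it is pushed.
    imageName = "registry.myserver.com/my-app:latest"
    publish = true
    docker {
        publishRegistry {
            username = "admin"
            password = "secret-password"
            url = "https://registry.myserver.com"
        }
    }
}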

.NET Core 3.1 Docker in Visual Studio accessing Azure Key Vault

I am trying to run a .NET Core 3.1 application in Docker locally in Visual Studio. The application needs to access an Azure Key Vault.
When I run the application I get the following error:
One or more errors occurred. (Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.

Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. Connection refused

Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.

Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. /bin/bash: az: No such file or directory
Note: it works fine using IIS Express! Please help! :D
Please set the required environment variables when using DefaultAzureCredential to authenticate against Azure Key Vault.
In this scenario, that means setting the environment variables in the Dockerfile:
ENV AZURE_CLIENT_ID=<Your AZURE CLIENT ID>
ENV AZURE_CLIENT_SECRET=<Your CLIENT SECRET>
ENV AZURE_TENANT_ID=<Your TENANT ID>
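Since baking credentials into the image leaves them in every layer and registry copy, a common alternative is to keep the Dockerfile secret-free and inject the same variables at container start; a sketch (image name and values are placeholders):

docker run \
  -e AZURE_CLIENT_ID=<your-client-id> \
  -e AZURE_CLIENT_SECRET=<your-client-secret> \
  -e AZURE_TENANT_ID=<your-tenant-id> \
  <your-image>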
In an attempt to avoid the accepted answer (because of obvious security issues), and to simplify and automate E. Staal's answer (on a duplicate question), I came up with this:
Update your .gitignore file, by adding the following line to the bottom of it:
appsettings.local.json
Right click on the project in Solution Explorer, and click on Properties; in the Build Events tab, find the Pre-build event command line text box and add the following code:
cd /d "$(ProjectDir)"
if exist "appsettings.local.json" del "appsettings.local.json"
if "$(ConfigurationName)" == "Debug" (
    az account get-access-token --resource=https://vault.azure.net > appsettings.local.json
)
In your launchSettings.json (or using the Visual Editor under project settings) configure the following values:
{
  "profiles": {
    // ...
    "Docker": {
      "commandName": "Docker",
      "environmentVariables": {
        "DOTNET_ENVIRONMENT": "Development",
        "AZURE_TENANT_ID": "<YOUR-AZURE-TENANT-ID-HERE>"
      }
    }
  }
}
In your Program.cs file find the CreateHostBuilder method and update the ConfigureAppConfiguration block accordingly -- here is mine as an example:
Host.CreateDefaultBuilder(args).ConfigureAppConfiguration
(
    (ctx, cfg) =>
    {
        if (ctx.HostingEnvironment.IsDevelopment())
        {
            cfg.AddJsonFile("appsettings.local.json", true);
        }
        var builtConfig = cfg.Build();
        var keyVault = builtConfig["KeyVault"];
        if (!string.IsNullOrWhiteSpace(keyVault))
        {
            var accessToken = builtConfig["accessToken"];
            cfg.AddAzureKeyVault
            (
                $"https://{keyVault}.vault.azure.net/",
                new KeyVaultClient
                (
                    string.IsNullOrWhiteSpace(accessToken)
                        ? new KeyVaultClient.AuthenticationCallback
                          (
                              new AzureServiceTokenProvider().KeyVaultTokenCallback
                          )
                        : (x, y, z) => Task.FromResult(accessToken)
                ),
                new DefaultKeyVaultSecretManager()
            );
        }
    }
)
If this still doesn't work, verify that az login has been performed and that az account get-access-token --resource=https://vault.azure.net works correctly for you.

Azure devops terraform pipeline generate client id and secret

I am using this Terraform manifest to deploy AKS on Azure. I can do this fine via the command line, as I have the Azure CLI configured on my machine to generate the client id and secret:
https://github.com/anubhavmishra/terraform-azurerm-aks
However, I am now building this in an Azure DevOps pipeline.
So far I have managed to run terraform init and plan with backend storage on Azure, using this Azure DevOps extension:
https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
Question: how do I get the client id and secret in the Azure DevOps pipeline and set them as environment variables for Terraform? I tried creating a bash az command in the pipeline:
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/YOUR_SUBSCRIPTION_ID"
but it failed with this error:

2019-03-27T10:41:58.1042923Z
2019-03-27T10:41:58.1055624Z Setting AZURE_CONFIG_DIR env variable to: /home/vsts/work/_temp/.azclitask
2019-03-27T10:41:58.1060006Z Setting active cloud to: AzureCloud
2019-03-27T10:41:58.1069887Z [command]/usr/bin/az cloud set -n AzureCloud
2019-03-27T10:41:58.9004429Z [command]/usr/bin/az login --service-principal -u *** -p *** --tenant ***
2019-03-27T10:42:00.0695154Z [
2019-03-27T10:42:00.0696915Z {
2019-03-27T10:42:00.0697522Z "cloudName": "AzureCloud",
2019-03-27T10:42:00.0698958Z "id": "88bfee03-551c-4ed3-98b0-be68aee330bb",
2019-03-27T10:42:00.0704752Z "isDefault": true,
2019-03-27T10:42:00.0705381Z "name": "Visual Studio Enterprise",
2019-03-27T10:42:00.0706362Z "state": "Enabled",
2019-03-27T10:42:00.0707434Z "tenantId": "***",
2019-03-27T10:42:00.0716107Z "user": {
2019-03-27T10:42:00.0717485Z "name": "***",
2019-03-27T10:42:00.0718161Z "type": "servicePrincipal"
2019-03-27T10:42:00.0718675Z }
2019-03-27T10:42:00.0719185Z }
2019-03-27T10:42:00.0719831Z ]
2019-03-27T10:42:00.0728173Z [command]/usr/bin/az account set --subscription 88bfee03-551c-4ed3-98b0-be68aee330bb
2019-03-27T10:42:00.8569816Z [command]/bin/bash /home/vsts/work/_temp/azureclitaskscript1553683312219.sh
2019-03-27T10:42:02.4431342Z ERROR: Directory permission is needed for the current user to register the application. For how to configure, please refer 'https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal'. Original error: Insufficient privileges to complete the operation.
2019-03-27T10:42:02.5271752Z [command]/usr/bin/az account clear
2019-03-27T10:42:03.3092558Z ##[error]Script failed with error: Error: /bin/bash failed with return code: 1
2019-03-27T10:42:03.3108490Z ##[section]Finishing: Azure CLI
Here is how I do it with Azure Pipelines.
Create a Service Principal for Terraform.
Create the following variables in your pipeline
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
If you choose to store ARM_CLIENT_SECRET as a secret in Azure DevOps, you will need to map it explicitly under the Environment Variables section of the task so that it gets decrypted and terraform can read it, as sketched below.
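A minimal YAML sketch of that mapping, assuming the TerraformCLI task from the extension linked in the question (task name and inputs may differ across versions):

- task: TerraformCLI@0
  displayName: 'terraform plan'
  inputs:
    command: 'plan'
  env:
    # Secret pipeline variables are not exposed to tasks automatically;
    # mapping them here decrypts them for the terraform process.
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
    ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(ARM_TENANT_ID)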
You just need to grant your service connection rights to create service principals. But I'd generally advise against that: pre-create a service principal and use it in your pipeline; creating a new service principal on each run seems excessive.
You can use build/release variables and populate those with the client id/secret.
The approach defined in the post https://medium.com/@maninder.bindra/creating-a-single-azure-devops-yaml-pipeline-to-provision-multiple-environments-using-terraform-e6d05343cae2 can be considered as well. Here the Key Vault task is used to fetch secrets from Azure Key Vault (these include Terraform backend access secrets as well as AKS SP secrets):
# KEY VAULT TASK
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '$(environment)-sp'
    KeyVaultName: '$(environment)-pipeline-secrets-kv'
    SecretsFilter: 'tf-sp-id,tf-sp-secret,tf-tenant-id,tf-subscription-id,tf-backend-sa-access-key,aks-sp-id,aks-sp-secret'
  displayName: 'Get key vault secrets as pipeline variables'
And then you can use the secrets as variables in the rest of the pipeline. For instance, aks-sp-id can be referred to as $(aks-sp-id). So the bash/azure-cli task can be something like:
# AZ LOGIN USING TERRAFORM SERVICE PRINCIPAL
- script: |
    az login --service-principal -u $(tf-sp-id) -p $(tf-sp-secret) --tenant $(tf-tenant-id)
    cd $(System.DefaultWorkingDirectory)/tf-infra-provision
Followed by terraform init and plan (plan shown below; see the post for complete pipeline details):

# TERRAFORM PLAN
echo '#######Terraform Plan########'
terraform plan -var-file=./tf-vars/$(tfvarsFile) -var="client_id=$(tf-sp-id)" -var="client_secret=$(tf-sp-secret)" -var="tenant_id=$(tf-tenant-id)" -var="subscription_id=$(tf-subscription-id)" -var="aks_sp_id=$(aks-sp-id)" -var="aks_sp_secret=$(aks-sp-secret)" -out="out.plan"
Hope this helps.
