Override default jkube deployment name - maven

Is it possible to override the default deployment naming in jkube? I want to do something similar to the docker image naming where I can provide a pattern.
The deployment section in the resources documentation looked promising but those options are not present in the plugin.
The default name appears to be the Maven ${project.artifactId}, but I have not found that documented anywhere. Digging through the code, I can see that ResourceConfig is out of sync with the documentation and the examples.

I'm from the Eclipse JKube/FMP development team. You should be able to override the default controller name either by setting the jkube.enricher.jkube-controller.name property or by providing XML configuration for jkube-controller (the enricher responsible for the default Deployment generated by the plugin), like this:
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.0.0-alpha-1</version>
  <configuration>
    <enricher>
      <config>
        <jkube-controller>
          <name>some-deployment</name>
        </jkube-controller>
      </config>
    </enricher>
  </configuration>
</plugin>
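The same override can also be supplied as a property instead of XML; a minimal sketch, passing it on the command line (it could equally be set in the <properties> section of the pom.xml):
# pass the enricher name override as a system property (same key as the XML config above)
mvn k8s:resource k8s:apply -Djkube.enricher.jkube-controller.name=some-deployment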
When I tried the XML configuration above, I could see the Deployment's name change as per the configuration:
~/work/repos/eclipse-jkube-demo-project : $ mvn k8s:resource k8s:apply
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.0.0-alpha-1:resource (default-cli) @ random-generator ---
[INFO] k8s: Running generator spring-boot
[INFO] k8s: spring-boot: Using Docker image fabric8/java-centos-openjdk8-jdk:1.5 as base / builder
[INFO] k8s: jkube-controller: Adding a default Deployment
[INFO] k8s: jkube-service: Adding a default service 'random-generator' with ports [8080]
[INFO] k8s: jkube-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 10 seconds
[INFO] k8s: jkube-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 180 seconds
[INFO] k8s: jkube-revision-history: Adding revision history limit to 2
[INFO]
[INFO] --- kubernetes-maven-plugin:1.0.0-alpha-1:apply (default-cli) @ random-generator ---
[INFO] k8s: Using Kubernetes at https://192.168.39.93:8443/ in namespace default with manifest /home/rohaan/work/repos/eclipse-jkube-demo-project/target/classes/META-INF/jkube/kubernetes.yml
[INFO] k8s: Using namespace: default
[INFO] k8s: Creating a Service from kubernetes.yml namespace default name random-generator
[INFO] k8s: Created Service: target/jkube/applyJson/default/service-random-generator.json
[INFO] k8s: Creating a Deployment from kubernetes.yml namespace default name some-deployment
[INFO] k8s: Created Deployment: target/jkube/applyJson/default/deployment-some-deployment.json
[INFO] k8s: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8.076 s
[INFO] Finished at: 2020-04-03T16:57:04+05:30
[INFO] ------------------------------------------------------------------------
~/work/repos/eclipse-jkube-demo-project : $ kubectl get deploy
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
some-deployment   0/1     1            0           8s
~/work/repos/eclipse-jkube-demo-project : $ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
some-deployment-97495447b-z9p48   0/1     Running   0          15s

Related

fabric8: Add a configmap

I've been able to build and deploy my Spring Boot service image into OpenShift using the fabric8 plugin.
I need to add a ConfigMap.
I've tried adding a straightforward configmap.yml into src/main/fabric8.
Currently, I'm getting this message:
[INFO] --- fabric8-maven-plugin:4.4.1:resource (default-cli) @ connector ---
[INFO] F8: Using Container image name of namespace: arxius-linia
[INFO] F8: Running generator spring-boot
[INFO] F8: spring-boot: Using Container image fabric8/java-centos-openjdk11-jdk:1.6.3 as base / builder
[INFO] F8: using resource templates from /home/jeusdi/projects/arxius-linia/connector/src/main/fabric8
[INFO] F8: fmp-controller: Adding a default Deployment
[INFO] F8: fmp-service: Adding a default service 'connector' with ports [8080]
[WARNING] F8: fmp-git: Could not detect any git remote
[WARNING] F8: fmp-git: Could not detect any git remote
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20.839 s
[INFO] Finished at: 2020-11-25T14:08:29+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:4.4.1:resource (default-cli) on project connector: Execution default-cli of goal io.fabric8:fabric8-maven-plugin:4.4.1:resource failed.: NullPointerException -> [Help 1]
My configmap.yml is:
data:
  application.properties: |
    spring.profiles.active=dev
My current related pom.xml configuration is:
<build>
  <plugins>
    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>fabric8-maven-plugin</artifactId>
      <version>4.4.1</version>
    </plugin>
  </plugins>
</build>
It's a zero-configuration setup.
Any ideas about how to add my configmap.yml?
I'm from the Fabric8/Eclipse JKube team.
As I mentioned in the comment, you should try switching to Eclipse JKube, since that is what will be supported in the future. Migrating to Eclipse JKube is as simple as running this goal:
$ mvn org.eclipse.jkube:kubernetes-maven-plugin:migrate
I tried out your ConfigMap resource fragment with this version of kubernetes-maven-plugin in one of my demo projects:
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.1.0</version>
  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
      </goals>
    </execution>
  </executions>
</plugin>
I added your ConfigMap to my src/main/jkube directory:
eclipse-jkube-sample-with-resource-fragments : $ cat src/main/jkube/second-configmap.yaml
data:
  application.properties: |
    spring.profiles.active=dev
When I ran mvn k8s:resource k8s:apply, I could see the ConfigMap being created:
eclipse-jkube-sample-with-resource-fragments : $ mvn k8s:resource k8s:apply
[INFO] Scanning for projects...
[INFO]
[INFO] -------< org.eclipse.jkube.quickstarts.maven:external-resources >-------
[INFO] Building Eclipse JKube :: Quickstarts :: Maven :: External Resources 1.1.0
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.1.0:resource (default-cli) @ external-resources ---
[INFO] k8s: Running generator spring-boot
[INFO] k8s: spring-boot: Using Docker image quay.io/jkube/jkube-java-binary-s2i:0.0.9 as base / builder
[INFO] k8s: Using resource templates from /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/src/main/jkube
[INFO] k8s: jkube-controller: Adding a default Deployment
[INFO] k8s: jkube-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/health', scheme='HTTP', with initial delay 10 seconds
[INFO] k8s: jkube-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/health', scheme='HTTP', with initial delay 180 seconds
[INFO] k8s: jkube-service-discovery: Using first mentioned service port '80'
[INFO] k8s: jkube-revision-history: Adding revision history limit to 2
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/ribbon-serviceaccount.yml resource
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/external-resources-service.yml resource
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/game-config-env-file-configmap.yml resource
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/second-configmap.yml resource
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/external-resources-deployment.yml resource
[INFO] k8s: validating /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes/my-ingress-ingress.yml resource
[INFO]
[INFO] --- kubernetes-maven-plugin:1.1.0:apply (default-cli) @ external-resources ---
[INFO] k8s: Using Kubernetes at https://192.168.39.102:8443/ in namespace default with manifest /home/rohaan/work/repos/eclipse-jkube-sample-with-resource-fragments/target/classes/META-INF/jkube/kubernetes.yml
[INFO] k8s: Creating a ServiceAccount from kubernetes.yml namespace default name ribbon
[INFO] k8s: Created ServiceAccount: target/jkube/applyJson/default/serviceaccount-ribbon-1.json
[INFO] k8s: Creating a Service from kubernetes.yml namespace default name external-resources
[INFO] k8s: Created Service: target/jkube/applyJson/default/service-external-resources-1.json
[INFO] k8s: Creating a ConfigMap from kubernetes.yml namespace default name game-config-env-file
[INFO] k8s: Created ConfigMap: target/jkube/applyJson/default/configmap-game-config-env-file-1.json
[INFO] k8s: Creating a ConfigMap from kubernetes.yml namespace default name second
[INFO] k8s: Created ConfigMap: target/jkube/applyJson/default/configmap-second-1.json
[INFO] k8s: Creating a Deployment from kubernetes.yml namespace default name external-resources
[INFO] k8s: Created Deployment: target/jkube/applyJson/default/deployment-external-resources-1.json
[INFO] k8s: Applying Ingress my-ingress from kubernetes.yml
[INFO] k8s: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.821 s
[INFO] Finished at: 2021-02-09T12:37:38+05:30
[INFO] ------------------------------------------------------------------------
When I checked, I could see the ConfigMap created in the default namespace:
eclipse-jkube-sample-with-resource-fragments : $ kubectl get configmap second -o yaml
apiVersion: v1
data:
  application.properties: |
    spring.profiles.active=dev
kind: ConfigMap
metadata:
  creationTimestamp: "2021-02-09T07:07:37Z"
  ...
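For reference, the fragment's file name appears to drive the generated metadata here: second-configmap.yaml produced a ConfigMap named second. A fragment can also spell out kind and name explicitly, as other examples in this thread do; a minimal sketch, where the app-config name is purely illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  # illustrative name; JKube would otherwise infer it from the fragment file name
  name: app-config
data:
  application.properties: |
    spring.profiles.active=dev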

Skaffold dev works with minikube only. Other on-prem cluster fails

I have a Spring Boot app with the jib-maven-plugin configured.
POM
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.1.0</version>
  <configuration>
    <from>
      <image>openjdk:11-jre-slim</image>
    </from>
    <to>
      <image>registry.demo/${project.artifactId}</image>
      <tags>
        <tag>${project.version}</tag>
        <tag>latest</tag>
      </tags>
    </to>
    <container>
      <jvmFlags>
        <jvmFlag>-XX:+UseContainerSupport</jvmFlag>
        <jvmFlag>-XX:MinRAMPercentage=60.0</jvmFlag>
        <jvmFlag>-XX:MaxRAMPercentage=90.0</jvmFlag>
        <jvmFlag>-XshowSettings:vm</jvmFlag>
      </jvmFlags>
      <mainClass>com.demo.DemoApplication</mainClass>
    </container>
  </configuration>
</plugin>
SKAFFOLD.YAML
apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: springtokube
build:
  artifacts:
    - image: registry.demo/springtokube
      jib:
        project: com.demo:springtokube
  local:
    push: true
    concurrency: 1
    useBuildkit: false
    useDockerCLI: true
deploy:
  kubectl:
    manifests:
      - deployment.yaml
ALSO SET INSECURE REGISTRY
skaffold config set --global insecure-registries registry.demo
With minikube I can successfully run
skaffold dev
But when using another (on-prem) cluster I get:
FATA[0016] exiting dev mode because first build failed: build failed: building [registry.demo/springtokube]: build artifact: getting image: GET http://registry.demo/v2/: : Not Found
What might be the problem?
I retried today, using the kubectl context, with
skaffold debug --no-prune=false --cache-artifacts=false
and it failed:
Listing files to watch...
Generating tags...
- registry.demo/springtokube -> registry.demo/springtokube:cf60c31
Found [minikube] context, using local docker daemon.
Building [registry.demo/springtokube]...
.............
...............
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 s - in com.demo.springtokube.SpringtokubeApplicationTests
2020-04-15 08:45:48.277 INFO 30662 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ springtokube ---
[INFO] Building jar: ....../springtokube/target/springtokube.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.2.6.RELEASE:repackage (repackage) @ springtokube ---
[INFO] Replacing main artifact with repackaged archive
[INFO]
[INFO] --- jib-maven-plugin:2.1.0:build (default-cli) @ springtokube ---
[INFO]
[INFO] Containerizing application to registry.demo/springtokube:cf60c31, registry.demo/springtokube...
[WARNING] Base image 'openjdk:11-jre-slim' does not use a specific image digest - build may not be reproducible
[INFO] Getting manifest for base image openjdk:11-jre-slim...
[INFO] Building dependencies layer...
[INFO] Building resources layer...
[INFO] Building classes layer...
[INFO] Using credentials from Docker config (~/.docker/config.json) for registry.demo/springtokube:cf60c31
[WARNING] Cannot verify server at https://registry.demo/v2/. Attempting again with no TLS verification.
[WARNING] Cannot verify server at https://registry.demo/v2/springtokube/blobs/sha256:1fb3fb86aa52691fa3705554da5ba07dcb556f62a93ba7efab0e397ca3db092c. Attempting again with no TLS verification.
[WARNING] Cannot verify server at https://registry.demo/v2/springtokube/blobs/sha256:88a7d9887f9fdeb5a4736d07c64818453e00e71fe916b13f413eb6e545445a68. Attempting again with no TLS verification.
[WARNING] Cannot verify server at https://registry.demo/v2/springtokube/blobs/sha256:a6c851c4b90b9eb7af89d240dd4f438dba9feba5c78600fed7eadddf8cb7b647. Attempting again with no TLS verification.
[INFO] The base image requires auth. Trying again for openjdk:11-jre-slim...
[INFO] Using credentials from Docker config (~/.docker/config.json) for openjdk:11-jre-slim
[INFO] Using base image with digest: sha256:01669f539159a1b5dd69c4782be9cc7da0ac1f4ddc5e2c2d871ef1481efd693e
[INFO]
[INFO] Container entrypoint set to [java, -XX:+UseContainerSupport, -XX:MinRAMPercentage=60.0, -XX:MaxRAMPercentage=90.0, -XshowSettings:vm, -cp, /app/resources:/app/classes:/app/libs/*, com.demo.springtokube.SpringtokubeApplication]
[INFO]
[INFO] Built and pushed image as registry.demo/springtokube:cf60c31, registry.demo/springtokube
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20.058 s
[INFO] Finished at: 2020-04-15T08:45:57+03:00
[INFO] ------------------------------------------------------------------------
Pruning images...
FATA[0024] exiting dev mode because first build failed: build failed: building [registry.demo/springtokube]: build artifact: getting image: GET http://registry.demo/v2/: : Not Found
I thought it worked on minikube, but disabling the cache makes the build fail.
If I run
skaffold debug OR skaffold dev
it works fine.
But if I run with the cache disabled
skaffold debug --no-prune=false --cache-artifacts=false
it fails and shows the logs above.
After days of struggling I found a solution.
Following Brian de Alwis's suggestions, I was able to make Skaffold work with a self-signed certificate.
skaffold build and skaffold dev do not use the certificate placed in
/etc/docker/certs.d/myregistrydomain.com/ca.crt
That path is used by the Docker client only.
The solution was to put your registry certificate into
/usr/local/share/ca-certificates/myregistrydomain.com.crt
and then run
update-ca-certificates
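Put together, the steps look roughly like this; a minimal sketch, assuming the registry's certificate is available locally as registry.crt (the source file name is illustrative):
# copy the registry certificate into the system CA store (the file must end in .crt)
sudo cp registry.crt /usr/local/share/ca-certificates/myregistrydomain.com.crt
# rebuild the CA bundle so host tools (including the Jib push done by Skaffold) trust the registry
sudo update-ca-certificates
# re-run the build
skaffold dev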
Check the link for more info.
If you are using a self-signed certificate, there is no need for the insecure-registries setting in your skaffold.yaml file:
apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: springtokube
build:
  # insecureRegistries:
  #   - myregistrydomain.com
Or running Skaffold with
skaffold dev --insecure-registry=myregistrydomain.com
Hope this helps someone else struggling to make Skaffold work with a self-signed certificate.

Fabric8: deploy java application with external pods

I am using the fabric8 maven plugin to create a Docker image of an app written with Spring Boot. I also need pods with Nginx, a DB, and a mail server. Can the fabric8 maven plugin help me create those pods as well? If not, what should I do?
I'm a maintainer of Fabric8 Maven Plugin. FMP has a concept of resource fragments: you can add additional resources into the FMP source directory (src/main/fabric8 by default) and FMP will process and enrich them during the resource goal. This also applies to controller resources; if you add a Deployment fragment with additional containers in its spec, FMP will merge them into the Deployment it generates. For example, let me add a pod fragment to the src/main/fabric8 directory:
~/work/repos/fmp-demo-project : $ cat src/main/fabric8/test-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: testkubee
spec:
  containers:
    - name: testkubepod
      image: nginx
~/work/repos/fmp-demo-project : $ mvn fabric8:resource
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:4.3.0:resource (default-cli) @ random-generator ---
[INFO] F8: Running generator spring-boot
[INFO] F8: spring-boot: Using Container image fabric8/java-centos-openjdk8-jdk:1.5 as base / builder
[INFO] F8: using resource templates from /home/rohaan/work/repos/fmp-demo-project/src/main/fabric8
[INFO] F8: fmp-controller: Adding a default Deployment
[INFO] F8: fmp-service: Adding a default service 'random-generator' with ports [8080]
[INFO] F8: f8-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 10 seconds
[INFO] F8: f8-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 180 seconds
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/random-generator-service.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/testkubee-pod.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/random-generator-deployment.yml resource
[INFO] F8: using resource templates from /home/rohaan/work/repos/fmp-demo-project/src/main/fabric8
[INFO] F8: fmp-controller: Adding a default DeploymentConfig
[INFO] F8: fmp-service: Adding a default service 'random-generator' with ports [8080]
[INFO] F8: f8-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 10 seconds
[INFO] F8: f8-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 180 seconds
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-service.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/testkubee-pod.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-route.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-deploymentconfig.yml resource
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.210 s
[INFO] Finished at: 2020-01-26T14:32:09+05:30
[INFO] ------------------------------------------------------------------------
~/work/repos/fmp-demo-project : $ mvn fabric8:apply
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:4.3.0:apply (default-cli) @ random-generator ---
[INFO] F8: Using Kubernetes at https://192.168.39.39:8443/ in namespace default with manifest /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes.yml
[INFO] F8: Using namespace: default
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Service from kubernetes.yml namespace default name random-generator
[INFO] F8: Created Service: target/fabric8/applyJson/default/service-random-generator-1.json
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Deployment from kubernetes.yml namespace default name random-generator
[INFO] F8: Created Deployment: target/fabric8/applyJson/default/deployment-random-generator-1.json
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Pod from kubernetes.yml namespace default name testkubee
[INFO] F8: Created Pod result: Pod(apiVersion=v1, kind=Pod, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2020-01-26T09:02:18Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, labels={app=random-generator, group=meetup, provider=fabric8, version=0.0.1}, managedFields=[], name=testkubee, namespace=default, ownerReferences=[], resourceVersion=200332, selfLink=/api/v1/namespaces/default/pods/testkubee, uid=9959a629-7dff-4a66-802a-c3c24107ce6b, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, affinity=null, automountServiceAccountToken=null, containers=[Container(args=[], command=[], env=[EnvVar(name=KUBERNETES_NAMESPACE, value=null, valueFrom=EnvVarSource(configMapKeyRef=null, fieldRef=ObjectFieldSelector(apiVersion=v1, fieldPath=metadata.namespace, additionalProperties={}), resourceFieldRef=null, secretKeyRef=null, additionalProperties={}), additionalProperties={})], envFrom=[], image=nginx, imagePullPolicy=IfNotPresent, lifecycle=null, livenessProbe=null, name=testkubepod, ports=[ContainerPort(containerPort=8080, hostIP=null, hostPort=null, name=http, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=9779, hostIP=null, hostPort=null, name=prometheus, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=8778, hostIP=null, hostPort=null, name=jolokia, protocol=TCP, additionalProperties={})], readinessProbe=null, resources=ResourceRequirements(limits=null, requests=null, additionalProperties={}), securityContext=SecurityContext(allowPrivilegeEscalation=null, capabilities=null, privileged=false, procMount=null, readOnlyRootFilesystem=null, runAsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, windowsOptions=null, additionalProperties={}), stdin=null, stdinOnce=null, terminationMessagePath=/dev/termination-log, terminationMessagePolicy=File, tty=null, volumeDevices=[], volumeMounts=[VolumeMount(mountPath=/var/run/secrets/kubernetes.io/serviceaccount, mountPropagation=null, name=default-token-qx85s, readOnly=true, subPath=null, subPathExpr=null, additionalProperties={})], workingDir=null, additionalProperties={})], dnsConfig=null, dnsPolicy=ClusterFirst, enableServiceLinks=true, hostAliases=[], hostIPC=null, hostNetwork=null, hostPID=null, hostname=null, imagePullSecrets=[], initContainers=[], nodeName=null, nodeSelector=null, preemptionPolicy=null, priority=0, priorityClassName=null, readinessGates=[], restartPolicy=Always, runtimeClassName=null, schedulerName=default-scheduler, securityContext=PodSecurityContext(fsGroup=null, runAsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, supplementalGroups=[], sysctls=[], windowsOptions=null, additionalProperties={}), serviceAccount=default, serviceAccountName=default, shareProcessNamespace=null, subdomain=null, terminationGracePeriodSeconds=30, tolerations=[Toleration(effect=NoExecute, key=node.kubernetes.io/not-ready, operator=Exists, tolerationSeconds=300, value=null, additionalProperties={}), Toleration(effect=NoExecute, key=node.kubernetes.io/unreachable, operator=Exists, tolerationSeconds=300, value=null, additionalProperties={})], volumes=[Volume(awsElasticBlockStore=null, azureDisk=null, azureFile=null, cephfs=null, cinder=null, configMap=null, csi=null, downwardAPI=null, emptyDir=null, fc=null, flexVolume=null, flocker=null, gcePersistentDisk=null, gitRepo=null, glusterfs=null, hostPath=null, iscsi=null, name=default-token-qx85s, nfs=null, persistentVolumeClaim=null, 
photonPersistentDisk=null, portworxVolume=null, projected=null, quobyte=null, rbd=null, scaleIO=null, secret=SecretVolumeSource(defaultMode=420, items=[], optional=null, secretName=default-token-qx85s, additionalProperties={}), storageos=null, vsphereVolume=null, additionalProperties={})], additionalProperties={}), status=PodStatus(conditions=[], containerStatuses=[], hostIP=null, initContainerStatuses=[], message=null, nominatedNodeName=null, phase=Pending, podIP=null, qosClass=BestEffort, reason=null, startTime=null, additionalProperties={}), additionalProperties={})
[INFO] F8: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.207 s
[INFO] Finished at: 2020-01-26T14:32:22+05:30
[INFO] ------------------------------------------------------------------------
~/work/repos/fmp-demo-project : $ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
random-generator-86496844ff-2tzn2   1/1     Running   0          43s
testkubee                           1/1     Running   0          43s
~/work/repos/fmp-demo-project : $
You can either provide pod fragments like I did in the src/main/fabric8 directory, or you can provide a customized Deployment resource fragment if you want all containers as part of your Deployment, like in this example (see the sketch below).
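A minimal sketch of such a Deployment fragment (e.g. src/main/fabric8/deployment.yml), under the assumption that FMP merges it with the Deployment it generates for the application image; the sidecar name and image here are illustrative:
spec:
  template:
    spec:
      containers:
        # extra sidecar container; FMP is expected to add the application
        # container built from the image configuration alongside it
        - name: nginx-sidecar
          image: nginx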

How to use gradle-enterprise-maven-extension behind a proxy?

Wanting to use the gradle-enterprise-maven-extension
to migrate a project from Maven to Gradle,
I cloned the maven-build-scan-quickstart provided by the Gradle team.
But running mvn install, as specified, after accepting the Gradle Terms of Service,
I receive the following message:
UnknownHostException: scans-in.gradle.com
I tried to configure the proxy in several ways.
Using a command-line option:
mvn install -Dhttps.proxyHost=localhost:8888 -Dhttps.proxyPort=localhost:8888
Using a gradle.properties file in the %GRADLE_USER_HOME% directory:
(The proxy is also configured in Maven's settings.xml and needs no password; it works fine when using Maven in standard ways.)
gradle.properties:
systemProp.http.proxyHost=localhost
systemProp.http.proxyPort=8888
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost|127.0.0.1
systemProp.https.proxyHost=localhost
systemProp.https.proxyPort=8888
systemProp.https.nonProxyHosts=*.nonproxyrepos.com|localhost|127.0.0.1
But I still receive the same message:
~/github/maven-build-scan-quickstart
$ mvn install
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.830 s
[INFO] Finished at: 2019-12-31T13:09:19+01:00
[INFO] ------------------------------------------------------------------------
[INFO] 7 goals, 7 executed
Publishing a build scan to scans.gradle.com requires accepting the Gradle Terms of Service defined at https://gradle.com/terms-of-service. Do you accept these terms? (yes/no): yes
[INFO] Gradle Terms of Service accepted.
[INFO]
[INFO] Publishing build scan...
[INFO]
[INFO] A network error occurred.
[INFO]
[INFO] The hostname 'scans-in.gradle.com' could not be resolved.
[INFO] You may be disconnected from the Internet.
[INFO]
[INFO] If you require assistance with this problem, please report it via https://gradle.com/help/plugin and include the following information via copy/paste.
[INFO]
[INFO] ----------
[INFO] Maven version: 3.6.2
[INFO] Extension version: 1.3.3
[INFO] Request URL: https://scans-in.gradle.com/in/maven/3.6.2/1.3.3
[INFO] Request ID: 1234567e-abcd-23de-c2d3-3fbbccd14a32
[INFO] Exception: java.net.UnknownHostException: scans-in.gradle.com
[INFO] ----------
[INFO]
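One more avenue that fits the configuration attempts above: the extension runs inside the Maven JVM, so the proxy system properties can also be handed to that JVM directly via MAVEN_OPTS. A sketch, assuming the same proxy on localhost:8888; whether this clears the UnknownHostException depends on the proxy performing the DNS resolution for scans-in.gradle.com:
# hand the proxy settings to the Maven JVM itself
export MAVEN_OPTS="-Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888 -Dhttp.proxyHost=localhost -Dhttp.proxyPort=8888"
mvn install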

VSTS Maven build - no JUnit tests run

[Update: problem cause found! Read below.]
Problem: the VSTS Maven build does not seem to run JUnit, does not show any JUnit results, and does not seem to produce any JUnit test reports.
In VSTS, we have a Java project with a Contact class and a ContactTest class containing one test case:
source\module\src\main\java\nl\customer\model\situation\Contact.java
source\module\src\test\java\nl\customer\model\ContactTest.java
source\module\pom.xml
Running Maven from Eclipse works fine; the console shows Maven using Surefire and running/passing the one unit test.
Running the project with Maven on a Windows PC also works:
mvn test
Logging:
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Domain Model
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ module ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 30 resources
[INFO] skip non existing resourceDirectory C:\project\source\projectdomain\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ module ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ module ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory C:\project\source\module\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ module ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.20.1:test (default-test) @ module ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running nl.customer.module.ContactTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 s - in nl.customer.module.ContactTest
[INFO]
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.109 s
[INFO] Finished at: 2018-01-09T14:05:03+01:00
[INFO] Final Memory: 10M/196M
[INFO] ------------------------------------------------------------------------
In VSTS, using the Maven build step (goal: install) succeeds, but the log does not show anything about JUnit, even with system.debug = true.
Consequently, trying the "Publish test results" step always fails (both when using a separate build task and when using the Publish to TFS option in the Maven build task).
It seems I have found the cause of the problem: in VSTS, the Maven build task has the option "Set MAVEN_OPTS to".
In our build definition, this was set to
-Xmx1024m -X
The first parameter is correct; it sets the maximum heap size.
The second parameter is incorrect: if you want Maven to produce debug output, the -X parameter should go under Goal(s) instead.
There is no real warning about the second parameter in the logging.
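To make the fix concrete, the corrected build task settings would look roughly like this (field labels as they appear in the classic Maven task; values taken from the definition described above):
Set MAVEN_OPTS to:  -Xmx1024m
Goal(s):            install -X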
