Exit from shell script after execution via maven - shell

I am executing the shell script below (my.sh) via Maven, configured as shown.
#!/bin/sh
UN="username"
PWD="password"
echo "Enter userid password"
oc login "https://server-name:8443" --insecure-skip-tls-verify -u $UN -p $PWD
oc project dev1
oc port-forward image-name 1521:1521 && exit
POM:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <id>my-exec</id>
      <phase>initialize</phase>
      <goals>
        <goal>exec</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <executable>sh</executable>
    <arguments>
      <argument>-c</argument>
      <argument>${project-home}/resources/my.sh</argument>
    </arguments>
  </configuration>
</plugin>
But the problem is that when I run the Maven command (mvn spring-boot:run -Dspring-boot.run.profiles=it), the terminal stops at "Forwarding from 127.0.0.1:1521 -> 1521" and gets stuck there without moving forward.

After changing the last line as shown below, it worked:
nohup oc port-forward image-name 1521:1521 > /dev/null 2>&1 </dev/null &
echo
The > /dev/null redirect discards the process's standard output (the null device is a data sink, not a directory); 2>&1 sends standard error to the same place, and </dev/null detaches standard input. The trailing & is a simple way to launch the port-forward as a background process so the script can finish.
nohup disconnects the inputs and outputs of the background sub-process from the parent process and tells the sub-process not to respond to the HUP (hangup) signal.
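For reference, the complete modified script might look like this (a sketch, assuming the same server, project, and image names as above):
#!/bin/sh
UN="username"
PWD="password"
# Log in non-interactively with the stored credentials.
oc login "https://server-name:8443" --insecure-skip-tls-verify -u $UN -p $PWD
oc project dev1
# Start the port-forward detached from this shell so the Maven build can continue.
nohup oc port-forward image-name 1521:1521 > /dev/null 2>&1 </dev/null &
echo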

Related

Spring Boot app crashing in Docker container (but not at cmd line)

I have a small Spring Boot rest service that runs fine with:
java -jar myapp.jar
...but when I deploy in a docker container, it crashes the container when I access the service with curl:
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0x00007f052205991a, pid=1, tid=40
JRE version: OpenJDK Runtime Environment Temurin-19.0.1+10 (19.0.1+10) (build 19.0.1+10)
Java VM: OpenJDK 64-Bit Server VM Temurin-19.0.1+10 (19.0.1+10, mixed mode, sharing, tiered, compressed oops, compressed class ptrs)
Problematic frame: V [libjvm.so+0xe2f91a] JVM_handle_linux_signal+0x13a
The Dockerfile:
FROM amd64/eclipse-temurin:19.0.1_10-jre-alpine
VOLUME /opt/galleries
RUN mkdir -p /opt/rest.galleries/logs/
ARG JAR_FILE
ADD ${JAR_FILE} /opt/rest.galleries/app.jar
EXPOSE 8000
ENTRYPOINT ["java","-jar","/opt/rest.galleries/app.jar"]
Creating the container from the image:
docker run -p 8000:8000 -v /opt/galleries:/opt/galleries --memory="1g" --memory-swap="2g" -t craigfoote/rest.galleries:latest &
I am using these libraries to read WebP and JPG images:
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-imaging</artifactId>
  <version>1.0-alpha3</version>
</dependency>
<dependency>
  <groupId>org.sejda.imageio</groupId>
  <artifactId>webp-imageio</artifactId>
  <version>0.1.6</version>
</dependency>
I'm building the image via:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.13</version>
  <executions>
    <execution>
      <id>default</id>
      <goals>
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <repository>${project.artifactId}</repository>
    <tag>${project.version}</tag>
    <buildArgs>
      <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
    </buildArgs>
  </configuration>
</plugin>
The point at which it crashes is a call to:
ImageIO.read(file); // where file is a 238kB webp image
Since it works at the command line, I assume the code itself is not the problem, but something about the container may be triggering the crash, perhaps a memory issue? I tried modifying the docker run command to increase RAM and swap, but it didn't help:
docker run -p 8000:8000 -v /opt/galleries:/opt/galleries --memory="4g" --memory-swap="8g" -t craigfoote/rest.galleries:latest &
When the crash occurs, the console states that an hs_err_pid1.log file was written, but I can't find it.
Any ideas anyone?
It appears that the base image, amd64/eclipse-temurin:19.0.1_10-jre-alpine, uses a different libc (Alpine ships musl) than the one the native library in org.sejda.imageio:webp-imageio was built against. I changed to an Ubuntu base image and installed OpenJDK 19, and everything works now. My Dockerfile:
FROM ubuntu:latest
RUN apt update && \
apt install -y openjdk-19-jdk ca-certificates-java && \
apt clean && \
update-ca-certificates -f
ENV JAVA_HOME /usr/lib/jvm/java-19-openjdk-amd64/
# Note: the ENV instruction above already persists JAVA_HOME; this RUN export has no effect.
RUN export JAVA_HOME
VOLUME /opt/galleries
RUN mkdir -p /opt/rest.galleries/logs/
ARG JAR_FILE
ADD ${JAR_FILE} /opt/rest.galleries/app.jar
EXPOSE 8000
ENTRYPOINT ["java","-jar","/opt/rest.galleries/app.jar"]

Jib-Maven-plugin with Jenkins scripted pipeline: how to log in to private docker registry?

Regarding this problem, I updated my JHipster application to use a scripted Jenkins pipeline and now have the following in my Jenkinsfile (partly following these hints):
[...]
def dockerImage
withEnv(["DOCKER_CREDS=credentials('myregistry-login')"]) {
    stage('publish docker') {
        sh "./mvnw -X -ntp jib:build"
    }
}
with the Jenkins global credentials myregistry-login saved on my Jenkins server for my own Docker registry v2 container at https://myregistry.mydomain.com (domain changed for security reasons). Using the user and password stored in myregistry-login, I can successfully run docker login myregistry.mydomain.com (as well as docker login https://myregistry.mydomain.com and docker login myregistry.mydomain.com:443) from a local bash.
In pom.xml (following these hints as well as this, this and this):
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <configuration>
    <to>
      <image>myregistry.mydomain.com:443/username/imagename</image>
      <tags>
        <tag>${maven.build.timestamp}</tag>
        <tag>latest</tag>
      </tags>
      <auth>
        <username>${env.DOCKER_CREDS_USR}</username>
        <password>${env.DOCKER_CREDS_PSW}</password>
      </auth>
    </to>
    <container>
      <jvmFlags>
        <jvmFlag>-Xms512m</jvmFlag>
        <jvmFlag>-Xmx1G</jvmFlag>
        <jvmFlag>-Xdebug</jvmFlag>
      </jvmFlags>
      <mainClass>de.myproject_name.MyApp</mainClass>
    </container>
  </configuration>
</plugin>
where username, imagename and de.myproject_name.MyApp are placeholders here.
Unfortunately I get
[DEBUG] TIMING Retrieving registry credentials for myregistry.mydomain.com:443
[DEBUG] No credentials could be retrieved for registry myregistry.mydomain.com:443
[...]
[ERROR] I/O error for image [myregistry.mydomain.com:443/username/imagename]:
[ERROR] Connect to myregistry.mydomain.com:443 [myregistry.mydomain.com/xxx.xxx.xxx.xxx] failed: Connection refused (Connection refused)
[DEBUG] TIMED Authenticating push to myregistry.mydomain.com:443 : 460.0 ms
[DEBUG] TIMED Building and pushing image : 514.0 ms
[ERROR] I/O error for image [registry-1.docker.io/library/adoptopenjdk]:
[ERROR] Socket closed
So the withEnv value isn't forwarded to Maven, and/or the jib-maven-plugin is not reading the <auth> tag, right? What am I still doing wrong?
And why is there an I/O error for registry-1.docker.io?
Finally I've got it working.
In the Jenkinsfile I edited the JHipster-generated code to:
def dockerImage
stage('publish docker') {
    withCredentials([usernamePassword(credentialsId: 'myregistry-login', passwordVariable: 'DOCKER_REGISTRY_PWD', usernameVariable: 'DOCKER_REGISTRY_USER')]) {
        sh "./mvnw -ntp jib:build"
    }
}
In pom.xml I put the jib-maven-plugin configuration:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <configuration>
    <from>
      <image>adoptopenjdk:11-jre-hotspot</image>
    </from>
    <to>
      <auth>
        <username>${DOCKER_REGISTRY_USER}</username>
        <password>${DOCKER_REGISTRY_PWD}</password>
      </auth>
      <image>myregistry.mydomain.com/myuser/my_image</image>
      <tags>
        <tag>${maven.build.timestamp}</tag>
        <tag>latest</tag>
      </tags>
    </to>
    <container>
      <jvmFlags>
        <jvmFlag>-Xms512m</jvmFlag>
        <jvmFlag>-Xmx1G</jvmFlag>
        <jvmFlag>-Xdebug</jvmFlag>
      </jvmFlags>
      <mainClass>com.mypackage.MyApp</mainClass>
      <entrypoint>
        <shell>bash</shell>
        <option>-c</option>
        <arg>chmod +x /entrypoint.sh && sync && /entrypoint.sh</arg>
      </entrypoint>
      <ports>
        <port>8080</port>
      </ports>
      <environment>
        <SPRING_OUTPUT_ANSI_ENABLED>ALWAYS</SPRING_OUTPUT_ANSI_ENABLED>
        <JHIPSTER_SLEEP>0</JHIPSTER_SLEEP>
      </environment>
      <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
    </container>
  </configuration>
</plugin>
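Since bare ${...} placeholders in a POM also resolve from Maven system properties, the same configuration can be exercised outside Jenkins by passing the credentials with -D (a local test sketch; the values are placeholders):
./mvnw -ntp jib:build -DDOCKER_REGISTRY_USER=myuser -DDOCKER_REGISTRY_PWD=changeme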
On my remote server, my own Docker registry v2 runs as a Docker container published via nginx-proxy with letsencrypt-nginx-proxy-companion. My own Jenkins server runs as another Docker container on the same custom bridge network.
Some tests showed me that the registry container must not be named after the registry's public DNS name (e.g. 'myregistry.mydomain.com' as the container name): the Jenkins container gets the embedded Docker DNS server into its resolv.conf, and Docker resolves the names of containers on the same network to their internal bridge-network IPs (only in the case of custom Docker networks).
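One can observe this resolution from inside a container on the same custom network (a sketch; jenkins and registry are hypothetical container names):
docker exec jenkins getent hosts registry
# prints the registry container's internal bridge-network IP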
I guess Jib has to connect via SSL to push the Docker image, and SSL is terminated by nginx-proxy in front of the registry container, so the external address of the registry's domain has to be used.
Also, the Docker host's firewall has to be configured (according to this link) to allow traffic from the Jenkins container through to the Docker host. At the host it then goes back again to the Docker registry via nginx-proxy with SSL, right? In my case, this comes down to:
$ sudo firewall-cmd --info-zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp6s0
  sources:
  [...]
  rich rules:
    rule family="ipv4" source address="172.26.0.13/32" accept

"gpg: signing failed: Inappropriate ioctl for device" on MacOS with Maven

I have installed GPG via Homebrew with brew install gpg.
Version 2.2.17 is installed.
In my Maven POM I have this snippet:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-gpg-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>sign-artifacts</id>
      <phase>verify</phase>
      <goals>
        <goal>sign</goal>
      </goals>
    </execution>
  </executions>
</plugin>
However when running mvn clean verify I get this error:
gpg: Beglaubigung fehlgeschlagen: Inappropriate ioctl for device
gpg: signing failed: Inappropriate ioctl for device
How can I fix this error?
I have added
GPG_TTY=$(tty)
export GPG_TTY
to my ~/.bash_profile file. Now it is working.
See also https://github.com/Homebrew/homebrew-core/issues/14737#issuecomment-309848851
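To verify the fix in the current shell, reload the profile and trigger a test signature (a quick sketch; gpg should now show the pinentry prompt instead of failing):
source ~/.bash_profile
echo test | gpg --clearsign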
On macOS you may want to use pinentry-mac to get a GUI window for entering the PIN and, optionally, storing it in the keychain.
You can install it through Homebrew:
brew install pinentry-mac
And enable it with the following line in your ~/.gnupg/gpg-agent.conf config (create the file if it doesn't exist):
pinentry-program /usr/local/bin/pinentry-mac
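After editing gpg-agent.conf, restart the agent so it picks up the new pinentry program (it restarts automatically on next use):
gpgconf --kill gpg-agent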
Try signing something manually, so that gpg prompts for your passphrase and the agent caches it:
gpg --use-agent --armor --detach-sign --output $(mktemp) pom.xml
For me, this happened because the terminal window wasn't big enough to fit the passphrase TUI. Once I opened a bigger terminal tab and reran the gpg command, I was able to see the passphrase terminal user interface.
If anybody gets this error message when typing gpg commands in bash, try adding --no-tty. That fixed it for me.

How to deploy a Spring Boot Maven application with Jenkins?

I have a Spring Boot application which runs on the embedded Tomcat servlet container via mvn spring-boot:run, and I don't want to deploy the project as a separate WAR to a standalone Tomcat.
Whenever I push code to BitBucket/GitHub, a hook runs and triggers a Jenkins job (running on Amazon EC2) to deploy the application.
The Jenkins job has a post-build action, mvn spring-boot:run; the problem is that the job hangs on this post-build action instead of finishing.
There should be another way to do this. Any help would be appreciated.
The problem is that Jenkins doesn't handle spawning child processes from builds very well. The workaround suggested by @Steve in the comments (nohuping) didn't change the behaviour in my case, but a simple workaround was to schedule the app's start using the at unix command:
> echo "mvn spring-boot:run" | at now + 1 minutes
This way Jenkins successfully completes the job without timing out.
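If you want to confirm that the job was queued, at ships two helpers (a quick check; job numbers will differ):
> atq      # list pending at jobs
> at -c 1  # print the environment and command of job 1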
If you end up running your application from a .jar file via java -jar app.jar be aware that Boot breaks if the .jar file is overwritten, you'll need to make sure the application is stopped before copying the artifact. If you're using ApplicationPidListener you can verify that the application is running (and stop it if it is) by adding execution of this command:
> test -f application.pid && xargs kill < application.pid || echo 'App was not running, nothing to stop'
I find it very useful to first copy the artifacts to a dedicated area on the server, both to keep track of the deployed artifacts and to avoid starting the app from the Jenkins job folder. Then create a server log file there and listen to it in the Jenkins window until the server has started.
To do that I developed a small shell script that you can find here.
You will also find a small article explaining how to configure the project in Jenkins.
Please let me know if it worked for you. Thanks.
The nohup and the at now + 1 minutes approaches didn't work for me.
Jenkins was killing the process spawned in the background: its ProcessTreeKiller identifies processes started by a build through the BUILD_ID environment variable, so I kept the process alive by setting a fake BUILD_ID for that Jenkins task. This is what the Jenkins Execute shell task looks like:
BUILD_ID=do_not_kill_me
java -jar -Dserver.port=8053 /root/Deployments/my_application.war &
exit
As discussed here.
I assume you have a Jenkins user on the server and this user is the owner of the Jenkins service:
Log in on the server as root.
Run sudo visudo.
Add "jenkins ALL=(ALL) NOPASSWD:ALL" at the end (jenkins = your Jenkins user).
Sign in to Jenkins, choose your job and click Configure.
Choose "Execute shell" in the "Post build step".
Copy and paste this:
service=myapp
if ps ax | grep -v grep | grep -v $0 | grep $service > /dev/null
then
    sudo service myapp stop
    sudo unlink /etc/init.d/myapp
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
    sudo service myapp start
else
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
    sudo service myapp start
fi
Save and run your job; the service should start automatically.
This worked for me with Jenkins on a Linux machine:
kill -9 $(lsof -t -i:8080) || echo "Process was not running."
mvn clean compile
echo "mvn spring-boot:run" | at now + 1 minutes
If no process is listening on 8080, it will print the message and continue.
Make sure that at is installed on your Linux machine. You can install it with:
sudo apt-get install at

maven 3 site-deploy gets stuck in authentication prompt in Jenkins build

How do you get rid of this prompt when using site-deploy?
"Are you sure you want to continue connecting?"
I know this question has been asked multiple times (link, link), but the recommended solutions do not work for me and I will explain why.
Oh, and I posted pretty much the exact same question here
where the solution is to:
# Run this manually:
ssh -o UserKnownHostsFile=foo javadoc.foo.com
# Take that file and put it in your private DAV share, and then
ssh -o UserKnownHostsFile=/private/<account>/known_hosts javadoc.foo.com
This has been working fine 99% of the time, but with this solution, every once in a while we get the following text over and over again in the logs:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
3d:69:41:8a:ec:d1:4c:d9:75:ef:7d:71:b7:7d:61:d0.
Please contact your system administrator.
Add correct host key in known_hosts to get rid of this message.
Do you want to delete the old key and insert the new key? (yes/no)
So, back to my problem: in a nutshell, the problem is this:
When I run mvn site-deploy, it gets stuck in an infinite loop in Jenkins:
The authenticity of host 'javadoc.foo.com' can't be established.
RSA key fingerprint is 3d:69:41:8a:ec:d1:4c:d9:75:ef:7d:71:b7:7d:61:d0.
Are you sure you want to continue connecting? (yes/no)
The authenticity of host 'javadoc.foo.com' can't be established.
RSA key fingerprint is 3d:69:41:8a:ec:d1:4c:d9:75:ef:7d:71:b7:7d:61:d0.
Are you sure you want to continue connecting? (yes/no)
The machine that this occurs on is a CloudBees machine, so it's not a machine that we own. In other words, every time we do a build, a brand new machine is provisioned to us.
Our settings.xml has something like:
<server>
  <id>javadoc.foo.com</id>
  <username>username</username>
  <password>password</password>
</server>
If it was a machine that we owned and controlled, we could manually ssh on there and run the ssh command just once so that this is fixed, but like I said, those machines are dynamically provisioned to us.
Since we are using maven 3 and not maven 2, we cannot add the following to our server section of the settings.xml:
<configuration>
  <knownHostsProvider implementation="org.apache.maven.wagon.providers.ssh.knownhost.NullKnownHostProvider">
    <hostKeyChecking>no</hostKeyChecking>
  </knownHostsProvider>
</configuration>
Is there a way to either:
programmatically answer yes (this is not a free-style Jenkins job; this is a Maven project.)
an alternative to site-deploy (ant code within the pom.xml?)
have site-deploy fail if this question does not get answered, so that the Jenkins build doesn't fill gigs of disk space with this question repeated over and over again.
tell the site-deploy plugin to set stricthostkeychecking to "no"
I would like to avoid any pre-build steps that could tweak ssh settings; I would prefer to either tweak the settings.xml, pom.xml, or maven options.
Nonetheless, I'm open to any suggestions.
You can manage to get it to work using this settings.xml configuration:
<server>
  <id>site</id>
  <username>_your_login_user_</username>
  <privateKey>_path_to_key_identify_file</privateKey>
  <configuration>
    <strictHostKeyChecking>no</strictHostKeyChecking>
    <preferredAuthentications>publickey,password</preferredAuthentications>
    <interactive>false</interactive>
  </configuration>
</server>
along with the following in pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.6</version>
  <dependencies>
    <dependency><!-- add support for ssh/scp -->
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh</artifactId>
      <version>2.12</version>
    </dependency>
  </dependencies>
</plugin>
The issue https://issues.apache.org/jira/browse/WAGON-467 addressed the strictHostKeyChecking parameter for wagon-ssh and has been resolved in recent versions.
Add a shell pre-build step that creates ~/.ssh/config with the content:
StrictHostKeyChecking no
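A minimal version of that pre-build step might look like this (a sketch; adjust for the build agent's user and home directory):
mkdir -p ~/.ssh
printf 'StrictHostKeyChecking no\n' >> ~/.ssh/config
chmod 600 ~/.ssh/config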
echo yes | mvn site:deploy
Totally fixed this for me despite having tried many other routes.
I couldn't find a way around this. Using org.apache.maven.wagon.providers.ssh.knownhost.NullKnownHostProvider didn't work, which seems to be a known issue.
But assuming you're on a unix box of some sort, you can use the following as a workaround to send yes when prompted, if you don't want to change the ssh config:
echo yes | mvn site:deploy
In case you would rather not use StrictHostKeyChecking no, and also for anyone hitting this problem on Windows, I have another solution:
Normally your known_hosts can be found under
C:\Users\<YourUsername>\.ssh\known_hosts
For a Windows service installation of Jenkins you should copy your known_hosts to:
C:\Windows\System32\config\systemprofile\.ssh\
Or, in the case of a 64-bit Jenkins version, to:
C:\Windows\SysWOW64\config\systemprofile\.ssh\
For Unix/Linux systems use analogous paths: copy the known_hosts (or only the relevant parts of it) from your account to the Jenkins user.
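On a typical Linux package installation, where Jenkins runs as the jenkins user with home /var/lib/jenkins, that copy might look like this (a sketch; paths vary by distribution and setup):
sudo mkdir -p /var/lib/jenkins/.ssh
sudo cp ~/.ssh/known_hosts /var/lib/jenkins/.ssh/
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
sudo chmod 700 /var/lib/jenkins/.ssh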
