I am trying to integrate my application with the APM Framework. I've already done this for a series of other applications and everything worked as expected. This particular Java Spring Boot application, however, gives me the following error:
I've checked that the TLS certificate is valid in the container the application runs in. As far as I understand, TLS 1.2 is used everywhere. I have also checked and updated some HTTP dependencies (okhttp) in case there was a TLS issue there. No luck. I've checked that the APM secret token I am using is the correct one, and validated the APM environment variables I've entered.
I am calling ElasticApmAttacher.attach(); in the main method of my Application class.
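For reference, the attach call is wired up roughly like this (the class and package names here are illustrative, not my real ones):

import co.elastic.apm.attach.ElasticApmAttacher;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApiApplication {
    public static void main(String[] args) {
        // Attach the Elastic APM agent before Spring starts so instrumentation is in place early
        ElasticApmAttacher.attach();
        SpringApplication.run(MyApiApplication.class, args);
    }
}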
I have an elasticapm.properties file as follows:
enable_log_correlation=true
service_name=my-api-name
The following env vars:
ELASTIC_APM_APPLICATION_PACKAGES=my_classpath_here
ELASTIC_APM_SERVER_URL=apm_addresss_here
ELASTIC_APM_SECRET_TOKEN=token_here
Any ideas on what else to look for would be greatly appreciated.
I added the following env var to the Dockerfile:
ENV JAVA_OPTIONS="-Dhttps.protocols=TLSv1.1,TLSv1.2"
Dockerfile base image change:
from java:8-jdk-alpine to openjdk:8u272-jdk
And it worked.
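For anyone debugging something similar: before swapping the base image, a quick way to see which TLS protocol versions the JVM supports and enables by default is a small diagnostic class like this (illustrative only, not part of the fix above):

import javax.net.ssl.SSLContext;
import java.util.Arrays;

public class TlsProtocolCheck {
    public static void main(String[] args) throws Exception {
        // Print the https.protocols override (if any) and what the default SSLContext offers
        System.out.println("https.protocols = " + System.getProperty("https.protocols"));
        SSLContext ctx = SSLContext.getDefault();
        System.out.println("Supported: " + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        System.out.println("Enabled by default: " + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
    }
}

Running this inside the old and the new container makes the difference between the two base images visible.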
I'm deploying a pretty basic web app to Google App Engine. I'm using Spring Boot and I can run the app locally just fine, but when I deploy to Google, App Engine does not start up the instance. I have a Cloud SQL datasource being configured on startup.
I have the Cloud SQL configuration properties in src/main/resources/application.properties. It seems that App Engine cannot find these properties, so it fails to properly set up the Cloud SQL datasource.
Has anyone run into this issue? It seems like something so basic. I'm hoping a second pair of eyes can shed some light. Thanks!
EDIT:
Thank you for your responses. Here is a snippet of the properties file:
#################### DATABASE SETTINGS
#POSTGRES connection parameters
application.postgres.driver-name=org.postgresql.Driver
application.postgres.url=jdbc:postgresql://<url>:5432/<db>
application.postgres.max-pool-size=5
application.postgres.min-pool-size=1
application.postgres.connection-wait-time-seconds=60
application.postgres.schema=dev
application.postgres.preparedstatement-cache-queries-size=256
application.postgres.preparedstatement-cache-size=500
application.postgres.preparedstatement-cache-sql-limit=5
application.postgres.preparedthreshold=5
application.postgres.ssl.enabled=true
#################### GOOGLE CLOUD SETTINGS
application.google.project.id=<our project>
application.google.project.region=us-central1
application.google.project.cloud.sql.instance.name=<cloud sql instance>
application.google.project.cloud.sql.instance=${application.google.project.id}:${application.google.project.region}:${application.google.project.cloud.sql.instance.name}
I have this located in src/main/resources (as usual).
I've tried running in both the flex and standard environments, to make sure one or the other wasn't causing the issue.
My appengine-web.xml looked like this for the standard env:
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>my app name</application>
    <service>my service name</service>
    <version>1</version>
    <threadsafe>true</threadsafe>
    <runtime>java8</runtime>
</appengine-web-app>
And here is my YAML file for the flex environment (nothing fancy, a pretty bare-bones setup):
runtime: java
env: flex
service: my service name
handlers:
- url: /.*
  script: this field is required, but ignored
I would post the stack traces, but they are not very helpful. It's basically just the error that the Cloud SQL datasource cannot be loaded (i.e. because the properties are not being picked up). If I hardcode the values in my Config class that initializes the datasource, it works, so I can tell it's simply not picking up application.properties.
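For context, the Config class in question is shaped roughly like this, with the values injected from the application.postgres.* properties via @Value (a simplified sketch, not the exact class; the HikariCP pool is an assumption on my part):

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PostgresConfig {

    // These are the placeholders that App Engine does not seem to be resolving
    @Value("${application.postgres.driver-name}")
    private String driverName;

    @Value("${application.postgres.url}")
    private String url;

    @Value("${application.postgres.username}")
    private String username;

    @Value("${application.postgres.password}")
    private String password;

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig(); // HikariCP assumed for illustration
        config.setDriverClassName(driverName);
        config.setJdbcUrl(url);
        config.setUsername(username);
        config.setPassword(password);
        return new HikariDataSource(config);
    }
}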
My app setup is typical:
src/main/java
src/main/resources
src/main/webapp
src/main/appengine (yaml location for the flex env)
Note: I'm attempting to pass the Postgres (i.e. Cloud SQL) username and password in my Maven deploy command:
mvn package -DskipTests appengine:deploy -Dapp.deploy.projectId=myproject -Dapp.deploy.version=1 -Dapplication.postgres.username= -Dapplication.postgres.password=
I faced a similar issue. Just add the following in app.yml:
resources:
  cpu: 2
  memory_gb: 3
  disk_size_gb: 10
  volumes:
  - volume_type: tmpfs
    size_gb: 2
    name: ramdisk1
I'm developing a POC on IBM Hyperledger blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working local REST API, but I cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
According to the guide, you just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than to blockchain):
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks so much.
You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details for setting up KUBECONFIG; the error suggests that it is either not configured or not configured correctly.
The document you refer to https://ibm-blockchain.github.io/interacting/ is being updated and should be available soon.
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, MY_BIZNET_CARD_NAME should be the name of the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network
It looks like an access control issue. You should double-check that you are running with the local admin configuration; that will let you run queries.
I'm trying to use Spring Boot DevTools (Spring Remote) to automatically upload recompiled class files to my Docker container.
I keep receiving
Unexpected 404 response uploading class files
This is my Dockerfile:
FROM java:8
WORKDIR /first
ADD ./build/libs/first.jar /first/first.jar
EXPOSE 8080
RUN bash -c 'touch /first/first.jar'
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongodb/micros", "-Djava.security.egd", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005","-jar", "first.jar"]
This is my run configuration:
And this is the error I'm receiving:
As of Spring Boot 1.5.0, devtools defaults were changed to exclude the devtools from fat jars.
If you want to include them, you have to set the excludeDevtools flag to false.
However, the devtools documentation doesn't explain how to do this. The necessary documentation is actually in the spring-boot-gradle-plugin documentation.
To do it, you can put this snippet of code in your build.gradle file:
bootRepackage {
    excludeDevtools = false
}
Unfortunately, this was buggy at first and had no effect in Spring Boot 1.5.0. The workaround was to do this instead:
springBoot {
    excludeDevtools = false
}
However, I have verified that the bootRepackage approach works as of Spring Boot 1.5.8.
I ran into the same issue while using docker-compose to compose my application (a web service + Redis server + Mongo server).
As the Spring Boot developer tools documentation points out: "Developer tools are automatically disabled when running a fully packaged application. If your application is launched using java -jar or if it's started using a special classloader, then it is considered a 'production application'."
I think that when we run the Spring web application inside a Docker container, the developer tools are disabled, so we can't remotely restart it.
Currently, I'm running my web application on the host machine and keeping the Redis and Mongo servers inside containers, so I can restart the web app quickly when the code changes during development.
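One thing that may be worth trying, based on the Spring Boot devtools documentation rather than on this particular setup: restart support can be force-enabled via the spring.devtools.restart.enabled system property, set before the application starts (this only helps if the devtools dependency is actually packaged into the jar, as discussed above):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // Force-enable devtools restart even for a packaged jar;
        // must be set before SpringApplication.run() is called.
        System.setProperty("spring.devtools.restart.enabled", "true");
        SpringApplication.run(Application.class, args);
    }
}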
In my case I had to include the application context in the argument of the IDE's RemoteSpringApplication run configuration.
For example, my application root context was /virtue, so I had to configure it like so:
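In other words, the remote URL passed to RemoteSpringApplication has to include the context path. A launcher equivalent to that IDE run configuration might look like this (host and port are assumptions for illustration):

public class RemoteDevToolsLauncher {
    public static void main(String[] args) throws Exception {
        // Same effect as an IDE run configuration whose main class is
        // org.springframework.boot.devtools.RemoteSpringApplication and whose
        // program argument is the remote URL including the /virtue context path.
        org.springframework.boot.devtools.RemoteSpringApplication.main(
                new String[] { "http://localhost:8080/virtue" });
    }
}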
I installed the Spring XD 1.0.1 release. I configured Spring XD to run in HTTPS mode by enabling the SSL properties as specified in https://github.com/spring-projects/spring-xd/wiki/Application-Configuration#enabling-https. I am able to start the XD admin and containers successfully after that. I set httpSSL.properties as well. However, I am not able to get the XD shell or the admin UI to run properly. I know I have to specify these new SSL properties for them to use, but I am not sure where. The output when I run the XD shell is:
1.0.1.RELEASE | Admin Server Target: http://localhost:9393
-------------------------------------------------------------------------------
Error: Unable to contact XD Admin Server at 'http://localhost:9393'.
Please execute 'admin config info' for more details.
-------------------------------------------------------------------------------
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
server-unknown:>
When I try the admin-ui, I just get a 'Connection Interrupted' error.
EDIT: I tried basic authentication by enabling the properties in servers.yml. With this I am able to get the admin UI to work, but the shell still does not work. I have been trying, unsuccessfully, to find which configuration I need to set to make this work.
Any pointers are greatly appreciated.
thanks much,
AG
Asha,
A few clarifications:
You do not need to change httpSSL.properties; that is necessary only for configuring HTTPS for the HTTP source.
Since you've enabled https, you must change the target URL accordingly, as follows:
xd:> admin config server https://localhost:9393
(please note that the protocol is https now)
If you also enable Basic security, you must add the configuration parameters to the configuration command, as in this example:
xd:> admin config server --uri https://localhost:9393 --username adminUserName --password adminPassword
(As described in the reference documentation)
Hope this helps,
Marius
I cannot get the proxy configuration to work for SonarQube 4.0 so that I can install plugins.
When I open http://localhost:9000/updatecenter/available it displays the error: "Not connected to update center. Please check your internet connection and logs."
In sonar.log I read: "org.sonar.api.utils.HttpDownloader$HttpException: Fail to download [http://update.sonarsource.org/update-center.properties]. Response code: 403"
In sonar.properties I configured it with the same proxy which I use for other programs:
sonar.updatecenter.activate=true
http.proxyHost=<host>
http.proxyPort=<port>
http.proxyUser=<username>
http.proxyPassword=<password>
I tried configuring the same in wrapper.properties, but that didn't work either.
For the proxy host I tried both the short name and the fully qualified name. For the username I tried just the username, as well as <DOMAINNAME>\<username> and <DOMAINNAME>\\<username>.
None of it worked. Any ideas?
My proxy configuration works and looks like this:
http.proxyHost=proxy.domain.de
http.proxyPort=8888
Note that there is no "http://" or anything else before the URL.
Also, I do not use proxy authentication, so I left "proxyUser" and "proxyPassword" commented out.
For those running SonarQube in Docker: I had no luck with any of the suggestions mentioned here, but I found the following solution, which worked for me (here):
docker run -d sonarqube -Dhttp.proxyHost=<myproxy.url.com> -Dhttp.proxyPort=<port>
And the equivalent in docker-compose notation:
services:
  sonarqube:
    image: sonarqube
    command: -Dhttp.proxyHost=<myproxy.url.com> -Dhttp.proxyPort=<port>
Just for information: I had this problem also.
I can see the plugins but cannot download them. The problem is that you have to add this line to your sonar.properties for HTTPS:
# https-proxy
sonar.web.javaAdditionalOpts=-Dhttps.proxyHost=xxxxx -Dhttps.proxyPort=xxxx -Dhttps.proxyUser=xxxx -Dhttps.proxyPassword=xxxx
I used the official documentation and it works:
Using the Update Center behind a Proxy
http.proxyHost=<your.proxy.host>
http.proxyPort=<your.proxy.port>
Regards,
In sonar.properties, set the proxy without "http://", i.e. only http.proxyHost=myproxy.domain.pt
Another suggestion is to also add these lines in wrapper.conf:
wrapper.java.additional.3=-Dhttp.proxySet=true
wrapper.java.additional.4=-Dhttp.proxyHost=myproxy.domain.pt
wrapper.java.additional.5=-Dhttp.proxyPort=myproxy.port
wrapper.java.additional.6=-Dhttps.proxyHost=myproxy.domain.pt
wrapper.java.additional.7=-Dhttps.proxyPort=myproxy.port
Be careful if you have a Docker volume: remove it before deploying the new container with this configuration, otherwise it will keep the original configuration.
Apart from HTTP, don't forget to set your HTTPS proxy configuration in sonar.properties (the update server is behind HTTPS):
https.proxyHost=<host>
https.proxyPort=<port>
https.proxyUser=<username>
https.proxyPassword=<password>