Spring Dataflow and Yarn: How to set properties properly?

How can one change the default appdeployerappmaster properties?
I'm trying to deploy an application through Spring DataFlow on YARN: I registered my app, created a stream, and clicked the "deploy" button. When doing so, I get the following error:
[XNIO-2 task-2] WARN o.s.c.d.s.c.StreamDeploymentController - Exception when deploying the app StreamAppDefinition [streamName=histo, name=my-app, registeredAppName=my-app, properties={spring.cloud.stream.bindings.input.destination=log, spring.cloud.stream.bindings.input.group=histo}]: java.util.concurrent.ExecutionException: org.springframework.yarn.YarnSystemException: Invalid host name: local host is: (unknown); destination host is: "null":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost; nested exception is java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "null":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
As you can see, the deployer is unable to find the Resource Manager URI, although it is correctly resolved when the Spring DataFlow Server starts.
So I only get the problem at deployment time.
Which property should I set to fix this issue, and where would I do that?
EDIT 1:
Following Janne Valkealahti's answer, I added the following properties in /dataflow/apps/stream/app/servers.yml, relaunched the server, and tried to re-deploy my stream.
spring:
  cloud:
    dataflow:
      yarn:
        version: 0.0.1-SNAPSHOT
    deployer:
      yarn:
        version: 1.0.2.RELEASE
    stream:
      kafka:
        binder:
          brokers: kafka.my-domain.com:9092
          zkNodes: zookeeper.my-domain.com:2181/node
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://mapr.my-domain.com/referentiel/ca_category_2014/
    resourceManagerHost: mapr.my-domain.com
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: mapr.my-domain.com:8030
  session:
    store-type: none
I still get the exact same message.
PS: I'm not using Ambari; I'd like to understand how it works manually first.
EDIT 2:
I solved the problem by adding the -Dspring.config.location JVM argument on the DataFlow Server. The given configuration is passed to the deployer, and the application is deployed successfully.
I'll write an answer for it.

You didn't say whether your installation was based on Ambari or a normal manual YARN install, so I assume it was the latter (manual).
I think the problem is that config/servers.yml in the distribution you use has a wrong setting for resourceManagerHost, as it defaults to localhost. This file is distributed into HDFS only once, when streams are first launched. If you change it afterwards, the copy in the HDFS app directory will not get updated when you redeploy/create the stream. By default this file in HDFS is /dataflow/apps/stream/app/servers.yml.
This error makes sense because the Dataflow YARN server controlling the whole thing also needs access to the YARN resource manager to submit apps, and the settings for the server come from the same servers.yml file.
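If the copy in HDFS is stale, one quick way to check it and force a re-distribution could be something like the following (using the default path mentioned above; deleting the file so it gets pushed again is my assumption, not a documented procedure):
hdfs dfs -cat /dataflow/apps/stream/app/servers.yml   # check which resourceManagerHost was actually distributed
hdfs dfs -rm /dataflow/apps/stream/app/servers.yml    # remove the stale copy so it is distributed again on the next deployment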

It turns out I needed to add the -Dspring.config.location JVM arg to make it work. -Dspring.config.location should point to the file containing the YARN configuration, i.e.:
spring:
  cloud:
    dataflow:
      yarn:
        version: 0.0.1-SNAPSHOT
    deployer:
      yarn:
        version: 1.0.2.RELEASE
    stream:
      kafka:
        binder:
          brokers: kafka.my-domain.com:9092
          zkNodes: zookeeper.my-domain.com:2181/node
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://mapr.my-domain.com/referentiel/ca_category_2014/
    resourceManagerHost: mapr.my-domain.com
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: mapr.my-domain.com:8030
  session:
    store-type: none
This configuration is then passed to the deployer app (appdeployerappmaster, if I understand correctly).
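For completeness, a minimal sketch of what launching the server with that argument might look like (the jar name and config path below are assumptions, not taken from the original setup):
java -Dspring.config.location=/path/to/servers.yml -jar spring-cloud-dataflow-server-yarn.jar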

Related

Spring application unable to access kafka running in kubernetes minikube

I used bitnami/kafka to deploy Kafka on minikube. A describe of the pod kafka-0 says that the server address is:
KAFKA_CFG_ADVERTISED_LISTENERS:INTERNAL://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9092
My kafka address is set like so in Spring config properties:
spring.kafka.bootstrap-servers=["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
But when I try to send a message I get the following error:
Failed to construct kafka producer] with root cause:
org.apache.kafka.common.config.ConfigException:
Invalid url in bootstrap.servers: ["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
Note that this works when I run kafka locally and set the bootstrap-servers address to localhost:9092
How do I fix this error? What is the correct Kafka URL to use, and where do I find it? Thanks.
The minikube network is different from the host network; you need a bridge.
The advertised listener is in the minikube realm and not reachable from the host.
You could set up a service and an ingress in minikube pointing to your Kafka, then point your hosts file at the IP address of the ingress with the advertised hostname.
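For example, the hosts file entry could look something like this (the IP address is a placeholder; use the address of your ingress or the output of minikube ip):
# /etc/hosts -- map the advertised hostname to the ingress IP
192.168.49.2   kafka-0.kafka-headless.default.svc.cluster.local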
spring.kafka.bootstrap-servers needs valid server hostnames along with port numbers, comma-separated:
hostname-1:port,hostname-2:port
["kafka-0.kafka-headless.default.svc.cluster.local:9092"] does not look like one!

Integration between ELK and LDAP

I recently got to manage an open-source-based infrastructure composed of multiple Debian servers. On some of them, the ELK stack is installed.
I am verifying the presence of any integration between ELK and LDAP or other IAMs. On the dedicated monitoring node, I looked for IAM-related info in the following configuration files:
/etc/elasticsearch/elasticsearch.yml
/etc/kibana/kibana.yml
/etc/logstash/logstash.yml
but the only login/account credentials I have been able to find are in the kibana.yml file:
elasticsearch.username: "username"
elasticsearch.password: "password"
In /etc/kibana/kibana.yml and /etc/elasticsearch/elasticsearch.yml I find the following:
xpack.security.enabled: false
which leads me to think an "xpack" plugin is somehow related to LDAP. Where should I look for the LDAP integration?
Thanks to #Wonka for suggesting the presence of ReadOnlyRest. I found a readonlyrest.yml in /etc/elasticsearch. There, the following was present:
ldaps:
  - name: ldap1
    host: "ourldapserver.ourdomain"
    [...]
This is where the LDAP integration occurred.
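For comparison, had X-Pack security been the integration point instead of ReadonlyREST, LDAP would show up in elasticsearch.yml as an authentication realm; a rough sketch using 6.x-style keys (all values are placeholders, and the exact layout varies by Elasticsearch version):
xpack:
  security:
    authc:
      realms:
        ldap1:
          type: ldap
          order: 0
          url: "ldaps://ourldapserver.ourdomain:636"
          user_search:
            base_dn: "dc=ourdomain"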

How to run multiple instances of a Spring Boot application when getting resources from a config server?

I have a Eureka server, Eureka Zuul, and a config server.
I wrote a simple microservice, which runs on port 8686.
Now I want to run that microservice on another port.
I tried this command, but it doesn't work:
java -Dserver.port=8687 -jar -Dlogging.file="bla.log" testMicro.jar --debug > "bla.log"&
I am confused. Help me!
You have two ways to run your instances on different ports.
Use assignment of a random port from a specified range:
server:
  port: ${random.int(8080,8090)}
Or set the following configuration for the testMicro microservice in the property file served by the config server:
spring:
  cloud:
    config:
      override-system-properties: false
      allow-override: true
      override-none: true
and then run your jar again with the -Dserver.port=8687 property.
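For example, with the jar name from the question (the second port is just another free port, chosen arbitrarily), two instances could be started like this:
java -Dserver.port=8687 -jar testMicro.jar
java -Dserver.port=8688 -jar testMicro.jar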

Unable to load consul config

I'm trying to build a sample microservice app using this tutorial (JHipster v5.2.1). I've created a gateway and an "armory" microservice, and started Consul using this command:
docker-compose -f src/main/docker/consul.yml up
Then, from the armory folder, I ran this command:
./gradlew
I got this error:
2018-09-03 13:20:11.235 WARN 7224 --- [ restartedMain] o.s.c.c.c.ConsulPropertySourceLocator : Unable to load consul config from config/armory-swagger/
com.ecwid.consul.transport.TransportException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8600 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect
Could you please help me?
If you are using Docker Toolbox, you have to replace localhost with the IP of your Docker machine VM. You will have to adjust the bootstrap.yml properties to point to this address.
You should also be able to apply this trick : https://www.jhipster.tech/tips/020_tip_using_docker_containers_as_localhost_on_mac_and_windows.html
I just changed fail-fast to false in bootstrap-prod.yml
Setting fail-fast to false keeps the application from failing at startup when the Consul config cannot be loaded:
fail-fast: false
Otherwise you have to provide proper configuration as stated above.
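In a JHipster-generated project this setting normally lives under spring.cloud.consul.config in bootstrap-prod.yml; a rough sketch of the relevant block (surrounding keys may differ between JHipster versions):
spring:
  cloud:
    consul:
      config:
        fail-fast: false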
Or you can run this command if you have already installed consul on your development machine.
consul agent -dev

Spring XD 1.2.1 Container Jolokia access

What is the default HTTP port for Jolokia access in a Spring XD 1.2.1 container? We have had Jolokia working before on previous XD versions but something seems to have changed.
We have changed the XD_JMX_ENABLED value in servers.yml, i.e.:
XD_JMX_ENABLED: true
endpoints:
  jolokia:
    enabled: ${XD_JMX_ENABLED:false}
  jmx:
    enabled: ${XD_JMX_ENABLED:false}
    uniqueNames: true
With this we can access JMX directly by setting -Dcom.sun.management.jmxremote.port etc. I assumed the HTTP port would be 9393, like the Spring XD Admin, but this doesn't seem to be the case.
Is there a default, or do we have to uncomment and set
#spring:
#  profiles: container
#management:
#  port: 0
Also, the documentation seems wrong here: it mentions port 9080, but the URL beneath it uses 9393.
You need to set the CONTROL_MGMT_PORT environment variable too.
For *nix-based setups this goes in /etc/sysconfig/springxd.
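For instance, the entry could look like this (the port value is only an example; pick whatever management port you want the containers to use):
# /etc/sysconfig/springxd
CONTROL_MGMT_PORT=9080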
I'm using it with version 1.2.1.RELEASE without issue.
