Glassfish and mod_jk - Load Balancing and Session Replication - amazon-ec2

I'm configuring an environment using Glassfish and mod_jk to provide load balancing and session replication.
My worker.properties is as follows:
worker.list=i1,i2,loadbalancer
# default properties for workers
worker.template.type=ajp13
worker.template.port=28080
worker.template.lbfactor=1
worker.template.socket_timeout=300
# properties for node1
worker.i1.reference=worker.template
worker.i1.host=10.0.0.93
#worker.worker1.host=node1
# properties for worker2
worker.i2.reference=worker.template
worker.i2.host=10.0.0.38
#worker.worker2.host=node2
# properties for loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=i1,i2
worker.loadbalancer.sticky_session=true
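For reference, the Apache side of this setup is not shown in the question; with standard mod_jk directives it would look roughly like this (the paths and the app name are assumptions):
JkWorkersFile /etc/apache2/workers.properties
JkLogFile /var/log/apache2/mod_jk.log
JkMount /app_name/* loadbalancer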
The steps I've done are:
Created two nodes, n1 and n2, managed centrally (via SSH) from my server:
create-node-ssh --sshuser ubuntu --sshkeyfile /home/ubuntu/acme-auction.pem --nodehost 10.0.0.93 --installdir /home/ubuntu/glassfish3 n1
create-node-ssh --sshuser ubuntu --sshkeyfile /home/ubuntu/acme-auction.pem --nodehost 10.0.0.38 --installdir /home/ubuntu/glassfish3 n2
Created a cluster c1:
create-cluster --properties 'GMS_DISCOVERY_URI_LIST=generate:GMS_LISTENER_PORT=9090' c1
Created two instances:
create-instance --cluster c1 --node n1 i1
create-instance --cluster c1 --node n2 i2
start-instance i1
start-instance i2
Created an http-listener and a network-listener:
create-http-listener --listenerport 28080 --listeneraddress 0.0.0.0 --defaultvs server jk-connector
create-network-listener --protocol http-listener-1 --listenerport 28080 --jkenabled true --target c1-config jk-connector
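(As a sanity check, the matching list subcommand should show the new listener on the cluster configuration with JK enabled: list-network-listeners c1-config.)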
Then I created the jvmRoute JVM option:
create-jvm-options --target c1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"
...and the system properties that jvmRoute resolves to:
create-system-properties --target i1 AJP_INSTANCE_NAME=i1
create-system-properties --target i2 AJP_INSTANCE_NAME=i2
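(To verify that each instance picked up its own value, the list-system-properties subcommand can be used: list-system-properties i1 and list-system-properties i2 should each report AJP_INSTANCE_NAME set to the instance's own name.)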
I expected to be able to use my application by visiting server_ip/app_name.
If I look at the cookies I can see:
a JSESSIONIDVERSION, format: value:number_of_operation
a JSESSIONID, format: value.i1
a JREPLICA, format: i2
(or the same with i2 and i1 exchanged).
This led me to suppose the replication is set up correctly, but when I stop i1, what I get is a blank page and no change in the cookies. (I suppose the JSESSIONID suffix should change from ".i1" to ".i2" so the request gets routed to i2; am I wrong?)
Thanks,
Andrea

Actually, it was a serialization problem that made it impossible to serialize the session and, as a consequence, to fail over to the other instance.
To make the session serializable, I just had to:
make all objects managed in the session implement Serializable
for those that cannot be serialized (e.g. EntityManager, transactions...), add the "transient" modifier when declaring the property (a minimal sketch follows)
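For illustration, assuming a session-scoped bean (the class and field names here are invented):
import java.io.Serializable;
import javax.persistence.EntityManager;

public class CartBean implements Serializable {
    private static final long serialVersionUID = 1L;
    // plain data fields replicate fine
    private String customerName;
    // non-serializable resources are marked transient and skipped during
    // replication; they must be re-acquired after failover
    private transient EntityManager em;
}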
Hope it helps,
Andrea

Related

RedisCommandTimeoutException while connecting a Micronaut lambda to ElastiCache

I am trying to create a Lambda using Micronaut 2 connecting to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration, and in-transit encryption is enabled in the ElastiCache config:
redis:
uri: redis://{aws master node endpoint}
password: {password}
tls: true
ssl: true
io-thread-pool-size: 5
computation-thread-pool-size: 4
I am getting the exception below:
Command timed out after 1 minute(s): io.lettuce.core.RedisCommandTimeoutException
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
at com.sun.proxy.$Proxy22.set(Unknown Source)
at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried Spring Cloud Function on the same network (literally the same Lambda) with the same ElastiCache setup, and it works fine.
Any direction that can help me debug this issue, please?
This might be late.
The first thing to mention here is that an ElastiCache cluster can only be accessed from within its VPC. If you want to access it from the internet, it needs a NAT gateway enabled.
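If you want to rule out the client configuration, a quick connectivity check with plain Lettuce can help; this is only a sketch, assuming the endpoint, password and TLS settings from the question, with a short timeout so it fails fast:
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import java.time.Duration;

public class RedisCheck {
    public static void main(String[] args) {
        RedisURI uri = RedisURI.builder()
                .withHost("aws-master-node-endpoint") // placeholder: your ElastiCache primary endpoint
                .withPort(6379)
                .withSsl(true)                        // in-transit encryption is enabled on the cluster
                .withPassword("password".toCharArray())
                .withTimeout(Duration.ofSeconds(5))   // fail fast instead of waiting a full minute
                .build();
        RedisClient client = RedisClient.create(uri);
        try (StatefulRedisConnection<String, String> conn = client.connect()) {
            System.out.println(conn.sync().ping()); // prints PONG when the endpoint is reachable
        } finally {
            client.shutdown();
        }
    }
}
If even this times out when run from inside the Lambda's VPC and subnets, the problem is networking (security groups, subnet routing), not Micronaut.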

HashiCorp Consul is not publishing all the metrics

Consul isn't publishing all the metrics defined in its documentation. According to https://www.consul.io/docs/agent/telemetry.html#transaction-timing, there should be transaction timing metrics, but it shows only the raft metrics, not txn or kvs. Has anyone observed this problem?
Command to enable Prometheus-style metrics:
consul agent -dev -hcl 'telemetry{prometheus_retention_time="24h" disable_hostname=true}'
Watch the metrics:
watch -n 1 -d "curl -s localhost:8500/v1/agent/metrics?format=prometheus|grep -v ^# | grep -E 'kvs|txn|raft'"
Metrics will be exported only if they are available, i.e. if there have been no transactions or KV store operations, you will not see those metrics in the output.
I have managed to see kvs metrics with the example you provided. While running the Consul agent via the command in the question, open http://127.0.0.1:8500/ in a browser and click the Key/Value option in the top list (you should end up at http://127.0.0.1:8500/ui/dc1/kv). Click Create to add a new key/value pair. After clicking Save you should see something like this in the terminal running the watch command:
consul_fsm_kvs{op="set",quantile="0.5"} 0.3572689890861511
consul_fsm_kvs{op="set",quantile="0.9"} 0.3572689890861511
consul_fsm_kvs{op="set",quantile="0.99"} 0.3572689890861511
consul_fsm_kvs_sum{op="set"} 0.3572689890861511
consul_fsm_kvs_count{op="set"} 1
consul_kvs_apply{quantile="0.5"} 2.6777150630950928
consul_kvs_apply{quantile="0.9"} 2.6777150630950928
consul_kvs_apply{quantile="0.99"} 2.6777150630950928
consul_kvs_apply_sum 2.6777150630950928
consul_kvs_apply_count 1
If there are no more transactions, some of these values will be set to NaN, depending on the Prometheus metric type.
Similarly, to see txn metrics, you need to create a Consul transaction; an example follows.
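Both kinds of writes can also be produced straight from the command line against the same local dev agent (the key names are arbitrary, and Value in the txn payload must be base64-encoded):
consul kv put sample/key sample-value
curl -X PUT localhost:8500/v1/txn -d '[{"KV": {"Verb": "set", "Key": "sample/txn-key", "Value": "c2FtcGxl"}}]'
The kv put feeds the consul_fsm_kvs and consul_kvs_apply samples shown above; the txn call should make the txn timing metrics appear in the watch output.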
Hope that helps you set up monitoring.

Spark app unable to write to elasticsearch cluster running in docker

I have an Elasticsearch Docker image listening on 127.0.0.1:9200. I tested it using Sense and Kibana, and it works fine; I am able to index and query documents. But when I try to write to it from a Spark app:
val sparkConf = new SparkConf().setAppName("ES").setMaster("local")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes", "127.0.0.1")
sparkConf.set("es.port", "9200")
sparkConf.set("es.resource", "spark/docs")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
val rdd = sc.parallelize(Seq(numbers, airports))
rdd.saveToEs("spark/docs")
It fails to connect and keeps on retrying:
16/07/11 17:20:07 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Operation timed out
16/07/11 17:20:07 INFO HttpMethodDirector: Retrying request
I tried using the IP address given by docker inspect for the Elasticsearch container, but that does not work either. However, when I use a native installation of Elasticsearch, the Spark app runs fine. Any ideas?
Also, if you are having issues writing to ES, set the config es.nodes.wan.only to true, as mentioned in this answer.
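With the SparkConf style already used in the question, that is one extra line:
sparkConf.set("es.nodes.wan.only", "true")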
A couple of things I would check:
The Elasticsearch-Hadoop Spark connector version you are working with. Make sure it is not a beta; there was a (since fixed) bug related to IP resolving.
Since 9200 is the default port, you may remove the line sparkConf.set("es.port", "9200") and check.
Check that there is no proxy configured in your Spark environment or config files.
I assume you run Elasticsearch and Spark on the same machine. Can you try configuring your machine's IP address instead of 127.0.0.1?
Hope this helps! :)
I had the same problem, and a further issue was that confs set using sparkConf.set() didn't have any effect. But supplying the confs with the saving function worked, like this:
rdd.saveToEs("spark/docs", Map("es.nodes" -> "127.0.0.1", "es.nodes.wan.only" -> "true"))

What is the right configuration of Titan DB 1.0 running against ES deployed on Google/AWS cloud?

I'm using Titan 1.0 with ES 1.5.1 running internally as a service (127.0.0.1), and it is working pretty well.
My working ES configuration is:
storage.backend=cassandra
storage.hostname=cassandraserver2-cassandra-00
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
query.fast-property=true
index.search.backend=elasticsearch
index.search.hostname=localhost
index.search.elasticsearch.interface=NODE
Now I want to redeploy ES into the cloud, but unfortunately Titan won't start.
The exception I get is:
gremlin> tg = TitanFactory.open('../conf/titan-db.properties')
Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
Display stack trace? [yN] y
java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)
at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:460)
at com.t...
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
What is the right configuration of Titan properties to run against an Elasticsearch service on Google/AWS cloud?
Suppose the external IP of the VM is 8.35.193.69 and I can reach this machine with ping.
I'm using these titan-db properties:
storage.backend=cassandra
storage.hostname=cassandraserver2-cassandra-00
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
query.fast-property=true
index.search.backend=elasticsearch
index.search.hostname=8.35.193.69
index.search.client-only=true
index.search.local-mode=false
index.search.elasticsearch.interface=NODE
Any solutions are welcome.
You need to make sure port 9300 is open on your instance. If it's not open, you need to:
Ensure the ES service is up: sudo service elasticsearch status
Ensure port 9300 is open and accepting requests. Check how here
If the port is closed, enable TCP transport communication (check here); a sketch of the relevant elasticsearch.yml lines follows this list
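For ES 1.x that usually means something like this in elasticsearch.yml (example values; note that binding to 0.0.0.0 exposes the node publicly, so restrict access with firewall rules):
network.host: 0.0.0.0
transport.tcp.port: 9300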
Change your configuration to look like this:
# elasticsearch config
index.search.backend=elasticsearch
index.search.elasticsearch.interface=TRANSPORT_CLIENT
index.search.hostname=your_ip:9300
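A quick way to verify from the Titan machine that the transport port is actually reachable (plain netcat, with the example IP from the question):
nc -zv 8.35.193.69 9300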

Running two SonarQube instances in Windows Server 2008 R2

I've already run one SonarQube instance at port 9000 and am able to access it at localhost:9000.
Now I would like to run another SonarQube instance for my new project at port 10000. I've changed these settings in the sonar.properties file:
sonar.web.port: 10000
sonar.web.context: /
However, when I run C:\SonarMAP\bin\windows-x86-64\StartSonar.bat, I get this ERROR message:
wrapper | ERROR: Another instance of the SonarQube application is already running.
Press any key to continue . . .
I've done some research on this but can't find any helpful information.
Any suggestions? Thanks!
UPDATE
The instance 1 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url=jdbc:postgresql://server15/sonarQube
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
The instance 2 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url: jdbc:postgresql://localhost/sonarMAP
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive: 20
sonar.jdbc.maxIdle: 5
sonar.jdbc.minIdle: 2
sonar.jdbc.maxWait: 5000
sonar.jdbc.minEvictableIdleTimeMillis: 600000
sonar.jdbc.timeBetweenEvictionRunsMillis: 30000
sonar.web.port: 9100
sonar.web.context: /
sonar.search.port=9101
sonar.notifications.delay: 60
Apparently you can't run multiple instances on Windows because of wrapper.single_invocation=true in conf/wrapper.conf.
Setting it to false seems to allow this (you'll still have to use different ports, as Fabrice explained in his answer), but this is getting into a grey zone: a non-recommended and non-tested setup.
You also need to change other settings inside the conf/sonar.properties file, namely:
sonar.search.port: the port used by Elasticsearch
sonar.search.httpPort: if you enabled it on the first instance, you've got to change it as well
and obviously you can't connect to the same schema of the same DB
A combined sketch of these changes is shown below.
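Putting the two answers together, the second instance's files might look like this; the port numbers are example values, any free ports will do:
# conf/wrapper.conf
wrapper.single_invocation=false
# conf/sonar.properties
sonar.web.port=10000
sonar.search.port=10001
# point at a different schema/DB than instance 1
sonar.jdbc.url=jdbc:postgresql://localhost/sonarMAP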
