Changing default Consul HTTP port

I need to change the default HTTP port because another application is already using 8500.
This command works:
consul info -http-addr=http://127.0.0.1:18500
I can't figure out which setting in a config file this corresponds to.
Here are my current settings:
datacenter = "test_test"
data_dir = "/opt/consul"
encrypt = "**********"
performance {
raft_multiplier = 1
}
ports {
http = 18500
dns = 18600
server = 18300
}
addresses {
http = "127.0.0.1"
}
retry_join = ["10.60.0.5"]`
Error message when I run the join or info command:
Error querying agent: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused

If you use:
{
  "ports": {
    "http": 18500
  }
}
Then Consul will, by default, bind the HTTP server to localhost only:
==> Log data will now stream in as it occurs:
2019/02/19 17:28:23 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:4887467c-c84b-15b4-66f7-ad3f822631e0 Address:172.17.0.2:8300}]
2019/02/19 17:28:23 [INFO] raft: Node at 172.17.0.2:8300 [Follower] entering Follower state (Leader: "")
2019/02/19 17:28:23 [INFO] serf: EventMemberJoin: b884fe85d115.dc1 172.17.0.2
2019/02/19 17:28:23 [INFO] serf: EventMemberJoin: b884fe85d115 172.17.0.2
2019/02/19 17:28:24 [INFO] consul: Adding LAN server b884fe85d115 (Addr: tcp/172.17.0.2:8300) (DC: dc1)
2019/02/19 17:28:24 [INFO] consul: Handled member-join event for server "b884fe85d115.dc1" in area "wan"
2019/02/19 17:28:24 [WARN] agent/proxy: running as root, will not start managed proxies
2019/02/19 17:28:24 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2019/02/19 17:28:24 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2019/02/19 17:28:24 [INFO] agent: Started HTTP server on 127.0.0.1:18500 (tcp)
Obviously, no one on other nodes can connect to your bootstrap server.
You should configure both the address and the port:
{
  "addresses": {
    "http": "0.0.0.0"
  },
  "ports": {
    "http": 18500
  }
}
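For the HCL config-file format used in the question, the equivalent settings should be (a sketch, untested):
addresses {
  http = "0.0.0.0"
}
ports {
  http = 18500
}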
Now you can see it binding to any IP (0.0.0.0/0):
==> Log data will now stream in as it occurs:
2019/02/19 17:35:11 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:ef42f35f-7505-d1fc-3f91-16f144d91fc6 Address:172.17.0.2:8300}]
2019/02/19 17:35:11 [INFO] raft: Node at 172.17.0.2:8300 [Follower] entering Follower state (Leader: "")
2019/02/19 17:35:11 [INFO] serf: EventMemberJoin: ac34230483e0.dc1 172.17.0.2
2019/02/19 17:35:11 [INFO] serf: EventMemberJoin: ac34230483e0 172.17.0.2
2019/02/19 17:35:11 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2019/02/19 17:35:11 [WARN] agent/proxy: running as root, will not start managed proxies
2019/02/19 17:35:11 [INFO] consul: Adding LAN server ac34230483e0 (Addr: tcp/172.17.0.2:8300) (DC: dc1)
2019/02/19 17:35:11 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2019/02/19 17:35:11 [INFO] consul: Handled member-join event for server "ac34230483e0.dc1" in area "wan"
2019/02/19 17:35:11 [INFO] agent: Started HTTP server on [::]:18500 (tcp)
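One more note: the consul CLI itself still defaults to port 8500, which is why the join and info commands in the question failed with connection refused even though the agent was listening on 18500. Besides passing -http-addr on every command, you can set the standard CONSUL_HTTP_ADDR environment variable once, for example:
# Point all consul CLI commands at the non-default HTTP port
export CONSUL_HTTP_ADDR=http://127.0.0.1:18500
consul info
consul join 10.60.0.5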

Related

Is it to be expected that the client will discover each node twice for the Hazelcast sidecar caching pattern?

I'm pretty new to using Hazelcast, drawn by its interesting feature of auto-syncing with other cache instances. My questions are at the bottom of the description.
Here was my initial goal:
Design an environment following the Hazelcast sidecar caching pattern.
There will be no cache on the application container side. Basically, I don't want to use a near-cache, to avoid making my JVM heavy and to reduce GC time.
The application container in each node will communicate with its own sidecar cache container via the localhost IP.
The Hazelcast Management Center will be a separate node that communicates with all the nodes containing a Hazelcast sidecar cache container.
Here is the target design (diagram not included):
I prepared the Hazelcast configuration (hazelcast.yaml) for the Hazelcast container:
hazelcast:
  cluster-name: dev
  network:
    port:
      auto-increment: false
      port-count: 3
      port: 5701
I also prepared another hazelcast.yaml for my application container:
hazelcast:
  map:
    default:
      backup-count: 0
      async-backup-count: 1
      read-backup-data: true
  network:
    reuse-address: true
    port:
      auto-increment: true
      port: 5701
    join:
      multicast:
        enabled: true
      kubernetes:
        enabled: false
      tcp-ip:
        enabled: false
        interface: 127.0.0.1
        member-list:
          - 127.0.0.1:5701
Here is the client part; I used Spring Boot for it.
@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private HazelcastInstance client;

    CacheClient() throws IOException {
        // Build the client config from hazelcast.yaml and connect to the sidecar.
        ClientConfig config = new YamlClientConfigBuilder("hazelcast.yaml").build();
        config.setInstanceName(UUID.randomUUID().toString());
        client = HazelcastClient.getOrCreateHazelcastClient(config);
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
Here is the Dockerfile I used to build my application container image:
FROM adoptopenjdk/openjdk11:jdk-11.0.5_10-alpine-slim
# Expose port 8081 to Docker host
EXPOSE 8081
WORKDIR /opt
COPY /build/libs/hazelcast-client-0.0.1-SNAPSHOT.jar /opt/app.jar
COPY /src/main/resources/hazelcast.yaml /opt/hazelcast.yaml
COPY /src/main/resources/application.properties /opt/application.properties
ENTRYPOINT ["java","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.initial.min.cluster.size=1","-Dhazelcast.socket.bind.any=false","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.socket.client.bind=false","-Dhazelcast.socket.client.bind.any=false","-Dhazelcast.logging.type=slf4j","-jar","app.jar"]
Here is the deployment script I used:
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: spring-hazelcast-service
spec:
  selector:
    app: spring-hazelcast-app
  ports:
    - protocol: "TCP"
      name: http-app
      port: 8081 # The port that the service is running on in the cluster
      targetPort: 8081 # The port exposed by the service
  type: LoadBalancer # type of the service. LoadBalancer indicates that our service will be external.
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: spring-hazelcast-app
spec:
  selector:
    matchLabels:
      app: spring-hazelcast-app
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: spring-hazelcast-app
    spec:
      containers:
        - name: hazelcast
          image: hazelcast/hazelcast:4.0.2
          workingDir: /opt
          ports:
            - name: hazelcast
              containerPort: 5701
          env:
            - name: HZ_CLUSTERNAME
              value: dev
            - name: JAVA_OPTS
              value: -Dhazelcast.config=/opt/config/hazelcast.yml
          volumeMounts:
            - mountPath: "/opt/config/"
              name: allconf
        - name: spring-hazelcast-app
          image: spring-hazelcast:1.0.3
          imagePullPolicy: Never #IfNotPresent
          ports:
            - containerPort: 8081 # The port that the container is running on in the cluster
      volumes:
        - name: allconf
          hostPath:
            path: /opt/config/ # directory location on host
            type: Directory # this field is optional
---
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: hazelcast-mc-service
spec:
  selector:
    app: hazelcast-mc
  ports:
    - protocol: "TCP"
      name: mc-app
      port: 8080 # The port that the service is running on in the cluster
      targetPort: 8080 # The port exposed by the service
  type: LoadBalancer # type of the service
  loadBalancerIP: "127.0.0.1"
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: hazelcast-mc
spec:
  selector:
    matchLabels:
      app: hazelcast-mc
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: hazelcast-mc
    spec:
      containers:
        - name: hazelcast-mc
          image: hazelcast/management-center
          ports:
            - containerPort: 8080 # The port that the container is running on in the cluster
Here are my application logs:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.4)
2021-09-27 06:42:51.274 INFO 1 --- [ main] com.caching.Application : Starting Application using Java 11.0.5 on spring-hazelcast-app-7bdc8b7f7-bqdlt with PID 1 (/opt/app.jar started by root in /opt)
2021-09-27 06:42:51.278 INFO 1 --- [ main] com.caching.Application : No active profile set, falling back to default profiles: default
2021-09-27 06:42:55.986 INFO 1 --- [ main] c.h.c.impl.spi.ClientInvocationService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Running with 2 response threads, dynamic=true
2021-09-27 06:42:56.199 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTING
2021-09-27 06:42:56.202 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (jar:file:/opt/app.jar!/BOOT-INF/lib/hazelcast-all-4.0.2.jar!/) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-09-27 06:42:56.277 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to cluster: dev
2021-09-27 06:42:56.302 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to [127.0.0.1]:5701
2021-09-27 06:42:56.429 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is CLIENT_CONNECTED
2021-09-27 06:42:56.429 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5701:c967f642-a7aa-4deb-a530-b56fb8f68c78, server version: 4.0.2, local address: /127.0.0.1:54373
2021-09-27 06:42:56.436 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:56.461 INFO 1 --- [21ad30a.event-4] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [1] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
}
2021-09-27 06:42:56.803 INFO 1 --- [ main] c.h.c.i.s.ClientStatisticsService : Client statistics is enabled with period 5 seconds.
2021-09-27 06:42:57.878 INFO 1 --- [ main] c.h.i.config.AbstractConfigLocator : Loading 'hazelcast.yaml' from the working directory.
2021-09-27 06:42:57.934 WARN 1 --- [ main] c.h.i.impl.HazelcastInstanceFactory : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2021-09-27 06:42:57.976 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2021-09-27 06:42:57.987 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Picked [172.17.0.3]:5702, using socket ServerSocket[addr=/172.17.0.3,localport=5702], bind any local is false
2021-09-27 06:42:58.004 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Hazelcast 4.0.2 (20200702 - 2de3027) starting at [172.17.0.3]:5702
2021-09-27 06:42:58.005 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2021-09-27 06:42:58.047 INFO 1 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [172.17.0.3]:5702 [dev] [4.0.2] Backpressure is disabled
2021-09-27 06:42:58.373 INFO 1 --- [ main] com.hazelcast.instance.impl.Node : [172.17.0.3]:5702 [dev] [4.0.2] Creating MulticastJoiner
2021-09-27 06:42:58.380 WARN 1 --- [ main] com.hazelcast.cp.CPSubsystem : [172.17.0.3]:5702 [dev] [4.0.2] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2021-09-27 06:42:58.676 INFO 1 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.3]:5702 [dev] [4.0.2] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2021-09-27 06:42:58.682 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : [172.17.0.3]:5702 [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:58.687 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTING
2021-09-27 06:42:58.923 INFO 1 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [172.17.0.3]:5702 [dev] [4.0.2] Trying to join to discovered node: [172.17.0.3]:5701
2021-09-27 06:42:58.932 INFO 1 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [172.17.0.3]:5702 [dev] [4.0.2] Connecting to /172.17.0.3:5701, timeout: 10000, bind-any: false
2021-09-27 06:42:58.955 INFO 1 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [172.17.0.3]:5702 [dev] [4.0.2] Initialized new cluster connection between /172.17.0.3:40242 and /172.17.0.3:5701
2021-09-27 06:43:04.948 INFO 1 --- [21ad30a.event-3] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [2] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3
}
2021-09-27 06:43:04.959 WARN 1 --- [ration.thread-0] c.h.c.i.operation.OnJoinCacheOperation : [172.17.0.3]:5702 [dev] [4.0.2] This member is joining a cluster whose members support JCache, however the cache-api artifact is missing from this member's classpath. In case JCache API will be used, add cache-api artifact in this member's classpath and restart the member.
2021-09-27 06:43:04.963 INFO 1 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [172.17.0.3]:5702 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3 this
]
2021-09-27 06:43:05.466 INFO 1 --- [ration.thread-1] c.h.c.i.p.t.AuthenticationMessageTask : [172.17.0.3]:5702 [dev] [4.0.2] Received auth from Connection[id=2, /172.17.0.3:5702->/172.17.0.3:40773, qualifier=null, endpoint=[172.17.0.3]:40773, alive=true, connectionType=JVM], successfully authenticated, clientUuid: 8843f057-c856-4739-80ae-4bc930559bd5, client version: 4.0.2
2021-09-27 06:43:05.468 INFO 1 --- [d30a.internal-3] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5702:08dfe633-46b2-4581-94c7-81b6d0bc3ce3, server version: 4.0.2, local address: /172.17.0.3:40773
2021-09-27 06:43:05.968 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTED
2021-09-27 06:43:06.237 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8081
2021-09-27 06:43:06.251 INFO 1 --- [ main] com.caching.Application : Started Application in 17.32 seconds (JVM running for 21.02)
Here is the Hazelcast Management Center member list (screenshot not included).
Finally, my questions are:
Why am I seeing 2 members when there is only one sidecar cache container deployed?
What modifications will be required to reach my initial goal?
According to the Spring Boot documentation for the Hazelcast feature:
If a client can't be created, Spring Boot attempts to configure an embedded server.
Because your application container ships a member-style hazelcast.yaml, Spring Boot starts an embedded server from it and joins the Hazelcast container using multicast. That embedded member is the second entry you see.
You should replace your hazelcast.yaml in the Spring Boot app container with hazelcast-client.yaml with the following content:
hazelcast-client:
  cluster-name: "dev"
  network:
    cluster-members:
      - "127.0.0.1:5701"
After doing that, Spring Boot will auto-configure a client HazelcastInstance bean, and you will be able to change your cache client like this:
@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private final HazelcastInstance client;

    // Spring Boot injects the auto-configured client HazelcastInstance.
    public CacheClient(HazelcastInstance client) {
        this.client = client;
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
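If you want a quick way to verify the change, you can expose the cache through a small REST endpoint and then confirm in Management Center that the cluster now lists a single member. This controller is a hypothetical addition, not part of the original post; it assumes the Item class and the CacheClient bean shown above:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ItemController {

    private final CacheClient cache;

    ItemController(CacheClient cache) {
        this.cache = cache;
    }

    // Store an item in the sidecar cache (no-op if the key already exists).
    @PostMapping("/items/{key}")
    public Item put(@PathVariable String key, @RequestBody Item item) {
        return cache.put(key, item);
    }

    // Read an item back from the sidecar cache.
    @GetMapping("/items/{key}")
    public Item get(@PathVariable String key) {
        return cache.get(key);
    }
}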

Consul proxy failed to dial: dial tcp 127.0.0.1:0: connect: connection refused

I'm trying to run a Consul Connect proxy, but it displays unexpected errors.
This is my configuration:
{
  "service": {
    "name": "api",
    "check": {
      "name": "HTTP 80",
      "http": "http://localhost:80",
      "interval": "10s",
      "timeout": "10s"
    },
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [{
            "destination_name": "elasticsearch",
            "local_bind_port": 9200
          }]
        }
      }
    }
  }
}
Here is the command with its logging:
$ consul connect proxy -sidecar-for elasticsearch
==> Consul Connect proxy starting...
Configuration mode: Agent API
Sidecar for ID: elasticsearch
Proxy ID: elasticsearch-sidecar-proxy
==> Log data will now stream in as it occurs:
2019/06/03 08:00:54 [INFO] Proxy loaded config and ready to serve
2019/06/03 08:00:54 [INFO] TLS Identity: spiffe://fadce594-37c1-8586-1b57-c6245436684c.consul/ns/default/dc/dc1/svc/elasticsearch
2019/06/03 08:00:54 [INFO] TLS Roots : [Consul CA 8]
2019/06/03 08:00:54 [INFO] public listener starting on 0.0.0.0:21000
2019/06/03 08:01:02 [ERR] failed to dial: dial tcp 127.0.0.1:0: connect: connection refused
^C==> Consul Connect proxy shutdown
Any suggestions?
The issue is that the service has no port, so the proxy tried to connect to proxy.local_service_port, which defaults to the parent service's port; with no port registered, that resolved to 0, hence the dial tcp 127.0.0.1:0 error.
Specifying the port for the parent service solves the issue.
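For example, assuming the api service actually listens on port 80 (as its health check suggests), the registration would become:
{
  "service": {
    "name": "api",
    "port": 80,
    "check": {
      "name": "HTTP 80",
      "http": "http://localhost:80",
      "interval": "10s",
      "timeout": "10s"
    },
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [{
            "destination_name": "elasticsearch",
            "local_bind_port": 9200
          }]
        }
      }
    }
  }
}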

How to deal with mutual access between Docker-deployed Spring Cloud applications through Zuul?

I deployed my Spring Cloud application in Docker, including a Eureka server, Zuul, and a Eureka client. I want to access the Eureka client via Zuul.
Zuul and the Eureka client are registered with the Eureka server. When I access each application directly, it works. When I access the Eureka client via Zuul, the Zuul console shows java.net.NoRouteToHostException. I don't know why, or how to deal with this problem.
Eureka server config is like this.
server:
  port: 1020
spring:
  application:
    name: eureka-server
  security:
    basic:
      enabled: true
    user:
      name: admin
      password: admin
eureka:
  client:
    fetch-registry: true
    register-with-eureka: true
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
  instance:
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
  server:
    enable-self-preservation: false
    eviction-interval-timer-in-ms: 5000
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    shutdown:
      enabled: true
Zuul config is like this.
server:
  port: 8088
spring:
  application:
    name: gateway
security:
  oauth2:
management:
  security:
    enabled: false
  endpoints:
    web:
      exposure:
        exclude: refresh,health,info
ribbon:
  ReadTimeout: 20000
  SocketTimeout: 20000
zuul:
  # sensitiveHeaders: "*"
  routes:
    tdcm-linyi:
      path: /371300/**
      serviceId: tdcm
  ratelimit:
    key-prefix: your-prefix
    enabled: true
    behind-proxy: true
    default-policy:
      limit: 100
      quota: 1000
      refresh-interval: 60
      type:
        - user
        - origin
        - url
  host:
    connect-timeout-millis: 20000
    socket-timeout-millis: 20000
#================================eureka setting==============================
eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
    lease-expiration-duration-in-seconds: 10
    lease-renewal-interval-in-seconds: 5
  client:
    serviceUrl:
      defaultZone: http://admin:admin@${EUREKA_HOST:192.168.90.183}:${EUREKA_PORT:1020}/eureka
    fetch-registry: true
    register-with-eureka: true
Eureka client config is like this.
spring:
  application:
    name: tdcm
  banner:
    charset: UTF-8
  http:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
  messages:
    encoding: UTF-8
  mvc:
    throw-exception-if-no-handler-found: true
# Server
server:
  port: 8926
  tomcat:
    uri-encoding: UTF-8
#================================eureka setting==============================
eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
    lease-expiration-duration-in-seconds: 10
    lease-renewal-interval-in-seconds: 5
  client:
    serviceUrl:
      defaultZone: http://admin:admin@${EUREKA_HOST:192.168.90.183}:${EUREKA_PORT:1020}/eureka
    fetch-registry: true
    register-with-eureka: true
My test operations were as follows:
I access Zuul at http://192.168.90.183:8088; it works well.
I access the Eureka client at http://192.168.90.183:8926/getCityCenter; it works well.
When I access the Eureka client via Zuul at http://192.168.90.183:8088/371300/getCityCenter, it doesn't work.
The console shows the following:
03-29 01:55:27.229 INFO [c.n.loadbalancer.DynamicServerListLoadBalancer] - DynamicServerListLoadBalancer for client tdcm initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=tdcm,current list of Servers=[192.168.90.183:8926],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:192.168.90.183:8926; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 00:00:00 UTC 1970; First connection made: Thu Jan 01 00:00:00 UTC 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#3275110f
03-29 01:55:28.201 INFO [com.netflix.config.ChainedDynamicProperty] - Flipping property: tdcm.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
03-29 01:55:28.545 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.547 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.548 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.555 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:28.556 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.549 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.550 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.550 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.551 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.549 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.552 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:37.508 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:37.510 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:39.031 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:39.033 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
It seems Zuul can't find a route to the tdcm Eureka client.
I tried deploying all the applications on the host computer (Eureka server, Zuul, Eureka client) instead of in Docker, with the same config as described here, and it works well. I don't know why it doesn't work when the Eureka client is accessed via Zuul in the Docker deployment.
I use the host computer's IP address for the Spring Cloud applications.
My Docker version is 17.12.1-ce.
My Spring Cloud version is Finchley.SR1.
My Spring Boot version is 2.0.3.RELEASE.
My host computer runs CentOS 7.
How can I deal with this problem?
I found out how to deal with the problem: delete the ip-address value from the Eureka client config.
eureka:
  instance:
    ip-address: 192.168.90.183
The reason is that the Eureka client lives inside Docker's internal network; once the hard-coded host IP is removed, Zuul can reach it through that internal network.
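For clarity, here is a sketch of the corrected Eureka client section, with only the ip-address line removed and everything else unchanged from the question:
eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    prefer-ip-address: true
    # ip-address removed so the client registers its Docker-internal address
    lease-expiration-duration-in-seconds: 10
    lease-renewal-interval-in-seconds: 5
  client:
    serviceUrl:
      defaultZone: http://admin:admin@${EUREKA_HOST:192.168.90.183}:${EUREKA_PORT:1020}/eureka
    fetch-registry: true
    register-with-eureka: true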

Storm Cluster Mode Error

Firstly, I am a beginner in Storm, so I ask your tolerance for my incomplete report of my question. I have completed the project in local mode and it runs smoothly, without any problems.
I tried to run it on my university's cluster, and I see in the log in the cluster's UI that it never starts running because of an error. The same error appears in all the bolts and spouts of my topology. I attach the log with the error from one of the spouts.
I know that my description is inadequate, but if you tell me what else would be useful to know, I will add it to the post.
Thank you
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:host.name=clu18.softnet.tuc.gr
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.version=1.7.0_80
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.vendor=Oracle Corporation
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.class.path=/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/clj-stacktrace-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/oncrpc-1.0.7.jar:/usr/hdp/2.2.4.2-2/storm/lib/chill-java-0.3.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/reflectasm-1.07-shaded.jar:/usr/hdp/2.2.4.2-2/storm/lib/logback-classic-1.0.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-http-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/snakeyaml-1.11.jar:/usr/hdp/2.2.4.2-2/storm/lib/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/storm/lib/slf4j-api-1.6.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.cli-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/joda-time-2.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/java.classpath-0.2.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/objenesis-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-io-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/hadoop-auth-2.4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/compojure-1.1.3.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-jetty-adapter-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-security-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-util-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/clojure-1.5.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/minlog-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/ns-tracker-0.2.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jersey-bundle-1.17.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/clout-1.0.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/disruptor-2.10.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.logging-0.2.3.jar:/usr/hdp/2.2.4.2-2/storm/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-continuation-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-servlets-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-anti-forgery-1.0.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/hiccup-0.3.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-lang-2.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/crypto-equality-1.0.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-server-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/gmetric4j-1.0.7.jar:/usr/hdp/2.2.4.2-2/storm/lib/storm-core-0.9.3.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-core-1.1.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-exec-1.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/logback-core-1.0.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/carbonite-1.4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/math.numeric-tower-0.0.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-fileupload-1.2.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-servlet-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-storm-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.macro-0.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-servlet-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/kryo-2.21.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-logging-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/asm-4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jgrapht-core-0.9.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.namespace-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-configuration-1.10.jar:/usr/hdp/2.2.4.2-2/storm/lib/core.incubator-0.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/crypto-random-1.2.0
.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-client-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/javax.servlet-2.5.0.v201103041518.jar:/usr/hdp/2.2.4.2-2/storm/lib/json-simple-1.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/clj-time-0.4.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-devel-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/conf:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar:/hadoop/storm/supervisor/stormdist/aek-16-1436963685/stormjar.jar:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.library.path=/hadoop/storm/supervisor/stormdist/aek-16-1436963685/resources/Linux-amd64:/hadoop/storm/supervisor/stormdist/aek-16-1436963685/resources:/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.io.tmpdir=/tmp
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:java.compiler=<NA>
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:os.name=Linux
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:os.arch=amd64
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:os.version=3.2.0-70-generic
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:user.name=storm
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:user.home=/home/storm
2015-07-15 15:34:48 o.a.s.z.ZooKeeper [INFO] Client environment:user.dir=/home/storm
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:host.name=clu18.softnet.tuc.gr
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.version=1.7.0_80
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.vendor=Oracle Corporation
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.class.path=/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/clj-stacktrace-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/oncrpc-1.0.7.jar:/usr/hdp/2.2.4.2-2/storm/lib/chill-java-0.3.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/reflectasm-1.07-shaded.jar:/usr/hdp/2.2.4.2-2/storm/lib/logback-classic-1.0.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-http-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/snakeyaml-1.11.jar:/usr/hdp/2.2.4.2-2/storm/lib/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/storm/lib/slf4j-api-1.6.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.cli-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/joda-time-2.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/java.classpath-0.2.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/objenesis-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-io-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/hadoop-auth-2.4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/compojure-1.1.3.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-jetty-adapter-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-security-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-util-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/clojure-1.5.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/minlog-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/ns-tracker-0.2.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jersey-bundle-1.17.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/clout-1.0.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/disruptor-2.10.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.logging-0.2.3.jar:/usr/hdp/2.2.4.2-2/storm/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-continuation-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-servlets-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-anti-forgery-1.0.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/hiccup-0.3.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-lang-2.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/crypto-equality-1.0.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-server-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/gmetric4j-1.0.7.jar:/usr/hdp/2.2.4.2-2/storm/lib/storm-core-0.9.3.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-core-1.1.5.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-exec-1.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/logback-core-1.0.6.jar:/usr/hdp/2.2.4.2-2/storm/lib/carbonite-1.4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/math.numeric-tower-0.0.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-fileupload-1.2.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-servlet-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-storm-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.macro-0.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-servlet-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/kryo-2.21.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-logging-1.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/asm-4.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/storm/lib/jgrapht-core-0.9.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/tools.namespace-0.2.4.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-configuration-1.10.jar:/usr/hdp/2.2.4.2-2/storm/lib/core.incubator-0.1.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/crypto-rand
om-1.2.0.jar:/usr/hdp/2.2.4.2-2/storm/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/jetty-client-7.6.13.v20130916.jar:/usr/hdp/2.2.4.2-2/storm/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/storm/lib/javax.servlet-2.5.0.v201103041518.jar:/usr/hdp/2.2.4.2-2/storm/lib/json-simple-1.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/clj-time-0.4.1.jar:/usr/hdp/2.2.4.2-2/storm/lib/ring-devel-1.3.0.jar:/usr/hdp/2.2.4.2-2/storm/conf:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar:/hadoop/storm/supervisor/stormdist/aek-16-1436963685/stormjar.jar:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.library.path=/hadoop/storm/supervisor/stormdist/aek-16-1436963685/resources/Linux-amd64:/hadoop/storm/supervisor/stormdist/aek-16-1436963685/resources:/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.io.tmpdir=/tmp
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:java.compiler=<NA>
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:os.name=Linux
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:os.arch=amd64
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:os.version=3.2.0-70-generic
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.name=storm
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.home=/home/storm
2015-07-15 15:34:48 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.dir=/home/storm
2015-07-15 15:34:49 b.s.d.worker [INFO] Launching worker for aek-16-1436963685 on 3a7d0fdf-91c7-461c-bc24-2c912a622f34:6701 with id 3229d690-cb75-45a3-bab4-e3d0dad1c9a3 and conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "supervisor.run.worker.as.user" false, "topology.max.error.report.per.interval" 5, "storm.group.mapping.service" "backtype.storm.security.auth.ShellBasedGroupsMapping", "zmq.linger.millis" 5000, "topology.skip.missing.kryo.registrations" false, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m ", "storm.zookeeper.session.timeout" 20000, "ui.filter.params" nil, "nimbus.reassign" true, "storm.auth.simple-acl.admins" [], "storm.group.mapping.service.cache.duration.secs" 120, "topology.trident.batch.emit.interval.millis" 500, "drpc.authorizer.acl.filename" "drpc-auth-acl.yaml", "storm.messaging.netty.flush.check.interval.ms" 10, "ui.header.buffer.bytes" 4096, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m ", "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib", "supervisor.supervisors" [], "topology.executor.send.buffer.size" 1024, "metrics.reporter.register" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter", "storm.local.dir" "/hadoop/storm", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "drpc.authorizer.acl.strict" false, "storm.nimbus.retry.times" 5, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "storm.meta.serialization.delegate" "backtype.storm.serialization.DefaultSerializationDelegate", "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "clu01.softnet.tuc.gr", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2181, "transactional.zookeeper.port" nil, "ui.http.creds.plugin" "backtype.storm.security.auth.DefaultHttpCredentialsPlugin", "topology.executor.receive.buffer.size" 1024, "logs.users" nil, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["clu02.softnet.tuc.gr" "clu01.softnet.tuc.gr" "clu03.softnet.tuc.gr"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "storm.auth.simple-acl.users" [], "storm.zookeeper.auth.user" nil, "topology.testing.always.try.serialize" false, "topology.transfer.buffer.size" 1024, "storm.principal.tolocal" "backtype.storm.security.auth.DefaultPrincipalToLocal", "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m -javaagent:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-client/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM", "storm.auth.simple-acl.users.commands" [], "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "storm.nimbus.retry.interval.millis" 2000, "ui.users" nil, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m ", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, 
"topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "nimbus.thrift.max_buffer_size" 1048576, "drpc.invocations.threads" 64, "drpc.https.port" -1, "supervisor.supervisors.commands" [], "topology.metrics.consumer.register" [{"class" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink", "parallelism.hint" 1}], "topology.max.spout.pending" nil, "ui.filter" nil, "logviewer.cleanup.age.mins" 10080, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701], "storm.messaging.netty.authentication" false, "topology.environment" nil, "topology.debug" false, "nimbus.thrift.threads" 64, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "drpc.http.creds.plugin" "backtype.storm.security.auth.DefaultHttpCredentialsPlugin", "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=56431 -javaagent:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM", "storm.auth.simple-white-list.users" [], "nimbus.thrift.port" 6627, "drpc.https.keystore.type" "JKS", "topology.stats.sample.rate" 0.05, "task.credentials.poll.secs" 30, "worker.heartbeat.frequency.secs" 1, "ui.actions.enabled" true, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "drpc.https.keystore.password" "", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "topology.multilang.serializer" "backtype.storm.multilang.JsonSerializer", "drpc.max_buffer_size" 1048576, "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "topology.worker.receiver.thread.count" 1, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "nimbus.credential.renewers.freq.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.netty.Context", "worker.gc.childopts" "", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "storm.zookeeper.auth.password" nil, "drpc.http.port" 3774, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8745, "nimbus.childopts" "-Xmx1024m -javaagent:/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8649,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM", "storm.cluster.mode" "distributed", "topology.optimize" true, "topology.max.task.parallelism" nil, "storm.messaging.netty.transfer.batch.size" 262144, "storm.nimbus.retry.intervalceiling.millis" 60000, "topology.classpath" nil, "storm.log.dir" "/var/log/storm"}
2015-07-15 15:34:49 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [1000] the maxSleepTimeMs [30000] the maxRetries [5]
2015-07-15 15:34:49 o.a.s.c.f.i.CuratorFrameworkImpl [INFO] Starting
2015-07-15 15:34:49 o.a.s.z.ZooKeeper [INFO] Initiating client connection, connectString=clu02.softnet.tuc.gr:2181,clu01.softnet.tuc.gr:2181,clu03.softnet.tuc.gr:2181 sessionTimeout=20000 watcher=org.apache.storm.curator.ConnectionState#1ce9f29c
2015-07-15 15:34:49 o.a.s.z.ClientCnxn [INFO] Opening socket connection to server clu02.softnet.tuc.gr/147.27.14.202:2181. Will not attempt to authenticate using SASL (unknown error)
2015-07-15 15:34:49 o.a.s.z.ClientCnxn [INFO] Socket connection established to clu02.softnet.tuc.gr/147.27.14.202:2181, initiating session
2015-07-15 15:34:49 o.a.s.z.ClientCnxn [INFO] Session establishment complete on server clu02.softnet.tuc.gr/147.27.14.202:2181, sessionid = 0x24d6c5b265b5e1a, negotiated timeout = 20000
2015-07-15 15:34:49 o.a.s.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2015-07-15 15:34:49 b.s.zookeeper [INFO] Zookeeper state update: :connected:none
2015-07-15 15:34:50 o.a.s.z.ZooKeeper [INFO] Session: 0x24d6c5b265b5e1a closed
2015-07-15 15:34:50 o.a.s.z.ClientCnxn [INFO] EventThread shut down
2015-07-15 15:34:50 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [1000] the maxSleepTimeMs [30000] the maxRetries [5]
2015-07-15 15:34:50 o.a.s.c.f.i.CuratorFrameworkImpl [INFO] Starting
2015-07-15 15:34:50 o.a.s.z.ZooKeeper [INFO] Initiating client connection, connectString=clu02.softnet.tuc.gr:2181,clu01.softnet.tuc.gr:2181,clu03.softnet.tuc.gr:2181/storm sessionTimeout=20000 watcher=org.apache.storm.curator.ConnectionState#10c3dd25
2015-07-15 15:34:50 o.a.s.z.ClientCnxn [INFO] Opening socket connection to server clu02.softnet.tuc.gr/147.27.14.202:2181. Will not attempt to authenticate using SASL (unknown error)
2015-07-15 15:34:50 o.a.s.z.ClientCnxn [INFO] Socket connection established to clu02.softnet.tuc.gr/147.27.14.202:2181, initiating session
2015-07-15 15:34:50 o.a.s.z.ClientCnxn [INFO] Session establishment complete on server clu02.softnet.tuc.gr/147.27.14.202:2181, sessionid = 0x24d6c5b265b5e1b, negotiated timeout = 20000
2015-07-15 15:34:50 o.a.s.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2015-07-15 15:34:50 b.s.s.a.AuthUtils [INFO] Got AutoCreds []
2015-07-15 15:34:50 b.s.d.worker [INFO] Reading Assignments.
2015-07-15 15:34:50 b.s.m.TransportFactory [INFO] Storm peer transport plugin:backtype.storm.messaging.netty.Context
2015-07-15 15:34:51 b.s.d.executor [INFO] Loading executor __metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink:[2 2]
2015-07-15 15:34:51 b.s.d.task [INFO] Emitting: __metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink __system ["startup"]
2015-07-15 15:34:51 b.s.d.executor [INFO] Loaded executor tasks __metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink:[2 2]
2015-07-15 15:34:51 b.s.d.executor [INFO] Finished loading executor __metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink:[2 2]
2015-07-15 15:34:51 b.s.d.executor [INFO] Preparing bolt __metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink:(2)
2015-07-15 15:34:51 b.s.d.executor [INFO] Loading executor distributeeventbolt:[3 3]
2015-07-15 15:34:51 b.s.d.task [INFO] Emitting: distributeeventbolt __system ["startup"]
2015-07-15 15:34:51 b.s.d.executor [INFO] Loaded executor tasks distributeeventbolt:[3 3]
2015-07-15 15:34:51 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: Could not instantiate a class listed in config under section topology.metrics.consumer.register with fully qualified name org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at backtype.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:46) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.daemon.executor$fn__4641$fn__4654.invoke(executor.clj:732) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.util$async_loop$fn__551.invoke(util.clj:463) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_80]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_80]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_80]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_80]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_80]
at java.lang.Class.forName0(Native Method) ~[na:1.7.0_80]
at java.lang.Class.forName(Class.java:195) ~[na:1.7.0_80]
at backtype.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:44) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
... 4 common frames omitted
2015-07-15 15:34:51 b.s.d.executor [ERROR]
java.lang.RuntimeException: Could not instantiate a class listed in config under section topology.metrics.consumer.register with fully qualified name org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at backtype.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:46) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.daemon.executor$fn__4641$fn__4654.invoke(executor.clj:732) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.util$async_loop$fn__551.invoke(util.clj:463) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_80]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_80]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_80]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_80]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_80]
at java.lang.Class.forName0(Native Method) ~[na:1.7.0_80]
at java.lang.Class.forName(Class.java:195) ~[na:1.7.0_80]
at backtype.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:44) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
... 4 common frames omitted
2015-07-15 15:34:51 b.s.d.executor [INFO] Finished loading executor distributeeventbolt:[3 3]
2015-07-15 15:34:51 b.s.d.executor [INFO] Preparing bolt distributeeventbolt:(3)
2015-07-15 15:34:51 b.s.d.executor [INFO] Prepared bolt distributeeventbolt:(3)
2015-07-15 15:34:51 b.s.d.executor [INFO] Loading executor distributeeventbolt:[4 4]
2015-07-15 15:34:51 b.s.d.task [INFO] Emitting: distributeeventbolt __system ["startup"]
2015-07-15 15:34:51 b.s.d.executor [INFO] Loaded executor tasks distributeeventbolt:[4 4]
2015-07-15 15:34:51 b.s.d.executor [INFO] Finished loading executor distributeeventbolt:[4 4]
2015-07-15 15:34:51 b.s.d.executor [INFO] Preparing bolt distributeeventbolt:(4)
2015-07-15 15:34:51 b.s.d.executor [INFO] Prepared bolt distributeeventbolt:(4)
2015-07-15 15:34:51 b.s.d.executor [INFO] Loading executor distributeeventbolt:[5 5]
2015-07-15 15:34:51 b.s.d.task [INFO] Emitting: distributeeventbolt __system ["startup"]
2015-07-15 15:34:51 b.s.d.executor [INFO] Loaded executor tasks distributeeventbolt:[5 5]
2015-07-15 15:34:51 b.s.d.executor [INFO] Finished loading executor distributeeventbolt:[5 5]
2015-07-15 15:34:51 b.s.d.executor [INFO] Preparing bolt distributeeventbolt:(5)
2015-07-15 15:34:51 b.s.d.executor [INFO] Prepared bolt distributeeventbolt:(5)
2015-07-15 15:34:51 b.s.d.executor [INFO] Loading executor eventspout:[6 6]
2015-07-15 15:34:51 b.s.util [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:322) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$fn__5053$fn__5054.invoke(worker.clj:495) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.daemon.executor$mk_executor_data$fn__4474$fn__4475.invoke(executor.clj:245) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at backtype.storm.util$async_loop$fn__551.invoke(util.clj:475) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
EDIT_1:
I exported my topology as a Runnable JAR via Eclipse, but this came up in my terminal:
Exception in thread "main" java.lang.ExceptionInInitializerError
at backtype.storm.topology.TopologyBuilder$BoltGetter.customGrouping(TopologyBuilder.java:340)
at backtype.storm.topology.TopologyBuilder$BoltGetter.customGrouping(TopologyBuilder.java:264)
at main.java.storm.Main.main(Main.java:47)
Caused by: java.lang.RuntimeException: Found multiple defaults.yaml resources. You're probably bundling the Storm jars with your topology jar. [jar:file:/home/gdidymiotis/teliko_1.0.0_runnable.jar!/defaults.yaml, jar:file:/usr/hdp/2.2.4.2-2/storm/lib/storm-core-0.9.3.2.2.4.2-2.jar!/defaults.yaml]
at backtype.storm.utils.Utils.findAndReadConfigFile(Utils.java:139)
at backtype.storm.utils.Utils.readDefaultConfig(Utils.java:166)
at backtype.storm.utils.Utils.readStormConfig(Utils.java:190)
at backtype.storm.utils.Utils.<clinit>(Utils.java:77)
... 3 more
The log shows the problem clearly: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
I guess this class, which the cluster registers under topology.metrics.consumer.register, is not included in the jar you submit to Storm, so the worker cannot instantiate the metrics consumer.
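EDIT_1 shows a second, separate problem, and its message already names the cause: the Storm jars are bundled into the topology jar. If you build with Maven (an assumption on my part, since the post exports the jar from Eclipse), the usual fix is to declare storm-core with provided scope so it is not packaged; a sketch:
<!-- Hypothetical pom.xml fragment: keep storm-core out of the topology jar -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.3</version>
    <scope>provided</scope>
</dependency>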

Storm drpc server does not accept spout request

I am using Apache Storm 0.9.3 on Ubuntu 14.04. I put ZooKeeper, Nimbus, DRPC, the supervisor, the UI, and a worker in the same box. From the UI, it looks fine (screenshot not included).
My storm.yaml configuration is as follows:
storm.zookeeper.servers:
  - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/local/lib"
supervisor.slots.ports:
  -6700
  -6701
  -6702
worker.childopts: "-Xmx768m"
nimbus.childopts: "-Xmx512m"
supervisor.childopts: "-Xmx256m"
drpc.servers:
  - "localhost"
Then, my Java client makes the DRPC call as follows ("callstatio" is the topology name in the Storm UI):
public static void main(String[] args) throws TException, DRPCExecutionException {
    System.out.println("Entering main in TestSpout");
    String host = "127.0.0.1";
    DRPCClient client = new DRPCClient(host, 3772);
    System.out.println("host is:" + host);
    String result = client.execute("callstatio", "hello world");
    System.out.println("result is:" + result);
}
When I run the client, I cannot see any request being served in drpc.log (only timeout warnings appear), and there is no exception on the client side.
Any hints as to why I cannot get the DRPC server working?
The following is from tail -f drpc.log:
2015-03-25T03:50:56.842-0400 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.home=/root
2015-03-25T03:50:56.842-0400 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.dir=/home/juhani/storm/apache-storm-0.9.3/bin
2015-03-25T03:50:57.293-0400 b.s.d.drpc [INFO] Starting Distributed RPC servers...
2015-03-25T04:09:27.331-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 1 start at 1427270366
2015-03-25T04:11:22.337-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 2 start at 1427270477
2015-03-25T04:13:42.342-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 3 start at 1427270620
2015-03-25T04:16:32.349-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 4 start at 1427270791
2015-03-25T04:20:52.358-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 5 start at 1427271047
2015-03-25T04:23:07.373-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 6 start at 1427271183
2015-03-25T04:25:27.377-0400 b.s.d.drpc [WARN] Timeout DRPC request id: 7 start at 1427271325
