Solana leader schedule: CLI vs API correlation

How do I correlate the API output with the CLI output? The leader schedule coming from the API is returned as offsets (slot indexes relative to the first slot of the epoch):
solana#:~$ curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '
> {
> "id":1,
> "jsonrpc":"2.0",
> "method":"getLeaderSchedule",
> "params":[
> {
> "identity":"AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb"
> }
> ]
> }
> '
The output
{"jsonrpc":"2.0","result":{"AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb":[48360,48361,48362,48363,49260,49261,49262,49263,75072,75073,75074,75075,112200,112201,112202,112203,114572,114573,114574,114575,140984,140985,140986,140987,158720,158721,158722,158723,166276,166277,166278,166279,185124,185125,185126,185127,226220,226221,226222,226223,249656,249657,249658,249659,252316,252317,252318,252319,258804,258805,258806,258807,259116,259117,259118,259119,259336,259337,259338,259339,277108,277109,277110,277111,289656,289657,289658,289659,313972,313973,313974,313975,327088,327089,327090,327091,377792,377793,377794,377795,424116,424117,424118,424119]},"id":1}
Here is the leader schedule directly from the CLI; as you can see, the output is real slot numbers, not offsets:
solana#:~$ solana leader-schedule | grep "AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb"
132240360 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132240361 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132240362 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132240363 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132241260 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132241261 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132241262 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132241263 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132267072 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132267073 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132267074 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132267075 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132304200 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132304201 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132304202 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132304203 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132306572 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132306573 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132306574 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132306575 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132332984 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132332985 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132332986 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132332987 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132350720 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132350721 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132350722 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132350723 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132358276 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132358277 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132358278 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132358279 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132377124 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132377125 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132377126 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132377127 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132418220 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
132418221 AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb
.....
.......
How can I use the API to produce the Solana leader schedule as it is shown by the Solana CLI?

It's a bit convoluted, but to do the same thing as the CLI, you'll need to get the current epoch using getEpochInfo (https://docs.solana.com/developing/clients/jsonrpc-api#getepochinfo) and then the epoch schedule using getEpochSchedule (https://docs.solana.com/developing/clients/jsonrpc-api#getepochschedule).
With all of these, you can get the first slot in the epoch:
firstSlotInEpoch = (currentEpoch - firstNormalEpoch) * slotsPerEpoch + firstNormalSlot
With that, you can add firstSlotInEpoch to each of the values in the leader schedule.
The full CLI implementation doing this can be found at https://github.com/solana-labs/solana/blob/e1866aacad667073a6fcc3231f2494ef39363928/cli/src/cluster_query.rs#L998-L1044
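Putting those pieces together, here is a minimal Python sketch of the same calculation against the public RPC endpoint (the rpc helper and variable names are illustrative, and the formula assumes the current epoch is at or after firstNormalEpoch, as in the answer above):

import requests

RPC_URL = "https://api.mainnet-beta.solana.com"
IDENTITY = "AAHSdsnRREfdQNzDGRxai8CLXh9EPCoRdwULPqBYd9fb"

def rpc(method, params=None):
    # Minimal JSON-RPC helper (illustrative only).
    body = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    return requests.post(RPC_URL, json=body).json()["result"]

epoch_info = rpc("getEpochInfo")
epoch_schedule = rpc("getEpochSchedule")

# First absolute slot of the current epoch (valid for epochs >= firstNormalEpoch).
first_slot_in_epoch = (
    (epoch_info["epoch"] - epoch_schedule["firstNormalEpoch"])
    * epoch_schedule["slotsPerEpoch"]
    + epoch_schedule["firstNormalSlot"]
)

# Same request as the curl example above; the result maps identity -> slot offsets.
schedule = rpc("getLeaderSchedule", [{"identity": IDENTITY}])

# Add the epoch's first slot to each offset to get absolute slot numbers, like the CLI prints.
for offset in schedule.get(IDENTITY, []):
    print(first_slot_in_epoch + offset, IDENTITY)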

Related

Kafka Connect MySQL "server time zone value 'EDT' is unrecognized"

I'm new to Kafka but had Debezium Connect/MySQL up and running in Docker just fine. All of a sudden all the Docker containers were gone, and upon restart and an attempt to reconnect to MySQL, the JDBC connection fails with this response from the Connect REST API:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
Using KAFKA_LISTENERS=PLAINTEXT://172.19.0.5:9092 and KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.19.0.5:9092
HTTP/1.1 400 Bad Request
Date: Fri, 30 Apr 2021 20:39:56 GMT
Content-Type: application/json
Content-Length:
Server: Jetty(9.4.33.v20201020)
{"error_code":400,"message":"Connector configuration is invalid
and contains the following 1 error(s): \nUnable to connect: The server time
zone value 'EDT' is unrecognized or represents more than one time zone. You
must configure either the server or JDBC driver (via the 'serverTimezone'
configuration property) to use a more specifc time zone value if you want
to utilize time zone support.\nYou can also find the above list of errors
at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
in response to running this:
docker run -it --rm --net mynet_default debezium/kafka \
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
mynet_connect_1:8083/connectors/ \
-d '{
"name": "my-connector-001",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"tasks.max": "1",
"database.hostname": "my.domain.com",
"database.port": "3306",
"database.user": "myuser",
"database.password": "mypassword",
"database.server.id": "6400",
"database.server.name": "dbserver001",
"database.include.list": "mydb",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "dbhistory.metrics.connector004",
"table.include.list":"mydb.users,mydb.applications"
} }'
This worked fine for a couple of hours. Then, as I was watching updates mostly straight from the Debezium tutorial, all the Kafka containers were gone, and ever since (with the exact same config) it will no longer connect, citing the timezone issue. I can connect with the same credentials in the mysql client (via the Docker network), and the MySQL permissions and grants did not change. There's a Gitter mention from last July that this error is itself an erroneous indication of some other connection failure, and there are multiple reports of it being a bug in JDBC. Is there any other possibility besides someone having changed something on our database?
This is the Connect log:
connect_1 | 2021-04-30 20:28:47,254 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2021-04-30 20:39:56,996 ERROR || Failed testing connection for jdbc:mysql://my.domain.com:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'myuser' [io.debezium.connector.mysql.MySqlConnector]
connect_1 | java.sql.SQLException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support.
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:76)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
connect_1 | at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197)
connect_1 | at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$1(JdbcConnection.java:231)
connect_1 | at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:872)
connect_1 | at io.debezium.connector.mysql.MySqlConnection.connection(MySqlConnection.java:79)
connect_1 | at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:867)
connect_1 | at io.debezium.jdbc.JdbcConnection.connect(JdbcConnection.java:413)
connect_1 | at io.debezium.connector.mysql.MySqlConnector.validateConnection(MySqlConnector.java:98)
connect_1 | at io.debezium.connector.common.RelationalBaseSourceConnector.validate(RelationalBaseSourceConnector.java:52)
connect_1 | at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:375)
connect_1 | at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
connect_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
connect_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
connect_1 | at java.base/java.lang.Thread.run(Thread.java:834)
connect_1 | Caused by: com.mysql.cj.exceptions.InvalidConnectionAttributeException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support.
connect_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
connect_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
connect_1 | at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
connect_1 | at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
connect_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
connect_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:85)
connect_1 | at com.mysql.cj.util.TimeUtil.getCanonicalTimezone(TimeUtil.java:132)
connect_1 | at com.mysql.cj.protocol.a.NativeProtocol.configureTimezone(NativeProtocol.java:2120)
connect_1 | at com.mysql.cj.protocol.a.NativeProtocol.initServerSession(NativeProtocol.java:2143)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:1310)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:967)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826)
connect_1 | ... 17 more
connect_1 | 2021-04-30 20:39:56,998 INFO || AbstractConfig values:
connect_1 | [org.apache.kafka.common.config.AbstractConfig]
Is there a parameter I could pass in via the Connect REST API to specify the timezone in the JDBC string (shown near the top of this log)? I'm using the Debezium (1.5) Docker images per this tutorial.
I think EDT is not in the mysql.time_zone_name table
SELECT * FROM mysql.time_zone_name;
Adding the configuration "database.serverTimezone" solves this issue on my end, e.g.
"config": {
...
"database.serverTimezone": "America/Los_Angeles",
...
}
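So, in terms of the original curl request from the question, the config block would gain that one property (a sketch; America/New_York is just an example value covering EDT, and the remaining properties stay as before):
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"database.hostname": "my.domain.com",
"database.serverTimezone": "America/New_York",
...
}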
In my case, using Kafka Connect, I had to modify the connector config's MySQL connection string (the connection.url property) like this:
jdbc:mysql://<server ip or dns>:<port>?serverTimezone=GMT%2b8:00&<more parameters>
Note: %2b is the + character.
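As a concrete, hypothetical example with placeholder host and port filled in:
jdbc:mysql://db.example.com:3306?serverTimezone=GMT%2b8:00&<more parameters>
An IANA zone name such as serverTimezone=America/New_York should also work and avoids having to URL-encode the + sign.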

Adding requestTimeout causes Kibana to fail at startup

I am experimenting with Elasticsearch-Kibana-Logstash for processing web log files.
Some of the queries take a bit of time and Kibana times out quickly, so I wanted to increase the amount of time that Kibana will wait for a response from Elasticsearch. After a bit of searching, I found some suggestions to set elasticsearch.requestTimeout, so I tried to increase the timeout by adding this to my kibana.yml file:
elasticsearch.requestTimeout: 5000
This causes Kibana to fail immediately on startup with this error:
kibserver_1 | FATAL { Error: Payload timeout must be shorter than socket timeout: POST /elasticsearch/{index}/_search
kibserver_1 | at Object.exports.assert (/usr/share/kibana/node_modules/hoek/lib/index.js:736:11)
kibserver_1 | at new module.exports.internals.Route (/usr/share/kibana/node_modules/hapi/lib/route.js:69:10)
kibserver_1 | at internals.Connection._addRoute (/usr/share/kibana/node_modules/hapi/lib/connection.js:387:19)
kibserver_1 | at internals.Connection._route (/usr/share/kibana/node_modules/hapi/lib/connection.js:379:18)
kibserver_1 | at internals.Plugin._apply (/usr/share/kibana/node_modules/hapi/lib/plugin.js:572:14)
kibserver_1 | at internals.Plugin.route (/usr/share/kibana/node_modules/hapi/lib/plugin.js:542:10)
kibserver_1 | at createProxy (/usr/share/kibana/src/core_plugins/elasticsearch/lib/create_proxy.js:85:14)
kibserver_1 | at ScopedPlugin.init [as externalInit] (/usr/share/kibana/src/core_plugins/elasticsearch/index.js:110:37)
kibserver_1 | at ScopedPlugin.tryCatcher (/usr/share/kibana/node_modules/bluebird/js/main/util.js:26:23)
kibserver_1 | at Promise.attempt.Promise.try (/usr/share/kibana/node_modules/bluebird/js/main/method.js:30:24)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:196:46
kibserver_1 | at next (native)
kibserver_1 | at step (/usr/share/kibana/src/server/plugins/plugin.js:25:191)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:25:361
kibserver_1 | cause:
kibserver_1 | Error: Payload timeout must be shorter than socket timeout: POST /elasticsearch/{index}/_search
kibserver_1 | at Object.exports.assert (/usr/share/kibana/node_modules/hoek/lib/index.js:736:11)
kibserver_1 | at new module.exports.internals.Route (/usr/share/kibana/node_modules/hapi/lib/route.js:69:10)
kibserver_1 | at internals.Connection._addRoute (/usr/share/kibana/node_modules/hapi/lib/connection.js:387:19)
kibserver_1 | at internals.Connection._route (/usr/share/kibana/node_modules/hapi/lib/connection.js:379:18)
kibserver_1 | at internals.Plugin._apply (/usr/share/kibana/node_modules/hapi/lib/plugin.js:572:14)
kibserver_1 | at internals.Plugin.route (/usr/share/kibana/node_modules/hapi/lib/plugin.js:542:10)
kibserver_1 | at createProxy (/usr/share/kibana/src/core_plugins/elasticsearch/lib/create_proxy.js:85:14)
kibserver_1 | at ScopedPlugin.init [as externalInit] (/usr/share/kibana/src/core_plugins/elasticsearch/index.js:110:37)
kibserver_1 | at ScopedPlugin.tryCatcher (/usr/share/kibana/node_modules/bluebird/js/main/util.js:26:23)
kibserver_1 | at Promise.attempt.Promise.try (/usr/share/kibana/node_modules/bluebird/js/main/method.js:30:24)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:196:46
kibserver_1 | at next (native)
kibserver_1 | at step (/usr/share/kibana/src/server/plugins/plugin.js:25:191)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:25:361,
kibserver_1 | isOperational: true }
This one baffles me. I can't seem to find any reference to a "payload timeout" in the ElasticSearch documentation. My web searches suggest this might be coming from hapijs, but I'm not sure how to resolve this. Does anyone out there know?
(Kibana, ElasticSearch, and Logstash are all v 6.1.0)
I think the problem is that you're setting the timeout to a value that's too small; it's in milliseconds and the default is 30000 (see https://www.elastic.co/guide/en/kibana/6.1/settings.html):
elasticsearch.requestTimeout:
Default: 30000 Time in milliseconds to wait for responses from the back end or Elasticsearch. This value must be a positive integer.
What might be happening is that elasticsearch.requestTimeout is used to set the socket timeout in hapi.js, and since the default for the payload timeout seems to be 10 seconds (from the hapi route options):
route.options.payload.timeout
Default value: 10000 (10 seconds).
a 5000 ms request timeout fails the check that the payload timeout must be shorter than the socket timeout. But that's just a hypothesis, and I have failed to find any proof of it in Kibana's code.
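If that hypothesis is right, a value comfortably above the 10-second payload timeout should avoid the startup assertion, and it also needs to be above the 30000 default for the change to actually lengthen the wait. A sketch (not a verified fix; the value is in milliseconds):
elasticsearch.requestTimeout: 60000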

Getting NullPointerException while trying to stop apache-activemq?

I am unable to shut down my ActiveMQ gracefully after enabling JMX. Please help and tell me what I am doing wrong. Here is what I am trying to do.
Start ActiveMQ:
[mwapp#JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq start xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
INFO: pidfile created : '/app/apache-activemq-5.14.0//data/activemq.pid' (pid '16917')
activemq.log (to me it looks fine):
2017-10-12 13:48:18,936 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#2142b533: startup date [Thu Oct 12 13:48:18 IST 2017]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2017-10-12 13:48:20,008 | INFO | Loading properties file from URL [file:/app/apache-activemq-5.14.0//conf/credentials-enc.properties] | org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer | main
2017-10-12 13:48:20,975 | INFO | Loaded the Bouncy Castle security provider. | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,283 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/jms_nas/kahadb] | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,308 | INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2017-10-12 13:48:21,594 | INFO | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,653 | INFO | Recovering from the journal #1105:27118028 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,657 | INFO | Recovery replayed 58 operations from the journal in 0.046 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,719 | INFO | PListStore:[/app/apache-activemq-5.14.0/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2017-10-12 13:48:21,903 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,786 | INFO | Listening for connections at: ssl://JMNGD1BAO150V02:61616?needClientAuth=true&maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2017-10-12 13:48:22,787 | INFO | Connector ssl started | org.apache.activemq.broker.TransportConnector | main
2017-10-12 13:48:22,787 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) started | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,787 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,788 | WARN | Store limit is 102400 mb (current store usage is 1397 mb). The data directory: /jms_nas/kahadb only has 91534 mb of usable space. - resetting to maximum available disk space: 91534 mb | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:23,646 | INFO | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ WebConsole available at http://localhost:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ Jolokia REST API available at http://localhost:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,799 | INFO | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2017-10-12 13:48:24,068 | INFO | No Spring WebApplicationInitializer types detected on classpath | /api | main
2017-10-12 13:48:24,185 | INFO | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main
Stop ActiveMQ:
[mwapp#JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq stop xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '16917' :
Java Runtime: Oracle Corporation 1.7.0_79 /usr/java/jre1.7.0_79
Heap sizes: current=63488k free=61608k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/app/apache-activemq-5.14.0//conf/login.config -Dactivemq.classpath=/app/apache-activemq-5.14.0//conf:/app/apache-activemq-5.14.0//../lib/: -Dactivemq.home=/app/apache-activemq-5.14.0/ -Dactivemq.base=/app/apache-activemq-5.14.0/ -Dactivemq.conf=/app/apache-activemq-5.14.0//conf -Dactivemq.data=/app/apache-activemq-5.14.0//data
Extensions classpath:
[/app/apache-activemq-5.14.0/lib,/app/apache-activemq-5.14.0/lib/camel,/app/apache-activemq-5.14.0/lib/optional,/app/apache-activemq-5.14.0/lib/web,/app/apache-activemq-5.14.0/lib/extra]
ACTIVEMQ_HOME: /app/apache-activemq-5.14.0
ACTIVEMQ_BASE: /app/apache-activemq-5.14.0
ACTIVEMQ_CONF: /app/apache-activemq-5.14.0/conf
ACTIVEMQ_DATA: /app/apache-activemq-5.14.0/data
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
ERROR: java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.activemq.console.command.AbstractCommand.handleException(AbstractCommand.java:167)
at org.apache.activemq.console.command.AbstractJmxCommand.execute(AbstractJmxCommand.java:390)
at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)
at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.activemq.console.Main.runTaskClass(Main.java:262)
at org.apache.activemq.console.Main.main(Main.java:115)
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '16917'
As you can see, the system is forcefully shut down using the pid, which is not expected. I am currently using apache-activemq-5.14.0, and the configuration files look like the ones below. I am not sure why ActiveMQ provides two separate files for enabling JMX, i.e. env and activemq-security.xml, or whether the env file has some different role to play. I read the documentation and got more confused when it mentioned that from v5.12.0 onwards OCSP is supported. Do I need to enable that too?
${ACTIVEMQ_HOME}/bin/env
#!/bin/sh
# Active MQ installation dirs
# ACTIVEMQ_HOME="<Installationdir>/"
# ACTIVEMQ_BASE="$ACTIVEMQ_HOME"
# ACTIVEMQ_CONF="$ACTIVEMQ_BASE/conf"
# ACTIVEMQ_DATA="$ACTIVEMQ_BASE/data"
# ACTIVEMQ_TMP="$ACTIVEMQ_BASE/tmp"
ACTIVEMQ_OPTS_MEMORY="-Xms64M -Xmx1G"
if [ -z "$ACTIVEMQ_OPTS" ] ; then
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=$ACTIVEMQ_CONF/login.config"
fi
#ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.audit=true"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.port=1099"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_CONF}/jmx.password"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_CONF}/jmx.access"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=true"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote"
#ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"
ACTIVEMQ_SUNJMX_CONTROL=""
if [ -z "$ACTIVEMQ_QUEUEMANAGERURL" ]; then
ACTIVEMQ_QUEUEMANAGERURL="--amqurl tcp://localhost:61616"
fi
if [ -z "$ACTIVEMQ_SSL_OPTS" ] ; then
#ACTIVEMQ_SSL_OPTS="-Djava.security.properties=$ACTIVEMQ_CONF/java.security"
ACTIVEMQ_SSL_OPTS=""
fi
#ACTIVEMQ_DEBUG_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
if [ -z "$ACTIVEMQ_KILL_MAXSECONDS" ]; then
ACTIVEMQ_KILL_MAXSECONDS=30
fi
ACTIVEMQ_USER=""
# ACTIVEMQ_PIDFILE="$ACTIVEMQ_DATA/activemq.pid"
JAVA_HOME="/usr/java/jre1.7.0_79/"
${ACTIVEMQ_HOME}/conf/activemq-security.xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
----------------------------------------------
<managementContext>
<!--managementContext createConnector="true" connectorPort="1099"/-->
<managementContext createConnector="true">
<property xmlns="http://www.springframework.org/schema/beans" name="environment">
<map xmlns="http://www.springframework.org/schema/beans">
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.password.file" value="${activemq.base}/conf/jmx.password"/>
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.access.file" value="${activemq.base}/conf/jmx.access"/>
</map>
</property>
</managementContext>
</managementContext>
----------------------------------------------
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
</shutdownHooks>
</broker>
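One detail worth checking, since the managementContext above points the JMX connector at jmx.password and jmx.access: in the env file, ACTIVEMQ_SUNJMX_CONTROL is left empty, so the stop command appears to connect to JMX without credentials. A hedged sketch of what passing them might look like (this is just the commented-out line from the stock env file; controlRole/abcd1234 are its placeholder values, not real credentials):
ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"
Whether that is actually what triggers the NullPointerException here would still need to be verified.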

Floating ip pool not found

I'm trying to use vagrant-openstack-provider to manage Bluemix VMs.
All is looking good, except for an error message at the end: Floating ip pool not found.
2015-09-27 11:17 | DEBUG | request => method : POST
2015-09-27 11:17 | DEBUG | request => url : https://api2-dal09.open.ibmcloud.com:8774/v2/.../os-floating-ips
2015-09-27 11:17 | DEBUG | request => headers : {"X-Auth-Token"=>"...", :accept=>:json, :content_type=>:json}
2015-09-27 11:17 | DEBUG | request => body : {"pool":"private"}
2015-09-27 11:17 | DEBUG | response => code : 404
2015-09-27 11:17 | DEBUG | response => headers : {:content_length=>"73", :content_type=>"application/json; charset=UTF-8", :x_compute_request_id=>"...", :date=>"Sun, 27 Sep 2015 10:17:30 GMT"}
2015-09-27 11:17 | DEBUG | response => body : {"itemNotFound": {"message": "Floating ip pool not found.", "code": 404}}
2015-09-27 11:17 | WARN | Error allocating ip in pool private : Floating ip pool not found.
2015-09-27 11:17 | WARN | Impossible to allocate a new IP
ERROR warden: Error occurred: Floating ip pool not found.
Is it possible to create an IP pool in the Horizon console? If so, how do I do this? I couldn't find any documentation online.
I specified 'private' in the Vagrantfile:
os.floating_ip_pool = 'private'
I should have been using 'Public-Network' instead:
os.floating_ip_pool = 'Public-Network'
I didn't realise at the time, but you can find the floating_ip_pool with:
snowch$ vagrant openstack floatingip-list
+-------------------+
| Floating IP pools |
+-------------------+
| Public-Network |
+-------------------+
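For context, a minimal sketch of where that setting lives in the Vagrantfile (the block structure follows the vagrant-openstack-provider README; everything else is omitted):
Vagrant.configure("2") do |config|
  config.vm.provider :openstack do |os|
    # Must match one of the pools listed by `vagrant openstack floatingip-list`
    os.floating_ip_pool = 'Public-Network'
  end
end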

Exception when running Grails from terminal on OSX

I get the following exception when I try to run Grails from the terminal in OSX:
| Loading Grails 2.3.6
| Error java.lang.reflect.InvocationTargetException
| Error at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| Error at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| Error at java.lang.reflect.Method.invoke(Method.java:597)
| Error at org.codehaus.groovy.grails.cli.support.GrailsStarter.rootLoader(GrailsStarter.java:235)
| Error at org.codehaus.groovy.grails.cli.support.GrailsStarter.main(GrailsStarter.java:263)
| Error Caused by: java.lang.IllegalAccessError: class sun.reflect.GeneratedConstructorAccessor2 cannot access its superclass sun.reflect.ConstructorAccessorImpl
| Error at sun.misc.Unsafe.defineClass(Native Method)
| Error at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:45)
| Error at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
| Error at java.security.AccessController.doPrivileged(Native Method)
| Error at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)
| Error at sun.reflect.MethodAccessorGenerator.generateConstructor(MethodAccessorGenerator.java:76)
| Error at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:30)
| Error at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
| Error at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
| Error at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:77)
| Error at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:102)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:202)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor.addRuleForModuleId(EnhancedDefaultDependencyDescriptor.groovy:135)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor$addRuleForModuleId$0.callCurrent(Unknown Source)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor.excludeForMap(EnhancedDefaultDependencyDescriptor.groovy:113)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor.this$3$excludeForMap(EnhancedDefaultDependencyDescriptor.groovy)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor$this$3$excludeForMap.callCurrent(Unknown Source)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor.<init>(EnhancedDefaultDependencyDescriptor.groovy:76)
| Error at org.codehaus.groovy.grails.resolve.EnhancedDefaultDependencyDescriptor.<init>(EnhancedDefaultDependencyDescriptor.groovy:80)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies.registerDependency(GrailsIvyDependencies.groovy:69)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies.registerDependencies(GrailsIvyDependencies.groovy:58)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| Error at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| Error at java.lang.reflect.Method.invoke(Method.java:597)
| Error at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
| Error at org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite$StaticMetaMethodSiteNoUnwrapNoCoerce.invoke(StaticMetaMethodSite.java:148)
| Error at org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.call(StaticMetaMethodSite.java:88)
| Error at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies$_createDeclaration_closure1_closure3.doCall(GrailsIvyDependencies.groovy:117)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies$_createDeclaration_closure1_closure3.doCall(GrailsIvyDependencies.groovy)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| Error at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| Error at java.lang.reflect.Method.invoke(Method.java:597)
| Error at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:272)
| Error at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:64)
| Error at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
| Error at org.codehaus.groovy.grails.resolve.config.DependencyConfigurationConfigurer.dependencies(DependencyConfigurationConfigurer.groovy:150)
| Error at org.codehaus.groovy.grails.resolve.config.DependencyConfigurationConfigurer$dependencies$1.call(Unknown Source)
| Error at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies$_createDeclaration_closure1.doCall(GrailsIvyDependencies.groovy:102)
| Error at org.codehaus.groovy.grails.resolve.GrailsIvyDependencies$_createDeclaration_closure1.doCall(GrailsIvyDependencies.groovy)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| Error at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| Error at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| Error at java.lang.reflect.Method.invoke(Method.java:597)
| Error at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
| Error at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
| Error at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1086)
| Error at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:1110)
| Error at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:910)
| Error at groovy.lang.Closure.call(Closure.java:411)
| Error at groovy.lang.Closure.call(Closure.java:405)
| Error at org.codehaus.groovy.grails.resolve.AbstractIvyDependencyManager.doParseDependencies(AbstractIvyDependencyManager.java:676)
| Error at org.codehaus.groovy.grails.resolve.AbstractIvyDependencyManager.parseDependencies(AbstractIvyDependencyManager.java:577)
| Error at org.codehaus.groovy.grails.resolve.DependencyManager$parseDependencies.call(Unknown Source)
| Error at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
| Error at org.codehaus.groovy.grails.resolve.DependencyManagerConfigurer.configureIvy(DependencyManagerConfigurer.groovy:157)
| Error at grails.util.BuildSettings.configureDependencyManager(BuildSettings.groovy:1281)
| Error at grails.util.BuildSettings.postLoadConfig(BuildSettings.groovy:1219)
| Error at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1075)
| Error at grails.util.BuildSettings.loadConfig(BuildSettings.groovy)
| Error at grails.util.BuildSettings$loadConfig$0.callCurrent(Unknown Source)
| Error at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
| Error at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:141)
| Error at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1053)
| Error at org.codehaus.groovy.grails.cli.GrailsScriptRunner.loadConfigEnvironment(GrailsScriptRunner.java:249)
| Error at org.codehaus.groovy.grails.cli.GrailsScriptRunner.main(GrailsScriptRunner.java:210)
| Error ... 6 more
I can run it fine from within IntelliJ. I know it's something with my environment configuration, but I haven't been able to figure out what yet. Does anyone have any ideas?
I'm running Java:
java version "1.6.0_65"
Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609)
Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode)
OSX: 10.8.4
Are you exporting JAVA_HOME? I know on Linux I have to every time I want to use the Grails command line.
Take a look here; maybe try updating to a JDK since it does say jre (not a Mac expert though). Also, a few things to try:
http://liberalsprouts.blogspot.co.uk/2012/12/how-to-install-jdk-7-and-set-up.html
echo $PATH
echo $JAVA_HOME
java -version
and
$JAVA_HOME/bin/java -version
See if it is all the same. I guess it will be; if you follow the instructions, maybe see how the latest JDK treats it.
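On OS X specifically, a common way to point JAVA_HOME at a particular JDK (a sketch; /usr/libexec/java_home ships with OS X, and the version argument is just an example) is:
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
export PATH="$JAVA_HOME/bin:$PATH"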
To figure out what your IntelliJ is using:
jps
3735
7588 GrailsStarter
7660 ForkedTomcatServer
7783 Jps
In my case it's GGTS. If I now run:
lsof -p 7588|grep -i java|grep jdk|head -n 3
java 7588 mx1 txt REG 8,1 38568 18745123 /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java
java 7588 mx1 mem REG 8,1 71084 12982357 /usr/lib/jvm/java-6-openjdk-i386/jre/lib/i386/libj2pkcs11.so
java 7588 mx1 mem REG 8,1 85518 18745490 /usr/lib/jvm/java-6-openjdk-common/jre/lib/jce.jar
That is lsof -p {process id}; I piped it into head to minimise the output, but it gives me an idea of what JDK it is using. Maybe you can trace what is going on, exports, etc., using this method.
