neo4j 2.1.8 HA in Amazon EC2

I have the following problem:
I want to set up three Neo4j servers in EC2. All Neo4j servers are on the same VPC network.
The configuration of each Neo4j server is as follows:
neo4j-Master:
conf/neo4j-server.properties
org.neo4j.server.webserver.address=110.0.0.5
org.neo4j.server.webserver.port=7474
org.neo4j.server.webserver.https.port=7484
org.neo4j.server.database.mode=HA
conf/neo4j.properties
ha.server_id=1
ha.initial_hosts=110.0.0.5:5001,110.0.1.5:5001,110.0.2.5:5001
ha.cluster_server=110.0.0.5:5001
ha.server=110.0.0.5:6001
neo4j-Slave-1:
conf/neo4j-server.properties
org.neo4j.server.webserver.address=110.0.1.5
org.neo4j.server.webserver.port=7475
org.neo4j.server.webserver.https.port=7485
org.neo4j.server.database.mode=HA
conf/neo4j.properties
ha.server_id=2
ha.initial_hosts=110.0.0.5:5001,110.0.1.5:5001,110.0.2.5:5001
ha.cluster_server=110.0.1.5:5001
ha.server=110.0.1.5:6001
neo4j-Slave-2:
conf/neo4j-server.properties
org.neo4j.server.webserver.address=110.0.2.5
org.neo4j.server.webserver.port=7476
org.neo4j.server.webserver.https.port=7486
org.neo4j.server.database.mode=HA
conf/neo4j.properties
ha.server_id=3
ha.initial_hosts=110.0.0.5:5001,110.0.1.5:5001,110.0.2.5:5001
ha.cluster_server=110.0.2.5:5001
ha.server=110.0.2.5:6001
After starting the Neo4j server (master), the following message is logged:
2015-09-21 12:02:45.964+0000 INFO [Cluster] Write transactions to database disabled
Where is the problem?
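The "Write transactions to database disabled" line is usually logged while an instance has not yet joined an available cluster and found a master. As a first check, the HA status endpoints that Neo4j 2.x exposes on each instance can show who has actually joined; a small sketch using the addresses and web ports from the configuration above:
# Returns "master" or "slave" once the instance has joined an available cluster
curl -s http://110.0.0.5:7474/db/manage/server/ha/available
curl -s http://110.0.1.5:7475/db/manage/server/ha/available
curl -s http://110.0.2.5:7476/db/manage/server/ha/available
# Returns true/false depending on whether this instance is currently the master
curl -s http://110.0.0.5:7474/db/manage/server/ha/master
If these stay unavailable, a common culprit in EC2 is a security group that does not allow the cluster and HA ports (5001 and 6001 here) between the three instances.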

Related

RedisCommandTimeoutException when connecting a Micronaut Lambda to ElastiCache

I am trying to create a Lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration; in-transit encryption is enabled in the ElastiCache configuration.
redis:
uri: redis://{aws master node endpoint}
password: {password}
tls: true
ssl: true
io-thread-pool-size: 5
computation-thread-pool-size: 4
I am getting the exception below:
Command timed out after 1 minute(s): io.lettuce.core.RedisCommandTimeoutException
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
at com.sun.proxy.$Proxy22.set(Unknown Source)
at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried a Spring Cloud function on the same network (literally on the same Lambda) with the same ElastiCache setup, and it works fine.
Any direction that can help me debug this issue would be appreciated.
This might be late.
The first thing to mention here is that ElastiCache can only be accessed from within its VPC. If you want to access it from the internet, it needs a NAT gateway (NAT GW).
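To rule out basic reachability, a quick check from an instance inside the same VPC, subnets, and security groups that the Lambda uses can help; a sketch with a placeholder endpoint:
# TLS handshake against the configured ElastiCache endpoint (placeholder hostname)
openssl s_client -connect my-cluster.xxxxxx.use1.cache.amazonaws.com:6379 </dev/null
# With a redis-cli built with TLS support (6.x+), authenticate and ping
redis-cli --tls -h my-cluster.xxxxxx.use1.cache.amazonaws.com -p 6379 -a 'password' ping
If the Lambda itself is not attached to the VPC, or its security group cannot reach the cache's security group on port 6379, Lettuce commands time out exactly like the trace above.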

kafka.common.KafkaException: Failed to parse the broker info from zookeeper from EC2 to elastic search

I have AWS MSK set up and I am trying to sink records from MSK to Elasticsearch.
I am able to push data into MSK in JSON format, and I want to sink it to Elasticsearch.
I believe the setup itself is correct.
This is what I have done on the EC2 instance:
wget /usr/local http://packages.confluent.io/archive/3.1/confluent-oss-3.1.2-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-oss-3.1.2-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-3.1.2 /usr/local/confluent
/usr/local/confluent/etc/kafka-connect-elasticsearch
After that I modified the kafka-connect-elasticsearch properties and set my Elasticsearch URL:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=AWSKafkaTutorialTopic
key.ignore=true
connection.url=https://search-abcdefg-risdfgdfgk-es-ex675zav7k6mmmqodfgdxxipg5cfsi.us-east-1.es.amazonaws.com
type.name=kafka-connect
The producer sends messages in the format below:
{
  "data": {
    "RequestID": 517082653,
    "ContentTypeID": 9,
    "OrgID": 16145,
    "UserID": 4,
    "PromotionStartDateTime": "2019-12-14T16:06:21Z",
    "PromotionEndDateTime": "2019-12-14T16:16:04Z",
    "SystemStartDatetime": "2019-12-14T16:17:45.507000000Z"
  },
  "metadata": {
    "timestamp": "2019-12-29T10:37:31.502042Z",
    "record-type": "data",
    "operation": "insert",
    "partition-key-type": "schema-table",
    "schema-name": "dbo",
    "table-name": "TRFSDIQueue"
  }
}
I am a little confused: does Kafka Connect need to be started here, and if so, how do I start it?
I also started the Schema Registry as below, which gave me an error.
/usr/local/confluent/bin/schema-registry-start /usr/local/confluent/etc/schema-registry/schema-registry.properties
When I do that, I get the error below:
[2019-12-29 13:49:17,861] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"CLIENT":"PLAINTEXT","CLIENT_SECURE":"SSL","REPLICATION":"PLAINTEXT","REPLICATION_SECURE":"SSL"},"endpoints":["CLIENT:/
Please help.
As suggested in the answer, I upgraded the Kafka Connect version, but then I started getting the error below:
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:63)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:168)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:161)
... 7 more
First, Confluent Platform 3.1.2 is fairly old. I suggest you get the version that aligns with your Kafka version.
You start Kafka Connect using the appropriate connect-* scripts and properties located under the bin and etc/kafka folders.
For example,
/usr/local/confluent/bin/connect-standalone \
/usr/local/confluent/etc/kafka/kafka-connect-standalone.properties \
/usr/local/confluent/etc/kafka-connect-elasticsearch/quickstart.properties
If that works, you can move on to using the connect-distributed command instead.
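For the worker side, a minimal connect-standalone.properties sketch; the broker addresses below are placeholders, and the SSL setting assumes the MSK cluster only exposes TLS listeners (port 9094):
# /usr/local/confluent/etc/kafka/connect-standalone.properties (sketch, placeholder brokers)
bootstrap.servers=b-1.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9094,b-2.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9094
security.protocol=SSL
# The sample records are plain JSON without a schema
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
# Depending on the Connect version, producer./consumer. prefixed security overrides may also be needed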
Regarding the Schema Registry, you can search its GitHub issues for multiple people trying to get MSK to work, but the root issue is that MSK does not expose a PLAINTEXT listener and the Schema Registry did not support named listeners. (This may have changed since version 5.x.)
You could also try running Connect and Schema Registry containers in ECS / EKS rather than extracting tarballs on an EC2 machine.
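If you do try a newer Schema Registry against MSK, the settings that matter are the ones pointing it at the brokers directly instead of ZooKeeper; a rough sketch with placeholder broker addresses:
# /usr/local/confluent/etc/schema-registry/schema-registry.properties (sketch)
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=SSL://b-1.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9094,SSL://b-2.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9094
kafkastore.security.protocol=SSL
kafkastore.topic=_schemas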

Unable to connect to all storage backends successfully with PredictionIO and Elasticsearch on Localhost

I'm trying to set up PredictionIO locally following these instructions.
Unfortunately I'm unable to get it to work. When I try to install a template or run "pio status", I get an error saying that PredictionIO is unable to connect to Elasticsearch:
[ERROR] [Console$] Unable to connect to all storage backends successfully. The following shows the error message from the storage backend.
[ERROR] [Console$] None of the configured nodes are available: [] (org.elasticsearch.client.transport.NoNodeAvailableException)
[ERROR] [Console$] Dumping configuration of initialized storage backend sources. Please make sure they are correct.
[ERROR] [Console$] Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /Users/tomasz/Documents/PredictionIO/apache-predictionio-0.10.0-incubating/PredictionIO-0.10.0-incubating/vendors/elasticsearch-1.4.4, HOSTS -> localhost, PORTS -> 9200, TYPE -> elasticsearch
My pio-env.sh file is set up like so:
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.4.4
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_LOCALFS_PATH=$PIO_FS_BASEDIR/models
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=$PIO_HOME/vendors/hbase-1.0.0
Any help would be appreciated.
I encountered this when I was using Elasticsearch 5.4.x with PredictionIO 0.11.0-incubating. It worked when I used Elasticsearch 1.7.x.
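It also helps to confirm which Elasticsearch version is actually answering on the configured host and port, since the storage backend bundled with each PredictionIO release is version-sensitive; a quick check using the values from pio-env.sh above:
# The root endpoint reports the running Elasticsearch version
curl http://localhost:9200
# Cluster health on the same HTTP port
curl http://localhost:9200/_cluster/health?pretty
Note that the Elasticsearch storage driver in these PredictionIO releases uses the transport client, which typically talks to port 9300 rather than the HTTP port 9200, so that port also needs to be reachable.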

TCP/IP error while trying to load data to Elasticsearch from SQL Server using Logstash

I am new to Logstash. I am trying to load data from SQL Server into Elasticsearch using Logstash, and while doing so I get the error below:
C:\logstash-5.0.2>bin\logstash -f C:\logstash-5.0.2\bin\Crash_Data.conf
Using JAVA_HOME=C:\Program Files\Java\jre8 retrieved from C:\ProgramData\Oracle\java\javapath\java.exe
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Could not find log4j2 configuration at path /logstash-5.0.2/config/log4j2.properties. Using default config which logs to console
15:33:47.206 [[main]-pipeline-manager] ERROR logstash.agent - Pipeline aborted due to error {:exception=>#<Sequel::DatabaseConnectionError: Java::ComMicrosoftSqlserverJdbc::SQLServerException: The TCP/IP connection to the host Device_Crash_Reporting, port 1433 has failed. . Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".>, :backtrace=>[
"com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(com/microsoft/sqlserver/jdbc/SQLServerException.java:191)",
"com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(com/microsoft/sqlserver/jdbc/SQLServerException.java:242)",
"com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(com/microsoft/sqlserver/jdbc/IOBuffer.java:2369)",
"com.microsoft.sqlserver.jdbc.TDSChannel.open(com/microsoft/sqlserver/jdbc/IOBuffer.java:551)",
"com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(com/microsoft/sqlserver/jdbc/SQLServerConnection.java:1962)",
"com.microsoft.sqlserver.jdbc.SQLServerConnection.login(com/microsoft/sqlserver/jdbc/SQLServerConnection.java:1627)",
"com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(com/microsoft/sqlserver/jdbc/SQLServerConnection.java:1458)",
"com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(com/microsoft/sqlserver/jdbc/SQLServerConnection.java:772)",
"com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(com/microsoft/sqlserver/jdbc/SQLServerDriver.java:1168)",
"RUBY.connect(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/adapters/jdbc.rb:222)",
"RUBY.make_new(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool.rb:116)",
"RUBY.make_new(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:228)",
"RUBY.available(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:201)",
"RUBY._acquire(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:137)",
"RUBY.acquire(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:151)",
"RUBY.sync(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:282)",
"org.jruby.ext.thread.Mutex.synchronize(org/jruby/ext/thread/Mutex.java:149)",
"RUBY.sync(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:282)",
"RUBY.acquire(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:150)",
"RUBY.hold(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/connection_pool/threaded.rb:106)",
"RUBY.synchronize(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/database/connecting.rb:285)",
"RUBY.test_connection(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/sequel-4.40.0/lib/sequel/database/connecting.rb:295)",
"RUBY.prepare_jdbc_connection(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:171)",
"RUBY.register(C:/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/inputs/jdbc.rb:191)",
"RUBY.start_inputs(C:/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:319)",
"org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)",
"RUBY.start_inputs(C:/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:318)",
"RUBY.start_workers(C:/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:195)",
"RUBY.run(C:/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:153)",
"RUBY.start_pipeline(C:/logstash-5.0.2/logstash-core/lib/logstash/agent.rb:250)"]}
15:33:47.721 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
15:33:50.261 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
C:\logstash-5.0.2>
I have checked the port to SQL Server and everything, but still no luck. Can someone please help me with this problem?
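For reference, a minimal logstash-input-jdbc sketch for SQL Server; the hostname, database name, table, driver path, and credentials below are placeholders, and it assumes SQL Server is listening for TCP connections on port 1433 (the host goes into the JDBC URL, while the database is passed as databaseName):
input {
  jdbc {
    # Placeholder driver path and credentials; adjust for your environment
    jdbc_driver_library => "C:/drivers/sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://MY_SQL_HOST:1433;databaseName=MyDatabase"
    jdbc_user => "my_user"
    jdbc_password => "my_password"
    statement => "SELECT * FROM MyTable"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "crash-data"
  }
}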

How to reduce nodes using Hbase-Indexer-mr

I have a question about how to reduce the number of nodes used when running hbase-indexer-mr-*-job.jar.
I am using Cloudera Manager 5.2.
Basically, I have 9 servers: cys-master, cys-slave1, cys-slave2, cys-slave3, cys-slave4, cys-slave5, cys-slave6, cys-slave7, cys-slave8.
The HBase tables are stored across these 9 servers.
However, I only installed Solr on cys-master, cys-slave1, cys-slave2, cys-slave3, and cys-slave4.
On cys-master, I ran:
hadoop jar hbase-indexer-mr-*-job.jar \
  --hbase-indexer-zk localhost \
  --hbase-indexer-name myindexer \
  --reducers 0
There are about 15 map tasks in Hadoop.
cys-slave1, cys-slave2, cys-slave3, and cys-slave4 successfully upload index data to Solr, but cys-slave5, cys-slave6, cys-slave7, and cys-slave8 get "Connection refused" errors, and the whole job eventually fails.
My question is: How could I exclude cys-slave5, cys-slave6, cys-slave7 and cys-slave8?
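One way to confirm where the "Connection refused" comes from is to check which hosts actually answer on the Solr port; a sketch assuming the default Solr port 8983 used by Cloudera Search:
# Hosts running Solr should return a core status; the others should refuse the connection
for h in cys-master cys-slave1 cys-slave2 cys-slave3 cys-slave4 cys-slave5 cys-slave6 cys-slave7 cys-slave8; do
  echo "== $h =="
  curl -s --max-time 5 "http://$h:8983/solr/admin/cores?action=STATUS&wt=json" || echo "no Solr answering on $h"
done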
