We are trying to set up Neo4j Causal Clustering using three EC2 instances that we created in our AWS account, each launched from the Neo4j Enterprise Causal Cluster AMI. We made the necessary configuration changes in the neo4j.template of each Neo4j EC2 instance to enable causal clustering. For example, the parameters causal_clustering.initial_discovery_members and causal_clustering.discovery_listen_address are set to the public IPs of the EC2 instances.
After making the necessary changes, the Neo4j server was started using the command ./etc/init.d/neo4j start. When we checked the status using systemctl status neo4j, it still shows neo4j_mode as SINGLE even though it was set to CORE. The Neo4j UI is also not accessible from the browser.
How can we enable Neo4j Causal Clustering using 3 EC2 instances with Neo4j AMIs? Is there a specific AMI that we should use with EC2 to enable Causal Clustering? Is there any documentation for this (EC2 Neo4j Causal Clustering)? What is the correct neo4j.conf configuration for enabling a Causal Cluster?
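For what it's worth, here is a minimal sketch of the settings that typically need to be present on each of the three instances, assuming a Neo4j 3.x Enterprise AMI (the addresses are placeholders; the security group also has to allow the discovery/transaction/raft ports 5000/6000/7000 between the instances, plus 7474/7687 for the browser and Bolt):

    # neo4j.conf (or neo4j.template on the AMI), repeated on each instance
    dbms.mode=CORE
    causal_clustering.minimum_core_cluster_size_at_formation=3
    causal_clustering.initial_discovery_members=10.0.1.10:5000,10.0.1.11:5000,10.0.1.12:5000
    causal_clustering.discovery_listen_address=0.0.0.0:5000
    causal_clustering.transaction_listen_address=0.0.0.0:6000
    causal_clustering.raft_listen_address=0.0.0.0:7000
    causal_clustering.discovery_advertised_address=<this-instance-address>:5000
    causal_clustering.transaction_advertised_address=<this-instance-address>:6000
    causal_clustering.raft_advertised_address=<this-instance-address>:7000
    dbms.connectors.default_listen_address=0.0.0.0
    dbms.connectors.default_advertised_address=<this-instance-address>

If dbms.mode is left at (or reverts to) SINGLE in the file that is actually read at startup, the server will keep reporting neo4j_mode as SINGLE.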
I have an application up and running in a set of Docker containers (deployed using docker stack, not Kubernetes). I want to do performance monitoring for this application. I am confused about whether I should go for Beats or Elastic Agent.
This page says:
When Elastic Agent runs inside of a container, it cannot be upgraded through Fleet as it expects that the container itself is upgraded.
This page says:
Standalone mode — All policies are applied to the Elastic Agent manually as a YAML file.
Q1. Does this mean that in standalone mode the Elastic Agent is not managed by Fleet at all? Or is some part of the management still handled by Fleet?
This page says:
Standalone Elastic Agents are manually configured and managed locally on the systems where they are installed. They are useful when you are not interested in centrally managing agents in Fleet, either due to your company’s security requirements, or because you prefer to use another configuration management system.
This page says:
To run an Elastic Agent in standalone mode, install the agent on each host you want to monitor and manually configure the agent locally on the system where it’s installed. You are responsible for managing and upgrading the agents.
Q2. Does this mean that for monitoring Docker containers (deployed using docker stack, not Kubernetes), there is no difference between Beats and Elastic Agent in terms of "central" management? The only difference would be that I have to configure the different Beats separately, which is avoided with Elastic Agent?
Q3. What is preferable in this case: Beats or Elastic Agent?
Standalone mode is not managed by Fleet. The "it cannot be upgraded through Fleet" statement refers to upgrading the actual version of the elastic-agent; you can still update the agent policies.
Yes, you would need to configure Beats via a config file on the container, whereas Elastic Agent can be set up with some environment variables to enroll it in a policy; that policy is then centrally managed and updated via Kibana.
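As an illustration of the environment-variable approach (this compose snippet is my own sketch; the image tag, Fleet Server URL and enrollment token are placeholders):

    version: "3.8"
    services:
      elastic-agent:
        image: docker.elastic.co/beats/elastic-agent:8.6.0
        environment:
          # Enroll this agent into a Fleet policy on startup
          - FLEET_ENROLL=1
          - FLEET_URL=https://<fleet-server-host>:8220
          - FLEET_ENROLLMENT_TOKEN=<enrollment-token>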
Both are valid, but Elastic Agent allows updating the policy from a central location after the container is running, and therefore would be my choice.
We have an elasticsearch cluster deployed to the Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the Datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> the datadog_metrics output
You can deploy the Datadog agent in a container / instance that you manage and then configure it according to these instructions to gather metrics from the remote Elasticsearch cluster hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You may find a sample configuration file here.
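A hedged example of what that conf.d/elastic.d/conf.yaml might look like (the URL and credentials are placeholders; cluster_stats and pshard_stats are optional extras):

    init_config:

    instances:
      - url: https://<your-deployment>.es.<region>.aws.found.io:9243
        username: <monitoring-user>
        password: <monitoring-password>
        cluster_stats: true
        pshard_stats: true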
As George Tseres mentioned above, the way I got this working was to set up collection on a separate instance (through Docker) and then configure it to read from the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.
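For reference, a local docker-compose test along those lines could look roughly like this (my own sketch, not taken from that repository; the API key and config path are placeholders):

    version: "3"
    services:
      datadog-agent:
        image: datadog/agent:7
        environment:
          - DD_API_KEY=<your-datadog-api-key>
          - DD_SITE=datadoghq.com
        volumes:
          # Mount the Elasticsearch check configuration into the agent container
          - ./elastic-conf.yaml:/etc/datadog-agent/conf.d/elastic.d/conf.yaml:ro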
I have created a graph database in ArangoDB on a 5-machine AWS cluster. I do not have enough space in the database AWS cluster to store the dump, so I would like to take a dump of the database on an AWS instance in a different cluster. I have the key files to connect to the machines. How can I do this using arangodump? Thanks.
Do I understand correctly that you're using DC/OS clusters on AWS?
The problem with arangoimp is that it doesn't know how to authenticate with the DC/OS proxy, and thus can't reach the routes it would require to import into ArangoDB.
The problem is similar to Running Arango Shell on DC/OS cluster - you want to use sshuttle, as lalitlogical describes, to forward the ArangoDB server port (usually 8529) to your target environment.
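A rough sketch of that workflow (the jump host, coordinator address, database name and network range are placeholders):

    # Route traffic to the cluster's private network through a reachable host
    sshuttle -r ec2-user@<jump-host> 10.0.0.0/16

    # Then run arangodump from the instance that has the free disk space
    arangodump \
      --server.endpoint tcp://<coordinator-address>:8529 \
      --server.username root \
      --server.database <your-database> \
      --output-directory /data/arango-dump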
If there are multiple read replicas, where can load-balancing-related settings be specified when using the Spring AWS libraries?
Read replicas have their own endpoint address, similar to the original RDS instance. Your application will need to take care of using all the replicas and switching between them. You'd need to introduce this algorithm into your application so it automatically detects which RDS instance it should connect to in turn. The following link can help:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html#Overview.ReadReplica
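If it helps, one common application-side approach (this is a generic Spring sketch of my own, not a feature of the Spring AWS libraries) is to route read-only transactions to a replica endpoint with an AbstractRoutingDataSource:

    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
    import org.springframework.transaction.support.TransactionSynchronizationManager;

    // Register this as the application DataSource and supply the master and replica
    // DataSources via setTargetDataSources(Map.of("master", ..., "replica", ...)).
    public class ReplicaRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            // Read-only transactions go to the replica endpoint, everything else to the master
            return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                    ? "replica"
                    : "master";
        }
    }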
We're planning to move our Tomcat/MySQL app onto the Amazon cloud employing 2 EC2 instances (inst_1 & inst_2) running in different availability zones whereby inst_1 will contain the master RDS db and inst_2 the slave RDS db.
If we employ elastic load balancing to balance traffic between the two instances, will traffic directed to inst_2 that includes insert/update/delete db transactions first update the master RDS db in inst_1 followed by a synchronous update of the slave in inst_2; thereby ensuring that the two RDS instances are always synchronized?
Amazon's published info (whitepapers) suggests such, but doesn't explicitly state it. If not, how does one ensure that the two RDS instances remain synchronized?
Additional note: We're planning to employ Amazon's Elastic Beanstalk. Thanks!
You have to take a few things into consideration:
AWS RDS instances are simply managed EC2 instances which run a MySQL server.
If you add a slave (I think Amazon calls them read replicas), it is a read-only slave.
Amazon doesn't manage the distribution of write queries to the master server automatically.
Replication will ensure that your read slave is always kept up to date automatically (with a minimal delay that increases with the write load on the master).
This behavior is MySQL-specific
This means that you have to delegate manipulating queries to the master exclusively.
This can be done either by your application or by a MySQL proxy running on an extra machine.
The proxy is then the only interface your application servers talk to. It is able to balance load between your RDS instances and direct any manipulation query to the master instance.
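To make the proxy idea concrete, one option (my own example; the original answer does not name a specific proxy) is ProxySQL with query rules that keep writes and locking reads on the master host group and send plain SELECTs to the replica:

    # proxysql.cnf fragment; endpoints are placeholders
    mysql_servers =
    (
        { address="<master-endpoint>", port=3306, hostgroup=10 },
        { address="<replica-endpoint>", port=3306, hostgroup=20 }
    )
    mysql_query_rules =
    (
        { rule_id=1, active=1, match_digest="^SELECT.*FOR UPDATE", destination_hostgroup=10, apply=1 },
        { rule_id=2, active=1, match_digest="^SELECT", destination_hostgroup=20, apply=1 }
    )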
When RDS is used in multi-AZ mode you have no access to the secondary instance. There is only ever one instance visible to you, so most of your question doesn't apply. In the case of a failover, the DNS name you are given will start resolving to a different IP. Amazon doesn't disclose how the two instances are kept in sync.
If instead of using a multi-AZ instance you use a single-AZ instance plus a replica, then it is up to you to direct queries appropriately - any attempt to alter data on the replica will fail. Since this is just standard MySQL replication, the replica can lag behind the master (in particular, with current versions of MySQL the replica only runs a single replication thread).