I'm new to ELK. We have a Spring Boot backend on a dedicated AWS instance, and the ELK stack on another instance (only Kibana is exposed to the outside world). Information is shipped to ELK via Amazon SQS.
This information includes logs and some business history about users (registration, other actions, etc.).
Given this setup, is it possible to query that information back by action and by user, and use it in the backend's responses?
I am guessing you want data present in Elasticsearch to be available to the Spring Boot application. It is definitely possible. You will need to open up the Elasticsearch port on the Elasticsearch machine specifically to the EC2 instance on which Spring Boot is running. How to open the port will depend on whether the instances are in the same VPC, different VPCs, different AWS accounts, etc. Once the port is open, you can either use Spring Data Elasticsearch or plain REST calls to access the Elasticsearch API.
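For the Spring Data Elasticsearch route, a minimal sketch could look like the following. The index name user-actions and the userId/action fields are assumptions for illustration; the real mapping depends on what your SQS pipeline indexes.

```java
// Hypothetical document mapping for the user-action data indexed via SQS.
// Index name and field names are assumptions for illustration only.
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import java.util.List;

@Document(indexName = "user-actions")
public class UserAction {
    @Id
    private String id;
    private String userId;
    private String action;   // e.g. "registration"
    // getters/setters omitted for brevity
}

// Spring Data derives the query from the method name, so the backend can
// fetch a user's history by action without writing Elasticsearch JSON by hand.
interface UserActionRepository extends ElasticsearchRepository<UserAction, String> {
    List<UserAction> findByUserIdAndAction(String userId, String action);
}
```

With a repository like this, a service method in the backend can pull a user's history by action; the Elasticsearch host and port you opened up are supplied through the application's client configuration.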
I deployed a Laravel application on AWS Elasticbeanstalk.
I want to incorporate caching with Redis as my cache driver, as well as Elasticsearch.
I managed to run both locally (Redis on port 6379 and Elasticsearch on 9200),
but now I want them to run on remote servers, so that I can simply specify their endpoints in my .env file.
Can anyone let me know how I can obtain remote URLs for Redis and Elasticsearch?
Update:
I found out that Heroku offers the ability to create a Redis instance, and thereby one can obtain a URL for Redis. I presume something similar exists for Elasticsearch.
If this is not the right way to do it, please let me know how it should work.
I don't understand the difference between Consul's agent API and catalog API.
Although the Consul documentation emphasizes that the agent and catalog should not be confused, there are many endpoints that look similar, such as:
/catalog/services
/agent/services
When should I use the catalog API and when the agent API (for HTTP endpoints like the ones above)?
Which one is suitable for high-frequency calls?
Consul is designed for services to be registered against a Consul client agent running on the same host where the service is deployed. The /v1/agent/service/ endpoints let you interact with services registered with the specific Consul agent you are communicating with, and register new services against that agent.
Each Consul agent in the data center submits its registered service information to the Consul servers. The servers aggregate this information to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog). The /v1/catalog/ endpoints return that aggregated information.
I want to call out this sentence from the anti-entropy doc.
Consul treats the state of the agent as authoritative; if there are any differences between the agent and catalog view, the agent-local view will always be used.
The catalog APIs can be used to register or remove services/nodes from the catalog, but normally these operations should be performed against the client agents (using the /v1/agent/ APIs) since they are authoritative for data in Consul.
The /v1/agent/ APIs should be used for high frequency calls, and should be issued against the local Consul client agent running on the same node as the app, as opposed to communicating directly with the servers.
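As a rough illustration of the difference, here is a sketch that assumes a Consul client agent running on the same host and listening on the default HTTP port 8500:

```java
// Minimal sketch: query the *local* Consul client agent rather than the servers.
// The agent endpoint returns only services registered with this agent and is
// cheap to call frequently; the catalog endpoint is forwarded to the servers
// and returns the aggregated, datacenter-wide view.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulAgentLookup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Agent-local, authoritative view of services registered with THIS agent
        HttpRequest agentServices = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8500/v1/agent/services")).build();

        // Aggregated service catalog assembled by the Consul servers
        HttpRequest catalogServices = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8500/v1/catalog/services")).build();

        System.out.println(client.send(agentServices, HttpResponse.BodyHandlers.ofString()).body());
        System.out.println(client.send(catalogServices, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```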
I am looking for advice/ideas on how to continuously deploy new features to a Spring Boot web application that is hosted on an AWS EC2 instance. My current workflow:
bootRepackage my application to create a war file.
Upload that file to AWS.
Add a new feature to my application.
bootRepackage again.
Remove the current war from AWS, and upload the new one.
This is obviously not a good workflow, as the application needs to be restarted, which could result in 1) downtime and 2) entries in the database being lost (if I were using Spring's default H2 database; I am not, I'm using a standalone SQL server, but I mention it to make the point), so I want to streamline it.
Is there any way to add a new feature to the current instance of the service on AWS? Is it possible to recompile the code "on the fly" to avoid the need to restart the application?
Is there any way of creating a better setup that would allow me to just merge a new branch into master locally and push it, keeping the same instance in prod but now with this new feature?
Thank you in advance!
Update: is this really the correct answer?
If you are using a single AWS instance and deploying the application to an EC2 instance, assign an Elastic IP to that EC2 instance.
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Deploy the new version of the application on another AWS EC2 instance.
When the application is ready, reassign the Elastic IP from the existing EC2 instance to the new EC2 instance.
Elastic IPs are the simplest way to implement the blue-green switch.
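As an illustration, the reassignment step could be scripted with the AWS SDK for Java v2. This is only a sketch; the allocation ID and instance ID below are placeholders.

```java
// Minimal sketch of the blue-green switch using the AWS SDK for Java v2.
// The allocation ID and the new instance ID are placeholders.
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AssociateAddressRequest;

public class ElasticIpSwitch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            ec2.associateAddress(AssociateAddressRequest.builder()
                    .allocationId("eipalloc-0123456789abcdef0")   // the Elastic IP's allocation ID
                    .instanceId("i-0123456789abcdef0")            // the freshly deployed "green" instance
                    .allowReassociation(true)                     // detach from the old instance if needed
                    .build());
        }
    }
}
```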
Our app is based on Java Spring Boot, and we run entirely on Google Cloud, where we have dynamic IPs and our server instances sit behind an elastic load balancer; instances may get spawned or killed based on resource consumption.
None of these server instances can be assumed to have a static IP.
I'm looking for a solution to connect the different server instances with dynamic IPs on Google Cloud.
Since 3.6, Hazelcast offers the Discovery SPI to integrate external discovery mechanisms into the system. As a result, there are many discovery plugins, and you can implement your own. See the list of your options here. Kubernetes might be helpful in your case.
Some additional info on top of what Sertug said:
There is also a Google Compute discovery plugin that might be helpful; you can check it out here:
https://github.com/hazelcast/hazelcast-gcp
Also, here's a blog post (a little old but still valid):
https://blog.hazelcast.com/hazelcast-discovery-spi
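For completeness, here is a minimal programmatic-config sketch using the aliased GCP join config. It assumes Hazelcast 3.12+ (or 4.x) with the hazelcast-gcp plugin on the classpath; the zone value is a placeholder.

```java
// Minimal sketch of GCP member discovery for Hazelcast members with dynamic IPs.
// Assumes the hazelcast-gcp plugin is on the classpath; the zone is a placeholder.
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class GcpDiscoveryExample {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);   // multicast won't work across GCP instances
        join.getGcpConfig()
            .setEnabled(true)
            .setProperty("zones", "us-central1-a");    // optionally restrict discovery to a zone
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```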
I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes).
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want to have access logs, you need either option 2 or 3, but all of the solutions above will allow you to secure your ES environment.