Does Consul save the health check history of a service?

I am using HashiCorp Consul for health checks in our system.
Does Consul have any API to get the health check history?
For example, let's say I have a service called A, and this service was down for the past hour and is now up. What I need is a way to get this information (that the service was down in the past hour).
Thanks for helping.

No, the history information is not available from the Consul server itself. As you can see from its /health API, there is no way to get the health status at a specific point in time.
If you need that functionality, you will have to either record it periodically in an audit table, or log it to something like Splunk and pull the history from there.
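As a minimal sketch of the first option (assuming a local agent on port 8500, jq installed, and a hypothetical service name and log path), you could poll Consul's /v1/health/service endpoint on a schedule and append each result to a file, building your own history:

#!/usr/bin/env bash
# Poll Consul every 60 seconds and append a timestamped health record
# for service "A" (hypothetical name) to a local history file.
SERVICE="A"
HISTORY_FILE="/var/log/consul-health-history.log"
while true; do
  STATUS=$(curl -s "http://127.0.0.1:8500/v1/health/service/${SERVICE}" \
    | jq -r '.[].Checks[].Status' | sort -u | paste -sd, -)
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) service=${SERVICE} status=${STATUS}" >> "${HISTORY_FILE}"
  sleep 60
done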

Related

How to check from where job arrived in HashiCorp Nomad?

I wonder whether there is any way to find out how Nomad received a specific job. As far as I can tell, the logs contain information about the job's submit time only; the IP from which the job arrived and the submit method (API, GUI) are not recorded. Is there any way to find this information?
Although I haven't tried it myself yet, what you refer to falls under Nomad's audit logging feature, whose payload is somewhat similar to Vault's audit logs. Here is how audit logging can be set up as part of the Nomad server configuration; note, however, that at the moment this is available only in the Enterprise version of Nomad.
Anyhow, looking at the docs, I guess the fields you would be interested in are the following (see the jq sketch after this list):
.payload.auth.stage
.payload.auth.accessor_id
.payload.auth.name
.payload.request.operation
.payload.request.endpoint
.payload.request.request_meta.remote_address
.payload.request.request_meta.user_agent
.payload.response.status_code
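As a hedged sketch, assuming the audit log is written as JSON lines to a hypothetical path such as /var/log/nomad/audit.log, you could pull those fields out with jq:

# Print the operation, endpoint, remote address, and user agent of each
# request in the audit log (the log path is an assumption).
jq -r 'select(.payload.request != null)
  | [.payload.request.operation,
     .payload.request.endpoint,
     .payload.request.request_meta.remote_address,
     .payload.request.request_meta.user_agent]
  | @tsv' /var/log/nomad/audit.log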

Two Kubernetes pods accessing the same database

My Spring Boot application is scheduled to run at 1 UTC each day to do some data collection and put the results in the database. We are using Kubernetes, and we have two pods accessing the same database. The database is at some other location, reached through a connection string that is the same in both pods.
The problem is that both of my pods wake up at 1 UTC and add duplicate entries to the database. How can I ensure that only one pod talks to the database? Is this application not ideal for a k8s deployment?
I know this is old, but for anybody else, look into ShedLock. It handles locking across distributed nodes and is pretty easy to implement.

Elastic Uptime Monitors using Heartbeat --Few Monitors are missing in kibana

I have the ELK stack set up on an EC2 server, with Beats such as Metricbeat, Filebeat, and Heartbeat.
I have set up Elastic APM for some applications like Jenkins and SonarQube.
Now in Uptime I can see only a few monitors, such as SonarQube and Jenkins;
the other applications are missing.
When I look at yesterday's data, it is not available in Elasticsearch for those particular applications.
The best way to troubleshoot what is going on is to check whether the events from Heartbeat are being collected. The Uptime application only displays events from Heartbeat, so this is the Beat that you need to check.
First, check the connectivity between Heartbeat and the configured output:
heartbeat test output
Secondly, check whether the events are being generated. You can do this by commenting out your existing output (likely Elasticsearch/Elastic Cloud) and enabling either the Console output or the File output. Then start your Heartbeat and check whether events are being generated. If they are, then the problem might be on the backend side of things; maybe Elasticsearch is rejecting the documents sent and refusing to index them.
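As a quick sketch of that second check (the config path is an assumption; the -E override flags are standard across Beats), you can switch outputs from the command line instead of editing heartbeat.yml:

# Run Heartbeat in the foreground with the Elasticsearch output disabled
# and the Console output enabled, to see whether events are produced.
heartbeat -e -c /etc/heartbeat/heartbeat.yml \
  -E output.elasticsearch.enabled=false \
  -E output.console.enabled=true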
Incidentally, Elastic is implementing a native Jenkins plugin that allows you to observe your CI pipelines using OpenTelemetry-compatible backends such as Elastic APM. You can learn more about this plugin here.

AWS Aurora - adding error logs to Cloudwatch? (errorlog)

I know you can export Aurora's audit logs to CloudWatch, but I don't see any way to do that for the error logs/errorlog. That would enable us to set up a CloudWatch alarm for failed scheduled events, etc. Am I just missing it?
It looks like they support it for MySQL RDS, but not for Aurora.
I already have code that will query the error log files, but being able to set up an alert would make more sense (plus I wouldn't have to scan the logs every x minutes).
The AWS documentation does not provide any info about this on the English page, but it does on the French page.
Please see the link here.
You can then translate it with Chrome.
Publishing Aurora MySQL logs to CloudWatch Logs with the AWS Management Console
You can publish Aurora MySQL logs to CloudWatch Logs using the console. To publish Aurora MySQL logs from the console:
Open the Amazon RDS console at https://console.aws.amazon.com/rds/
In the navigation pane, choose Clusters.
Choose the Aurora MySQL DB cluster whose log data you want to publish.
For Actions, choose Modify cluster.
In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
Choose Continue, and then choose Modify DB Cluster on the summary page.
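If you prefer the CLI, here is a minimal sketch (the cluster identifier is a placeholder) that enables error log export with aws rds modify-db-cluster:

# Start exporting the error log of an Aurora MySQL cluster to
# CloudWatch Logs ("my-aurora-cluster" is a placeholder identifier).
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error"]}'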

Check whether Hazelcast cluster is ready for requests via bash

I'm using Docker with Hazelcast 3.6.
I launch Hazelcast instances and want to start my application only after the cluster is ready for requests.
I've read in the documentation that it's possible to access Hazelcast via curl, but it doesn't work for me.
I'm getting (52) Empty reply from server when trying to POST to a Hazelcast instance that is started and ready.
Is there a method to check whether Hazelcast is ready?
E.g., for Cassandra I run
wget --spider 0.0.0.0:9042
For RabbitMQ, this one works great: netcat -z -w 2 rabbit 5672. Is there a similar solution for Hazelcast?
You can use the REST API for cluster management, which is described in the documentation.
Another hint: you can use the cluster.sh script in the bin directory to interact with the management endpoints.
You're trying to hit the REST client API (for accessing maps and queues), which is disabled by default.
Let me know if you have any questions.
Thank you
(Posted on behalf of the OP).
Caution: Don't forget to hide the group name and password when using a public CI and/or code repository, if you don't want everyone to see this data.
First of all, I had to add a cluster group in the configuration; see Creating Cluster Groups in the documentation.
Then use curl --data "${GROUPNAME}&${PASSWORD}" http://${ADDRESS}:${PORT}/hazelcast/rest/management/cluster/state to wait until the cluster is ready.
E.g., in my case, when Hazelcast listens on 0.0.0.0:5701 with group name app1 and password app1-pass, it looks as follows:
curl --data "app1&app1-pass" \
http://0.0.0.0:5701/hazelcast/rest/management/cluster/state
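To actually block until the cluster reports itself active, a small polling loop can wrap that call (a sketch; the retry count and interval are assumptions, and the credentials are the example values above):

# Poll the management endpoint until the cluster state is "active",
# giving up after 60 attempts.
for i in $(seq 1 60); do
  STATE=$(curl -s --data "app1&app1-pass" \
    http://0.0.0.0:5701/hazelcast/rest/management/cluster/state)
  echo "${STATE}" | grep -q '"state":"active"' && break
  sleep 2
done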
Note
As I understand it, this does not show whether all nodes are ready, so I need to check them separately.
When I set hazelcast.initial.min.cluster.size to 2, the main node shows the message HazelcastInstance waiting for cluster size of 2, and curl returns {"status":"success","state":"active"}.
