Get failed service name inside Consul watch handler - consul

I'm using Consul to monitor service health status. I use the consul watch command to fire a handler when a service fails. Currently I'm using this command:
consul watch -type=checks -state=passing /home/consul/health.sh
This works; however, I'd like to know inside health.sh the name of the failed service, so I can send a proper alert message containing the failed service name. How can I get the failed service name there?

Your script can get all the required information by reading from stdin; it is sent as JSON. You can easily examine those events by simply putting cat - | jq . into your handler.

The check information output by consul watch -type=checks contains a ServiceName field that holds the name of the service the check is associated with.
[
  {
    "Node": "foobar",
    "CheckID": "service:redis",
    "Name": "Service 'redis' check",
    "Status": "passing",
    "Notes": "",
    "Output": "",
    "ServiceID": "redis",
    "ServiceName": "redis"
  }
]
(See https://www.consul.io/docs/dynamic-app-config/watches#checks for official docs.)
Checks associated with services should have values in both the ServiceID and ServiceName fields. These fields will be empty for node-level checks.
The following command watches for changes in health checks and outputs the name of a service when its check transitions to a state other than "passing" (i.e., warning or critical).
$ consul watch -type=checks -state=passing "jq --raw-output '.[] | select(.ServiceName!=\"\" and .Status!=\"passing\") | .ServiceName'"
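If you prefer a standalone handler script over the inline jq command, a minimal health.sh along these lines would do the same filtering; this is a sketch, and the alert command is a placeholder to replace with your real alerting mechanism:
#!/usr/bin/env bash
# Consul pipes the full list of checks to the handler's stdin as JSON.
# Keep only service checks that are not passing and alert on each one.
jq --raw-output '.[] | select(.ServiceName != "" and .Status != "passing") | .ServiceName' |
while read -r service; do
    echo "ALERT: service ${service} is failing"   # placeholder: send your real alert here
done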

Related

How to check consul service health if there is space in the name and tag both

I want to fetch service health information from Consul. How can I query a service with curl when there is a space in both its name and its tag?
One more question: curl --get http://127.0.0.1:8500/v1/health/checks/$service checks the service-level checks, but I want to know whether a node check is failing for a service. How can I do that?
curl --get http://127.0.0.1:8500/v1/health/checks/$service --data-urlencode 'filter=Status == "critical"'
Here, if the service name and tag are both "ldisk gd", this command throws:
curl: (6) Could not resolve host: gd; Name or service not known
I can't pass the name within quotes either; that way I get a Bad Request error.
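Since the service name goes into the URL path, one approach (a sketch, not a verified fix for this exact setup) is to percent-encode the space before making the request:
# Hypothetical service name containing a space; %20 is the URL encoding of a space.
service='ldisk gd'
curl --get "http://127.0.0.1:8500/v1/health/checks/${service// /%20}" \
     --data-urlencode 'filter=Status == "critical"'
The original command fails because the unencoded space splits the URL into two shell arguments, so curl treats "gd" as a second URL and tries to resolve it as a host; encoding the space keeps the path intact.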

Producer Avro data from Windows with Docker

I'm following the How to transform a stream of events tutorial.
Everything works fine until I reach the step under the heading Produce events to the input topic:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic raw-movies --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/input_movie_event.avsc)"
I'm getting:
<: The term '<' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or
if a path was included, verify that the path is correct and try again.
What would be the proper way of passing the Avro schema file in --property value.schema?
All Confluent Kafka servers are running fine.
Schema registry is empty at this point:
PS C:\Users\Joe> curl -X GET http://localhost:8081/subjects
[]
How can I register an Avro file in the Schema Registry manually from the CLI? I'm not finding options for that in the Schema Registry API.
My thinking was: if I register the schema manually, then I would be able to call it this way.
EDIT 1
Tried entering the Avro file path as a variable in PowerShell, like:
$avroPath = 'D:\ConfluentKafkaDocker\kafkaStreamsDemoProject\src\main\avro\input_movie_event.avsc'
And then executing:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic raw-movies --bootstrap-server broker:9092 --property value.schema=$avroPath
But that didn't work.
EDIT 2
Managed to get it working with:
$avroPath = 'D:\ConfluentKafkaDocker\kafkaStreamsDemoProject\src\main\avro\input_movie_event.avsc'
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic raw-movies --bootstrap-server broker:9092 --property value.schema.file=$avroPath
But now I'm getting:
org.apache.kafka.common.config.ConfigException: Error reading schema from D:\ConfluentKafkaDocker\kafkaStreamsDemoProject\src\main\avro\input_movie_event.avsc
    at io.confluent.kafka.formatter.SchemaMessageReader.getSchemaString(SchemaMessageReader.java:260)
    at io.confluent.kafka.formatter.SchemaMessageReader.getSchema(SchemaMessageReader.java:222)
    at io.confluent.kafka.formatter.SchemaMessageReader.init(SchemaMessageReader.java:153)
    at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:43)
    at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
input_movie_event.avsc:
{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "RawMovie",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "title", "type": "string"},
    {"name": "genre", "type": "string"}
  ]
}
It's copy-pasted from the example, so I see no reason why it would be incorrectly formatted.
EDIT 3
Tried with forward slashes, since PowerShell now accepts them:
value.schema.file=src/main/avro/input_movie_event.avsc
and then with backslashes:
value.schema.file=src\main\avro\input_movie_event.avsc
I'm getting the same error as in Edit 2, so it looks like the value.schema.file flag is not working properly.
EDIT 4
Tried with value.schema="$(cat src/main/avro/input_movie_event.avsc)" as suggested here:
The error I'm getting now is:
[2022-04-05 10:17:24,135] ERROR Could not parse Avro schema (io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider)
org.apache.avro.SchemaParseException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('n' (code 110)): was expecting double-quote to start field name
 at [Source: (String)"{ namespace: io.confluent.developer.avro, type: record, name: RawMovie, fields: [ {name: id, type: long}, {name: title, type: string}, {name: genre, type: string} ] }"; line: 1, column: 6]
    at org.apache.avro.Schema$Parser.parse(Schema.java:1427)
    at org.apache.avro.Schema$Parser.parse(Schema.java:1413)
    at io.confluent.kafka.schemaregistry.avro.AvroSchema.<init>(AvroSchema.java:70)
    at io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider.parseSchema(AvroSchemaProvider.java:54)
    at io.confluent.kafka.schemaregistry.SchemaProvider.parseSchema(SchemaProvider.java:63)
    at io.confluent.kafka.formatter.SchemaMessageReader.parseSchema(SchemaMessageReader.java:212)
    at io.confluent.kafka.formatter.SchemaMessageReader.getSchema(SchemaMessageReader.java:224)
    at io.confluent.kafka.formatter.SchemaMessageReader.init(SchemaMessageReader.java:153)
    at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:43)
    at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
The error says it was expecting a double-quote to start a field name, and shows name: id, while in the file I have:
"fields": [
{"name": "id", "type": "long"},
{"name": "title", "type": "string"},
{"name": "genre", "type": "string"}
]
Why is it parsed as if the double-quotes are not there, when in the file they actually are?
EDIT 6
Tried with value.schema="$(type src/main/avro/input_movie_event.avsc)", since type is the Windows equivalent of cat; got the same error as in Edit 5.
Tried with Get-Content as suggested here; same error.
How can I register an Avro file in the Schema Registry manually from the CLI?
You would not use a Producer, or Docker.
You can use Postman to send a POST request (or the PowerShell equivalent of curl) to the /subjects endpoint, as the Schema Registry API documentation describes for registering schemas.
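For example, a minimal sketch using curl and jq from a UNIX-style shell (assuming a local registry on port 8081 and the conventional raw-movies-value subject name for the topic's value schema):
# Wrap the raw .avsc contents into the {"schema": "..."} envelope the registry expects,
# then POST it to the subject's versions endpoint (the subject name is an assumption).
jq -Rs '{schema: .}' src/main/avro/input_movie_event.avsc |
curl -s -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
     --data-binary @- http://localhost:8081/subjects/raw-movies-value/versions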
After that, using value.schema.id, as linked, will work.
Or, if you don't want to install anything else, I'd stick with value.schema.file. That being said, you must start the container with this file (or the whole src\main\avro folder) mounted as a Docker volume, and then reference it by its container path, not a Windows path, when you use it as part of a docker exec command. My linked answer referring to the cat usage assumes your files are on the same filesystem.
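For instance, a sketch assuming the schema folder is mounted into the container at /schemas (the mount point and volume entry are illustrative, not from the tutorial):
# Assuming the schema-registry service in the compose file mounts the folder, e.g.
#   volumes:
#     - ./src/main/avro:/schemas
# the producer can then reference the path as the container sees it:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer \
  --topic raw-movies --bootstrap-server broker:9092 \
  --property value.schema.file=/schemas/input_movie_event.avsc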
Otherwise, the exec command is interpreted by PowerShell first, so input redirection won't work for value.schema; type would be the correct CMD command, but the $() syntax might not be, as that's for UNIX shells.
Related - PowerShell: Store Entire Text File Contents in Variable

How consul constructs SRV record

Let's say I registered a service in Consul, so that I can query it with something like:
curl http://localhost:8500/v1/catalog/service/BookStore.US
and it returns
[
  {
    "ID": "xxxxx-xxx-...",
    "ServiceName": "BookStore.US",
    ...
  }
]
Using Consul directly in my code is fine. The problem is that when I want to use the SRV record directly, it does not work.
Normally, Consul creates a service record with the name service_name.service.consul; in the above case, it is "BookStore.US.service.consul", so you can use the dig command to look it up:
dig @127.0.0.1 -p 8600 BookStore.US.service.consul SRV
But when I tried to dig it, it failed with zero answers in the answer section.
My question:
How does Consul construct the service/SRV name? Does it pick up some fields from the registered service record and concatenate them?
Is there any way for me to search the SRV records with wildcards, so that I can at least find the SRV name using the keyword "BookStore"?
The SRV lookup is not working because Consul is interpreting the . in the service name as domain separator in the hostname.
Per https://www.consul.io/docs/discovery/dns#standard-lookup, service lookups in Consul can use the following format.
[tag.]<service>.service[.datacenter].<domain>
The tag and datacenter components are optional. The other components must be specified. Given the name BookStore.US.service.consul, Consul interprets the components to be:
Tag: BookStore
Service: US
Sub-domain: service
TLD: consul
Since you do not have a service registered by the name US, the DNS server correctly responds with zero records.
In order to resolve this, you can do one of two things.
1. Register the service with a different name, such as bookstore-us.
{
  "Name": "bookstore-us",
  "Port": 1234
}
2. Specify the US location as a tag in the service registration.
{
  "Name": "bookstore",
  "Tags": ["us"],
  "Port": 1234
}
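Either definition can then be registered with the local agent's service registration endpoint; the bookstore.json filename here is illustrative:
$ curl --request PUT --data @bookstore.json http://127.0.0.1:8500/v1/agent/service/register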
Note that in either case, the service name should be a valid DNS label. That is, it may contain only the ASCII letters a through z (in a case-insensitive manner), the digits 0 through 9, and the hyphen-minus character ('-').
The SRV query should then successfully return a result for the service lookup.
# Period in hostname changed to a hyphen
$ dig -t SRV bookstore-us.service.consul +short
# If `US` is a tag:
# Standard lookup
$ dig -t SRV us.bookstore.service.consul +short
# RFC 2782-style lookup
$ dig -t SRV _bookstore._us.service.consul +short

How to set Consul Alias Service

I currently have 2 services running on a single node (an EC2 instance) with a Consul client. I would like to health-check both of them by hitting a single endpoint, namely http://localhost:8500/v1/agent/health/service/id/AliasService, based on the information Consul provides at https://www.consul.io/api/agent/service.html.
The issue is that I can't seem to find any documentation about this AliasService beyond the fact that it can be used to run health checks. I've tried putting it into my service definitions, but to no avail; it just seems to be ignored altogether.
It seems that what you need is to define both services manually and then attach a regular health check to one of them and an alias health check to the other. What is being aliased here is not the service but the health check.
For example:
$ consul services register -name ssh -port 22
$ consul services register -name ssh-alias -port 22 -address 172.17.0.1
$ cat >ssh-check.json
{
  "ID": "ssh",
  "Name": "SSH TCP on port 22",
  "ServiceID": "ssh",
  "TCP": "localhost:22",
  "Interval": "10s",
  "Timeout": "1s"
}
$ curl --request PUT --data @ssh-check.json http://127.0.0.1:8500/v1/agent/check/register
$ cat >ssh-alias-check.json
{
  "ID": "ssh-alias",
  "Name": "SSH TCP on port 22 - alias",
  "ServiceID": "ssh-alias",
  "AliasService": "ssh"
}
$ curl --request PUT --data @ssh-alias-check.json http://127.0.0.1:8500/v1/agent/check/register
Here I have defined two separate services and two health checks, but only the first health check does actual work; the second aliases the health status from one service to the other.
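With both checks registered, each service's aggregated status can be read from the agent health endpoint the question mentions, for example by name:
$ curl http://localhost:8500/v1/agent/health/service/name/ssh-alias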

Parse VMware REST API response

I'm trying to parse a JSON response from a REST API call. My awk is not strong. This is a bash shell script, and I use curl to get the response and write it to a file. My problem is solely cutting the response up into useful parts.
The response is all run together on one line and looks like this:
{
  "value": {
    "summary": "Patch for VMware vCenter Server Appliance 6.5.0",
    "install_time": "2017-03-22T22:43:25 UTC",
    "product": "VMware vCenter Server Appliance",
    "build": "5178943",
    "releasedate": "March 14, 2017",
    "type": "vCenter Server with an external Platform Services Controller",
    "version": "6.5.0.5300"
  }
}
I'm interested in simply writing the type, version, and product strings into a log file, ideally on 3 lines, but I really don't care; I simply need to be able to identify the build etc. at the time this backup script ran, so that if I need to rebuild and restore I can make sure I have a compatible build.
Your REST API gives you JSON, which is best suited to a JSON parser like jq:
curl -s '/rest/endpoint' | jq -r '.value | .type,.version,.product' > config.txt
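With the sample response above, config.txt would contain the three values on separate lines ('/rest/endpoint' is left as a placeholder for the real URL):
vCenter Server with an external Platform Services Controller
6.5.0.5300
VMware vCenter Server Appliance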
