How to check consul service health when there is a space in both the name and the tag - consul

I want to fetch service health information from Consul. How can I query a service with a curl command when there is a space in both its name and its tag?
One more question: curl --get http://127.0.0.1:8500/v1/health/checks/$service looks at the service-level checks. I want to check whether a node check is failing for a service or not. How do I do that?
curl --get http://127.0.0.1:8500/v1/health/checks/$service --data-urlencode 'filter=Status == "critical"'
Here, if the service name and tag are both "ldisk gd", this command throws:
curl: (6) Could not resolve host: gd; Name or service not known
I can't pass the name within quotes; when I do, I get a Bad Request error.
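A minimal sketch of a workaround, assuming the service is registered as "ldisk gd": percent-encode the space as %20 and quote the whole URL so the shell does not split it at the space. For the node-check question, node-level checks carry an empty ServiceID, so filtering the health/service endpoint on that field is one option (the Checks.* selector names here are an assumption based on Consul's API filtering docs, not something verified against this setup):
# Space in the service name percent-encoded, URL quoted
curl --get "http://127.0.0.1:8500/v1/health/checks/ldisk%20gd" --data-urlencode 'filter=Status == "critical"'
# Node checks: assumed selectors, since node-level checks have an empty ServiceID
curl --get "http://127.0.0.1:8500/v1/health/service/ldisk%20gd" --data-urlencode 'filter=Checks.ServiceID == "" and Checks.Status == "critical"'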

Related

How consul constructs SRV record

Let's say I registered a service in Consul so that I can query it with something like:
curl http://localhost:8500/v1/catalog/service/BookStore.US
and it returns
[
  {
    "ID": "xxxxx-xxx-...",
    "ServiceName": "BookStore.US",
    ...
  }
]
If I use Consul directly in my code, it is OK. But the problem is that when I want to use the SRV record directly, it does not work.
Normally, there is a service record created by Consul with the name service_name.service.consul. In the above case, it is "BookStore.US.service.consul",
so you can use the "dig" command to get it:
dig @127.0.0.1 -p 8600 BookStore.US.service.consul SRV
But when I tried to "dig" it, it failed with an empty answer section.
My question:
How does Consul construct the service/SRV name (does it pick some fields from the registered record and concatenate them)?
Is there any way for me to search the SRV records with wildcards, so that at least I can search the SRV name by the keyword "BookStore"?
The SRV lookup is not working because Consul is interpreting the . in the service name as a domain separator in the hostname.
Per https://www.consul.io/docs/discovery/dns#standard-lookup, service lookups in Consul can use the following format.
[tag.]<service>.service[.datacenter].<domain>
The tag and datacenter components are optional. The other components must be specified. Given the name BookStore.US.service.consul, Consul interprets the components to be:
Tag: BookStore
Service: US
Sub-domain: service
TLD: consul
Since you do not have a service registered by the name US, the DNS server correctly responds with zero records.
In order to resolve this, you can do one of two things.
Register the service with a different name, such as bookstore-us.
{
  "Name": "bookstore-us",
  "Port": 1234
}
Specify the US location as a tag in the service registration.
{
  "Name": "bookstore",
  "Tags": ["us"],
  "Port": 1234
}
Note that in either case, the service name should be a valid DNS label. That is, it may contain only the ASCII letters a through z (in a case-insensitive manner), the digits 0 through 9, and the hyphen-minus character ('-').
The SRV query should then successfully return a result for the service lookup.
# Period in hostname changed to a hyphen
$ dig -t SRV bookstore-us.service.consul +short
# If `US` is a tag:
# Standard lookup
$ dig -t SRV us.bookstore.service.consul +short
# RFC 2782-style lookup
$ dig -t SRV _bookstore._us.service.consul +short
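For reference, a minimal sketch of how either payload could be registered against a local agent (assuming the default HTTP port, with the JSON saved as payload.json):
# Register the service definition with the local agent
$ curl --request PUT --data @payload.json http://127.0.0.1:8500/v1/agent/service/register
# The SRV lookup should then resolve
$ dig @127.0.0.1 -p 8600 -t SRV bookstore-us.service.consul +short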

gcloud cli failing to add record when contents start with dash

I'm working with the LetsEncrypt dns-01 challenge system, which entails dynamically creating a TXT record in Google Cloud DNS with specific content, so LE can assert proof of ownership for generating a wildcard certificate (which is why I can't use http-01). The problem is that sometimes LE tells me to create a TXT record that starts with a "-", for example -E_DFDFHJKF1783FSHDJ. I cannot get the gcloud CLI to properly accept this data no matter what I do.
Example:
gcloud dns record-sets transaction start --zone=myzone
gcloud dns record-sets transaction add "-E_ASDFSDF" --ttl=30 --zone=myzone --name=test --type=TXT
gcloud dns record-sets transaction remove "-A_DSFKHSDF" --ttl=30 --zone=myzone --name=test2 --type=TXT
If you run those commands and inspect the resulting transaction.yaml, you can see whether it properly contains the right string. If it did it correctly, you should see something like:
- kind: dns#resourceRecordSet
  name: test.
  rrdatas:
  - '"ASDFASDF"'
  ttl: 30
  type: TXT
I am executing this via Node's child_process, but I have the issue even if I execute it directly from bash, so Node isn't really a meaningful factor at the moment. I've tried echoing the value in. I've tried setting an environment variable and using that in the string.
No matter what I do I get an error like the following:
ERROR: (gcloud.dns.record-sets.transaction.add) unrecognized arguments: -E_ASDFSDF
It turns out some characters need to be escaped in the CLI. I can confirm that the following works:
gcloud dns --project=myprojectid record-sets transaction add "\-test123" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid
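For completeness, a rough sketch of the full transaction flow with the escaped value (the zone and record names are the hypothetical ones from above):
gcloud dns record-sets transaction start --zone=myzoneid
gcloud dns record-sets transaction add "\-E_ASDFSDF" --ttl=30 --zone=myzoneid --name=test.mydomain.com. --type=TXT
gcloud dns record-sets transaction execute --zone=myzoneid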

Hostnames resolution fails with "unknown host" error for hostnames containing utf-8 characters

I am trying to ping the hostname "win-2k12r2-addc.阿伯测阿伯测ad.hai.com" from a Linux client.
I see the DNS requests go over the wire with the hostname sent in UTF-8 format,
and I get a response from the DNS server with the correct IP address.
But ping fails with the following error:
ping: unknown host win-2k12r2-addc.阿伯测阿伯测ad.hai.com
If I add an entry to /etc/hosts, it works fine.
I have the following entries in /etc/hosts when it works:
127.0.0.1 localhost ava-dev
::1 localhost
10.141.33.93 win-2k12r2-addc.阿伯测阿伯测ad.hai.com
The /etc/nsswitch.conf file has the following entry for hosts:
hosts: files dns
I suspect that the getaddrinfo() call fails when we try to resolve the address, i.e. it is not able to handle the DNS responses correctly for hostnames containing Unicode characters.
Has anyone faced this issue before, or tried resolving a Unicode hostname from a Linux client?
The reason I suspect getaddrinfo() is the following: apart from ping, I tried the ldapsearch command below against the same host, and it fails with the error shown.
ldapsearch -d 255 -x -h win-2k12r2-addc.阿伯测阿伯测ad.hai.com
ldap_create
ldap_url_parse_ext(ldap://win-2k12r2-addc.%E9%98%BF%E4%BC%AF%E6%B5%8B%E9%98%BF%E4%BC%AF%E6%B5%8Bad.hai.com)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP win-2k12r2-addc.阿伯测阿伯测ad.hai.com:389
ldap_connect_to_host: getaddrinfo failed: Name or service not known
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
In both scenarios (ping / ldapsearch), I see the DNS query going to the DNS server and the correct response coming back to the Linux client.
The following is the hostname value sent in the DNS query:
win-2k12r2-addc.\351\230\277\344\274\257\346\265\213\351\230\277\344\274\257\346\265\213ad.hai.com: type A, class IN
It looks like you are trying to use UTF-8 or Unicode within the DNS system, while the DNS system really doesn't like that. It wants ASCII (see RFCs 5890, 5891, 5892, 5893 - but mostly 5891). Escaping the UTF-8 characters does not turn them into the required ASCII encoding, called punycode (prefixed by "xn--"). You want to use the version of your IDN that has the punycode instead of the UTF-8:
ping win-2k12r2-addc.xn--ad-tl3ca3569aba8944eca.hai.com
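If you don't want to work out the punycode form by hand, a small sketch, assuming GNU libidn's idn tool is installed (the output shown is simply the form given above):
# Convert the Unicode labels to their ACE/punycode form
$ idn --idna-to-ascii 阿伯测阿伯测ad.hai.com
xn--ad-tl3ca3569aba8944eca.hai.com
# Or build the ping target in one step
$ ping win-2k12r2-addc.$(idn --idna-to-ascii 阿伯测阿伯测ad.hai.com)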

encode curl GET request and construct the URL

I have the following example of a dynamically generated URL for a GET request:
http://api.jwplatform.com/v1/videos/create?api_format=json&api_key=dgfgfg&api_nonce=554566&api_timestamp=1500296525&custom.videoId=905581&description=تقليد بداية العام&downloadurl=http://media.com/media/mymedia.mp4&sourceformat=mp4&sourcetype=url&sourceurl=http://media.com/media/mymedia.mp4&title=الغطس بالمياه الباردة.. تقليد بداية العام&api_signature=5cd337198ead0768975610a135e2
which includes the following vars:
api_key=
api_nonce=
api_timestamp=
custom.videoId=
description=
downloadurl=
sourceurl=
title=
api_signature=
sourceformat=mp4
sourcetype=url
I'm trying to send the curl GET command and get the response back, and I always fail for two reasons:
the URLs in the request should be UTF-8 encoded;
and second, I always get curl: (6) Couldn't resolve host for
each var, as if curl is not taking the URL as one URL and instead breaks it into
different calls, for example:
curl: (6) Couldn't resolve host 'بداية'
Input domain encoded as `ANSI_X3.4-1968'
Failed to convert العام&downloadurl=http: to ACE; System iconv failed
getaddrinfo(3) failed for العام&downloadurl=http::80
curl -X -G -v -H "Content: agent-type: application/x-www-form-urlencoded" http://api.jwplatform.com/v1/videos/create?api_format=json&{vars}
Any tips on how I can achieve this and construct the curl command in the right format?
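A rough sketch of one way to build this, using the parameter values from the example URL above: quote the base URL so the shell does not split it at & or at the spaces in the Arabic text, and let curl percent-encode each parameter by combining -G with --data-urlencode:
curl -G -v "http://api.jwplatform.com/v1/videos/create" \
  --data-urlencode "api_format=json" \
  --data-urlencode "api_key=dgfgfg" \
  --data-urlencode "api_nonce=554566" \
  --data-urlencode "api_timestamp=1500296525" \
  --data-urlencode "custom.videoId=905581" \
  --data-urlencode "description=تقليد بداية العام" \
  --data-urlencode "downloadurl=http://media.com/media/mymedia.mp4" \
  --data-urlencode "sourceformat=mp4" \
  --data-urlencode "sourcetype=url" \
  --data-urlencode "sourceurl=http://media.com/media/mymedia.mp4" \
  --data-urlencode "title=الغطس بالمياه الباردة.. تقليد بداية العام" \
  --data-urlencode "api_signature=5cd337198ead0768975610a135e2"
With -G, curl appends the encoded pairs to the URL as the query string, so the UTF-8 values never reach the shell unquoted.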

Unable to get Cognitive Services access token from subscription key

I have tried both key 1 and key 2 from the Azure Resource Management > Keys page with the following, where foo is a direct copy/paste:
curl -X POST "https://api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=foo" --data ""
curl -X POST "https://api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: foo" --data ""
In both cases I get:
{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }
Is there something I need to configure so I can retrieve access tokens for my subscription? My ultimate goal is to use the access token to authenticate with a Custom Speech Service endpoint. Thanks!
For some reason this URL worked instead of the one in the documentation (the token endpoint appears to be region-specific, so it has to match the region where the Cognitive Services resource was created - westus in this case):
https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken
Here's the complete command:
curl -X POST --header "Ocp-Apim-Subscription-Key:foo" --data "" "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken"
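As a follow-up sketch for the stated goal, capture the token and pass it as a Bearer token to the speech endpoint (the endpoint URL below is a placeholder, not a real one):
# Request a token, then use it in an Authorization header
token=$(curl -s -X POST -H "Ocp-Apim-Subscription-Key: foo" --data "" "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken")
curl -H "Authorization: Bearer $token" "https://<your-custom-speech-endpoint>"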
