statsd + InfluxDB: retention policy not found

When sending data to statsd:
echo "foo:1|c" | nc -u -w0 127.0.0.1 8125
statsd outputs the following after the flush interval and inserts the data into InfluxDB:
{ counters:
   { 'statsd.bad_lines_seen': 0,
     'statsd.packets_received': 1,
     'statsd.metrics_received': 1,
     foo: 1 },
  timers: {},
  gauges: { 'statsd.timestamp_lag': 0 },
  timer_data: {},
  counter_rates:
   { 'statsd.bad_lines_seen': 0,
     'statsd.packets_received': 0.03333333333333333,
     'statsd.metrics_received': 0.03333333333333333,
     foo: 0.03333333333333333 },
  sets: {},
  pctThreshold: [ 90 ] }
Running this command to show the InfluxDB measurements:
$curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "q=SHOW MEASUREMENTS"
gives a successful response:
{
    "results": [
        {
            "series": [
                {
                    "name": "measurements",
                    "columns": [
                        "name"
                    ],
                    "values": [
                        [
                            "cpu_load_short"
                        ],
                        [
                            "foo.counter"
                        ]
                    ]
                }
            ]
        }
    ]
}
Then I want to query the data from InfluxDB:
$curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "q=SELECT value FROM foo.counter"
I get this error message:
{
    "results": [
        {
            "error": "retention policy not found"
        }
    ]
}
Any ideas?
InfluxDB: 0.9.3

You did find the correct resolution: any identifier containing a period must be double-quoted. The original query parses as selecting from the measurement "counter" in the retention policy "foo", hence the "retention policy not found" error.

Sorry, the query should be:
$curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode 'q=SELECT * FROM "foo.counter"'
With double quotes around "foo.counter" it works; the error message doesn't help much.
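For completeness, a minimal sketch of the two ways to avoid the mis-parse (the retention policy name "default" is an assumption for a 0.9.x database): either double-quote the measurement, or fully qualify it as database.retention_policy.measurement:
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode 'q=SELECT value FROM "foo.counter"'
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode 'q=SELECT value FROM "mydb"."default"."foo.counter"'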

Related

Bash Script: Error when using variable in curl data

I've been working on a script that utilizes curl and variables. After some searching I found how to place a variable in the data portion of curl, but now I'm getting errors from the curl command.
(Some parts of the code are removed to keep passwords out; everything is working except for entering the data into the --data portion of curl.)
curlData=$(cat <<EOF
{
"rebootClient": false,
"createPseudoClientRequest": {
"registerClient": true,
"clientInfo": {
"clientType": 0,
}
},
"packages": [
{
"packageId": 702,
"packageName": "File System",
"packageId": 51,
"packageName": "MediaAgent"
}
],
},
"entities": [
{
"clientId": 0,
}
]
}
EOF
)
shopt -u nocasematch #Sets options back to being case sensitive
echo "$csName"
curl -vv --location --request POST "http://$csName:81/SearchSvc/CVWebService.svc/InstallClient" --header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header "Authtoken: $token" \
-d "$curlData"
Running bash -x, everything looks correct:
"rebootClient": false,
"createPseudoClientRequest": {
"registerClient": true,
"clientInfo": {
"clientType": 0,
}
},
"packages": [
{
"packageId": 702,
"packageName": "File System",
"packageId": 51,
"packageName": "MediaAgent"
}
],
"clientAuthForJob": {
},
"entities": [
{
"clientId": 0,
"clientName": "srybrcost",
}
]
}'
But every time I get "Request body is empty or format is invalid".
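One likely cause: the body above is not valid JSON; it has trailing commas, a duplicated packageId/packageName pair inside a single "packages" object, and a stray closing brace after the "packages" array. A minimal sketch of a cleaned-up heredoc with the same fields (whether the API expects exactly this shape is an assumption):
curlData=$(cat <<EOF
{
  "rebootClient": false,
  "createPseudoClientRequest": {
    "registerClient": true,
    "clientInfo": {
      "clientType": 0
    }
  },
  "packages": [
    { "packageId": 702, "packageName": "File System" },
    { "packageId": 51, "packageName": "MediaAgent" }
  ],
  "entities": [
    { "clientId": 0 }
  ]
}
EOF
)
# Validate the body locally before sending it (requires jq):
echo "$curlData" | jq . >/dev/null || echo "invalid JSON"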

Loop through multiple cURL API request body parameters using a Bash script

I want to convert multiple audio files to text using Google Cloud's Speech Recognize API.
I successfully transcribed one audio file called '1.flac'...
Request:
curl -s -H "Content-Type: application/json" \
-H "Authorization: Bearer ACCESSTOKEN" \
https://speech.googleapis.com/v1/speech:recognize \
-d '
{"config": {"languageCode": "pt-BR", "audioChannelCount": 2},"audio":{"uri": "gs://PROJECTID/1.flac"}}
'
Response:
{
    "results": [
        {
            "alternatives": [
                {
                    "transcript": "cat",
                    "confidence": 0.9999999
                }
            ]
        }
    ]
}
I successfully generated multiple lines for the data/body portion of the above request...
Script:
for i in 1 2 3
do
echo "{\"config\": {\"languageCode\": \"pt-BR\", \"audioChannelCount\": 2},\"audio\":{\"uri\": \"gs://PROJECTID/$i.flac\"}}"
done
Output:
{"config": {"languageCode": "pt-BR", "audioChannelCount": 2},"audio":{"uri": "gs://PROJECTID/1.flac"}}
{"config": {"languageCode": "pt-BR", "audioChannelCount": 2},"audio":{"uri": "gs://PROJECTID/2.flac"}}
{"config": {"languageCode": "pt-BR", "audioChannelCount": 2},"audio":{"uri": "gs://PROJECTID/3.flac"}}
How can I combine these two scripts so that the curl request executes once for each of the three files, producing one response per file, like this:
{
    "results": [
        {
            "alternatives": [
                {
                    "transcript": "cat",
                    "confidence": 0.9999999
                }
            ]
        }
    ]
}
{
    "results": [
        {
            "alternatives": [
                {
                    "transcript": "dog",
                    "confidence": 0.9999999
                }
            ]
        }
    ]
}
{
    "results": [
        {
            "alternatives": [
                {
                    "transcript": "horse",
                    "confidence": 0.9999999
                }
            ]
        }
    ]
}
Your code is almost right.
for i in 1 2 3
do
curl -s -H "Content-Type: application/json" \
-H "Authorization: Bearer ACCESSTOKEN" \
-d '{"config": {"languageCode": "pt-BR", "audioChannelCount": 2}
,"audio":{"uri": "gs://PROJECTID/'$i'.flac"}}' \
https://speech.googleapis.com/v1/speech:recognize
done
I put the URL at the end because, usually, options come before arguments.
The value of -d is composed of three parts, '...', $i, and '...', chained together; this lets the shell expand $i between the two single-quoted pieces.
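A quick illustration of that quoting, with echo standing in for curl so you can see exactly what the shell produces:
i=2
echo '{"audio":{"uri": "gs://PROJECTID/'$i'.flac"}}'
# prints: {"audio":{"uri": "gs://PROJECTID/2.flac"}}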

Is there any way to exclude the "meta", "rows", "statistics" fields from the ClickHouse FORMAT JSON query?

Query:
SELECT value FROM test.json_test FORMAT JSON
Response:
{
    "meta":
    [
        {
            "name": "value",
            "type": "Int32"
        }
    ],
    "data":
    [
        {
            "value": 1
        }
    ],
    "rows": 1,
    "statistics":
    {
        "elapsed": 0.112135109,
        "rows_read": 1,
        "bytes_read": 4
    }
}
How can I exclude the unnecessary fields and leave only the data field?
You can exclude "statistics" from the output:
set output_format_write_statistics=0;
ubuntu-16gb-nbg1-1 :) select 1 format JSON;
SELECT 1
FORMAT JSON
{
    "meta":
    [
        {
            "name": "1",
            "type": "UInt8"
        }
    ],
    "data":
    [
        {
            "1": 1
        }
    ],
    "rows": 1
}
1 rows in set. Elapsed: 0.003 sec.
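The setting can also be passed per invocation instead of per session, e.g. on the clickhouse-client command line (a sketch, reusing the table name from the question):
clickhouse-client --output_format_write_statistics=0 -q 'SELECT value FROM test.json_test FORMAT JSON'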
It looks like JSONEachRow cannot be used straightforwardly, because the resulting JSON is not valid.
You can play with grouping the data into arrays to get valid JSON:
SELECT
groupArray(value) AS values,
groupArray((value, name)) AS objects
FROM
(
SELECT
1 AS value,
'str1' AS name
UNION ALL
SELECT
2 AS value,
'str2' AS name
)
FORMAT JSONEachRow
/* Result:
{"values":[1,2],"objects":[[1,"str1"],[2,"str2"]]}
*/
Where are you running your query?
Maybe you can try filtering the output through a pipeline with jq:
clickhouse-client -q "SELECT value FROM test.json_test FORMAT JSON" | jq .data
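Combining the two ideas over the HTTP interface works as well (a sketch; localhost:8123 and no authentication are assumptions): pass the setting as a URL parameter to drop "statistics", and let jq keep only the data array:
curl -s 'http://localhost:8123/?output_format_write_statistics=0' --data-binary 'SELECT value FROM test.json_test FORMAT JSON' | jq .data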

How to recover groups from an HDP Ranger policy

I have a command line to display the YARN policy.
The result is:
{
"id": 131,
"guid": "4d9c3257-0998-42ea-8506-f773a368430d",
"isEnabled": true,
"version": 2,
"service": "Namecluster_yarn",
}
},
"policyItems": [
{
"accesses": [
{
"type": "submit-app",
"isAllowed": true
}
],
"users": [],
"groups": [
"Application_Team_1"
],
"conditions": [],
"delegateAdmin": false
}
],
"denyPolicyItems": [],
"allowExceptions": [],
"denyExceptions": [],
"dataMaskPolicyItems": [],
"rowFilterPolicyItems": []
}
I would like to recover just the list of groups (in my case there is just one group, Application_Team_1).
How can I recover the list of groups via the REST API or the shell, if that's possible?
Using jq:
wget "http://myhost:6080/service/public/v2/api/service/Namecluster_yarn/policy/YARN%20_QueueName/" | jq -r '.policyItems[0].groups[0]'
Use wget, curl, or anything else that can output the JSON data; jq then filters out the string you want. Note the -r option, which gets rid of the double quotes.
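If a policy can contain more than one policy item or group, a curl variant of the same idea lists them all (the basic-auth credentials are an assumption; Ranger's public API normally requires them):
curl -s -u admin:admin "http://myhost:6080/service/public/v2/api/service/Namecluster_yarn/policy/YARN%20_QueueName/" | jq -r '.policyItems[].groups[]'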

Clone a server in Zabbix using the API

Sorry guys, but I don't know how to clone a server in Zabbix, so please help.
I didn't find a JSON request for this function in the Zabbix documentation, so I tried to create my own script, but when I get the items from the source server and try to add them to the new one, I get this error:
{"jsonrpc":"2.0","error":{"code":-32602,"message":"Invalid
params.","data":"Item uses host interface from non-parent
host."},"id":1}
Thanks. Here is my code, in case it's needed:
create_host() {
a='{ "jsonrpc": "2.0", "method": "host.create", "params": { "host": "'$selected_name'", "interfaces": [ { "type": 1, "main": 1, "useip": 1, "ip": "'$IP'", "dns": "", "port": "10050" } ], "groups": [ { "groupid": "1" } ], "templates": [ { "templateid": "10001" } ], "inventory": { "macaddress_a": "01234", "macaddress_b": "56768" } }, "auth": "'$AUTH_TOKEN'", "id": 1 }'
wget -O- -o /dev/null $API --no-check-certificate --header 'Content-Type: application/json-rpc' --post-data "$a"
}
push_items() {
id_of_host=`echo $3 | tr -d ']' | tr -d '[u' | sed -s "s/'//g"`
echo "name"$1
echo "key"$2
request_push='{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"name": "'$1'",
"key_": "'$2'",
"hostid": "'$id_of_host'",
"type": 10,
"value_type": 0,
"interfaceid": "2",
"delay": 300
},
"auth": "'$AUTH_TOKEN'",
"id": 1
}'
wget -O- -o /dev/null $API --no-check-certificate --header 'Content-Type: application/json-rpc' --post-data "$request_push"
}
get_items() {
request='{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"output": ["name", "key_"],
"host": "'$ETALON_SERVER'",
"sortfield": "name"
},
"auth": "'$AUTH_TOKEN'",
"id": 1
}'
wget -O- -o /dev/null $API --no-check-certificate --header 'Content-Type: application/json-rpc' --post-data "$request"
}
AUTH_TOKEN=$(authenticate)
if [ -z "$AUTH_TOKEN" ]; then
echo "Connection not established"
exit 1
else
echo "everything is ok"
echo $AUTH_TOKEN
fi
check_host=$(check_exist_host)
host_create=$(create_host)
#echo "$host_create"
id_host=`echo "$host_create" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostids"]'`
items=$(get_items)
#echo $items
keys=`echo "$items" | python -c 'import json, sys; print ("".join(i["name"]+";"+i["key_"] +"|" for i in json.load(sys.stdin)["result"]))'`
while IFS='[;]' read -r s1 s2; do
name_item=$s1
item_key=$s2
push_items "$s1" "$s2" "$id_host"
done < <(printf "%b" "${keys//|/\\n}")
As old as this question is, it still comes up in the top search results...
The API doesn't provide host.clone(), because cloning is supposed to be performed through host.get() and host.create(), depending on your needs. The following code clones:
host name
host visible name
first IP address
groups, adding a new one (static ID)
macros
and changes
templates (static ID)
The old host is renamed and disabled, in order to maintain data history.
from pyzabbix import ZabbixAPI

ZAPI = ZabbixAPI(api_url)
ZAPI.login(username, password)

batch_list = [
    "host1-asdasd",
    "host2-asdasd"
]

for hostname in batch_list:
    try:
        original_host = ZAPI.host.get(
            filter={'host': hostname},
            selectGroups='extend',
            selectInterfaces='extend',
            selectMacros='extend'
        )[0]
        disable = ZAPI.host.update(
            hostid=original_host['hostid'],
            status=1,
            host=original_host['host'] + '-history',
            name=original_host['name'] + ' (history)'
        )
        print(disable)
        clone = ZAPI.host.create(
            host=original_host['host'],
            name=original_host['name'],
            proxy_hostid=original_host['proxy_hostid'],
            groups=original_host['groups'] + [{'groupid': 802}],
            macros=original_host['macros'],
            interfaces=[{'main': '1', 'type': '1', 'useip': '1', 'dns': '', 'port': '10050', 'bulk': '1',
                         'ip': original_host['interfaces'][0]['ip']}],
            templates={'templateid': 25708}
        )
        print(clone)
    except Exception as exc:
        print('something went wrong with: ' + hostname + ' (' + str(exc) + ')')
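As for the original error: "Item uses host interface from non-parent host" is raised because item.create is given the hard-coded "interfaceid": "2", which belongs to the source host rather than the newly created one. A sketch, in the same wget style as the question, of looking up the new host's own interface ID (the function name is made up; its result would replace the hard-coded value in push_items):
get_interface_id() {
    # $1 = hostid of the newly created host
    request_iface='{
        "jsonrpc": "2.0",
        "method": "hostinterface.get",
        "params": { "output": ["interfaceid"], "hostids": "'$1'" },
        "auth": "'$AUTH_TOKEN'",
        "id": 1
    }'
    wget -O- -o /dev/null $API --no-check-certificate --header 'Content-Type: application/json-rpc' --post-data "$request_iface"
}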
