How do I use a JSON tag as a parameter in Benthos?

input:
  generate:
    mapping: root = {"id":48554}
pipeline:
  processors:
    - http:
        url: https://example.com/**use_id_here_from_root**
        verb: GET
        retries: 5
        timeout: 10s
        retry_period: 2s
I tried using {{.id}} and ${! json("id") }, but neither of these seems to work.

The http processor url field supports interpolation functions. You're on the right track with ${! json("id") }, but there might be something wrong with your test setup.
This config worked for me:
input:
  generate:
    count: 1
    interval: 0s
    mapping: root = {"id":48554}
pipeline:
  processors:
    - http:
        url: http://127.0.0.1:6666/${! json("id") }
        verb: GET
        retries: 5
        timeout: 10s
        retry_period: 2s
output:
  stdout: {}
To test it, I ran nc in a loop:
> while true; do echo -e "HTTP/1.1 200 OK\r\n" | nc -l 127.0.0.1 6666; done
then I ran benthos:
> benthos -c test.yaml
and I got the following output from nc:
GET /48554 HTTP/1.1
Host: 127.0.0.1:6666
User-Agent: Go-http-client/1.1
Content-Length: 12
Content-Type: application/octet-stream
Accept-Encoding: gzip
{"id":48554}

Related

Benthos: How to get variable from processor to input?

I'm new to Benthos and hoped the following configuration would work. I looked at the Benthos docs and tried Googling, but didn't find an answer; any help is greatly appreciated.
Actually, sign will be a calculated value, but right now I'm stuck on the first step: I can't get the sign value to be successfully assigned to the header.
input:
  processors:
    - bloblang: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
  http_client:
    url: >-
      https://test/${!meta("device_id")}
    verb: GET
    headers:
      sign: ${!meta("sign")}
After @Mihai Todor helped, I now have a new question.
This config below works (first):
input:
  http_client:
    url: >-
      https://test/api
    verb: GET
    headers:
      sign: "signcode"
but this one returns an invalid signature error:
input:
  generate:
    mapping: root = {}
    count: 1
pipeline:
  processors:
    - http:
        url: >-
          https://test/api
        verb: GET
        headers:
          sign: "signcode"
output:
  stdout: {}
Update (screenshots with more detail): first, second.
Finally I got it working with @Mihai's help.
The reason I got the 'signature is invalid' error was a space character in the stringToSign header. For my use case I need to send a 'stringToSign' header with an empty value, but somehow I had copied an invisible space character into it. That made Benthos add a Content-Length: 1 header to the request (I don't know why), and this Content-Length: 1 caused my request to always fail with the 'signature invalid' error.
After I deleted the space character, everything works.
Input processors operate on the messages returned by the input, so you can't set metadata that way. Also, metadata is associated with in-flight messages and doesn't persist across messages (use an in-memory cache if you need that).
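For the cross-message case, a minimal sketch of an in-memory cache resource plus a cache processor writing to it (the label, key and value below are placeholders, not from the question):

cache_resources:
  - label: my_cache
    memory: {}
pipeline:
  processors:
    - cache:
        resource: my_cache
        operator: set
        key: sign
        value: ${! meta("sign") }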
One workaround is to combine a generate input with an http processor like so:
input:
  generate:
    mapping: root = ""
    interval: 0s
    count: 1
  processors:
    - mapping: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
pipeline:
  processors:
    - http:
        url: >-
          https://test/${!meta("device_id")}
        verb: GET
        headers:
          sign: ${!meta("sign")}
output:
  stdout: {}
Note that the mapping processor (the replacement for the soon-to-be deprecated bloblang one) can also reside under pipeline.processors, and, if you just need to set those metadata fields, you can also do it inside the mapping field of the generate input (root = {} is implicit).
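For example, a minimal sketch of that variant (the pipeline and output sections stay the same as above):

input:
  generate:
    interval: 0s
    count: 1
    mapping: |
      meta sign = "1233312312312312"
      meta device_id = "31231312312"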
Update: Following the comments, I ran both configs and used nc to print the full HTTP request each of them makes:
generate input with http processor:
input:
  generate:
    mapping: root = ""
    interval: 0s
    count: 1
  processors:
    - mapping: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
pipeline:
  processors:
    - http:
        url: >-
          http://localhost:6666/${!meta("device_id")}
        verb: GET
        headers:
          sign: ${!meta("sign")}
output:
  stdout: {}
HTTP request dump:
> nc -l localhost 6666
GET /31231312312 HTTP/1.1
Host: localhost:6666
User-Agent: Go-http-client/1.1
Sign: 1233312312312312
Accept-Encoding: gzip
http_client input:
input:
  http_client:
    url: >-
      http://localhost:6666/31231312312
    verb: GET
    headers:
      sign: 1233312312312312
output:
  stdout: {}
HTTP request dump:
> nc -l localhost 6666
GET /31231312312 HTTP/1.1
Host: localhost:6666
User-Agent: Go-http-client/1.1
Sign: 1233312312312312
Accept-Encoding: gzip
I used Benthos v4.5.1 on OSX and, for both configs, the request looks identical.
My best guess is that you're seeing a transient issue on your end (some rate limiting, perhaps).

How to pass in multiple --data in Ansible using uri

I'm trying to come up with the equivalent of this curl command in Ansible using the uri module, to no avail:
curl 'https://URL' --data 'a=1234' --data 's=4321'
In ansible:
- uri:
    url: https://URL
    method: PUT
    body:
      a: 1234
      s: 4321
    status_code: 200
    headers:
      Content-Type: "application/x-www-form-urlencoded"
I get a response back saying the command is invalid, meaning my Ansible task doesn't work, but the equivalent curl command works fine.
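For reference, a hedged sketch of how this form-encoded body might be expressed with the uri module, assuming Ansible >= 2.7 where body_format: form-urlencoded is available (the URL and values are the placeholders from the question; note that curl --data sends a POST by default):

- uri:
    url: https://URL
    method: POST
    # form-urlencoded encodes the body dict and sets the Content-Type header for you
    body_format: form-urlencoded
    body:
      a: 1234
      s: 4321
    status_code: 200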

Jenkins pipeline not finish tomcat war deploy

I want to deploy a war from Jenkins using Tomcat manager.
Here is what I am doing from the command line:
curl -v -u user:pasword -T target/app.war "http://host:8180/manager/text/deploy?path=&update=true"
It takes a little bit of time and works:
* Hostname was NOT found in DNS cache
* Trying ip...
* Connected to host.com (ip) port 8180 (#0)
* Server auth using Basic with user 'jenkins'
> PUT /manager/text/deploy?path=&update=true HTTP/1.1
> Authorization: Basic amVua2luczpLbzNEaUE=
> User-Agent: curl/7.38.0
> Host: host.com:8180
> Accept: */*
> Content-Length: 71682391
> Expect: 100-continue
>
< HTTP/1.1 100
* We are completely uploaded and fine
< HTTP/1.1 200
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 00:00:00 UTC
< Content-Type: text/plain;charset=utf-8
< Transfer-Encoding: chunked
< Date: Tue, 14 Aug 2018 13:18:22 GMT
<
OK - Application déployée pour le chemin de contexte [/]
* Connection #0 to host host.com left intact
My problem is when I execute this command in a Jenkins Pipeline:
stage('Tomcat Deploy') {
    sh "curl -v -u user:password -T app.war http://host:8180/manager/text/deploy?path=&update=true"
}
The curl command does not finish correctly:
+ curl -v -u jenkins:pass -T app.war http://host:8180/manager/text/deploy?path=
* Hostname was NOT found in DNS cache
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ip...
* Connected to host.com (ip) port 8180 (#0)
* Server auth using Basic with user 'jenkins'
> PUT /manager/text/deploy?path= HTTP/1.1
> Authorization: Basic amVua2luczpLbzNEaUE=
> User-Agent: curl/7.38.0
> Host: host.com:8180
> Accept: */*
> Content-Length: 71682391
> Expect: 100-continue
>
< HTTP/1.1 100
} [data not shown]
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
And it goes to the next stage without waiting for the curl deploy result. Is there any quick solution to fix this?
Yes, the war was not uploaded, but I resolved my problem using the --upload-file option. Now Jenkins waits for the upload as intended. Thank you.
A quick (but very bad) fix for this would be to put a sleep for some time in your code:
sleep(30)
This will give the curl command time to submit/upload your war to your target server. ** Don't rely on this in production **
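A likely contributing factor: the verbose log above shows the request line cut off at ?path=, which is consistent with /bin/sh treating the unquoted & as a background operator, so curl is sent to the background and the stage moves on. A minimal sketch with the URL quoted for the shell (same placeholder credentials as in the question):

stage('Tomcat Deploy') {
    // Single-quote the URL so the shell does not interpret '&' as a background operator
    sh "curl -v -u user:password -T app.war 'http://host:8180/manager/text/deploy?path=&update=true'"
}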

Elastic Search Bulk API - Not indexing

I have output a JSON file in bulk format which I can load into Kibana with the Dev Tools console, and also by inserting a few lines using the -d option.
Example lines of the file:
{"index":{"_index":"els","_type":"logs","_id":1481018400003}}
{"timestamp":1481018400003,"zoneId":29863567,............[]}
{"index":{"_index":"els","_type":"logs","_id":"30cee368073c0c9b"}}
{"timestamp":1481018400005,"zoneId":29863567,............[]}
...
However, when I run the Bulk API to POST a file, it does not do anything. I added verbose to the command and got the following:
* Connected to localhost (::1) port 9200 (#0)
> POST /_bulk HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.49.0
> Accept: */*
> Content-Length: 0
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 400 Bad Request
< content-type: application/json; charset=UTF-8
< content-length: 165
* HTTP error before end of send, stop sending
Any help would be great.
Thanks!
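The verbose output shows Content-Length: 0 and a default Content-Type of application/x-www-form-urlencoded, i.e. no request body was actually sent. A minimal sketch of posting the file with curl (assuming the file is called bulk.json; host and filename are placeholders):

curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk.json

Note that --data-binary preserves the newlines that the bulk format requires, whereas -d strips them.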

Returned lines overlap when using ncat pipe in a shell script

I am using /bin/sh to write a shell script that fetches data from a telnet call via ncat, like so:
echo 'transport info' | ncat hostname 9993
When I do this from a command line the output looks like this:
500 connection info:
protocol version: 1.3
model: HyperDeck Studio
208 transport info:
status: record
speed: 0
slot id: 1
clip id: none
display timecode: 00:28:01:27
timecode: 00:00:00:00
video format: 1080i5994
loop: false
But when I do it in a /bin/sh shell script, it looks like this:
loop: falseat: 1080i599443:15
Here is my sample script:
#!/bin/sh
FOO="$( echo "transport info" | ncat -C hostname 9993 )"
echo $FOO
Anyone know why this happens?
#!/bin/sh
# The returned lines end in carriage returns, which makes them overwrite each other
# on the terminal; dos2unix converts the CRLF line endings to plain LF.
echo "transport info" | ncat -C hostname 9993 | dos2unix > /tmp/test.txt
cat /tmp/test.txt
Output:
500 connection info:
protocol version: 1.3
model: HyperDeck Studio
208 transport info:
status: preview
speed: 0
slot id: none
clip id: none
display timecode: 01:10:01:01
timecode: 00:00:00:00
video format: 1080i5994
loop: false
