I want to deploy a war from Jenkins using the Tomcat manager.
Here is what I am doing from the command line:
curl -v -u user:password -T target/app.war "http://host:8180/manager/text/deploy?path=&update=true"
It takes a little time and works:
* Hostname was NOT found in DNS cache
* Trying ip...
* Connected to host.com (ip) port 8180 (#0)
* Server auth using Basic with user 'jenkins'
> PUT /manager/text/deploy?path=&update=true HTTP/1.1
> Authorization: Basic amVua2luczpLbzNEaUE=
> User-Agent: curl/7.38.0
> Host: host.com:8180
> Accept: */*
> Content-Length: 71682391
> Expect: 100-continue
>
< HTTP/1.1 100
* We are completely uploaded and fine
< HTTP/1.1 200
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 00:00:00 UTC
< Content-Type: text/plain;charset=utf-8
< Transfer-Encoding: chunked
< Date: Tue, 14 Aug 2018 13:18:22 GMT
<
OK - Deployed application at context path [/]
* Connection #0 to host host.com left intact
My problem is when I execute this command in a Jenkins Pipeline:
stage('Tomcat Deploy') {
sh "curl -v -u user:password -T app.war http://host:8180/manager/text/deploy?path=&update=true"
}
The curl command does not finish correctly:
+ curl -v -u jenkins:pass -T app.war http://host:8180/manager/text/deploy?path=
* Hostname was NOT found in DNS cache
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ip...
* Connected to host.com (ip) port 8180 (#0)
* Server auth using Basic with user 'jenkins'
> PUT /manager/text/deploy?path= HTTP/1.1
> Authorization: Basic amVua2luczpLbzNEaUE=
> User-Agent: curl/7.38.0
> Host: host.com:8180
> Accept: */*
> Content-Length: 71682391
> Expect: 100-continue
>
< HTTP/1.1 100
} [data not shown]
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
And it goes to the next stage without waiting for the curl deploy result. Is there a quick solution to fix that?
Yes, the war was not uploaded. But I resolved my problem using the --upload-file option. Now Jenkins waits for the upload as expected. Thank you.
A quick (but very bad) fix for this would be to put a sleep for some time in your code.
sleep(30)
This will give the curl command time to submit/upload your war to your target server. ** Don't rely on this in production **
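For what it's worth, the truncated request in the Jenkins log (the query string stops at ?path=, unlike the quoted command-line version) suggests the unquoted & was treated by the shell as a background operator, so curl was detached and the stage moved on without waiting for it. A minimal sketch of the sh step's command with the URL quoted (an assumption, not the poster's confirmed fix):
# inside the Jenkins sh step: quoting keeps &update=true as part of the URL
curl -v -u user:password --upload-file app.war \
  "http://host:8180/manager/text/deploy?path=&update=true"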
I am trying to use curl to query Neo4j:
curl -X POST -H Accept:application/json -H Content-Type:application/json -u neo4j:password -v http://localhost:7474/db/neo4j/tx/commit -d '{"statements":[{"statement":"MATCH (n) RETURN n"}]}'
gives me this response
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:7474...
* Connected to localhost (127.0.0.1) port 7474 (#0)
* Server auth using Basic with user 'neo4j'
> POST /db/neo4j/tx/commit HTTP/1.1
> Host: localhost:7474
> Authorization: Basic bmVvNGo6cGFzc3dvcmQ=
> User-Agent: curl/7.79.1
> Accept:application/json
> Content-Type:application/json
> Content-Length: 47
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 27 Jul 2022 09:13:35 GMT
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Content-Length: 120
<
{"results":[],"errors":[{"code":"Neo.ClientError.Request.InvalidFormat","message":"Could not parse the incoming JSON"}]}* Connection #0 to host localhost left intact
If anyone could help, please do.
I should have mentioned I'm on Windows. Apparently you have to escape those double quotes in the JSON.
This works for me now:
curl -X POST -H Accept:application/json -H Content-Type:application/json -u neo4j:password -v http://localhost:7474/db/neo4j/tx/commit -d "{\"statements\":[{\"statement\":\"MATCH (n) RETURN n\"}]}"
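An alternative sketch that sidesteps the Windows quoting rules entirely (the file name query.json is an assumption): put the JSON body in a file and pass it with -d @file.
# query.json contains: {"statements":[{"statement":"MATCH (n) RETURN n"}]}
# (for multi-line JSON, --data-binary @query.json preserves the newlines)
curl -H Accept:application/json -H Content-Type:application/json -u neo4j:password -v http://localhost:7474/db/neo4j/tx/commit -d @query.json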
I'm trying to do user acceptance testing on an application which becomes unresponsive when a particular URL parameter is included in the GET request.
Steps
I have curl and run the crafted GET request: I copied the curl syntax for Unix and copied it to an Ubuntu server, along with some changes.
'https://abc.ai/getMultiDashboard/demouser' -H 'Cookie: _ga=GA1.2.561275388.1601468723; _hjid=ecd3d778-b7f5-4f7f-b3ef-6f9f12b13d66; 54651cc_an=4; _gid=GA1.2.1366208807.1601560229; _hjTLDTest=1; 54651cc_data=JTdCJTIyaWQlMjIlM0Ellc3NUb2tlbiUyMiUzQSUyMjA2MTk3NjM3NTgwOGE2N2RmZjlhMmJlOWJmODE5NDQzJTIyJTdE; 54651cc_loggedin=1; 54651cc_sound=true; 54651cc_read=true; 54651cc_popup=true; 54651cc_disablelastseen=false; 54651cc_usertype=loginuser; _hjIncludedInPageviewSample=1; _hjAbsoluteSessionInProgress=0; abc=s%3A8ZGd7Mol31n_Y8OCLq39dHoo3_mIlRhZ.pFQWz5gG9McKsQLzOikcTBmmb2Wcrxo%2B9u9iPpqoyxw; pageUrl=/#/dashboard/18; _gat_gtag_UA_97985973_5=1'
"https://abc.ai/getTagTrends/E1_CPU_PERCENTAGE/2020-9-12%2013:4:0/202**'23548'**0-09-15|%2013:04:00"
"https://abc.ai/getTagTrends/E1_CPU_PERCENTAGE/2020-9-12%2013:4:0/202**'`23548`'**0-09-15|%2013:04:00"
The ** asterisks are not part of the actual values; I use them to demarcate my injected value.
Using a small bash script, I have generated thousands of (unique) payload combinations for curl.
#!/bin/bash
for ((i=0; i<1000; ++i)); do
echo "
'https://abc.ai/getMultiDashboard/demouser' -H 'Cookie: _ga=GA1.2.561275388.1601468723; _hjid=ecd3d778-b7f5-4f7
f-b3ef-6f9f12b13d66; 54651cc_an=4; _gid=GA1.2.1366208807.1601560229; _hjTLDTest=1; 54651cc_data=JTdCJTIyaWQlMjIlM0ElMjJkZW1vdXNlciU yMiUyQyUyMm4lMjIlM0ElMjJkZW1vdXNlciUyMiUyQyUyMmZyaWVuZHMlMjIlM0ElMjIlMjIlMkMlMjJhdXRoJTIyJTNBJTIyZWQ0YjVhNDFkMzJlY2U4MzQ3Mzk0ZjlkZT U5YThjMWQlMjIlMkMlMjJyZWZlcmVyJTIyJTNBJTIyaXJpZGl1bS1wcmVwcm9kLmVtcGlyaWMuYWklMjIlMkMlMjJhY2Nlc3NUb2tlbiUyMiUzQSUyMjA2MTk3NjM3NTgwO
GE2N2RmZjlhMmJlOWJmODE5NDQzJTIyJTdE; 54651cc_loggedin=1; 54651cc_sound=true; 54651cc_read=true; 54651cc_popup=true; 54651cc_disable
lastseen=false; 54651cc_usertype=loginuser; _hjIncludedInPageviewSample=1; _hjAbsoluteSessionInProgress=0; abc=s%3A8ZGd7Mol31n_
Y8OCLq39dHoo3_mIlRhZ.pFQWz5gG9McKsQLzOikcTBmmb2Wcrxo%2B9u9iPpqoyxw; pageUrl=/#/dashboard/18; _gat_gtag_UA_97985973_5=1' \"https://abc.ai/getTagTrends/E1_CPU_PERCENTAGE/2020-9-12%2013:4:0/202'$((1 + RANDOM % 10000000))'0-09-15|%2013:04:00\""
>> URL.txt
done
The final command for testing (a one-liner) fails as follows:
cat URL.txt | xargs -I{} -- curl -O {}
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
Expected output
When I run the curl manually, copying the contents from the URL file, I get:
[{"dashboard_id": 18, "user_id": "demouser", "dashboard_name": "My_dashboard_1", "description": "Test description One", "creation_date": "2020-09-21 10:13:00", "dashboard_config": null, "id": 5}]
<html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.18.0</center>
In order to troubleshoot, I used set -x on the shell command line, but I can't see why or how the request is crafted and handled by the curl processes. The curl output (above) has all 0 values in all fields, which suggests a malformed request; but that isn't the actual case, since I manually tested the URL payload given in URL.txt multiple times and it works.
URL.txt is laid out as: an empty line, then a payload line, then a newline, then the next payload line, and so on.
I want to generate as many parallel requests as possible, without waiting for the first one to finish.
Debug
Running it with -v using the one-liner (showing only the important lines):
> GET /getMultiDashboard/demouser -H Cookie: _ga=GA1.2.561275388.1601468723; _hjid=ecd3d778-b7f5-4f7f-b3ef-6f9f12b13d66; 54651cc_an=4; _hjTLDTest=1; 54651cc_data=JTdCJTIyaWQlMjIlM0ElMgwOGE2N2RmZjlhMmJlOWJmODE5NDQzJTIyJTdE; 54651cc_loggedin=1; 54651cc_sound=true; 54651cc_read=true; 54651cc_popup=true; 54651cc_disablelastseen=false; 54651cc_usertype=loginuser; _gid=GA1.2.1722546791.1601890062; _hjIncludedInPageviewSample=1; _hjAbsoluteSessionInProgress=0; abc=s%3AKsRWcfNnOkbDHh1e65C3NwiDSZMx4LYg.zxLIymu488Ii5Z2%2Brz0qiwS17BzK2P7A0OoTSCHlMQM; pageUrl=/ HTTP/1.1
> Host: abc.ai
> User-Agent: curl/7.58.0
> Accept: */*
>
{ [5 bytes data]
< HTTP/1.1 400 BAD_REQUEST
< Content-Length: 0
< Connection: Close
When I run it with curl alone, not using xargs, I get the correct output and no 400 Bad Request:
> Cookie: _ga=GA1.2.561275388.1601468723; _hjid=ecd3d778-b7f5-4f7f-b3ef-6f9f12b13d66; 54651cc_an=4; _hjTLDTest=1; 54651cc_data=JTdCJTIyaWQlMjIlM0ElMjJkZW1vdXNlciUyMiUyQyUyMm4lMjIlM0ElMjJkZW1vdXNJlOWJmODE5NDQzJTIyJTdE; 54651cc_loggedin=1; 54651cc_sound=true; 54651cc_read=true; 54651cc_popup=true; 54651cc_disablelastseen=false; 54651cc_usertype=loginuser; _gid=GA1.2.1722546791.1601890062; _hjIncludedInPageviewSample=1; _hjAbsoluteSessionInProgress=0; abc=s%3AKsRWcfNnOkbDHh1e65C3NwiDSZMx4LYg.zxLIymu488Ii5Z2%2Brz0qiwS17BzK2P7A0OoTSCHlMQM; pageUrl=/#/dashboard; _gat_gtag_UA_97985973_5=1
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Date: Mon, 05 Oct 2020 09:48:51 GMT
< ETag: W/"3b4-gP1vMAXMzUZy+pt7cwyOmQslPT8"
< Server: nginx/1.18.0
< Strict-Transport-Security: max-age=15552000; includeSubDomains
< Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
< X-DNS-Prefetch-Control: off
< X-Download-Options: noopen
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Content-Length: 948
< Connection: keep-alive
<
* Connection #0 to host abc.ai left intact
[{"dashboard_id": 18, "user_id": "demouser", "dashboard_name": "My_dashboard_1", "description": "Test description One", "creation_date": "2020-09-21 10:13:00", "2020-08-12 09:08:00", "dashboard_config": {}, "sort_id": 4, "id": 2}, {"dashboard_id": 5}]* Found bundle for host abc.ai: 0x55836cf75a50 [can pipeline]
* Re-using existing connection! (#0) with host abc.ai
* Connected to abc.ai (52.86.136.249) port 443 (#0)
> GET /getTagTr/E1_CP/2020-9-12%2013:4:0/202'6368'0-09-15|%2013:04:00 HTTP/1.1
> Host: abc.ai
> User-Agent: curl/7.58.0
> Accept: */*
> Cookie: _ga=GA1.2.561275388.1601468723; _hjid=ecd3d778-b7f5-4f7f-b3ef-6f9f12b13d66; 54651cc_an=4; _hjTLDTest=1; 54651cc_data=JTdCJTIyaWQlMjIlM0ElMjJkZW1vdXNlciUyMiUyQyUyMmjM3NTgwOGE2N2RmZjlhMmJlOWJmODE5NDQzJTIyJTdE; 54651cc_loggedin=1; 54651cc_sound=true; 54651cc_read=true; 54651cc_popup=true; 54651cc_disablelastseen=false; 54651cc_usertype=loginuser; _gid=GA1.2.1722546791.1601890062; _hjIncludedInPageviewSample=1; _hjAbsoluteSessionInProgress=0; abc=s%3AKsRWcfNnOkbDHh1e65C3NwiDSZMx4LYg.zxLIymu488Ii5Z2%2Brz0qiwS17BzK2P7A0OoTSCHlMQM; pageUrl=/#/dashboard; _gat_gtag_UA_97985973_5=1
Having multiple curl arguments and options in the same file adds a complication which probably isn't worth working around. Basically,
echo "http://example.com -H 'X-Hello: Hello'" | xargs curl -O
passes the entire string given to echo as a single argument to curl, which interprets it as the URL to fetch.
My suggestion would be to put the URL and any other arguments on the command line, and only store the -H option's argument in the file.
for ((i=0; i<1000; ++i)); do
curl -O http://example.com -H "$(sed "s/%|/%$((1 + RANDOM))|/" xm.cookiefile)"
done
and run 400 (or whatever) of these jobs in parallel, perhaps just as regular background processes, or maybe with xargs if you think it adds value. (Maybe also look at GNU parallel which simplifies some aspects of this.)
I took out the big modulo because it's not doing anything; $RANDOM produces integers in the range 0-32767 so if you need a much bigger number, maybe paste together multiple $RANDOM numbers, or maybe use a different random source.
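A sketch of the "regular background processes" variant (example.com and xm.cookiefile are the placeholders from the loop above): launch a batch of curls in the background and wait for the whole batch to finish.
#!/bin/bash
for ((i = 0; i < 400; ++i)); do
  # each job gets its own randomized cookie header, as in the loop above
  curl -s -O http://example.com -H "$(sed "s/%|/%$((1 + RANDOM))|/" xm.cookiefile)" &
done
wait   # block until all background curls have finished
For thousands of requests, xargs -P or GNU parallel would keep a bounded number of jobs running instead of forking them all at once.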
I have created a Spring Boot application with Tomcat 9.0.16, Spring Boot 2.1.3.RELEASE, and JDK 1.8.
When I make a curl POST request with --http2, it says "curl: (56) Recv failure: Connection reset by peer",
but when I use --http2-prior-knowledge it works fine.
My application.properties file:
server.port=8080
server.http2.enabled=true
and config file:
@Bean
public WebServerFactoryCustomizer tomcatCustomizer() {
    return (container) -> {
        if (container instanceof TomcatServletWebServerFactory) {
            ((TomcatServletWebServerFactory) container)
                    .addConnectorCustomizers((connector) -> {
                        connector.addUpgradeProtocol(new Http2Protocol());
                    });
        }
    };
}
For curl -vvv --http2 -H 'Content-Type: application/json' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
the logs from curl are:
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc78a808a00)
* Expire in 200 ms for 4 (transfer 0x7fc78a808a00)
* Connected to localhost (::1) port 8080 (#0)
> POST /save HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* upload completely sent off: 702 out of 702 bytes
< HTTP/1.1 101
< Connection: Upgrade
< Upgrade: h2c
< Date: Wed, 27 Mar 2019 12:29:18 GMT
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
* Recv failure: Connection reset by peer
* Failed receiving HTTP2 data
* Send failure: Broken pipe
* Failed sending HTTP2 data
* Connection #0 to host localhost left intact
curl: (56) Recv failure: Connection reset by peer
curl -vvv --http2-prior-knowledge -H 'Content-Type: application/json' -H 'Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
* Expire in 0 ms for 6 (transfer 0x7fc5c0808a00)
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc5c0808a00)
* Expire in 200 ms for 4 (transfer 0x7fc5c0808a00)
* Connected to localhost (::1) port 8080 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fc5c0808a00)
> POST /save HTTP/2
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
< HTTP/2 200
< content-type: application/json;charset=UTF-8
< date: Wed, 27 Mar 2019 12:32:26 GMT
<
* Connection #0 to host localhost left intact
true%
You cannot use a POST method to perform an HTTP/1.1 upgrade, so Tomcat is probably choking on your first request (curl --http2 ...) for that reason.
I am the HTTP/2 implementer in Jetty, and Jetty also does not upgrade to HTTP/2 in that case, although it responds with HTTP/1.1 200 to the request, rather than choking.
Converting the first request to a GET without content, the upgrade succeeds in Jetty with a HTTP/1.1 101 response, as expected.
The second request is not an HTTP/1.1 upgrade, but a prior knowledge HTTP/2 request; there is no upgrade and therefore no limitation as to what HTTP method you can use, so the request succeeds in both Jetty and Tomcat.
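A sketch of the distinction against the question's endpoint (assuming /save also answers a bodyless GET; any path reachable with GET would demonstrate the upgrade):
# h2c upgrade: needs a request without a body, answered with 101 Switching Protocols
curl -v --http2 http://localhost:8080/save
# prior knowledge: HTTP/2 from the first byte, so a POST with a body is fine
curl -v --http2-prior-knowledge -XPOST http://localhost:8080/save -H 'Content-Type: application/json' -d '{"xyz":"xyz"}'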
I want to create a curl call in a #!/bin/sh script, and the part that is giving me a headache is:
STR="-XGET -v -H 'Authorization: Bearer "$TOKEN"' https://my_endpoint/sth?"
echo $STR
CON="$(curl $STR)"
The echo there is:
-XGET -v -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXUyJ9.eyJleHAiOjE0NTM5NzIxODcsInVzZXJuYW1lIjoid2luZ3UtY2xpZW50LWFkbWluQHNwZWljaGVyMjEwLmNvbSIsImlhdCI6IjE0NTM4ODU3ODcifQ.ADLcrV4yph6owwgMLak2RsnC95WK17ULflCisvNWBkeA93G4WUQ-BMrjnhQeuIgSfxSYnAmgmI36ggc2PytWhkqk8lIYrMJTH80tggBYCnnuA2lM26IZ2ViUMK1cj-BH3-dh4HmqSm_hozAFnVqGQi9P5J4CBz8eCf_mKc3iq-7EnXRikTkgakF69-jPfFA_9yO26JzZeDpymowa-LRPafWPtYinzmkaUQ2SHjUdWtGmELAyzkGUOOXrZ8TgvV9Yeb-OnEoY54GRSlb4ogVzAwWCJ2Y6vxmvNpAN5wiUZMylqTGhnqFr9MOp4JId1RavjwT7STRp9bCHBxD55CtYoEQ-oSpDv6WkgB07CtCRi0Spx9ErVsaB0Xf1mH9XKAVjOQ_dNNpKTxlqIXMbbosxEhjYE9K6Z30c3uWhCgccNdEEHxPhi7d2bRO3M_3fJPKsYWWk5DXhxmkFpJ4fLf05JO31FFIoj8q7H3c5NEvXVk_keS-jPY5iP5xRY1dv9P8bWPEwFk1-qQrXZ1mMNiLDLxdB9cXE9Tm6Eo4Rxo71H2o4Z1DHmnVHHctsATzywsJIe3o8Ym5o0OsmS3WH3EJ-IS572lFv-ZQSkt3fq927JlvWotd9HHMT2MOPf8Zg5cdNd-hclZUfj7qOi-a0Afbn2cT3FccBXJ8l4' https://my_endpoint/sth?
And when I copy this from the terminal and paste it after curl, it works; but in the verbose log I find:
-XGET -v -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXUyJ9.eyJleHAiOjE0NTM5NzIxODcsInVzZXJuYW1lIjoid2luZ3UtY2xpZW50LWFkbWluQHNwZWljaGVyMjEwLmNvbSIsImlhdCI6IjE0NTM4ODU3ODcifQ.ADLcrV4yph6owwgMLak2RsnC95WK17ULflCisvNWBkeA93G4WUQ-BMrjnhQeuIgSfxSYnAmgmI36ggc2PytWhkqk8lIYrMJTH80tggBYCnnuA2lM26IZ2ViUMK1cj-BH3-dh4HmqSm_hozAFnVqGQi9P5J4CBz8eCf_mKc3iq-7EnXRikTkgakF69-jPfFA_9yO26JzZeDpymowa-LRPafWPtYinzmkaUQ2SHjUdWtGmELAyzkGUOOXrZ8TgvV9Yeb-OnEoY54GRSlb4ogVzAwWCJ2Y6vxmvNpAN5wiUZMylqTGhnqFr9MOp4JId1RavjwT7STRp9bCHBxD55CtYoEQ-oSpDv6WkgB07CtCRi0Spx9ErVsaB0Xf1mH9XKAVjOQ_dNNpKTxlqIXMbbosxEhjYE9K6Z30c3uWhCgccNdEEHxPhi7d2bRO3M_3fJPKsYWWk5DXhxmkFpJ4fLf05JO31FFIoj8q7H3c5NEvXVk_keS-jPY5iP5xRY1yirgabLv9P8bWPEwFk1-qQrXZ1mMNiLDLxdB9cXE9Tm6Eo4Rxo71H2o4Z1DHmnVHHctsATzywsJIe3o8Ym5o0OsmS3WH3EJ-IS572lFv-ZQSkt3fq927JlvWot5MN9HHMT2MOPf8Zg5c6udNd-hclZUfj7qOi-a0Afbn2cT3FccBXJ8l4' https://my_endpoint/sth?
* Rebuilt URL to: Bearer/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Could not resolve host: Bearer
* Closing connection 0
curl: (6) Could not resolve host: Bearer
* Rebuilt URL to: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXUyJ9.eyJleHAiOjE0NTM5NzIxODcsInVzZXJuYW1lIjoid2luZ3UtY2xpZW50LWFkbWluQHNwZWljaGVyMjEwLmNvbSIsImlhdCI6IjE0NTM4ODU3ODcifQ.ADLcrV4yph6owwgMLak2RsnC95WK17ULflCisvNWBkeA93G4WUQ-BMrjnhQeuIgSfxSYnAmgmI36ggc2PytWhkqk8lIYrMJTH80tggBYCnnuA2lM26IZ2ViUMK1cj-BH3-dh4HmqSm_hozAFnVqGQi9P5J4CBz8eCf_mKc3iq-7EnXRikTkgakF69-jPfFA_9yO26JzZeDpymowa-LRPafWPtYinzmkaUQ2SHjUdWtGmELAyzkGUOOXrZ8TgvV9Yeb-OnEoY54GRSlb4ogVzAwWCJ2Y6vxmvNpAN5wiUZMylqTGhnqFr9MOp4JId1RavjwT7STRp9bCHBxD55CtYoEQ-oSpDv6WkgB07CtCRi0Spx9ErVsaB0Xf1mH9XKAVjOQ_dNNpKTxlqIXMbbosxEhjYE9K6Z30c3uWhCgccNdEEHxPhi7d2bRO3M_3fJPKsYWWk5DXhxmkFpJ4fLf05JO31FFIoj8q7H3c5NEvXVk_keS-jPY5iP5xRY1yirgabLv9P8bWPEwFk1-qQrXZ1mMNiLDLxdB9cXE9Tm6Eo4Rxo71H2o4Z1DHmnVHHctsATzywsJIe3o8Ym5o0OsmS3WH3EJ-IS572lFv-ZQSkt3fq927JlvWot5MN9HHMT2MOPf8Zg5c6udNd-hclZUfj7qOi-a0Afbn2cT3FccBXJ8l4'/
* Could not resolve host: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXUyJ9.eyJleHAiOjE0NTM5NzIxODcsInVzZXJuYW1lIjoid2luZ3UtY2xpZW50LWFkbWluQHNwZWljaGVyMjEwLmNvbSIsImlhdCI6IjE0NTM4ODU3ODcifQ.ADLcrV4yph6owwgMLak2RsnC95WK17ULflCisvNWBkeA93G4WUQ-BMrjnhQeuIgSfxSYnAmgmI36ggc2PytWhkqk8lIYrMJTH80tggBYCnnuA2lM26IZ2ViUMK1cj-BH3-dh4HmqSm_hozAFnVqGQi9P5J4CBz8eCf_mKc3iq-7EnXRikTkgakF69-jPfFA_9yO26JzZeDpymowa-LRPafWPtYinzmkaUQ2SHjUdWtGmELAyzkGUOOXrZ8TgvV9Yeb-OnEoY54GRSlb4ogVzAwWCJ2Y6vxmvNpAN5wiUZMylqTGhnqFr9MOp4JId1RavjwT7STRp9bCHBxD55CtYoEQ-oSpDv6WkgB07CtCRi0Spx9ErVsaB0Xf1mH9XKAVjOQ_dNNpKTxlqIXMbbosxEhjYE9K6Z30c3uWhCgccNdEEHxPhi7d2bRO3M_3fJPKsYWWk5DXhxmkFpJ4fLf05JO31FFIoj8q7H3c5NEvXVk_keS-jPY5iP5xRY1yirgabLv9P8bWPEwFk1-qQrXZ1mMNiLDLxdB9cXE9Tm6Eo4Rxo71H2o4Z1DHmnVHHctsATzywsJIe3o8Ym5o0OsmS3WH3EJ-IS572lFv-ZQSkt3fq927JlvWot5MN9HHMT2MOPf8Zg5c6udNd-hclZUfj7qOi-a0Afbn2cT3FccBXJ8l4'
* Closing connection 1
curl: (6) Could not resolve host: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXUyJ9.eyJleHAiOjE0NTM5NzIxODcsInVzZXJuYW1lIjoid2luZ3UtY2xpZW50LWFkbWluQHNwZWljaGVyMjEwLmNvbSIsImlhdCI6IjE0NTM4ODU3ODcifQ.ADLcrV4yph6owwgMLak2RsnC95WK17ULflCisvNWBkeA93G4WUQ-BMrjnhQeuIgSfxSYnAmgmI36ggc
* Trying 54.228.198.226...
* Connected to my_endpoint (54.228.198.226) port 443 (#2)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.herokuapp.com
* Server certificate: DigiCert SHA2 High Assurance Server CA
* Server certificate: DigiCert High Assurance EV Root CA
> GET /api/sth? HTTP/1.1
> Host: my_endpoint/sth?
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Connection: keep-alive
< Server: nginx/1.8.0
< Content-Type: application/json
< Transfer-Encoding: chunked
< X-Powered-By: PHP/5.6.17
< Cache-Control: no-cache
< Allow: GET
< Date: Wed, 27 Jan 2016 09:09:49 GMT
< Via: 1.1 vegur
<
{ [59 bytes data]
100 48 0 48 0 0 41 0 --:--:-- 0:00:01 --:--:-- 0
* Connection #2 to host my_endpoint left intact
What's wrong with that header?
Use an array to store the command arguments. (When the command is built up in a plain string, the shell never re-parses the quotes inside the expanded variable, so the literal single quotes and the spaces split the header into separate words, which is why curl tries to resolve "Bearer" as a host.)
curl_options=(-XGET -v -H "Authorization: Bearer $TOKEN" "https://my_endpoint/sth?")
content=$(curl "${curl_options[@]}")
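Spelled out as a full-script sketch (arrays are a bash feature, so the shebang needs to be bash rather than the question's /bin/sh; the token value stays elided as in the question):
#!/bin/bash
TOKEN="..."   # the real token goes here
curl_options=(-XGET -v -H "Authorization: Bearer $TOKEN" "https://my_endpoint/sth?")
# each array element expands as a single word, so the header value keeps its spaces
CON="$(curl "${curl_options[@]}")"
echo "$CON"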
You can use
eval curl $STR "https://my_endpoint" > 'output_file'
I am trying to access google.com from work using cURL for Windows 32-bit (with SSH version). I am connecting via my company's proxy server, but I am getting a 400 Cycle Detected error. Could someone please let me know why I am getting this error? The command and error message are as follows (proxy IP changed to XXXX):
Command:
%curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -v --proxy-ntlm XXX.XXX.XXX.XXX:8080 -U name:password -I http://www.google.com
Output:
Enter proxy password for user 'name':
* Rebuilt URL to: XXX.XXX.XXX.XXX:8080/
* About to connect() to XXX.XXX.XXX.XXX port 8080 (#0)
* Trying XXX.XXX.XXX.XXX...
* Adding handle: conn: 0xcb0520
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0xcb0520) send_pipe: 1, recv_pipe: 0
* Connected to XXX.XXX.XXX.XXX (XXX.XXX.XXX.XXX) port 8080 (#0)
> HEAD / HTTP/1.1
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre
> Host: XXX.XXX.XXX.XXX:8080
> Accept: */*
>
< HTTP/1.1 400 Cycle Detected
HTTP/1.1 400 Cycle Detected
< Date: Mon, 25 Nov 2013 11:56:06 GMT
Date: Mon, 25 Nov 2013 11:56:06 GMT
< Via: 1.1 localhost.localdomain
Via: 1.1 localhost.localdomain
< Cache-Control: no-store
Cache-Control: no-store
< Content-Type: text/html
Content-Type: text/html
< Content-Language: en
Content-Language: en
< Content-Length: 288
Content-Length: 288
<
* Connection #0 to host XXX.XXX.XXX.XXX left intact
* Rebuilt URL to: http://www.google.com/
* Adding handle: conn: 0xcb12f8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 1 (0xcb12f8) send_pipe: 1, recv_pipe: 0
* About to connect() to www.google.com port 80 (#1)
* Trying 173.194.115.50...
* Connection refused
* Trying 173.194.115.51...
* Connection refused
* Trying 173.194.115.49...
* Connection refused
* Trying 173.194.115.48...
* Connection refused
* Trying 173.194.115.52...
* Connection refused
* Failed connect to www.google.com:80; Connection refused
* Closing connection 1
curl: (7) Failed connect to www.google.com:80; Connection refused
For what it's worth, I am able to connect to google.com via a browser using the said proxy address. And I am sure that I am giving the password (for the proxy) correctly.
You have to set the proxy via the --proxy parameter or the -x parameter, not via --proxy-ntlm. Try this, please:
curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -L --proxy http://xxx.xxx.xxx.xxx:8080 --proxy-ntlm -U name:password http://www.google.com
If you enter a new redirect cycle, you can try without the -L parameter or set the --max-redirs parameter.
cURL manpage
I believe you are being knocked back due to authentication. Your work proxy likely requires authentication before it will allow you to access websites through it.
If your work uses Active Directory SSO (Single Sign On), try the following with your domain username and password:
curl --ntlm --user username:password http://www.google.com
If not, try the following for basic auth:
curl --user username:password http://www.google.com
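A combined sketch along the lines of the first answer (the proxy address and credentials are the question's placeholders; -U is curl's short form of --proxy-user):
# -x sets the proxy, --proxy-ntlm selects NTLM authentication for the proxy,
# and -U/--proxy-user supplies the proxy credentials
curl -v -x http://XXX.XXX.XXX.XXX:8080 --proxy-ntlm -U name:password -I http://www.google.com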