I have to get the sid value from the response header and use this value in the next request, but my regular expression is not picking up the value.
Response:
HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Location: https://abc.be.com/tsg/de.aspx?sid=s44mNTM3MkRCRUMtRkEfrgtfTEwRDMyQUZFJDI1MDYxMTE5m12s
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Date: Mon, 07 Jun 2021 10:16:04 GMT
Content-Length: 0
I have tried on my own and found a solution using the following regex (escaping the . and ? metacharacters so they match literally):
Location: .+/de\.aspx\?sid=(.*?)\n
Just sid=(.*) should do the trick for you.
Where:
() - grouping
. - matches any character
* - zero or more occurrences
So it will start from sid= and capture everything till the line break
More information:
JMeter Regular Expressions
Using RegEx (Regular Expression Extractor) with JMeter
Perl 5 Regex Cheat sheet
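To sanity-check the pattern outside JMeter, a shell equivalent is handy. A minimal sketch, assuming the response headers above are saved in a file called headers.txt (my name, not the original poster's):
# Emulate the sid=(.*) capture with GNU grep's PCRE mode (-P); \K drops the sid= prefix
grep -oP 'sid=\K\S+' headers.txt
# -> s44mNTM3MkRCRUMtRkEfrgtfTEwRDMyQUZFJDI1MDYxMTE5m12s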
Currently I am using:
#!/bin/bash
PROCESS=$(curl --location --request POST 'https://jsonplaceholder.typicode.com/posts' \
--header 'Content-Type: application/json' \
--data-raw '{"title": "foo","body": "bar","userId": "1"}')
echo "$PROCESS"
And getting:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   111  100    67  100    44    208    137 --:--:-- --:--:-- --:--:--   344
{
"title": "foo",
"body": "bar",
"userId": "1",
"id": 101
}
But I also want the response status, e.g. 201, or headers like this:
HTTP/2 200
date: Mon, 30 Nov 2020 14:00:56 GMT
content-type: application/json; charset=utf-8
set-cookie: __cfduid=dfda1e85d5738eb18115dc0a07311a4dd1606744856; expires=Wed, 30-Dec-20 14:00:56 GMT; path=/; domain=.typicode.com; HttpOnly; SameSite=Lax
x-powered-by: Express
x-ratelimit-limit: 1000
x-ratelimit-remaining: 999
x-ratelimit-reset: 1606702897
vary: Origin, Accept-Encoding
access-control-allow-credentials: true
cache-control: max-age=43200
pragma: no-cache
expires: -1
x-content-type-options: nosniff
etag: W/"6b80-Ybsq/K6GwwqrYkAsFxqDXGC7DoM"
via: 1.1 vegur
cf-cache-status: HIT
age: 13185
cf-request-id: 06bb0df15c0000edfbfb9b8000000001
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=ABBCY6aKAHfezboFKgcq%2FlsWKQZDAORup49fKMArhm%2BYl3Kb99pMLrZpLtbXsfz%2BQ6RxnutmzE0mCX5AcIVGRjmq%2FIrIja5MeNFFnmpO7WBT1725PWdN1J0KFhcqNxvNP8He2TBjfd3N"}],"group":"cf-nel","max_age":604800}
nel: {"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 5fa518fbcbdfedfb-CDG
I want to do the POST and then echo out the body and the response code in a nice way.
The response code is sent in the HTTP status line, which curl returns along with the headers.
You may redirect headers to STDERR e.g. as described here: Report HTTP Response Headers to stderr?
So you may do this:
out=$(curl -s -D /dev/stderr http://boardreader.com 2>/tmp/headers)
# parse /tmp/headers
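Applied to the POST above, a minimal sketch of that temp-file approach (the /tmp/headers path is just an example) might look like:
#!/bin/bash
# -D /dev/stderr sends the response headers to stderr; 2> captures them in a file
body=$(curl -s -D /dev/stderr -X POST 'https://jsonplaceholder.typicode.com/posts' \
  --header 'Content-Type: application/json' \
  --data-raw '{"title": "foo","body": "bar","userId": "1"}' 2>/tmp/headers)
status=$(head -n1 /tmp/headers | awk '{print $2}')  # e.g. 201 for a created resource
echo "status: $status"
echo "body: $body"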
If you don't want to mess with a temp file, you may try more complex solutions like:
Capture stdout and stderr into different variables
You can only issue either a POST or a header request in one call, so you will need to do this in two separate calls read into the same variable:
PROCESS=$(curl -I 'https://jsonplaceholder.typicode.com/posts' && curl -X POST 'https://jsonplaceholder.typicode.com/posts' --header 'Content-Type: application/json' --data '{"title": "foo","body": "bar","userId": "1"}')
To me it makes sense to check the headers first and, if that command is successful, get the JSON response, with both being read into the PROCESS variable. You can of course change the order if you wish.
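A hedged follow-up sketch for pulling the status code back out of that combined variable (assuming the header call's status line comes first, as in the command above):
status=$(printf '%s\n' "$PROCESS" | head -n1 | awk '{print $2}')
echo "status: $status"   # e.g. 200 for the -I call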
I am using the httpclient gem. It works fine on Windows, but I just moved to AWS EC2, tried it on https://victoriassecret.com, and it gets this response:
= Response
HTTP/1.1 920 Unknown
Content-Type: text/html
Date: Wed, 21 Oct 2015 21:42:51 GMT
Connection: Keep-Alive
Content-Length: 23
<h1>File not found</h1>#<HTTP::Message:0x000000023f5168
 @http_body=
  #<HTTP::Message::Body:0x000000023f50a0
   @body="<h1>File not found</h1>",
   @chunk_size=nil,
   @positions=nil,
   @size=0>,
 @http_header=
  #<HTTP::Message::Headers:0x000000023f5140
   @body_charset=nil,
   @body_date=nil,
   @body_encoding=#<Encoding:ASCII-8BIT>,
   @body_size=0,
   @body_type=nil,
   @chunked=false,
   @dumped=false,
   @header_item=
    [["Content-Type", "text/html"],
     ["Date", "Wed, 21 Oct 2015 21:42:51 GMT"],
     ["Connection", "Keep-Alive"],
     ["Content-Length", "23"]],
   @http_version="1.1",
   @is_request=false,
   @reason_phrase="Unknown",
   @request_absolute_uri=nil,
   @request_method="GET",
   @request_query=nil,
   @request_uri=
    #<URI::HTTPS:0x000000023f58c0 URL:https://www.victoriassecret.com/pink/new-and-now>,
   @status_code=920>,
 @peer_cert=
  #<OpenSSL::X509::Certificate: subject=#<OpenSSL::X509::Name:0x000000024ebe00>, issuer=#<OpenSSL::X509::Name:0x000000024ebec8>, serial=#<OpenSSL::BN:0x000000024de110>, not_before=2015-05-27 00:00:00 UTC, not_after=2017-05-26 23:59:59 UTC>,
 @previous=nil>
It fails only with this website; httpclient get https://google.com, for example, works fine, and on Windows I get a normal response from httpclient get https://www.victoriassecret.com. But when using the standard Net::HTTP library I get the same 920 response on Windows too.
This isn't EC2-related. It's most likely related to the User-Agent header sent by the various HTTP library implementations.
For example, they clearly don't like 'wget':
curl -A "Wget/1.13.4 (linux-gnu)" -v https://www.victoriassecret.com
* Rebuilt URL to: https://www.victoriassecret.com/
* Trying 98.158.54.100...
* Connected to www.victoriassecret.com (98.158.54.100) port 443 (#0)
* TLS 1.2 # truncated
> GET / HTTP/1.1
> Host: www.victoriassecret.com
> User-Agent: Wget/1.13.4 (linux-gnu)
> Accept: */*
>
< HTTP/1.1 910 Unknown
< Content-Type: text/html
< Date: Thu, 22 Oct 2015 01:16:31 GMT
< Connection: Keep-Alive
< Content-Length: 23
<
* Connection #0 to host www.victoriassecret.com left intact
<h1>File not found</h1>%
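A quick way to test that theory from the EC2 host; note the browser-style User-Agent string below is illustrative, not taken from the original post:
# Compare status codes for a wget-like and a browser-like User-Agent
curl -s -o /dev/null -w '%{http_code}\n' -A 'Wget/1.13.4 (linux-gnu)' https://www.victoriassecret.com
curl -s -o /dev/null -w '%{http_code}\n' -A 'Mozilla/5.0 (X11; Linux x86_64)' https://www.victoriassecret.com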
I am trying to secure my HDP2 Hadoop cluster using Kerberos.
So far HDFS, Hive, HBase, Hue Beeswax, and the Hue Job/Task Browsers are working properly; however, Hue's File Browser is not working. It answers:
WebHdfsException at /filebrowser/
AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS] (error 500)
Request Method: GET
Request URL: http://bt1svlmy:8000/filebrowser/
Django Version: 1.2.3
Exception Type: WebHdfsException
Exception Value:
AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS] (error 500)
Exception Location: /usr/lib/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py in _stats, line 208
Python Executable: /usr/bin/python2.6
Python Version: 2.6.6
(...)
My hue.ini file is configured with security_enabled=true and all the other related parameters set.
I believe the problem is with WebHDFS.
I tried the curl commands given at http://hadoop.apache.org/docs/r1.0.4/webhdfs.html#Authentication
curl -i --negotiate -L -u : "http://172.19.115.50:14000/webhdfs/v1/filetoread?op=OPEN"
answers:
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Type: text/html;charset=utf-8
Content-Length: 1027
Date: Wed, 08 Oct 2014 06:55:51 GMT
<html><head><title>Apache Tomcat/6.0.37 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 403 - Anonymous requests are disallowed</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>Anonymous requests are disallowed</u></p><p><b>description</b> <u>Access to the specified resource has been forbidden.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/6.0.37</h3></body></html>
And I could reproduce Hue's error message by adding a user with the following curl request:
curl --negotiate -i -L -u: "http://172.19.115.50:14000/webhdfs/v1/filetoread?op=OPEN&user.name=theuser"
It answers:
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Set-Cookie: hadoop.auth=u=theuser&p=theuser&t=simple&e=1412735529027&s=rQAfgMdExsQjx6N8cQ10JKWb2kM=; Path=/; Expires=Wed, 08-Oct-2014 02:32:09 GMT; HttpOnly
Content-Type: application/json
Transfer-Encoding: chunked
Date: Tue, 07 Oct 2014 16:32:09 GMT
Connection: close
{"RemoteException":{"message":"SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
It seems that there is no Kerberos negotiation between WebHDFS and curl.
I was expecting something like:
HTTP/1.1 401 Unauthorized
Content-Type: text/html; charset=utf-8
WWW-Authenticate: Negotiate
Content-Length: 0
Server: Jetty(6.1.26)
HTTP/1.1 307 TEMPORARY_REDIRECT
Content-Type: application/octet-stream
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=exampleuser&p=exampleuser@MYCOMPANY.COM&t=kerberos&e=1375144834763&s=iY52iRvjuuoZ5iYG8G5g12O2Vwo=";Path=/
Location: http://hadoopnamenode.mycompany.com:1006/webhdfs/v1/user/release/docexample/test.txt?op=OPEN&delegation=JAAHcmVsZWFzZQdyZWxlYXNlAIoBQCrfpdGKAUBO7CnRju3TbBSlID_osB658jfGfRpEt8-u9WHymRJXRUJIREZTIGRlbGVnYXRpb24SMTAuMjAuMTAwLjkxOjUwMDcw&offset=0
Content-Length: 0
Server: Jetty(6.1.26)
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Length: 16
Server: Jetty(6.1.26)
A|1|2|3
B|4|5|6
Any idea what could have gone wrong?
I do have this in my hdfs-site.xml on every node:
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>HTTP/_HOST@MY-REALM.COM</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/etc/hadoop/conf/HTTP.keytab</value> <!-- path to the HTTP keytab -->
</property>
Looks like you are not accessing WebHDFS (default port 50070) but HttpFS (default port 14000), which is a "plain" webapp that is not secured the same way.
A WebHDFS URL is usually something like http://namenode:50070/webhdfs/v1; try modifying hue.ini with that parameter (WebHDFS is recommended over HttpFS).
I would like to do some redirects, but involving the $args.
I am trying to do the following:
rewrite /aaa?a=1&aa=2 /bbb?b=1&bb=2 permanent;
But it does not work. The line below works fine, though
rewrite /aaa /bbb permanent;
I added those lines to my config file:
proxy_set_header x-request_uri "$request_uri";
proxy_set_header x-args "$args";
And I can see those headers:
GET /aaa?a=1&aa=2 HTTP/1.0
Host: www.example.com
x-request_uri: /aaa?a=1&aa=2
x-args: a=1&aa=2
Connection: close
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Accept: */*
What am I doing wrong? Is there a way to accomplish the redirect taking the full $request_uri into consideration?
I got the answer on irc.freenode.net #nginx:
mod_rewrite does not match against the URL with args, only without; use if or map instead.
I managed to get it working with if:
if ( $request_uri = '/aaa?a=1&aa=2' ){
return 301 $scheme://$host/bbb?b=1&bb=2;
}
Response header:
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.0.15
< Date: Wed, 02 Jul 2014 20:05:34 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: http://www.example.com/bbb?b=1&bb=2
< x-uri: /aaa?a=1&aa=2
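For reference, the redirect can be confirmed from the command line (www.example.com as in the headers above):
# Show just the status line and the Location header for the rewritten URL
curl -sI 'http://www.example.com/aaa?a=1&aa=2' | grep -iE '^(HTTP|Location)'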
I have a Ruby CGI script which writes its output like this:
cgi.out("Cache-Control" => "no-cache, must-revalidate",
"type" => "text/html",
"charset" => "UTF-8") {
template.result(binding)
}
Unfortunately, when I view the headers from cURL, I see the following:
< HTTP/1.1 200 OK
< Date: Sun, 23 Aug 2009 09:48:03 GMT
< Server: Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_ssl/2.2.11 OpenSSL/0.9.8g
< 5541-Content-Type: text/html; charset=UTF-8
< Cache-Control: no-cache, must-revalidate
< Content-Length: 2495
< Cache-Control: max-age=86400
< Expires: Mon, 24 Aug 2009 09:48:03 GMT
< Content-Type: application/x-ruby
Something is mangling my Content-Type header (note the 5541- prefix) and adding a second Cache-Control header. Clearly I have something misconfigured.
Turns out I had a debugging 'print' statement which was executing before the cgi.out() line. This caused a bit of text to prefix the headers.
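For anyone hitting something similar: a quick way to spot stray output in front of the status line is to dump only the raw headers (the script URL below is illustrative):
# -D - prints the response headers to stdout; -o /dev/null discards the body
curl -s -D - -o /dev/null http://localhost/cgi-bin/myscript.rb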