HTTP Request Include Equals checkbox can't be unchecked - jmeter

When defining HTTP Request, there's a checkbox for each parameter: Include Equals
This checkbox can't be unchecked even when choosing different method or parameter.
I don't see any reference in HTTP Request for using it.
Why is this checkbox shown? Is there any usage for it?
Also, it seems that the per-parameter Content-Type value is ignored; in GET it isn't sent:
GET http://www.google.com/?token=0Bfdsa
GET data:
In POST it sends the regular www-form-urlencoded header:
Content-Type: application/x-www-form-urlencoded; charset=UTF-8

I had also wondered what it means, and I think I've found it. It gives you the option to include the = (equals) sign or not for parameters with no value: foo= vs. foo. If the parameter has a value, you cannot uncheck "Include Equals?":
| Name | Value | Include Equals? |
|-------|-------|:---------------:|
| foo | | [x] |
| bar | | [ ] |
| baz | qux | [x] |
The above configuration generates the following url-encoded form:
foo=&bar&baz=qux
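The behaviour is easy to reproduce outside JMeter; here is a minimal shell sketch of the same semantics (encode_param is a hypothetical helper of my own, not anything JMeter provides):

```shell
#!/bin/sh
# Minimal sketch of JMeter's "Include Equals?" semantics.
# encode_param NAME VALUE INCLUDE_EQUALS(yes|no) -- hypothetical helper.
encode_param() {
  if [ "$3" = "yes" ]; then
    printf '%s=%s' "$1" "$2"   # emit name=value (value may be empty)
  else
    printf '%s' "$1"           # emit the bare name, no equals sign
  fi
}

body="$(encode_param foo '' yes)&$(encode_param bar '' no)&$(encode_param baz qux yes)"
echo "$body"   # foo=&bar&baz=qux
```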
The "Content-Type" column appears to be used when the "Use multipart/form-data" option is checked: every parameter is sent as a separate part with its own Content-Type:
[x] Use multipart/form-data
| Name | Value | Content-Type |
|-------|-------|--------------|
| foo | | text/x-foo |
| bar | | text/x-bar |
| baz | qux | text/x-baz |
The generated request looks like:
Content-Type: multipart/form-data; boundary=zIVpNBG_m1irxcTtk7ByTwBgDHbsjB1UjTdRTS
--zIVpNBG_m1irxcTtk7ByTwBgDHbsjB1UjTdRTS
Content-Disposition: form-data; name="foo"
Content-Type: text/x-foo; charset=US-ASCII
Content-Transfer-Encoding: 8bit
--zIVpNBG_m1irxcTtk7ByTwBgDHbsjB1UjTdRTS
Content-Disposition: form-data; name="bar"
Content-Type: text/x-bar; charset=US-ASCII
Content-Transfer-Encoding: 8bit
--zIVpNBG_m1irxcTtk7ByTwBgDHbsjB1UjTdRTS
Content-Disposition: form-data; name="baz"
Content-Type: text/x-baz; charset=US-ASCII
Content-Transfer-Encoding: 8bit
qux
--zIVpNBG_m1irxcTtk7ByTwBgDHbsjB1UjTdRTS--

Here is what worked for me: I unchecked 'Use multipart/form-data' and passed 'Content-Type: application/x-www-form-urlencoded' as a header instead.

Related

GitHub API, posting new comment using a variable

I have a file with a bunch of output from some performance tests. It looks similar to the following:
index | master | performance-fix | change %
--- | --- | --- | ---
load | 26212.8 | 28223.6 | 7.67%
type | 67.5 | 75.41 | 11.72%
minType | 56.91 | 59.6 | 4.73%
maxInserterSearch | 185.45 | 283.25 | 52.74%
minInserterHover | 25.97 | 27.55 | 6.08%
maxInserterHover | 44.47 | 44.7 | 0.52%
I am trying to submit a new comment on a GitHub issue using that table data. Standard text works fine, but when I try to pass the table along, I get the error:
{
"message": "Problems parsing JSON",
"documentation_url": "https://docs.github.com/rest/reference/issues#update-an-issue-comment"
}
My cURL request is as follows:
NEW_COMMENT=$(curl -sS \
-X PATCH \
-u $GH_LOGIN:$GH_AUTH_TOKEN \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/issues/comments/$COMMENT_ID" \
-d '{"body": "Results: <br />'"$TEST_RESULTS"'"}')
I have also tried creating the {"body": ...} using jq, and using the --data-urlencode flag. Both return the same "Problems parsing JSON" error.
It looks like $TEST_RESULTS contains characters that make the JSON invalid, such as quotation marks and newlines.
Escaping the string with jq should help. Note that jq -Rs '.' emits the value already wrapped in double quotes, so fold the prefix into it and don't add quotes of your own in the template:
escaped="$(printf 'Results: <br />%s' "$TEST_RESULTS" | jq -Rs '.')"
... \
-d '{"body": '"$escaped"'}')
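An alternative sketch that sidesteps manual quoting entirely: let jq construct the whole payload with --arg, which escapes embedded quotes and newlines for you (the TEST_RESULTS value below is an illustrative sample):

```shell
#!/bin/sh
# Build the JSON body with jq so quotes/newlines in the table are
# escaped automatically. The sample value is illustrative only.
TEST_RESULTS='load | 26212.8 | 28223.6 | 7.67%
type | 67.5 | 75.41 | 11.72%'

payload="$(jq -n --arg body "Results: <br />$TEST_RESULTS" '{body: $body}')"
echo "$payload"
# then: curl ... -d "$payload"
```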

BASH command-line method to obtain OUI vendor info from MAC address

I'm trying to reproduce a method outlined in an old Unix Stack Exchange post that uses a curl command to look up the vendor name for a locally obtained MAC address. The command is:
curl -sS "http://standards-oui.ieee.org/oui.txt" | grep -i "$OUI" | cut -d')' -f2 | tr -d '\t'
However, it produces nothing when I run it. I've verified that "$OUI" contains the OUI prefix of my MAC address. Example:
echo $OUI
EC-58-EA
This is because the HTTP server returns a 301 Moved Permanently response:
➜ ~ curl http://standards-oui.ieee.org/oui.txt -i
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.0
Date: Sun, 07 Mar 2021 05:41:37 GMT
Content-Type: text/html
Content-Length: 185
Location: http://standards-oui.ieee.org/oui/oui.txt
Connection: keep-alive
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.12.0</center>
</body>
</html>
The Location header points at the new location: http://standards-oui.ieee.org/oui/oui.txt
You can either curl the new location directly, or tell curl to follow the 301 redirect with -L: curl -L http://standards-oui.ieee.org/oui.txt
Testing:
➜ ~ curl -LsS "http://standards-oui.ieee.org/oui.txt" | grep -i "EC-58-EA" | cut -d')' -f2 | tr -d '\t'
Ruckus Wireless
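If you only have a full MAC address, here is a small sketch (mac_to_oui is my own helper name) that normalizes it into the upper-case, hyphenated prefix the IEEE registry uses:

```shell
#!/bin/sh
# Sketch: turn a colon-separated MAC into the upper-case, hyphenated
# OUI prefix found in oui.txt. mac_to_oui is a hypothetical helper.
mac_to_oui() {
  printf '%s' "$1" | tr 'a-f' 'A-F' | tr ':' '-' | cut -d'-' -f1-3
}

OUI="$(mac_to_oui 'ec:58:ea:12:34:56')"
echo "$OUI"   # EC-58-EA
# then: curl -LsS "http://standards-oui.ieee.org/oui.txt" | grep -i "$OUI" ...
```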

How can I capture response headers using Bash

I need to make a script that can get an access token located in the response headers of a website. Can anyone help me with it?
You can achieve this by piping curl's output through grep and cut. Here I capture the value of the Content-Length header:
curl -s -I example.com | grep "Content-Length" | cut -d ':' -f 2
Below is a sample script.
#!/bin/bash
DOMAIN="example.com"
HEADER="Content-Length"
HEADER_VALUE=$(curl -s -I "$DOMAIN" | grep "$HEADER" | cut -d ':' -f 2)
echo "$HEADER_VALUE"
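One caveat with the grep|cut approach: curl -I output uses CRLF line endings, so the captured value carries a trailing carriage return, and HTTP header names are case-insensitive. A sketch that handles both (extract_header is my own name), testable offline by feeding raw headers on stdin:

```shell
#!/bin/sh
# Sketch: read raw headers on stdin, strip the CR left by CRLF line
# endings, and match the header name case-insensitively.
# extract_header is a hypothetical helper, not a standard tool.
extract_header() {
  tr -d '\r' | awk -F': ' -v h="$1" 'tolower($1) == tolower(h) { print $2 }'
}

printf 'HTTP/1.1 200 OK\r\nContent-Length: 143\r\n\r\n' | extract_header content-length
# 143
```

With a live server this becomes: curl -sI "$DOMAIN" | extract_header "$HEADER".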
Try using curl with the -I option:
Example:
$ curl -I stackoverflow.com
HTTP/1.1 301 Moved Permanently
Content-Length: 143
Content-Type: text/html; charset=utf-8
Location: https://stackoverflow.com/
X-Request-Guid: 2396f2a8-3398-4264-9b26-ad79f282cb71
Content-Security-Policy: upgrade-insecure-requests
Accept-Ranges: bytes
Date: Mon, 08 Apr 2019 06:49:12 GMT
Via: 1.1 varnish
Connection: keep-alive
X-Served-By: cache-hhn1522-HHN
X-Cache: MISS
X-Cache-Hits: 0
X-Timer: S1554706153.814312,VS0,VE79
Vary: Fastly-SSL
X-DNS-Prefetch-Control: off
Set-Cookie: prov=e68f14b2-d35a-6ca6-8e8d-0b3f936049b4; domain=.stackoverflow.com;
expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly

Extracting HTTP content in bash doesn't work with nc output

Let's say I have this HTTP request:
POST / HTTP/1.1
Content-Type: text/plain;charset=UTF-8
Content-Length: 5
Connection: Keep-Alive
Accept-Encoding: gzip
Accept-Language: en,*
User-Agent: Mozilla/5.0
Host: 127.0.0.1:55764
Hello
And I'm interested only in the content ("Hello"). I found this command, which works when the text is fed from a file:
cat data.txt | tr '\n' '#' | sed "s/.*##//" | tr '#' '\n'
Hello
where data.txt contains the text above.
But if I try to feed it with the output of nc:
#!/bin/bash
while true
do
echo -e "HTTP/1.1 200 OK\n\n" | ./busybox-armv7l nc -l -p 55764 | tr '\n' '#' | sed "s/.*##//" | tr '#' '\n'
done
it doesn't work, i.e. it just prints out everything:
POST / HTTP/1.1
Content-Type: text/plain;charset=UTF-8
Content-Length: 5
Connection: Keep-Alive
Accept-Encoding: gzip
Accept-Language: en,*
User-Agent: Mozilla/5.0
Host: 127.0.0.1:55764
HelloPOST / HTTP/1.1
Content-Type: text/plain;charset=UTF-8
Content-Length: 5
Connection: Keep-Alive
Accept-Encoding: gzip
Accept-Language: en,*
User-Agent: Mozilla/5.0
Host: 127.0.0.1:55764
Hello
Why does the piping work with cat but not with nc?
The output of nc goes to stderr; just add & after the second | (i.e. use |&) to make the pipe effective:
echo -e "HTTP/1.1 200 OK\n\n" | ./busybox-armv7l nc -l -p 55764 |& tr '\n' '#' | sed "s/.*##//" | tr '#' '\n'
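A further wrinkle: real HTTP clients separate headers from the body with a CRLF blank line, which the ## trick misses because the carriage return sits between the two #s. An awk sketch (extract_body is my own name) that prints only the body regardless of line endings:

```shell
#!/bin/sh
# Sketch: print everything after the first blank line (the HTTP body).
# /^\r?$/ matches both an LF-only and a CRLF blank separator line.
extract_body() {
  awk 'body { print } /^\r?$/ { body = 1 }'
}

printf 'POST / HTTP/1.1\r\nContent-Length: 5\r\n\r\nHello' | extract_body
# Hello
```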

Query Expansion / Synonyms when using POST Method

The situation
Our Google Search Appliance (Software Version: 7.2.0.G.112) is setup to expand queries using a custom synonyms file containing for example this entry: {men, mens}
The problem
The search appliance appears to use these synonyms when responding to a GET request but not when responding to a POST request. See the table below
+-----------------+-------------+----------------+-----------+
| Request Type | Query | Result Count | Good? |
+-----------------+-------------+----------------+-----------+
| GET | mens | 705 | yes |
| POST | mens | 691 | yes |
| GET | men | 706 | yes |
| POST | men | 88 | no |
+-----------------+-------------+----------------+-----------+
The Question
How can I enable query expansion/synonyms for the POST request so that it returns (approximately) the same results?
The Requests in Detail
Get Request
GET /search?q=men&output=xml_no_dtd&client=default_frontend&
getfields=*&filter=0&start=0&num=25&site=some_value&
requiredfields=(-core__isblocked.core__brandid:brand.
(core__catalog:163%252D2101|(inv__0104|inv__3301))) HTTP/1.1
Host: xxx.xxx.xxx.xxx:80
Cache-Control: no-cache
Post Request
POST /search HTTP/1.1
Host: xxx.xxx.xxx.xxx
Content-Type: application/x-www-form-urlencoded
Content-Length: 242
Cache-Control: no-cache
q=men&output=xml_no_dtd&client=default_frontend&
getfields=*&filter=0&start=0&num=2&site=some_value&
requiredfields=(-core__isblocked.core__brandid:brand.
(core__catalog:163%2D2101|(inv__0104|inv__3301)))
Bonus question: why is the result for GET and POST for "mens" also different?
You can set the "Query Expansion Policy" in the frontend. Are you sure you are using the same frontend for both queries? To my knowledge, the GET/POST method should not affect the search result.
UPDATE
Also, the core__catalog value is different in the GET (163%252D2101) and POST (163%2D2101) requests. It might be something to do with character encoding/decoding. Can you remove all those requiredfields, supply just 'q', and look at the count?
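A quick way to see the suspected double-encoding in the GET request's core__catalog value: %25 is the percent-encoding of % itself, so 163%252D2101 needs two decode passes to reach the literal value (the sed substitutions below stand in for a real URL decoder):

```shell
#!/bin/sh
# Sketch: %25 decodes to "%", so the GET value is encoded twice.
once="$(printf '%s' '163%252D2101' | sed 's/%25/%/')"   # first decode pass
twice="$(printf '%s' "$once" | sed 's/%2D/-/')"         # second decode pass
echo "$once"    # 163%2D2101  (what the POST request sends)
echo "$twice"   # 163-2101    (the literal catalog value)
```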
