multiline conf file to parse log file to elasticsearch

I have this log file: http://dpaste.com/3FE2VNY
I only want to extract certain pieces of information, such as the date/time and the number of events posted. My attempt at putting this into Elasticsearch causes Logstash to hang. I'm not sure what I did wrong, as I am new to this.
What I attempted to do was simply grab all the content in the log file and pass it into Elasticsearch. I understand that grok must be used to grab specific parts, but I am not at that level just yet.
My goal is to extract:
start: Mon Apr 27 13:35:25 2015
finish: Mon Apr 27 13:35:36 2015
number of events posted: 10
Log file:
test_web_events.py: START: Mon Apr 27 13:35:25 2015
# TESTCASE TestWebPost ==================================================
# START TEST METHOD #################################: test_10_post_valid_json
[2015-04-27T13:35:25.657887] HTTP DELETE http://pppdc9prd3net:8080/rastplatz/v1/sink/db?k0=bradford4
{}
HTTP response: 200
0
POSTING event_id b29b6c7c-48cd-4cd9-b3c4-aa0a7edc1f35 to businessevent
Content-Type: text/plain
POSTING event_id 13678af1-3e3a-4a6e-a61c-404eb94b9768 to businessevent
Content-Type: text/plain
POSTING event_id 47b70306-2e7c-4cb2-9e75-5755d8d101d4 to businessevent
Content-Type: text/plain
POSTING event_id 6599cdb2-0630-470d-879d-1130cf70c605 to businessevent
Content-Type: text/plain
POSTING event_id d088ce29-fa0d-4f45-b628-045dba1fd045 to businessevent
Content-Type: text/plain
POSTING event_id 07d14813-b561-442c-9b86-dc40d1fcc721 to businessevent
Content-Type: text/plain
POSTING event_id b6aea24a-5424-4a0f-aac6-8cbaecc410db to businessevent
Content-Type: text/plain
POSTING event_id 016386bd-eac5-4f1c-8afc-a66326d37ddb to businessevent
Content-Type: text/plain
POSTING event_id 6610485d-71af-4dfa-9268-54be5408a793 to businessevent
Content-Type: text/plain
POSTING event_id 92786434-02f7-4248-a77b-bdd9d33b57be to businessevent
Content-Type: text/plain
Posted 10 events
# END TEST METHOD ###################################: test_10_post_valid_json
test_web_events.py: FINISH: Mon Apr 27 13:35:36 2015
conf file:
input {
  file {
    path => "/home/bli1/logstash-1.5.0/tmp/bradfordli2_post.log"
    codec => multiline {
      pattern => "^."
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch { protocol => "http" host => "127.0.0.1:9200" }
  stdout { codec => rubydebug }
}

You could use something like:
multiline {
  pattern => "START:"
  negate => true
  what => "previous"
}
This instructs the multiline filter/codec to fold every line that does not contain START: into the previous log event, so each test run ends up as a single event.
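Dropped into your input section, it would look something like this (a sketch keeping your original path; the output section stays as it was):
input {
  file {
    path => "/home/bli1/logstash-1.5.0/tmp/bradfordli2_post.log"
    codec => multiline {
      pattern => "START:"
      negate => true
      what => "previous"
    }
  }
}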
You can then use grok patterns to extract your three pieces of information. Take care: you have to instruct grok to look into multiline messages by putting the multiline switch (?m) at the beginning of your grok pattern, like so:
grok {
  match => ["message", "(?m)Posted %{NONNEGINT:nrEvents} events"]
}
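For example, a single grok filter along these lines could pull out all three fields at once (a sketch: the field names start_time, finish_time and nrEvents are my own choice, and break_on_match => false tells grok to apply every pattern instead of stopping at the first match):
filter {
  grok {
    break_on_match => false
    match => { "message" => [
      "START: (?<start_time>[^\n]+)",
      "FINISH: (?<finish_time>[^\n]+)",
      "(?m)Posted %{NONNEGINT:nrEvents} events"
    ] }
  }
}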
A word of warning if you are working with multithreaded inputs or several parallel worker threads: there are currently bugs in Logstash's multiline handling that can lead to lines from different events getting mixed up when processed in parallel. I'm not sure whether that is relevant for you, but take a look at this:
https://github.com/elastic/logstash/issues/1754
One more note: I don't really understand what the difference between the multiline filter and the multiline codec is, or when to use one rather than the other. However, I use the filter in my project and it works fine.
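For reference, the filter form of the same configuration would be a sketch like this, placed in the filter section instead of the input's codec:
filter {
  multiline {
    pattern => "START:"
    negate => true
    what => "previous"
  }
}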

Related

processingFailure error (400) while retrieving CommentThreads list

I am trying to retrieve all the comments of a video via Python iteration/paging. I am logged in correctly with a developer key:
import googleapiclient.discovery as gg
import googleapiclient.errors as gge

yt = gg.build(api_service_name='youtube', api_version='v3', developerKey=M_KEY)
comments = []
page = ''
while True:
    request = yt.commentThreads().list(
        part="snippet,replies",
        order="relevance",
        maxResults=100,
        pageToken=page,
        textFormat="plainText",
        videoId=video['id']  # video is a static dictionary I've saved outside the script
    )
    try:
        response = request.execute()
        page = response['nextPageToken']
        comments.extend(response['items'])
        print('Comments extended')
    except KeyError:
        # there are no more pages
        print('Iteration ended')
        break
    except gge.HttpError as error:
        print('HTTP error:', error.__dict__['resp']['status'])
What I'm expecting it to do is iterate through the pages of comments until response['nextPageToken'] throws a KeyError, meaning there are no more pages. Instead, the execution goes flawlessly for a dozen iterations (at best), then it starts throwing said processingFailure error, whose content looks like this:
{
  "error": {
    "errors": [
      {
        "domain": "youtube.commentThread",
        "reason": "processingFailure",
        "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the request's input is invalid. Check the structure of the commentThread resource in the request body to ensure that it is valid.",
        "locationType": "other",
        "location": "body"
      }
    ],
    "code": 400,
    "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the request's input is invalid. Check the structure of the commentThread resource in the request body to ensure that it is valid."
  }
}
I have tried logging both the page and the videoId to ensure nothing went wrong with them, but they're both valid. I've also tried to time.sleep() for up to 15 minutes when that error occurs, but nothing changes.
This is the request in JSON format at the time of the error, captured using request.to_json(), thanks to @stvar for suggesting it:
{
  "uri": "https://www.googleapis.com/youtube/v3/commentThreads",
  "method": "POST",
"body": "part=snippet%2Creplies&order=relevance&maxResults=100&pageToken=QURTSl9pM2xlemsyWjAzYlBhWkNRMTFMMWIyMjFsVVNnS2U0WE8zTkwxRzdRdkpsdlVqWHlwWEV2SmxkeUxjdkR1UWk3eVU1OTI1cmJEeUtJZHRGQWVmY21PUGxVOXBER3YtckE2NlhSWlRwQzR0Y2VyY0JDbC1uNVRaSU56RklzejJERmRCc2lLUjV2Rm1LV0Njc3ByMjliRXRMZmNjRFJucFgwNFNBYVhkSFJzYXkzNVpKTXNNSzNfWmVGd3dSRWxYQmwyQmxnWGJwNFZidVpiYjlOWjBabFFsalZFZkdqZFV0SHlrVEJqclppVnRtMjZCTnYtQm9WWjFrQ0dELUlLTnYwWG50cU5BQXJ3Ukh3VE9PZnNaZ0tZaWN1ZTdBakJkWFp5Ymo5M2R5Y0g1aWVsWUUzUVg0TU83Q2JZQ1IxWnRTMXUyTFhpSDdmMU9GTmtiQUE0UjdyVUVBelNnSjhTTDVsLU1TaERwVHdvSVhkX1ktNVBTc2xkX09zcjBOT3E3Z2lVWWRPRFhkVF9NN1JaQTEyUEJmU1hNbUtvM2JzU1NzOFRid29wTEo3Q0hucmJnNHcwNUJzaGtqSE8wa2g2U0FUY3pQbDJ1bGNnaFRKNEJCRm90TVNyWXNSREgyQVFqMU9PNnY3elBGSEhrYXJSMUJYS09yQ0tOVE5Oa3l5V00tdGY3TTlwY3o0VXJsaWRua1BrNXVhWmVLMzV1T2NmOEhqaUNucEdheTRfZjNiM1JkYUJuaGZqQjFMV1c0NFRJNXlzR2trdFpLemV1SU12V0tCTW11b1RMU05PXzA0eVdHM3lRclpZaC1BN3k4RDdhTW1uTHZtbDVsRzBVTVFHdkdkMTN4VHQzNW1tZ3BoY0F6VDJVWTFhUWpxdW8td1M0bnkzQTRtVGc5bGxQNV81ejV1dm1JX1ZDRVZIXzI3eXVnbHJBcC1Lb0NULUhHOGp3ZGNGeVFKbFNXbVh0Y2NQei1UbjBFLVhuZWp0eXh5NzVjOEtjS1FqTUppQWdOSDRmWWtSOUZPRHQtSEpsTnJtNWZVX2t4VDlVTDB6WmxWTHN5dlZzZllNQkFBOEJNMWZkOEtoTk5jMnQ4Y0hydXVScTNILXZLXzJodGFUNmxhQnEtay1PVV9yYzJFNEhKaDZjcUszV3ZGM2VLaUxJZjlwRmViYXRfVGRSOFZ6OF9vU2h6WjVqNkhVU0tqZHduLTNlaFhuTHFXSG1WSk1HUVQ4dkdIdDZvUFdKNkxOeFlhTmJzd1J4dGQzLXBHUmsxaHYtdFc2cTI1VDZsMWJGdE5Pb1RmR2hlRGM3cjZPcDJ2eTljQk1GcTJXaTFtNFhndzlWbDBOby1kNzZhLU5WNTI1VUlzRmpQSkRvSlFFMUQzNllzbi0tU01OYTg1a2poS2ZrWHpQMjQyd1hDb3h4blE5ZlJmN2xIMEstRFR6cUFWcTNDRDFfbjNubXY4Z2ZseGdVY2NjTWk0NzQ3SDFZcWs2eWxZWlB0Vl9iSldlbktOMjVFWUp6UnVRb3dfOXFQdmhBZEN5clJpX1g4aVhmdERnbE5XX1FjVWNXODRtSm1LSUpDVnJHVGlEeUtGb3BPMVYyWU5TbnQzY29NLUY5c3Y2WmpNVTNlVjIxQ2RwSzlKTUZwY0RxY2FlMGFtd2tucFpjeUtDN2xwOERYcDJwSU1RY0dIdXdCTmJIcWdjbTh1Q04wVTh1dktzeVdob09wX25uU19BMFNlRlBrNG1wZFRKVVJFVzVfdGQxbGFYemFqZjJOQTd1R0NCZ0RrMWlTS3BMMy1hY0FMd25KTGFYelJPQjZvRnlYMnBFelhCREgyRDJ3TnJWNldWUllqOVVvdHV2cVRXRXlBbkJpaFJpd3RIc2RaamVaUERldXItT1pkTVVFczBzNi1hZmhDYTFzWVM3SElsYkxtMkoybC03YlZVRkt1NEVSWV8tWHRJTko4d2hqWllWVU04UXlkQV84ZjFzVm01bW83cTd4R3ZOSVNabGRSaXVlTU91MXR6RVFYeTNwNHd3bzNVY1RncHdzY1VKQWw2eWNvcmdER0N5RjZiQkRmNnh0S256MzhreldFTm9XMDhlY1VUeEhnNTM2bHNYVlpKdGJrdHd1Y3VCc0hYOHlEc1EyZXJLTUlMTlVQb0FmU0hpdy1WdS1iT19fTTlMQUVWa3BnWloxSXdUZHotMW5zWWVnTVBzelE3VmQtRlBOajNfcmJJNnlZYkpDdmxKWXoxcjBZYkV4Q2duNGx1MTlrYWVVOXktT0lVX2dfVnc2cW9nSEVHSHZCUzFHRFFPaW1ydUxlY012bVQtaVBnTXQ1VWxOZVI3YW9nTkhFdHlwUGlneFdOM2Jkcm1iWEcxN25pQVI0TUpvVW1hemlrYWl6M0dnSTQ0VWhVWVMxaHEzeS05cnJRSkJ6TEVEZTB4anYySzR6WWhyMEtkZ0ZVMkxDZXlkcHFCSTBfU2Z3ZEVTYkE1YlE1SXI0M3lhUnJGZ1l0QVZFU2ZqRDE1MUhSLU1lU2dxWFpUem04RHVqOVBTUkZhbkFLWUJ2aGZsX0w2SzBabTBoUUxiYVZxT25ydk56U01YdklJZmtPemxtT0Nrc1JGOWVRQnMydk5lZkRjRExEUzRaWXFfTlNRdVNOUTFacFdLcklqRXc1NDg2eGU2NXlid3Y0SnRVeWplVE5CVVF4Qm01ajJfOWY2U2NWcWlVajYwYXJ3eXZ6RGt4cldCMndES0wzZ0xERl91bmlQaDVtUmNXSERXekJCVDAtd2ZnVFBadERnQlJWUEl3cWxCb3FfOEh2NkJCUlZqUThCMUk0OXM3Sjk3Sng3WFBpdUlEUFRnLV9kMnhoa2Z5QVpLRFNLSWl0ck1WUnhKRWFaZ0J0VDZmTjY3MG5SMkZQYUx0YTQ3dmgzYzhpa0Nua0dIS0VzSGYzOERiYkN4ZXM1ZkpkYV9nMEJrMnA2aHgycWFfZ04ySmRGellBaUphdXA2X213WXNxVW1QbUpfa2xZZTUwbVA2azMyRjV0Q2dRcWJVajFuVTFjRHB5QUZUcTZ1X3RwNGVBSmVoNWt6ajFZNTkwS1J2TzZreHhfQ1EzTTdDUGpCb0V1SXdFWmh6YjF0Q1NHUnoxTy11MURZak03ZnNEX0NCSUxPbkVuZGZ5VDlCMkkwNl9lLUw2bk05MUVfU29NTmg4WFBSSGNibnZ4T3h4T25Yal81a3NDMG5veDBERVdLRVBEV0pqRi1CbEpnV01HajhjR29jbXFEdXpIcFdCblcyZ2dsX1ZUNHJuMkxHUFV6ekZiaklFMXpob0w4MXJNOXVhZ2dMX1oxdzNkRi1sV3JDaGpING10b21qYTI0QW00dWFOOVZHSWJlT3lZVy1qSzNnUGxPU0hTZnFTS2VVLW80cHFqWTAtYV9lbHI5WHdnZV9nM25oX2Iwd2lUeXgxQndDcENrczUxb3RlZlZkSnREU20xSjdCM0VBZk
Z4X1pkQ2YwQUszcWJxRVdwM25wM0w2WmpNTG1WM0RyWlU3TkpGeElBeGRkUTZlWEttazJuLW1mdFZtMTNVamRKZEthMkFfbS1HSnRtMFRTbFlVYnBqQ2puaUVrY2xieTB0TDF4UDNUWGNMUFo3enFtajB5T0dZbGRYQkxDQzlxdGNXN0d4V093RXUxd2Z4ek9oVEZzVUpabHpxeHNFVlVUbF9ORGk5U1BhVHg3QlJEWjdqM2g1VlFhb184SU1ZMERFLWJnY0htaXk2aXFoTjBMaUg2eTdjVmhHekFCbmdjNXJ2WkpQVE9jY09RRXE1SVJYenhvdVJxVGRDbnd5WGdrZnlyWG96T2FXcVRVRXBiWHBGRGFnZTUybmFJam1HanpPelNSZHBJLW5yaDgybm5BdldNNTRtcGNuaXRGbzJmODhyc2IzYmNUdDNoekNaRG5Oa2k1OUNXU2tuSnd2OG1GVjBGM2xid0lWd3QwWDR3Ukc4TlZZcXdkY2FOdS0xaWlsajAwQTBLVnJTVzljLS1WZF9WWGl0Y21naDdpX2c1djJwRFpuU3pWY1VNRWNOTy1GaVdSTUxDTHZ6VDFNYl9HRmsxVEZlTG5fVFdrNzRYdlpoSVRwMHJjSjdJT21lU09ObEtqTlNiX0FCNGtXRldpSHByY0htSGxOYS1JbWZWbWRCZUlLaFhxUG5haGZCMm1PbklDMGFJS0pmZ2RjUUtVSWpLeVBrTk5DMXo5VUVkeGZKRFRtZDh4OHJkV3BEbi0wUnJXY2x1Rk9XU04xeXpkbnA5U1FnV2huOTFYVlBGRG5Rem9CODdfRUU2d3liNy1LQ0JHbjFsVVRQc3pMS3JudjhDVG1Bb2xjYU1MaUprajZGT2dkS0ktd3IzNXZMSWhFQm9oamdGZW5KcWNOQ0Q5NDZGWXFzTE5peEVyenJHWG9ZaWtNRjVoUXJNVFVxdjhSYUgyRHZKcXpmOWtMRHJmV1dMRklLUTFvLVJZbDY5dzRFcXJxUGJoenF4SFpLYUFoOWpFcnNNWVZmMmsycy1YVXllMkhwNzJrOGl5eUpTTXdQVFhWTjJ4MVA1OC1FbnVlbGZSRDQwOWxkcUwwSkRWTFVHMUdhUi1jVnlBZVJKZmZNY3Z2OTQ4MVdqWDV5WFBvSjJJY25VZDdOc2JyUkRKQ1IybXY2NG5uX3NJUmJubXFvTHQ3dHVoV0pMQmRlb2tQOUU2cU1xN3gzVnFsMzVqVDhjSFNuSmNYZWczSG1veEhGWDlkTDNBRmtXVFhBaVQwSDR2dTJBaU1nRUJVZndMS3gzUjNXNlRRcGJqNnJkbldzT3FjdU1NSkwtOVZ2WEIzMEYtQlFFNW84dUthQjFvZFg4OVota255cWt5Mlo1cXpoWTJKel9vTmpLbHRlQVN0WkVURDh3WG8wSTY3OE0yTTF4OF9KQ3ctc1Uzd3hxejdyRlB0NzVXZkxKU1RvOUw1TWtEMjNCaWVaLTkzaExwTHlwaG9DaC1qcjE4cHgxRHRtMmc3TnZXaTJleHNHVW1WTzJlMkVlWFd6QUxfOENSUEJRU0E5d08xa2hjSkxBSG5kY1BlTzJxUkR0U3ppTG0zR0xTOTRLUU9YSmxGSWFoR0JjRmp4b1pUTmNCTFoxaTdBd0RpYmhJLVJyX19qOFkyLW5UUW4wYjBsbTFpa3MzSThqd3pFa0Q0aGZrVWFBWXhMdS1LaUg0R0xRNlB4YUdxb0xVdEtaMm1ld0ZVbi1uMTlGLVN5bDhNakN3czlnQ2ZYZHhFS2hlQ1ByZzNhNHljdFhiOHpJdDg3ZVdyeGFBVW1jdEkxbnJMMUw2WGpQcUFDc0N6WWQtMzhCNTZ0VWtXWnBlRmRIWnl5VkpkeF9XMzZnZG5MWnpoVFNRcGJhTjAxOUNPWkpZeTh6Zk9QQW9paU5IOXVESHgybkRHeUREY1ZCMFBIaVZ2aFN5dFNuWWJZbWJKX1N5dG9LUXl6LVViMzFIWHZURkVVVU1iTl9NdUNiTEwwem9CQ0EtODNyMGJLaGh1bGpXRkFBRWxXR1dTYlNIa3NEenN1NlBid2gxZTUyUFNhem5yVWN4Y0tF&textFormat=plainText&videoId=CJ_GCPaKywg&key=m_developer_key&alt=json",
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "google-api-python-client/1.7.9 (gzip)",
"content-length": "5730",
"x-http-method-override": "GET",
"content-type": "application/x-www-form-urlencoded"
},
"methodId": "youtube.commentThreads.list",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
NOTE: I need to have order= "relevance" in my request because I primarily need the most voted comments.
An answer is nowhere to be found; I hope you can help me.
The issue is that we can't really retrieve all the comments of every video:
https://issuetracker.google.com/issues/134912604
We currently don't support paging through the whole stream. So there's no way to retrieve all the 1000+ commentThreads that you have for that video
This is not a solution to your problem. It just shows that querying the endpoint via a GET request method succeeds in obtaining the needed page response from the API.
# comments-wget [-d] VIDEO_ID [PAGE_TOKEN]
$ comments-wget() {
    local x='eval'
    [ "$1" == '-d' ] && {
        x='echo'
        shift
    }
    local v="$1"
    quote2 -i v
    local p="$2"
    quote2 -i p
    local O="/tmp/$v-comments%d.json"
    local o
    local k=0
    while :; do
        printf -v o "$O" "$k"
        [ ! -f "$o" ] && break
        (( k++ ))
    done
    quote o
    k="$APP_KEY"
    quote2 -i k
    local a="$AGENT"
    quote2 a
    local c="\
wget \
  --debug \
  --verbose \
  --no-check-certif \
  --output-document=$o \
  --user-agent=$a \
  'https://www.googleapis.com/youtube/v3/commentThreads?key=$k&videoId=$v&part=replies,snippet&order=relevance&maxResults=100&textFormat=plainText&alt=json${p:+&pageToken=$p}'"
    $x "$c"
}
$ PAGE_TOKEN=...
$ AGENT=... APP_KEY=... comments-wget CJ_GCPaKywg "$PAGE_TOKEN"
Setting --verbose (verbose) to 1
Setting --check-certificate (checkcertificate) to 0
Setting --output-document (outputdocument) to /tmp/CJ_GCPaKywg-comments0.json
Setting --user-agent (useragent) to ...
DEBUG output created by Wget 1.14 on linux-gnu.
--2019-06-10 17:41:11-- https://www.googleapis.com/youtube/v3/commentThreads?...
Resolving www.googleapis.com... 172.217.19.106, 216.58.214.202, 216.58.214.234, ...
Caching www.googleapis.com => 172.217.19.106 216.58.214.202 216.58.214.234 172.217.16.106 172.217.20.10 2a00:1450:400d:808::200a
Connecting to www.googleapis.com|172.217.19.106|:443... connected.
Created socket 5.
Releasing 0x0000000000ae57c0 (new refcount 1).
---request begin---
GET /youtube/v3/commentThreads?.../1.1
User-Agent: ...
Accept: */*
Host: www.googleapis.com
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Expires: Mon, 10 Jun 2019 14:43:39 GMT
Date: Mon, 10 Jun 2019 14:43:39 GMT
Cache-Control: private, max-age=0, must-revalidate, no-transform
ETag: "XpPGQXPnxQJhLgs6enD_n8JR4Qk/OUAqOrEpA9YYqmVx0wqn9en_OrE"
Vary: Origin
Vary: X-Origin
Content-Type: application/json; charset=UTF-8
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Content-Length: 205965
Server: GSE
Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
---response end---
200 OK
Registered socket 5 for persistent reuse.
Length: 205965 (201K) [application/json]
Saving to: ‘/tmp/CJ_GCPaKywg-comments0.json’
100%[==========================================>] 205,965 580KB/s in 0.3s
2019-06-10 17:41:18 (580 KB/s) - ‘/tmp/CJ_GCPaKywg-comments0.json’ saved [205965/205965]
Note that the shell functions quote and quote2 above are those from youtube-data.sh (they are not really needed). $PAGE_TOKEN is extracted from the body string of the JSON request object posted above.
The next question is: why does your Python code use a POST request method?
Could it be that this is the cause of your problem?
According to Google's Python Client Library sample code and to Google's YouTube API sample code, you should have been coding your pagination loop as shown below:
request = yt.commentThreads().list(...)
while request:
    response = request.execute()
    # your processing code goes here ...
    request = yt.commentThreads().list_next(request, response)
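Applied to your code, the loop might look like this (a sketch; M_KEY and video are assumed to be defined as in your snippet):
import googleapiclient.discovery as gg

yt = gg.build(api_service_name='youtube', api_version='v3', developerKey=M_KEY)

comments = []
request = yt.commentThreads().list(
    part='snippet,replies',
    order='relevance',
    maxResults=100,
    textFormat='plainText',
    videoId=video['id']
)
while request:
    response = request.execute()
    comments.extend(response['items'])
    # list_next builds the follow-up GET request from the previous
    # request/response pair and returns None when there are no more pages
    request = yt.commentThreads().list_next(request, response)
print('Iteration ended,', len(comments), 'comments collected')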

How do I set multiple cookies in a single Webrick response?

I use Webrick to test my HTTP client and I need to test how it gets and sets cookies.
Wikipedia provides an example of such a response:
HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
...
but if I do
server.mount_proc ?/ do |req, res|
  res["set-cookie"] = %w{ 1=2 2=3 }
the whole array becomes a single cookie: "[\"1=2\", \"2=3\"]"
And then in the WEBrick::HTTPResponse source code I see the header attribute initialized with Hash.new, which probably means you can't repeat a header key.
Is it impossible?!
UPD:
This leaves me no hope:
https://github.com/rack/rack/issues/52#issuecomment-399629
https://github.com/rack/rack/blob/c859bbf7b53cb59df1837612a8c330dfb4147392/lib/rack/handler/webrick.rb#L98-L100
Another method should be used instead of res[...]=: push WEBrick::Cookie instances onto res.cookies:
res.cookies.push WEBrick::Cookie.new("1", "2")
res.cookies.push WEBrick::Cookie.new("3", "4")
res.cookies.push WEBrick::Cookie.new("1", "5")
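A minimal, self-contained sketch reproducing the Wikipedia example above (the port and cookie values are arbitrary):
require 'webrick'

server = WEBrick::HTTPServer.new(Port: 8000)
server.mount_proc '/' do |req, res|
  # every WEBrick::Cookie pushed onto res.cookies is emitted as its own Set-Cookie header
  res.cookies.push WEBrick::Cookie.new('theme', 'light')
  session = WEBrick::Cookie.new('sessionToken', 'abc123')
  session.expires = Time.gm(2021, 6, 9, 10, 18, 14)
  res.cookies.push session
  res.body = 'ok'
end
trap('INT') { server.shutdown }
server.start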

Failed to use "Regular Expression Extractor" capturing response header in Jmeter

I would like to capture the response header value for "Authorization:".
Response headers:
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json
Server: Microsoft-IIS/8.5
Authorization: Bearer MMSArOve7c9NffH4oTqBMW1SiWLUbQi2nm0ryR-
Wi5d_plLkk7xzTVo8b5_s1sg-Rut6vdDoTvlRju-
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Mon, 14 May 2018 03:50:47 GMT
Content-Length: 484
and I set up a Regular Expression Extractor accordingly, but the result is:
JMeterVariables:
JMeterThread.last_sample_ok=true
JMeterThread.pack=org.apache.jmeter.threads.SamplePackage#33a6821
START.HMS=113828
START.MS=1526254708675
START.YMD=20180514
TESTSTART.MS=1526269844536
Token=test
__jm__Thread Group__idx=0
__jmeter.USER_TOKEN__=Thread Group 1-1
What did I do wrong? Please help, thank you!
You can use the following regex to extract the Authorization value:
Bearer (((.*)\n)+)X-Asp
and use Match No. 1.
For more information, see the following:
JMeter Regular Expressions
Extracting variables
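Putting it together, the extractor configuration might look like this (a sketch; the reference name auth is arbitrary). Note that Field to check must be set to Response Headers, since the Authorization value lives in the headers rather than the body:
Apply to: Main sample only
Field to check: Response Headers
Reference Name: auth
Regular Expression: Bearer (((.*)\n)+)X-Asp
Template: $1$
Match No.: 1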
Don't use ^, which is the start-of-string anchor:
Applying ^a to abc matches a. ^b does not match abc at all, because the b cannot be matched right after the start of the string.
So use the regular expression without it:
Bearer(.*)
You need to remove the ^ character from your regular expression.
More information:
Regular Expressions
Perl 5 Regex Cheat sheet
Using RegEx (Regular Expression Extractor) with JMeter

JMeter cannot assert Http response Code 423

I'm trying to assert http response codes in JMeter.
I think this is really simple, but I encountered a problem I cannot fix.
My server can return 2 response codes: 200 and 423.
There is no problem with 200, it just works, but I cannot assert 423 and I don't know why.
I tried response assertion with the following configurations:
Field to test: Response Code,
Pattern Matching Rules: Contains
Patterns to test:
200 - works
423 - does not work
200|423 - 200 works, 423 does not work (wtf?)
I also tried BeanShell Assertion with
Failure = !(ResponseCode.contains("200") || ResponseCode.contains("423"));
It does not work either.
I also tried asserting that the response message contains "Locked" - no luck.
The server response looks like this:
Thread Name: 10 Users, 100 Repeats 1-10
Sample Start: 2017-05-19 13:06:09 MESZ
Load time: 33
Connect Time: 2
Latency: 33
Size in bytes: 333
Sent bytes:768
Headers size in bytes: 333
Body size in bytes: 0
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""):
Response code: 423
Response message: Locked
Response headers:
HTTP/1.1 423 Locked
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
X-Application-Context: application:capacitytest
Content-Length: 0
Date: Fri, 19 May 2017 11:06:08 GMT
Server: Not_available
HTTPSampleResult fields:
ContentType:
DataEncoding: null
The response code is marked blue in the Sampler Result for some reason; I don't know why.
I'm also logging the response code via Beanshell PostProcessor. It is 423...
Finally I'm asking here for your help.
I have no idea what the problem is or could be.
Thanks in advance.
If you are talking about the HTTP Request sampler: JMeter automatically treats HTTP status codes of 400 and above as failed. I would recommend the following setup:
Add Response Assertion as a child of your HTTP Request
Configure it as follows:
Apply to: according to your test scenario
Fields to test:
Response Code
Don't forget to check the Ignore Status box
Pattern Matching Rules: Matches
Patterns to Test: 200|423
Assuming the above configuration, if the status of your request is 200 OR 423 it will pass; otherwise it will be failed by the assertion.
See How to Use JMeter Assertions in Three Easy Steps guide for comprehensive information regarding JMeter Assertions.
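If you prefer the scripting route instead, a JSR223 Assertion along these lines should behave the same way (a sketch in Groovy; prev and AssertionResult are the standard bindings available in JSR223 Assertions):
def code = prev.getResponseCode()
if (code == '200' || code == '423') {
    // clear the automatic failure JMeter records for status codes >= 400
    prev.setSuccessful(true)
} else {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Unexpected response code: ' + code)
}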

Render JSON from hash with headers

I need to add a header to my hash that I convert to JSON.
In my controller I have:
render json: @rates
With @rates being a hash that looks like this:
{:rates=>[
  {:service_name=>"Standard", :service_code=>"FU", :total_price=>"1100", :currency=>"USD", :min_delivery_date=>"2016-03-11 08:00:00 +0000", :max_delivery_date=>"2016-03-16 06:59:59 +0000"},
  {:service_name=>"Priority", :service_code=>"FU", :total_price=>"2300", :currency=>"USD", :min_delivery_date=>"2016-03-08 08:00:00 +0000", :max_delivery_date=>"2016-03-09 07:59:59 +0000"},
  {:service_name=>"Expedited", :service_code=>"FU", :total_price=>"1420", :currency=>"USD", :min_delivery_date=>"2016-03-09 08:00:00 +0000", :max_delivery_date=>"2016-03-10 07:59:59 +0000"}
]}
The JSON format is perfect, but I believe I need a header on the response for the API to get the rates from me. I saw "Render JSON with header", but I don't get it.
I need to add:
Content-Type: application/json
Can I just add this to my hash as another key/value pair, or do I have to set it as an actual header?
You can try adding the content_type option:
render json: @rates, content_type: "application/json"
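For what it's worth, render json: already sends Content-Type: application/json by default, so the client should see that header without any extra option. If you ever need an additional custom header, set it on the response before rendering (a sketch; the header name and action are made up):
def rates
  # hypothetical extra header; Content-Type is set by render json: automatically
  response.headers["X-Custom-Header"] = "some value"
  render json: @rates
end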
