I need to add a header to my hash that I convert to JSON.
In my controller I have:
render json: @rates
With @rates being a hash that looks like this:
{:rates=>[{:service_name=>"Standard", :service_code=>"FU",
:total_price=>"1100", :currency=>"USD", :min_delivery_date=>"2016-03-11
08:00:00 +0000", :max_delivery_date=>"2016-03-16 06:59:59 +0000"},
{:service_name=>"Priority", :service_code=>"FU", :total_price=>"2300",
:currency=>"USD", :min_delivery_date=>"2016-03-08 08:00:00 +0000",
:max_delivery_date=>"2016-03-09 07:59:59 +0000"},
{:service_name=>"Expedited", :service_code=>"FU", :total_price=>"1420",
:currency=>"USD", :min_delivery_date=>"2016-03-09 08:00:00 +0000",
:max_delivery_date=>"2016-03-10 07:59:59 +0000"}]}
The JSON format is perfect, but I believe I need to set a header so the API can get the rates from me. I saw "Render JSON with header", but I don't get it.
I need to add:
Content-Type: application/json
Can I just add this to my hash as another key/value pair, or do I have to set some special header value?
You can try adding the content_type option:
render json: @rates, content_type: "application/json"
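As a side note, the hash itself serializes the same way outside Rails with the standard json library; a minimal sketch, using a trimmed stand-in for the @rates hash from the question:

```ruby
require 'json'

# Trimmed stand-in for the @rates hash from the question.
rates = {
  rates: [
    { service_name: "Standard", service_code: "FU",
      total_price: "1100", currency: "USD" }
  ]
}

# Symbol keys become JSON string keys in the output.
payload = rates.to_json
puts payload
```

The Content-Type header is a property of the HTTP response, not of the payload, which is why it is set via the render options rather than as another key in the hash.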
Related
I use WEBrick to test my HTTP client, and I need to test how it gets and sets cookies.
Wikipedia provides an example of such a response:
HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
...
but if I do
server.mount_proc "/" do |req, res|
  res["set-cookie"] = %w{ 1=2 2=3 }
end
the whole array becomes a single cookie: "[\"1=2\", \"2=3\"]"
And then in the WEBrick::HTTPResponse source code I see @header = Hash.new, which probably means you can't repeat a header key.
Is it impossible?!
UPD:
This leaves me no hope:
https://github.com/rack/rack/issues/52#issuecomment-399629
https://github.com/rack/rack/blob/c859bbf7b53cb59df1837612a8c330dfb4147392/lib/rack/handler/webrick.rb#L98-L100
Another method should be used instead of res[...]=:
res.cookies.push WEBrick::Cookie.new("1", "2")
res.cookies.push WEBrick::Cookie.new("3", "4")
res.cookies.push WEBrick::Cookie.new("1", "5")
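The overwrite behaviour follows directly from the plain Hash backing the header store: assigning the same key twice keeps only the last value. A minimal illustration:

```ruby
# WEBrick keeps response headers in a plain Hash (@header = Hash.new),
# so assigning the same key again replaces the earlier value
# instead of emitting a second header line.
header = Hash.new
header["set-cookie"] = "1=2"
header["set-cookie"] = "3=4"  # overwrites, does not append
puts header["set-cookie"]
```

That is why cookies get their own array (res.cookies) rather than going through res[...]=.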
I would like to capture the value of the Authorization response header.
Response headers:
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json
Server: Microsoft-IIS/8.5
Authorization: Bearer MMSArOve7c9NffH4oTqBMW1SiWLUbQi2nm0ryR-
Wi5d_plLkk7xzTVo8b5_s1sg-Rut6vdDoTvlRju-
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Mon, 14 May 2018 03:50:47 GMT
Content-Length: 484
and I did this, but the result is:
JMeterVariables:
JMeterThread.last_sample_ok=true
JMeterThread.pack=org.apache.jmeter.threads.SamplePackage#33a6821
START.HMS=113828
START.MS=1526254708675
START.YMD=20180514
TESTSTART.MS=1526269844536
Token=test
__jm__Thread Group__idx=0
__jmeter.USER_TOKEN__=Thread Group 1-1
What did I do wrong? Please help, thank you!
You can use the following regex to extract the Authorization value:
Bearer (((.*)\n)+)X-Asp
and use Match No. 1.
For more information you may refer to the following:
JMeter Regular Expressions
Extracting variables
Don't use ^, which is the start-of-string anchor.
Applying ^a to abc matches a. ^b does not match abc at all, because the b cannot be matched right after the start of the string.
So use the regular expression without it:
Bearer(.*)
You need to remove the ^ character from your regular expression.
More information:
Regular Expressions
Perl 5 Regex Cheat sheet
Using RegEx (Regular Expression Extractor) with JMeter
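The suggested pattern behaves the same outside JMeter; a quick Ruby check against a trimmed, single-line copy of the response headers from the question:

```ruby
# Trimmed copy of the response headers from the question
# (the token collapsed onto one line for illustration).
headers = <<~HDRS
  Content-Type: application/json
  Authorization: Bearer MMSArOve7c9NffH4oTqBMW1SiWLUbQi2nm0ryR-
  X-AspNet-Version: 4.0.30319
HDRS

# "Bearer (.*)" captures everything after "Bearer " to the end of the line.
token = headers[/Bearer (.*)/, 1]
puts token
```

Note that without a (?m)-style flag, `.` stops at the newline, which is why the multi-line variant in the first answer matches up to X-Asp instead.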
I have this log file: http://dpaste.com/3FE2VNY
I only want to extract certain pieces of information, such as the date/time and the number of events posted. My attempt at putting this into Elasticsearch results in Logstash hanging. I'm not sure what I did wrong, as I am new to this.
What I attempted to do was simply grab all the content in the log file and pass it into Elasticsearch. I understand that grok must be used to grab specific parts, but I am not quite at that level yet.
My goal is to extract:
start: Mon Apr 27 13:35:25 2015
finish: Mon Apr 27 13:35:36 2015
number of events posted: 10
Log file:
test_web_events.py: START: Mon Apr 27 13:35:25 2015
# TESTCASE TestWebPost ==================================================
# START TEST METHOD #################################: test_10_post_valid_json
[2015-04-27T13:35:25.657887] HTTP DELETE http://pppdc9prd3net:8080/rastplatz/v1/sink/db?k0=bradford4
{}
HTTP response: 200
0
POSTING event_id b29b6c7c-48cd-4cd9-b3c4-aa0a7edc1f35 to businessevent
Content-Type: text/plain
POSTING event_id 13678af1-3e3a-4a6e-a61c-404eb94b9768 to businessevent
Content-Type: text/plain
POSTING event_id 47b70306-2e7c-4cb2-9e75-5755d8d101d4 to businessevent
Content-Type: text/plain
POSTING event_id 6599cdb2-0630-470d-879d-1130cf70c605 to businessevent
Content-Type: text/plain
POSTING event_id d088ce29-fa0d-4f45-b628-045dba1fd045 to businessevent
Content-Type: text/plain
POSTING event_id 07d14813-b561-442c-9b86-dc40d1fcc721 to businessevent
Content-Type: text/plain
POSTING event_id b6aea24a-5424-4a0f-aac6-8cbaecc410db to businessevent
Content-Type: text/plain
POSTING event_id 016386bd-eac5-4f1c-8afc-a66326d37ddb to businessevent
Content-Type: text/plain
POSTING event_id 6610485d-71af-4dfa-9268-54be5408a793 to businessevent
Content-Type: text/plain
POSTING event_id 92786434-02f7-4248-a77b-bdd9d33b57be to businessevent
Content-Type: text/plain
Posted 10 events
# END TEST METHOD ###################################: test_10_post_valid_json
test_web_events.py: FINISH: Mon Apr 27 13:35:36 2015
conf file:
input {
  file {
    path => "/home/bli1/logstash-1.5.0/tmp/bradfordli2_post.log"
    codec => multiline {
      pattern => "^."
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch { protocol => http host => "127.0.0.1:9200" }
  stdout { codec => rubydebug }
}
You could use something like:
multiline {
  pattern => "START:"
  negate => "true"
  what => "previous"
}
This instructs the multiline filter/codec to put all lines not containing START: into the previous log event.
You can then use grok patterns to extract your 3 pieces of information. Take care: you have to instruct grok to look into multiline messages by using the multiline switch (?m) at the beginning of your grok pattern, like so:
grok {
  match => ["message", "(?m)Posted %{NONNEGINT:nrEvents} events"]
}
A word of warning if you are working with multithreaded inputs or several parallel worker threads: there are currently bugs in Logstash's multiline handling that can lead to lines from different events getting mixed up when processed in parallel. I'm not sure if that is relevant for you, but take a look at this:
https://github.com/elastic/logstash/issues/1754
Another word of info: I don't really understand the difference between the multiline filter and the codec, or when to use one or the other. I use the filter in my project, however, and it works fine.
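The (?m) switch in the grok pattern corresponds to Ruby's multiline regex mode, in which `.` also matches newlines; without it, a pattern cannot scan across the joined lines of a multiline event. A quick sketch against a trimmed version of the log above:

```ruby
# A multi-line event, as produced by the multiline codec joining
# everything between START: lines (trimmed from the question's log).
message = "test_web_events.py: START: Mon Apr 27 13:35:25 2015\n" \
          "Posted 10 events\n" \
          "test_web_events.py: FINISH: Mon Apr 27 13:35:36 2015\n"

# (?m) lets .* cross the newline between the START: line and the
# "Posted ... events" line, mirroring the grok example.
nr_events = message[/(?m)START:.*Posted (\d+) events/, 1]
puts nr_events
```

Without the (?m) prefix the same pattern finds no match, because `.*` stops at the first newline.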
I have an email object (instance of the mail gem):
header = #<Mail::Message:70363632703900, Multipart: false, Headers: <Date: Tue, 03 Jan 2012 13:48:55 +0000>, <From: Mike Tyson <some@email.com>>, <To: <my@email.com>>, <Subject: RE: =?ISO?Q?Ordrebekr=E6ftelse_nummer_CCL1221805621?=>>
When I try to access the subject (which is a String object), e.g. by typing
puts header.subject
I get this error:
unknown encoding name - ISO
Instead of just reading the entire string, Ruby seems to read the ISO as the encoding name. I have no idea why this is.
Does anyone have an idea how to access this string?
Additional info:
Here is an example of a string that also contains encoding info, but doesn't raise errors:
header_2 = #<Mail::Message:70327581075160, Multipart: false, Headers: <Date: Tue, 03 Jan 2012 13:01:29 +0100>, <From: J.C.W. <some@email.com>>, <To: Tiger Woods <my@email.com>>, <Subject: =?ISO-8859-1?Q?Re=3A_=D8v_alts=E5=2C_vi_er_n=F8dt_til_at_aflyse_din_ordre_?= =?ISO-8859-1?Q?107210?=>>
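The error appears to come from Ruby's encoding lookup: the bare ISO in the broken subject line is not a registered encoding name, while ISO-8859-1 (as in the second, working header) is. A quick check, assuming nothing beyond Ruby's stdlib:

```ruby
# "ISO-8859-1", the charset in the header that works, is a
# registered Ruby encoding.
puts Encoding.find("ISO-8859-1")

# The bare "ISO" from the broken =?ISO?Q?...?= subject is not,
# which raises the same "unknown encoding name" ArgumentError.
begin
  Encoding.find("ISO")
rescue ArgumentError => e
  puts e.message
end
```

So the problem is in the sender's malformed RFC 2047 encoded-word, not in how the subject is being accessed.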
When attempting to load a page which is a CSV with an encoding of UTF-8, using Mechanize v2.5.1, I used the following code:
a.content_encoding_hooks << lambda{ |httpagent, uri, response, body_io|
  response['Content-Encoding'] = 'none' if response['Content-Encoding'].to_s == 'UTF-8'
}
p4 = a.get(redirect_url, nil, ['accept-encoding' => 'UTF-8'])
but I find that the content-encoding hook is not being called, and I get the following error and traceback:
/Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:787:in 'response_content_encoding': unsupported content-encoding: UTF-8 (Mechanize::Error)
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:274:in 'fetch'
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:949:in 'response_redirect'
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:299:in 'fetch'
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:949:in 'response_redirect'
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:299:in 'fetch'
from /Users/jackrg/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/mechanize-2.5.1/lib/mechanize.rb:407:in 'get'
from prototype/test1.rb:307:in `<main>'
Does anyone have an idea why the content hook code is not firing and why I am getting the error?
but I find that the content encoding hook is not being called
What makes you think that?
The error message references this code:
def response_content_encoding response, body_io
  ...
  ...
  out_io = case response['Content-Encoding']
           when nil, 'none', '7bit', "" then
             body_io
           when 'deflate' then
             content_encoding_inflate body_io
           when 'gzip', 'x-gzip' then
             content_encoding_gunzip body_io
           else
             raise Mechanize::Error,
                   "unsupported content-encoding: #{response['Content-Encoding']}"
So Mechanize only recognizes the content encodings nil, 'none', '7bit', "" (empty), 'deflate', 'gzip', and 'x-gzip'.
From the HTTP/1.1 spec:
4.11 Content-Encoding
The Content-Encoding entity-header field is used as a modifier to the
media-type. When present, its value indicates what additional content
codings have been applied to the entity-body, and thus what decoding
mechanisms must be applied in order to obtain the media-type
referenced by the Content-Type header field. Content-Encoding is
primarily used to allow a document to be compressed without losing the
identity of its underlying media type.
Content-Encoding = "Content-Encoding" ":" 1#content-coding
Content codings are defined in section 3.5. An example of its use is
Content-Encoding: gzip
The content-coding is a characteristic of the entity identified by the
Request-URI. Typically, the entity-body is stored with this encoding
and is only decoded before rendering or analogous usage. However, a
non-transparent proxy MAY modify the content-coding if the new coding
is known to be acceptable to the recipient, unless the "no-transform"
cache-control directive is present in the message.
...
...
3.5 Content Codings
Content coding values indicate an encoding transformation that has
been or can be applied to an entity. Content codings are primarily
used to allow a document to be compressed or otherwise usefully
transformed without losing the identity of its underlying media type
and without loss of information. Frequently, the entity is stored in
coded form, transmitted directly, and only decoded by the recipient.
content-coding = token
All content-coding values are case-insensitive. HTTP/1.1 uses
content-coding values in the Accept-Encoding (section 14.3) and
Content-Encoding (section 14.11) header fields. Although the value
describes the content-coding, what is more important is that it
indicates what decoding mechanism will be required to remove the
encoding.
The Internet Assigned Numbers Authority (IANA) acts as a registry for
content-coding value tokens. Initially, the registry contains the
following tokens:
gzip An encoding format produced by the file compression program "gzip" (GNU zip) as described in RFC 1952 [25]. This format is a
Lempel-Ziv coding (LZ77) with a 32 bit CRC.
compress The encoding format produced by the common UNIX file compression program "compress". This format is an adaptive
Lempel-Ziv-Welch coding (LZW).
Use of program names for the identification of encoding formats
is not desirable and is discouraged for future encodings. Their
use here is representative of historical practice, not good
design. For compatibility with previous implementations of HTTP,
applications SHOULD consider "x-gzip" and "x-compress" to be
equivalent to "gzip" and "compress" respectively.
deflate The "zlib" format defined in RFC 1950 [31] in combination with the "deflate" compression mechanism described in RFC 1951 [29].
identity The default (identity) encoding; the use of no transformation whatsoever. This content-coding is used only in the
Accept- Encoding header, and SHOULD NOT be used in the
Content-Encoding header.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5
In other words, an HTTP content encoding has nothing to do with ASCII vs. UTF-8 vs. Latin-1.
In addition, the source code for Mechanize::HTTP::Agent has this in it:
# A list of hooks to call after retrieving a response. Hooks are called with
# the agent and the response returned.
attr_reader :post_connect_hooks
# A list of hooks to call before making a request. Hooks are called with
# the agent and the request to be performed.
attr_reader :pre_connect_hooks
# A list of hooks to call to handle the content-encoding of a request.
attr_reader :content_encoding_hooks
So it doesn't even look like you are calling the right hook.
Here is an example I got to work:
require 'mechanize'

a = Mechanize.new
p a.content_encoding_hooks

func = lambda do |a, uri, resp, body_io|
  puts body_io.read
  puts "The Content-Encoding is: #{resp['Content-Encoding']}"
  if resp['Content-Encoding'].to_s == 'UTF-8'
    resp['Content-Encoding'] = 'none'
  end
  puts "The Content-Encoding is now: #{resp['Content-Encoding']}"
end

a.content_encoding_hooks << func

a.get(
  'http://localhost:8080/cgi-bin/myprog.rb',
  [],
  nil,
  "Accept-Encoding" => 'gzip, deflate'  # This is what Firefox always uses
)
myprog.rb:
#!/usr/bin/env ruby
require 'cgi'

cgi = CGI.new('html3')

headers = {
  "type" => 'text/html',
  "Content-Encoding" => "UTF-8",
}

cgi.out(headers) do
  cgi.html() do
    cgi.head{ cgi.title{"Content-Encoding Test"} } +
    cgi.body() do
      cgi.div(){ "The Accept-Encoding was: #{cgi.accept_encoding}" }
    end
  end
end
--output:--
[]
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><HEAD><TITLE>Content-Encoding Test</TITLE></HEAD><BODY><DIV>The Accept-Encoding was: gzip, deflate</DIV></BODY></HTML>
The Content-Encoding is: UTF-8
The Content-Encoding is now: none