Following up on a previous question of mine.
So I am trying to do a smart redirect using this:
get "/category/:id/merge" do
#... setting #catalog_id and category
call env.merge("PATH_INFO" => "/catalog/#{#catalog_id}/category/#{category.id}", "REQUEST_METHOD"=>"PATCH","QUERY_STRING"=>"merge=1")
status 200
end
But when I look in the logs, I see something that is not only frustrating but also completely absurd:
# this one is from internal call
I, [2013-03-21T15:55:54.382153 #29569] INFO -- : Processing GET /catalog/1/category/2686/merge
I, [2013-03-21T15:55:54.382239 #29569] INFO -- : Parameters: {}
...
I, [2013-03-21T15:55:54.394992 #29569] INFO -- : Processing PATCH /catalog/1/category/2686
I, [2013-03-21T15:55:54.395041 #29569] INFO -- : Parameters: {"merge"=>"1"}
I, [2013-03-21T15:55:54.395560 #29569] INFO -- : Processed PATCH /catalog/1/category/2686?merge=1 with status code 404
I, [2013-03-21T15:55:54.395669 #29569] INFO -- : Processed GET /catalog/1/category/2686/merge with status code 200
# this one is a direct request
I, [2013-03-21T15:56:36.246535 #29588] INFO -- : Processing PATCH /catalog/1/category/2686
I, [2013-03-21T15:56:36.246629 #29588] INFO -- : Parameters: {"merge"=>"1"}
...
I, [2013-03-21T15:56:36.286216 #29588] INFO -- : Processed PATCH /catalog/1/category/2686?merge=1 with status code 204
And the body of the internal 404 response is just Sinatra's standard 404 error page. How can it tell me straight to my face that it doesn't know this route, when I've caught it serving exactly the same URL with a perfectly acceptable 204?
UPDATE
It gets even more interesting when I change REQUEST_METHOD to GET: it works like a charm.
I, [2013-03-21T17:09:37.718756 #3141] INFO -- : Processing GET /catalog/1/category/2686/merge
I, [2013-03-21T17:09:37.718838 #3141] INFO -- : Parameters: {}
...
I, [2013-03-21T17:09:37.735632 #3141] INFO -- : Processing GET /catalog/1/category/2686
I, [2013-03-21T17:09:37.735678 #3141] INFO -- : Parameters: {"merge"=>"1"}
...
I, [2013-03-21T17:09:37.773033 #3141] INFO -- : Processed GET /catalog/1/category/2686?merge=1 with status code 200
I, [2013-03-21T17:09:37.773143 #3141] INFO -- : Processed GET /catalog/1/category/2686/merge with status code 200
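For reference, since call hands back the plain Rack triple [status, headers, body] from the inner dispatch, the outer route could also surface whatever the inner route returned instead of hard-coding 200; a minimal sketch of that variant (same assumed setup as the route above):

get "/category/:id/merge" do
  # ... set @catalog_id and category as before ...
  inner_status, inner_headers, inner_body =
    call env.merge("PATH_INFO"      => "/catalog/#{@catalog_id}/category/#{category.id}",
                   "REQUEST_METHOD" => "PATCH",
                   "QUERY_STRING"   => "merge=1")
  status inner_status   # propagate the inner route's status instead of a fixed 200
  body   inner_body     # inner_headers is ignored in this sketch
end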
Related
I have a log file that looks like this:
2015-12-03 17:08:36 **START** ACTION
.
...some data
.
2015-12-03 17:08:36 **START** ACTION
2015-12-03 17:08:36 **END** ACTION
2015-12-03 17:08:38 **START** ACTION
.
...another some data
.
2015-12-03 17:08:51 ERROR SEARCHQUEUE-DAILY-SEARCH :: [User1] :: Failed to find item in cache: Black-ish.S02E09.Man.At.Work.720p.EXTENDED.HULU.WEBRip.AAC2.0.H264-NTb[rartv]
2015-12-03 17:08:51 **END** ACTION
2015-12-03 17:08:53 DEBUG SEARCHQUEUE-DAILY-SEARCH :: [User1] :: Unable to parse the filename Christmas.Through.the.Decades.Part1.The.60s.HDTV.x264-W4F[rartv] into a valid show
2015-12-03 17:09:57 INFO SEARCHQUEUE-DAILY-SEARCH :: [admin] :: Skipping Blindspot.S01E10.nl because we don't want an episode that's Unknown
2015-12-03 17:09:57 DEBUG SEARCHQUEUE-DAILY-SEARCH :: [admin] :: None of the conditions were met, ignoring found episode
2015-12-03 17:09:57 INFO SEARCHQUEUE-DAILY-SEARCH :: [admin] :: Skipping Arrow.S04E08.720p.FASTSUB.VOSTFR.720p.HDTV.x264-ZT.mkv because we don't want an episode that's 720p HDTV
2015-12-03 17:09:58 DEBUG SEARCHQUEUE-DAILY-SEARCH :: [User1] :: Using cached parse result for: Arrow.S04E08.1080p.WEB-DL.DD5.1.H264-RARBG
2015-12-03 17:09:58 **END** ACTION
As you can see, each action has a START and an END, but between them there can be other actions with their own START and END.
What I need to do is find the fastest action, by working out how long each action takes (subtracting its START time from its END time).
I'm new to bash and Unix and I have no idea how to do this.
Please help!
With GNU awk you can use
gawk -v min=9999999 '$4!="ACTION" {next}
  {t=$1" "$2; gsub(/-|:/, " ", t); t=mktime(t)}       # "YYYY MM DD HH MM SS" -> epoch seconds
  $3=="**START**" {start[++c]=t}                      # push the start time of the current action
  $3=="**END**"   {d=t-start[c--]; if(d<min) min=d}   # duration = END time - START time
  END {print "fastest action took " min " second(s)"}' yourFile.log
This computes the duration for each action and prints the minimum of all these durations.
If you want to print additional information, you can adapt the script accordingly. For every extra piece of information you need an additional variable, similar to min, that is updated inside the if (d < min) block. To print information about the start line, you also need to store that information in an array similar to start[] in the **START** rule.
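For instance, a sketch of that adaptation (same assumptions as above) that also reports the line on which the fastest action started:

gawk -v min=9999999 '$4!="ACTION" {next}
  {t=$1" "$2; gsub(/-|:/, " ", t); t=mktime(t)}
  $3=="**START**" {start[++c]=t; startline[c]=NR}                              # remember time and line number of each START
  $3=="**END**"   {d=t-start[c]; if(d<min) {min=d; minline=startline[c]}; c--}
  END {print "fastest action took " min " second(s), starting on line " minline}' yourFile.log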
The script takes a general approach, where every action is processed. But this is actually not necessary. If an action contains another action, we know that it cannot be the shortest one, because the action inside has to be shorter. Therefore, we could replace the array with a regular variable and only process the innermost actions.
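A sketch of that simplification, again under the same assumptions about the log format; a scalar replaces the array, and an END is only timed when no further START was opened after its START (i.e. the action is innermost):

gawk -v min=9999999 '$4!="ACTION" {next}
  {t=$1" "$2; gsub(/-|:/, " ", t); t=mktime(t)}
  $3=="**START**" {start=t; open=1}                              # the most recent START wins
  $3=="**END**"   {if(open && t-start<min) min=t-start; open=0}  # only innermost actions are timed
  END {print "fastest action took " min " second(s)"}' yourFile.log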
I'm having problems using the EC2 connector with filters for DescribeInstances. Specifically, I'm trying to find all instances that have the tag "classId" set.
I've also tried to find all instances that have the classId tag with specific string, e.g. "123".
Below are the XMLs of the describe-instances operation for both scenarios.
tag-key filter:
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag-key" values="#[['classId']]">
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
tag:classId filter:
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag:classId">
            <ec2:values>
                <ec2:value value="#['123']" />
            </ec2:values>
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
Each time I receive an error like the following (for tag:classId):
ERROR 2021-03-29 08:32:49,693 [[MuleRuntime].uber.04: [ec2-play].ec2-playFlow.BLOCKING #1092a5bc] [processor: ; event: df5e2df0-908a-11eb-94b5-38f9d38da5c3] org.mule.runtime.core.internal.exception.OnErrorPropagateHandler:
********************************************************************************
Message : The filter 'null' is invalid (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 33e3bbfb-99ea-4382-932f-647662810c92; Proxy: null)
Element : ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances)
Element DSL : <ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag:classId">
<ec2:values>
<ec2:value value="#['123']"></ec2:value>
</ec2:values>
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
Error type : EC2:INVALID_PARAMETER_VALUE
FlowStack : at ec2-playFlow(ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances))
(set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
NOTE: The code works without a filter, returning all instances. But that isn't what I want or need. The more filtering I can do, the faster the response.
Does anyone have samples of the filter option working? Can you tell me what I'm doing wrong?
Thanks!
This surely is a bug. I tried the same thing and it was not working for me either. I enabled debug logging and found that the connector is not sending Filter.1.Name=tag:classId as a query parameter in the request. Here is the debug log I found (notice there is no Filter.1.Name=tag:classId in the query string):
DEBUG 2021-04-02 21:55:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Value.1=123"
However, when I used the Expression or Bean Reference option on the filters field and set the expression directly to [{name: 'tag:classId', values: ['123']}], it worked correctly. Here is the same debug log after that change:
DEBUG 2021-04-02 21:59:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Name=tag%3AclassId&Filter.1.Value.1=123"
Also, I want to point out some very odd behaviour: this does not work if you format [{name: 'tag:classId', values: ['123']}] across multiple lines in the expression; that gives an error during deployment.
When I deploy my RoR (4.2.6) application to ElasticBeanstalk, it appears that the initialization process is getting run four times. This is impacting the way I rely on a singleton instance of a job Scheduler object (using Rufus Scheduler).
In a couple of initializer files and in application.rb, I added a few log statements:
Here:
# /config/initializers/scheduler.rb
require 'rufus-scheduler'
::Rufus_lockfile = "/tmp/.rufus-scheduler.lock"
::Scheduler = Rufus::Scheduler.singleton(
  :lockfile => Rufus_lockfile
)
Rails.logger.info "1: started Scheduler #{Scheduler.object_id}"
And here:
# /config/initializers/cookies_serializer.rb
Rails.logger.info "2: some other initializer"
And here:
# /config/application.rb
require File.expand_path('../boot', __FILE__)
require 'rails/all'
Bundler.require(*Rails.groups)
module MyApp
  class Application < Rails::Application
    ...
    config.after_initialize do
      Rails.logger.info "3: after app is initialized"
    end
  end
end
After I run eb deploy and it completes, this is what I see at the top of app/log/production.log:
I, [2016-05-31T10:31:08.756302 #18753] INFO -- : 2: some other initializer
I, [2016-05-31T10:31:08.757252 #18753] INFO -- : 1: started Scheduler 47235057343600
I, [2016-05-31T10:31:08.896353 #18753] INFO -- : 3: after app is initialized
I, [2016-05-31T10:31:23.669517 #18817] INFO -- : 2: some other initializer
I, [2016-05-31T10:31:23.670380 #18817] INFO -- : 1: started Scheduler 46989489069800
I, [2016-05-31T10:31:23.806154 #18817] INFO -- : 3: after app is initialized
D, [2016-05-31T10:31:23.969103 #18817] DEBUG -- : ActiveRecord::SchemaMigration Load (1.3ms) SELECT "schema_migrations".* FROM "schema_migrations"
I, [2016-05-31T10:31:33.108449 #18897] INFO -- : 2: some other initializer
I, [2016-05-31T10:31:33.109513 #18897] INFO -- : 1: started Scheduler 47156425207060
I, [2016-05-31T10:31:33.116500 #18901] INFO -- : 2: some other initializer
I, [2016-05-31T10:31:33.117374 #18901] INFO -- : 1: started Scheduler 47156425216940
I, [2016-05-31T10:31:33.790266 #18901] INFO -- : 3: after app is initialized
I, [2016-05-31T10:31:33.844517 #18897] INFO -- : 3: after app is initialized
So it looks like the initializer files and even the code in my after_initialize block are getting run four times... and I can't figure out why.
The question was also asked on the rufus-scheduler issue tracker and answered there: https://github.com/jmettraux/rufus-scheduler/issues/208
I cannot seem to get around a Chef error involving similarly named attributes in my attributes/default.rb file.
I have 2 attributes:
default['test']['webservice']['https']['keyManagerPwd'] = 'password'
...
...
default['test']['webservice']['https']['keyManagerPwd']['type'] = 'encrypted'
Notice that, up until the last bracket (['type']), the names are identical.
I am referencing these attributes in a template and in a template block in the recipe. When I go to run it, I receive this error:
==================================================
I, [2015-01-28T13:36:43.668692 #7920] INFO -- core-14-2-centos-65: Recipe Compile Error in /tmp/kitchen/cache/cookbooks/avx/attributes/default.rb
I, [2015-01-28T13:36:43.669192 #7920] INFO -- core-14-2-centos-65: =================================================================
I, [2015-01-28T13:36:43.669192 #7920] INFO -- core-14-2-centos-65:
I, [2015-01-28T13:36:43.669692 #7920] INFO -- core-14-2-centos-65: IndexError
I, [2015-01-28T13:36:43.669692 #7920] INFO -- core-14-2-centos-65: ---------
I, [2015-01-28T13:36:43.669692 #7920] INFO -- core-14-2-centos-65: string not matched
I, [2015-01-28T13:36:43.670192 #7920] INFO -- core-14-2-centos-65:
I, [2015-01-28T13:36:43.670192 #7920] INFO -- core-14-2-centos-65: Cookbook Trace:
I, [2015-01-28T13:36:43.670692 #7920] INFO -- core-14-2-centos-65: --------
I, [2015-01-28T13:46:05.101875 #8332] INFO -- core-14-2-centos-65: 113>> default['webservice']['https']['keyManagerPwd']['type'] = 'encrypted'
It seems as if Chef cannot distinguish between two attributes when the only difference is the final key.
If I modify the same attributes by placing some unique text in the name, there is no issue with the recipe at all:
default['test']['1']['webservice']['https']['keyManagerPwd'] = 'password'
...
...
default['test']['2']['webservice']['https']['keyManagerPwd']['type'] = 'encrypted'
Putting the ['1'] and ['2'] in there works around the problem.
I am fairly new to Chef, so I'm thinking it's just something simple I'm overlooking. Does anyone have any ideas or suggestions? Thanks.
Simple answer: you cannot do this. That's not a Chef problem, nor a Ruby problem; you would hit it in most programming languages.
Let's use foo as a variable instead of the lengthy default['test']['webservice']['https']['keyManagerPwd'].
What you effectively do is
1: foo = "password"
2: foo['type'] = "encrypted"
In line 1, foo is a string. In line 2, it is treated as a hash (called an associative array or dictionary in some other languages). A single value cannot be both at once, so line 2 cannot work against the string; for it to work, the string assignment would have to be thrown away and replaced by a hash first, effectively:
1: foo = "password"
2: foo = {}
3: foo['type'] = "encrypted"
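Incidentally, this is also where the IndexError: string not matched in the compile trace comes from: while foo is still the string "password", foo['type'] = "encrypted" calls String#[]=, which only replaces substrings. A quick irb illustration:

foo = "password"
foo['type'] = "encrypted"
# => IndexError: string not matched
# String#[]= looks for the substring 'type' inside "password" to replace it;
# since it is not there, Ruby raises IndexError rather than creating a hash key.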
The alternative would be to use
foo['something'] = "password"
foo['type'] = "encrypted"
Or translated to your code:
default['test']['webservice']['https']['keyManagerPwd']['something'] = 'password'
default['test']['webservice']['https']['keyManagerPwd']['type'] = 'encrypted'
This should work.
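For completeness, here is a sketch of how the restructured attributes might then be consumed in a recipe and passed to a template; the resource path and template name below are hypothetical, not from the original cookbook:

# Hypothetical recipe snippet using the restructured attributes
pwd      = node['test']['webservice']['https']['keyManagerPwd']['something']
pwd_type = node['test']['webservice']['https']['keyManagerPwd']['type']

template '/etc/webservice/https.conf' do   # hypothetical destination path
  source 'https.conf.erb'                  # hypothetical template in templates/default/
  variables(key_manager_pwd: pwd, key_manager_pwd_type: pwd_type)
end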
In Sinatra I have a route setup similar to the following:
put '/test' do
  begin
    logger.info 'In put begin block'
    write_to_file(request.body.read)
    [200, '']
  rescue RuntimeError => e
    [500, 'some error']
  end
end

def write_to_file(data)
  logger.info "writing data with size #{data.size}"
  # Code to save file...
end
When I send data that is < ~500 MB everything seems to work correctly, but when I attempt to send data that is >= 500 MB I get some strange log output and the client eventually errors out with the following error: Excon::Errors::SocketError: EOFError (EOFError)
The logs from the server (Sinatra) look like the following:
For data < 500 MBytes:
I, [2013-01-07T21:33:59.386768 #17380] INFO -- : In put begin block
I, [2013-01-07T21:34:01.279922 #17380] INFO -- : writing data with size 209715200
xxx.xxx.xxx.xxx - - [07/Jan/2013 21:34:22] "PUT /test " 200 - 22.7917
For data > 500 MBytes:
I, [2013-01-07T21:47:37.434022 #17386] INFO -- : In put begin block
I, [2013-01-07T21:47:41.152932 #17386] INFO -- : writing data with size 524288000
I, [2013-01-07T21:48:16.093683 #17380] INFO -- : In put begin block
I, [2013-01-07T21:48:20.300391 #17380] INFO -- : writing data with size 524288000
xxx.xxx.xxx.xxx - - [07/Jan/2013 21:48:39] "PUT /test " 200 - 62.4515
I, [2013-01-07T21:48:54.718971 #17386] INFO -- : In put begin block
I, [2013-01-07T21:49:00.381725 #17386] INFO -- : writing data with size 524288000
I, [2013-01-07T21:49:33.980043 #17267] INFO -- : In put begin block
I, [2013-01-07T21:49:41.990671 #17267] INFO -- : writing data with size 524288000
xxx.xxx.xxx.xxx - - [07/Jan/2013 21:50:06] "PUT /test " 200 - 110.2076
xxx.xxx.xxx.xxx - - [07/Jan/2013 21:51:22] "PUT /test " 200 - 108.5339
I'm not entirely sure what's going on here, so I guess my question is twofold: A. What is fundamentally different between these two cases that would cause them to behave this way? B. Is there a better way to handle the data to mitigate this?
The problem turned out to be with Apache/Passenger. Running the server with WEBrick alleviated the issue.
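As for part B of the question, independent of the Passenger issue, one way to be gentler with half-gigabyte uploads is to stream request.body to disk in chunks instead of loading it all into memory with request.body.read. A rough sketch; the destination path and chunk size are my own choices, not from the original code:

put '/test' do
  begin
    logger.info 'In put begin block'
    bytes = 0
    File.open('/tmp/upload.dat', 'wb') do |f|        # hypothetical destination path
      while (chunk = request.body.read(64 * 1024))   # Rack input streams support read(length)
        f.write(chunk)
        bytes += chunk.bytesize
      end
    end
    logger.info "writing data with size #{bytes}"
    [200, '']
  rescue RuntimeError => e
    [500, 'some error']
  end
end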