In web2py, is there a way to have a piece of common code be executed before all controllers are called?
For example, I want to add some code that will log client IPs to a request log so I can analyze them later. I could simply make the first line of every controller something like response = RequestBase(request), but I'm curious to know whether this is a problem that has already been solved through some other mechanism.
You could simply put your piece of logging code in the model definition file, models/db.py, or in your controller controllers/default.py like this:
with open("mylog.log", "at") as f:
    f.write(repr(request))

def index():
    # index controller definition
    # ... rest of the code
or, if you need functions or classes to be defined:
# --------------------------
# Log part:
# --------------------------
def my_log(request):
    with open("mylog.log", "at") as f:
        f.write(repr(request))

my_log(request)
# --------------------------
# Controllers part:
# --------------------------
def index():
    # index controller definition
    # ... rest of the code
Of course, repr(request) is probably not the exact format you want, but you get the idea: from there you can log any information you like before the controllers are called (at this stage they are only being defined).
The server already maintains a log in the root directory, in httpserver.log.
Place the code in a model file and it will get executed before any controllers. If you only want the code to execute for a specific controller, place it at the top of the controller before any functions.
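For the client-IP logging mentioned in the question, here is a minimal sketch of such a model file (the file name 0_logging.py is an assumption; any model file works, since web2py runs model files, in alphabetical order, before the controller and injects request into their environment):
# models/0_logging.py  (hypothetical name; any model file runs before the controller)
def log_client_ip():
    # request.client is web2py's best guess at the client IP,
    # request.now is the request timestamp, request.env.path_info the URL path
    with open("requests.log", "at") as f:
        f.write("%s %s %s\n" % (request.now, request.client, request.env.path_info))

log_client_ip()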
Related
I'm confused about factories.
@pytest.fixture
def a_api_request_factory():
    return APIRequestFactory()
class TestUserProfileDetailView(TestCase):
    def test_create_userprofile(self, up=a_user_profile, rf=a_api_request_factory):
        """creates an APIRequest and uses an instance of UserProfile from a_user_profile to test a view user_detail_view"""
        request = rf().get('/api/userprofile/')  # the problem line
        request.user = up.user
        response = userprofile_detail_view(request)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data['user'], up.user.username)
If I take out the parens from rf().get.... then I get "function doesn't have a get attribute".
If I call it directly then it gives me:
"Fixture "a_api_request_factory" called directly. Fixtures are not
meant to be called directly, but are created automatically when test
functions request them as parameters. See
https://docs.pytest.org/en/stable/fixture.html for more information
about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly
about how to update your code."
I do believe I've hit every combination of with or without parens in all relevant locations. Where do the parens go for fixtures?
Or better yet is there a pattern to avoid this type of confusion completely?
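One pattern that sidesteps the parentheses question entirely is sketched below (assuming pytest with Django REST framework; a_user_profile and userprofile_detail_view are names taken from the question and assumed to be defined or imported elsewhere): request fixtures by naming them as parameters of a plain test class or function, and never call them. Note that pytest cannot inject fixtures as parameters of unittest.TestCase methods, which is why the default-argument trick fails.
import pytest
from rest_framework.test import APIRequestFactory

@pytest.fixture
def api_request_factory():
    return APIRequestFactory()

class TestUserProfileDetailView:  # a plain class, not a unittest.TestCase
    def test_create_userprofile(self, api_request_factory, a_user_profile):
        # pytest passes in the fixture *values*, so no parentheses are needed
        request = api_request_factory.get('/api/userprofile/')
        request.user = a_user_profile.user
        response = userprofile_detail_view(request)  # imported from the app under test
        assert response.status_code == 200
        assert response.data['user'] == a_user_profile.user.username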
I am trying to build a custom resource which would in turn use another of my custom resources as part of its action. The pseudo-code would look something like this:
# custom resource A
property :component_id, String

action :doSomething do
  component_id = 1 if component_id.nil?
  node.default['component_details'][component_id] = ''

  customResource_b "Get me component details" do
    comp_id component_id
    action :get_component_details
  end

  Chef::Log.info("See the output computed by my customResourceB")
  Chef::Log.info(node['component_details'][component_id])
end
Thing to note:
1. The role of customResource_b is to make a PowerShell call to a REST web service and store the JSON result in node['component_details'][component_id], overriding its value. I am creating this node attribute in this resource because I know it will be used later on, hence avoiding compile-time issues.
Issues I am facing:
1. When testing a simple recipe that calls this resource with chef-client, the code in the resource runs all the way to the last log line, and only after that is the call to customResource_b made, which is not what I expect.
Any advice would be appreciated. I am also quite new to Chef, so any design improvements are welcome as well.
What you are seeing is Chef's usual two-phase behavior: plain Ruby inside an action block runs as the block is evaluated, while resources declared inside it are added to the resource collection and converged afterwards. There is no need to nest Chef resources; rather, use Chef's idempotence, guards, and notifications.
And, as usual, you can always use a condition to decide which cookbook/recipe to run.
I'm trying to automate the following scenario with locust:
1. Log in to the application (put it in on_start, so every session logs in first) and get the token value from the login response.
2. Create an organization.
3. Create a user.
I need these calls to be executed in the order shown.
However, if I add @task for the 2nd and 3rd steps, it will pick these calls randomly, which causes my code to break.
Any suggestions?
Use Locust's TaskSequence class:
# assumes a pre-1.0 Locust, e.g.: from locust import TaskSequence, seq_task, task

class SequentialTasks(TaskSequence):
    def on_start(self):
        # login to application and get token value from response of login call
        pass

    @seq_task(1)  # the first thing to do
    @task(n)      # do it n times (n is a placeholder weight)
    def create_org(self):
        # create org
        pass

    @seq_task(2)  # the second thing to do
    @task(n)      # do it n times
    def create_user(self):
        # create user
        pass
You can just do it all in a single task. You are not limited to a single HTTP call per task (you could even put it all in on_start if you want to).
class MyTaskSet(TaskSet):
    def on_start(self):
        # do login
        self.token = ...

    @task
    def create_task(self):
        # create org
        self.client.post(...)
        # create user
        self.client.post(...)
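As an aside, newer Locust releases replaced TaskSequence/seq_task with SequentialTaskSet, which simply runs its tasks in declaration order. A rough sketch under that assumption (the endpoints, payloads, and token field are placeholders):
from locust import HttpUser, SequentialTaskSet, task, between

class OrgThenUser(SequentialTaskSet):
    def on_start(self):
        # log in once per simulated user and keep the token for later calls
        resp = self.client.post("/login", json={"username": "u", "password": "p"})
        self.token = resp.json().get("token")  # assumes the token comes back as JSON

    @task
    def create_org(self):
        self.client.post("/orgs", json={"name": "org"},
                         headers={"Authorization": "Bearer %s" % self.token})

    @task
    def create_user(self):
        self.client.post("/users", json={"name": "user"},
                         headers={"Authorization": "Bearer %s" % self.token})

class WebsiteUser(HttpUser):
    tasks = [OrgThenUser]
    wait_time = between(1, 2)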
I have a gem (e.g. mygem) and as is normal, I add mygem to a file by putting require "mygem" at the top. What if I have a method in mygem called finish_jobs and I want it to run in the following location:
require "mygem"
# code, code code
finish_jobs
How would I do that without forcing the user to add the method every time they use the gem?
Specifically, what I am trying to do is write a server app (with rack) and I need the methods in the body of the file to be processed before the server is started.
This is certainly possible.
Why not just add the code directly into the Gem (since it sounds like it is under your control and is not an external dependency)?
module MyGem
  def printSomething
    p 2 + 2
  end
  module_function :printSomething

  printSomething()
  # => 4
end
If this isn't what you had in mind, let me know and I can update the solution.
Also, see Kernel#at_exit
A more explanatory guide on Kernel#at_exit
I don't know of a way to do what you're describing.
One workaround would be to provide an API which accepts a block. This approach allows you to run code after the user's setup without exposing implementation details to them.
A user could call your library method, providing a block to set up their server:
require "mygem"
MyGem.code_code_code {
  # user's code goes here
}
Then, your library code would:
Accept the block
Call some library code
Here's an example implementation:
module MyGem
  # Run some user-provided code by `yield`-ing the block,
  # then run the gem's finalizer
  def self.code_code_code
    # Execute the block:
    yield
    # Finalize:
    finish_jobs
  end
end
This way, you can accept code from the user but still control setup and finalization.
I hope it helps!
I want to insert name of the method calling the logger methods into my log files. Not the whole stack trace, but the class, method and/or line number would be great.
In any method, one can use caller to get an array of strings, each of which contains the file, line number, and method name. I've come up with a pretty awful kludge using regexes and Enumerable#find to try to return the first non-logger stack frame. I guess it works, but if the locations of the logging Ruby files change in a different version of Rails, or I name my own files something to do with logs, it will break. The same goes for taking a given index from the top of the stack (I did this at first, then refactored one thing, and naturally it gave me the wrong frame).
Note that I'm not looking to just log the controller or action, as those can be retrieved easily. Mostly this is for stuff in the lib/ directory.
Isn't there an easy way to do this? I don't want to have to pass in __method__ every time I make a logging statement.
I've looked all over at different solutions for capturing the exact place (file, line number, method name) where I invoke any given logger instance method from within my rails app. To do this, you need to override Logger's format_message method, and a good place to do this is in your rails project's config/environment.rb file.
This is what I've come up with, which is good enough for me ;o)
class Logger
  def format_message(severity, timestamp, progname, msg)
    line = ''
    Kernel.caller.each { |entry|
      if (entry.include? Rails.root.to_s)
        line = " #{entry.gsub(Rails.root.to_s,'').gsub(/\/(.+)\:in `(.+)'/, "\\1 -> \\2")}"
        break
      end
    }
    "[#{timestamp.strftime("%Y%m%d.%H:%M:%S")}] #{severity}#{line}: #{msg}\n"
  end
end
Kernel.caller holds an array of the entire backtrace. If you look at it in its entirety, you'll see that most calls are internal, somewhere inside a gem well outside your project. I've found that by looping through Kernel.caller until I find the first entry that includes my Rails.root, I can get the line with the information I want to parse.
Example:
If I call Rails.logger.debug("Streamer class started!") from the start method of my Streamer class, the raw entry would look like this:
/Users/chikoon/www/my_rails_app/lib/streamer.rb:7:in `start'
so by the time it makes it through my formatter, I've got the timestamp, severity mode, the file path, line number, method name, and message:
[20140919.19:23:44] DEBUG lib/streamer.rb:7 -> start: Streamer class started!
I hope that helps get your wheels turning.
How about setting up log_tags to call __method__?
Blog::Application.configure do
  config.log_tags = [lambda { |req| __method__ }]
end