When developing an Ansible Collection, is it possible to get arbitrary logs written to a log file or to the console?
By that I mean something like a random print() statement to help with debugging. Or is the only way just to concatenate everything into your final return message?
Thank you
Question:
Is it possible to get arbitrary logs written to a log file/console?
Answer:
To me, your question looks similar to Is it possible to print out debugging logs while task is running in Ansible?.
According to the answer there, which cites the documentation,
Ansible executes each module, usually on the remote managed node, and collects return values. ...
Ansible modules normally return a data structure that can be registered into a variable ...
so such live output is not implemented. There is an option for Debugging modules during development
To see what is actually happening in the module
but that might not fit all of your cases.
Question:
This being a random print() statement to help debugging
Answer:
According to the Developer Guide » Debugging modules » Simple debugging:
Since print() statements do not work inside modules, raising an exception is a good approach if you just want to see some specific data. Put raise Exception(some_value) somewhere in the module and run it normally. Ansible will handle this exception, pass the message back to the control node, and display it.
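To make that concrete, here is a minimal sketch of a custom module using that trick; the module name, argument and message below are made up for illustration:

#!/usr/bin/python
# debug_demo.py -- hypothetical module, for illustration only
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(name=dict(type='str', required=True)))

    some_value = "name is: %s" % module.params['name']

    # Debugging aid: Ansible handles this exception and displays the message
    # on the control node when the task runs. Remove it once you are done.
    raise Exception(some_value)

    # Normal return path (unreachable while the debug exception is in place).
    module.exit_json(changed=False, msg=some_value)

if __name__ == '__main__':
    main()

Run the task normally and the exception message shows up in the task's failure output on the control node.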
The spf13/cobra Command offers a number of elegant tools for providing feedback to the user. I have more experience with Python/headless services, where the standard is to use logging libraries and then redirect to stdio if necessary.
However, the more I explore cobra, the more this feels like the wrong path. Instead, it seems I should send everything through cobra and then pick and choose from that buffer whatever should go to logging.
Is there any idiomatic guidance here?
I would suggest using the methods provided by cobra.Command for messages that are intended to be read by users.
Logs are usually used to show or save messages that will be read by developers (in this case, you), or by users who explicitly want to read the logs.
With that reasoning, you can actually use both of them. For example, you can
use c.Println("<success message>") to tell users that the command succeeded, and
emit Debug/Info/Error logs in your CLI app, which will be displayed (or saved to a logfile) if the user passes a --verbose flag to your app.
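A minimal sketch of that split, assuming the standard library log package and a --verbose flag (both are just one possible choice, not something cobra mandates):

package main

import (
	"io"
	"log"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var verbose bool

	rootCmd := &cobra.Command{
		Use:   "myapp",
		Short: "example of user-facing output vs. developer logs",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Developer-facing logs: silenced unless --verbose is passed.
			if !verbose {
				log.SetOutput(io.Discard)
			}
			log.Println("debug: starting work")

			// ... do the actual work here ...

			// User-facing message: always printed via the command itself.
			cmd.Println("done: command succeeded")
			return nil
		},
	}
	rootCmd.PersistentFlags().BoolVarP(&verbose, "verbose", "v", false, "enable developer logs")

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}

User-facing output goes through the command's own print helpers, while developer logs stay quiet unless --verbose is passed.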
In Go, how do you manage to write logs into multiple files based on the package name?
For example, in my current app I am trying to collect hardware stats from different packages called Netapp, IBM etc., all under the same application. I would like to write the logs from those packages into separate files such as /var/log/myapp/netapp.log and /var/log/myapp/ibm.log.
Any pointer or clue would be very helpful.
Thanks, James
One approach you could take is to implement the Observer pattern. It's a great approach when you need to make several things happen with the same input/event. In your case, logging the same input to different logs. You can find more information here.
In the situation you described, and following this example, you can do the following things:
Your different logging implementations (each with its own logging destination) can implement the Observer interface by putting the logging code for each implementation in its OnNotify method.
Create an instance of eventNotifier and register all your logging implementations with the eventNotifier.Register method. Something like:
notifier := eventNotifier{
    observers: map[Observer]struct{}{},
}
notifier.Register(netAppLogger)
notifier.Register(ibmLogger)
Use eventNotifier.Notify whenever and wherever you need to do logging, and it will invoke all registered logging implementations.
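A fuller sketch of the pieces above; the Observer/eventNotifier names follow the linked example, while the fileLogger type and the log paths are assumptions for illustration:

package main

import (
	"log"
	"os"
)

// Event is whatever you want to log; a plain string keeps the sketch simple.
type Event struct {
	Message string
}

// Observer is implemented by anything that wants to react to an event.
type Observer interface {
	OnNotify(Event)
}

// fileLogger writes every event it is notified about to its own log file.
type fileLogger struct {
	logger *log.Logger
}

func newFileLogger(path string) (*fileLogger, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return nil, err
	}
	return &fileLogger{logger: log.New(f, "", log.LstdFlags)}, nil
}

func (l *fileLogger) OnNotify(e Event) {
	l.logger.Println(e.Message)
}

// eventNotifier fans one event out to all registered observers.
type eventNotifier struct {
	observers map[Observer]struct{}
}

func (n *eventNotifier) Register(o Observer) {
	n.observers[o] = struct{}{}
}

func (n *eventNotifier) Notify(e Event) {
	for o := range n.observers {
		o.OnNotify(e)
	}
}

func main() {
	netAppLogger, err := newFileLogger("/var/log/myapp/netapp.log")
	if err != nil {
		log.Fatal(err)
	}
	ibmLogger, err := newFileLogger("/var/log/myapp/ibm.log")
	if err != nil {
		log.Fatal(err)
	}

	notifier := eventNotifier{
		observers: map[Observer]struct{}{},
	}
	notifier.Register(netAppLogger)
	notifier.Register(ibmLogger)

	// Every Notify call reaches all registered loggers.
	notifier.Notify(Event{Message: "collected hardware stats"})
}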
I want to copy the Scribd developers challenge, but build it using the Gosu framework in ruby. I know how to do most of it, except I'm not 100% sure how to do the following. I'd like a few ideas on the best way to approach this.
Other people (students) will be able to check their ruby code into the repo and I'd like to eventually run all the different bots against each other to determine a winner. Here are my questions about how I would do this.
There is a time limit and a RAM usage limit. How would you enforce these? Essentially, what I think I want to do is have the game class hold a board representation and then call each engine's main method, passing it the game board. The method should then return a move. If it doesn't return a move within the time limit, we move on to the next move. There should also be a RAM limit, so that engines can't just iterate over all the possibilities and keep every game state in memory.
Specifically, how can I spawn a process I can monitor and kill in ruby?
Time and RAM are concerns, sure, but the greater concern is security. Running arbitrary user code on your server invites attacks. What's to prevent a user from uploading code that monkey-patches your app code in order to cheat, sends spam from your server, or breaks things with FileUtils.rm_rf(__dir__) or while { fork }?
To run user code safely, you must run it in a sandbox. But I'll get back to that.
The simplest way to start (and solve the time/RAM problem) will be to...
Run user code in a separate process
Mandate that the user's script must define a class (or module) with a specific name, e.g. Bot, that implements your main interface. Write a wrapper script that takes as an argument the path to a user's script and reads the board data (as Marshaled data, or serialized to YAML or JSON) from $stdin. The script will then require the temporary file and pass the board data to Bot. Finally, it will take the output from Bot, marshal/serialize it, and write it to $stdout.
When a user uploads a script, your app will write it to a temporary file and run the above wrapper script (with e.g. Open3), passing it the marshaled/serialized board data on stdin, then reading and unmarshaling/deserializing the result from its stdout/stderr.
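A rough sketch of both sides, assuming Marshal for the serialization and a Bot#move method as the interface (all names here are illustrative, not a fixed API):

# run_bot.rb -- the wrapper script described above
user_script = ARGV.fetch(0)           # path to the user's uploaded script
board = Marshal.load($stdin.read)     # board data sent by the app

require File.expand_path(user_script) # must define the Bot class

move = Bot.new.move(board)
$stdout.write(Marshal.dump(move))

And in your app, something along these lines:

# run the wrapper in a separate process, passing the board on stdin
require 'open3'

stdout, stderr, status = Open3.capture3(
  'ruby', 'run_bot.rb', tempfile_path,   # tempfile_path holds the user's script
  stdin_data: Marshal.dump(board)
)
move = Marshal.load(stdout) if status.success?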
How does this solve the time/RAM problem? Well, since you're just running your wrapper script in a separate process by invoking ruby, you can lean on your OS's process-management features, thus removing the possibility of the user monkey-patching their way around those restrictions. If you google e.g. "limit process memory" along with the name of your OS you'll find lots of information. For example, for Linux this tool looks handy: timeout. With such a tool you might run e.g.:
$ timeout -t 60 -m 10000 ruby /path/to/user/script.rb
Security
Okay, so what about security? It's a hard problem, not least because Ruby is so flexible, and so I can't just tell you "this is the solution."
One thing you could do is run all user code in a virtual machine using e.g. Docker. This would make it easy to prevent the user code from accessing your (real) filesystem or the network. (In this case it may make sense to have a simple Ruby server running on the VM that can receive scripts and board data from your app, run the scripts, and respond with the results, since your app won't be able to directly invoke ruby on the VM.)
This still leaves a lot of room for mischief, though. It mitigates the damage that can be done by FileUtils.rm_rf or while { fork }, as you can just spin up a fresh VM, but that's still an inconvenience. To prevent those entirely, you really need a sandbox that reliably keeps the user from accessing methods and modules that could be used maliciously. There's no One True Way to do this in Ruby, alas, but there are some tools and some code out there that will help you get started. Googling "Ruby sandbox" will turn up a lot. One project I've found instructive is RubyFiddle, which is open source and so its code is available on GitHub. It will point you to jruby-sandbox, which does sandboxing with JRuby because Java, unlike (MRI) Ruby, does have mature sandboxing solutions.
I hope that's helpful. Good luck!
The team I am working with has bought a CloudPort license (from CrossCheck Networks), and we are currently facing the problem of not being able to implement any sort of logic in the service mocks (to control response selection). It would be something as simple as:
if (requestCounter++ == 1)
then
    response = $fn:Global(MyFirstXmlString)$   // <-- this is CloudPort syntax for variables
else
    response = $fn:Global(MySecondXmlString)$
We did not find any sample for using the DLL Plugin, and neither of the two JScript and VBScript Tasks works (i.e., our client machine gets back not the desired MySecondXmlString response but instead a Fault with
<faultstring>
ActiveX control '0e59f1d5-1fbe-11d0-8ff2-00a0d10038bc' cannot be instantiated
because the current thread is not in a single-threaded apartment.
</faultstring>.
Believe it or not, the fault above is obtained even if the JScript or VBScript task is left completely empty! It's hard for us to believe that all the logic functionality advertised in the CloudPort UI is fake and that nothing can help one implement the kind of logic described above.
Any help would be appreciated!
Thanks,
Pompi
PS: A bit more detail on why the kind of logic described above is needed: we use SoapSonar in our testing framework to fire requests at a BizTalk orchestration application. The CloudPort mocks are needed to simulate the environment of that BizTalk orchestration. We cannot control the individual mocked responses via SoapSonar requests: the client requests coming into CloudPort are made by production code and cannot be altered or controlled by our SoapSonar client. The only Tasks functionality that worked for us is a DB table acting as an offline channel between SoapSonar and CloudPort (SoapSonar writes to it and CloudPort reads from it). CloudPort's reading of, say, responseXml strings from the DB works fine, but we cannot find a way to implement any further behaviour-controlling logic on the CloudPort side. Hence this Stack Overflow posting. And thanks for having the patience to read this whole shenanigan :).
I don't think you can control this from the script.
The threading model should be controlled by the host, which I suppose uses Windows's "vbscript.dll" for the actual execution.
So if you cannot find any settings under the tool's options or in the help :), you should look in the registry keys for the threading options of that ActiveX control or of "vbscript.dll".
That is the "ThreadingModel" value; try changing it (you will also have to search the net for the possible values, I don't know them by heart).
There is also a chance that some other application (an antivirus?) has changed the path to the DLL that the COM interface should actually point to (see http://social.technet.microsoft.com/Forums/en-US/ieitpropriorver/thread/ac10bd5f-6d91-4aac-857c-0ed5758088ec).
Hope it helps.
I'm currently working on a command line tool, and since this is my first time designing a tool like this I have a few design questions, most notably how to handle a non-fatal error.
The tool that I'm working on starts a main server on a configurable port and, after that, an optional web server on a non-configurable port. If we then start the tool again (using a different port for the main server), we will obviously get a binding error when trying to start up the optional web server.
Since this is a non-fatal error (running the web server is optional), and based on UI experience, my initial thought was to print a clear error and carry on with the program. However, I've been told that, from a scripting standpoint, printing the error and then exiting is better practice.
So which is better?
You might also want to consider that people might want to write scripts which expect the invocation to succeed even if the webserver is already running.
If you define a default behavior of 'fail if the webserver is already running', then such scripts will have to parse your error message, or read and understand your return value, to figure out that the invocation failed for this particular reason (i.e. the webserver was already running).
Give them a way out of this and introduce a flag (argument) with which they can decide which behavior they want. In the absence of the flag, do the safer thing (i.e. error out if the webserver is running).
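For illustration, a small sketch of such a flag in Go; the flag name, port and exit code are arbitrary choices, not a convention:

package main

import (
	"flag"
	"fmt"
	"net"
	"os"
)

func main() {
	ignoreBusyWebPort := flag.Bool("ignore-busy-web-port", false,
		"carry on without the optional web server if its port is already in use")
	flag.Parse()

	// ... start the main server on its configurable port ...

	webListener, err := net.Listen("tcp", ":8080") // the non-configurable web port
	if err != nil {
		if *ignoreBusyWebPort {
			fmt.Fprintln(os.Stderr, "warning: web port busy, continuing without the web server")
		} else {
			fmt.Fprintln(os.Stderr, "error: web port busy:", err)
			os.Exit(1) // fail by default so scripts can rely on the exit status
		}
	} else {
		defer webListener.Close()
		// ... serve the optional web UI on webListener ...
	}

	// ... run the main server ...
}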