Here is the case:
There is an app called "Termux" on Android that gives me a terminal, and one of its add-ons exposes Android APIs: sensors, TTS engines, etc.
I wanted to write a Ruby script using this app, specifically this API, but there is a catch.
The script:
require 'json'
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = name (or partial name) of the sensor
-n = number of times the command will run
This returns:
{
  "BMI160 Gyroscope" => {
    "values" => [
      -0.03...,
      0.00...,
      1.54...
    ]
  }
}
I didn't copy and paste the exact values, but that's not the point. The point is that this command takes almost a full second to load, but there is a way to "make it faster":
If I use the "-d" argument instead of "-n", I can specify the delay in milliseconds between readings being written to STDOUT. The command still takes a full second to load, but once it has loaded, the delay works like a charm.
And since I didn't specify an "-n" count, it never stops, and that is the problem:
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't block my program, but how can I tell Ruby to return the last X lines of STDOUT from a command that hasn't finished and never will, given that %x'command' in Ruby waits for the command to return?
If I understood correctly, you need to connect to the stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# by running this program
# and open another terminal
# and start writing some data into data.txt
# you will see it appearing in this program output
# $ date >> data.txt
io_obj = IO.popen('tail -f ./data.txt')
until io_obj.eof?
  puts io_obj.readline
end
I found a built-in module that saved me: PTY. Its spawn method, plus some thread management, helped me keep a variable updated with the command's values each time the command output new bytes.
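For reference, here is a minimal sketch of that approach. It assumes the -d flag streams JSON blocks forever as described; the 100 ms delay and the latest variable are illustrative choices, not the original script:

require 'json'
require 'pty'

latest = nil # holds the most recent parsed reading

reader = Thread.new do
  # Spawn the never-ending command inside a pseudo-terminal (flags assumed as above).
  PTY.spawn('termux-sensor -s "BMI160 Gyro" -d 100') do |stdout, _stdin, _pid|
    buffer = +''
    stdout.each_line do |line|
      buffer << line
      begin
        latest = JSON.parse(buffer) # a complete JSON block arrived: keep it
        buffer = +''
      rescue JSON::ParserError
        # Block not complete yet; keep accumulating lines.
      end
    end
  end
end

# The main program keeps running; read `latest` whenever you need the newest values.
sleep 2
p latest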
I'm running a Java file from a BeanShell Sampler in JMeter, and the output shows up successfully in JMeter's cmd window. The output comprises a series of logger lines; I need to extract only a specific string from the cmd window and use it in another sampler.
Given that you run your program using e.g. ProcessBuilder, you should be able to access its output via the Process.getInputStream() method:
Process process = new ProcessBuilder("c:\\apps\\jmeter\\bin\\jmeter.bat", "-v").start();
String output = org.apache.commons.io.IOUtils.toString(process.getInputStream(), "UTF-8");
log.info("My program output is:");
log.info(output);
Also, I would recommend considering switching to the JSR223 Sampler and the Groovy language, as this way it will be much faster and easier:
def output = "jmeter.bat -v".execute().text
log.info('My program output is:')
log.info(output)
This Java BeanShell command redirects JMeter's console output, i.e. stdout, to a file:
System.setOut(new PrintStream(new BufferedOutputStream(new FileOutputStream("D:\\dir1\\dir2\\abc.out")),true));
Make sure the path to the file uses double backslashes.
When running a large set of tests using MsTest from the command line, I can see each test executing and its outcome logged in the window like so:
Passed Some.NameSpace.Test1
Passed Some.NameSpace.Test2
And so on for thousands of tests. Once completed, MsTest spits out a summary like this:
Summary
---------
Test run failed
Passed 2000
Failed 1
------------
Total 2001
At this point I either have to scroll backwards through the window, hunting for the needle in a haystack that is my single failing test, or open the huge XML result file and text-search for some keyword indicating a failed test.
Isn't there an easier way? Can I have MsTest report progress without dumping passed test names to the console (while still logging failed ones), or can I get a summary of just the failed tests at the end?
I think it's obvious what any command-line user wants: to follow progress AND know the outcome at the end, without having to read XML or browse the cmd window history.
Answering my own question: a simple wrapper/parser script that calls MsTest.exe and parses/summarizes the output, either the stdout or the .trx file, seems to be the only solution.
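For illustration, here is a minimal sketch of such a wrapper in C#. The MSTest.exe name, the pass-through of arguments, and matching failures by a leading "Failed" are assumptions to adapt to your setup:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class MsTestWrapper
{
    static void Main(string[] args)
    {
        var failed = new List<string>();
        var psi = new ProcessStartInfo("MSTest.exe", string.Join(" ", args))
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
        {
            string line;
            while ((line = process.StandardOutput.ReadLine()) != null)
            {
                Console.WriteLine(line); // echo progress as usual
                if (line.TrimStart().StartsWith("Failed"))
                    failed.Add(line);    // remember failures for the summary
            }
            process.WaitForExit();
        }
        Console.WriteLine();
        Console.WriteLine("Failed tests:");
        failed.ForEach(Console.WriteLine);
    }
}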
You could use TestContext.CurrentTestOutcome at the end of each test to determine whether the test failed, and then log all failed tests to a separate file.
[TestCleanup]
public void CleanUp()
{
    if (TestContext.CurrentTestOutcome == UnitTestOutcome.Failed)
    {
        TestContext.WriteLine("{0}.{1} ==> {2}", TestContext.FullyQualifiedTestClassName,
            TestContext.TestName, TestContext.CurrentTestOutcome);
        // Log the result to a file.
    }
}
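To fill in the file-logging placeholder, one possibility is a simple append; the log path here is an assumption:

// For example, in place of the "Log the result to a file" comment:
System.IO.File.AppendAllText(@"C:\temp\failed_tests.log",
    string.Format("{0}.{1} ==> {2}{3}", TestContext.FullyQualifiedTestClassName,
        TestContext.TestName, TestContext.CurrentTestOutcome, Environment.NewLine));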
I don't know if this could help you.
I have a Capistrano deploy file (Capfile) that is rather large, contains a few namespaces, and generally has a lot of information already in it. My ultimate goal is, using the Tinder gem, to paste the output of the entire deployment into Campfire. I have Tinder set up properly already.
I looked into using the Capistrano capture method, but that only works for the first host. Additionally, it would be a lot of work to go through everything and add something like:
output << capture 'foocommand'
Specifically, I am looking to capture the output of any deployment from that file into a variable (in addition to putting it to STDOUT so I can see it), then pass that variable to a function called notify_campfire. Since notify_campfire is called at the end of every task (regardless of the namespace), it should have the task name and the output available to it. Any thoughts on how to accomplish this would be greatly appreciated.
I recommend not messing with the Capistrano logger. Instead, use what Unix gives you: pipes.
cap deploy | my_logger.rb
where your logger reads from STDIN, records everything it sees, and pipes it back out to the appropriate stream.
As an alternative, the EngineYard cap recipes include a logger; this might be a useful reference if you do need to edit the code, but I recommend against doing so.
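Returning to the pipe approach, a minimal sketch of my_logger.rb might look like this; the deploy.log file name is an assumption:

#!/usr/bin/env ruby
# my_logger.rb -- tee-style pass-through: record everything while echoing it.
log = File.open('deploy.log', 'a')

STDIN.each_line do |line|
  STDOUT.puts line # pipe it back out so you still see the deploy live
  STDOUT.flush
  log.puts line    # keep a copy, e.g. to post to Campfire afterwards
end

log.close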
It's sort of a hackish means of solving your problem, but you could try running the deploy task in a Rake task and capturing the output using %x.
# ...in your Rakefile...
task :deploy_and_notify do
  output = %x[ cap deploy ] # Run your deploy task here.
  notify_campfire(output)
  puts output # Echo the output.
end
If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happened in the call stack, which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls; see traceback() for interactive and non-interactive R sessions.
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
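To inspect that dump afterwards, load it back into an interactive session; by default, dump.frames(to.file=TRUE) writes last.dump.rda in the working directory:

load("last.dump.rda") # written by dump.frames(to.file = TRUE)
debugger(last.dump)   # browse the call frames as they were at the error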
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() call.
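For instance, a minimal sketch of that pattern, where the file read stands in for whatever operation you're worried about:

safe_read <- function(path) {
  tryCatch(
    read.csv(path),
    error = function(e) {
      message("Could not read ", path, ": ", conditionMessage(e))
      NULL # return NULL instead of stopping the whole script
    }
  )
}

df <- safe_read("data.csv") # keeps running even if the file is missing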
Doing options(error = traceback) provides a little more information about the lines leading up to the error: it causes a traceback to appear when an error occurs, and for some errors it includes the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10, 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with debugging. findLineNum() finds which line of which function corresponds to a particular line of source code; setBreakpoint() takes the output of findLineNum, and calls trace() to set a breakpoint there.
These rely on having source reference debug information in the code. This is the default for code read by source(), but not for packages. To get the source references in package code, set the environment variable R_KEEP_PKG_SOURCE=yes, or within R, set options(keep.source.pkgs=TRUE), then install the package from source code. Read ?findLineNum for details on how to tell it to search within packages, rather than limiting the search to the global environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R. It should be, as it is in most other scripting languages.
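To see the effect, a minimal demonstration; the file name demo.R and the exact wording of the message are assumptions:

# demo.R
f <- function() stop("oops")
f()

# Run it non-interactively with the option set:
#   Rscript -e 'options(show.error.locations = TRUE); source("demo.R")'
# The error message now carries the source location after a '#',
# e.g. something like: (from demo.R#2)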
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow and how I was able to access the memory dump and error log after a non-interactive R failure.
1. I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
  setwd('~/myUsername/directoryForDump'); # Set the working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
  dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE); # First dump to file; this dump is not accessible by the R session.
  sink(file="error.log"); # Specify sink file to redirect all output.
  dump.frames(); # Dump again to be able to retrieve the error message and write it to the error log; this dump is accessible by the R session since it was not dumped to file.
  cat(attr(last.dump,"error.message")); # Print error message to file, along with simplified stack trace.
  cat('\nTraceback:');
  cat('\n');
  traceback(2); # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
  sink();
  q()
}))
2. Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options(keep.source=TRUE), but I have not tested whether that works. If you don't need line numbers, you can omit this option.
3. From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1, placed at the top of myMainScript.R, sets the error handling option for the non-interactive R session.
4. Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
5. An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
6. At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note that the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
7. At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where non-interactive R sessions are initiated at the command line and you want information retained about unexpected errors. Being able to dump memory to a file for inspecting working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitates speedy post-mortem debugging of what caused the error.
First set options(show.error.locations = TRUE), and then call traceback(). The error line number will be displayed after the #.