I have some R code inside a file called analyse.r. From the command line (CMD), I would like to run the code in that file without having to go through the R terminal, and I would also like to pass parameters and use those parameters in my code, something like the following pseudocode:
C:\>(execute r script) analyse.r C:\file.txt
and this would execute the script and pass "C:\file.txt" as a parameter to the script, which could then use it for further processing.
How do I accomplish this?
You want Rscript.exe.
You can control the output from within the script -- see sink() and its documentation.
You can access command-arguments via commandArgs().
You can control command-line arguments more finely via the getopt and optparse packages.
If everything else fails, consider reading the manuals or the contributed documentation.
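For example, a minimal sketch of an analyse.r that reads the file path from the question via commandArgs() (the file format is an assumption; adjust the reader to your data):

#!/usr/bin/env Rscript
# trailingOnly = TRUE keeps only the user-supplied arguments,
# dropping the interpreter's own ones.
args <- commandArgs(trailingOnly = TRUE)
if (length(args) < 1) stop("usage: Rscript analyse.r <file>")
infile <- args[1]
dat <- read.table(infile, header = TRUE)  # assumes a whitespace-separated file with a header
print(summary(dat))

Invoked as Rscript analyse.r C:\file.txt, args[1] will hold "C:\file.txt".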
Identify where R is installed. On Windows 7 the path could be:
C:\Program Files\R\R-3.2.2\bin\x64
Then call the R script from that directory:
C:\Program Files\R\R-3.2.2\bin\x64>Rscript Rcode.r
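If you add that bin\x64 directory to your PATH, you can call Rscript from any directory and pass arguments along with the script name (a sketch; the script and file names are placeholders):

C:\>Rscript analyse.r C:\file.txt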
There are two ways to run an R script from the command line (Windows or Linux shell).
1) R CMD way
R CMD BATCH followed by the R script name. The output from this can also be redirected to other files as needed.
This way is a bit dated, however, and using Rscript is becoming more popular.
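For the BATCH form, a sketch (a.R and a.Rout are placeholder names):

R CMD BATCH --no-save --no-restore a.R a.Rout

The console output, including anything written by print() or cat(), lands in a.Rout.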
2) Rscript way
(This is supported on all platforms; the following example, however, was tested only on Linux.)
This example involves passing the path of a csv file, the function name, and the attribute (row or column) index of the csv file on which this function should work.
Contents of test.csv file
x1,x2
1,2
3,4
5,6
7,8
Compose an R file “a.R” whose contents are
#!/usr/bin/env Rscript

cols <- function(y){
  cat("This function will print sum of the column whose index is passed from commandline\n")
  cat("processing...column sums\n")
  su <- sum(data[, y])
  cat(su)
  cat("\n")
}

rows <- function(y){
  cat("This function will print sum of the row whose index is passed from commandline\n")
  cat("processing...row sums\n")
  su <- sum(data[y, ])
  cat(su)
  cat("\n")
}

# Call a function based on its name from the command line; y is the row or column index
FUN <- function(run_func, y){
  switch(run_func,
         rows = rows(as.numeric(y)),
         cols = cols(as.numeric(y)),
         stop("Enter something that switches me!")
  )
}

args <- commandArgs(TRUE)
cat("you passed the following at the command line\n")
cat(args); cat("\n")

filename <- args[1]
func_name <- args[2]
attr_index <- args[3]

data <- read.csv(filename, header = TRUE)
cat("Matrix is:\n")
print(data)
cat("Dimensions of the matrix are\n")
cat(dim(data))
cat("\n")

FUN(func_name, attr_index)
Running the following on the Linux shell
Rscript a.R /home/impadmin/test.csv cols 1
gives
you passed the following at the command line
/home/impadmin/test.csv cols 1
Matrix is:
x1 x2
1 1 2
2 3 4
3 5 6
4 7 8
Dimensions of the matrix are
4 2
This function will print sum of the column whose index is passed from commandline
processing...column sums
16
Running the following on the Linux shell
Rscript a.R /home/impadmin/test.csv rows 2
gives
you passed the following at the command line
/home/impadmin/test.csv rows 2
Matrix is:
x1 x2
1 1 2
2 3 4
3 5 6
4 7 8
Dimensions of the matrix are
4 2
This function will print sum of the row whose index is passed from commandline
processing...row sums
7
We can also make the R script executable, as follows (on Linux):
chmod a+x a.R
and run the second example again as
./a.R /home/impadmin/test.csv rows 2
This should also work from the Windows command prompt. Save the following in a text file:
f1 <- function(x, y){
  print(x)
  print(y)
}
args <- commandArgs(trailingOnly = TRUE)
f1(args[1], args[2])
Now run the following command in the Windows cmd:
Rscript.exe path_to_file "hello" "world"
This will print the following
[1] "hello"
[1] "world"
I have a CSV file like below:
E Run 1 Run 2 Run 3 Run 4 Run 5 Run 6 Mean
1 0.7019 0.6734 0.6599 0.6511 0.701 0.6977 0.680833333
2 0.6421 0.6478 0.6095 0.608 0.6525 0.6285 0.6314
3 0.6039 0.6096 0.563 0.5539 0.6218 0.5716 0.5873
4 0.5564 0.5545 0.5138 0.4962 0.5781 0.5154 0.535733333
5 0.5056 0.4972 0.4704 0.4488 0.5245 0.4694 0.485983333
I'm trying to find the row number where the final column has a value below a certain threshold. For example, below 0.6.
Using the above CSV file, I want to return 3 because E = 3 is the first row where Mean <= 0.60. If there is no value below 0.6 I want to return 0. I am in effect returning the value in the first column based on the final column.
I plan to initialize this number as a constant in gnuplot. How can this be done? I've tagged awk because I think it's related.
In case you want a gnuplot-only version... if you use a file, remove the datablock and replace $Data by your filename in double quotes.
Edit: You can do it without a dummy table; it can be done more briefly with stats (check help stats). It is even shorter than the accepted solution (well, we are not at code golf here), and additionally platform-independent because it's gnuplot-only.
Furthermore, in case E could be any number, i.e. 0 as well, then it might be better
to first assign E = NaN and then compare E to NaN (see here: gnuplot: How to compare to NaN?).
Script:
### conditional extraction into a variable
reset session
$Data <<EOD
E Run 1 Run 2 Run 3 Run 4 Run 5 Run 6 Mean
1 0.7019 0.6734 0.6599 0.6511 0.701 0.6977 0.680833333
2 0.6421 0.6478 0.6095 0.608 0.6525 0.6285 0.6314
3 0.6039 0.6096 0.563 0.5539 0.6218 0.5716 0.5873
4 0.5564 0.5545 0.5138 0.4962 0.5781 0.5154 0.535733333
5 0.5056 0.4972 0.4704 0.4488 0.5245 0.4694 0.485983333
EOD
E = NaN
stats $Data u ($8<=0.6 && E!=E? E=$1 : 0) nooutput
print E
### end of script
Result:
3.0
Actually, OP wants E=0 returned if the condition is never met. Then the script would be like this:
E=0
stats $Data u ($8<=0.6 && E==0? E=$1 : 0) nooutput
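E then behaves like any other gnuplot variable; for example, a hypothetical plot command using the datablock above:

plot $Data u 1:8 w lp title sprintf("first E with Mean <= 0.6: %g", E)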
Another awk. You could initialize the default return value to var ret in BEGIN, but since it's 0 there is really no point, as empty var+0 produces the same effect. If the threshold value of 0.6 is not met before the END is reached, that default is printed. If it is met, exit invokes the END block and ret is output:
$ awk '
NR>1 && $NF<0.6 { # final column has a value below a certain range
ret=$1 # I want to return 3 because E = 3
exit
}
END {
print ret+0
}' file
Output:
3
Something like this should do the trick:
awk 'NR>1 && $8<.6 {print $1;fnd=1;exit}END{if(!fnd){print 0}}' yourfile
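If you prefer the awk route but still want the number inside gnuplot, you can capture the command's output with gnuplot's system() call (a sketch; yourfile is a placeholder):

E = real(system("awk 'NR>1 && $8<.6 {print $1; fnd=1; exit} END{if(!fnd) print 0}' yourfile"))
print E    # 3.0 for the sample data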
I have a TCL script that has, say, 30 lines of automation code which I am executing in the dc shell (Synopsys Design Compiler). I want to stop and exit the script at line 10, exit the dc shell, and bring it back up again after performing a manual review. However, this time I want to run the script starting from line number 11, without having to execute the first 10 lines.
Instead of having two scripts, one which contains the code up to line number 10 and the other having the rest, I would like to use only one script and execute it from, let's say, line number N.
Something like:
source a.tcl -line 11
How can I do this?
If you have Tcl 8.6+ and if you consider re-modelling your script on top of a Tcl coroutine, you can realise this continuation behaviour in a few lines. This assumes that you run the script from an interactive Tcl shell (dc shell?).
# script.tcl
if {[info procs allSteps] eq ""} {
    # We are not re-entering (continuing), so start all over.
    proc allSteps {args} {
        yield;  # do not run the body while the coroutine is being created
        puts 1
        puts 2
        puts 3
        yield;  # step out, once the first sequence of steps (1-10) has been executed
        puts 4
        puts 5
        puts 6
        rename allSteps "";  # self-clean, once the remainder of steps (11-N) has run
    }
    coroutine nextSteps allSteps
}
nextSteps;  # run the coroutine up to the next yield
Pack your script into a proc body (allSteps).
Within the proc body, place a yield to mark the hold/continuation point after your first steps (e.g., after the 10th step).
Create a coroutine nextSteps based on allSteps.
Protect the proc and coroutine definitions so that they are not re-defined while steps are still pending.
Then, start your interactive shell and run source script.tcl:
% source script.tcl
1
2
3
Now, perform your manual review. Then, continue from within the same shell:
% source script.tcl
4
5
6
Note that you can run the overall 2-phased sequence any number of times (because of the self-cleanup of the coroutine proc: rename):
% source script.tcl
1
2
3
% source script.tcl
4
5
6
Again: All this assumes that you do not exit from the shell, and maintain your shell while performing your review. If you need to exit from the shell, for whatever reason (or you cannot run Tcl 8.6+), then Donal's suggestion is the way to go.
Update
If applicable in your case, you may improve the implementation by using an anonymous (lambda) proc. This simplifies the lifecycle management (avoiding re-definition, managing coroutine and proc, no need for a rename):
# script.tcl
if {[info commands nextSteps] eq ""} {
    # We are not re-entering (continuing), so start all over.
    coroutine nextSteps apply {args {
        yield;  # do not run the body while the coroutine is being created
        puts 1
        puts 2
        puts 3
        yield;  # step out, once the first sequence of steps (1-10) has been executed
        puts 4
        puts 5
        puts 6
    }}
}
nextSteps
The simplest way is to open the text file, parse it to get the first N commands (info complete is useful there), and then evaluate those (or the rest of the script). Doing this efficiently produces slightly different code when you're dropping the tail as opposed to when you're dropping the prefix.
proc ReadAllLines {filename} {
    set f [open $filename]
    set lines {}
    # A little bit careful in case you're working with very large scripts
    while {[gets $f line] >= 0} {
        lappend lines $line
    }
    close $f
    return $lines
}

proc SourceFirstN {filename n} {
    set lines [ReadAllLines $filename]
    set i 0
    set script {}
    foreach line $lines {
        append script $line "\n"
        if {[info complete $script] && [incr i] >= $n} {
            break
        }
    }
    info script $filename
    unset lines
    uplevel 1 $script
}

proc SourceTailN {filename n} {
    set lines [ReadAllLines $filename]
    set i 0
    set script {}
    for {set j 0} {$j < [llength $lines]} {incr j} {
        set line [lindex $lines $j]
        append script $line "\n"
        if {[info complete $script]} {
            if {[incr i] >= $n} {
                info script $filename
                set realScript [join [lrange $lines [incr j] end] "\n"]
                unset lines script
                return [uplevel 1 $realScript]
            }
            # Dump the prefix we don't need any more
            set script {}
        }
    }
    # If we get here, the script had fewer than n lines so there's nothing to do
}
Be aware that the kinds of files you're dealing with can get pretty large, and Tcl currently has some hard memory limits. On the other hand, if you can source the file at all, you're already within that limit…
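Hypothetical usage from an interactive shell, matching the question's 30-line script with a review after the 10th command (assuming each line holds one complete command):

SourceFirstN a.tcl 10   ;# run the first 10 complete commands, then review
SourceTailN a.tcl 10    ;# afterwards: run everything after the first 10 commands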
I have written a Python script that calls Unix sort using the subprocess module. I am trying to sort a table based on two columns (2 and 6). Here is what I have done:
sort_bt=open("sort_blast.txt",'w+')
sort_file_cmd="sort -k2,2 -k6,6n {0}".format(tab.name)
subprocess.call(sort_file_cmd,stdout=sort_bt,shell=True)
The output file, however, contains an incomplete line, which produces an error when I parse the table; yet when I check the corresponding entry in the input file given to sort, the line looks perfect. I guess there is some problem when sort tries to write the result to the specified file, but I am not sure how to solve it.
The line looks like this in the input file
gi|191252805|ref|NM_001128633.1| Homo sapiens RIMS binding protein 3C (RIMBP3C), mRNA gnl|BL_ORD_ID|4614 gi|124487059|ref|NP_001074857.1| RIMS-binding protein 2 [Mus musculus] 103 2877 3176 846 941 1.0102e-07 138.0
In the output file, however, only gi|19125 is printed. How do I solve this?
Any help will be appreciated.
Ram
Using subprocess to call an external sorting tool seems quite silly considering that Python has a built-in function for sorting items.
Looking at your sample data, it appears to be structured data with a | delimiter. Here's how you could open that file and iterate over the results in Python in a sorted manner:
import functools  # cmp_to_key lets sorted() use an old-style comparison function

def custom_sorter(first, second):
    """ A custom sort function which compares items
    based on the values in the 2nd and 6th columns. """
    # First, we break each line into a list.
    first_items, second_items = first.split(u'|'), second.split(u'|')  # Split on the pipe character.
    if len(first_items) >= 6 and len(second_items) >= 6:
        # We have enough items to compare.
        if (first_items[1], first_items[5]) > (second_items[1], second_items[5]):
            return 1
        elif (first_items[1], first_items[5]) < (second_items[1], second_items[5]):
            return -1
        else:  # They are the same.
            return 0  # Order doesn't matter then.
    else:
        return 0

with open(src_file_path, 'r') as src_file:
    data = src_file.read()  # Read in the src file all at once. Hope the file isn't too big!

with open(dst_sorted_file_path, 'w+') as dst_sorted_file:
    # Python 3's sorted() has no cmp= argument, so wrap the comparison with cmp_to_key.
    for line in sorted(data.splitlines(), key=functools.cmp_to_key(custom_sorter)):
        dst_sorted_file.write(line + '\n')  # splitlines() strips the newline; add it back.
FYI, this code may need some jiggling. I didn't test it too well.
What you see is probably the result of trying to write to the file from multiple processes simultaneously.
To emulate the sort -k2,2 -k6,6n ${tabname} > sort_blast.txt command in Python:
from subprocess import check_call
with open("sort_blast.txt",'wb') as output_file:
check_call("sort -k2,2 -k6,6n".split() + [tab.name], stdout=output_file)
You can write it in pure Python e.g., for a small input file:
def custom_key(line):
fields = line.split() # split line on any whitespace
return fields[1], float(fields[5]) # Python uses zero-based indexing
with open(tab.name) as input_file, open("sort_blast.txt", 'w') as output_file:
L = input_file.read().splitlines() # read from the input file
L.sort(key=custom_key) # sort it
output_file.write("\n".join(L)) # write to the output file
If you need to sort a file that does not fit in memory, see Sorting text file by using Python.
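For completeness, a minimal external merge sort along the same lines (an illustrative sketch only, reusing custom_key from above; the chunk size is an arbitrary assumption):

import heapq
import tempfile

def external_sort(in_path, out_path, key, chunk_lines=100000):
    # Sort fixed-size chunks in memory and spill each sorted run to a temp file...
    runs = []
    with open(in_path) as f:
        while True:
            chunk = [line for _, line in zip(range(chunk_lines), f)]
            if not chunk:
                break
            chunk = [l if l.endswith("\n") else l + "\n" for l in chunk]
            chunk.sort(key=key)
            tmp = tempfile.TemporaryFile("w+")
            tmp.writelines(chunk)
            tmp.seek(0)
            runs.append(tmp)
    # ...then lazily merge the sorted runs into the output file.
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*runs, key=key))

external_sort(tab.name, "sort_blast.txt", key=custom_key)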
I am new to scripting and I need some help. I have something like a bazillion files that look like this:
Assign F2 Height
3IleN 2.34025e+07
4PheN 2.05028e+07
6LysN 1.43672e+07
7ThrN 1.49120e+07
8LeuN 1.30838e+07
9ThrN 1.44298e+07
And I want it to look like this, and to save it in another file with the same name as the original but with "MOD" written at the beginning:
Number AA Height
3 IleN 6.20756e+07
4 PheN 5.26499e+07
7 ThrN 3.00216e+07
8 LeuN 3.26377e+07
9 ThrN 4.03901e+07
10 GlyN 2.73659e+07
12 ThrN 3.16319e+07
13 IleN 5.94604e+07
If you could please describe and explain the parameters used, that would be of great help.
Thanks!
The following should work for you:
sed 's/^\([0-9]*\)/\1 /' filename
Here s/pattern/replacement/ is sed's substitute command: ^\([0-9]*\) matches and captures the run of digits at the start of each line, and \1 in the replacement writes those captured digits back followed by a space, so the residue number and the name end up in separate columns.
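To process many files at once and write each result to a MOD-prefixed copy, you could wrap the same substitution in a shell loop (a sketch; the *.txt glob is an assumption, and the 1d plus echo swap the old header for the new one shown in the desired output):

for f in *.txt; do
  { echo "Number AA Height"; sed '1d; s/^\([0-9]*\)/\1 /' "$f"; } > "MOD$f"
done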
I want to see what functions are called in my user-space C99 program and in what order. Also, which parameters are given.
Can I do this with DTrace?
E.g. for program
#include <stdio.h>
int g(int a, int b) { puts("I'm g"); return 0; }
int f(int a, int b) { g(5+a, b); g(8+b, a); return 0; }
int main() { f(5, 2); f(5, 3); }
I want to see a text file with:
main(1,{"./a.out"})
f(5,2);
g(10,2);
puts("I'm g");
g(10,5);
puts("I'm g");
f(5,3);
g(10,3);
puts("I'm g");
g(11,5);
puts("I'm g");
I don't want to modify my source, and the program is really huge: nine thousand functions.
I have all sources; I have a program with debug info compiled into it, and gdb is able to print function parameters in backtrace.
Is the task solvable with DTrace?
My OS is one of BSD, Linux, MacOS, Solaris. I prefer Linux, but I can use any of listed OS.
Here's how you can do it with DTrace:
script='pid$target:a.out::entry,pid$target:a.out::return { trace(arg1); }'
dtrace -F -n "$script" -c ./a.out
The output of this command looks as follows on FreeBSD 14.0-CURRENT:
dtrace: description 'pid$target:a.out::entry,pid$target:a.out::return ' matched 17 probes
I'm g
I'm g
I'm g
I'm g
dtrace: pid 39275 has exited
CPU FUNCTION
3 -> _start 34361917680
3 -> handle_static_init 140737488341872
3 <- handle_static_init 2108000
3 -> main 140737488341872
3 -> f 2
3 -> g 2
3 <- g 32767
3 -> g 5
3 <- g 32767
3 <- f 0
3 -> f 3
3 -> g 3
3 <- g 32767
3 -> g 5
3 <- g 32767
3 <- f 0
3 <- main 0
3 -> __do_global_dtors_aux 140737488351184
3 <- __do_global_dtors_aux 0
The annoying thing is that I've not found a way to print all the function arguments (see How do you print an associative array in DTrace?). A hacky workaround is to add trace(arg2), trace(arg3), etc. The problem is that for nonexistent arguments there will be garbage printed out.
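If you know the arity of the functions you care about, another hedged workaround is an explicit printf() per probe, e.g. for the two-argument functions f and g from the question (arg0 and arg1 are only meaningful for functions that really take two integer arguments):

pid$target:a.out:f:entry,
pid$target:a.out:g:entry
{
    printf("%s(%d, %d)\n", probefunc, arg0, arg1);
}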
Yes, you can do this with dtrace. But you probably will never be able to do it on Linux. I've tried multiple versions of the Linux port of dtrace and it has never done what I wanted; in fact, it once caused a CPU panic. Download the DTrace toolkit from http://www.brendangregg.com/dtrace.html, set your PATH accordingly, and then execute this:
dtruss -a yourprogram args...
Your question is exceedingly likely to be misguided. For any non-trivial program, printing the sequence of all function calls executed, with their parameters, will result in multi-MB or even multi-GB output that you will not be able to make any sense of (too much detail for a human to understand).
That said, I don't believe you can achieve what you want with dtrace.
You might begin by using GCC's -finstrument-functions flag, which easily allows you to print function addresses on entry to and exit from every function. You can then trivially convert addresses into function names with addr2line. This gives you what you asked for (except parameters).
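A minimal sketch of the two hooks (the hook names are fixed by GCC; the no_instrument_function attribute keeps the hooks from tracing themselves):

/* trace.c -- link this in and compile everything with -finstrument-functions */
#include <stdio.h>

__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *fn, void *call_site) {
    fprintf(stderr, "enter %p\n", fn);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *fn, void *call_site) {
    fprintf(stderr, "exit  %p\n", fn);
}

Compile with gcc -g -finstrument-functions program.c trace.c and feed the printed addresses to addr2line -f -e a.out.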
If the result doesn't prove to be too much detail, you can set a breakpoint on every function in GDB (with rb . command), and attach continue command to every breakpoint. This will result in a steady stream of breakpoints being hit (with parameters), but the execution will likely be at least 100 to 1000 times slower.