How can I output a matrix in Octave?

I have two files; the first one is the master, which calls the second one.
I want to output a matrix created in the second file while the first file runs, perhaps via printf in the master file.
I've tried it this way, but it didn't work, and it also shows the rows in place of the columns:
printf("[%f; %f; %f]\n",r)

If you want debugging output (especially inside a loop), first disable the pager with more off, and then either use disp(of_course_you_have_to_add_the_name_of_your_matrix_here), mention the variable without a trailing ;, or remove the trailing ; at the assignment:
more off
for k=1:2
  a = rand(2) * k; # or remove this trailing ; to echo the assignment
  a                # or just mention the variable without ;
  disp (a)         # or use disp, which doesn't show the variable name
endfor
which outputs
a =
0.80112 0.53222
0.48930 0.56336
0.80112 0.53222
0.48930 0.56336
a =
1.30374 1.85382
0.30519 0.42486
1.30374 1.85382
0.30519 0.42486
See that a is displayed twice: once with "a = " and once without

Create the following files in your current folder.
%%% in file: second.m
A = [1,2;3,4]; % adding a semicolon suppresses the output
%%% in file: master.m
% run 'second.m' script - this will add A on the workspace, since
% running this script is as if you had dumped the file's contents here
second
% call the value of A without a semicolon to show contents on screen
A
Then, from your Octave terminal, run the 'master.m' script:
master
This will display the contents of A on screen.

Related

Saving the contents of a file generated inside process script into Nextflow variable

I have a Nextflow process that uses a bash script (check_bam.sh) to generate a text file. The contents of that text file are either a 0 or some other number. I would like to extract that value and save it to a Nextflow variable so I can use it in a conditional: if the content of the file is 0, the Nextflow script should skip some processes, and if it is any non-zero number, the execution should be carried out completely. I am not having problems with Nextflow conditionals or with setting channels to empty, but with saving the value generated inside the script block into a Nextflow variable that is usable outside the process.
The process that generates the file (result_bam.txt) with the 0 or other number is as follows (I have simplified it to make it as clear as possible):
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}
What I am checking is the number of mapped reads in the BAM file, and I would like to save that number into a Nextflow variable because, if the number is zero, the execution should skip most of the following processes, but if the number is different from zero, it means that there are mapped reads in the file and the execution should continue as intended.
I have thought that maybe using cat result_bam.txt > $FOO or FOO=`cat result_bam.txt` could be a solution, but I don't know how to properly save it so the variable is usable between processes.
Use an env channel to grab the data from FOO=`cat result_bam.txt` and turn it into a channel.
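A minimal sketch of that suggestion, keeping the DSL1 syntax from the question and assuming a Nextflow version that supports the env output qualifier (FOO and the channel name are only illustrative):
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    env FOO into channel_check_bam  // emit the file's content as a value

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    FOO=\$(cat result_bam.txt)
    """
}
A downstream process could then take val n from channel_check_bam and guard itself with when: n.toInteger() > 0.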
A few things come to mind; hopefully I understand your problem correctly. Is check_bam.sh only counting lines of the BAM file?
The first option for me would be to check, from your pipeline, whether the BAM file has content. This might be useful: countLines_documentation. You should be cautious here, as a huge BAM file can lead to a memory exception (countLines "loads" the file).
The second option, maybe better, is to pass the file result_bam.txt into the channel channel_check_bam, and then run the following process only if the content of the file (the number in result_bam.txt) is greater than 0. So, when you connect this channel to the other process, you should read the content as:
input:
val bam_lines from channel_check_bam.map{ it.readLines() } // Gives a list of lines, so 1st line will be your number of mapped reads.
when:
bam_lines[0].toInteger() > 0
This way the process should run only when the number in result_bam.txt is > 0.
I was testing this with DSL2, so the code might need some small changes, but it works.
Cris Tuñí - Edit: 08/24/2021
Thanks to the help of DawidGaceck, I was able to edit my processes to run only when the number in the file was different from zero. My code ended up looking like this:
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam_process1,
                               channel_check_bam_process2

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}

process PROCESS1 {
    input:
    val bam_lines from channel_check_bam_process1.map{ it.readLines() }

    when:
    bam_lines[0].toInteger() > 0

    script:
    """
    foo bar baz
    """
}
Hope this helps anyone with the same question or a similar issue!

In MATLAB, how to store Unix stdout in cell array for multi-line commands?

I have to run Unix code on a remote system that does not have MATLAB. I call it like this:
% shared key between computers so no password required to type in
ssh_cmd = 'ssh login@ipaddress ';
for x = 1:nfiles
    cmd = sprintf('calcPosition filename%d', x);
    % as an example; basically it runs a C++ command
    % on that computer and I pass in a different filename as an input
    full_cmd = [ssh_cmd, cmd];
    [Status,Message] = system(full_cmd);
    % the stdout is a mix of strings and numbers
    % <code that parses Message and handles errors>
end
For a small test this takes about 30 seconds to run. If I set it so that
full_cmd = [ssh_cmd, cmd1; cmd2; cmdN]; % obviously not valid syntax here
% do the same thing as above, but no loop
it takes about 5 seconds, so most of the time is spent connecting to the other system. But Message is the combined stdout of all n files.
I know I can pipe the outputs to separate files, but I'd rather pass them back directly to MATLAB without additional I/O. So is there a way to get the stdout (Message) as a cell array (or Unix equivalent) for each separate command? Then I could just loop over the cells of Message and the remainder of the function would not need to be changed.
I thought about using an identifier text around each command call as a way to help parse Message, e.g.
full_cmd = [ssh_cmd, 'echo "begincommand "; ', cmd1, '; echo "begincommand "; ', cmd2]; % etc
But there must be something more elegant than that. Any tips?
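For what it's worth, that marker idea can be made to work with strjoin and strsplit; here is a rough, untested sketch (the marker string, the quoting of the remote command, and the assumption that each command's output ends with a newline are all illustrative):
sep = 'begincommand';
cmds = cell(1, nfiles);
for x = 1:nfiles
    cmds{x} = sprintf('echo %s; calcPosition filename%d', sep, x);
end
% one remote call: ssh login@ipaddress 'echo ...; cmd1; echo ...; cmd2; ...'
full_cmd = [ssh_cmd, '''', strjoin(cmds, '; '), ''''];
[Status, Message] = system(full_cmd);
% split the combined stdout into one cell per command; the piece before
% the first marker is empty, so drop it
parts = strsplit(Message, [sep, sprintf('\n')]);
parts = parts(2:end);
% parts{x} now holds the stdout produced for filename x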

Assign BASH variable from file with specific criteria

I have a config file whose last line contains data; I want to assign everything to the RIGHT of the = sign to a variable that I can display and call later in the script.
Example: /path/to/magic.conf:
foo
bar
ThisOption=foo.bar.address:location.555
What would be the best method in a bash shell script to read the last line of the file and assign everything to the right of the equal sign? In this case, foo.bar.address:location.555.
The last line always has what I want to target and there will only ever be a single = sign in the file that happens to be the last line.
Google and searching here yielded many close but not quite relevant results using sed/awk, and I couldn't come up with exactly what I'm looking for.
Use sed:
variable=$(sed -n 's/^ThisOption=//p' /path/to/magic.conf)
echo "The option is: $variable")
This works by finding and removing the ThisOption= marker at the start of the line, and printing the result.
Pure, simple bash. (Well... using the tail utility. :-) )
The advantage of this method is that it only requires you to know that the assignment will be on the last line of the file; it does not require you to know anything else about that line (such as the variable name to the left of the = sign - information you would need in order to use the sed option).
IMPORTANT: This method absolutely requires that the file be trusted 100%. As mentioned in the comments, any time you "eval" code without any sanitization there are grave risks (a la "rm -rf /" magnitude - don't run that...).
assignment_line=$(tail -n 1 /path/to/magic.conf)
eval ${assignment_line}
var_name=${assignment_line%%=*}
var_to_give_that_value=${!var_name}
Of course, if the variable that you want to hold the value is the one listed on the left side of the = in the file, then you can skip the last assignment and just use "${!var_name}" wherever you need it.
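If you only need the value itself (not a variable named after the left-hand side), a variant of the same tail idea avoids eval entirely by using parameter expansion - a small sketch:
assignment_line=$(tail -n 1 /path/to/magic.conf)
value=${assignment_line#*=}   # strip everything up to and including the first '='
echo "The option is: $value"  # foo.bar.address:location.555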

Stata delimit in command line

I am working on a .do file created by someone else. This person used a semicolon delimiter throughout the file. I am trying to go through the file and see what is going on. I like to do this by selecting a portion of the code and hitting the "Execute Selection (do)" button. However, the delimiter seems to be messing this up. Are there any workarounds?
Suppose your do-file looks like this:
#delimit ;
set obs
10 ;
gen x = _n ;
gen y = x^2 ;
gen z = x
^3;
Any time you highlight a selection and press "Execute selection (do)", Stata creates a temporary, self-contained do-file with the delimiter at its default of carriage return, and runs that:
"When a do-file begins execution, the delimiter is automatically set to
carriage return, even if it was called from another do-file that set the
delimiter to semicolon."
It does not run those commands sequentially from the console. Therefore, if you select the first 2 commands in the do-file above, the temporary do-file includes the #delimit call, whereas if you select the last 2 commands, the temporary do-file does not have this call and will throw a syntax error on the two-line commands.
One solution could be to copy-paste selections to a fresh do-file that just had the #delimit command at the beginning, and then run that.
You could also write a script to rid your do-file of semicolons. If a line does not end in a semicolon, then append the next line to the end of the current line, and check this line again. Depending on how complex the syntax is in your do-file, this would be more or less difficult.
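As a rough illustration of that approach (it splits on semicolons rather than joining lines, which amounts to the same thing, and it assumes no semicolons appear inside strings or comments; the file names are placeholders):
awk 'BEGIN { RS = ";" }
     { gsub(/\n/, " "); gsub(/^[ \t]+|[ \t]+$/, "");
       if ($0 != "" && $0 !~ /^#delimit/) print }' original.do > converted.do
Each semicolon-terminated command becomes a single carriage-return-delimited line, and the #delimit ; line itself is dropped.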
Another option is to comment out the lines you have already run by enclosing them with /* */ and to use exit; where you want to stop. You do have to be a little careful with local macros.

Customizing bash completion output: each suggestion on a new line

When you type something, you often use bash autocompletion: you start writing a command, for example, and press TAB to get the rest of the word.
As you have probably noticed, when multiple choices match your input, bash displays them like this:
foobar#myserv:~$ admin-
admin-addrsync admin-adduser admin-delrsync admin-deluser admin-listsvn
admin-addsvn admin-chmod admin-delsvn admin-listrsync
I'm looking for a way to display each possible completion on a new line, similar to the last column of ls -l. Even better, it would be perfect if I could apply a rule like this: "if you find fewer than 10 suggestions, display them one per line; if more, keep the current display".
bash prior to version 4.2 doesn't allow any control over the output format of completions, unfortunately.
Bash 4.2+ allows switching to 1-suggestion-per-line output globally, as explained in Grisha Levit's helpful answer, which also links to a clever workaround to achieve a per-completion-function solution.
The following is a tricky workaround for a custom completion.
Solving this problem generically, for all defined completions, would be much harder (if there were a way to invoke readline functions directly, it might be easier, but I haven't found a way to do that).
To test the proof of concept below:
Save to a file and source it (. file) in your interactive shell - this will:
define a command named foo (a shell function)
whose arguments complete based on matching filenames in the current directory.
(When foo is actually invoked, it simply prints its arguments in diagnostic form.)
Invoke as:
foo [fileNamePrefix], then press tab:
If between 2 and 9 files in the current directory match, you'll see the desired line-by-line display.
Otherwise (1 match or 10 or more matches), normal completion will occur.
Limitations:
Completion only works properly when applied to the LAST argument on the command line being edited.
When a completion is actually inserted in the command line (once the match is unambiguous), NO space is appended to it (this behavior is required for the workaround).
Redrawing the prompt the first time after printing custom-formatted output may not work properly: Redrawing the command line including the prompt must be simulated and since there is no direct way to obtain an expanded version of the prompt-definition string stored in $PS1, a workaround (inspired by https://stackoverflow.com/a/24006864/45375) is used, which should work in typical cases, but is not foolproof.
Approach:
Defines and assigns a custom completion shell function to the command of interest.
The custom function determines the matches and, if their count is in the desired range, bypasses the normal completion mechanism and creates custom-formatted output.
The custom-formatted output (each match on its own line) is sent directly to the terminal >/dev/tty, and then the prompt and command line are manually "redrawn" to mimic standard completion behavior.
See the comments in the source code for implementation details.
# Define the command (function) for which to establish custom command completion.
# The command simply prints out all its arguments in diagnostic form.
foo() { local a i=0; for a; do echo "\$$((i+=1))=[$a]"; done; }
# Define the completion function that will generate the set of completions
# when <tab> is pressed.
# CAVEAT:
# Only works properly if <tab> is pressed at the END of the command line,
# i.e., if completion is applied to the LAST argument.
_complete_foo() {
  local currToken="${COMP_WORDS[COMP_CWORD]}" matches matchCount
  # Collect matches, providing the current command-line token as input.
  IFS=$'\n' read -d '' -ra matches <<<"$(compgen -A file "$currToken")"
  # Count matches.
  matchCount=${#matches[@]}
  # Output in custom format, depending on the number of matches.
  if (( matchCount > 1 && matchCount < 10 )); then
    # Output matches in CUSTOM format:
    # print the matches line by line, directly to the terminal.
    printf '\n%s' "${matches[@]}" >/dev/tty
    # !! We actually *must* pass out the current token as the result,
    # !! as it will otherwise be *removed* from the redrawn line,
    # !! even though $COMP_LINE *includes* that token.
    # !! Also, by passing out a nonempty result, we avoid the bell
    # !! signal that normally indicates a failed completion.
    # !! However, by passing out a single result, a *space* will
    # !! be appended to the last token - unless the compspec
    # !! (mapping established via `complete`) was defined with
    # !! `-o nospace`.
    COMPREPLY=( "$currToken" )
    # Finally, simulate redrawing the command line.
    # Obtain an *expanded version* of `$PS1` using a trick
    # inspired by https://stackoverflow.com/a/24006864/45375.
    # !! This is NOT foolproof, but hopefully works in most cases.
    expandedPrompt=$(PS1="$PS1" debian_chroot="$debian_chroot" "$BASH" --norc -i </dev/null 2>&1 | sed -n '${s/^\(.*\)exit$/\1/p;}')
    printf '\n%s%s' "$expandedPrompt" "$COMP_LINE" >/dev/tty
  else # Just 1 match or 10 or more matches?
    # Perform NORMAL completion: let bash handle it by
    # reporting matches via array variable `$COMPREPLY`.
    COMPREPLY=( "${matches[@]}" )
  fi
}
# Map the completion function (`_complete_foo`) to the command (`foo`).
# `-o nospace` ensures that no space is appended after a completion,
# which is needed for our workaround.
complete -o nospace -F _complete_foo -- foo
bash 4.2+ (and, more generally, applications using readline 6.2+) support this with the use of the completion-display-width variable.
The number of screen columns used to display possible matches when performing completion. The value is ignored if it is less than 0 or greater than the terminal screen width. A value of 0 will cause matches to be displayed one per line. The default value is -1.
Run the following to set the behavior for all completions[1] for your current session:
bind 'set completion-display-width 0'
Or modify your ~/.inputrc[2] file to have:
set completion-display-width 0
to change the behavior for all new shells.
[1] See here for a method for controlling this behavior for individual custom completion functions.
[2] The search path for the readline init file is $INPUTRC, ~/.inputrc, /etc/inputrc, so modify the file appropriate for you.
