Checking a sudoku solution in Python (reading the matrix from a file)

I need to make a program that reads a matrix from a .txt file and checks whether the sudoku is solved correctly. The name of the .txt file is passed on the command line, and the program has to return either True or False.
The sudoku-checking part works, but I'm having trouble reading the matrix from the file given on the command line.
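Since the check itself already works, the missing piece is reading the file named on the command line via sys.argv. A minimal sketch, assuming the grid is stored as nine lines of nine space-separated digits (the checker below is only a stand-in for the existing one):

import sys

def read_matrix(path):
    # one row per line, numbers separated by whitespace
    with open(path) as f:
        return [[int(x) for x in line.split()] for line in f if line.strip()]

def is_valid_solution(grid):
    # every row, column and 3x3 box must contain exactly the digits 1-9
    digits = set(range(1, 10))
    rows = all(set(row) == digits for row in grid)
    cols = all(set(col) == digits for col in zip(*grid))
    boxes = all(
        {grid[r + i][c + j] for i in range(3) for j in range(3)} == digits
        for r in range(0, 9, 3) for c in range(0, 9, 3)
    )
    return rows and cols and boxes

if __name__ == "__main__":
    # the file name is the first command-line argument, e.g. python check_sudoku.py sudoku.txt
    matrix = read_matrix(sys.argv[1])
    print(is_valid_solution(matrix))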

Related

Saving the contents of a file generated inside process script into Nextflow variable

I have a Nextflow process that uses a bash script (check_bam.sh) to generate a text file. The contents of that text file are either a 0 or some other number. I would like to extract that value and save it in a Nextflow variable so I can use it in a conditional: if the content of the file is 0, the Nextflow script should skip some processes, and if it is any non-zero number, the execution should be carried out completely. I have no problem with Nextflow conditionals or with setting channels to empty; the problem is saving the value generated inside the script block into a Nextflow variable that can be used outside the process.
The process that generates the file (result_bam.txt) with the 0 or other number is as follows (I have simplified it to make it as clear as possible):
process CHECK_BAM {

    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}
What I am checking is the number of mapped reads in the BAM file, and I would like to save that number in a Nextflow variable. If the number is zero, the execution should skip most of the following processes; if it is non-zero, there are mapped reads in the file and the execution should continue as intended.
I have thought that maybe cat result_bam.txt > $FOO or FOO=`cat result_bam.txt` could be a solution, but I don't know how to save the value properly so that the variable is usable between processes.
Use an env channel to grab the data from FOO=`cat result_bam.txt` and turn it into a channel.
A few things come to mind here; hopefully I understand your problem correctly. Is check_bam.sh only counting lines of the BAM file?
The first option would be to check directly from your pipeline whether the BAM file has content. The countLines documentation might be useful here. You should be cautious, though, as a huge BAM file can lead to a memory exception (countLines loads the file).
The second option, maybe better, is to pass the file result_bam.txt into the channel channel_check_bam, and then run the following process only if the content of the file (the number in result_bam.txt) is greater than 0. So, when you connect this channel to the next process, you should read the content as:
input:
val bam_lines from channel_check_bam.map{ it.readLines() } // Gives a list of lines, so 1st line will be your number of mapped reads.
when:
bam_lines[0].toInteger() > 0
This way the process should run only when the number in result_bam.txt is > 0.
I was testing this with DSL2, so the code might need some small changes, but it works.
Cris Tuñí - Edit: 08/24/2021
Thanks to the help of DawidGaceck I could edit my processes to run only when the number in the file was different from zero. My code ended up looking like this:
process CHECK_BAM {

    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam_process1,
                               channel_check_bam_process2

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}

process PROCESS1 {

    input:
    val bam_lines from channel_check_bam_process1.map{ it.readLines() }

    when:
    bam_lines[0].toInteger() > 0

    script:
    """
    foo bar baz
    """
}
Hope this helps anyone with the same question or a similar issue!

How to implement file-substitution macro in bash?

I have a set of text files and a set of GoLang files. The GoLang files contain directives such as the following:
//go:embed hello.txt
var s string
I want to write a bash script which takes the above code and substitutes the following in its place:
var s string = "<contents of hello.txt>"
Specifically, I want the bash script to go through all GoLang source files and replace all go:embed/string declaration pairs with a string defined to be the contents of the file specified in the embed directive.
I'm wondering if there is an existing program which can be configured to do the above. Otherwise, I'm planning on writing the algorithm myself.
Further explanation:
I am trying to replicate GoLang's embed directive (https://tip.golang.org/pkg/embed/).
We are not yet on GoLang 1.16, so we cannot use this functionality, but we are replicating it as closely as possible so that moving over to the standard implementation is as painless as possible.
Below is an attempt at solving your problem:
for i in file1 file2; do
    awk '/^\/\/go:embed /{f=$2;next}/^var/&&f{printf"%s = \"",$0;system("cat "f);print"\"";f=0;next}1' < "$i" > "$i.new"
done
The awk script prints all normal lines; only when it encounters the embed directive is that line skipped (and the file name remembered in variable f). A subsequent line starting with var is then extended with the content of the file with the remembered name (using system() to call cat).
Beware: there are no error checks at all and no attempt to escape quotes. So for practical use, unless the file contents you are about to embed are known to be well-behaved, you probably have to take a more sophisticated approach.
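One way a more sophisticated variant could look, sketched in Python rather than awk (the regex, the single string variable per directive, and the .new output naming are all assumptions): json.dumps is used to produce a double-quoted, escaped literal, which Go accepts for ordinary ASCII text content.

import json
import re
import sys
from pathlib import Path

# Matches the pair:
#   //go:embed <file>
#   var <name> string
EMBED = re.compile(r'^//go:embed[ \t]+(\S+)[ \t]*\n(var\s+\w+\s+string)[ \t]*$', re.MULTILINE)

def expand(go_file: Path) -> str:
    src = go_file.read_text()
    def repl(match):
        content = (go_file.parent / match.group(1)).read_text()
        # json.dumps yields a double-quoted, escaped literal that Go accepts for ASCII text
        return f'{match.group(2)} = {json.dumps(content)}'
    return EMBED.sub(repl, src)

if __name__ == "__main__":
    for name in sys.argv[1:]:        # e.g. python embed_sub.py *.go
        Path(name + ".new").write_text(expand(Path(name)))

Like the awk loop above, this writes a sibling file with a .new suffix rather than editing in place.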

Shell script that passes parameter to gnuplot does not export the diagram (epslatex)

I have created a shell script (.run) that accepts the prefix for the names of the pictures as a parameter, and then calls gnuplot. However, for some reason, the picture is not saved. The code is:
#!/bin/sh
molecule=$1
echo "Plotting DFT-ADF PY results for $molecule"
echo "Tranmission plot (negatory SO)"
gnuplot -p << EOF
#!/usr/bin/gnuplot
set terminal epslatex size 5,3 color colortext
set output '$molecule_trans.tex'
plot cos(x) w l title 'cos(x)', sin(x) w l title 'sin(x)'
EOF
For my bachelor thesis I have to make several plots that are all the same. Additionally, the computational cluster uses a queueing system. To stay true to this system, I have created several shell scripts that automate the work. In particular, about 45 simulations are called by the shell scripts, followed by a shell script that enters each simulation's directory and uses Python scripts to evaluate the data into .dat files. Next, it should use a gnuplot file to make the graph. I use epslatex to make my figures, because it is so much nicer. However, in the current implementation this required me to manually edit the various LaTeX files to rename the pictures.
In case you'll need more variables and do not want a $1, $2... mess:
You must use curly braces around the variable name:
set output '${molecule}_trans.tex'
because the underscore is a valid character in variable names, and bash looks for a variable called $molecule_trans; see http://www.gnu.org/software/bash/manual/bashref.html#Shell-Parameter-Expansion.

Method for merging two files, opinion needed

Problem: I have two folders (one is the Delta folder, where the files get updated, and the other is the Original folder, where the original files exist). Every time a file is updated in the Delta folder, I need to merge the file from the Original folder with the updated file from the Delta folder.
Note: the file names in the Delta folder and the Original folder are the same, but the content of the files may be different. For example:
$ cat Delta_Folder/1.properties
account.org.com.email=New-Email
account.value.range=True
$ cat Original_Folder/1.properties
account.org.com.email=Old-Email
account.value.range=False
range.list.type=String
currency.country=Sweden
Now I need to merge Delta_Folder/1.properties with Original_Folder/1.properties, so that my updated Original_Folder/1.properties will be:
account.org.com.email=New-Email
account.value.range=True
range.list.type=String
currency.country=Sweden
The solution I opted for is:
Find all *.properties files in the Delta folder and save the list to a temp file (delta-files.txt).
Find all *.properties files in the Original folder and save the list to a temp file (original-files.txt).
Get the list of file names present in both folders and loop over them.
For each file, loop over every line of the property file (e.g. 1.properties).
Read each line (delta-line="account.org.com.email=New-Email") from the Delta folder's property file and split it on the "=" delimiter into two string variables
(delta-line-string1=account.org.com.email; delta-line-string2=New-Email).
Read each line (orig-line="account.org.com.email=Old-Email") from the Original folder's property file and split it on the "=" delimiter into two string variables
(orig-line-string1=account.org.com.email; orig-line-string2=Old-Email).
If delta-line-string1 == orig-line-string1, then update $orig-line with $delta-line,
i.e.:
if account.org.com.email == account.org.com.email then replace
account.org.com.email=Old-Email in Original_Folder/1.properties with
account.org.com.email=New-Email
Once the loop finishes all lines in a file, it moves on to the next file, and continues until all the matched files have been processed.
For looping I used for loops, for splitting lines I used awk, and for replacing content I used sed.
Overall it works fine, but it takes quite a long time (about 4 minutes) per file, because for every line it goes through three loops, splits the line, finds the key in the other file, and replaces the line.
I'm wondering if there is any way to reduce the loops so that the script executes faster.
With paste and awk:
File 2:
$ cat /tmp/l2
account.org.com.email=Old-Email
account.value.range=False
currency.country=Sweden
range.list.type=String
File 1:
$ cat /tmp/l1
account.org.com.email=New-Email
account.value.range=True
The command and its output:
$ paste /tmp/l2 /tmp/l1 | awk '{print $NF}'
account.org.com.email=New-Email
account.value.range=True
currency.country=Sweden
range.list.type=String
Or with a single awk command, if sorting is not important:
awk -F'=' '{arr[$1]=$2}END{for (x in arr) {print x"="arr[x]}}' /tmp/l2 /tmp/l1
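For comparison, the same last-value-wins merge can be sketched in Python; the script name and paths below are just placeholders, not part of the original setup. Because dicts preserve insertion order, the output keeps the original file's ordering while the delta values override it.

import sys
from pathlib import Path

def merge_properties(original: Path, delta: Path) -> str:
    # read the original first, then the delta, so the delta's values win
    merged = {}
    for path in (original, delta):
        for line in path.read_text().splitlines():
            if "=" in line:
                key, value = line.split("=", 1)
                merged[key] = value
    return "".join(f"{k}={v}\n" for k, v in merged.items())

if __name__ == "__main__":
    # e.g. python merge_props.py Original_Folder/1.properties Delta_Folder/1.properties
    sys.stdout.write(merge_properties(Path(sys.argv[1]), Path(sys.argv[2])))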
I think your two main options are:
Completely reimplement this in a more featureful language, like perl.
While reading the delta file, build up a sed script. For each line of the delta file, you want a sed instruction similar to:
s/account.org.com.email=.*$/account.org.com.email=value_from_delta_file/g
That way you don't loop through the original files a bunch of extra times. Don't forget to escape &, /, and \ as mentioned in this answer.
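A rough sketch of generating such a sed script from a delta file, written in Python here; the script and file names are hypothetical, and the escaping covers the characters called out above (&, /, \) plus regex metacharacters on the match side:

import re
import sys

def esc_pattern(s):
    # escape BRE metacharacters and the / delimiter for the match side
    return re.sub(r'([][\\/.*^$])', r'\\\1', s)

def esc_replacement(s):
    # escape the characters that are special in a sed replacement: \ / &
    return re.sub(r'([\\/&])', r'\\\1', s)

# e.g. python gen_sed.py < Delta_Folder/1.properties > update.sed
for line in sys.stdin:
    line = line.rstrip("\n")
    if "=" not in line:
        continue
    key, value = line.split("=", 1)
    print(f's/^{esc_pattern(key)}=.*$/{esc_replacement(key)}={esc_replacement(value)}/')

The generated script can then be applied once per original file, e.g. sed -f update.sed Original_Folder/1.properties.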
Is using a database at all an option here?
Then you would only have to write code for extracting data from the Delta files (assuming that can't be replaced by a database connection).
It just seems like this is going to keep getting more complicated and slower as time goes on.

How to change the prompt_alternatives_on flag in Prolog from .plrc?

I can change the prompt_alternatives_on flag in the REPL, but how do I change this flag in .plrc? When I put the set_prolog_flag/2 line in .plrc, I get
permission to modify static_procedure `set_prolog_flag/2'
Goal: to not get the "More?" text for all the answers all the time, by changing the flag.
Put :- (a colon and a hyphen) in front of the line to execute it when the file is loaded.
:- set_prolog_flag(key, value).
This is true of any line of code in any source file that you want evaluated when the file is loaded, rather than treated as a new fact or rule (which is what causes the error here, because it attempts to redefine set_prolog_flag/2).
