Putting to a Filename named after a variable - wolfram-mathematica

I am using Put (>>) to store information that I am obtaining in Mathematica. The problem is that I am putting several variables. However, if I do the following, it outputs to a file called year rather than one named after my variable's value.
For example:
year=64;
sortedTally>>year;
This exports to a file named year rather than a file named 64. The documentation notes that expr >> filename is equivalent to expr >> "filename". Is there any way to circumvent this and put to a filename that changes based on a variable? The same behavior is reiterated in the documentation page on Operator Input Forms (at the bottom).

In this case you need to use the functional form, Put[sortedTally, ToString[year]], so that year is evaluated to 64 before being used as the file name.
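For instance, a minimal sketch (the sortedTally value here is invented for illustration):
year = 64;
sortedTally = {{"a", 2}, {"b", 1}};
Put[sortedTally, ToString[year]] (* writes the expression to a file named "64" *)
You can also build a more descriptive name with StringJoin, e.g. Put[sortedTally, "tally_" <> ToString[year] <> ".m"].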

Related

How to implement file-substitution macro in bash?

I have a set of text files and a set of Go files. The Go files contain directives such as the following:
//go:embed hello.txt
var s string
I want to write a bash script which takes the above code and substitutes the following in its place:
var s string = "<contents of hello.txt>"
Specifically, I want the bash script to go through all Go source files and replace each go:embed/string declaration pair with a string defined to be the contents of the file named in the embed directive.
I'm wondering if there is an existing program which can be configured to do the above. Otherwise, I'm planning on writing the algorithm myself.
Further explanation:
I am trying to replicate Go's embed directive (https://tip.golang.org/pkg/embed/).
We are not yet on Go 1.16, so we cannot use this functionality, but we are replicating it as closely as possible so that moving over to the standard implementation is as painless as possible.
Below is an attempt at solving your problem:
for i in file1 file2; do
  awk '
    /^\/\/go:embed / { f = $2; next }        # remember the file name, drop the directive line
    /^var/ && f      { printf "%s = \"", $0  # re-print the var line and open the quote
                       system("cat " f)      # splice in the embedded file
                       print "\""            # close the quote
                       f = 0; next }
    1                                        # print every other line unchanged
  ' < "$i" > "$i.new"
done
The awk script prints all normal lines; only when it encounters the embed directive is that line skipped (and the file name remembered in the variable f). A subsequent line starting with var is then extended with the content of the file with the remembered name (spliced in through awk's system() call to cat).
Beware: there are no error checks at all, and no attempt to escape quotes or anything else. So for practical use - unless the file contents you are about to embed are known to be well-behaved - you will probably have to take a more sophisticated approach.
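As a hedged sketch of what such an approach could look like, the fragment below escapes backslashes and double quotes and folds newlines into literal \n sequences before building the Go string literal (the escaping is illustrative, not exhaustive; hello.txt is the file from the question):
escaped=$(sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' hello.txt | awk '{printf "%s\\n", $0}')
printf 'var s string = "%s"\n' "$escaped"
Note that the order matters: backslashes must be escaped before the quotes, otherwise the backslashes added by the quote substitution would themselves get doubled.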

How to obtain variable values from a text file for use by a bash script?

I have a bash script, and I'd like it to read in the values of variables from a text file.
I'm thinking that in the text file where the values are stored, I'd use a different line for each variable / value pair and use the equals sign.
VARIABLE1NAME=VARIABLE1VALUE
VARIABLE2NAME=VARIABLE2VALUE
I'd then like my bash script to assign the value VARIABLE1VALUE to the variable VARIABLE1NAME, and the same for VARIABLE2VALUE / VARIABLE2NAME.
Since the syntax of the file is the syntax you would use in the script itself, the source command should do the trick:
source text-file-with-assignments.txt
Alternatively, you can use . instead of source, but in a case like this, using the full name is clearer.
The documentation can be found in the GNU Bash Reference Manual.
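A minimal end-to-end sketch, reusing the file layout from the question (the script name is illustrative):
$ cat text-file-with-assignments.txt
VARIABLE1NAME=VARIABLE1VALUE
VARIABLE2NAME=VARIABLE2VALUE
$ cat myscript.sh
#!/bin/bash
source ./text-file-with-assignments.txt
echo "$VARIABLE1NAME"
$ ./myscript.sh
VARIABLE1VALUE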

Replace literal at HTML using Unix shell script

I have a series of scripts declared in an HTML with the following format:
xxx.jfhdskfjhdskjfhdskjfjioe3874.bundle.js
The part between the periods is a dynamic hash, but it will always be an alphanumeric string in the same position. My problem is that I need to dynamically replace that hash with the one from the newly generated files, which are in the same directory as the HTML itself. Is there a clean way to do it in Unix with a script?
You need to be more specific: do you want to generate new hashes for all scripts in this directory, or do you just need a tool to change them one at a time? Where do you get the new hashes from? Below I have attached a simple script that changes the part between the first and second period. The script should be called with the old name as the first argument and the new hash as the second. It could be compressed into just one line, but I used variables for clarity.
#! /bin/sh
OLDNAME=$1
NEWHASH=$2
NEWNAME=$(printf "%s" "$OLDNAME" | sed "s/^\([^.]*\)\.[^.]*\.\(.*\)/\1.$NEWHASH.\2/")
echo "$NEWNAME"
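For example, assuming the script is saved as newhash.sh (the name is hypothetical):
$ sh newhash.sh xxx.jfhdskfjhdskjfhdskjfjioe3874.bundle.js abc123
xxx.abc123.bundle.js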

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just read that file from every script. The issue is that I am using awk to grab the paths at the beginning of each file, and it's turning out to be a lot of repetition.
My question is: is there a way to store key-value pairs in a file so that the shell can natively take a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map kept in one specific script would result in parameters being passed everywhere. Is there some little quirk I do not know of that performs a map lookup on an external file?
You can make a file of env var assignments and source that file as needed, i.e.
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming a bash shell), i.e.
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d $path2 ]] ; then
  cd "$path2"
else
  echo "no path2=$path2 found, can't continue" 1>&2
exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would do:
declare -A map
while read -r key value; do   # -r keeps backslashes in the input literal
    map[$key]=$value
done < filename
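Once loaded, a lookup is just an array subscript, e.g. (assuming filename contains a path2 key like the env file above):
echo "${map[path2]}"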
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."

How to rename files keeping a variable part of the original file name

I'm trying to make a script that will go into a directory and run my own application with each file matching a regular expression, specifically Test[0-9]*.txt.
My input filenames look like TestXX.txt. Now, I could just use cut and chop off the Test and .txt, but how would I do this if XX weren't predefined to be two digits? What would I do if I had Test1.txt, ..., Test10.txt? In other words, how would I get the [0-9]* part?
Just so you know, I want to be able to make an OutputXX.txt :)
EDIT:
I have files with filenames matching Test[0-9]*.txt and I want to turn each name into the corresponding Output[0-9]*.txt.
Would something like this help?
#!/bin/bash
for f in Test*.txt; do
    process < "$f" > "${f/Test/Output}"
done
Bash Shell Parameter Expansion
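The ${f/Test/Output} substitution above is plain parameter expansion, e.g.:
f=Test10.txt; echo "${f/Test/Output}"   # prints Output10.txt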
A good tutorial on regexes in bash is here. Summarizing, you need something like:
if [[ $filenamein =~ ^Test([0-9]*)\.txt$ ]]; then
    filenameout="Output${BASH_REMATCH[1]}.txt"
and so on. The key is that, when you perform the =~ regex match, the "sub-matches" for parentheses-enclosed groups in the RE are set in the entries of the array BASH_REMATCH (the [0] entry is the whole match, [1] the first parentheses-enclosed group, etc.). Note that the regex must be unquoted and the assignment must have no spaces around the =.
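A quick illustration at the prompt (the file name is made up):
$ [[ Test10.txt =~ ^Test([0-9]*)\.txt$ ]] && echo "${BASH_REMATCH[0]} -> ${BASH_REMATCH[1]}"
Test10.txt -> 10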
You need to use round brackets (parentheses) around the part you want to keep,
i.e. "Test([0-9]*).txt"
The syntax for replacing these bracketed groups varies between programs, but you'll probably find you can use \1, with something like this (in sed's basic regex syntax the brackets themselves are escaped):
s/Test\([0-9]*\)\.txt/Output\1.txt/
If you're using a unix shell, then 'sed' might be your best bet for performing the transformation.
http://www.grymoire.com/Unix/Sed.html#uh-4
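For example, applied to a single name:
$ echo Test10.txt | sed 's/Test\([0-9]*\)\.txt/Output\1.txt/'
Output10.txt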
Hope that helps
for file in Test[0-9]*.txt; do
    num=${file//[^0-9]/}   # strip everything that is not a digit
    process "$file" > "Output${num}.txt"
done
