How to implement a file-substitution macro in bash?

I have a set of text files and a set of GoLang files. The GoLang files contain directives such as the following:
//go:embed hello.txt
var s string
I want to write a bash script which takes the above code and substitutes the following in its place:
var s string = "<contents of hello.txt>"
Specifically, I want the bash script to go through all GoLang source files and replace every go:embed/string-declaration pair with a string defined to be the contents of the file named in the embed directive.
I'm wondering if there is an existing program which can be configured to do the above. Otherwise, I'm planning on writing the algorithm myself.
Further explanation:
I am trying to replicate GoLang's embed directive (https://tip.golang.org/pkg/embed/).
We are not yet on GoLang 1.16, so we cannot use this functionality, but we are replicating it as closely as possible so that moving over to the standard implementation is as painless as possible.

Below is an attempt at solving your problem:
for i in file1 file2; do
awk '/^\/\/go:embed /{f=$2;next}/^var/&&f{printf"%s = \"",$0;system("cat "f);print"\"";f=0;next}1' < "$i" > "$i.new"
done
The awk script prints all normal lines unchanged. When it encounters the embed directive, that line is skipped and the file name is remembered in the variable f. A subsequent line starting with var is then extended with the contents of the remembered file (emitted via awk's system() function running cat).
Beware: there are no error checks at all and no attempt to escape quotes or the like (note also that if the embedded file ends with a newline, cat will emit it and the closing quote will land on the next line). So for practical use, unless the file contents you are about to embed are known to be well-behaved, you will probably need a more sophisticated approach.
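If the embedded files may contain quotes, backslashes, or newlines, the substitution has to produce a valid Go string literal. Below is a minimal sketch of a more defensive variant in plain bash (not the awk one-liner above); it assumes one directive per var declaration, file names without spaces, and files small enough to hold in a shell variable:
for i in file1 file2; do
  embed=""
  while IFS= read -r line; do
    case $line in
      "//go:embed "*)
        embed=${line#"//go:embed "}    # remember the file name
        continue
        ;;
      var*)
        if [[ -n $embed ]]; then
          body=$(<"$embed")            # note: drops trailing newlines
          body=${body//\\/\\\\}        # escape backslashes first
          body=${body//\"/\\\"}        # then double quotes
          body=${body//$'\n'/\\n}      # then embedded newlines
          printf '%s = "%s"\n' "$line" "$body"
          embed=""
          continue
        fi
        ;;
    esac
    printf '%s\n' "$line"
  done < "$i" > "$i.new"
done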

Related

Replace a literal in HTML using a Unix shell script

I have a series of scripts declared in an HTML with the following format:
xxx.jfhdskfjhdskjfhdskjfjioe3874.bundle.js
The part between the periods is a dynamic hash, but it is always an alphanumeric string in the same position. My problem is that I need to dynamically update that hash to match the newly generated files, which are in the same directory as the HTML itself. Is there a clean way to do it in Unix with a script?
You need to be more specific: do you want to generate new hashes for all scripts in the directory, or just a tool to change them one at a time? And where do the new hashes come from? Below is a simple script that changes the part between the first and second period. It should be called with the old name as the first argument and the new hash as the second. It could be compressed to one line, but I used variables for clarity.
#! /bin/sh
OLDNAME=$1
NEWHASH=$2
# Swap whatever sits between the first and second period for the new hash.
NEWNAME=$(printf "%s" "$OLDNAME" | sed "s/^\([^.]*\)\.[^.]*\.\(.*\)/\1.$NEWHASH.\2/")
echo "$NEWNAME"
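For example, saved as a hypothetical rehash.sh and called with the old name and the new hash:
$ ./rehash.sh xxx.jfhdskfjhdskjfhdskjfjioe3874.bundle.js a1b2c3d4e5
xxx.a1b2c3d4e5.bundle.js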

Append function output to top of created file

I have a script running on Linux which is simply multiple function calls whose output goes into a file. One function produces an overview of important information that I would like to place at the top of the file for easy viewing.
The problem is that I cannot simply call this overview function first, because it depends on the previous functions.
Is there an easier way to do this without creating a temp file? This is a fairly large file and that would take pretty long.
If you're using something like a Perl script, it should be possible to first leave some space at the top for the overview, then write all your data.
After this, reopen the file in read/write update mode ("+<" in Perl, the equivalent of r+; "w+" would truncate the file), move the filehandle to the desired position near the start of the file using seek() or sysseek(), then write the overview data there.
Help on Perl functions can be obtained here: http://perldoc.perl.org/perlfunc.html
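The same reserve-then-overwrite idea also works from the shell. Here is a minimal sketch, assuming the overview fits in a fixed-width placeholder line; generate_body and make_overview are hypothetical stand-ins for your functions:
#!/bin/bash
out=report.txt

# 1. Reserve a fixed-width first line, then write the bulk of the data.
printf '%-200s\n' '' > "$out"
generate_body >> "$out"          # hypothetical: produces the main output

# 2. Once the overview is known, overwrite the placeholder in place.
#    dd with conv=notrunc writes at the start without truncating the file.
overview=$(make_overview)        # hypothetical: must fit in 200 characters
printf '%-200s' "$overview" | dd of="$out" conv=notrunc 2>/dev/null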
First of all, perhaps you meant prepend, not append.
On Linux, suppose file2 holds the top content and file1 holds the bottom content, and you want them combined in that order. You could use:
$ cat file2 file1 > finalfile; rm file[12]
@SandeepY has one reasonable solution.
Any time you modify a file in Unix, the system is opening your original file and a new file, so there's (almost always) a temporary file involved, whether you can see it or not.
That being said, another solution, since you specified that a function provides the output, is to use a command group to collect your output into one stream and redirect that into your file.
mv mainFile mainFile.tmp
{
    myFunc              # the overview output comes first
    cat mainFile.tmp    # followed by the original file contents
} > mainFile && /bin/rm mainFile.tmp
As you seem to need this regularly, it should be easy to turn this into a function, replacing mainFile with "$1".
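A minimal sketch of such a function (the name prepend_output is made up):
# Prepend the output of a command to a file.
prepend_output () {
    local target=$1; shift
    mv "$target" "$target.tmp" &&
    { "$@"; cat "$target.tmp"; } > "$target" &&
    rm "$target.tmp"
}
# Usage: prepend_output mainFile myFunc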
IHTH

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just call that for every file. The issue is I am using awk to grab the files at the beginning of each file, and it's turning out to be a lot of repetition.
My question: is there a way to store key-value pairs in a file so that the shell can natively take a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map kept in one specific script would mean passing parameters everywhere. Is there some little-known trick that performs a map lookup against an external file?
You can make a file of env var assignments and source that file as needed, i.e.:
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose form of the same feature, source myEnvFile (assuming the bash shell), i.e.:
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to the defined vars
if [[ -d $path2 ]] ; then
    cd "$path2"
else
    echo "no path2=$path2 found, can't continue" 1>&2
    exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would do:
declare -A map
while read -r key value; do
    map[$key]=$value
done < filename
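Assuming filename contains a line like path1 /x/y/z, a lookup afterwards is just:
echo "${map[path1]}"    # prints /x/y/z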
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."

Concatenating strings fails when read from certain files

I have a web application that is deployed to a server. I am trying to create a script that, among other things, reads the current version of the web application from a properties file that is deployed along with the application.
The file looks like this:
//other content
version=[version number]
build=[buildnumber]
//other content
I want to create a variable that looks like this: version-buildnumber
Here is my script for it:
VERSION_FILE=myfile
VERSION_LINE="$(grep "version=" $VERSION_FILE)"
VERSION=${VERSION_LINE#$"version="}
BUILDNUMBER_LINE=$(grep "build=" $VERSION_FILE)
BUILDNUMBER=${BUILDNUMBER_LINE#$"build="}
THEVERSION=${VERSION}-${BUILDNUMBER}
The strange thing is that this works in some cases but not in others.
The problem I get is when I am trying to concatenate the strings (i.e. the last line above). In some cases it works perfectly, but in others characters from one string replace the characters from the other instead of being placed afterwards.
It does not work in these cases:
When I read from the deployed file
If I copy the deployed file to another location and read from there
It does work in these cases:
If I write a file from scratch and read from that one.
If I create my own file and then copy the content from the deployed file into my created file.
I find this very strange. Has anyone seen this before?
It is likely that your files have carriage returns in them (i.e., DOS/Windows line endings). You can fix that by running dos2unix on the file.
You may also be able to strip them on the fly from the strings you're retrieving.
Here are a couple of ways:
Do it with sed instead of grep:
VERSION_LINE="$(sed -n "/version=/{s///;s/\r//g;p}" $VERSION_FILE)"
and you won't need the Bash parameter expansion to strip the "version=".
OR
Do the grep as you have it now and do a second parameter expansion to strip the carriage return.
VERSION=${VERSION_LINE#$"version="}
VERSION=${VERSION//$'\r'}
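The overwriting effect described in the question is the carriage return at work: when the concatenated string is echoed, the stray \r sends the terminal cursor back to column 1, and whatever follows is printed over the start of the line. A quick demonstration, assuming a bash shell:
VERSION=$'1.2.3\r'
echo "${VERSION}-42"    # the terminal displays "-42.3", not "1.2.3-42"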
By the way, I recommend habitually using lowercase or mixed case variable names in order to reduce the chance of name collisions.
Given this foo.txt:
//other content
version=[version number]
build=[buildnumber]
//other content
you can extract a version-build string more easily with awk:
awk -F'=' '$1 == "version" { version = $2}; $1 == "build" { build = $2}; END { print version"-"build}' foo.txt
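If the properties file has Windows line endings (as the first answer suggests is the culprit here), the carriage return would end up inside the extracted fields; a small tweak strips it first:
awk -F'=' '{ sub(/\r$/, "") } $1 == "version" { version = $2 } $1 == "build" { build = $2 } END { print version "-" build }' foo.txt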
I don't know why your script doesn't work. Can you provide an example of erroneous output?
From this sentence:
In some cases it works perfectly, but in others characters from one string replace the characters from the other instead of being placed afterwards.
I can't understand what's actually going on (I'm not a native English speaker so it's probably my fault).
Cheers,
Giacomo

Putting to a Filename named after a variable

I am using Put (>>) to store information that I am obtaining in Mathematica. The problem is that I am putting several variables. However, if I do the following, it outputs to a file literally called year rather than to a file named after the variable's value.
For example:
year=64;
sortedTally>>year;
This exports to a file named year rather than a file named 64. The documentation notes that expr >> filename is equivalent to expr >> "filename". Is there any way to circumvent this and put to a filename that changes based on the variables? This is similarly reiterated in the documentation file on Operator Input Forms (at the bottom).
In this case you need to use Put[sortedTally, ToString[year]]. The function form evaluates its second argument, so year becomes 64 and ToString turns it into the file name "64", whereas the >> operator always treats its right-hand side as a literal file name.
