I have the following bit of code in my makefile:
SRC_DIRS = . folder
MAIN_CXX_FILES=${foreach d,${SRC_DIRS},${wildcard ${d}/${strip ${EXE_PREFIX}}*.cpp}}
COMMON_CXX_FILES=${filter-out ${MAIN_CXX_FILES},${foreach d,${SRC_DIRS},${wildcard ${d}/*.cpp}}}
Here my two variables MAIN_CXX_FILES and COMMON_CXX_FILES do exactly what I want them to do (grab all the .cpp files from the folders listed in the SRC_DIRS variable), but the path to a file in 'folder' (in the COMMON_CXX_FILES variable) looks like folder/file.cpp, whereas I would like it to look like folder\file.cpp.
I have tried the following, but it doesn't work:
COMMON_CXX_FILES=${foreach d,${COMMON_CXX_FILES},${subst /,\,${d}}}
Try this (it replaces the space separators with commas):
LST:=file1 file2 file3
WITH_SEP:=$(shell echo $(LST) | sed 's/ /,/g')
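Returning to the original slash-to-backslash question: $(subst) can do the replacement, but only if the variable is reassigned with := — with a plain =, the self-reference in the attempted line makes GNU make abort with a recursion error. A minimal sketch, with file names invented for the demo:

```shell
# With `=`, COMMON_CXX_FILES would reference itself recursively and GNU make
# would abort. Reassigning with `:=` expands the right-hand side once, so
# $(subst /,\,...) can rewrite the paths.
cat > /tmp/sep.mk <<'EOF'
COMMON_CXX_FILES = folder/file.cpp folder/other.cpp
COMMON_CXX_FILES := $(subst /,\,$(COMMON_CXX_FILES))
$(info $(COMMON_CXX_FILES))
all: ;
EOF
make -f /tmp/sep.mk
```

The first line of output is `folder\file.cpp folder\other.cpp`.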
Related
I'm building a script that will search for all the files in a certain directory.
Then I put the files that it found in the file files.txt, like below:
ls /opt/files/ | while read line
do
files=$(echo "$line" | grep -o '[^ ]*$')
echo "$files" >> files.txt
done
Now I want to put each found file name into another file, config.properties, at a specific position.
Below you can see what the config.properties file looks like.
rest.server.url=https\://hostname/RestService/
rest.username=user
rest.password=pass
rest.response.type=json
hotfix.async=false
hotfix.operation=hotfix
hotfix.dlFilePath=/opt/files/<file>
So in place of <file> I want to insert the filename that has been found.
I came up with the following code below:
cat files.txt | while read files
do
#How can I code the part below?
insert $files into config.properties at line hotfix.dlFilePath=/opt/files/<file>
done
Only, how can I insert $files into the config.properties file at the position of <file>?
I have a feeling it can be done with awk or sed, but not sure.
Perhaps try and integrate this solution using sed:
sed -i -E "s/(^hotfix\.dlFilePath\=\/opt\/files\/)(.*$)?/\1$files/" config.properties
Please note the use of double quotes around the substitution expression in sed to make the shell expand variables. So, with a variable like this
files="foo"
and given this input (assuming that the <file> was just a placeholder):
rest.server.url=https\://hostname/RestService/
rest.username=user
rest.password=pass
rest.response.type=json
hotfix.async=false
hotfix.operation=hotfix
hotfix.dlFilePath=/opt/files/
you'll get this result
rest.server.url=https\://hostname/RestService/
rest.username=user
rest.password=pass
rest.response.type=json
hotfix.async=false
hotfix.operation=hotfix
hotfix.dlFilePath=/opt/files/foo
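To see the substitution in action without touching a real file, an equivalent expression can be run over a pipe (here using | as the sed delimiter to avoid escaping the slashes; the behavior is the same):

```shell
# Feed the sample line through sed; the captured prefix is kept and the
# variable's value is appended after it.
files="foo"
printf 'hotfix.dlFilePath=/opt/files/\n' |
  sed -E "s|^(hotfix\.dlFilePath=/opt/files/).*|\1$files|"
```

This prints `hotfix.dlFilePath=/opt/files/foo`.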
Are you saying that you want to generate the file config.properties so that it has one stanza for each file that you find? Just do:
for file in /opt/files/*; do
cat << EOF
rest.server.url=https\://hostname/RestService/
rest.username=user
rest.password=pass
rest.response.type=json
hotfix.async=false
hotfix.operation=hotfix
hotfix.dlFilePath=$file
EOF
done > config.properties
input
Say I have three files in subdirectory /directory/: a.txt, b.txt, and c.txt.
a.txt
var1=bla
var2=blabla
var3=blablabla
b.txt
foo1=bar
foo2=barbar
foo3=barbarbar
c.txt
name1=string
name2=string2
name3=string3
I have created a 4th file, variables.txt
variables.txt
var2=new
foo1=newnew
foo3=newnewnew
desired output
I would like a bash script that uses the content of variables.txt to edit all files in /directory/, replacing in each file any lines whose variables match entries in variables.txt. For example, a.txt, b.txt, and c.txt become the following:
a.txt
var1=bla
var2=new
var3=blablabla
b.txt
foo1=newnew
foo2=barbar
foo3=newnewnew
c.txt
name1=string
name2=string2
name3=string3
c.txt does not change, as nothing in variables.txt matches its variables.
What would this script look like?
context
In reality, I have about 20 files with 5-20 variables each, and I need to change maybe 3-4 variables in each file. Other users will also use variables.txt, defining it to suit their purposes, so the cleaner I can write variables.txt, the better. Fortunately, each variable has a unique name and only appears once in a file from this subdirectory. Thank you!
Using gawk and the inplace extension:
gawk --include inplace 'BEGIN{FS=OFS="="}
NR==FNR{a[$1]=$2;next}
($1 in a){$2=a[$1]}
1' variables.txt /directory/[abc].txt
This script first fills the array a with the content of variables.txt.
For each of the other files, it replaces the value whenever the parameter is present in the array. Setting OFS="=" is needed so that modified lines are printed back with "=" rather than the default space separator.
The --include inplace option (the long form of -i inplace) lets gawk modify the files directly, much like sed's -i option.
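If gawk's inplace extension is not available, the same logic can be sketched with a temporary file per input (the /tmp paths below are only for the demo; the file contents follow the question):

```shell
# Build the sample files from the question.
mkdir -p /tmp/directory
printf 'var1=bla\nvar2=blabla\nvar3=blablabla\n' > /tmp/directory/a.txt
printf 'var2=new\nfoo1=newnew\nfoo3=newnewnew\n' > /tmp/variables.txt

# Same awk program, but writing to a temp file and moving it back,
# which works with any POSIX awk.
for f in /tmp/directory/*.txt; do
  awk 'BEGIN{FS=OFS="="}
       NR==FNR{a[$1]=$2;next}
       ($1 in a){$2=a[$1]}
       1' /tmp/variables.txt "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
cat /tmp/directory/a.txt
```

After the run, a.txt contains var1=bla, var2=new, var3=blablabla.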
I would like to rename files that have multiple extensions (only the csv files) so that the .csv extension moves to the end.
Example input files in the directory:
zebra.txt
sounds.pdf
input.csv
input.csv.aa
input.csv.ab
...
input.csv.zz
123.csv
123.csv.aa
...
123.csv.zz
xxx.csv
yyy.csv
All the .csv files are in the same format. I would like the output to be *.csv files with no further extensions.
I would like to rename the files so that the last part of the extension is swapped in as below:
input.csv.aa to input_aa.csv
input.csv.ab to input_ab.csv
...
input.csv.zz to input_zz.csv
xxx.csv - will remain as is
yyy.csv - will remain as is
or
If we can combine to one file based on name, that is fine too
input.csv (combined from input.csv.aa, input.csv.ab, ..., input.csv.zz)
123.csv (combined from 123.csv.aa, ..., 123.csv.zz)
xxx.csv - will remain as is
yyy.csv - will remain as is
Try something like this for your first option:
find . -name "*.csv*" | \
sed -e 's/\(\(.*\)\.csv\(.*\)\)/\1|\2\3.csv/' | \
tr '|' '\n' | \
xargs -n 2 mv
This does:
Finds all the files with a .csv extension somewhere in the name, putting one filename per line
Changes the line to output the original name, then a pipe character (|), then a new filename with any additional extensions after .csv moved to come before the .csv (e.g. input.csv.aa becomes input.csv.aa|input.aa.csv)
Replace pipe with a newline
For every two lines, pass as arguments to mv to rename the files
To rename *.csv files:
find . -regex ".*\.csv\.[a-z]*$" -exec rename 's/(\.csv)\.([a-z]+)$/_$2$1/' {} \;
-regex pattern - file name matches regular expression pattern
rename perlexp - renames the filenames according to the Perl expression perlexp
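If the Perl rename utility is not available (on some distributions, rename is the incompatible util-linux version), the same renaming can be sketched with plain shell parameter expansion; the demo files below mirror the question:

```shell
mkdir -p /tmp/renamedemo && cd /tmp/renamedemo
touch input.csv input.csv.aa input.csv.ab xxx.csv

# Only names of the form *.csv.XX match the glob; plain *.csv files
# such as input.csv and xxx.csv are left untouched.
for f in *.csv.??; do
  [ -e "$f" ] || continue        # guard: the glob may match nothing
  stem=${f%.csv.*}               # "input"
  ext=${f##*.}                   # "aa", "ab", ...
  mv -- "$f" "${stem}_${ext}.csv"
done
ls
```

This leaves input.csv, input_aa.csv, input_ab.csv, and xxx.csv in the directory.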
To combine a separate group of files (let's say input.csv.aa,input.csv.ab, ..,input.csv.zz) into one file - use the following cat approach:
cat input.csv.* > input.csv
I've found many similar examples but cannot find an example to do the following. I have a query file with file names (file1, file2, file3, etc.) and would like to find these files in a directory tree; these files may appear more than once in the dir tree, so I'm looking for the full path. This option works well:
find path/to/files/*/* -type f | grep -E "file1|file2|file3|fileN"
What I would like is to pass grep a file with filenames, e.g. with the -f option, but am not successful. Many thanks for your insight.
This is what the query file looks like - one column of filenames, separated by newlines:
103128_seqs.fna
7010_seqs.fna
7049_seqs.fna
7059_seqs.fna
7077A_seqs.fna
7079_seqs.fna
grep -f FILE gets the patterns to match from FILE, one per line:
cat files_to_find.txt
n100079_seqs.fna
103128_seqs.fna
7010_seqs.fna
7049_seqs.fna
7059_seqs.fna
7077A_seqs.fna
7079_seqs.fna
Remove any whitespace (or do it manually):
perl -i -nle 'tr/ //d; print if length' files_to_find.txt
Create some files to test:
touch `cat files_to_find.txt`
Use it:
find ~/* -type f | grep -f files_to_find.txt
output:
/home/user/tmp/7010_seqs.fna
/home/user/tmp/103128_seqs.fna
/home/user/tmp/7049_seqs.fna
/home/user/tmp/7059_seqs.fna
/home/user/tmp/7077A_seqs.fna
/home/user/tmp/7079_seqs.fna
/home/user/tmp/n100079_seqs.fna
Is this what you want?
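One refinement worth considering: the query entries are literal file names, not regexes, so grep's -F flag (fixed strings) stops the dots from matching arbitrary characters. A small sketch (the /tmp paths are only for the demo):

```shell
mkdir -p /tmp/seqdemo && cd /tmp/seqdemo
printf '7010_seqs.fna\n' > files_to_find.txt
# The second file should NOT match: without -F, the "." in the pattern
# would also match the "X".
touch 7010_seqs.fna 7010_seqsXfna
find . -type f | grep -F -f files_to_find.txt
```

With -F this prints only ./7010_seqs.fna; without it, ./7010_seqsXfna would match as well.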
I need to write a script that reads all the file names from a directory and then, depending on the name (for example, whether it contains R1 or R2), concatenates all the files that contain, say, R1 in the name.
Can anyone give me some tip how to do this?
The only thing I was able to do is:
#!/bin/bash
FILES="path to the files"
for f in $FILES
do
cat $f
done
and this only shows me the value of the variable FILES (which is a directory), not the files it contains.
To make the smallest change that fixes the problem:
dir="path to the files"
for f in "$dir"/*; do
cat "$f"
done
To accomplish what you describe as your desired end goal:
shopt -s nullglob
dir="path to the files"
substrings=( R1 R2 )
for substring in "${substrings[@]}"; do
cat /dev/null "$dir"/*"$substring"* >"${substring}.out"
done
Note that cat can take multiple files in one invocation -- in fact, if you aren't doing that, you usually don't need to use cat at all.
Simple hack:
ls -al *R1* | awk '{print $9}' > outputfilenameR1
ls -al *R2* | awk '{print $9}' > outputfilenameR2
Your expectation that
for f in $FILES
would loop over all the file names in the directory named by FILES was disappointed because the value of FILES is a single word, so it is the only item the for loop processes.
To turn a value that points to a directory into a list of files, you must supply a glob pattern. Appending /* to the directory string stored in FILES makes the shell, on seeing $FILES, expand the value to all matching entries. Note that the * after the / matches every entry in the directory, so the resulting list may contain sub-directories as well as files, if there are any.
In other words if you change the assignment to:
FILES="path to the files/*"
the script will then behave like you have expected it.
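A quick way to convince yourself (the /tmp path is just for the demo):

```shell
mkdir -p /tmp/globdemo
touch /tmp/globdemo/a.txt /tmp/globdemo/b.txt
FILES="/tmp/globdemo/*"
for f in $FILES; do    # unquoted on purpose: the glob must expand
  echo "$f"
done
```

This prints /tmp/globdemo/a.txt and /tmp/globdemo/b.txt, one per line.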