Bash script rewrite in PowerShell

I have a bash script that puts names, dates, and places into a sample file. The code looks like this:
#/bin/bash
if [ $# -ne 2 ]; then
echo You should give two parameters!
exit
fi
while read line
do
name=`echo $line | awk '{print $1}'`
date=`echo $line | awk '{print $2}'`
place=`echo $line | awk '{print $3}'`
echo `cat $1 | grep "<NAME>"| sed -e's/<NAME>/'$name'/g'`
echo `cat $1 | grep "<DATE>" | sed -e's/<DATE>/'$date'/g'`
echo `cat $1 | grep "<PLACE>" | sed -e's/<PLACE>/' $place'/g'`
echo
done < $2
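For reference, a corrected and slightly safer version of the bash script might look like this. This is a sketch wrapped in a function; it fixes the `#/bin/bash` shebang and the stray space before `$place` on the `<PLACE>` line, and quotes the file arguments:

```shell
#!/bin/bash
# Corrected sketch of the script above, wrapped in a function so it is easy
# to test: $1 = template file containing <NAME>/<DATE>/<PLACE> placeholders,
# $2 = data file of whitespace-separated "name date place" lines.
fill_template() {
    if [ $# -ne 2 ]; then
        echo "You should give two parameters!" >&2
        return 1
    fi
    while read -r name date place _; do
        grep '<NAME>'  "$1" | sed "s/<NAME>/$name/g"
        grep '<DATE>'  "$1" | sed "s/<DATE>/$date/g"
        grep '<PLACE>' "$1" | sed "s/<PLACE>/$place/g"
        echo
    done < "$2"
}
```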
I want to rewrite it in PowerShell. This is my attempt:
if($args.count -ne 2)
{
write-host You should give two parameters!
return
}
$input = Get-Content $args[1]
$samplpe = Get-Content $args[0]
foreach($line in $input)
{
name= $line | %{"$($_.Split('\t')[1])"}
date= $line | %{"$($_.Split('\t')[2])"}
place= $line | %{"$($_.Split('\t')[3])"}
write-host cat $sample | Select-String [-"<NAME>"] | %{$_ -replace "<NAME>", "$name"}`
write-host cat $sample | Select-String [-"<NAME>"] | %{$_ -replace "<DATE>", "$date"}
write-host cat $sample | Select-String [-"<NAME>"] | %{$_ -replace "<PLACE>", "$place"}
}
But it doesn't work, and I don't know why. :S

The obvious problem is an apparent typo here:
$samplpe = Get-Content $args[0]
Beyond that, it's hard to tell without knowing what the data looks like, but the script appears to be much more complicated than it needs to be. I'd do something like this (best guess at what the data looks like):
'<Name>__<Date>__<Place>' | set-content args0.txt
"NameString`tDateString`tPlacestring" | Set-Content args1.txt
$script =
{
if($args.count -ne 2)
{
write-host You should give two parameters!
return
}
$Name,$Date,$Place = (Get-Content $args[1]).Split("`t")
(Get-Content $args[0]) -replace '<Name>',$name -replace '<Date>',$Date -replace '<Place>',$Place | Write-Host
}
&$script args0.txt args1.txt
NameString__DateString__Placestring

param($data,$template)
Import-Csv -LiteralPath $data -Delimiter "`t" -Header Name,Date,Place | %{
(sls "<NAME>" $template).Line -replace "<NAME>",$_.Name
(sls "<DATE>" $template).Line -replace "<DATE>",$_.Date
(sls "<PLACE>" $template).Line -replace "<PLACE>",$_.Place
}

Related

Search for Three Consecutive Lines of Text Inside a File

Problem: I need to search for some text inside a file; the text consists of three consecutive lines. How do I verify that (find out whether) those lines exist in the file?
Expected Return Value: A boolean
Example Input File: text.txt
one
two
three
four
five
Example Pattern to Search For
two
three
four
Simple Answer:
$file = (Get-Content -Raw file.txt) -replace "`r" # removing "`r" if present
$pattern = 'two
three
four' -replace "`r"
$file | Select-String $pattern -Quiet -SimpleMatch
RE-EDIT. Wow, this turns out to be tricky. Typed at the prompt, $pattern contains no "`r", but in a script file saved with CRLF line endings it does. The following should work both as a script and at the prompt.
$file = (get-content -raw file.txt) -replace "`r"
$pattern = 'two
three
four' -replace "`r"
# just showing what they really are
$file -replace "`r",'\r' -replace "`n",'\n'
$pattern -replace "`r",'\r' -replace "`n",'\n'
# 4 ways to do it
$file -match $pattern
$file | select-string $pattern -quiet -simplematch
$file -like "*$pattern*"
$file.contains($pattern)
# output
one\ntwo\nthree\nfour\nfive\n
two\nthree\nfour
True
True
True
True
Hmm, trying a regex way. In single-line mode, a . can also match a "`r" or a "`n".
$file = get-content -raw file.txt
$pattern = '(?s)two.{1,2}three.{1,2}four'
# $pattern = 'two\r?\nthree\r?\nfour'
# $pattern = 'two\r\nthree\r\nfour'
# $pattern = 'two\nthree\nfour'
$file -match $pattern
$file | select-string $pattern -quiet
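For comparison, the same literal multi-line containment check can be sketched in plain bash, mirroring the `-like "*$pattern*"` approach: slurp the file into one string and do a substring test.

```shell
#!/bin/bash
# Bash sketch of the same check: read the whole file into one variable and
# test for the three-line pattern as a literal substring. The sample file
# uses LF endings, playing the role of the -replace "`r" normalization above;
# $(<file) also strips trailing newlines.
tmp=$(mktemp -d)
printf 'one\ntwo\nthree\nfour\nfive\n' > "$tmp/file.txt"
pattern=$'two\nthree\nfour'
content=$(<"$tmp/file.txt")
if [[ $content == *"$pattern"* ]]; then
    found=true
else
    found=false
fi
echo "$found"
```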

Powershell Iterate Through Two Directories and Write Contents to CSV

I have to write the contents of two directories to a CSV where Directory 1 is Column 1 and Directory 2 is Column 2.
I am fairly new to Powershell so please be gentle.
Here is what I have tried:
$source_loc = "Z:\Dir1"
$dest_loc = "Y:\Dir2"
$outfile = "C:\Outfile.csv"
$source = get-ChildItem $source_loc
$dest = get-ChildItem $dest_loc
Remove-Item $outfile -ErrorAction Ignore
Add-Content $outfile "Old URL, New URL"
$max = ( $source | Measure-Object ).Count
For ( $i = 0; $i -lt $max; $i++ )
{
$cur = $source | Select-Object -Index $i | Select Name
$old = "http://thisisawebsite.web/oldurl/" + $cur
$cur = $dest | Select-Object -Index $i | Select Name
$new = "http://thisisawebsite.web/newurl/" + $cur
$line = $old + ',' + $new
Add-Content $outfile $line
}
The problem I am having is that right now the output of Outfile.csv looks like this:
http://thisisawebsite.web/oldurl/#{Name=File1.ext},http://thisisawebsite.web/newurl/#{Name=File1.ext}
http://thisisawebsite.web/oldurl/#{Name=File2.ext},http://thisisawebsite.web/newurl/#{Name=File2.ext}
When I want it to look like this:
http://thisisawebsite.web/oldurl/File1.ext,http://thisisawebsite.web/newurl/File1.ext
http://thisisawebsite.web/oldurl/File2.ext,http://thisisawebsite.web/newurl/File2.ext
I tried converting to arrays and indexing into them, but it seems to do the same thing. Do I need to do a substring on the $cur variable, ignoring the first 7 characters and the last one? I know my code is rubbish, so please try your best not to insult me.
Thanks! ^.^
There are a few simpler ways of doing what you're trying to do, I believe. A quick and dirty way of making this work, however, is expanding the Name property that you're selecting. Try this on the two noted lines:
$cur = $source | Select-Object -Index $i | Select -ExpandProperty Name
$cur = $dest | Select-Object -Index $i | Select -ExpandProperty Name
This selects the value of the Name property instead of an object that wraps it (which is what stringifies as @{Name=...}).
Another useful thing to note for the future is that with double quotes, you don't need to manually concatenate a variable and a string in PowerShell:
$new = "http://thisisawebsite.web/newurl/" + $cur
Is the same as
$new = "http://thisisawebsite.web/newurl/$cur"
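For what it's worth, the same pairing can be sketched in bash with two arrays, under the assumption that both directories hold the same number of files and that alphabetical glob order lines them up (the directory names and files below are illustrative):

```shell
#!/bin/bash
# Hypothetical bash sketch: pair the Nth entry of each directory into one CSV
# row. Assumes equal entry counts and matching sort order in both directories.
tmp=$(mktemp -d)
mkdir "$tmp/dir1" "$tmp/dir2"
touch "$tmp/dir1/File1.ext" "$tmp/dir1/File2.ext"
touch "$tmp/dir2/File1.ext" "$tmp/dir2/File2.ext"
old=( "$tmp/dir1"/* )
new=( "$tmp/dir2"/* )
{
    echo "Old URL,New URL"
    for i in "${!old[@]}"; do
        echo "http://thisisawebsite.web/oldurl/$(basename "${old[$i]}"),http://thisisawebsite.web/newurl/$(basename "${new[$i]}")"
    done
} > "$tmp/Outfile.csv"
cat "$tmp/Outfile.csv"
```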

Reading a file line-by-line / field-by-field (pipe-delimited) in powershell

I'm a beginner in PowerShell and want to convert a shell script to PowerShell.
I have a sample.txt file which contains three lines. I want to read each field of a line into a different variable, and those variables then need to be used in an if condition.
get-content sample.txt
wordone|secondword|thridword|fourthword
wordone1|secondword1|thridword1|fourthword1
wordone2|secondword2|thridword2|fourthword2
I have already shell script:
grep "^[^#]" ./sample.txt > test.txt
for line in `cat test.txt`
do
dir=`echo $line | cut -d'|' -f1`
metadata_name=`echo $line | cut -d'|' -f2`
metadata_file=`echo $line | cut -d'|' -f3`
config_file=`echo $line | cut -d'|' -f4`
if [ "${config_file}" != "" ]
then
cp ${dir}.xml folder1/.
fi
done
Please help me convert this shell script to PowerShell.
Is this what you are looking for?
$textfile = get-content "sample.txt"
$sample = "sample.txt"
for ($i=0;$i -lt $textfile.count;$i++){
$textrow = ($textfile[$i]).split("|")
$dir = $textrow[0]
$metadata_name = $textrow[1]
$metadata_file = $textrow[2]
$config_file = $textrow[3]
If($config_file -ne ""){
New-Item $dir/ConfigurationPlan -type directory -force
$sample | Out-File "$dir/ConfigurationPlan/$($config_file)_sample.xml"
}
}
I'd treat your .\sample.txt file as a csv file and use Import-Csv with -Delimiter '|' and -Header dir,metadata_name,metadata_file,config_file parameters.
Then iterate the lines checking for dir and existence of the file to copy.
$Sample = Import-Csv sample.txt -delimiter '|' -Header dir,metadata_name,metadata_file,config_file
$Sample
"----"
ForEach ($Line in $Sample){
if ($Line.config_file -ne ""){
$File = "$($line.dir).xml"
"config_file is $($Line.config_file) checking $File"
if (Test-Path $File){
Copy-Item -Path $File -Destination "folder1\" -WhatIf
}
}
}
Sample output:
> Q:\Test\2017\07\21\SO_45239319.ps1
dir metadata_name metadata_file config_file
--- ------------- ------------- -----------
wordone secondword thridword fourthword
wordone1 secondword1 thridword1 fourthword1
wordone2 secondword2 thridword2 fourthword2
----
config_file is fourthword checking wordone.xml
config_file is fourthword1 checking wordone1.xml
config_file is fourthword2 checking wordone2.xml
If your output looks OK, remove the -WhatIf parameter of Copy-Item

Find duplicated nested directories

I have a large directory tree where some (but not all) nested directories are duplicated:
data/home/home/
data/banners/banners/
resources/users/documents/documents/
How can I merge only the duplicated directories, with these actions:
copy (without replace) data/home/home/ contents to data/home/
delete data/home/home
My current code:
#/bin/bash
for folder in $(find httpdocs -type d); do
n=$(echo $folder | tr "/" "\n" | wc -l)
nuniq=$(echo $folder | tr "/" "\n" | sort | uniq | wc -l)
[ $n -eq $nuniq ] || echo "Duplicated folder $folder"
done
But I have a problem: data/home/es/home is a valid folder, yet it is detected as duplicated.
Thanks.
You can use the uniq command as below:
#!/bin/bash
for folder in $(find httpdocs -type d); do
nuniq=$(echo $folder | tr "/" "\n" | uniq -d | wc -l)
if [ "$nuniq" -gt "0" ]
then
echo "Duplicated folder $folder"
fi
done
From man uniq:
-d, --repeated
only print duplicate lines
You can try the following script to copy and delete the folders. I cannot test this, so take a backup of your httpdocs folder before running it.
#!/bin/bash
for folder in $(find httpdocs -type d); do
nuniq=$(echo $folder | tr "/" "\n" | uniq -d | wc -l)
if [ "$nuniq" -gt "0" ]
then
dest=$(echo $folder | tr '/' '\n' | awk '!a[$0]++' | tr '\n' '/')
mv -i $folder/* $dest
rmdir $folder
fi
done
For example;
user@host $ echo "data/home/es/home" | tr "/" "\n"
data
home
es
home
user@host $ echo "data/home/es/home" | tr "/" "\n" | uniq -d | wc -l
0
user@host $ echo "data/home/home" | tr "/" "\n"
data
home
home
user@host $ echo "data/home/home" | tr "/" "\n" | uniq -d
home
user@host $ echo "data/home/home" | tr "/" "\n" | uniq -d | wc -l
1
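An alternative check, sketched below: for this merge use case the duplicate is always at the tail of the path, so a directory is a candidate exactly when its own name equals its parent's name. This catches data/home/home but not data/home/es/home; unlike uniq -d it ignores duplicates higher up the path.

```shell
#!/bin/bash
# A directory is a tail "nested duplicate" when basename(path) equals
# basename(dirname(path)). Demonstrated on sample paths rather than a
# real `find` run.
is_nested_dup() {
    local p=${1%/}                                  # drop any trailing slash
    [ "$(basename "$p")" = "$(basename "$(dirname "$p")")" ]
}
is_nested_dup "data/home/home"    && echo "dup: data/home/home"
is_nested_dup "data/home/es/home" || echo "ok:  data/home/es/home"
```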

repeat function with different arguments if one variable within the function is empty

The function has to check whether TARGET is empty. If it is empty, I want the function repeated, but this time with c1TIME=$hours:$minutes:[0-9][0-9];
then run it again, and if TARGET is still empty, with c1TIME=$hours:$c1minutes[0-9]:[0-9][0-9].
After that it should stop and echo some info.
$1 is a standard log file with errors.
$TIME is a string with the time in the pattern HH:MM:SS.
I am sorry if this is confusing; I appreciate any help.
What this function does in detail:
First it takes a random time, for example 12:34:56, and reads only
the "5" into c1seconds; c1TIME is then 12:34:5[0-9].
TARGET then tries to find a line in the logfile containing the string matched by c1TIME and
prints the number of that line.
c1DATE changes the 12:34:5[0-9] into the found time, 12:34:56 for example.
The rest is actually unimportant.
Like I said above, when TARGET is empty the function has to run again, so how can I check TARGET while or after the function executes?
function check() {
c1seconds=`echo $TIME | awk '{print substr ($1,7,8)}' | head -1c`
c1TIME=$hours:$minutes:$c1seconds[0-9]
TARGET=`cat -n $1 | grep "$c1TIME" | awk '{printf "%s\n",$1}' | awk 'sub("$", "")'| head -1`
c1DATE=`cat $1 | grep "$c1TIME" | grep -Eo "[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}" | head -1`
EXCEPTION=`tail -$[$WINDOW_LINES-$TARGET] $1 | grep -c Exception`
ERROR=`tail -$[$WINDOW_LINES-$TARGET] $1 | grep -c Error`
SEMAPHORE=`tail -$[$WINDOW_LINES-$TARGET] $1| grep -c semaphore`
echo $TARGET
return 0
}
You want to repeat while TARGET is empty :) so here you go:
function check() {
c1seconds=`echo $TIME | awk '{print substr ($1,7,8)}' | head -1c`
c1TIME=$hours:$minutes:$c1seconds[0-9]
TARGET=`cat -n $1 | grep "$c1TIME" | awk '{printf "%s\n",$1}' | awk 'sub("$", "")'| head -1`
if [ x$TARGET = x ] ; then
c1TIME=$hours:$minutes:[0-9][0-9]
TARGET=`cat -n $1 | grep "$c1TIME" | awk '{printf "%s\n",$1}' | awk 'sub("$", "")'| head -1`
fi
c1DATE=`cat $1 | grep "$c1TIME" | grep -Eo "[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}" | head -1`
EXCEPTION=`tail -$[$WINDOW_LINES-$TARGET] $1 | grep -c Exception`
ERROR=`tail -$[$WINDOW_LINES-$TARGET] $1 | grep -c Error`
SEMAPHORE=`tail -$[$WINDOW_LINES-$TARGET] $1| grep -c semaphore`
echo $TARGET
}
BTW, this feels like an ugly way to do what you are doing. :P
It feels even uglier :) but I don't know what you are trying to accomplish or what your input is, so the answer stands as above.
Well, you can always use recursion: keep a level=0 at the global scope and call the function again.
level=0;
function check() {
Now instead of
c1TIME=$hours:$minutes:$c1seconds[0-9]
use
if [ $level -eq 0 ]; then c1TIME=$hours:$minutes:$c1seconds[0-9]; fi
if [ $level -eq 1 ]; then c1TIME=$hours:$minutes:[0-9][0-9]; fi
if [ $level -eq 2 ]; then c1TIME=$hours:$c1minutes[0-9]:[0-9][0-9]; fi
if [ $level -ge 3 ]; then echo "ERROR"; exit; fi
Now instead of
TARGET=`cat -n $1 | grep "$c1TIME" | awk '{printf "%s\n",$1}' | awk 'sub("$", "")'| head -1`
use
TARGET=`cat -n $1 | grep "$c1TIME" | awk '{printf "%s\n",$1}' | awk 'sub("$", "")'| head -1`
if [ "x$TARGET" = "x" ]; then ((level++)); check "$@"; fi
I have not run this, so there may be typos, but the main idea is: since the number of recursions and checks is at most 3, use a simple if check on the depth of the recursion. You can turn the series of ifs into if/elif. Also, if grep can open a file, why cat the file and pipe it? And if awk can search for a pattern in a file, why cat file | grep pattern before it?
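The level-based retry can also be sketched as a plain loop over the three patterns instead of recursion. The variables below (hours, minutes, c1minutes, c1seconds) mirror the question's, and the log line is fabricated for illustration:

```shell
#!/bin/bash
# Loop-based sketch: try progressively looser time patterns until grep finds
# a matching line; TARGET ends up as that line's number, or empty if none of
# the three patterns match.
tmp=$(mktemp -d)
printf 'something at 12:34:56 happened\n' > "$tmp/app.log"
hours=12 minutes=34 c1minutes=3 c1seconds=5
TARGET=
for c1TIME in "$hours:$minutes:$c1seconds[0-9]" \
              "$hours:$minutes:[0-9][0-9]" \
              "$hours:${c1minutes}[0-9]:[0-9][0-9]"; do
    TARGET=$(grep -n "$c1TIME" "$tmp/app.log" | head -1 | cut -d: -f1)
    [ -n "$TARGET" ] && break
done
echo "TARGET=$TARGET"
```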
