Read in two text file lists and write out one line from each to a text file - windows

I need to create a PowerShell script that reads in two text files with lists.
The first list identifies the remote computers that will each have a file appended to.
The second list contains the key phrases to append to those files.
List 1:
Computer1
Computer2
Computer3
List 2:
ABC
DEF
GHI
The script would loop through each list, pointing to a file, say C:\temp\help.txt, on each of the remote computers, and write one line from List 2.
For example: when the script runs, it reads List 1, finds that Computer1 is first, and opens the file \\Computer1\C$\temp\help.txt; it then grabs the first line from List 2 and writes ABC to the file. It closes the file and moves on to Computer2: it opens \\Computer2\C$\temp\help.txt, grabs the second item from List 2, writes DEF, saves, and moves on.
It's been hard to find any help on reading in and looping through two lists. Or perhaps I am thinking about it wrong. I have gotten Get-Content to read in a file, and a foreach loop can go through one of the text files, but I cannot figure out how to loop through the second text file at the same time.

Processing side-by-side arrays is always a pain, and tends to be error prone. Ideally, as suggested, the computer names and strings would be together in a CSV or something.
But as is, something like this should get you on the right track. You might have to fiddle with newlines at the beginning/end of the strings a bit.
$machines = Get-Content .\MachineList.txt
$strings  = Get-Content .\StringsList.txt
if ($machines.Count -ne $strings.Count) { throw 'Counts do not match' }
for ($i = 0; $i -lt $strings.Count; $i++)
{
    $path = "\\$($machines[$i])\C`$\temp\help.txt"  # UNC path
    $strings[$i] | Add-Content $path
}
If you have it in a CSV like
Config.csv
---------
ComputerName,String
machine1,string1
machine2,string2
machine3,string3
Then you could simplify to this:
Import-Csv .\Config.csv | %{
    $_.String | Add-Content "\\$($_.ComputerName)\C`$\temp\help.txt"
}

Related

How to check if the following files exist or not with a condition in a shell script?

I have a scenario.
In path $path1 I have this list of files:
LINUX-7.1.0.00.00-010.RHEL6.DEBUG.i386.rpm
LINUX-7.1.0.00.00-010.RHEL6.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL6.i386.rpm
LINUX-7.1.0.00.00-010.RHEL6.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL7.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL7.x86_64.rpm
LINUX-7.1.0.00.00-010.SLES12SP4.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.SLES12SP4.x86_64.rpm
In $path2 I have these files:
7.1.0.00.00-010 - (build.major).(build.minor).(build.servicepack).(build.patch).(build.hotfix)-(build.number)
build.major - 7
build.minor - 1
build.servicepack - 0
build.patch - 0
build.hotfix - 0
build.number - 010
I need to check whether the particular list of files exists or not; if it exists, the script can follow some steps, else it should exit.
As Barmar said, this website is more aimed at solving technical issues.
Assuming you don't know where to look, I would approach the problem with the following steps:
"cat" the input file and use "awk" to extract the 3rd column
use the output in a for loop to iterate through the lines (even though you could do it with awk directly), concatenating into a variable (called tmp, for example)
look for the files using $tmp in their name.
So, in shell, you can use awk to select a column from a text input, you can iterate directly through the lines of a text stream with a for loop, and you can insert the value of a variable into a string using $myVariable.
You're on the right track now!
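The steps above can be sketched roughly as follows. The file name build.txt, the directory in $path1, and the sample contents are assumptions for illustration; substitute your real paths, and note you may need to adjust the separators so $tmp exactly matches your RPM naming (e.g. the trailing -010):

```shell
path1=./rpms
build_file=build.txt

# Sample input in the question's "build.major - 7" format (assumed name/contents).
printf '%s\n' 'build.major - 7' 'build.minor - 1' 'build.servicepack - 0' \
              'build.patch - 0' 'build.hotfix - 0' 'build.number - 010' > "$build_file"

# Steps 1-2: extract the 3rd column with awk and concatenate the pieces into $tmp.
tmp=
for field in $(awk '{print $3}' "$build_file"); do
    tmp="${tmp:+$tmp.}$field"
done
echo "version string: $tmp"    # prints: version string: 7.1.0.0.0.010

# Step 3: look for files containing $tmp in their name; exit path if none exist.
if ls "$path1"/*"$tmp"* >/dev/null 2>&1; then
    echo "files found"
else
    echo "no matching files, exiting"
fi
```

The ${tmp:+$tmp.} expansion joins the fields with dots without adding a leading dot.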

Is there a PowerShell Get-Content Function to extract line based on the first character?

I am trying to extract each line from a CSV that has over 1 million (1,000,000) lines, where the first character is a 1.
The 1 in this case refers to the 1st line of a log. There are several different logs in this file, and I need the first line from all of them. The problem is (as you can understand) that 1 is not unique and can appear in any of the 12 'columns' of data I have in this CSV.
Essentially, I would like to extract them all to a new CSV file as well, for further break down.
I know it sounds simple enough, but I cannot seem to get the information I need.
I have searched StackOverflow, Microsoft, Google and my own Tech Team.
PS: Get-Content 'C:\Users\myfiles\Desktop\massivelogs.csv' | Select-String "1" | Out-File "extractedlogs.csv"
The immediate answer is that you must use Select-String '^1' in order to restrict matching to the start (^) of each input line.
However, a much faster solution is to use the switch statement with the -File option:
$inFile = 'C:\Users\myfiles\Desktop\massivelogs.csv'
$outFile = 'extractedlogs.csv'
& { switch -File $inFile -Wildcard { '1*' { $_ } } } | Set-Content $outFile
Note, however, that the output file won't be a true CSV file, because it will lack a header row.
Also, note that Set-Content applies an edition-specific default character encoding (the active ANSI code page in Windows PowerShell, BOM-less UTF-8 in PowerShell Core); use -Encoding as needed.
Using -Wildcard with a wildcard pattern (1*) speeds things up slightly, compared to -Regex with ^1.

Need windows script to run simple for loop across all the parameters listed in a text file

I need to run the Windows "net view" command against each of the values in the test.txt file.
test.txt contains a list of servers as below:
\\LB042073
\\LB042425
\\LB042507
\\LB045196
I need to run "net view" command against each of these servers.
Below command does not work:
Get-Content test.txt | %{& net view}
Thanks in advance.
In PowerShell:
Get-Content test.txt | %{net view $_}
Get-Content - read the file outputting each line individually
| - Pipeline character. It works the same in Windows and *nix: it passes the output of one command to the input of the next command
% - Alias for ForEach-Object. This is a loop construct that will do some code for each object in a list
{ - Beginning of the code block
net view $_ - runs the net.exe program, passing it two parameters: view and the contents of the special $_ variable. The $_ variable in a ForEach-Object loop holds the input item for the current iteration of the loop.
} - End of the code block

Call script on all file names starting with string in folder bash

I have a set of files in a folder that I want to perform an action on, and I'm hoping to write a script for it. Each file starts with mazeFilex, where x can be any number. Is there a quick and easy way to perform an action on each file? E.g. I will be doing
cat mazeFile0.txt | ./maze_ppm 5 | convert - maze0.jpg
How can I select each file, knowing the file will always start with mazeFile?
for fname in mazeFile*
do
    base=${fname%.txt}
    base=${base#mazeFile}
    ./maze_ppm 5 <"$fname" | convert - "maze${base}.jpg"
done
Notes
for fname in mazeFile*; do
This code starts the loop. Written this way, it is safe for all filenames, whether they have spaces, tabs or whatever else in their names.
base=${fname%.txt}; base=${base#mazeFile}
This removes the mazeFile prefix and .txt suffix to just leave the base name that we will use for the output file.
./maze_ppm 5 <"$fname" | convert - "maze${base}.jpg"
The output filename is constructed using base. Note also that cat was unnecessary and has been removed here.
for i in mazeFile*.txt; do ./maze_ppm 5 <"$i" | convert - "$(basename "maze${i:8}" .txt).jpg"; done
You can use a for loop to run through all the filenames.
#!/bin/bash
for fn in mazeFile*; do
echo "the next file is $fn"
# do something with file $fn
done
See answer here as well: Bash foreach loop
I see you want a backreference to the number in the mazeFile. Thus I recommend John1024's answer.
Edit: removed the unnecessary ls command, per @guido's comment.

Comparing two text files and counting number of occurrences

I'm trying to write a blog post about the dangers of having a common access point name.
So I did some wardriving to get a list of access point names, and I downloaded a list of the 1000 most common access point names (for which rainbow tables exist) from Renderlab.
But how can I compare those two text files, to see how many of my collected access point names that are open to attacks from rainbow tables?
The text files are built like this:
collected.txt:
linksys
internet
hotspot
The file of most common access point names is called
SSID.txt:
default
NETGEAR
Wireless
WLAN
Belkin54g
So the script should sort the lines, compare them, and show how many times the lines from collected.txt are found in SSID.txt.
Does that make any sense? Any help would be appreciated :)
If you don't mind using a Python script (Python 3):
with open('SSID.txt', 'r') as content_file:  # read the whole of file 2
    SSID = content_file.read()
found = {}  # summary of found names
with open('collected.txt', 'r') as file1:  # open file 1 for reading
    for line in file1:
        line = line.strip()  # drop the trailing newline before matching
        if line and line in SSID:
            found[line] = found.get(line, 0) + 1
for name in found:
    print(found[name], name)  # print out the list and no. of occurrences
...it can be run in the dir containing these files - collected.txt and SSID.txt - and it will return a list looking like this:
5 NETGEAR
3 default
(...)
The script reads file 1 line by line and compares each line to the whole of file 2. It can easily be modified to take the file names from the command line.
First, take a look at a simple tutorial on the sdiff command, like How do I Compare two files under Linux or UNIX. Notepad++ also supports this.
To find the number of times each line in file A appears in file B, you can do:
awk 'FNR==NR{a[$0]=1; next} $0 in a { count[$0]++ }
     END { for( i in a ) print i, count[i]+0 }' A B
(The +0 forces lines that never matched to print as 0 instead of an empty field.)
If you want the output sorted, pipe the output to sort, but there's no need to sort just to find the counts. Note that the $0 in a clause can be omitted at the cost of consuming more memory, which may be a problem if file B is very large.
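For instance, to sort by count in descending order; the files A and B and their contents below are just toy stand-ins for collected.txt and SSID.txt, and the count is printed before the name so sort -rn can key on the first field:

```shell
# Toy inputs: A plays the role of collected.txt, B of SSID.txt.
printf 'linksys\ninternet\nhotspot\n' > A
printf 'linksys\ndefault\nlinksys\nNETGEAR\n' > B

# Count occurrences of each line of A in B, then sort numerically, highest first.
awk 'FNR==NR{a[$0]=1; next} $0 in a { count[$0]++ }
     END { for (i in a) print count[i]+0, i }' A B | sort -rn
# first line printed: 2 linksys
```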