With PowerShell I'm trying to split a text file into multiple files, using the beginning of each line as a delimiter
Input file (transfer.txt):
3M|9935551876|11.99|2235641|001|1|100|N|780
3M|1135741031|13.99|8735559|003|1|100|N|145
3M|5835551001|20.50|4556481|002|1|100|N|222
3M|4578420001|33.00|1125785|001|1|100|N|652
8L|00811444243|134148|4064080040|1|02/05/2017 21:15:13|8|170502707|19.85
8L|00811444243|130925|4189133003|1|02/05/2017 21:15:13|8|170502707|4.69
8L|00811444243|136513|4186144003|2|02/05/2017 21:15:13|8|170502707|10.83
Output file (Article.txt):
3M|9935551876|11.99|2235641|001|1|100|N|780
3M|1135741031|13.99|8735559|003|1|100|N|145
3M|5835551001|20.50|4556481|002|1|100|N|222
3M|4578420001|33.00|1125785|001|1|100|N|652
Here's a snippet of my code:
$Path = "D:\BATCH\"
$InputFile = (Join-Path $Path "transfer.txt")
$Reader = New-Object System.IO.StreamReader($InputFile)
while (($Line = $Reader.ReadLine()) -ne $null) {
    if ($Line.StartsWith("3M")) {
        $OutputFile = "Article.txt"
    }
    Add-Content (Join-Path $Path $OutputFile) $Line
}
As a result, this creates a file identical to the input file. What's wrong with the code?
The line below is the problem. It is outside the if block, so it adds every line to the output file. As I understand it, that is not what you want: only the lines that pass the if condition should be added to the output file. Hence, it needs to be inside the if block.
Add-Content (Join-Path $Path $OutputFile) $Line
That said, I am not too fond of this approach, because you would be performing as many disk I/O operations as there are lines that pass the if condition. Not very good for scalability.
You can change your code to something like this to reduce number of Disk I/O to just 1.
$out = While (($Line = $Reader.ReadLine()) -ne $null) {
    If ($Line.StartsWith("3M")) {
        $Line
    }
}
$OutputFile = "Article.txt"
Add-Content (Join-Path $Path $OutputFile) $Out
As others have already pointed out, you never change the output file to anything different from "Article.txt", and you write all input lines to the defined output file.
If you want to write the lines of the input file to different files depending on the value of the first field I'd recommend naming the output files after that value. And since you're writing the output with Add-Content I'd also suggest reading the input file via Get-Content for simplicity reasons. Use a StreamReader when performance is an issue (in which case you'll want to use a StreamWriter too), but not just because.
Get-Content $InputFile | ForEach-Object {
    $basename, $null = $_.Split('|', 2)
    Add-Content (Join-Path $Path "${basename}.txt") $_
}
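If the input file is big enough that Get-Content/Add-Content really do become the bottleneck, a StreamReader/StreamWriter variant that caches one writer per prefix avoids reopening the output files for every line. This is only a sketch of that idea, reusing $Path and the file names from the question:
$reader  = New-Object System.IO.StreamReader (Join-Path $Path 'transfer.txt')
$writers = @{}   # one StreamWriter per first-field value, e.g. '3M', '8L'
try {
    while ($null -ne ($line = $reader.ReadLine())) {
        $prefix = $line.Split('|')[0]
        if (-not $writers.ContainsKey($prefix)) {
            $writers[$prefix] = New-Object System.IO.StreamWriter (Join-Path $Path "$prefix.txt")
        }
        $writers[$prefix].WriteLine($line)
    }
}
finally {
    $reader.Close()
    $writers.Values | ForEach-Object { $_.Close() }
}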
I have a bunch of files in folder A and their corresponding metadata files in folder B. I want to loop through the data files and check whether the columns are the same as in the metadata file (since incoming data files could have new columns added at any position without notice). If the columns in both files match, no action is to be taken. If the data file has more columns than the metadata file, those extra columns should be deleted from the incoming data file. Any help would be appreciated. Thanks!
Data file is ps_job.txt
“empid”|”name”|”deptid”|”zipcode”|”salary”|”gender”
“1”|”Tom”|”10″|”11111″|”1000″|”M”
“2”|”Ann”|”20″|”22222″|”2000″|”F”
Meta data file is ps_job_metadata.dat
“empid”|”name”|”zipcode”|”salary”
I would like my output to be
“empid”|”name”|”zipcode”|”salary”
“1”|”Tom”|”11111″|”1000″
“2”|”Ann”|”22222″|”2000″
That's a seemingly simple question with a very complicated answer. However, I've broken down the code for what you will need to do. Here are the steps that need to happen in order for PowerShell to do everything you're asking of it.
Read the .dat file
Save the .dat data into an object
Read the .txt file
Save the .txt header into an object
Check for the differences
Delete the old text file (that had too many columns)
Create a new text file with the new columns
I've made some assumptions in how this looks. However, with the way I've structured the code, it should be easy enough to make modifications as necessary if my assumptions are wrong. Here are my assumptions:
The text file will always have all of the columns that the DAT file has (even though it will sometimes have more)
The DAT file is structured like a text file and can be directly imported into PowerShell.
And here is the code, with comments. I've done my best to explain the purpose of each section, but I've written this with the expectation that you have a basic knowledge of powershell, especially arrays. If you have questions I'll do my best to answer, though I'll ask that you refer to the section of code you have questions on.
###
### The paths. I'm sure you will have multiples of each file. However, I didn't want to attempt to pull in
### the files with this sample code as it can vary so much in your environment.
###
$dat = "C:\StackOverflow\thingy.dat"
$txt = "C:\stackoverflow\ps_job.txt"
###
### This is the section to process the DAT file
###
# This will read the file and put it in a variable
$dat_raw = get-content -Path $dat
# Now, let's separate out the punctuation and give us our object
$dat_array = $dat_raw.split("|")
$dat_object = @()
foreach ($thing in $dat_array)
{
    $dat_object += $thing.Replace("""","")
}
###
### This is the section to process the TXT file
###
# This will read the file and put it into a variable
$txt_raw = get-content -Path $txt
# Now, let's separate out the punctuation and give us our object
$txt_header_array = $txt_raw[0].split("|")
$txt_header_object = @()
foreach ($thing in $txt_header_array)
{
    $txt_header_object += $thing.Replace("""","")
}
###
### Now, let's figure out which columns we're eliminating (if any)
###
$x = 0
$total = $txt_header_object.count
$to_keep = @()
While ($x -lt $total)
{
    if ($dat_object -contains $txt_header_object[$x])
    {
        $to_keep += $x
    }
    $x++
}
### Now that we know which columns to keep, we can apply the changes to each line of the text file.
### We will save each line to a new variable. Then, once we have the new variable, we will replace
### the existing file with a new file that has only the data we want. Note, we will only run this
### code if there's a difference in the files.
if ($total -ne $to_keep.count)
{
    ### This first section will go line by line and 'fix' the number of columns
    $new_text_file = @()
    foreach ($line in $txt_raw)
    {
        if ($line.Length -gt 0)
        {
            # Blank out the array each time
            $line_array = @()
            foreach ($number in $to_keep)
            {
                $line_array += ($line.split("|"))[$number]
            }
            $new_text_file += $line_array -join "|"
        }
        else
        {
            $new_text_file += ""
        }
    }
    ### This second section will delete the original file and replace it with our good
    ### file that has been created.
    Remove-Item -Path $txt
    $new_text_file | Out-File -FilePath $txt
}
This small example can be a start for your solution:
$ps_job = Import-Csv D:\ps_job.txt -Delimiter '|'
$ps_job_metadata = (Get-Content D:\ps_job_metadata.txt) -split '\|' -replace '"'
# the columns currently present in the job file
$column = $ps_job[0].PSObject.Properties.Name
foreach( $d in (Compare-Object $column $ps_job_metadata))
{
    if($d.SideIndicator -eq '<=')
    {
        $ps_job | %{ $_.psobject.Properties.Remove($d.InputObject) }
    }
}
$ps_job | Export-Csv -Path D:\output.txt -Delimiter '|' -NoTypeInformation
I tried this and it works.
$outputFile = "C:\Script_test\ps_job_mod.dat"
$sample = Import-Csv -Path "C:\Script_test\ps_job.dat" -Delimiter '|'
$metadataLine = Get-Content -Path "C:\Script_test\ps_job_metadata.txt" -First 1
$desiredColumns = $metadataLine.Split("|").Replace("`"","")
$sample | select $desiredColumns | Export-Csv $outputFile -Encoding UTF8 -NoTypeInformation -Delimiter '|'
Please note that the smart quotes are inconsistent across the rows and there are empty lines between the rows (I highly recommend reformatting/updating your question).
Anyway, as long as the quoting of the header is consistent between the two files (ps_job.txt and ps_job_metadata.dat):
# $JobTxt = Get-Content .\ps_job.txt
$JobTxt = @'
“empid”|”name”|”deptid”|”zipcode”|”salary”|”gender”
“1”|”Tom”|”10″|”11111″|”1000″|”M”
“2”|”Ann”|”20″|”22222″|”2000″|”F”
'@
# $MetaDataTxt = Get-Content .\ps_job_metadata.dat
$MetaDataTxt = @'
“empid”|”name”|”zipcode”|”salary”
'@
$Job = ConvertFrom-Csv -Delimiter '|' $JobTxt
$MetaData = ConvertFrom-Csv -Delimiter '|' (@($MetaDataTxt) + 'x|')
$Job | Select-Object $MetaData.PSObject.Properties.Name
“empid” ”name” ”zipcode” ”salary”
------- ------ --------- --------
“1” ”Tom” ”11111″ ”1000″
“2” ”Ann” ”22222″ ”2000″
Here's the same answer I posted to your question on Powershell.org
$jobfile = "ps_job.dat"
$metafile = "ps_job_metadata.dat"
$outputfile = "some_file.csv"
$meta = ((Get-Content $metafile -First 1 -Encoding UTF8) -split '\|')
Class ColumnSelector : System.Collections.Specialized.OrderedDictionary {
    Select($line,$meta)
    {
        $meta | foreach{$this.add($_,(iex "`$line.$_"))}
    }
    ColumnSelector($line,$meta)
    {
        $this.select($line,$meta)
    }
}
import-csv $jobfile -Delimiter '|' |
foreach{[pscustomobject]([columnselector]::new($_,$meta))} |
Export-CSV $outputfile -Encoding UTF8 -NoTypeInformation -Delimiter '|'
Output
PS C:\>Get-Content $outputfile
"empid"|"name"|"zipcode"|"salary"
"1"|"Tom"|"11111"|"1000"
"2"|"Ann"|"22222"|"2000"
Provided you want to keep those curly quotes, and your code page and console font support all the characters, you can do the following:
# Create array of properties delimited by |
$headers = (Get-Content .\ps_job_metadata.dat -Encoding UTF8) -split '\|'
Import-Csv ps_job.dat -Delimiter '|' -Encoding utf8 | Select-Object $headers
I currently have a CSV which contains 1 column that lists many file FullNames (i.e. "\\server\sub\folder\file.ext").
I am attempting to import this CSV, move the file to a separate location and append a GUID to the beginning of the filename in the new location (i.e. GUID_File.ext). I've been able to move the files and generate the GUID_ prefix, but I haven't been able to store and reuse the existing filename.ext; it just gets cut off and the file ends up being just a GUID_. I am not sure how to store the existing filename for reuse.
$Doc = Import-CSV C:\Temp\scripttest.csv
ForEach ($line in $Doc)
{
    $FileBase = $Line.basename
    $FileExt = $Line.extension
    Copy-Item -path $line.File -Destination "\\Server\Folder\$((new-guid).guid.replace('-',''))_$($Filebase)$($FileExt)"
}
If possible, I'm also going to need to store all the new GUID_File.ext names back into a CSV and write any errors to another file.
I currently have a CSV which contains 1 column that lists many file FullNames (i.e. "\\server\sub\folder\file.ext").
This isn't a CSV. It's just a plaintext file with a list.
Here's how you can accomplish your goal, however:
foreach ($path in (Get-Content -Path C:\Temp\scripttest.csv))
{
    $file = [System.IO.FileInfo]$path
    $prefix = (New-Guid).Guid -replace '-'
    Copy-Item -Path $file.FullName -Destination "\\Server\Folder\${prefix}_$($file.Name)"
}
This will take your list, convert the item into a FileInfo type it can work with, and do the rest of your logic.
Based on:
$FileBase = $line.basename
$FileExt = $line.extension
it sounds like you mistakenly think that the $line instances representing the objects returned from Import-Csv C:\Temp\scripttest.csv are [System.IO.FileInfo] instances, but they're not:
What Import-Csv outputs are [pscustomobject] instances whose properties reflect the column values of the input CSV, and the values of these properties are invariably strings.
You must therefore use $line.<column1Name> to refer to the column containing the full filenames, where <column1Name> is the name defined for the column of interest in the header line (the 1st line) of the input CSV file.
If the CSV file has no header line, you can specify the column names by passing an array of column names to Import-Csv's -Header parameter, e.g.,
Import-Csv -Header Path, OtherCol1, OtherCol2, ... C:\Temp\scripttest.csv
I'll assume that the column of interest is named Path in the following solution:
$Doc = Import-Csv C:\Temp\scripttest.csv
ForEach ($rowObject in $Doc)
{
$fileName = Split-Path -Leaf $rowObject.Path
Copy-Item -Path $rowObject.Path `
-Destination "\\Server\Folder\$((new-guid).guid.replace('-',''))_$fileName"
}
Note how Split-Path -Leaf is used to extract the filename, including extension, from the full input path.
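For example, with the path from the question:
Split-Path -Leaf '\\server\sub\folder\file.ext'   # returns: file.ext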
If I read your question carefully, you want to:
copy the files listed in the CSV file in the 'File' column.
the new files should have a GUID prepended to the filename
you need a new CSV file where the new filenames are stored for later reference
you want to track any errors and write those to a (log) file
Assuming you have an input CSV file looking something like this:
File,Author,MoreStuff
\\server\sub\folder\file.ext,Someone,Blah
\\server\sub\folder\file2.ext,Someone Else,Blah2
\\server\sub\folder\file3.ext,Same Someone,Blah3
Then the script below hopefully does what you want.
It creates new filenames by prepending a GUID and copies the files listed in the CSV's File column to some destination path.
It outputs a new CSV file in the destination folder like this:
OriginalFile,NewFile
\\server\sub\folder\file.ext,\\anotherserver\sub\folder\38f7bec9e4c0443081b385277a9d253d_file.ext
\\server\sub\folder\file2.ext,\\anotherserver\sub\folder\d19546f7a3284ccb995e5ea27db2c034_file2.ext
\\server\sub\folder\file3.ext,\\anotherserver\sub\folder\edd6d35006ac46e294aaa25526ec5033_file3.ext
Any errors are listed in a log file (also in the destination folder).
$Destination = '\\Server\Folder'
$ResultsFile = Join-Path $Destination 'Copy_Results.csv'
$Logfile = Join-Path $Destination 'Copy_Errors.log'
$Doc = Import-CSV C:\Temp\scripttest.csv
# create an array to store the copy results in
$result = @()
# loop through the csv data using only the column called 'File'
ForEach ($fileName in $Doc.File) {
    # check if the given file exists; if not then write to the errors log file
    if (Test-Path -Path $fileName -PathType Leaf) {
        $oldBaseName = Split-Path -Path $fileName -Leaf
        # or do $oldBaseName = [System.IO.Path]::GetFileName($fileName)
        $newBaseName = "{0}_{1}" -f $((New-Guid).ToString("N")), $oldBaseName
        # (New-Guid).ToString("N") returns the Guid without hyphens, same as (New-Guid).Guid.Replace('-','')
        $destinationFile = Join-Path $Destination $newBaseName
        try {
            Copy-Item -Path $fileName -Destination $destinationFile -Force -ErrorAction Stop
            # add an object to the results array to store the original filename and the full filename of the copy
            $result += New-Object -TypeName PSObject -Property @{
                'OriginalFile' = $fileName
                'NewFile'      = $destinationFile
            }
        }
        catch {
            Write-Error "Could not copy file to '$destinationFile'"
            # write the error to the log file
            Add-Content $Logfile -Value "$((Get-Date).ToString("yyyy-MM-dd HH:mm:ss")) - ERROR: Could not copy file to '$destinationFile'"
        }
    }
    else {
        Write-Warning "File '$fileName' does not exist"
        # write the error to the log file
        Add-Content $Logfile -Value "$((Get-Date).ToString("yyyy-MM-dd HH:mm:ss")) - WARNING: File '$fileName' does not exist"
    }
}
# finally create a CSV with the results of this copy.
# the CSV will have two headers 'OriginalFile' and 'NewFile'
$result | Export-Csv -Path $ResultsFile -NoTypeInformation -Force
Thank you to everyone for the solutions. All of them worked, and worked well. I chose Theo's as the answer because his solution handled the error logging and stored all the newly renamed GUID_File.ext names alongside the existing CSV info.
Thank you all.
Well, as the title states, I made a PowerShell script that performed so horribly that it overextended the server resources and crashed it.
The script reads an entire .xml file and appends text at the beginning and at the end of the file. It also changes the name of the file according to what is located in my filename.txt.
The .xml files are around 500 MB in size and have over 4.7 million rows. Is there a way that I don't have to read the entire file, without losing information?
function start-jobhere([scriptblock]$block){
    start-job -argumentlist (get-location),$block { set-location $args[0]; invoke-expression $args[1] }
}
$handler_button1_Click = {
    Try {
        $job3 = start-jobhere {
            # Text that should be at the beginning of the file
            @('<?xml version="1.0" encoding="UTF-8"?>
<ids:ControlInfo>
<ids:ObjectFormat>CSV</ids:ObjectFormat>
<ids:SeparatorForCSV>;</ids:SeparatorForCSV>
</ids:ControlInfo>
<ids:BatchDeltaUntil></ids:BatchDeltaUntil>
</ids:BatchInfo>
</ids:Header>
<ids:Body>'
            ) + (get-content ZUB_Lokalisation.xml) | set-content ZUB_Lokalisation.xml
            # Text that should be at the end of the file
            Add-Content ZUB_Lokalisation.xml -Value "</ids:Body>`n</ids:SimpleOperation>"
            # Information that goes into the header of the file but has to be extracted from the filename inside a .txt
            $filename = Select-String filename.txt -Pattern "Lokalisation"
            $nameoffile = [System.IO.Path]::GetFileName($filename)
            $split = $nameoffile.split('_')
            $finalid = $split[5]
            $content = Get-Content ZUB_Lokalisation.xml
            $content[8] = ' <ids:BatchInfo ids:BatchID="{0}">' -f $finalid
            $content | Set-Content ZUB_Lokalisation.xml
            # Rename the file
            Rename-Item ZUB_Lokalisation.xml -NewName $filename
        }
    } catch [System.Exception] {
        [System.Windows.Forms.MessageBox]::Show("ZUB_LOK_ERROR", "ERROR")
    }
    Get-Job | Wait-Job | Where State -eq "Running"
}
Create files containing the start and end fragments that you want.
Then run this in a DOS window or batch file:
COPY StartFile.TXT + YourXMLFile.TXT + EndFile.TXT OutputFile.TXT
This sticks the three files together and saves them as OutputFile.TXT
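If you'd rather stay in PowerShell, the same concatenation can be done with raw file streams, so the 500 MB payload is never loaded into memory. This is only a sketch; the fragment file names are placeholders:
$outStream = [System.IO.File]::Create('OutputFile.xml')
try {
    # Stream each part into the output file without reading it fully into memory
    foreach ($part in 'StartFile.txt', 'ZUB_Lokalisation.xml', 'EndFile.txt') {
        $inStream = [System.IO.File]::OpenRead($part)
        try     { $inStream.CopyTo($outStream) }
        finally { $inStream.Close() }
    }
}
finally {
    $outStream.Close()
}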
I have standard Apache log files, between 500 MB and 2 GB in size. I need to sort the lines in them (each line starts with a date yyyy-MM-dd hh:mm:ss, so no treatment is necessary for sorting).
The simplest and most obvious thing that comes to mind is
Get-Content unsorted.txt | sort | get-unique > sorted.txt
I am guessing (without having tried it) that doing this using Get-Content would take forever in my 1GB files. I don't quite know my way around System.IO.StreamReader, but I'm curious if an efficient solution could be put together using that?
Thanks to anyone who might have a more efficient idea.
[edit]
I tried this subsequently, and it took a very long time; some 10 minutes for 400MB.
Get-Content is terribly inefficient for reading large files, and Sort-Object is not very fast either.
Let's set up a base line:
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$c = Get-Content .\log3.txt -Encoding Ascii
$sw.Stop();
Write-Output ("Reading took {0}" -f $sw.Elapsed);
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$s = $c | Sort-Object;
$sw.Stop();
Write-Output ("Sorting took {0}" -f $sw.Elapsed);
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$u = $s | Get-Unique
$sw.Stop();
Write-Output ("uniq took {0}" -f $sw.Elapsed);
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$u | Out-File 'result.txt' -Encoding ascii
$sw.Stop();
Write-Output ("saving took {0}" -f $sw.Elapsed);
With a 40 MB file having 1.6 million lines (made of 100k unique lines repeated 16 times) this script produces the following output on my machine:
Reading took 00:02:16.5768663
Sorting took 00:02:04.0416976
uniq took 00:01:41.4630661
saving took 00:00:37.1630663
Totally unimpressive: more than 6 minutes to sort a tiny file. Every step can be improved a lot. Let's use a StreamReader to read the file line by line into a HashSet, which will remove duplicates, then copy the data to a List and sort it there, then use a StreamWriter to dump the results back.
$hs = new-object System.Collections.Generic.HashSet[string]
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$reader = [System.IO.File]::OpenText("D:\log3.txt")
try {
    while (($line = $reader.ReadLine()) -ne $null)
    {
        $t = $hs.Add($line)
    }
}
finally {
    $reader.Close()
}
$sw.Stop();
Write-Output ("read-uniq took {0}" -f $sw.Elapsed);
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$ls = new-object system.collections.generic.List[string] $hs;
$ls.Sort();
$sw.Stop();
Write-Output ("sorting took {0}" -f $sw.Elapsed);
$sw = [System.Diagnostics.Stopwatch]::StartNew();
try
{
    $f = New-Object System.IO.StreamWriter "d:\result2.txt";
    foreach ($s in $ls)
    {
        $f.WriteLine($s);
    }
}
finally
{
    $f.Close();
}
$sw.Stop();
Write-Output ("saving took {0}" -f $sw.Elapsed);
this script produces:
read-uniq took 00:00:32.2225181
sorting took 00:00:00.2378838
saving took 00:00:01.0724802
On the same input file it runs more than 10 times faster. I am still surprised, though, that it takes 30 seconds to read the file from disk.
I've grown to hate this part of Windows PowerShell; it is a memory hog with these larger files. One trick is to read the lines with [System.IO.File]::ReadLines('file.txt') | sort -u | out-file file2.txt -encoding ascii
Another trick, seriously, is to just use Linux.
cat file.txt | sort -u > output.txt
Linux is so insanely fast at this, it makes me wonder what the heck Microsoft is thinking with this setup.
It may not be feasible in all cases, and I understand that, but if you have a Linux machine, you can copy 500 MB to it, sort and unique it, and copy it back in under a couple of minutes.
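If WSL happens to be installed on the same Windows box (an assumption; the point above is about a separate Linux machine), the copy step can be skipped and GNU sort called straight from PowerShell, for example:
# Assumes WSL with GNU coreutils is installed; /mnt/c/... is how the Linux side sees C:\
wsl bash -c "sort -u /mnt/c/temp/file.txt > /mnt/c/temp/output.txt"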
If each line of the log is prefixed with a timestamp, and the log messages don't contain embedded newlines (which would require special handling), I think it would take less memory and execution time to convert the timestamp from [String] to [DateTime] before sorting. The following assumes each log entry is of the format yyyy-MM-dd HH:mm:ss: <Message> (note that the HH format specifier is used for a 24-hour clock):
Get-Content unsorted.txt `
| ForEach-Object {
    # Ignore empty lines; can substitute with [String]::IsNullOrWhitespace($_) on PowerShell 3.0 and above
    if (-not [String]::IsNullOrEmpty($_))
    {
        # Split into at most two fields, even if the message itself contains ': '
        [String[]] $fields = $_ -split ': ', 2;
        return New-Object -TypeName 'PSObject' -Property @{
            Timestamp = [DateTime] $fields[0];
            Message   = $fields[1];
        };
    }
} | Sort-Object -Property 'Timestamp', 'Message';
If you are processing the input file for interactive display purposes you can pipe the above into Out-GridView or Format-Table to view the results. If you need to save the sorted results you can pipe the above into the following:
| ForEach-Object {
    # Reconstruct the log entry format of the input file
    return '{0:yyyy-MM-dd HH:mm:ss}: {1}' -f $_.Timestamp, $_.Message;
} `
| Out-File -Encoding 'UTF8' -FilePath 'sorted.txt';
(Edited to be more clear based on n0rd's comments)
It might be a memory issue. Since you're loading the entire file into memory to sort it (and adding the overhead of the pipe into Sort-Object and the pipe into Get-Unique), it's possible that you're hitting the memory limits of the machine and forcing it to page to disk, which will slow things down a lot. One thing you might consider is splitting the logs up before sorting them, and then splicing them back together.
This probably won't match your format exactly, but if I've got a large log file for, say, 8/16/2012 which spans several hours, I can split it up into a different file for each hour using something like this:
for($i=0; $i -le 23; $i++){ Get-Content .\u_ex120816.log | ? { $_ -match "^2012-08-16 $i`:" } | Set-Content -Path "$i.log" }
This is creating a regular expression for each hour of that day and dumping all the matching log entries into a smaller log file named by the hour (e.g. 16.log, 17.log).
Then I can run your process of sorting and getting unique entries on much smaller subsets, which should run a lot faster:
for($i=0; $i -le 23; $i++){ Get-Content "$i.log" | sort | get-unique > "${i}sorted.txt" }
And then you can splice them back together.
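Since the hours are processed in ascending order and each hourly file is already sorted, the splicing can be as simple as appending the pieces in order. A sketch based on the file names used above:
Remove-Item .\sorted_full.txt -ErrorAction SilentlyContinue
for($i=0; $i -le 23; $i++){
    if (Test-Path "${i}sorted.txt") { Get-Content "${i}sorted.txt" | Add-Content .\sorted_full.txt }
}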
Depending on the frequency of the logs, it might make more sense to split them by day, or minute; the main thing is to get them into more manageable chunks for sorting.
Again, this only makes sense if you're hitting the memory limits of the machine (or if Sort-Object is using a really inefficient algorithm).
"Get-Content" can be faster than you think. Check this code-snippet in addition to the above solution:
foreach ($block in (get-content $file -ReadCount 100)) {
foreach ($line in $block){[void] $hs.Add($line)}
}
There doesn't seem to be a great way to do it in PowerShell, including [IO.File]::ReadLines(), but with the native Windows sort.exe or the GNU sort.exe, either run from cmd.exe, 30 million random numbers can be sorted in about 5 minutes with around 1 GB of RAM. The GNU sort automatically breaks things up into temp files to save RAM. Both commands have options to start the sort at a certain character column. GNU sort can merge sorted files. See external sorting.
30 million line test file:
& { foreach ($i in 1..300kb) { get-random } } | set-content file.txt
And then in cmd:
copy file.txt+file.txt file2.txt
copy file2.txt+file2.txt file3.txt
copy file3.txt+file3.txt file4.txt
copy file4.txt+file4.txt file5.txt
copy file5.txt+file5.txt file6.txt
copy file6.txt+file6.txt file7.txt
copy file7.txt+file7.txt file8.txt
With gnu sort.exe from http://gnuwin32.sourceforge.net/packages/coreutils.htm . Don't forget the dependency dll's -- libiconv2.dll & libintl3.dll. Within cmd.exe:
.\sort.exe < file8.txt > filesorted.txt
Or windows sort.exe within cmd.exe:
sort.exe < file8.txt > filesorted.txt
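The merge capability mentioned above would look something like this when run from PowerShell against the GNU sort.exe; a sketch that assumes the chunk files are already individually sorted (the file names are made up):
# GNU sort only: -m merges already-sorted inputs without a full re-sort; -o names the output file
.\sort.exe -m -o merged_sorted.txt chunk1_sorted.txt chunk2_sorted.txt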
With the function below:
PS> PowerSort -SrcFile C:\windows\win.ini
function PowerSort {
param(
[string]$SrcFile = "",
[string]$DstFile = "",
[switch]$Force
)
if ($SrcFile -eq "") {
write-host "USAGE: PowerSort -SrcFile (srcfile) [-DstFile (dstfile)] [-Force]"
return 0;
}
else {
$SrcFileFullPath = Resolve-Path $SrcFile -ErrorAction SilentlyContinue -ErrorVariable _frperror
if (-not($SrcFileFullPath)) {
throw "Source file not found: $SrcFile";
}
}
[Collections.Generic.List[string]]$lines = [System.IO.File]::ReadAllLines($SrcFileFullPath)
$lines.Sort();
# Write Sorted File to Pipe
if ($DstFile -eq "") {
foreach ($line in $lines) {
write-output $line
}
}
# Write Sorted File to File
else {
$pipe_enable = 0;
$DstFileFullPath = Resolve-Path $DstFile -ErrorAction SilentlyContinue -ErrorVariable ev
# Destination File doesn't exist
if (-not($DstFileFullPath)) {
$DstFileFullPath = $ev[0].TargetObject
}
# Destination Exists and -force not specified.
elseif (-not $Force) {
throw "Destination file already exists: ${DstFile} (using -Force Flag to overwrite)"
}
write-host "Writing-File: $DstFile"
[System.IO.File]::WriteAllLines($DstFileFullPath, $lines)
}
return
}
How do I write a script in PowerShell that finds a given string in all files in a given directory and changes it to a given second one?
Thanks for any help,
bye
Maybe something like this
$files = Get-ChildItem "DirectoryContainingFiles"
foreach ($file in $files)
{
    $content = Get-Content -path $file.fullname
    $content | foreach {$_ -replace "toreplace", "replacewith"} |
        Set-Content $file.fullname
}
If the string to replace spans multiple lines then using Get-Content isn't going to cut it unless you stitch together the output of Get-Content into a single string. It's easier to use [io.file]::ReadAllText() in this case e.g.:
Get-ChildItem | Where {!$_.PSIsContainer} |
    Foreach { $txt = [IO.File]::ReadAllText($_.fullname);
              ($txt -replace $old,$new) | Out-File $_.fullname }
Note that with $old, you may need to use a regex directive like '(?s)' at the beginning to indicate that . also matches newline characters.
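For example (a sketch with made-up marker strings and file name), a block that spans several lines could be replaced like this:
# (?s) makes '.' match newline characters, so the pattern can span lines
$old = '(?s)<!-- BEGIN generated -->.*<!-- END generated -->'
$new = '<!-- regenerated -->'
$txt = [IO.File]::ReadAllText('C:\temp\example.html')
($txt -replace $old, $new) | Out-File 'C:\temp\example.html'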
I believe that you can get the list of all files in a directory (simple?). Now comes the replacement part. Here is how you can do it with PowerShell:
type somefile.txt | %{$_ -replace "string_to_be_replaced","new_string"}
Modify it as per your need. You can also redirect the output to a new file the same way you do other redirection (using: >).
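For instance (hypothetical file names):
type somefile.txt | %{$_ -replace "string_to_be_replaced","new_string"} > somefile_new.txt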
To get the list of files, use:
Get-ChildItem <DIR_PATH> -name