I have many, many files in one folder that look like this:
E123_1_410_4.03_97166_456_2.B.pdf
and they need to be renamed like this:
E123-1-410-4.03-97166-456_2.B.pdf
I can change all of the underscores, but not just five of them (the last underscore needs to stay).
$names = "AD1-D-1234-3456-01","111-D-abcd-3456-01","abc-d-efgi-jklm-no","xxx-xx-xxxx-xxxx-xx"
$names |
ForEach-Object{
$new = $_ -replace '(?x)
^ # beginning of string
( # begin group 1
[^-]{3} # a pattern of three non-hyphen characters
) # end of group 1
- # a hyphen
( # begin group 2
[^-] # a non-hyphen (one character)
- # a hyphen
[^-]{4} # a pattern of non-hyphen characters four characters in length
- # a hyphen
[^-]{4} # a pattern of non-hyphen characters four characters in length
) # end of group 2
- # a hyphen
( # begin group 3
[^-]{2} # a pattern of non-hyphen characters two characters in length
) # end of group 3
$ # end of string
', '$1_$2_$3' # put the groups back in order and insert "_" between the three groups
if ($new -eq $_){ # check to see if the substitution worked. I.e., was the pattern in $_ correct
Write-Host "Replacement failed for '$_'"
}
else{
$new
}
}
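For reference (my own check, not part of the original answer), here is a single sample name run through the same pattern in condensed form:
'AD1-D-1234-3456-01' -replace '^([^-]{3})-([^-]-[^-]{4}-[^-]{4})-([^-]{2})$', '$1_$2_$3'
# AD1_D-1234-3456_01
The last sample name, 'xxx-xx-xxxx-xxxx-xx', should trigger the failure branch, because its second segment is two characters long rather than one.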
This will rename the files by replacing all underscores in each name with dashes, except for the last underscore:
(Get-ChildItem -Path 'X:\Where\The\Files\Are' -Filter '*_*.*' -File) | Rename-Item -NewName {
$prefix, $postfix = $_.Name -split '^(.+)(_[^_]+)$' -ne ''
"{0}$postfix" -f ($prefix -replace '_', '-')
} -WhatIf
I have put the Get-ChildItem inside parentheses to let it finish gathering the files first. If you leave that out, there is the possibility it might pick up files that were already renamed, which is a waste of time.
The added switch -WhatIf is a safety device. It lets you see in the console window what the code would rename. If you are satisfied this is correct, remove the -WhatIf switch and run the code again so the files actually are renamed.
Examples:
X:\Where\The\Files\Are\111_D_abcd_3456_01_qqq_7C.pdf --> X:\Where\The\Files\Are\111-D-abcd-3456-01-qqq_7C.pdf
X:\Where\The\Files\Are\AD1_D-1234_3456-01_xyz_3.A.pdf --> X:\Where\The\Files\Are\AD1-D-1234-3456-01-xyz_3.A.pdf
X:\Where\The\Files\Are\E123_1_410_4.03_97166_456_2.B.pdf --> X:\Where\The\Files\Are\E123-1-410-4.03-97166-456_2.B.pdf
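To see what the -split expression in the scriptblock actually produces, here is a small standalone check against one of the sample names (my own illustration); the trailing -ne '' removes the empty strings that the split leaves at both ends:
$prefix, $postfix = 'E123_1_410_4.03_97166_456_2.B.pdf' -split '^(.+)(_[^_]+)$' -ne ''
$prefix    # E123_1_410_4.03_97166_456
$postfix   # _2.B.pdf
"{0}$postfix" -f ($prefix -replace '_', '-')    # E123-1-410-4.03-97166-456_2.B.pdf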
If you want to keep the last underscore when renaming your file, use split to deconstruct the name into its parts, then reconstruct it with a loop, and finally re-attach the last part with an underscore. This way, whatever the number of underscores, you can replace all of them except the last.
Working code:
$names = "E123_1_410_4.03_97166_456-test-test_2.pdf", "E123_1_410_4.03_97166_456_2.B.pdf"
$names |
ForEach-Object{
$new = [string]::empty;
#split
$tab = $_.split("_");
#do nothing if there is only one or no dash
if($tab.count -gt 2){
#reconstruct by using keep a dash at the end
$new = $tab[0];
for($i = 1; $i -lt $tab.count - 1; $i++){
$txt = $tab[$i];
$new += "-" + $txt ;
}
#add last dash
$txt = $tab[$tab.count - 1];
$new += "_" + $txt;
if ($new -eq $_){ # check to see if the substitution worked. I.e., was the pattern in $_ correct
Write-Host "Replacement failed for '$_'"
}
else{
write-Host $new;
}
}
}
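A shorter variation on the same idea (my own sketch, assuming the name contains at least two underscores) uses array slicing and -join instead of the explicit loop:
$tab = "E123_1_410_4.03_97166_456_2.B.pdf".Split("_")
($tab[0..($tab.Count - 2)] -join "-") + "_" + $tab[-1]
# E123-1-410-4.03-97166-456_2.B.pdf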
I am trying to store a section of a text file as a string; the section has a known beginning and end, so it is a substring of the original text file. I am new to PowerShell, so my methods are simple/crude. Basically my approach has been to:
Roughly get what I want from the start of the string
Worry about trimming off what I don't want later
My minimum reproducible example is as follows:
# selectStringTest.ps
$inputFile = Get-Content -Path "C:\test\test3\Copy of 31832_226140__0001-00006.txt"
# selected text string needs to span from $refName up to $boundaryName
[string]$refName = "001 BARTLETT"
[string]$boundaryName = "001 BEECH"
# a rough estimate of the text file lines required
[int]$lines = 200
if (Select-String -InputObject $inputFile -pattern $refName) {
    Write-Host "Selected shortened string found!"
    # this selects the start of required string but with extra text
    [string]$newFileStart = $inputFile | Select-String $refName -CaseSensitive -SimpleMatch -Context 0, $lines
}
else {
    Write-Host "Selected string NOT FOUND."
}
# tidy up the start of the string by removing rubbish
$newFileStart = $newFileStart.TrimStart('> ')
# this is the kind of thing I want but it doesn't work
$newFileStart = $newFileStart - $newFileStart.StartsWith($boundaryName)
$newFileStart | Out-File tempOutputFile
As it is, the output begins correctly, but I cannot remove the text including and after $boundaryName.
The original text file is OCR-generated (Optical Character Recognition), so it is unevenly formatted and there are newlines in odd places. That leaves me with limited options when it comes to delimiting.
I am not sure my if (Select-String -InputObject $inputFile -pattern $refName) is valid, although it appears to work correctly. The general design seems crude, in that I am guessing how many lines I will need. And finally, I have tried various methods of trimming the string from $boundaryName without success. For this:
string.split() is not practical
replacing spaces with newlines in an array and looping through to the elements of $boundaryName is possible, but I don't know how to terminate the array at that point before returning it to a string.
Any suggestions would be appreciated.
Abbreviated content of the single file Copy of 31832_226140__0001-00006.txt (two sets of 200 listings) is:
Beginning of text file
________________
BARTLETT-BEDGGOOD
PENCARROW COMPOSITE ROLL
PAGE 6
PAGE 7
PENCARROW COMPOSITE ROLL
BEECH-BEST
www.
.......................
001 BARTLETT. Lois Elizabeth
Middle of text file
............. 15 St Ronans Av. Lower Hutt Marned 200 BEDGGOOD. Percy Lloyd
............15 St Ronans Av, Lower Mutt. Coachbuild
001 BEECH, Margaret ..........
End of text file
..............312 Munita Rood Eastbourne, Civil Eng 200 BEST, Dons Amy .........
..........50 Man Street, Wamuomata, Marned
SO NON
To use a regex across newlines, the file needs to be read as a single string; Get-Content -Raw will do that. This assumes that you do not want the lines containing $refName and $boundaryName included in the output.
$c = Get-Content -Path '.\beech.txt' -Raw
$refName = "001 BARTLETT"
$boundaryName = "001 BEECH"
if ($c -match "(?smi).*$refName.*?`r`n(.*)$boundaryName.*?`r`n.*") {
$result = $Matches[1]
}
$result
More information at https://stackoverflow.com/a/12573413/447901
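A small caveat (my addition, not from the linked answer): if $refName or $boundaryName might ever contain regex metacharacters, escape them when building the pattern, for example:
$pattern = "(?smi).*$([regex]::Escape($refName)).*?`r`n(.*)$([regex]::Escape($boundaryName)).*?`r`n.*"
if ($c -match $pattern) { $result = $Matches[1] }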
How close does this come to what you want?
function Process-File {
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$HeadText,
        [Parameter(Mandatory = $true, Position = 1)]
        [string]$TailText,
        [Parameter(ValueFromPipeline)]
        $File
    )
    Process {
        $Inside = $false
        switch -Regex -File $File.FullName {
            #'^\s*$' { continue }
            "(?i)^\s*$TailText(?<Tail>.*)`$" { $Matches.Tail; $Inside = $false }
            '^(?<Line>.+)$' { if($Inside) { $Matches.Line } }
            "(?i)^\s*$HeadText(?<Head>.*)`$" { $Matches.Head; $Inside = $true }
            default { continue }
        }
    }
}
$File = 'Copy of 31832_226140__0001-00006.txt'
#$Path = $PSScriptRoot
$Path = 'C:\test\test3'
$Result = Get-ChildItem -Path "$Path\$File" | Process-File '001 BARTLETT' '001 BEECH'
$Result | Out-File -FilePath "$Path\SpanText.txt"
This is the output:
. Lois Elizabeth
............. 15 St Ronans Av. Lower Hutt Marned 200 BEDGGOOD. Percy Lloyd
............15 St Ronans Av, Lower Mutt. Coachbuild
, Margaret ..........
How do you modify a string (LINE2 "the line number LINE2 is on") in a Windows ASCII text file, using search strings that are easy to read and easy to add/modify/delete, with PowerShell 5? This script will parse a 2,500-line file, find 139 instances of the strings, replace them, and overwrite the original in less than 165 ms on average, depending on which method you use. Which method is faster? Which method makes it easier to add/modify/delete the strings?
Search for the strings "AROUND LINE {1-9999}" and "LINE2 {1-9999}" and replace {1-9999} with the line number the code is on. The tests were done with a 2,500-line file, not the two-line sample.bat.
sample.bat contains two lines:
ECHO AROUND LINE 5936
TITLE %TIME% DISPLAY TCP-IP SETTINGS LINE2 5937
Method One: Using Get-Content + -replace + Set-Content:
Measure-command {
copy-item $env:temp\sample9.bat -d $env:temp\sample.bat -force
(gc $env:temp\sample.bat) | foreach -Begin {$lc = 1} -Process {
$_ -replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc"
++$lc
} | sc -Encoding Ascii $env:temp\sample.bat}
Results: 175ms-387ms in ten runs for an average of 215ms.
You modify the search by adding / removing / modifying -replace.
-replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc" -replace 'PLACEMARK \d+', "PLACEMARK $lc"
powershell $env:temp\sample.ps1 $env:temp\sample.bat:
(gc $args[0]) | foreach -Begin {$lc = 1} -Process {
$_ -replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc"
++$lc
} | sc -Encoding Ascii $args[0]
Method Two: Using switch and .NET methods:
Measure-command {
copy-item $env:temp\sample9.bat -d $env:temp\sample.bat -force
$file = "$env:temp\sample.bat"
$lc = 0
$updatedLines = switch -Regex ([IO.File]::ReadAllLines($file)) {
'^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
default { ++$lc; $_ }
}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)}
Results: 73ms-816ms in ten runs for an average of 175ms.
Method Three: Using a foreach loop and .NET methods, optimized with a precompiled regex:
Measure-command {
copy-item $env:temp\sample9.bat -d $env:temp\sample.bat -force
$file = "$env:temp\sample.bat"
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$lc = 0
$updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
$lc++
$m = $regex.Match($line)
if ($m.Success) {
$g = $m.Groups
$g[1].Value + $lc + $g[2].Value
} else { $line }
}}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)}
Results: 71ms-236ms in ten runs for an average of 106ms.
Add/Modify/Delete your search string:
AROUND LINE|LINE2|PLACEMARK
AROUND LINE|LINE3
LINE4
powershell $env:temp\sample.ps1 $env:temp\sample.bat:
$file=$args[0]
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$lc = 0
$updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
$lc++
$m = $regex.Match($line)
if ($m.Success) {
$g = $m.Groups
$g[1].Value + $lc + $g[2].Value
} else { $line }
}}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
Editor's note: This is a follow-up question to Iterate a backed up ascii text file, find all instances of {LINE2 1-9999} replace with {LINE2 "line number the code is on"}. Overwrite. Faster?
The evolution of this question from youngest to oldest:
1. 54757890 2. 54737787 3. 54712715 4. 54682186
Update: I've used @mklement0's regex solution.
switch -Regex -File $file {
'^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
default { ++$lc; $_ }
}
Given that regex ^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$ contains only 2 capture groups - the part of the line before the number to replace (\d+) and the part of the line after, you must reference these groups with indices 1 and 2 into the automatic $Matches variable in the output (not 2 and 3).
Note that (?:...) is a non-capturing group, so by design it isn't reflected in $Matches.
Instead of reading the file with [IO.File]::ReadAllLines($file), I'm using the -File option with switch, which directly reads the lines from file $file.
The ++$lc inside default { ++$lc; $_ } ensures that the line counter is also incremented for non-matching lines before passing the line at hand through ($_).
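To make the indexing concrete, here is a quick check against one of the sample lines (my own illustration) showing where the two capture groups land:
'TITLE %TIME% DISPLAY TCP-IP SETTINGS LINE2 5937' -match '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$'  # True
$Matches[1]   # 'TITLE %TIME% DISPLAY TCP-IP SETTINGS LINE2 '
$Matches[2]   # '' (nothing follows the number on this sample line)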
Performance notes
You can improve the performance slightly with the following obscure optimization:
# Enclose the switch statement in & { ... } to speed it up slightly.
$updatedLines = & { switch -Regex -File ... }
With high iteration counts (a large number of lines), using a precompiled [regex] instance rather than a string literal that PowerShell converts to a regex behind the scenes can speed things up further - see benchmarks below.
Additionally, if case-sensitive matching is sufficient, you can squeeze out a little more performance by adding the -CaseSensitive option to the switch statement.
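As a sketch only (mirroring the benchmark code further below), the & { ... } wrapper and -CaseSensitive combined would look like this, assuming the search strings only ever appear in upper case:
$lc = 0
$updatedLines = & { switch -CaseSensitive -Regex -File $file {
    '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
    default { ++$lc; $_ }
} }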
At a high level, what makes the solution fast is the use of switch -File to process the lines and, generally, the use of .NET types for file I/O rather than cmdlets ([IO.File]::WriteAllLines() in this case, as shown in the question) - see also this related answer.
That said, marsze's answer offers a highly optimized foreach loop approach based on a precompiled regex that is faster with higher iteration counts - it is, however, more verbose.
Benchmarks
The following code compares the performance of this answer's switch approach with marsze's foreach approach.
Note that in order to make the two solutions fully equivalent, the following tweaks were made:
The & { ... } optimization was added to the switch command as well.
The IgnoreCase and CultureInvariant options were added to the foreach approach to match the options PS regexes implicitly use.
Instead of a 6-line sample file, performance is tested with a 600-line, a 3,000-line, and a 30,000-line file respectively, so as to show the effects of the iteration count on performance.
100 runs are being averaged.
Sample results from my Windows 10 machine running Windows PowerShell v5.1 - the absolute times aren't important, but hopefully the relative performance shown in the Factor column is generally representative:
VERBOSE: Averaging 100 runs with a 600-line file of size 0.03 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.023 # switch -Regex -File with regex string literal...
1.16 0.027 # foreach with precompiled regex and [regex].Match...
1.23 0.028 # switch -Regex -File with precompiled regex...
VERBOSE: Averaging 100 runs with a 3000-line file of size 0.15 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.063 # foreach with precompiled regex and [regex].Match...
1.11 0.070 # switch -Regex -File with precompiled regex...
1.15 0.073 # switch -Regex -File with regex string literal...
VERBOSE: Averaging 100 runs with a 30000-line file of size 1.47 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.252 # foreach with precompiled regex and [regex].Match...
1.24 0.313 # switch -Regex -File with precompiled regex...
1.53 0.386 # switch -Regex -File with regex string literal...
Note how at lower iteration counts switch -regex with a string literal is fastest, but at around 1,500 lines the foreach solution with a precompiled [regex] instance starts to get faster; using a precompiled [regex] instance with switch -regex pays off to a lesser degree, only with higher iteration counts.
Benchmark code, using the Time-Command function:
# Sample file content (6 lines)
$fileContent = @'
TITLE %TIME% NO "%zmyapps1%\*.*" ARCHIVE ATTRIBUTE LINE2 1243
TITLE %TIME% DOC/SET YQJ8 LINE2 1887
SET ztitle=%TIME%: WINFOLD LINE2 2557
TITLE %TIME% _*.* IN WINFOLD LINE2 2597
TITLE %TIME% %%ZDATE1%% YQJ25 LINE2 3672
TITLE %TIME% FINISHED. PRESS ANY KEY TO SHUTDOWN ... LINE2 4922
'@
# Determine the full path to a sample file.
# NOTE: Using the *full* path is a *must* when calling .NET methods, because
# the latter generally don't see the same working dir. as PowerShell.
$file = "$PWD/test.bat"
# Note: input is the number of 6-line blocks to write to the sample file,
# which amounts to 600 vs. 3,000 vs. 30,000 lines.
100, 500, 5000 | % {
# Create the sample file with the sample content repeated N times.
$repeatCount = $_
[IO.File]::WriteAllText($file, $fileContent * $repeatCount)
# Warm up the file cache and count the lines.
$lineCount = [IO.File]::ReadAllLines($file).Count
# Define the commands to compare as an array of scriptblocks.
$commands =
{ # switch -Regex -File with regex string literal
& {
$i = 0
$updatedLines = switch -Regex -File $file {
'^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$i + $Matches[2] }
default { ++$i; $_ }
}
[IO.File]::WriteAllLines($file, $updatedLines, [text.encoding]::ASCII)
}
}, { # switch -Regex -File with precompiled regex
& {
$i = 0
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$updatedLines = switch -Regex -File $file {
$regex { $Matches[1] + ++$i + $Matches[2] }
default { ++$i; $_ }
}
[IO.File]::WriteAllLines($file, $updatedLines, [text.encoding]::ASCII)
}
}, { # foreach with precompiled regex and [regex].Match
& {
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$i = 0
$updatedLines = foreach ($line in [IO.File]::ReadLines($file)) {
$i++
$m = $regex.Match($line)
if ($m.Success) {
$g = $m.Groups
$g[1].Value + $i + $g[2].Value
} else { $line }
}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
}
}
# How many runs to average.
$runs = 100
Write-Verbose -vb "Averaging $runs runs with a $lineCount-line file of size $('{0:N2} MB' -f ((Get-Item $file).Length / 1mb))..."
Time-Command -Count $runs -ScriptBlock $commands | Out-Host
}
Alternative solution:
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$lc = 0
$updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
$lc++
$m = $regex.Match($line)
if ($m.Success) {
$g = $m.Groups
$g[1].Value + $lc + $g[2].Value
} else { $line }
}}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
I want to search a text file for more than one string. If I find at least one string (I repeat, I only need one of the strings to be found, not all of them), I want the program to stop and create a file containing the text "found".
This is my code, which doesn't work properly:
$f = 'C:\users\datboi\desktop\dump.dmp'
$text = 'found'
$array = "_command",".command","-
command","!command","+command","^command",":command","]command","[command","#command","*command","$command","&command","#command","%command","=command","/command","\command","command!","command#","command#","command$","command%","command^","command&","command*","command-","command+","command=","command\","command/","command_","command.","command:"
$len = 9
$offset = 8
$data = [IO.File]::ReadAllBytes($f)
for ($i=0; $i -lt $data.Count - $offset; $i++) {
    $slice = $data[$i..($i+$offset)]
    $sloc = [char[]]$slice
    if ($array.Contains($sloc)){
        $text > 'command.log'
        break
    }
}
When I say it doesn't work properly, I mean: it runs with no errors, but even if the file contains at least one of the strings from the array, it doesn't create the file I want.
This is literally what the Select-String cmdlet was created for. You can use a Regular Expression to simplify your search. For the RegEx I would use:
[_\.\-!\+\^:\[\]\#\*\$&#%=/\\]command|command[_\.\-!\+\^:\#\*\$&#%=/\\]
That comes down to any of the characters in the [] brackets followed by the word 'command', or the word 'command' followed by any of the characters in the [] brackets. Then just pipe that to a ForEach-Object loop that outputs to your file and breaks.
Select-String -Path $f -Pattern '[_\.\-!\+\^:\[\]\#\*\$&#%=/\\]command|command[_\.\-!\+\^:\#\*\$&#%=/\\]' | ForEach{
    $text > 'command.log'
    break
}
First, I would recommend using a regular expression as you can greatly shorten your code.
Second, PowerShell is good at pattern matching.
Example:
$symbolList = '_\-:!\.\[\]#\*\/\\&#%\^\+=\$'
$pattern = '([{0}]command)|(command[{0}])' -f $symbolList
$found = Select-String $pattern "inputfile.txt" -Quiet
$found
The $symbolList variable is a regular expression pattern containing a list of characters you want to find either before or after the word "command" in your search string.
The $pattern variable uses $symbolList to create the pattern.
The $found variable will be $true if the pattern is found in the file.
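To tie this back to the original goal of creating a file containing the text "found" as soon as any one of the strings matches, a minimal follow-up might be (the command.log file name is borrowed from the question):
if ($found) {
    'found' > 'command.log'
}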
I am currently working on converting AS3 classes to JavaScript using a PowerShell script.
Below is a sample of the code that needs to be converted.
package somePackageName
{
    class someClassName
    {
        // other codes
    }
}
I need the entire package block to be removed and "class someClassName{" should be converted to "function someClassName(){".
The "someClassName" can be any string.
And I need the output like this.
function someClassName()
{
}
This is what I tried.
$l1 = Get-Content $dest | Where-Object {$_ -like 'class'}
$arr = $l1 -split ' '
$n1 = "function "+ $arr[1] + "() " +$arr[2]
(Get-Content $dest) -creplace $l1, $n1 | Set-Content $dest
I am able to achieve what I intended if the opening brace is on the same line as the declaration. As PowerShell checks line by line, I am stuck if the opening brace is on the next line.
Regex-based solution
Depending on your willingness to post-process this or to accept leading spaces, you could use this regex to remove the block outside of the class and replace it with a function declaration. This is messier than it needs to be, but safer, since we cannot guess what // other codes is. You could just match the whole class block outright, but if there are other curly braces in there, it would muddy the regex.
PS M:\> (Get-Content -Raw $dest) -replace "(?sm).*?class (\w+)(.*)}",'function $1()$2'
function someClassName()
    {
        // other codes
    }
See Regex101 for more detail on what the regex is doing.
Basically: dump everything up to the first occurrence of the word class, then keep everything up to the last closing brace.
Note the leading space in the kept portion. This honors the existing spacing. To account for it we need to calculate the indentation; simply removing all leading space would break the existing indentation inside the class/function.
So a solution like this might be preferred:
# Read in the file as a single string
$raw = (Get-Content -Raw $dest)
# Using the line that has the class declaration measure the number of spaces in front of it.
[void]($raw -match "(?m)^(\s+)class")
$leadingSpacesToRemove = $Matches[1].Length
# Remove the package block. Also remove a certain amount of leading space.
$raw -replace "(?sm).*?class (\w+)(.*)}",'function $1()$2' -replace "(?m)^\s{$leadingSpacesToRemove}"
Less regex
It seems that filtering out the lines with no leading spaces is an easy way to narrow down to what you want.
Get-Content $dest | Where-Object{$_.StartsWith(" ")}
From there we still need to replace the "class" and deal with the leading spaces. For those we are going to use similar solutions to what I showed above.
# Read in the file as a single string. Skipping the package wrapper since it has no leading spaces.
$classBlock = Get-Content $dest | Where-Object{$_.StartsWith(" ")}
# Get the class name and the number of leading spaces.
$classBlock[0] -match "^(\s+)class (\w+)" | Out-Null
$leadingSpacesToRemove = $matches[1].Length
$className = $matches[2]
# Output the new declaration and the trimmed block.
# Using an array to start so that piping output will be in one pipe
#("function $className()") + ($classBlock | Select -Skip 1) -replace "^\s{$leadingSpacesToRemove}"
Both solutions try to account for your exact specifications and account for the presence of weird stuff inside the class block.
I'd suggest using regex:
#class myclass -> function myclass()
(Get-Content $dest) -creplace 'class\s(.+)', 'function $1()' |
Set-Content $dest
This will capture the class declaration and replace it with a backreference to the class name capture.
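As a quick standalone check of just that replacement (my own example):
'class someClassName' -creplace 'class\s(.+)', 'function $1()'
# function someClassName()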
Does anybody know how to determine the location of a file that's in one of the folders specified by the PATH environment variable, other than doing a dir filename.exe /s from the root folder?
I know this is stretching the bounds of a programming question, but it's useful for deployment-related issues; I also need to examine the dependencies of an executable. :-)
You can use the where.exe utility in the C:\Windows\System32 directory.
For Windows NT-based systems:
for %i in (file) do @echo %~dp$PATH:i
Replace file with the name of the file you're looking for.
If you want to locate the file at the API level, you can use PathFindOnPath. It has the added bonus of being able to specify additional directories, in case you want to search in additional locations apart from just the system or current user path.
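For illustration only, a rough PowerShell P/Invoke sketch of calling PathFindOnPath from shlwapi.dll might look like this; the Win32.NativeMethods type name, the sample file name, and the MAX_PATH-sized buffer are my own choices, not anything the API mandates:
Add-Type -Namespace Win32 -Name NativeMethods -MemberDefinition @'
[DllImport("shlwapi.dll", CharSet = CharSet.Unicode)]
public static extern bool PathFindOnPath(
    [In, Out] System.Text.StringBuilder pszFile,  // file name on input, full path on output
    [In] string[] ppszOtherDirs);                 // optional extra directories, or null
'@

# Search the standard locations, including the directories in PATH.
$buffer = [System.Text.StringBuilder]::new('notepad.exe', 260)  # 260 = MAX_PATH
if ([Win32.NativeMethods]::PathFindOnPath($buffer, $null)) {
    $buffer.ToString()  # full path if the file was found on the search path
}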
On Windows I'd say use %WINDIR%\system32\where.exe
Your question's title doesn't specify Windows, so I imagine some folks might find this question looking for the same thing with a POSIX OS in mind (like myself).
This PHP snippet might help them:
<?php
function Find( $file )
{
    foreach( explode( ':', $_ENV[ 'PATH' ] ) as $dir )
    {
        $command = sprintf( 'find -L %s -name "%s" -print', $dir, $file );
        $output = array();
        $result = -1;
        exec( $command, $output, $result );
        if ( count( $output ) == 1 )
        {
            return( $output[ 0 ] );
        }
    }
    return null;
}
?>
This is slightly altered production code I'm running on several servers (i.e., taken out of OO context, with some sanitization and error checking left out for brevity).
Using PowerShell on Windows...
Function Get-ENVPathFolders {
    #.Synopsis Split $env:Path into an array
    #.Notes
    # - Handle 1) folders ending in a backslash 2) double-quoted folders 3) folders with semicolons 4) folders with spaces 5) double-semicolons i.e. blanks
    # - Example path: 'C:\WINDOWS\;"C:\Path with semicolon; in the middle";"E:\Path with semicolon at the end;";;C:\Program Files;
    # - 2018/01/30 by Chad@ChadsTech.net - Created
    $NewPath = @()
    $env:Path.ToString().TrimEnd(';') -split '(?=["])' | ForEach-Object { # remove a trailing semicolon from the path, then split it into an array using a double-quote as the delimiter, keeping the delimiter
        If ($_ -eq '";') { # throw away a blank line
        } ElseIf ($_.ToString().StartsWith('";')) { # if the line starts with "; remove the "; and any trailing backslash
            $NewPath += ($_.ToString().TrimStart('";')).TrimEnd('\')
        } ElseIf ($_.ToString().StartsWith('"')) { # if the line starts with " remove the " and any trailing backslash
            $NewPath += ($_.ToString().TrimStart('"')).TrimEnd('\') #$_ + '"'
        } Else { # split by semicolon and remove any trailing backslash
            $_.ToString().Split(';') | ForEach-Object { If ($_.Length -gt 0) { $NewPath += $_.TrimEnd('\') } }
        }
    }
    Return $NewPath
}
$myFile = 'desktop.ini'
Get-ENVPathFolders | ForEach-Object { If (Test-Path -Path $_\$myFile) { Write-Output "Found [$_\$myFile]" } }
$myFile = 'desktop.ini'
Get-ENVPathFolders | ForEach-Object { If (Test-Path -Path $_\$myFile) { Write-Output "Found [$_\$myFile]" } }
I also blogged the answer with some details over at http://blogs.catapultsystems.com/chsimmons/archive/2018/01/30/parse-envpath-with-powershell
In addition to the 'where' (MS Windows) and 'which' (unix/linux) utilities, I have written my own utility which I call 'findinpath'. In addition to finding the executable that would be executed if handed to the command line interpreter (CLI), it finds all matches, returned in path-search order, so you can find path-order problems. In addition, my utility returns not just executables, but any file-specification match, to catch those times when a desired file isn't actually executable.
I also added a feature that has turned out to be very nifty: the -s flag tells it to search not just the system path, but everything on the system disk, with known user directories excluded. I have found this feature to be incredibly useful in systems administration tasks...
Here's the 'usage' output:
usage: findinpath [ -p <path> | -path <path> ] | [ -s | -system ] <file>
or findinpath [ -h | -help ]
where: <file> may be any file spec, including wild cards
-h or -help returns this text
-p or -path uses the specified path instead of the PATH environment variable.
-s or -system searches the system disk, skipping /d /l/ /nfs and /users
Writing such a utility is not hard and I'll leave it as an exercise for the reader. Or, if asked here, I'll post my script - it's in 'bash'.
Just for kicks, here's a one-liner PowerShell implementation:
function PSwhere($file) { $env:Path.Split(";") | ? { test-path $_\$file* } }
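Usage (my own note, not from the original answer): the function returns the matching folder(s) from $env:Path rather than the full file path, e.g.:
PSwhere notepad.exe   # typically outputs C:\WINDOWS\system32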