Output Filenames in a Folder to a Text File - Windows

Using Windows Command Prompt or Windows PowerShell, how can I output all the file names in a single directory to a text file, without the file extension?
In Command Prompt, I was using:
dir /b > files.txt
Result
01 - Prologue.mp3
02 - Title.mp3
03 - End.mp3
files.txt
Desired Output
01 - Prologue
02 - Title
03 - End
Notice the "dir /b > files.txt" command includes the file extension and puts the filename at the bottom.
Without using a batch file, is there a clean Command Prompt or PowerShell command that can do what I'm looking for?

In PowerShell:
# Get-ChildItem (gci) is PowerShell's dir equivalent.
# -File limits the output to files.
# .BaseName extracts the file names without extension.
(Get-ChildItem -File).BaseName | Out-File files.txt
Note: You can use dir in PowerShell too, where it is simply an alias of Get-ChildItem. However, to avoid confusion with cmd.exe's internal dir command, which has fundamentally different syntax, it's better to use the PowerShell-native alias, gci. To see all aliases defined for Get-ChildItem, run Get-Alias -Definition Get-ChildItem
Note that use of PowerShell's > redirection operator - which is effectively an alias of the Out-File cmdlet - would also result in the undesired inclusion of the output file, files.txt, in the enumeration, as in cmd.exe and POSIX-like shells such as bash, because the target file is created first.
By contrast, use of a pipeline with Out-File (or Set-Content, for text input) delays file creation until the cmdlet in this separate pipeline segment is initialized[1] - and because the file enumeration in the first segment has by definition already completed by that point, due to the Get-ChildItem call being enclosed in (...), the output file is not included in the enumeration.
Also note that property access .BaseName was applied to all files returned by (Get-ChildItem ...), which conveniently resulted in an array of the individual files' property values being returned, thanks to a feature called member-access enumeration.
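For comparison, the same result can be produced without member-access enumeration, by piping the file objects and reading .BaseName per object - a sketch that keeps the (...) so that, per the explanation above, the output file still isn't picked up:
# Explicit per-object property access; functionally equivalent to
# (Get-ChildItem -File).BaseName, just more verbose.
(Get-ChildItem -File) | ForEach-Object { $_.BaseName } | Out-File files.txt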
Character-encoding note:
In Windows PowerShell, Out-File / > creates "Unicode" (UTF-16LE) files, whereas Set-Content uses the system's legacy ANSI code page.
In PowerShell (Core) 7+, BOM-less UTF-8 is the consistent default.
The -Encoding parameter can be used to control the encoding explicitly.
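For example, to force UTF-8 explicitly - a minimal sketch building on the command above:
# Windows PowerShell writes UTF-8 with a BOM here; PowerShell 7+ writes it
# without a BOM unless you request -Encoding utf8BOM.
(Get-ChildItem -File).BaseName | Out-File files.txt -Encoding utf8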
[1] In the case of Set-Content, it is actually delayed even further, namely until the first input object is received, but that is an implementation detail that shouldn't be relied on.

Related

Best method to automate across platforms

On a daily basis I edit G-code to be run on a CO2 laser, and I must remove the same string from every file before running it. Currently I open each file in Notepad, press Ctrl+H, and replace it that way. I am looking for a more efficient way to do it.
I am given a packet with a run number (ie 123456), and that run number corresponds to a directory on the network. In that directory are a number of .txt or .nc files that require the string to be removed.
Each run directory is contained in a file structure that looks like this
"\\server\x\companyname\123456"
In \companyname\ there could be hundreds of run directories, all 6 digits - these are the run numbers and are different for each case. I would like to create a program that prompts me for the run number, finds that directory under \x (or rather the complete path, since \companyname changes based on the run number), and then replaces the string in each file within the \123456 directory. So if the string is Y11 and I was given the run number 000001, I would be prompted for the run number, it would resolve the path to \\server\x\walmart\000001\, search each file in this directory for the string Y11, and delete it.
I tried using PowerShell and was able to find the run folder by prompting for input and using Get-ChildItem with -Recurse, passing the prompted variable to -Include; however, the system doesn't allow scripts!
I can't run scripts because it's a work computer. It says the execution of scripts is disabled on this system. Basically, Get-ExecutionPolicy is set to Restricted and I don't have a cert for this.
This is the code I have so far:
$Run = Read-Host prompt "What is the run number?"
$Files = Get-ChildItem -Path "\\\server\x\" -Filter "$Run" -recurse
foreach ($file in $files)
(Get-object $file.PSPath) |
Foreach-Object {$_ -replace "G33", ""} |
Set-Content $file.PSPath
The last portion starting with foreach is something I copied and pasted from a previous example; I have very little experience with PowerShell. The run number is located in a directory two levels below x\. When the user is prompted for the run number, I want it to search all of x, locate the path including the directory that contains the run number (walmart in the example above), and then get all the files in the run-number folder. Those files would have a path like
\\server\x\walmart\000001\118000001_01.NC
There can be many files in this run folder, and they all need to have the string G33 removed from them.
I am new to this; can someone please explain step by step using PowerShell, Visual Basic, or any method you think would be the easiest? Even Java! I am hoping to be able to use this across PCs with different operating systems, mostly Windows. Thanks!
One possible solution is having an administrator for that server run PowerShell's Set-ExecutionPolicy RemoteSigned cmdlet, so that local PowerShell scripts can be executed. Note that on 64-bit Windows, this might need to be done from both a 64-bit PowerShell prompt and a 32-bit PowerShell prompt, to ensure local PowerShell scripts can be executed from either PowerShell.
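The command itself might look like this - a sketch; whether it takes effect depends on any Group Policy in place:
# Run from an elevated PowerShell prompt; RemoteSigned allows local scripts
# while still requiring a signature for downloaded ones.
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
# Without admin rights, a per-user setting sometimes suffices:
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser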
Another approach is to use sed (short for Stream Editor), a classic Unix utility that is available on Linux, macOS and Windows. For the latter, search Google for UnixUtils on SourceForge, which includes sed and several other classic Unix utilities built for Windows. Sed can do global search-and-replace within text files. It is non-interactive and meant to be executed from a script (or Batch File).
A VBScript can also read each text file and do search-and-replace. You can open the text file and using a loop read each line. If a line contains the search string, replace that text (optionally with nothing) and then write it; otherwise write it unmodified. You write to a temp file in the same path as the file you're reading, and when done you delete the original and rename the new temp file to the original file name. You have to do error checking all along the way.
As for locating the sub-folders by Run Number, you can probably do that with a batch file (or bash script on Linux and macOS). Something like this:
@echo off
:get-directory-list
dir \\server\x /S /B /AD /OGNE > "C:\Users\Me\Desktop\ListOfDirectories.txt"
:prompt-runnumber
SET RUNNUMBER=
SET /P RUNNUMBER= Enter the 6-digit run number:
if not defined RUNNUMBER echo invalid input & goto prompt-runnumber
if /i "%RUNNUMBER%" EQU "q" goto :EOF
:get-dir-for-run-number
type "C:\Users\Me\Desktop\ListOfDirectories.txt" | find "%RUNNUMBER%" > nul
if ERRORLEVEL 1 echo Run Number not found & goto prompt-runnumber
type "C:\Users\Me\Desktop\ListOfDirectories.txt" | find "%RUNNUMBER%" > %TEMP%\%~n0.tmp"
set /P RUNFOLDER= < "%TEMP%\%~n0.tmp"
echo Run Folder full path is %RUNFOLDER%
:process-files-in-folder
for %%i in (%RUNFOLDER%\*.*) do (
echo Processing folder %RUNFOLDER%, file %%i
rem ... run sed, VBscript, etc. to edit file
)
You might have to edit the above batch file to use a mapped drive letter. For bash, you'd have to first mount the network share and refer to the mount point "directory".
bash offers corresponding commands to list directories, sort them, search for text strings (using grep rather than find), prompting for input, reading input from text files, etc. Both Windows Batch and bash support sub-routines (batch files can call themselves and pass a label as a parameter) if needed.
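If script execution does get enabled (or if the commands are simply pasted into an interactive PowerShell session), a rough PowerShell sketch of the whole workflow could look like the following; the share path, 6-digit run number, and the G33 string come from the question, everything else is an assumption:
# Prompt for a run number, locate its folder two levels below \\server\x,
# then strip the string "G33" from every .nc/.txt file in that folder.
$run = Read-Host 'What is the run number?'
$runDir = Get-ChildItem -Path '\\server\x' -Directory -Recurse |
          Where-Object Name -eq $run |
          Select-Object -First 1
if (-not $runDir) { throw "Run $run not found." }
Get-ChildItem -Path $runDir.FullName -File -Include *.nc, *.txt -Recurse |
  ForEach-Object {
    # Read the whole file, remove the unwanted string, write it back.
    (Get-Content -LiteralPath $_.FullName) -replace 'G33', '' |
      Set-Content -LiteralPath $_.FullName
  }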

How can I get PowerShell current location every time I open terminal from file explorer

I can open a PowerShell window in any directory using Windows File Explorer.
I want to run a script every time a new PowerShell window is opened, and use the directory from which it was opened in that script.
Using $profile lets me run a script automatically, but the $pwd variable does not contain the directory used to open the PowerShell window; it contains C:\WINDOWS\system32. I understand that PowerShell starts in C:\WINDOWS\system32, runs $profile, and only then changes to the location chosen in File Explorer. How can I get File Explorer's current directory when my script executes from $profile, or is there another way to automatically execute my script after a PowerShell window is opened?
Note: The answer below provides a solution based on the preinstalled File Explorer shortcut-menu commands for Windows PowerShell.
If modifying these commands - which requires taking ownership of the registry keys with administrative privileges - or creating custom commands is an option, you can remove the NoWorkingDirectory value from the following registry keys (or custom copies thereof):
HKEY_CLASSES_ROOT\Directory\shell\Powershell
HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell
Doing so will make the originating folder the working directory before PowerShell is invoked, so that $PROFILE already sees that working directory, as also happens when you submit powershell.exe via File Explorer's address bar.[1]
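For reference, removing the value could be scripted along the following lines - a sketch that assumes you have already taken ownership of the keys and are running from an elevated PowerShell session:
# Remove the NoWorkingDirectory value from both shortcut-menu keys.
'Registry::HKEY_CLASSES_ROOT\Directory\shell\Powershell',
'Registry::HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell' |
  ForEach-Object {
    Remove-ItemProperty -Path $_ -Name NoWorkingDirectory -ErrorAction SilentlyContinue
  }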
Shadowfax provides an important pointer in a comment on the question:
When you hold down Shift and then invoke the Open PowerShell window here shortcut-menu command on a folder or in the window background in File Explorer, powershell.exe is initially started with C:\Windows\System32 as the working directory[1], but is then instructed to change to the originating folder with a Set-Location command passed as a parameter; e.g., a specific command may look like this:
"PowerShell.exe" -noexit -command Set-Location -literalPath 'C:\Users\jdoe'
As an aside: The way this shortcut-menu command is defined is flawed, because it won't work with folder paths that happen to contain ' chars.
At the time of loading $PROFILE, C:\Windows\System32 is still in effect, because any command passed to -command isn't processed until after the profiles have been loaded.
If you do need to know in $PROFILE what the working directory will be once the session is open, use the following workaround:
$workingDir = [Environment]::GetCommandLineArgs()[-1] -replace "'"
[Environment]::GetCommandLineArgs() returns the invoking command line as an array of arguments (tokens), so [-1] returns the last argument, assumed to be the working-directory path; -replace "'" removes the enclosing '...' from the result.
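In $PROFILE you might then use the derived path like this - a sketch that guards against invocations where the last argument isn't a folder path:
# Only switch location if the last command-line argument is an existing folder.
$workingDir = [Environment]::GetCommandLineArgs()[-1] -replace "'"
if ($workingDir -and (Test-Path -LiteralPath $workingDir -PathType Container)) {
  Set-Location -LiteralPath $workingDir
}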
However, so as to make your $PROFILE file detect the (ultimately) effective working directory (location) irrespective of how PowerShell was invoked, more work is needed.
The following is a reasonably robust approach, but note that a fully robust solution would be much more complex:
# See if Set-Location was passed and extract the
# -LiteralPath or (possibly implied) -Path argument.
$workingDir = if ([Environment]::CommandLine -match '\b(set-location|cd|chdir|sl)\s+(-(literalpath|lp|path|PSPath)\s+)?(?<path>(?:\\").+?(?:\\")|"""[^"]+|''[^'']+|[^ ]+)') {
$Matches.path -replace '^(\\"|"""|'')' -replace '\\"$'
} else { # No Set-Location command passed, use the current dir.
$PWD.ProviderPath
}
The complexity of the solution comes from a number of factors:
Set-Location has multiple aliases.
The path may be passed positionally, with -Path or with -LiteralPath or its alias -PSPath.
Different quoting styles may be used (\"...\", """...""", '...'), or the path may be unquoted.
The command may still fail:
If the startup command uses prefix abbreviations of parameter names, such as -lit for -LiteralPath.
If a named parameter other than the path follows set-location (e.g., -PassThru).
If the string set-location is embedded in what PowerShell ultimately parses as a string literal rather than a command.
If the startup command is passed as a Base64-encoded string via -EncodedCommand.
[1] When you type powershell.exe into File Explorer's address bar instead, the currently open folder is made the working directory before PowerShell is started, and no startup command to change the working directory is passed; in that case, $PROFILE already sees the (ultimately) effective working directory.
1. Open the registry (run regedit).
2. Navigate to HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell\command (not HKEY_CLASSES_ROOT\Directory\Background\shell\cmd\command).
3. The default value should be powershell.exe -noexit -command Set-Location -literalPath "%V"
4. You can change some of the parameters.
P.S.: If you change the command to cmd.exe /s /k pushd "%V", Shift + right-click in File Explorer will open cmd instead of PowerShell.
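If you prefer to script that change rather than edit it by hand, something along these lines might work - a sketch; elevation and taking ownership of the key are required, as noted in the answer above:
# Point the background shortcut-menu command at cmd.exe instead of PowerShell.
Set-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell\command' `
                 -Name '(default)' -Value 'cmd.exe /s /k pushd "%V"'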

Windows Batch File, Route Output of EXE called in Batch File

In my batch file, I call a EXE and would like the output to be redirected to a file. In the PowerShell command line, it would look something like this:
prog.exe file.txt | Out-File results\results.txt -Encoding ascii
The above works in the command line. In my batch file, I have written it as this:
prog.exe file.txt | powershell -Command "Out-File results\file.txt -Encoding ascii"
When I run the batch file, the results file gets created but contains zero content. How can I write this to behave like I need it to?
The following should work in a batch file:
prog.exe file.txt > results\results.txt
If you want to redirect both stdout and stderr use:
prog.exe file.txt > results\results.txt 2>&1
kichik's helpful answer shows you an effective solution using batch-file features alone.
Unless you have a need to create files with an encoding other than ASCII or the active OEM code page, there's no need to get PowerShell involved - it'll only slow things down.
That said, you can choose a different code page via chcp in cmd.exe, though for output to a file only 65001 (UTF-8) really makes sense; note that the resulting file will have no BOM - unlike when you use Out-File -Encoding utf8 in Windows PowerShell.
If you do need to use PowerShell - e.g., to create UTF-16LE ("Unicode") files or UTF-8 files with BOM - you'll have to use $Input with a PowerShell-internal pipe in your PowerShell command in order to access the stdin stream (i.e., what was piped in):
prog.exe file.txt | powershell -c "$Input | Out-File results\file.txt -Encoding ascii"
Note that only characters representable in the active code page (as reflected in chcp) will be recognized by PowerShell and can be translated into potentially different encodings.
Choosing -Encoding ascii would actually transliterate characters outside the (7-bit) ASCII range to literal ? characters, which would result in loss of information.
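To avoid that loss of information and produce a UTF-8 file with a BOM instead (still subject to the code-page caveat above), a variant of the same pattern - a sketch:
prog.exe file.txt | powershell -c "$Input | Out-File results\file.txt -Encoding utf8"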

Are there any fundamental incompatibilities when using a CMD script in a console using PowerShell?

I have an extensive set of CMD scripts for an automation suite.
In a console using CMD.exe, everything works fine. After installing the Windows Creators Update, where PowerShell becomes the new default Windows shell via Explorer's menu options, my scripts break at random. I can't provide any meaningful code for repro for two main reasons:
No obvious error is occurring; my automated scripts just hang, eventually
The halt isn't even occurring in the same place each time
What I can tell you is that the suite heavily relies on exit codes, use of findstr.exe, and type.
I know things like Windows macros, e.g., %Var% are not compatible, but I was assuming that since the only call I did was to a .bat file, .bat behavior would be the only thing I would need to worry about.
If that's not the case, should my initial .bat be triggering the direct execution of a CMD.exe instance with my parameters? If so, what's the best way to do that, PowerShell-friendly?
eryksun's comments on the question are all worth heeding.
This section of the answer provides a generic answer to the generic question in the question's title. See the next section for considerations specific to the OP's scenario.
Generally speaking, there are only a few things to watch out for when invoking a batch file from PowerShell:
Always include the specific filename extension (.bat or .cmd) in the filename, e.g., script_name.bat
This ensures that no other forms of the same command (named script_name, in the example) with higher precedence are accidentally executed, which could be:
In case of a command name without a path component:
An alias, function, cmdlet, or an external executable / PowerShell script (*.ps1) that happens to be located in a directory listed earlier in the $env:PATH (%PATH%) variable; if multiple executables by the same name are in the same (earliest) directory, the next point applies too.
In case of a command name with a path component:
A PowerShell script (*.ps1) or executable with the same filename root whose extension comes before .bat or .cmd in the %PATHEXT% environment variable.
If the batch file is located in the current directory, you must prefix its filename with ./
By design, as a security measure, PowerShell - unlike cmd.exe - does NOT invoke executables located in the current directory by filename only, so invoking script_name.bat to invoke a batch file of that name in the current directory does not work.[1]
Instead, you must use a path to target such an executable so as to explicitly signal the intent to execute something located in the current directory, and the simplest approach is to use prefix ./ (.\, if running on Windows only); e.g., ./script_name.bat.
When passing parameters to the batch file:
Either: be aware of PowerShell's parsing rules, which are applied before the arguments are passed to the batch file - see this answer of mine.
Or: use --% (the PSv3+ stop-parsing symbol) to pass the remaining arguments as if they'd been passed from a batch file (no interpretation by PowerShell other than expansion of %<var>%-style environment-variable references); a short example follows the footnote below.
[1] eryksun points out that on Vista+ you can make cmd behave like PowerShell by defining environment variable NoDefaultCurrentDirectoryInExePath (its specific value doesn't matter).
While ill-advised, you can still force both environments to always find executables in the current directory by explicitly adding . to the %PATH% / $env:PATH variable; if you prepend ., you get the default cmd behavior.
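To illustrate the --% stop-parsing approach mentioned above, a hypothetical invocation (script_name.bat and its arguments are placeholders):
# Everything after --% is passed verbatim to the batch file, except that
# %VAR%-style environment-variable references are still expanded.
.\script_name.bat --% /copy "C:\some path" %USERPROFILE%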
As for your specific scenario:
After installing the Windows Creators Update, where PowerShell becomes the new default Windows shell via Explorer's menu options
This applies to the following scenarios:
Pressing Win-X (system-wide keyboard shortcut) now offers PowerShell rather than cmd in the shortcut menu that pops up.
Using File Explorer's File menu now shows Open Windows PowerShell in place of Open command prompt (cmd).
However, nothing has changed with respect to how batch files are invoked when they are opened / double-clicked from File Explorer: The subkeys of HKEY_CLASSES_ROOT\batchfile and HKEY_CLASSES_ROOT\cmdfile in the registry still define the shell\open verb as "%1" %*, which should invoke a batch file implicitly with cmd /c, as before.
However, per your comments, your batch file is never run directly from File Explorer, because it requires parameter values, and it is invoked in two fundamental ways:
Explicitly via a cmd console, after entering cmd in the Run dialog that is presented after pressing Win-R (system-wide keyboard shortcut).
In this case, everything should work as before: you're invoking your batch file from cmd.
Explicitly via PowerShell, using File Explorer's File menu.
Per your comments, the PowerShell console may either be opened:
directly in the directory in which the target batch file resides.
in an ancestral directory, such as the root of a thumb drive on which the batch file resides.
In both cases, PowerShell's potential interpretation of arguments does come into play.
Additionally, in the 2nd case (ancestral directory), the invocation will only work the same if the batch file either does not depend on the current directory or explicitly sets the current directory (such as setting it to its own location with cd /d "%~dp0").
This is a workaround rather than a direct answer for anyone encountering the question's specific behavior. I've verified that all my halting scripts stopped halting after implementing a shim-like workaround.
As eryksun said, there doesn't appear to be a reason why using a shim would be required. The goal is then to explicitly launch the script in CMD when using PowerShell, which aligns with Jeff Zeitlin's original suggestion in the question's comments.
So, let's say you're in my shoes with your own script_name.bat.
script_name.bat was your old script that initializes and kicks off everything. We can make sure that whatever was in script_name.bat is correctly run via CMD instead of PowerShell by doing the following:
Rename script_name.bat to script_name_shim.bat
Create a new script_name.bat in the same directory
Set its contents to:
@echo off
CMD.exe /C "%~dp0script_name_shim.bat" %*
exit /b %errorlevel%
That will launch your script with CMD.exe regardless of the fact that you started in PowerShell, and it will also use all your command-line arguments too.
This looks like a chicken-and-egg problem; without knowing the code, it's difficult to tell where the problem is.
There are a ton of ways to start batch files with cmd.exe, even in the Windows 10 Creators Update.
Aliases are only a problem when working interactively in the PowerShell console and expecting the behavior to be as it used to be in cmd.exe.
The aliases also depend on the loaded/imported modules and profiles.
This small PowerShell script gets all command names listed by help.exe and runs Get-Command on each of them.
Internal commands without counterparts in PowerShell are filtered out by -ErrorAction SilentlyContinue.
Applications (*.exe files) are assumed to behave identically and are removed by the Where-Object clause.
help.exe |
  Select-String '^[A-Z][^ ]+' |
  ForEach-Object {
    Get-Command $_.Matches.Value -ErrorAction SilentlyContinue
  } |
  Where-Object CommandType -ne 'Application' |
  Select-Object * |
  Format-Table -AutoSize CommandType, Name, DisplayName, ResolvedCommand, Module
Sample output on my system; all of these items will likely work differently in PowerShell:
CommandType Name   DisplayName            ResolvedCommand Module
----------- ----   -----------            --------------- ------
Alias       call   call -> Invoke-Method  Invoke-Method   pscx
Alias       cd     cd -> Set-LocationEx   Set-LocationEx  Pscx.CD
Alias       chdir  chdir -> Set-Location  Set-Location
Alias       cls    cls -> Clear-Host      Clear-Host
Alias       copy   copy -> Copy-Item      Copy-Item
Alias       del    del -> Remove-Item     Remove-Item
Alias       dir    dir -> Get-ChildItem   Get-ChildItem
Alias       echo   echo -> Write-Output   Write-Output
Alias       erase  erase -> Remove-Item   Remove-Item
Alias       fc     fc -> Format-Custom    Format-Custom
Function    help                                          pscx
Alias       md     md -> mkdir            mkdir
Function    mkdir
Function    more
Alias       move   move -> Move-Item      Move-Item
Function    Pause
Alias       popd   popd -> Pop-Location   Pop-Location
Function    prompt
Alias       pushd  pushd -> Push-Location Push-Location
Alias       rd     rd -> Remove-Item      Remove-Item
Alias       ren    ren -> Rename-Item     Rename-Item
Alias       rmdir  rmdir -> Remove-Item   Remove-Item
Alias       set    set -> Set-Variable    Set-Variable
Alias       sc     sc -> Set-Content      Set-Content
Alias       sort   sort -> Sort-Object    Sort-Object
Alias       start  start -> Start-Process Start-Process
Alias       type   type -> Get-Content    Get-Content

How to use cmd type pipe (/piping) in PowerShell?

In cmd (and bash), pipe "|" pushes output to another command in the original format of the first command's output (as string).
In PowerShell, everything that comes out the pipe is an object (even a string is a string object).
Because of that, some commands fail when run in a PowerShell command window as opposed to a Windows command window.
Example:
dir c:\windows | gzip > test.gz
When this command is run in the Windows command prompt window it works properly - directory listing of C:\windows gets compressed into test.gz file.
The same command in PowerShell fails, because PowerShell does not use cmd-style pipe and replaces it with PowerShell pipe (working with array of file system items).
Q. How do you disable the default piping behavior in PowerShell to make traditional Windows commands work identically in PowerShell?
I tried using the escape character "`" before the pipe "`|", but it didn't work. I also tried invoke-expression -command "command with | here", but it also failed.
If you want to send strings down the pipeline, you can use the Out-String cmdlet.
For example:
Get-Process | Out-String
If you are specifically looking for a PowerShell way to zip up files, check out the PowerShell Community Extensions (PSCX). There are a bunch of cmdlets to zip and unzip all kinds of files.
http://pscx.codeplex.com
If you can pipe the output of (CMD) dir into gzip, then gzip apparently knows how to parse dir output. The (string) output from the PowerShell dir command (aka Get-ChildItem) doesn't look the same, so gzip likely would not be able to parse it. But, I'd also guess that gzip would be happy to take a list of paths, so this would probably work:
dir c:\windows | select -ExpandProperty FullName | gzip > test.gz
No warranties express or implied.
If you really need to use the old school DOS pipe system in PowerShell, it can be done by running a command in a separate, temporary DOS session:
& cmd /c "dir c:\windows | gzip > test.gz"
The /c switch tells cmd to run the command then exit. Of course, this only works if all the commands are old school DOS - you can't mix-n-match them with PowerShell commands.
While there are PowerShell alternatives to the example given in the question, there are lots of DOS programs that use the old pipe system and will not work in PowerShell. svnadmin load is one that I've had the pleasure of having to deal with.
You can't. PowerShell was designed to pass objects down a pipeline, not text. There isn't a backward-compatibility mode for DOS.

Resources