How to check for log4j vulnerability on Windows (Surface Go)

In this post:
https://gist.github.com/Neo23x0/e4c8b03ff8cdf1fa63b7d15db6e3860b
the following PowerShell command is claimed to check for log4j vulnerabilities:
gci 'C:\' -rec -force -include *.jar -ea 0 | foreach {select-string "JndiLookup.class" $_} | select -exp Path
That results in access denied errors.
Further checking reveals that one has to run PowerShell as Administrator.
https://learn.microsoft.com/en-us/answers/questions/32390/access-is-denied-while-i-am-running-a-command-in-p.html
Doing that still gives access denied errors.
Further on in the above article it says to first issue:
Set-ExecutionPolicy AllSigned
That still gives Access Denied.
What is further required to get this to execute?

tl;dr
Use the following, streamlined version of the command, which should also perform much better.
# Run WITH ELEVATION (as admin):
gci C:\ -rec -file -force -filter *.jar -ev errs 2>$null | # Use -filter, not -include
select-string "JndiLookup.class" | # Pipe directly to select-string
select -exp Path
Note: -ea 0 - short for: -ErrorAction SilentlyContinue - should normally silence any error messages, but if that doesn't work for you for some reason, 2>$null should be effective.
-ev errs - short for: -ErrorVariable errs - collects all errors that occur in variable $errs, which you can examine after the fact to determine whether the errors are an indication of an actual permission problem.
Errors are expected in Windows PowerShell, even when running with elevation, namely relating to hidden system junctions, discussed below. However, you can ignore these errors.
In PowerShell (Core) 7+, where these errors no longer occur, you could omit -ev errs 2>$null above. Any errors that surface then would be indicative of a true permission problem.
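If you do want to inspect the collected errors afterwards, a minimal sketch (assuming the command above was run with -ev errs):
$errs | ForEach-Object TargetObject | Sort-Object -Unique
This lists the unique paths that triggered errors; in Windows PowerShell, expect to see only the hidden system junctions discussed below, whereas anything else may point to a genuine permission problem.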
Background information:
In general, even running with elevation (as admin) doesn't guarantee that all directories can be accessed. File-system ACLs at the directory level can prevent even elevated processes from enumerating the directory's files and subdirectories.
Notably, there are several hidden system junctions (links to other directories), defined for pre-Vista backward-compatibility only - such as C:\Documents and Settings and C:\Users\<username>\My Documents - that even elevated processes aren't permitted to enumerate the children of.
During file-system enumeration, this fact only becomes apparent in Windows PowerShell, which reports access-denied errors for these junctions. PowerShell (Core) 7+, by contrast, quietly skips them.
Even in Windows PowerShell the problem is only cosmetic, because these junctions merely point to directories that can be enumerated with elevation and are therefore covered by a -Recurse enumeration of the entire drive.
To find all these hidden system junctions:
# Run WITH ELEVATION (as admin):
cmd /c dir c:\ /s /b /ashdl
Additional information is in this answer.

Related

how to do taskkill for all programs in the folder using cmd / powershell

I want to do Taskkill for all programs ending with exe in a folder with cmd / powershell
Example
taskkill /f /im C:\folder\*.exe
In PowerShell, something like this:
$files = gci "C:\Path\To\Files" -Filter "*.exe"
foreach ($file in $files) {
    Get-Process |
        Where-Object { $_.Path -eq $file.FullName } |
        Stop-Process -WhatIf
}
Remove the -WhatIf when you're confident that the correct processes would be stopped.
I would advise that you try a different built-in command utility, WMIC.exe:
%SystemRoot%\System32\wbem\WMIC.exe Process Where "ExecutablePath Like 'P:\\athTo\\Folder\\%'" Call Terminate 2>NUL
Just change P:\\athTo\\Folder as needed, remembering that each backslash requires doubling. You may have difficulties with other characters in your folder name, but those are outside the scope of my answer. To learn more, please read about the LIKE Operator.
Note: If you are running the command from a batch-file, as opposed to directly within cmd, then change the % character to %%
nimizen's helpful PowerShell answer is effective, but can be simplified:
Get-Process |
Where-Object { (Split-Path -Parent $_.Path) -eq 'C:\folder' } |
Stop-Process -WhatIf
Note: The -WhatIf common parameter in the command above previews the operation. Remove -WhatIf once you're sure the operation will do what you want.
Note:
The above only targets processes whose executables are located directly in C:\folder, by checking whether the (immediate) parent path (enclosing directory) is the one of interest, using Split-Path -Parent.
If you wanted to target executables located in C:\folder and any of its subfolders, recursively, use the following inside Where-Object's script block ({ ... }):
$_.Path -like 'C:\folder\*'
Unlike Compo's helpful wmic.exe-based answer[1], this answer's solution also works on Unix-like platforms (using PowerShell (Core) 7+).
[1] Technically, wmic.exe is deprecated, as evidenced by wmic /? printing WMIC is deprecated in red as the first (nonempty) line. Consider using PowerShell's CIM cmdlets, such as Get-CimInstance, instead, which have the added advantage of returning objects rather than text, for robust subsequent processing.
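For illustration, a CIM-based equivalent of the wmic call might look like this (C:\folder is a placeholder path; note that, as with wmic, backslashes in a WQL LIKE filter must be doubled):
# Terminate all processes whose executable lives under C:\folder (placeholder path).
Get-CimInstance Win32_Process -Filter "ExecutablePath LIKE 'C:\\folder\\%'" |
    Invoke-CimMethod -MethodName Terminate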

powershell - robocopy/List option not kicking back folder access denies

Edit: Also, the reason I'm not using logging via Robocopy is that it doesn't seem to want to work with /MT: either. All I get is a start and finish status using robocopy logging.
Below is my current script for finding specific files on a network via robocopy. It works perfectly, especially with the multithreading. However, I am searching network shares, and occasionally, I come across a directory that I do not have access to. Robocopy seems to have some ability to kick back access denied messages, but I have not been able to get them to work with the /L option. /MT: also sometimes tends to do some weird things to the output. /V (verbose) has no effect on the output when combined with /L.
Without /MT: set, I am able to slowly get all directories and files listed. However, if I am denied access to a particular directory, the output will only show the parent directory where I am initially denied.
If I enable /MT: I am still able to find files and folders, however, the log then doesn't output the top parent directory in a path where I am denied access.
Robocopy is my only option due to long-path problems with DIR and Get-ChildItem. It has also so far worked the fastest of all the options, due to threading.
Permissions are not changeable, and I do NOT have backup administrator rights to override ACL denies.
If anyone knows how to capture the deny/failure logs, or has a clue, I'd be super happy. I know I have denies because the bottom of each log will list how many times it failed. But if it's listing how many times it failed, surely there's a way to write each time individually to a log?
To set up an artificial failure at home, I used a standard account to run ISE; using my admin account, I created a directory with some files and a subdirectory inside it, removed all permissions for everyone on the subdirectory and the files inside it, and changed the owner to SYSTEM. All the files showed up; however, the subdirectory only showed as the top folder, and it was unable to search any deeper.
$ErrorActionPreference = 'Continue'
Remove-Item C:\Users\$env:username\documents\log.txt
$measure = Measure-Command -Expression {
    $path = "C:\Users\$env:username\New folder"
    robocopy $path null /V /L /E /FP /XJD /XJF /R:1 /W:1 /MT:8 | ForEach-Object {
        [string] $out = $_
        if (!($out.Contains("%"))) {
            $out | Out-File "c:\users\$env:username\documents\log.txt" -Append
        }
    }
}
Write-Host $measure
/MT:x definitely messes with the output in a live copy.
I suspect the errors are going through the error stream (stderr), not standard output (stdout).
First I would add /np to your options. This way you don't have to deal with the "%" later.
Then try using redirection. I don't do it very often, but I think 2>&1 > will do it. It might look something like:
robocopy $path $FakeDest /V /L /E /FP /XJD /XJF /R:1 /W:1 /MT:8 2>&1 > "c:\users\$env:username\documents\log.txt"
Also, If you wrap this in start-process there are parameters to redirect both Standard and error streams. Again not tested but it might look something like:
Start-Process -FilePath C:\Windows\System32\Robocopy.exe -ArgumentList "$path $FakeDestination /V /L /E /FP /XJD /XJF /R:1 /W:1 /MT:8" -RedirectStandardError C:\temp\Errors.txt -RedirectStandardOutput c:\temp\Output.txt
Obviously change the file names...
If you really want to get crazy, there's a module for NTFS permissions. It has cmdlets like Get-ChildItem2; these are based on a 3rd-party API called AlphaFS (I think). It can access paths beyond the typical 260-character limit. I'm sure it will be slower, but if the first two approaches fail, it may be worth it. I don't know how they react to access denied, though...
I'm sorry I don't have time to test any of these approaches at the moment. However, do let me know if this helps at all. I'll be around...

How do I run multiple `cmd.exe /c` commands without getting an error?

I'm building a solution that needs to execute a dynamically built terminal command cross-platform. This is the command on macOS (actually a single line of text, changed for readability):
cat "/path/to/file1.txt" "/path/to/file2.txt" > "/path/to/output.txt" ;
rm "/path/to/file1.txt" ; rm "/path/to/file2.txt"
After a bit of research, the equivalent on Windows would be:
cmd.exe /c type "C:/path/to/file1.txt" "C:/path/to/file2.txt" > "C:/path/to/output.txt" ;
del "C:/path/to/file1.txt" ; del "C:/path/to/file2.txt"
Now, that seems to work when I enter it manually in PowerShell, but I get errors. Note that the concatenation and deletion of the files appears to work, but I'm getting the following error:
The system cannot find the file specified.
Error occurred while processing C:/path/to/file1.txt
The system cannot find the file specified.
Error occurred while processing C:/path/to/file2.txt
When the Windows version of the command is dynamically built and executed, the concatenation works but the file removal does not, and my guess is it's because of these errors.
What does the Windows version of this need to be in order to work exactly like the macOS version?
(In case you're wondering, this is within a FileMaker database that uses the BE_ExecuteSystemCommand function from the BaseElements plugin.)
Windows uses backslashes, not forward slashes, in path names. Some commands will allow you to use forward slashes instead, but del is not one of them:
C:\Users\UoW>del "c:/Users/UoW/test.dat"
The system cannot find the path specified.
C:\Users\UoW>del "c:\Users\UoW\test.dat"
C:\Users\UoW>
Rather than shelling out to cmd.exe, try staying in PowerShell:
Add-Content -Path "C:/path/to/output.txt" -Value (Get-Content "C:/path/to/file1.txt")
Add-Content -Path "C:/path/to/output.txt" -Value (Get-Content "C:/path/to/file2.txt")
Remove-Item -Path "C:/path/to/file1.txt"
Remove-Item -Path "C:/path/to/file2.txt"
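Those four commands could also be condensed, since Get-Content, Set-Content, and Remove-Item all accept multiple paths; a sketch using the same hypothetical paths:
# Concatenate both inputs into the output file (overwriting it, like > on macOS),
# then delete the inputs.
Get-Content "C:/path/to/file1.txt", "C:/path/to/file2.txt" |
    Set-Content "C:/path/to/output.txt"
Remove-Item "C:/path/to/file1.txt", "C:/path/to/file2.txt"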

Are there any fundamental incompatibilities when using a CMD script in a console using PowerShell?

I have an extensive set of CMD scripts for an automation suite.
In a console using CMD.exe, everything works fine. After installing the Windows Creators Update, where PowerShell becomes the new default Windows shell via Explorer's menu options, my scripts break at random. I can't provide any meaningful repro code, for two main reasons:
No obvious error is occurring; my automated scripts just hang, eventually
The halt isn't even occurring in the same place each time
What I can tell you is that the suite heavily relies on exit codes, use of findstr.exe, and type.
I know things like Windows macros, e.g., %Var% are not compatible, but I was assuming that since the only call I did was to a .bat file, .bat behavior would be the only thing I would need to worry about.
If that's not the case, should my initial .bat be triggering the direct execution of a CMD.exe instance with my parameters? If so, what's the best way to do that, PowerShell-friendly?
eryksun's comments on the question are all worth heeding.
This section of the answer provides a generic answer to the generic question in the question's title. See the next section for considerations specific to the OP's scenario.
Generally speaking, there are only a few things to watch out for when invoking a batch file from PowerShell:
Always include the specific filename extension (.bat or .cmd) in the filename, e.g., script_name.bat
This ensures that no other forms of the same command (named script_name, in the example) with higher precedence are accidentally executed, which could be:
In case of a command name without a path component:
An alias, function, cmdlet, or an external executable / PowerShell script (*.ps1) that happens to be located in a directory listed earlier in the $env:PATH (%PATH%) variable; if multiple executables by the same name are in the same (earliest) directory, the next point applies too.
In case of a command name with a path component:
A PowerShell script (*.ps1) or executable with the same filename root whose extension comes before .bat or .cmd in the %PATHEXT% environment variable.
If the batch file is located in the current directory, you must prefix its filename with ./
By design, as a security measure, PowerShell - unlike cmd.exe - does NOT invoke executables located in the current directory by filename only, so invoking script_name.bat to invoke a batch file of that name in the current directory does not work.[1]
Instead, you must use a path to target such an executable so as to explicitly signal the intent to execute something located in the current directory, and the simplest approach is to use prefix ./ (.\, if running on Windows only); e.g., ./script_name.bat.
When passing parameters to the batch file:
Either: be aware of PowerShell's parsing rules, which are applied before the arguments are passed to the batch file - see this answer of mine.
Or: use --% (the PSv3+ stop-parsing symbol) to pass the remaining arguments as if they'd been passed from a batch file (no interpretation by PowerShell other than expansion of %<var>%-style environment-variable references).
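A minimal sketch of the --% approach (the batch-file name and arguments are hypothetical):
# Everything after --% is passed as-is; the only processing that still
# occurs is expansion of %VAR%-style environment-variable references.
.\script_name.bat --% /mode:full "C:\Program Files\App" %USERNAME%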
[1] eryksun points out that on Vista+ you can make cmd behave like PowerShell by defining environment variable NoDefaultCurrentDirectoryInExePath (its specific value doesn't matter).
While ill-advised, you can still force both environments to always find executables in the current directory by explicitly adding . to the %PATH% / $env:PATH variable; if you prepend ., you get the default cmd behavior.
As for your specific scenario:
After installing the Windows Creators Update, where PowerShell becomes the new default Windows shell via Explorer's menu options
This applies to the following scenarios:
Pressing Win-X (system-wide keyboard shortcut) now offers PowerShell rather than cmd in the shortcut menu that pops up.
Using File Explorer's File menu now shows Open Windows PowerShell in place of Open command prompt (cmd).
However, nothing has changed with respect to how batch files are invoked when they are opened / double-clicked from File Explorer: The subkeys of HKEY_CLASSES_ROOT\batchfile and HKEY_CLASSES_ROOT\cmdfile in the registry still define the shell\open verb as "%1" %*, which should invoke a batch file implicitly with cmd /c, as before.
However, per your comments, your batch file is never run directly from File Explorer, because it requires parameter values, and it is invoked in two fundamental ways:
Explicitly via a cmd console, after entering cmd in the Run dialog that is presented after pressing Win-R (system-wide keyboard shortcut).
In this case, everything should work as before: you're invoking your batch file from cmd.
Explicitly via PowerShell, using File Explorer's File menu.
Per your comments, the PowerShell console may either be opened:
directly in the directory in which the target batch file resides.
in an ancestral directory, such as the root of a thumb drive on which the batch file resides.
In both cases, PowerShell's potential interpretation of arguments does come into play.
Additionally, in the 2nd case (ancestral directory), the invocation will only work the same if the batch file either does not depend on the current directory or explicitly sets the current directory (such as setting it to its own location with cd /d "%~dp0").
This is a workaround rather than a direct answer for anyone encountering the question's specific behavior. I've verified that all my halting scripts stopped halting after implementing a shim-like workaround.
As eryksun said, there doesn't appear to be a reason why using a shim would be required. The goal is then to explicitly launch the script in CMD when using PowerShell, which aligns with Jeff Zeitlin's original suggestion in the question's comments.
So, let's say you're in my shoes with your own script_name.bat.
script_name.bat was your old script that initializes and kicks off everything. We can make sure that whatever was in script_name.bat is correctly run via CMD instead of PowerShell by doing the following:
Rename script_name.bat to script_name_shim.bat
Create a new script_name.bat in the same directory
Set its contents to:
@echo off
CMD.exe /C "%~dp0script_name_shim.bat" %*
exit /b %errorlevel%
That will launch your script with CMD.exe regardless of the fact that you started in PowerShell, and it will also use all your command-line arguments too.
This looks like a chicken-and-egg problem; without knowing the code, it's difficult to tell where the problem is.
There are a ton of ways to start batch files with cmd.exe, even in the Windows 10 Creators Update.
Aliases are only a problem when working interactively with the PowerShell console and expecting behavior as it used to be in cmd.exe.
The aliases depend also on the loaded/imported modules and profiles.
This small PowerShell script gets all items from Help.exe and performs a Get-Command on each item.
Internal commands without counterparts in PowerShell are filtered out by -ErrorAction SilentlyContinue.
Applications (*.exe files) are assumed identical and removed by the Where-Object clause.
help.exe |
    Select-String '^[A-Z][^ ]+' |
    ForEach-Object {
        Get-Command $_.Matches.Value -ErrorAction SilentlyContinue
    } |
    Where-Object CommandType -ne 'Application' |
    Select-Object * |
    Format-Table -AutoSize CommandType, Name, DisplayName, ResolvedCommand, Module
Sample output on my system; all these items will likely work differently in PowerShell:
CommandType Name DisplayName ResolvedCommand Module
----------- ---- ----------- --------------- ------
Alias call call -> Invoke-Method Invoke-Method pscx
Alias cd cd -> Set-LocationEx Set-LocationEx Pscx.CD
Alias chdir chdir -> Set-Location Set-Location
Alias cls cls -> Clear-Host Clear-Host
Alias copy copy -> Copy-Item Copy-Item
Alias del del -> Remove-Item Remove-Item
Alias dir dir -> Get-ChildItem Get-ChildItem
Alias echo echo -> Write-Output Write-Output
Alias erase erase -> Remove-Item Remove-Item
Alias fc fc -> Format-Custom Format-Custom
Function help pscx
Alias md md -> mkdir mkdir
Function mkdir
Function more
Alias move move -> Move-Item Move-Item
Function Pause
Alias popd popd -> Pop-Location Pop-Location
Function prompt
Alias pushd pushd -> Push-Location Push-Location
Alias rd rd -> Remove-Item Remove-Item
Alias ren ren -> Rename-Item Rename-Item
Alias rmdir rmdir -> Remove-Item Remove-Item
Alias set set -> Set-Variable Set-Variable
Alias sc sc -> Set-Content Set-Content
Alias sort sort -> Sort-Object Sort-Object
Alias start start -> Start-Process Start-Process
Alias type type -> Get-Content Get-Content

Task Scheduler - Powershell - Get-ChildItem - Error: Parameter cannot be found...'File'

I have a script that uses Get-ChildItem like so:
$files = Get-ChildItem -Path $directories -Recurse -File -Include "package.json"
that works perfectly fine when run from ANY PowerShell prompt, EXCEPT when run from Task Scheduler.
My task scheduler action is:
powershell -version 3.0 -noprofile -nolog -noninteractive -file somescript.ps1 someargs
From the task scheduler I get the error: a parameter cannot be found that matches parameter name 'file' for Get-ChildItem.
What I know:
This error is usually caused because the -File parameter of Get-ChildItem was not added until PowerShell v3.0
I am supposed to be able to force PowerShell to run as version 3.0 using the -version 3.0 parameter. However, this doesn't appear to make any difference from Task Scheduler.
Any ideas regarding what I need to change?
Interestingly, the error reported is misleading. In this case the error is because I was referring to a mapped drive that is only available when my user is physically logged in to the box.
So although the reported error is: Error: Parameter cannot be found…'File', the actual issue is that I am attempting to pass a mapped drive that is unavailable to Get-ChildItem.
I was able to confirm this by writing a very simple script that only contained Get-ChildItem and attempting to run it as a scheduled task both with and without "run even when user is not logged in".
It only failed when that box was checked. Testing with the network path, instead of the mapped drive, fixed the issue.
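In other words, something along these lines in the scheduled script (the UNC path is a placeholder for your share):
# Use the UNC path directly, rather than a mapped drive letter that only
# exists in an interactive logon session.
$files = Get-ChildItem -Path '\\server\share\projects' -Recurse -File -Include 'package.json'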
Hopefully this helps others in the future.
