Create a virtual drive with PowerShell - Windows

I'm trying to create a D: drive in Windows that points to a local directory (e.g. C:\D_Drive) using PowerShell.
This code runs fine:
New-PSDrive -Name D -Root "C:\D_Drive\" -PSProvider "FileSystem"
But no D: drive is visible in Windows Explorer.
How does one use that command correctly?
Also: the drive should be permanent, so I tried adding a "-Persist" parameter, but that leads to an error ("unknown parameter "-Persist"...").

Just run:
subst D: "C:\D_Drive"
in a non-elevated PowerShell session (don't run it as Administrator).
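Note that subst mappings do not survive a reboot. One possible way to make the mapping effectively permanent (my own suggestion, not part of the original answer) is to re-create it at every logon, for example via a per-user Run entry:
# Re-create the subst mapping at each logon (assumes the folder C:\D_Drive exists).
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run' `
    -Name 'SubstDDrive' -Value 'C:\Windows\System32\subst.exe D: C:\D_Drive'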

By default, the New-PSDrive command only creates a mapping that is visible within the current PowerShell session; it is not shown in Windows Explorer at all.
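For comparison, -Persist does create a mapping that other programs can see, but it was only added in PowerShell 3.0 (hence the "unknown parameter" error on older versions) and it only accepts UNC roots, not local folders, which is why subst is the better fit here. A minimal sketch with a hypothetical share:
# -Persist requires PowerShell 3.0+ and a UNC root; \\server\share is a placeholder.
New-PSDrive -Name D -Root "\\server\share" -PSProvider FileSystem -Persist -Scope Global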
Here are two other questions that ask the same thing:
https://community.spiceworks.com/topic/649234-powershell-mapped-drive-not-showing-in-my-computer
https://social.technet.microsoft.com/Forums/windowsserver/en-US/96222ba2-90f9-431d-b05a-82b804cdc76e/newpsdrive-does-not-appear-in-explorer?forum=winserverpowershell

Related

Invoke-Command doesn't see local network drives

I have two computers: A and B. I'm trying to automate some CI/CD tasks, and my task is to start a process on B remotely, from A. The .exe file itself is on the R: drive, which is a local network drive. So I do this:
# here $cred has encrypted credentials, but it is off topic...
Invoke-Command -ComputerName B -Credential $cred -ScriptBlock {
R:\WebClient\Platform\UP_110\Proc.exe
}
So apparently this would be the same thing as typing R:\WebClient\Platform\UP_110\Proc.exe on B's PowerShell and hitting Enter.
Now the problem is that I get this error when running the above code on A:
The term 'R:\WebClient\Platform\UP_110\Proc.exe' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
    + CategoryInfo          : ObjectNotFound: (R:\WebClient\Pl...IMS.UP.Host.exe:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
    + PSComputerName        : B
Apparently it says that there is no such file as R:\WebClient\Platform\UP_110\Proc.exe on my B computer. But that is not true: I do have it.
As a matter of fact, I have this R: drive on both A and B.
The code works fine if I move the .exe to any directory under the C: drive (which is my system disk), but not for R:.
Even funnier: I can run R:\WebClient\Platform\UP_110\Proc.exe manually on both A and B, and it works.
So what's the issue I'm facing here? Thanks.
By default, PowerShell Remoting can only access drives that are mapped within the system context. Most commonly, these are lettered drives backed by attached hardware (whether USB, SATA, SCSI, etc.).
Drives mapped in the user context, such as remote drives, are not available because a full logon does not occur the same way as when you log in locally. There are two workarounds at your disposal:
Use the UNC path when accessing files over an SMB/CIFS share (e.g. \\server.domain.tld\ShareName\Path\To\Folder\Or\file.ext)
Map the drive within the ScriptBlock passed to Invoke-Command using New-PSDrive:
# Single letter drive name
New-PSDrive -Name "R" -PSProvider FileSystem -Root "\\server.domain.tld\ShareName"
Get-ChildItem R:
# More descriptive drive name
New-PSDrive -Name "RemoteDrive" -PSProvider FileSystem -Root "\\server.domain.tld\ShareName"
Get-ChildItem RemoteDrive:
Three things to note:
Get-ChildItem in the example above is to show that listing the contents of the new drives should show the files you expect to see at the remote directory. This can be omitted once you are sure it works for you.
Additionally, using a longer drive name is a PowerShell feature and does not mean that you can map shared folders as a drive from within File Explorer with more than a single character.
You may run into the double hop issue trying to map to a remote drive this way, if you are attempting to use the same credential you initiated Invoke-Command with. Solving it properly is beyond the scope of Stack Overflow as this is a major architectural consideration for Active Directory.
However, you can work around it by building the credential object and passing it to New-PSDrive from within the ScriptBlock (see the sketch below), or by running Invoke-Command with -Authentication CredSSP if your organization does not block it (many do).
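For the first workaround, a minimal sketch might look like the following; the share path is reused from the examples above, and $shareCred is a hypothetical credential that has access to that share:
$shareCred = Get-Credential   # credential with access to the share
Invoke-Command -ComputerName B -Credential $cred -ScriptBlock {
    param($c)
    # Map R: inside the remote session with an explicit credential,
    # so the mapping does not depend on double-hop delegation.
    New-PSDrive -Name R -PSProvider FileSystem -Root '\\server.domain.tld\ShareName' -Credential $c | Out-Null
    & 'R:\WebClient\Platform\UP_110\Proc.exe'
} -ArgumentList $shareCred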

PowerShell: map network drive, write file to it, but Notepad fails to find file

My Powershell script:
New-PSDrive -Name J -Root \\myserver\mypath -PSProvider FileSystem
"test" | Out-File J:\test.txt
Get-Content -Path J:\test.txt
notepad J:\test.txt
The J: drive maps OK, the file gets created, and Get-Content can read it, BUT Notepad (or any .exe) cannot see the file.
What should I do to make the drive mapping visible to other executables run from within the script?
Thanks.
New-PSDrive by default creates drives that are visible only to PowerShell commands, in the same session.
To create a regular mapped drive that all processes see, use the -Persist switch, in which case you're restricted to the usual single-letter drive names (such as J: in your example).
Note: Despite the switch's name, the resulting mapping is only persistent (retained across OS sessions) if you either invoke it directly from the global scope of your PowerShell session (possibly by dot-sourcing a script that contains the New-PSDrive call from there) or explicitly use -Scope Global.
Otherwise, the mapping goes out of scope (is removed) along with the scope in which it was defined.
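Applied to the script from the question, that might look like the following (same share path as in the question; -Scope Global matters if the New-PSDrive call lives in a script or function rather than being typed at the prompt):
New-PSDrive -Name J -Root \\myserver\mypath -PSProvider FileSystem -Persist -Scope Global
"test" | Out-File J:\test.txt
notepad J:\test.txt   # other processes, such as Notepad, can now see J:\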

How can I get PowerShell current location every time I open terminal from file explorer

I can open a PowerShell window in any directory using Windows File Explorer.
I want to run a script every time a new PowerShell window is opened, and use the directory it was opened from inside that script.
Using $profile lets me run a script automatically, but the $pwd variable does not contain the directory used to open the PowerShell window; it contains C:\WINDOWS\system32 instead. I understand that PowerShell starts in C:\WINDOWS\system32, runs $profile, and only then changes to the location chosen in File Explorer. How can I get File Explorer's current directory when my script executes from $profile, or is there another way to automatically run my script after the PowerShell window is open?
Note: The answer below provides a solution based on the preinstalled File Explorer shortcut-menu commands for Windows PowerShell.
If modifying these commands - which requires taking ownership of the registry keys with administrative privileges - or creating custom commands is an option, you can remove the NoWorkingDirectory value from the following registry keys (or custom copies thereof):
HKEY_CLASSES_ROOT\Directory\shell\Powershell
HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell
Doing so will make the originating folder the working directory before PowerShell is invoked, so that $PROFILE already sees that working directory, as also happens when you submit powershell.exe via File Explorer's address bar.[1]
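A sketch of removing that value from the keys named above (this assumes you are running elevated and have already taken ownership of the keys):
# Remove the NoWorkingDirectory value from both shortcut-menu command keys.
# Assumes elevation and that ownership of the keys has already been taken.
Remove-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Directory\shell\Powershell' -Name NoWorkingDirectory
Remove-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell' -Name NoWorkingDirectory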
Shadowfax provides an important pointer in a comment on the question:
When you hold down Shift and then invoke the Open PowerShell window here shortcut-menu command on a folder or in the window background in File Explorer, powershell.exe is initially started with C:\Windows\System32 as the working directory[1], but is then instructed to change to the originating folder with a Set-Location command passed as a parameter; e.g., a specific command may look like this:
"PowerShell.exe" -noexit -command Set-Location -literalPath 'C:\Users\jdoe'
As an aside: The way this shortcut-menu command is defined is flawed, because it won't work with folder paths that happen to contain ' chars.
At the time of loading $PROFILE, C:\Windows\System32 is still in effect, because any command passed to -command isn't processed until after the profiles have been loaded.
If you do need to know in $PROFILE what the working directory will be once the session is open, use the following workaround:
$workingDir = [Environment]::GetCommandLineArgs()[-1] -replace "'"
[Environment]::GetCommandLineArgs() returns the invoking command line as an array of arguments (tokens), so [-1] returns the last argument, assumed to be the working-directory path; -replace "'" removes the enclosing '...' from the result.
However, so as to make your $PROFILE file detect the (ultimately) effective working directory (location) irrespective of how PowerShell was invoked, more work is needed.
The following is a reasonably robust approach, but note that a fully robust solution would be much more complex:
# See if Set-Location was passed and extract the
# -LiteralPath or (possibly implied) -Path argument.
$workingDir = if ([Environment]::CommandLine -match '\b(set-location|cd|chdir|sl)\s+(-(literalpath|lp|path|PSPath)\s+)?(?<path>(?:\\").+?(?:\\")|"""[^"]+|''[^'']+|[^ ]+)') {
    $Matches.path -replace '^(\\"|"""|'')' -replace '\\"$'
} else { # No Set-Location command passed, use the current dir.
    $PWD.ProviderPath
}
The complexity of the solution comes from a number of factors:
Set-Location has multiple aliases.
The path may be passed positionally, with -Path or with -LiteralPath or its alias -PSPath.
Different quoting styles may be used (\"...\", """...""", '...'), or the path may be unquoted.
The command may still fail:
If the startup command uses prefix abbreviations of parameter names, such as -lit for -LiteralPath.
If a named parameter other than the path follows set-location (e.g., -PassThru).
If the string set-location is embedded in what PowerShell ultimately parses as a string literal rather than a command.
If the startup command is passed as a Base64-encoded string via -EncodedCommand.
[1] When you type powershell.exe into File Explorer's address bar instead, the currently open folder is made the working directory before PowerShell is started, and no startup command to change the working directory is passed; in that case, $PROFILE already sees the (ultimately) effective working directory.
1. Open the Registry Editor (run regedit).
2. Navigate to HKEY_CLASSES_ROOT\Directory\Background\shell\Powershell\command (not HKEY_CLASSES_ROOT\Directory\Background\shell\cmd\command).
3. The default value should be powershell.exe -noexit -command Set-Location -literalPath "%V"
4. You can change some of the parameters.
PS: If you change the command to cmd.exe /s /k pushd "%V", then Shift + right-click in Explorer will open cmd, not PowerShell.

PowerShell function to autocomplete and change directory

I have the following PowerShell function to set a directory, nice and simple. When I type dev, autocomplete works for the items inside the directory.
Example: dev ./project
However, when pressing Enter, the directory changes to the Set-Location target 'E:\OneDrive\Website Workspace\', not its child 'E:\OneDrive\Website Workspace\project'. How would I go about this correctly?
function dev {
set-location 'E:\OneDrive\Website Workspace\'
}
Why don't you use PS drives?
New-PSDrive -Name ws -PSProvider FileSystem -Root 'E:\OneDrive\Website Workspace\'
Get-ChildItem ws:\project
You just have to put the first line into your profile.
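Alternatively, if you'd rather keep the dev function, a minimal sketch (the parameter name and default are my assumptions, not from the question) that changes into the child directory you pass:
function dev {
    param(
        # Relative path under the workspace root; defaults to the root itself.
        [string] $Path = '.'
    )
    Set-Location -LiteralPath (Join-Path 'E:\OneDrive\Website Workspace' $Path)
}
With this, dev project (or dev ./project) lands in E:\OneDrive\Website Workspace\project.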

How can I convince PowerShell (run through Task Scheduler) to find my network drive?

I have a simple PowerShell script on Windows 7 that doesn't work properly (this is not an issue on XP):
get-psdrive
When I run it directly, I get
Name         Used (GB)  Free (GB) Provider      Root
----         ---------  --------- --------      ----
A                                 FileSystem    A:\
Alias                             Alias
C                12.30      11.60 FileSystem    C:\
cert                              Certificate   \
D                                 FileSystem    D:\
Env                               Environment
Function                          Function
HKCU                              Registry      HKEY_CURRENT_USER
HKLM                              Registry      HKEY_LOCAL_MACHINE
Q              1486.63     289.41 FileSystem    Q:\    <-- the Q: drive
Variable                          Variable
WSMan                             WSMan
When I run this through task scheduler, I get
Name         Used (GB)  Free (GB) Provider      Root
----         ---------  --------- --------      ----
A                                 FileSystem    A:\
Alias                             Alias
C                12.30      11.60 FileSystem    C:\
cert                              Certificate   \
D                                 FileSystem    D:\
Env                               Environment
Function                          Function
HKCU                              Registry      HKEY_CURRENT_USER
HKLM                              Registry      HKEY_LOCAL_MACHINE
Variable                          Variable
WSMan                             WSMan
Note that I'm missing my Q: drive. If there's any way to get this resolved, I'll be able to copy files there....
Network drives, and really all drive letters for that matter, are "mapped" to volumes for a given logon session. When you create a scheduled task, Windows creates a new logon session (even if you are currently logged in) and runs the task in that context. So while you may be logged in and have a Q: drive mapped, the second session that runs the task has a completely different environment; Windows is just nice enough to automatically map C: (and the other physical drives) for all sessions.
You shouldn't need to map a drive when using PowerShell, other than perhaps for convenience. Unlike its cmd.exe predecessor, PowerShell is perfectly happy to change the current directory to a UNC-style path:
cd \\server\share\directory
Is it possible to accomplish what you need without mapping a drive at all? You have mentioned copying files: if the task is running with your credentials, and assuming you have permissions to the Q: drive (let's say \\server\share), then your script should be able to do something like:
copy c:\logs\*.log \\server\share\logs
and it should work just fine without needing to map a drive.
Here is the complete command info for my test that worked; if your environment is different, please note how. The task is configured to run as my domain account, only when I am logged in, with highest privileges, and configured for Windows 7 / Server 2008 R2.
The action is to Start a program:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments
-command copy c:\logs\*.log \\server\share\logs
Maybe before running Get-PSDrive in the script, first do something like this:
$net = New-Object -ComObject WScript.Network
$net.MapNetworkDrive("Q:", "\\path\to\share", $false, "domain\user", "password")
and after doing your job (copying the files, etc.):
$net.RemoveNetworkDrive("Q:")
There is a hack if you don't want to have a password in the script, which I prefer to avoid:
Open your folder with sufficient privileges (a domain user, for example).
Open a PowerShell session as Administrator and make a symlink from the UNC path to a local path:
New-Item -ItemType SymbolicLink -Path "C:\LocalTemp\" -Value "\\unc"
You can now use the UNC path in your PowerShell script directly; it will be opened with the credentials provided in the scheduled task.
There are probably some issues with credentials in scheduled tasks, but in my opinion this is still better than a cleartext or pseudo-obfuscated password in scripts.
