PowerShell Remoting performance

I am managing a lab for a technology conference, and I have a PowerShell script that does some cleanup between sessions, deleting leftover cruft from the desktop, resetting links in the Taskbar, etc. When I run this script "locally", meaning I use a shortcut while logged in as the generic lab user, it takes about 5 seconds to complete. However, when I run the same script using Remoting it takes nearly 10 minutes to complete. And that is on a single machine. My worry is that I have 40 machines that need to refresh, and sometimes only 15 minutes total between sessions. The code I am testing with is pretty simple:
Invoke-Command -ComputerName:$machine -ArgumentList: $file, $context -ScriptBlock {
    param (
        [string]$file,
        [string]$context
    )
    & powershell.exe -NoProfile -NoLogo -ExecutionPolicy Bypass -File $file -context:$context
} -Credential:$credential -Authentication:CredSSP -AsJob -JobName:$machine > $null
So, my question is, is this "normal", or is there some indication of a problem and I should really expect this to work at something closer to "local" speeds? FWIW, I am testing this on a VM.
Additionally, I am using a BalloonTip to alert anyone who is sitting at the machine that a reset is happening. This works great when "manually" launching the script. However, when using Remoting the balloon tip never shows up. Of course neither does the console, so perhaps this is just a limitation of Remoting, that no UI can be triggered?
Any insights greatly appreciated. Thanks!

You won't be able to get a UI to show up like that since the script won't be running in the interactive user's session.
As for performance, I'm not entirely sure why it would take such a drastically longer time.
First, I would say try to remote into your local machine (instead of just running it locally), by passing either localhost, ., or the actual local computer name to the -ComputerName parameter of Invoke-Command. See if the performance is still as bad, or how it differs.
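For example, a quick check like this (a minimal sketch, reusing the $file and $context variables from the question) would show whether the remoting layer itself is the bottleneck:
# Minimal sketch: run the same payload over WinRM against the local machine
# to see how much overhead remoting itself adds.
Invoke-Command -ComputerName localhost -ArgumentList $file, $context -ScriptBlock {
    param ([string]$file, [string]$context)
    & powershell.exe -NoProfile -NoLogo -ExecutionPolicy Bypass -File $file -context:$context
}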
I'm also wondering if it has to do with spawning another powershell.exe process on the remote side. In theory it shouldn't, but what if you do this instead:
Invoke-Command -ComputerName:$machine -ArgumentList: $file, $context -ScriptBlock {
    param (
        [string]$file,
        [string]$context
    )
    # Run the script's contents in-process instead of spawning a new powershell.exe
    $sb = [ScriptBlock]::Create((Get-Content $file -Raw))
    & $sb -context:$context
} -Credential:$credential -Authentication:CredSSP -AsJob -JobName:$machine > $null

Related

How to use PowerShell to run some code upon system shutdown?

I'm provisioning a Windows VM that needs to run some PowerShell code when it boots. It also needs to run some different code when it shuts down.
To do the former, I can use New-JobTrigger and Register-ScheduledJob in my initial provisioning script like so:
$StartupTrigger = New-JobTrigger -AtStartup
Register-ScheduledJob -Name "Startup Job" -Trigger $StartupTrigger -Credential $DesiredCredentials -ScriptBlock {
    Do-InterestingThings $using:ExternalResource
}
Doesn't even have to be a separate script file, it can just be a script block. Any variables from an outer scope will be serialized and used when the job runs. Pretty neat.
The real problem I'm solving involves creating an external resource whose lifetime is tied to the VM's uptime. When the VM is created, this resource will be created. When the VM is shut down, this resource needs to be cleaned up. How can I use PowerShell to run some code just before the VM is scheduled to shut down (regardless of how it got the order)? It doesn't need to be a script block, it can be a separate script file.
There are two reasonable ways to do this:
Local Group Policy:
This can be done in the local group policy editor: gpedit.msc. Navigate to Computer Configuration/Windows Settings/Scripts (Startup/Shutdown)/Shutdown. You can add 'Scripts' and/or 'PowerShell Scripts' here which get executed before other shutdown processes.
Event-Based Scheduled Task:
From this answer:
Create a scheduled task as follows:
Type : On Event (Basic)
Log : System
Source : User32
EventID : 1074
When a user or command initiates a shutdown or restart as a logged-on user or on a user's behalf, event ID 1074 will fire. By creating a task that uses this event as a trigger for a script, the script will start and be allowed to finish.
Note that this does not delay the shutdown (so the script has to be quick) and can sometimes fail to trigger before the Task Scheduler service stops.
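For illustration, such a task could be registered from the command line roughly like this (a sketch; the task name and script path are placeholders):
# Hypothetical sketch: task triggered by User32 event ID 1074 in the System log
# ("OnShutdownCleanup" and the script path are made up for illustration).
schtasks /Create /TN "OnShutdownCleanup" /SC ONEVENT /EC System `
    /MO "*[System[Provider[@Name='User32'] and EventID=1074]]" `
    /TR "powershell.exe -NoProfile -File C:\Scripts\cleanup.ps1" /RU SYSTEM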
Final Note:
Always make sure that your code can handle a dirty shutdown. After all, the fastest way to call a reboot is the power button...

What can I query to see if Windows is booted and done with updates?

My goal is to remotely check a group of computers (an extensive list), not only to see whether the server has rebooted (and when it last rebooted), but whether Windows is fully up and running at the login screen and won't restart for further updates or still be installing updates.
I did find a service called AppReadiness, which stayed stopped until the server rebooted. I am concerned that if it is set to manual, it may not always start. Could somebody please confirm whether this is a reliable service?
EDIT: As I'm writing this, I found that the service stayed stopped until the server showed "Working on updates, 100% complete. Don't turn off your computer", but while the server hung on that message, the AppReadiness service started. Is there anything better to watch? I've read answers to other questions that say to check whether C$ is available, but that becomes available sooner than AppReadiness does.
The code that is being used to check the service:
$creds = Get-Credential -Message "Enter server credentials:" -UserName "SERVERNAME\Administrator"
Get-WmiObject Win32_Service -ComputerName "SERVERIPADDRESS" -Credential $creds | Where-Object {$_.Name -eq "AppReadiness"}
EDIT 2: Also, instead of monitoring services, I have tried watching processes like winlogon.exe and loginui.exe for guidance on the server's condition, but I'm not getting the results I'm looking for. These processes show up while the server is still getting ready, when I was hoping they would only show once the login GUI was visible.
EDIT 3:
This edit is for the answer by @Kelv.Gonzales, who suggested checking the Windows event log for the "DHCPv4 client service is started" entry. That doesn't work and is on par with the other services and events I monitored: it shows up before the login screen.
My code was:
$creds = Get-Credential -Message "Enter server credentials:" -UserName "SERVERNAME\Administrator"
$server = "IPADDRESSOFSERVER"
while ($true)
{
    $event = Get-WmiObject Win32_NTLogEvent -ComputerName $server -Credential $creds -Filter "(logfile='System' AND eventcode = '50036')" | Select-Object -First 1
    $event.ConvertToDateTime($event.TimeWritten)
    Start-Sleep -Seconds 5
}
That one-liner will just fire once, of course. Any reason why you are using WMI instead of the built-in PowerShell cmdlet Get-Service?
My suggestion is to use a WMI event watcher: keep what you already have in place, but target the service (and any dependent services) and have the event notify you when the state is Running.
Use PowerShell to Monitor and Respond to Events on Your Server
This article is using PowerShell and VBScript to do this, but you can do this with all PowerShell.
You can have a temporary or permanent watcher.
PowerShell and Events: WMI Temporary Event Subscriptions
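As a concrete starting point, a temporary watcher for the AppReadiness service could look roughly like this (a minimal sketch):
# Minimal sketch: temporary WMI event subscription that fires once the
# AppReadiness service transitions into the Running state.
$query = "SELECT * FROM __InstanceModificationEvent WITHIN 5 " +
         "WHERE TargetInstance ISA 'Win32_Service' " +
         "AND TargetInstance.Name = 'AppReadiness' " +
         "AND TargetInstance.State = 'Running'"
Register-WmiEvent -Query $query -SourceIdentifier AppReadinessRunning -Action {
    Write-Host "AppReadiness is now running"
}
# When finished: Unregister-Event -SourceIdentifier AppReadinessRunning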
Those can get a bit deep, so, if they aren't for you, you can just use your one line in a Do Loop that stops after the service comes online.
Basic example:
$TargetHost = $env:COMPUTERNAME
do {
    $TargetOperation = Get-WmiObject Win32_Service -ComputerName $TargetHost |
        Where-Object {$_.Name -eq "AppReadiness"}
    "Checking host $TargetHost for service/process $($TargetOperation.Name)"
    Start-Sleep -Seconds 3
} until (($TargetOperation).State -eq 'Running')
"Validation of host $TargetHost for service/process $($TargetOperation.Name) complete"

# Results
Checking host WS70 for service/process AppReadiness
Checking host WS70 for service/process AppReadiness
Checking host WS70 for service/process AppReadiness
Validation of host WS70 for service/process AppReadiness complete
You can of course add as many services or processes as you'd like with your own logic. All of the above applies to almost anything you'd like to watch: services, processes, files and folders.
Or just use this script in the loop.
Get Remote Logon Status - Powershell
This script will return the logon status of the local or a remote machine. Return types include "Not logged on", "Locked", "Logged on", and "Offline".
The most useful part of this is to check whether a computer is in the locked state, although the other return types could also be useful.
This is a simple function, and can easily be included in a larger script. The return types could be changed to numbers for the calling script to more easily parse the return value.
Download: GetRemoteLogonStatus.ps1
Would finding 0 updates queued accomplish what you need? This article goes into detail on how to accomplish it.
To check whether Windows is ready to log in, you can query the event log for "DHCPv4 client service is started" (Event ID 50036).
I spent a week searching for a solution for this issue. I wanted to share what is currently my best solution, hopefully it helps someone.
@kelv-gonzales pointed me in the right direction with his comment about the Windows Modules Installer and TrustedInstaller.
Breakdown of my solution
There is a process called TiWorker.exe, the Windows Modules Installer Worker. This is the process responsible for actually installing a Windows update, and (from what I've observed) it reliably runs for the duration of the installation. Once the update has been installed and a pending reboot is required, this process stops and goes away.
When you reboot the machine to satisfy the pending reboot, TiWorker.exe is one of the very first processes to start while the machine is coming back up. In some cases, depending on the nature of the update, it will finish installing the update while the computer boots (that's the increasing percentage you see during boot-up). Once the login screen is available, this process typically remains running for roughly 2 minutes (from what I've observed), after which it stops and goes away. I found that waiting these few minutes was a small price to pay for reliably knowing when updates have finished processing.
My script's logic
I was able to write a script that keys off the existence of this process. For example, below is a high-level flow of my script.
1. Initiate the installation of Windows updates.
2. Wait for updates to finish installing and for a pending reboot.
3. Once a pending reboot exists, wait until TiWorker.exe is no longer running and then trigger a reboot.
4. While the machine is rebooting, repeatedly check its status. While the machine is coming back up, TiWorker.exe will start again to finish the update. Keep waiting until TiWorker.exe is no longer running (typically, the update will finish and the OS will wait at the lock screen for about 2 minutes before this process stops).
Code
I modified the code posted by @gabriel-graves to check for the TiWorker.exe process.
$creds = Get-Credential -Message "Enter server credentials:" -UserName "SERVERNAME\Administrator"
Get-WmiObject Win32_Process -ComputerName "SERVERIPADDRESS" -Credential $creds | Where-Object {$_.Name -eq "TiWorker.exe"}
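Building on that, the wait in step 4 of the flow above could be sketched as a simple poll ($creds and the server address are as above; error handling for the window where the machine is unreachable during reboot is omitted):
# Rough sketch: poll the remote machine until TiWorker.exe is gone.
while (Get-WmiObject Win32_Process -ComputerName "SERVERIPADDRESS" -Credential $creds -Filter "Name='TiWorker.exe'") {
    Start-Sleep -Seconds 15
}
"TiWorker.exe has stopped; updates appear to be finished."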

GetScheduledTaskInfo NextRunTime is wrong

I'm trying to use Powershell to get the NextRunTime for some scheduled tasks. I'm retrieving the values but they don't match up to what I'm seeing in the Task Scheduler Management Console.
For example, in the Task Scheduler console my "TestTask" has a Next Run Time value of "1/9/2018 12:52:30 PM". But when I do the following call in Powershell it shows "12:52:52 PM" for the NextRunTime.
Get-ScheduledTask -TaskName "TestTask" | Get-ScheduledTaskInfo
From what I've seen, the seconds value returned by the PowerShell Get-ScheduledTaskInfo cmdlet is always the same as the minutes value. I'm wondering if there's a time-formatting error (hh:mm:mm instead of hh:mm:ss) in that cmdlet, but I have no idea how to check for that.
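If that guess is right, the behavior is easy to reproduce with a deliberate hh:mm:mm format string (this only illustrates the suspected cause; it doesn't confirm it):
# A 'mm' token where 'ss' belongs repeats the minutes in place of the seconds.
Get-Date "1/9/2018 12:52:30 PM" -Format "M/d/yyyy h:mm:mm tt"   # 1/9/2018 12:52:52 PM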
The task runs at the exact time shown in the console, so that makes me think it's an issue with the PowerShell call.
Has anyone seen this issue before and know how to get the correct NextRunTime value in PowerShell? I'm also seeing the same issue with the LastRunTime value.
I've tried this on Windows Server 2016 and Windows 10 and get the same results on both operating systems.
I can confirm that I see the same issue on Server 2012 R2 as well. You can get the correct information by using the Task Scheduler COM object: get the root folder (or whichever folder your task is stored in, though most likely it's in the root folder) and then get the task info from that. Here's how you'd do it:
$Scheduler = New-Object -ComObject Schedule.Service
$Scheduler.Connect()                      # connect to the local Task Scheduler
$RootFolder = $Scheduler.GetFolder("\")   # the root task folder
$Task = $RootFolder.GetTask("TestTask")
$Task.NextRunTime
Probably also worth noting that you can use the Connect() method to connect to the task scheduler on other computers (if you have rights to access their task scheduler), and get information about their tasks, stop or start their tasks, make new tasks... lots of good stuff here if you don't mind not using the *-ScheduledTask* cmdlets.
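For example, pointing the same COM API at another machine might look like this (a sketch; "RemoteComputer" is a placeholder hostname):
# Hypothetical sketch: list task names and next run times on a remote host.
$Scheduler = New-Object -ComObject Schedule.Service
$Scheduler.Connect("RemoteComputer")
$RootFolder = $Scheduler.GetFolder("\")
$RootFolder.GetTasks(0) | ForEach-Object { "$($_.Name): $($_.NextRunTime)" }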

How many windows services can be in "Starting" state at once?

I was writing some PowerShell code to start a number of internally developed Windows services in parallel. The services don't depend on other services, and I ran the Start-Service command in a loop, spawning a PSJob per Start-Service call.
However, after reviewing the events generated by Service Control Manager, it seems like only one service is in the Starting state at any one time.
I fully acknowledge I could have messed up the PowerShell code, but I thought I would check whether there is a fundamental Windows (Server 2003) limit in play here (i.e. only one service starting at a time).
Daniel
cls
$Services = $('Service1', 'Service2', 'Service3')
foreach ($Service in $Services) {
    Start-Job -ScriptBlock {
        Param($ServiceName)
        Start-Service $ServiceName
    } -ArgumentList $Service
}
Get-Job | Wait-Job | Receive-Job
I had an anonymous contact look in the Windows source code for me, and the answer is that only one service may be starting at any one time. This is enforced by the SCM taking a global lock on the services database before starting a service.

Map a network drive to be used by a service

Suppose some Windows service uses code that wants mapped network drives and no UNC paths. How can I make the drive mapping available to the service's session when the service is started? Logging in as the service user and creating a persistent mapping will not establish the mapping in the context of the actual service.
Use this at your own risk. (I have tested it on XP and Server 2008 x64 R2)
For this hack you will need SysinternalsSuite by Mark Russinovich:
Step one:
Open an elevated cmd.exe prompt (Run as administrator)
Step two:
Elevate again to root using PSExec.exe:
Navigate to the folder containing SysinternalsSuite and execute the following command
psexec -i -s cmd.exe
You are now inside a prompt running as nt authority\system, and you can prove this by typing whoami. The -i is needed because drive mappings need to interact with the user.
Step Three:
Create the persistent mapped drive as the SYSTEM account with the following command
net use z: \\servername\sharedfolder /persistent:yes
It's that easy!
WARNING: You can only remove this mapping the same way you created it, from the SYSTEM account. If you need to remove it, follow steps 1 and 2 but change the command on step 3 to net use z: /delete.
NOTE: The newly created mapped drive will now appear for ALL users of this system but they will see it displayed as "Disconnected Network Drive (Z:)". Do not let the name fool you. It may claim to be disconnected but it will work for everyone. That's how you can tell this hack is not supported by M$.
I found a solution that is similar to the one with psexec but works without additional tools and survives a reboot.
Just add a scheduled task, insert "system" in the "run as" field and point the task to a batch file with the simple command
net use z: \\servername\sharedfolder /persistent:yes
Then select "run at system startup" (or similar, I do not have an English version) and you are done.
You'll either need to modify the service, or wrap it inside a helper process: apart from session/drive access issues, persistent drive mappings are only restored on an interactive logon, which services typically don't perform.
The helper process approach can be pretty simple: just create a new service that maps the drive and starts the 'real' service. The only things that are not entirely trivial about this are:
The helper service will need to pass on all appropriate SCM commands (start/stop, etc.) to the real service. If the real service accepts custom SCM commands, remember to pass those on as well (I don't expect a service that considers UNC paths exotic to use such commands, though...)
Things may get a bit tricky credential-wise. If the real service runs under a normal user account, you can run the helper service under that account as well, and all should be OK as long as the account has appropriate access to the network share. If the real service will only work when run as LOCALSYSTEM or somesuch, things get more interesting, as it either won't be able to 'see' the network drive at all, or require some credential juggling to get things to work.
A better way would be to create a symbolic link with mklink.exe. You can just create a link in the file system that any app can use. See http://en.wikipedia.org/wiki/NTFS_symbolic_link.
There is a good answer here:
https://superuser.com/a/651015/299678
I.e. You can use a symbolic link, e.g.
mklink /D C:\myLink \\127.0.0.1\c$
You could use the 'net use' command:
var p = System.Diagnostics.Process.Start("net.exe", "use K: \\\\Server\\path");
var isCompleted = p.WaitForExit(5000);
If that does not work in a service, try the Winapi and PInvoke WNetAddConnection2
Edit: Obviously I misunderstood you - you cannot change the source code of the service, right? In that case I would follow the suggestion by mdb, but with a little twist: create your own service (let's call it the mapping service) that maps the drive, and add this mapping service to the dependencies of the first (the actual working) service. That way the working service will not start before the mapping service has started (and mapped the drive).
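To illustrate, the dependency could be wired up with sc.exe roughly like this (both service names are placeholders; note the mandatory space after depend=):
# Hypothetical sketch: make the working service depend on the mapping service
# so it only starts once the drive has been mapped.
sc.exe config WorkingService depend= MappingService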
ForcePush,
NOTE: The newly created mapped drive will now appear for ALL users of this system but they will see it displayed as "Disconnected Network Drive (Z:)". Do not let the name fool you. It may claim to be disconnected but it will work for everyone. That's how you can tell this hack is not supported by M$...
It all depends on the share permissions. If you have Everyone in the share permissions, this mapped drive will be accessible by other users. But if you have only some particular user whose credentials you used in your batch script, and that batch script was added to the startup scripts, only the System account will have access to that share, not even Administrator.
So if you use, for example, a scheduled ntbackup job, the System account must be used in 'Run as'.
If your service's 'Log on as: Local System account' it should work.
What I did: I didn't map any drive letter in my startup script, just used net use \\server\share ... and used the UNC path in my scheduled jobs. I added a logon script (or you can just add a batch file to the startup folder) that maps the same share to a drive letter: net use Z: \\... with the same credentials. Now the logged-on user can see and access that mapped drive. There are two connections to the same share, and in this case the user doesn't see that annoying "Disconnected network drive ...". But if you really need access to that share by drive letter, not just UNC, map the share to different drive letters, e.g. Y: for System and Z: for users.
Found a way to grant Windows Service access to Network Drive.
Take Windows Server 2012 with NFS Disk for example:
Step 1: Write a Batch File to Mount.
Write a batch file, ex: C:\mount_nfs.bat
echo %time% >> c:\mount_nfs_log.txt
net use Z: \\{your ip}\{netdisk folder}\ >> C:\mount_nfs_log.txt 2>&1
Step 2: Mount Disk as NT AUTHORITY/SYSTEM.
Open "Task Scheduler", create a new task:
Run as "SYSTEM", at "System Startup".
Create action: Run "C:\mount_nfs.bat".
After these two simple steps, my Windows ActiveMQ service runs under the "Local System" privilege and performs perfectly without login.
The reason you are able to access the drive when you run the executable normally from the command prompt is that you are running the application under the user account you logged on with, and that user has the privileges to access the network. But when you install the executable as a service, by default (as you can see in Task Manager) it runs under the 'SYSTEM' account, and as you may know, 'SYSTEM' doesn't have rights to access network resources.
There can be two solutions to this problem:
1. Map the drive as persistent, as already pointed out above.
2. Open the service manager by typing 'services.msc': go to your service, and in its properties there is a Log On tab where you can specify an account other than 'System'. You can either start the service from your own logged-on user account or use 'Network Service'. When you do this, the service can access any network component or drive, even ones that are not persistent.
To achieve this programmatically, you can look into the 'CreateService' function at http://msdn.microsoft.com/en-us/library/ms682450(v=vs.85).aspx and set the 'lpServiceStartName' parameter to 'NT AUTHORITY\NetworkService'. This will start your service under the 'Network Service' account, and then you are done.
You can also try making the service interactive by specifying SERVICE_INTERACTIVE_PROCESS in the service-type parameter of your CreateService() call, but this is limited to XP, as Vista and 7 do not support this feature.
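From the command line, the same account change can be sketched with sc.exe (the service name is a placeholder; note the space required after obj=):
# Hypothetical sketch: switch an existing service to the Network Service account.
sc.exe config MyService obj= "NT AUTHORITY\NetworkService"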
Hope these solutions help you. Let me know if this worked for you.
I found a very simple method: the PowerShell command New-SmbGlobalMapping, which mounts a drive globally:
$User = "usernmae"
$PWord = ConvertTo-SecureString -String "password" -AsPlainText -Force
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $PWord
New-SmbGlobalMapping -RemotePath \\192.168.88.11\shares -Credential $creds -LocalPath S:
You want to either change the user that the service runs under from "System", or find a sneaky way to run your mapping as System.
The funny thing is that this is possible using the "at" command: simply schedule your drive mapping one minute into the future, and it will be run under the System account, making the drive visible to your service.
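For example (a sketch; the time and share are placeholders):
# Hypothetical sketch: one minute from now, map the drive under the System account.
at 13:46 "cmd /c net use Z: \\servername\sharedfolder /persistent:yes"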
I can't comment yet (working on reputation), but I created an account just to address the issues @Tech Jerk, @spankmaster79 (nice name lol) and @NMC reported in reply to the "I found a solution that is similar to the one with psexec but works without additional tools and survives a reboot" post @Larry had made.
The solution to this is to just browse to that folder from within the logged-in account, i.e.:
\\servername\share
and let it prompt for login, entering the same credentials you used for the UNC in psexec. After that it starts working. In my case, I think this is because the server with the service isn't a member of the same domain as the server I'm mapping to. I'm thinking that if the UNC and the scheduled task both refer to the IP instead of the hostname
\\123.456.789.012\share
it may avoid the problem altogether.
If I ever get enough rep points on here, I'll add this as a reply instead.
Instead of relying on a persistent drive, you could set the script to map/unmap the drive each time you use it:
net use Q: \\share.domain.com\share
forfiles /p Q:\myfolder /s /m *.txt /d -0 /c "cmd /c del #path"
net use Q: /delete
This works for me.
