Slow WUA (Windows Update Agent API) - windows

I'm working on a PowerShell script to do some Windows Update tasks. Most tasks center around getting a collection of Windows Updates that have not yet been applied, using the code snippet below. Once that collection is returned, I iterate through it and perform such tasks as hiding, downloading, or installing the updates.
I notice that this code can take anywhere from 6 to 115 seconds to run. Typically the longer runs occur after the machine has restarted or been idle for more than 15 minutes.
But if I open the Windows Update control panel item, it instantly knows how many updates are outstanding, and can give me a list (collection) of those outstanding updates. If I click WU's "check for updates" link, it will take >10 seconds to check again, and sometimes that check will yield different results than it "knew" instantly when I opened it.
So I assume that WUA maintains a cached collection of updates somewhere, probably updated automatically once daily. My question: how can my code access that cache, rather than running the longer "check for updates" code shown below? Specifically, I am hoping to quickly obtain an IUpdateCollection to work with.
$Session = New-Object -ComObject Microsoft.Update.Session
$Searcher = $Session.CreateUpdateSearcher()
$Searcher.Online = $false #tested $true and $false; $true slightly slower
$Criteria = "IsInstalled=0 and Type='Software'"
$SearchResult = $Searcher.Search($Criteria)
$SearchResult.Updates
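For context, once the collection comes back, the per-update work described above (hiding, downloading, installing) is done against the COM objects it contains. A minimal sketch of just the hiding case, not taken from the original script:
# Minimal sketch (not from the original script): hide every update in the returned collection.
# IUpdate.IsHidden is a writable property, so setting it hides the update from future scans.
foreach ($Update in $SearchResult.Updates) {
    $Update.IsHidden = $true
}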
Note that all this is happening on a current Windows Server 2012 R2 system.

Looks like the cache is a CAB file called wsusscn2.cab that is regularly downloaded from Microsoft. There is a direct link to it in the MSDN article I linked below. Perhaps write a script that downloads it once per day/week (maybe to a network share if this is going to be a widely deployed script), and then change your script to force it to always search against the CAB file instead of going online. Like this:
$Session = New-Object -ComObject Microsoft.Update.Session
$UServiceManager = New-Object -ComObject Microsoft.Update.ServiceManager
# Register the local CAB as an "Offline Sync Service" scan package
$UService = $UServiceManager.AddScanPackageService("Offline Sync Service", "c:\wsusscn2.cab")
$Searcher = $Session.CreateUpdateSearcher()
$Searcher.ServerSelection = 3   # ssOthers: search the service identified by ServiceID
$Searcher.ServiceID = $UService.ServiceID
$Criteria = "IsInstalled=0 and Type='Software'"
$SearchResult = $Searcher.Search($Criteria)
$SearchResult.Updates
msdn
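For the "download it once per day/week" step, a rough sketch might look like the following. The fwlink URL and local path here are my assumptions, not something from the answer, so confirm the current direct download link via the MSDN link above before relying on it:
# Rough sketch; the fwlink below is assumed to point at wsusscn2.cab - verify it first.
$cabUrl  = 'http://go.microsoft.com/fwlink/?LinkID=74689'   # assumed wsusscn2.cab link
$cabPath = 'C:\wsusscn2.cab'                                # or a network share for wide deployment
Invoke-WebRequest -Uri $cabUrl -OutFile $cabPath
# Run this from a daily/weekly scheduled task so the offline scan data stays fresh.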

Related

Script runs hundreds of times faster in ISE than in the shell. Why, and how do I fix it?

Some other people at work and I have been trying to figure out exactly why this excerpt of this script runs so much faster in ISE than in the shell.
For context, the entire script (which compares AD hashes to a list of known compromised hashes), will run in ISE in about 30 minutes with the expected results. However, when invoked remotely or run locally from the shell, it takes up to 10 days in some cases.
We've found that this little bit of code in a function is where things go wonky. I'm not 100% certain, but I believe it may result from the use of System.IO.StreamReader, specifically calling the ReadLine() method; but I'm really not sure.
$fsHashDictionary = New-Object System.IO.FileStream $HashDictionary,'Open','Read','Read'
$frHashDictionary = New-Object System.IO.StreamReader($fsHashDictionary)
while (($lineHashDictionary = $frHashDictionary.ReadLine()) -ne $null) {
    if ($htADNTHashes.ContainsKey($lineHashDictionary.Split(":")[0].ToUpper()))
    {
        $foFoundObject = [PSCustomObject]@{
            User      = $htADNTHashes[$lineHashDictionary.Split(":")[0].ToUpper()]
            Frequency = $lineHashDictionary.Split(":")[1]
            Hash      = $lineHashDictionary.Split(":")[0].ToUpper()
        }
        $mrMatchedResults += $foFoundObject
    }
}
AFAIK, there isn't anything that by itself explains a script running hundreds of times faster in ISE than in the shell, so I suspect that the difference in available memory between the two sessions is causing your script to run into performance issues.
Custom PowerShell objects are pretty heavy. To give you an idea of how much memory they consume, try something like this:
$memBefore = (Get-Process -Id $pid).WS
$foFoundObject = [PSCustomObject]@{
    User      = $htADNTHashes[$lineHashDictionary.Split(":")[0].ToUpper()]
    Frequency = $lineHashDictionary.Split(":")[1]
    Hash      = $lineHashDictionary.Split(":")[0].ToUpper()
}
$memAfter = (Get-Process -Id $pid).WS
$memAfter - $memBefore
Together with the fact that arrays (such as $mrMatchedResults) are immutable, which causes the array to be rebuilt every time you use the increase assignment operator (+=), the PowerShell session might be running out of physical memory, causing Windows to constantly swap memory pages.
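To see the cost of rebuilding the array for yourself, here is a quick demonstration of my own (not from the answer): compare growing an array with += against collecting the loop output in a single assignment.
# My sketch: += allocates a new array and copies every existing element on each iteration.
Measure-Command { $a = @(); foreach ($i in 1..10000) { $a += $i } }   # rebuilds the array 10,000 times
Measure-Command { $a = foreach ($i in 1..10000) { $i } }              # single assignment, no rebuilds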
.NET methods like [System.IO.StreamReader] are definitely a lot faster than PowerShell cmdlets (such as Get-Content), but that doesn't mean you have to put everything into memory. Meaning, instead of collecting all the results into a variable (which keeps everything in memory), stream each object to the next cmdlet.
Especially for your main object, try to respect the PowerShell pipeline. As recommended in Why should I avoid using the increase assignment operator (+=) to create a collection?, you'd better not assign the output at all but pass the pipeline output directly to the next cmdlet (and eventually release it to its destination, e.g. the display, AD, or disk) to free up memory.
And if you do use .NET classes (such as the StreamReader class), make sure that you dispose of the object as shown in the PowerShell scripting performance considerations article, otherwise your function might leak even more memory than necessary.
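Putting those two recommendations together, a rough sketch of a streaming version of the original loop might look like this. The function name is hypothetical and this is my sketch, not the answer's code:
# Rough sketch (hypothetical function name): read the dictionary line by line, emit each
# match to the pipeline instead of collecting it in $mrMatchedResults, and dispose of the
# StreamReader when done.
function Get-MatchedHash {
    param ($HashDictionary, $htADNTHashes)
    $frHashDictionary = [System.IO.StreamReader]::new($HashDictionary)
    try {
        while (($line = $frHashDictionary.ReadLine()) -ne $null) {
            $key = $line.Split(":")[0].ToUpper()
            if ($htADNTHashes.ContainsKey($key)) {
                [PSCustomObject]@{
                    User      = $htADNTHashes[$key]
                    Frequency = $line.Split(":")[1]
                    Hash      = $key
                }
            }
        }
    }
    finally {
        $frHashDictionary.Dispose()
    }
}
# Stream the matches straight to their destination (here a CSV file) to keep memory usage flat.
Get-MatchedHash -HashDictionary $HashDictionary -htADNTHashes $htADNTHashes |
    Export-Csv -Path .\MatchedResults.csv -NoTypeInformation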
The performance of a complete (PowerShell) solution is supposed to be better than the sum of its parts. Meaning, don't focus too much on a single function when it comes to performance issues; instead, look at your whole solution. The PowerShell pipeline gives you the opportunity to e.g. load objects from AD and process them almost simultaneously, using just a little more memory than a single object requires.
It's probably because ISE uses the WPF framework and benefits from hardware acceleration, while a PowerShell console does not.

Domain rejoin as post imaging task

My Question:
Edit: I'm specifically trying to do this programmatically through PowerShell.
How can I check to see if a computer that is not currently joined to the domain (Freshly imaged) has an entry in the domain so that I can delete it prior to re-adding the computer to the domain? i.e. to ensure that as I image computers their old records are removed and only a single Computer object exists for each machine?
I know how to check if a computer is currently connected to a domain, and I know how to reconcile a trust relationship. Have I made a mistake in not using a script to pull the computer out of the domain as a pre-install task? I've already imaged a number of computers but I forgot some software and will be re-imaging again. So I'm hoping to get this implemented to remove all duplicate entries on the next go-around.
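For reference, the check-and-remove step itself can be scripted with the ActiveDirectory module. A minimal sketch of my own (not part of the original question), assuming RSAT is installed and the server/credential placeholders are filled in:
# Minimal sketch: a machine that is not yet domain-joined needs explicit credentials
# and a DC to talk to. $cred and the server/domain names below are placeholders.
Import-Module ActiveDirectory
$name = $env:COMPUTERNAME          # or the name the freshly imaged machine will get
$existing = Get-ADComputer -Filter "Name -eq '$name'" -Server 'dc01.example.local' -Credential $cred
if ($existing) {
    Remove-ADComputer -Identity $existing.DistinguishedName -Server 'dc01.example.local' -Credential $cred -Confirm:$false
}
Add-Computer -DomainName 'example.local' -Credential $cred -Restart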
Background:
I've recently stepped into a network and domain that has less than 10% documentation available. It was one of those lovely systems held together with duct tape, built as-needed using whatever tutorials were available at the time.
In attempting to fix some of the more glaring issues, I've prepared a fresh Windows 10 image.
Now, all the computers are already in the domain, but because of the organizational issues and such, I'm using a PowerShell script to re-add them and put them into better-defined OUs with names that make more sense. Previously, devices were named after the individual that used them, but with so many staff changes that system is completely ineffective for tracking devices.
Previous Search Attempts:
I've tried searching for post image tasks for checking if a computer already exists before adding it to a domain, but the only results I've been able to get both through Google and Spiceworks are those that simply go over adding a computer or removing it from the domain, and the ones on repairing the trust relationship.
It may be simply that I'm not aware of the correct phrasing for the question, so if this has been answered before I would be ecstatic for someone to post the link for me.
Thank you for reading!
The solution ended up being to simply add the computers to the domain and then after 90 days I ran a script to purge anything that hadn't reported into Active Directory. I did the same thing with accounts at a 180 day mark.
It meant I had ~10,000 extra items in Active Directory for a while, which had a negative impact on speed, but now that everything is properly tracked I won't have to repeat the process next year.
I made certain to back up all items using ldifde in the event that I accidentally deleted a machine that was only used at certain times of the year such as the ISTEP test caching machines. Being able to back them up made me much more willing to bypass reconciling the entries and to simply purge them.
Here is the script in case anyone comes across the same issue:
#I always set the location at the start of a script in case I forget at the end of a previous script.
Set-Location C:\
#ActiveDirectory is the only necessary module for this script.
Import-Module ActiveDirectory
#While this is not 100% secure, I still feel it is better than leaving the credentials in plaintext within the code.
$keyfile = "\\[domain controller]\GPO Deployment Files\Scripts\AES.txt"
[byte[]] $key = ([byte array key])
$secure = Get-Content $keyfile | ConvertTo-SecureString -Key $key
$username = "[domain\admin]"
$cred = new-object -typename System.Management.Automation.PSCredential `
-argumentlist $username, $secure
#Establishes the 90 day cutoff based off the current day.
$datecutoff = (Get-Date).AddDays(-90)
#As I wanted to go district-wide, I did not bother adding an OU based filter.
$computers = Get-ADComputer -Filter {LastLogonDate -lt $datecutoff} -Properties * | Sort LastLogonDate | Select DistinguishedName,Name,LastLogonDate
#This is a test CSV - I ran all code up until this line so I could verify the list with a quick visual skim.
$computers | Add-Content -Path "\\[domain controller]\DeletedADItems\test.csv"
#Stepping through the list
Foreach ($computer in $computers) {
#I know there are likely elegant ways to do this, but using the -replace option was the simplest solution I could come up with at the time.
$computername = $computer.Name -replace '@{name=',''
$computername = $computername -replace '}',''
$computerDN = $computer.DistinguishedName -replace '@{DistinguishedName=',''
$computerDN = $computerDN -replace '}',''
<# I discovered that LDIFDE doesn't always play nicely with PS drives and such, or at least it failed to do so in my case.
So I back up the files to my local drive and then I copy them to the domain controller path that I used with the test.csv file. #>
$exportFile = "C:\DeletedADItems\$computername.ldf"
#Prior to deletion, export the object to the export file location.
ldifde -d "$computerDN" -f "$exportFile"
#Finally, remove the item and mark confirm false so you don't have to confirm hundreds of deletions.
Remove-ADComputer -Identity "$computerDN" -Confirm:$False
}
It is mostly self-explanatory, but I added some basic comments just in case.
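For the user accounts mentioned above (purged at the 180-day mark), a similar approach works. The following is my own sketch of that variant, not the script that was actually run, and the paths are placeholders:
# My sketch of the analogous 180-day purge for user accounts; paths are placeholders.
Import-Module ActiveDirectory
$datecutoff = (Get-Date).AddDays(-180)
$users = Get-ADUser -Filter {LastLogonDate -lt $datecutoff} -Properties LastLogonDate |
    Sort-Object LastLogonDate |
    Select-Object DistinguishedName, Name, LastLogonDate
foreach ($user in $users) {
    # Back each object up with ldifde before deleting it, as with the computer objects.
    $exportFile = "C:\DeletedADItems\$($user.Name).ldf"
    ldifde -d "$($user.DistinguishedName)" -f "$exportFile"
    Remove-ADUser -Identity $user.DistinguishedName -Confirm:$False
}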

Windows 10 Update - No LastSuccessTime in Registry

I have a Nagios plugin https://outsideit.net/check-ms-win-updates/ which checks when the last WSUS update was installed successfully. This is based on a string value 'LastSuccessTime' in a registry key located here: 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\Results\Install'
$LastSuccessTimeFolder = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\Results\Install'
$LastSuccessTimeValue = Get-ItemProperty -Path $LastSuccessTimeFolder -Name LastSuccessTime | Select-Object -ExpandProperty LastSuccessTime
This key does not seem to be available on Windows 10. So how can I get the LastSuccessTime date/time from a Windows 10 PC?
Not the best solution, but parsing C:\Windows\Logs\WindowsUpdate will easily get you a list of the last times that updates were checked for.
Figuring out whether a check was successful would require parsing the logs themselves. How hard that would be depends on whether the "Exit code" at the end changes based on success or failure. Since I don't need that for my current purposes, I'll leave it to some future expert to decipher.
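An alternative worth considering (my suggestion, not part of the answer above) is to skip the registry and the logs entirely and ask the Windows Update Agent for its install history via COM:
# My sketch: take the date of the most recent successful installation from the WUA history.
# Operation 1 = Installation, ResultCode 2 = Succeeded.
$Searcher = (New-Object -ComObject Microsoft.Update.Session).CreateUpdateSearcher()
$HistoryCount = $Searcher.GetTotalHistoryCount()
$LastSuccessTime = $Searcher.QueryHistory(0, $HistoryCount) |
    Where-Object { $_.Operation -eq 1 -and $_.ResultCode -eq 2 } |
    Sort-Object Date -Descending |
    Select-Object -ExpandProperty Date -First 1
$LastSuccessTime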

Why is this PowerShell code (Invoke-WebRequest / getElementsByTagName) so incredibly slow on my machines, but not others?

I wrote some screen-scraping code in PowerShell and was surprised that it took around 30 seconds to parse a few HTML tables. I stripped it down to try and figure out where all the time was being spent, and it seems to be in the getElementsByTagName calls.
I've included a script below which, on my home desktop, my work desktop and my home slate, takes around 1-2 seconds for each iteration (full results pasted below). However, other people in the PowerShell community are reporting far shorter times (only several milliseconds for each iteration).
I'm struggling to find any way of narrowing down the problem, and there doesn't seem to be a pattern to the OS/PS/.NET/IE versions.
The desktop I'm currently running it on is a brand new Windows 8 install with only PS3 and .NET 4.5 installed (and all Windows Update patches). No Visual Studio. No PowerShell profile.
$url = "http://www.icy-veins.com/restoration-shaman-wow-pve-healing-gear-loot-best-in-slot"
$response = (iwr $url).ParsedHtml
# Loop through the h2 tags
$response.body.getElementsByTagName("h2") | foreach {
# Get the table that comes after the heading
$slotTable = $_.nextSibling
# Grab the rows from the table, skipping the first row (column headers)
measure-command { $rows = $slotTable.getElementsByTagName("tr") | select -Skip 1 } | select TotalMilliseconds
}
Results from my desktop (the work PC and slate give near identical results):
TotalMilliseconds
-----------------
1575.7633
2371.5566
1073.7552
2307.8844
1779.5518
1063.9977
1588.5112
1372.4927
1248.7245
1718.3555
3283.843
2931.1616
2557.8595
1230.5093
995.2934
However, some people in the Google+ PowerShell community reported results like this:
TotalMilliseconds
-----------------
76.9098
112.6745
56.6522
140.5845
84.9599
48.6669
79.9283
73.4511
94.0683
81.4443
147.809
139.2805
111.4078
56.3881
41.3386
I've tried both PowerShell ISE and a standard console, no difference. For the work being done, these times seem kinda excessive, and judging by the posts in the Google+ community, it can go quicker!
See my comment in: https://connect.microsoft.com/PowerShell/feedback/details/778371/invoke-webrequest-getelementsbytagname-is-incredibly-slow-on-some-machines#tabs
I got the same slowness running the script in 64-bit mode, but when running in 32-bit mode, everything is very fast!
Lee Holmes was able to reproduce the issue, and here is his writeup
"The issue is that he’s piping COM objects into another cmdlet – in this case, Select-Object. When that happens, we attempt to bind parameters by property name. Enumerating property names of a COM object is brutally slow – so we’re spending 86% of our time on two very basic CLR API calls:
(…)
// Get the function description from a COM type
typeinfo.GetFuncDesc(index, out pFuncDesc);
(…)
// Get the function name from a COM function description
typeinfo.GetDocumentation(funcdesc.memid, out strName, out strDoc, out id, out strHelp);
(…)
We might be able to do something smart here with caching.
A workaround is to not pipe into Select-Object, but instead use language features:
# Grab the rows from the table, skipping the first row (column headers)
$allRows = @($slotTable.getElementsByTagName("tr"))
$rows = $allRows[1..$allRows.Count]
"
Did you try disabling progress?
$ProgressPreference = "SilentlyContinue"
In my case this solved serious performance problems with Invoke-WebRequest.
I have noticed the same phenomenon on a new Windows 8 system. I have a 104MB file stored on a remote web server that takes from 2 to 5 minutes to download depending on server load, bandwidth, etc. Tried through FF, IE10, Chrome, cURL, and even set up a test FTP server and tried with Windows FTP command. I consistently get the same results.
However, with Invoke-WebRequest the same file (this is not an exaggeration) takes nearly 3 HOURS to transfer.
$file = 'C:\User\me\Desktop\file.mp4'
$site = 'http://my.site/file.mp4'
Invoke-WebRequest $site -Method Get -OutFile $file
Seems pretty cut-and-dry -- site's not https, file's not an executable or anything that Windows might see as 'unsafe', and there's no authentication needed. It just takes forever to finish.
I thought that my AV's real-time scanning might be the culprit, but disabling that made no discernible difference. Is there maybe some memory allocation at work here, similar to how adding the -ReadCount option to Get-Content makes getting the content of large files much faster? I can't find any such option for Invoke-WebRequest.
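Two workarounds commonly suggested for this symptom (my additions, not part of the post above) are to silence the progress bar, as mentioned earlier, or to bypass Invoke-WebRequest for large downloads altogether:
# My sketch of common workarounds for slow large downloads with Invoke-WebRequest.
# 1) Suppress the progress bar, which is redrawn for every chunk received:
$ProgressPreference = 'SilentlyContinue'
Invoke-WebRequest $site -OutFile $file
# 2) Or skip Invoke-WebRequest and let WebClient stream the file to disk:
(New-Object System.Net.WebClient).DownloadFile($site, $file)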

UnauthorizedAccessException on MemoryMappedFile in C# 4

I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even possible at all, or am I a crazy person?
The idea would be to map the existing binary file directly to memory for some preferably higher-speed operations, or at least to see how these things work.
using System.IO.MemoryMappedFiles;
System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap");
MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity();
System.IO.FileStream file = fi.Open(System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite);
MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length, MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true);
MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor();
PcapHeader head = new PcapHeader();
FileMapView.Read<PcapHeader>(0, out head);
I get System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.) on the mf.CreateViewAccessor() line.
I don't think it's file-permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read-lock on the file. This is on Vista with UAC disabled.
If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .NET 4.0.
Thanks!
I know this is an old question, but I just ran into the same error and was able to solve it.
Even though I was opening the MemoryMappedFile as read-only (MemoryMappedFileRights.Read) as you are, I also needed to create the view accessor as read-only as well:
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
Then it worked. Hope this helps someone else.
If the size is more than the file length, it gives the UnauthorizedAccessException, because we are trying to access memory beyond the limits of the file.
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
It is difficult to say what might be going wrong. Since there is no documentation on the MSDN website yet, your best bet is to install FILEMON from SysInternals, and see why that is happening.
Alternately, you can attach a native debugger (like WinDBG) to the process, and put a breakpoint on MapViewOfFile and other overloads. And then see why that call is failing.
Using .CreateViewStream() on the instance of MemoryMappedFile removed the error from my code. I was unable to get .CreateViewAccessor() working with the access-denied error.
