PowerShell function to restart a PC - Windows

I want to create a function that takes an array of PC names and checks whether the PC name passed as a parameter matches the name in the array at index $i. But my function has a bug: $nomPeripherique is empty even though I call the function with "DESKTOP-QVRFEN4".
$tableauPeripherique = @("DESKTOP-QVRFEN4","DESKTOP-QVRETTD")
function Redemarrer($tableauPeripherique, $nomPeripherique) {
    for ($i = 0; $i -lt $tableauPeripherique.Length; $i++)
    {
        if ($tableauPeripherique[$i] -eq $nomPeripherique)
        {
            Restart-Computer -ComputerName $nomPeripherique
        }
        else
        {
            echo "nom de peripherique n'est pas présent dans la liste"
        }
    }
}
Redemarrer($tableauPeripherique,"DESKTOP-QVRFEN4")

You are calling the function incorrectly. This is a common mistake for new PowerShell users, since in many programming languages a function call has its arguments in parentheses after the function name.
The call should look like:
Redemarrer $tableauPeripherique "DESKTOP-QVRFEN4"
No comma and no parentheses. The way you are calling it, you're passing a single two-element array as the first argument, so there's no second argument to compare against inside the loop.
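A minimal sketch makes the difference visible (Show-Args is a hypothetical helper; $tableauPeripherique is the array from the question):
function Show-Args($premier, $second) {
    "premier : $($premier -join ', ')"
    "second  : '$second'"
}
# Parentheses and a comma build ONE array argument; $second stays empty:
Show-Args($tableauPeripherique, "DESKTOP-QVRFEN4")
# Space-separated arguments bind as intended:
Show-Args $tableauPeripherique "DESKTOP-QVRFEN4"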
Personally, I prefer advanced parameters. While there's a lot to know on the topic of advanced functions and their parameters, the barrier to entry is low.
function Redemarrer
{
    Param(
        [Parameter(Mandatory = $true, Position = 0)]
        [String[]]$tableauPeripherique,

        [Parameter(Mandatory = $true, Position = 1)]
        [String]$nomPeripherique
    ) # End Param block...

    Process{
        for ($i = 0; $i -lt $tableauPeripherique.Length; $i++)
        {
            if ($tableauPeripherique[$i] -eq $nomPeripherique) {
                Restart-Computer -ComputerName $nomPeripherique
            }
            else {
                Write-Output "nom de peripherique n'est pas présent dans la liste"
            }
        }
    } # End Process Block...
}
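With Mandatory and Position set as above, the function can be called either positionally or with named parameters, for example:
Redemarrer $tableauPeripherique "DESKTOP-QVRFEN4"
Redemarrer -tableauPeripherique $tableauPeripherique -nomPeripherique "DESKTOP-QVRFEN4"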
Note: You do not need Write-Output or its alias echo. More importantly, the function will return the string if and when the else block fires. This can be problematic if you are assigning the function's return value and/or piping to another command. In short, if your only intent is to write something to the console, use Write-Host instead. However, as you get more advanced, there are reasons not to do that either.
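A quick sketch of why that matters, assuming the Redemarrer function above and a name that is not in the list (DESKTOP-ABSENT is made up here):
# The else-branch string becomes part of the function's return value:
$result = Redemarrer $tableauPeripherique "DESKTOP-ABSENT"
$result  # one "not found" message per non-matching element, not a status code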
Of course this answer is a poor substitute for studying the topic. One good, relatively entry-level book on the matter is "Learn PowerShell Scripting in a Month of Lunches" by Don Jones & Jeffrey Hicks. Incidentally, Don Jones has a lot to say about Write-Host.


Invoke-Command faster than the command itself?

I was measuring some ways to write to files in PowerShell. No question about that, but I don't understand why the first Measure-Command statement below takes longer to execute than the second one.
They are the same, except that in the second one I put the code in a scriptblock and pass it to Invoke-Command, while in the first one I run the code directly.
All the information I can find about Invoke-Command performance is about remoting.
This block takes about 4 seconds:
Measure-Command {
    $stream = [System.IO.StreamWriter] "$PSScriptRoot\t.txt"
    $i = 0
    while ($i -le 1000000) {
        $stream.WriteLine("This is the line number: $i")
        $i++
    }
    $stream.Close()
} # takes 4 sec
And this code below which is exactly the same but written in a scriptblock passed to Invoke-Command takes about 1 second:
Measure-Command {
    $cmdtest = {
        $stream = [System.IO.StreamWriter] "$PSScriptRoot\t2.txt"
        $i = 0
        while ($i -le 1000000) {
            $stream.WriteLine("This is the line number: $i")
            $i++
        }
        $stream.Close()
    }
    Invoke-Command -ScriptBlock $cmdtest
} # Takes 1 second
How is that possible?
As it turns out, based on feedback from a PowerShell team member on GitHub issue #8911, the issue is more generally about (implicit) dot-sourcing (such as direct invocation of an expression) vs. running in a child scope, such as with &, the call operator, or, in the case at hand, with Invoke-Command -ScriptBlock.
Running in a child scope avoids variable lookups that are performed when (implicitly) dot-sourcing.
Therefore, as of Windows PowerShell v5.1 / PowerShell (Core) 7.2.x, you can speed up statements involving script blocks by invoking them via & { ... }, in a child scope (somewhat counter-intuitively, given that creating a new scope involves extra work).
Note that using & means that such blocks then cannot modify the caller's variables directly, but there are workarounds.
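A minimal sketch of one such workaround: rather than mutating a caller variable inside the block, emit the value as output and capture it on assignment:
$total = & {
    $sum = 0
    foreach ($n in 1..10) { $sum += $n }
    $sum  # implicit output becomes the block's result
}
$total  # -> 55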
The following simplified code, which uses a foreach statement to loop 1 million times (1e6), demonstrates the performance advantage of running via & { ... }:
# REGULAR, direct invocation of an expression (a `foreach` statement in this case),
# which is implicitly DOT-SOURCED
(Measure-Command { $result = foreach ($n in 1..1e6) { $n } }).TotalSeconds
# OPTIMIZED invocation in CHILD SCOPE, using & { ... }
# up to 10+ TIMES FASTER, depending on OS and PowerShell edition
(Measure-Command { $result = & { foreach ($n in 1..1e6) { $n } } }).TotalSeconds
However, note that the performance advantage diminishes and can even go away the more preexisting variables are being referenced in the script block:
# Define a few sample variables to reference in the script blocks.
# Note that, due to PowerShell's dynamic scoping, even the child
# scope created by & { ... } sees these variables.
$i1=1; $i2=2; $i3=3; $i4=4; $i5=5
(Measure-Command { $result = foreach ($n in 1..1e6) { $n, $i1, $i2, $i3, $i4, $i5 } }).TotalSeconds
# MAY OR MAY NOT BE FASTER, depending on the OS and PowerShell edition.
(Measure-Command { $result = & { foreach ($n in 1..1e6) { $n, $i1, $i2, $i3, $i4, $i5 } } }).TotalSeconds
The reason is that variables that aren't created in the script block (by assigning to them inside it) require a variable lookup with & { ... } too, due to PowerShell's dynamic scoping (see this answer).
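By that logic, copying the preexisting variables into block-local variables first should restore the fast path, since the hot loop then only touches variables created inside the block. This is a sketch to benchmark on your own system, not a guarantee:
(Measure-Command {
    $result = & {
        # Block-local copies; the loop below no longer needs parent-scope lookups.
        $a1 = $i1; $a2 = $i2; $a3 = $i3; $a4 = $i4; $a5 = $i5
        foreach ($n in 1..1e6) { $n, $a1, $a2, $a3, $a4, $a5 }
    }
}).TotalSeconds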

Avoid duplicate bulk New-ADUser creation via a CSV file

I have some modifications to make to this script. A UserID is normally built from a user's first name and last name.
What I need to do is compare the proxyAddresses and SamAccountName attributes associated with each username before the account is created.
For example, take Jack Sparrow: if jsparrow is already in use, the script should try jasparrow (first and second letters of the first name); if jasparrow is in use as well, it should try jacsparrow, and so on. I want to avoid duplicate usernames.
2. I decided it would be better to make 'fun' passwords that use the first two letters of the FirstName, the day/month, and the first two letters of the LastName. The end result is that users get a password like "Ja1009Sp".
Firstname,LastName,Department,Manager,MobilePhone
Jack,Sparrow,IT,jsmith,1 88 635 5254-0551
John Smith,Sparrow,Finance,jsmith,188 635 5254-0554
Script:
Import-Module ActiveDirectory
$UserList = Import-CSV -Path C:\Temp\CreateUsers.csv
$targetOU='OU=usersOU,DC=My,DC=Domain,DC=org'
$upnDomain='sec.local'
foreach ($Person in $UserList) {
    $useritems = @{
        GivenName=$Person.Firstname
        Surname=$Person.LastName
        Department=$Person.Department
        AccountPassword=ConvertTo-SecureString -String $Person.Password -AsPlainText -Force
        ChangePasswordAtLogon=$false
        Enabled=$true
        DisplayName="$($Person.Firstname) $($Person.Lastname)"
        Manager=$Person.Manager
        MobilePhone=$Person.MobilePhone
        Name="$($Person.Firstname) $($Person.Lastname)"
        SamAccountName="$($Person.Firstname+$Person.LastName.Substring(0,1))"
        UserPrincipalName="$($Person.FirstName+$Person.LastName.Substring(0,1))@$upnDomain"
        Company="Contoso"
    }
    New-ADUser @useritems -Path $targetOU
}
Try something like this. I don't have AD available at the moment to test the Get-ADUser query used to look for existing accounts, so it might need some tuning.
foreach ($Person in $UserList) {
    # Reset counters
    $i = 1
    $n = 1
    do {
        if ($i -le $Person.Firstname.Length) {
            $user = "$($Person.Firstname.Substring(0,$i)+$Person.LastName)"
            $i++
        } else {
            # All combinations in use, adding a number
            $user = "$($Person.Firstname.Substring(0,1)+$Person.LastName+$n)"
            $n++
        }
    } while ((Get-ADUser -Filter "(samAccountName -eq '$user') -or (proxyaddresses -like '$user*')"))

    # Resulting username:
    #$user

    #$useritems = @{
    #    .....
    #    SamAccountName=$user
    #    UserPrincipalName="$user@$upnDomain"
    #    ....
    #}
}
If all combinations are in use, including jacksparrow, it tries jsparrow1, jsparrow2, and so on until it finds a free number.
The password can be generated using:
$Password = "{0}{1}{2}" -f $Person.Firstname.Substring(0,2), (Get-Date).ToString("ddMM"), $Person.Lastname.Substring(0,2)
For Jack Sparrow created on 10 September, this yields "Ja1009Sp".

Converting a PowerShell script to a runspace

I wrote a quick script to find the percentage of users in one user list (TEMP.txt) that are also in another user list (TEMP2.txt). It worked great until my user lists grew to a couple hundred thousand entries or so; now it's too slow. I want to convert it to use a runspace to speed it up, but I am failing miserably. The original script is:
$USERLIST1 = gc .\TEMP.txt
$i = 0
ForEach ($User in $USERLIST1) {
    If (gc .\TEMP2.txt | Select-String $User -Quiet) {
        $i = $i + 1
    }
}
$Count = gc .\TEMP2.txt | Measure-Object -Line
$decimal = $i / $Count.Lines
$percent = $decimal * 100
Write-Host "$percent %"
Sorry, I am still new at PowerShell.
Not sure how much this will help you; I am new to runspaces as well. But here is some code I used with a Windows Form to run things asynchronously in a separate runspace; you might be able to adapt it to do what you need:
$Runspace = [Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace($Host)
$Runspace.ApartmentState = 'STA'
$Runspace.ThreadOptions = 'ReuseThread'
$Runspace.Open()
#Add the Form object to the Runspace environment
$Runspace.SessionStateProxy.SetVariable('Form', $Form)
#Create a new PowerShell object (a Thread)
$PowerShellRunspace = [System.Management.Automation.PowerShell]::Create()
#Initializes the PowerShell object with the runspace
$PowerShellRunspace.Runspace = $Runspace
#Add the scriptblock which should run inside the runspace
$PowerShellRunspace.AddScript({
[System.Windows.Forms.Application]::Run($Form)
})
#Open and run the runspace asynchronously
$AsyncResult = $PowerShellRunspace.BeginInvoke()
#End the pipeline of the PowerShell object
$PowerShellRunspace.EndInvoke($AsyncResult)
#Close the runspace
$Runspace.Close()
#Remove the PowerShell object and its resources
$PowerShellRunspace.Dispose()
Setting the runspace concept aside, the following script should run a bit faster, because it reads TEMP2.txt once instead of on every loop iteration:
$USERLIST1 = gc .\TEMP.txt
$USERLIST2 = gc .\TEMP2.txt
$i = 0
ForEach ($User in $USERLIST1) {
    if ($USERLIST2.Contains($User)) {
        $i += 1
    }
}
$Count = $USERLIST2.Count
$decimal = $i / $Count
$percent = $decimal * 100
Write-Host "$percent %"
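For very large lists a hash-based lookup should be faster still, since each membership test is O(1) instead of a scan. A minimal sketch, assuming the same TEMP.txt / TEMP2.txt inputs, unique lines in TEMP2.txt, and PowerShell 5 or later for the ::new syntax:
# Build the set once, then test each user against it:
$set = [System.Collections.Generic.HashSet[string]]::new([string[]](gc .\TEMP2.txt))
$found = 0
foreach ($User in (gc .\TEMP.txt)) {
    if ($set.Contains($User)) { $found++ }
}
Write-Host "$($found / $set.Count * 100) %"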

Parsing string into ARGV equivalent (Windows and Perl)

Edit - Answer posted below
I have a script that normally takes its arguments via @ARGV, but in some cases it is invoked by another script (which I cannot modify) that instead passes only a config filename, which among other things contains the command-line options that should have been passed directly.
Example:
Args=--test --pdf "C:\testing\my pdf files\test.pdf"
If possible I'd like a way to parse this string into an array that would be identical to @ARGV.
I have a workaround where I set up an external Perl script that just echoes @ARGV, and I invoke that script as shown below (standard boilerplate removed).
echo-args.pl
print join ("\n", @ARGV);
test-echo-args.pl
$my_args = '--test --pdf "C:\testing\my pdf files\test.pdf"';
@args = map { chomp ; $_ } `perl echo-args.pl $my_args`;
This seems inelegant, but it works. Is there a better way that doesn't invoke a new process? I did try splitting and processing the string myself, but there are some oddities in Windows command-line quoting, e.g. -a"b c" becomes '-ab c' and -a"b"" becomes -ab", and I'd rather not handle the edge cases myself; I know they'll bite me one day if I do.
Answer - thanks ikegami!
I've posted a working program below that uses Win32::API and CommandLineToArgvW from shell32.dll, based on ikegami's advice. It is intentionally verbose in the hope that it'll be easier to follow for anyone who, like me, is extremely rusty with C and pointer arithmetic.
Any tips are welcome, apart from the obvious simplifications :)
use strict;
use warnings;
use Encode qw( encode decode );
use Win32::API qw( );
use Data::Dumper;
# create a test argument string, with some variations, and pack it
# apparently an empty string returns $^X which is documented so check before calling
my $arg_string = '--test 33 -3-t" "es 33\t2 ';
my $packed_arg_string = encode('UTF-16le', $arg_string."\0");
# create a packed integer buffer for output
my $packed_argc_buf_ptr = pack('L', 0);
# create then call the function and get the result
my $func = Win32::API->new('shell32.dll', 'CommandLineToArgvW', 'PP', 'N')
or die $^E;
my $ret = $func->Call($packed_arg_string, $packed_argc_buf_ptr);
# unpack to get the number of parsed arguments
my $argc = unpack('L', $packed_argc_buf_ptr);
print "We parsed $argc arguments\n";
# parse the return value to get the actual strings
my @argv = decode_LPWSTR_array($ret, $argc);
print Dumper \@argv;
# try not to leak memory
my $local_free = Win32::API->new('kernel32.dll', 'LocalFree', 'N', '')
or die $^E;
$local_free->Call($ret);
exit;
sub decode_LPWSTR_array {
    my ($ptr, $num) = @_;
    return undef if !$ptr;
    # $ptr is the memory location of the array of strings (i.e. more pointers)
    # $num is how many we need to get
    my @strings = ();
    for (1 .. $num) {
        # convert $ptr to a long, using that location read 4 bytes - this is the pointer to the next string
        my $string_location = unpack('P4', pack('L', $ptr));
        # make it human readable
        my $readable_string_location = unpack('L', $string_location);
        # decode the string and save it for later
        push(@strings, decode_LPCWSTR($readable_string_location));
        # our pointers are 32-bit
        $ptr += 4;
    }
    return @strings;
}
# Copied from http://stackoverflow.com/questions/5529928/perl-win32api-and-pointers
sub decode_LPCWSTR {
my ($ptr) = #_;
return undef if !$ptr;
my $sW = '';
for (;;) {
my $chW = unpack('P2', pack('L', $ptr));
last if $chW eq "\0\0";
$sW .= $chW;
$ptr += 2;
}
return decode('UTF-16le', $sW);
}
On Unix systems, it's the shell that parses the command line into strings. On Windows, it's up to each application. This is normally done using the CommandLineToArgvW Windows API function (which you can call with the help of Win32::API), but the parsing rules are documented if you want to reimplement them yourself.

How to split a huge folder?

We have a folder on Windows that's ... huge. I ran "dir > list.txt". The command stopped responding after 1.5 hours. The output file is about 200 MB and shows there are at least 2.8 million files. I know the situation is silly, but let's focus on the problem itself. If I have such a folder, how can I split it into some "manageable" sub-folders? Surprisingly, all the solutions I have come up with involve getting all the files in the folder at some point, which is a no-no in my case. Any suggestions?
Thanks to Keith Hill and Mehrdad. I accepted Keith's answer because that's exactly what I wanted to do, but I couldn't quite get PowerShell working quickly enough.
With Mehrdad's tip, I wrote this little program. It took 7+ hours to move 2.8 million files. So the initial dir command did finish; it just somehow never returned to the console.
using System;
using System.IO;

namespace SplitHugeFolder
{
    class Program
    {
        static void Main(string[] args)
        {
            var destination = args[1];
            if (!Directory.Exists(destination))
                Directory.CreateDirectory(destination);

            var di = new DirectoryInfo(args[0]);
            var batchCount = int.Parse(args[2]);
            int currentBatch = 0;
            string targetFolder = GetNewSubfolder(destination);

            // EnumerateFiles streams results instead of materializing
            // all 2.8 million FileInfos at once.
            foreach (var fileInfo in di.EnumerateFiles())
            {
                if (currentBatch == batchCount)
                {
                    Console.WriteLine("New Batch...");
                    currentBatch = 0;
                    targetFolder = GetNewSubfolder(destination);
                }

                var source = fileInfo.FullName;
                var target = Path.Combine(targetFolder, fileInfo.Name);
                File.Move(source, target);
                currentBatch++;
            }
        }

        private static string GetNewSubfolder(string parent)
        {
            string newFolder;
            do
            {
                newFolder = Path.Combine(parent, Path.GetRandomFileName());
            } while (Directory.Exists(newFolder));
            Directory.CreateDirectory(newFolder);
            return newFolder;
        }
    }
}
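Invocation, going by how Main reads its arguments (source directory, then destination, then batch size), would look something like:
SplitHugeFolder.exe C:\hugedir C:\newdir 100000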
I use Get-ChildItem to index my whole C: drive every night into c:\filelist.txt. That's about 580,000 files and the resulting file size is ~60MB. Admittedly I'm on Win7 x64 with 8 GB of RAM. That said, you might try something like this:
md c:\newdir
Get-ChildItem C:\hugedir -r |
    Foreach -Begin {$i = $j = 0} -Process {
        if ($i++ % 100000 -eq 0) {
            $dest = "C:\newdir\dir$j"
            md $dest
            $j++
        }
        Move-Item $_ $dest
    }
The key is to do the move in a streaming manner. That is, don't collect all the Get-ChildItem results into a single variable and then proceed; that would require all 2.8 million FileInfos to be in memory at once. Also, if you use the Name parameter on Get-ChildItem, it will output a single string containing the file's path relative to the base dir. Even then, perhaps this size will just overwhelm the memory available to you. And no doubt, it will take quite a while to execute. IIRC, my indexing script takes several hours.
If it does work, you should wind up with c:\newdir\dir0 through dir28, but then again, I haven't tested this script at all, so your mileage may vary. BTW, this approach assumes that your huge dir is a pretty flat dir.
Update: Using the Name parameter is almost twice as slow so don't use that parameter.
I found that Get-ChildItem is the slowest option when working with many items in a directory.
Look at the results:
Measure-Command { Get-ChildItem C:\Windows -rec | Out-Null }
TotalSeconds : 77,3730275
Measure-Command { listdir C:\Windows | Out-Null }
TotalSeconds : 20,4077132
measure-command { cmd /c dir c:\windows /s /b | out-null }
TotalSeconds : 13,8357157
(with listdir function defined like this:
function listdir($dir) {
    $dir
    [system.io.directory]::GetFiles($dir)
    foreach ($d in [system.io.directory]::GetDirectories($dir)) {
        listdir $d
    }
}
)
With this in mind, here's what I would do: stay in PowerShell, but use a more low-level approach with .NET methods:
function DoForFirst($directory, $max, $action) {
    function go($dir, $options)
    {
        foreach ($f in [System.IO.Directory]::EnumerateFiles($dir))
        {
            if ($options.Remaining -le 0) { return }
            & $action $f
            $options.Remaining--
        }
        foreach ($d in [System.IO.Directory]::EnumerateDirectories($dir))
        {
            if ($options.Remaining -le 0) { return }
            go $d $options
        }
    }
    go $directory (New-Object PsObject -Property @{ Remaining = $max })
}
doForFirst c:\windows 100 { write-host File: $args }
# I use PsObject to avoid global variables and ref parameters.
To use the code, you have to run under the .NET 4.0 runtime -- the enumerating methods are new in .NET 4.0.
You can specify any scriptblock as the $action parameter, so in your case it would be something like { Move-Item -LiteralPath $args -Destination c:\dir }.
First, just try listing the first 1,000 items; I hope it will finish very quickly:
doForFirst c:\yourdirectory 1000 {write-host '.' -nonew }
And of course you can process all items at once; just use
doForFirst c:\yourdirectory ([long]::MaxValue) {move-item ... }
and each item is processed immediately after it is returned. So the whole list is not read first and then processed; it is processed while being read.
How about starting with this:
cmd /c dir /b > list.txt
That should get you a list of all the file names.
If you're doing "dir > list.txt" from a PowerShell prompt, note that dir there is an alias for Get-ChildItem. Get-ChildItem has known issues enumerating large directories, and the object collections it returns can get huge.
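A quick way to check, and the cheaper native alternative:
# Confirm what 'dir' resolves to in a PowerShell session:
Get-Alias dir            # -> Get-ChildItem
# Stream bare file names through the native command instead:
cmd /c dir /b > list.txt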
