Basic Windows 10 PowerShell question on variable initialization and passing arguments

I am currently learning about PowerShell and how to write a script to launch an application. I borrowed the following snippet of code and modified it to learn how to launch Notepad. My question is: what does $args.Clone() do, and what does it derive from? (It is from the original code, which defined/called a different path and executable program.) I realize that the variable $myArgs, to the left of the equal sign, is being initialized by the expression on the right. However, I have not been successful in finding resources about what you can do with .Clone(), so I thought I would try asking here.
BTW, the script works: it launches notepad.exe and names the text file 'pp'. If the file has not previously been created, it asks me whether I want to create the text file 'pp'.
$exePath = $env:NOTEPAD_HOME + '/Windows/notepad.exe'  # build the full path to the executable
$myArgs = $args.Clone()                                # copy the arguments that were passed to the script
$myArgs += '-pp'                                       # append two more arguments to the copy
$myArgs += $env:NOTEPAD_HOME
& $exePath $myArgs                                     # call operator (&): each array element becomes a separate argument

tl;dr
Your modification of $myArgs can be simplified as follows:
$myArgs = $args + '-pp' + $env:NOTEPAD_HOME
With an array as the LHS, operator + performs concatenation, i.e., it appends to the array, though note that appending means that a new array is created in the process, given that arrays are fixed-size data structures in .NET.
The fact that a new array is implicitly created anyway makes the call to .Clone() in your code unnecessary.
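To see that += allocates a new array rather than growing the original, you can compare object identity before and after; a minimal sketch:
$a = 1, 2, 3
$before = $a
$a += 4                                  # a NEW 4-element array is created and assigned to $a
[object]::ReferenceEquals($before, $a)   # False: $a now refers to a different array
$before.Count                            # 3: the original array is unchanged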
Background Information
Typically - and in the case of the automatic $args variable - the .Clone() method is an implementation of the System.ICloneable interface.
The purpose of .Clone() is to create a copy (clone) of an object, but - as the answer linked to by BACON explains - the semantics of this operation - shallow (see below) vs. deep copying - aren't prescribed, which is why use of System.ICloneable is generally discouraged.
$args is an array (System.Array), and for arrays, .Clone() creates a so-called shallow copy. That is, how the elements of the array are copied depends on whether they are instances of value types or reference types:
for value-type elements, the new array will have an independent copy of the data of the original array's elements.
for reference-type elements, the new array will have a copy of the reference of the original array's elements, which means that the elements of both arrays point to the same data (objects).
For more information about value types vs. reference types, see this answer.
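To see the shallow-copy semantics in action, here's a minimal sketch using a hashtable as a reference-type element:
$orig = 1, @{ Name = 'foo' }   # a value-type element and a reference-type element
$copy = $orig.Clone()          # shallow copy
$copy[0] = 99                  # value-type slot: independent of the original
$copy[1].Name = 'bar'          # reference-type slot: SAME underlying hashtable
$orig[0]                       # -> 1: unaffected
$orig[1].Name                  # -> bar: both arrays reference the same hashtable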
PowerShell implicitly clones arrays in the way described when you use += and +.

Related

How to rewrite a variable in a for loop in TI-Basic

I'm trying to program a numerical method on my TI-89, in the TI-Basic language. The problem is that when I overwrite the variable inside the loop, the assignment doesn't take effect. I'm new to this language and I don't know if I'm missing some detail.
Item()
Prgm
Input "f(x)",a    // call the function in text mode
Define s(z,t)=a   // convert the text into a function
Local xa,ya       // declare local variables
x->xa             // assign values that I already have saved to local variables
y->ya
Local i
For i,1,10
s(xa,ya)->xa      // evaluate the function, and I have to rewrite the first parameter
EndFor
EndPrgm
I'm not entirely sure what's causing the problem, but I would try using a list to store the data instead. You can rewrite the list and clear it easily, and you can store strings in lists as well. Lists can be edited at any time, which should fix your problem. Although this solution might not be as efficient, it should work. I don't have my calculator with me, so I can't test it, sorry :(. The two statements below add things to the list and clear the list.
L1(x,y,z)
ClrList L1
Good luck! TI-Basic can be difficult sometimes

Why are parentheses used in this PowerShell command?

I'm new to PowerShell. Can someone explain to me why the parentheses and the dot are used in:
(get-cluster).PreferredSite="SiteA"
why not just:
get-cluster | set-preferredSite -Name SiteA
Set-preferredSite looks like the (hypothetical) name of a cmdlet, and, as TessellatingHeckler points out, it would be practically impossible to build a cmdlet for each and every property name (leaving aside the fact that there can be dynamically created properties whose names you cannot predict).
Therefore, you have no choice but to use PowerShell's syntax for assigning a value to an object's property, which generally requires the use of an expression, which is not CLI-like (argument mode) but programming-language-like (expression mode).
PowerShell v3+ does offer a less "noisy" syntax with its so-called operation statement; e.g.:
Get-Cluster | % PreferredSite
which is the equivalent of the more verbose:
Get-Cluster | % { $_.PreferredSite }
However, operation statements do not support assignments, so the following does NOT work:
Get-Cluster | % PreferredSite = 'SiteA' # !! Does NOT work.
My guess as to why that is not supported is that it's rare to want to use the pipeline to set the property of each item in a potentially large input set to the same value.
For a smallish collection, (Get-Cluster).PreferredSite="SiteA" will do just fine, even though the output from Get-Cluster is first collected in memory as a whole. If Get-Cluster outputs more than one object, .PreferredSite is accessed on each item in the collection in PSv3+, a feature called member-access enumeration (see the sketch at the end of this answer).
For large collections, use % (ForEach-Object) with a custom script block:
Get-Cluster | % { $_.PreferredSite = 'SiteA' }
Note how:
the statement inside { ... } is an assignment expression.
$_ is the automatic variable that refers to the input object at hand in each iteration.
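To illustrate the member-access enumeration mentioned above, here's a sketch with stand-in objects (since Get-Cluster itself may not be available; the objects and property values are made up for illustration):
$clusters = [pscustomobject] @{ Name = 'C1'; PreferredSite = 'SiteB' },
            [pscustomobject] @{ Name = 'C2'; PreferredSite = 'SiteC' }
$clusters.PreferredSite              # GET on each element -> 'SiteB', 'SiteC'
($clusters).PreferredSite = 'SiteA'  # SET on each element, as with (Get-Cluster)...
$clusters.PreferredSite              # -> 'SiteA', 'SiteA'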

Is c++11 operator[] equivalent to emplace on map insertion?

For C++11, is there still a performance difference between the following?
(for std::map<Foo, std::vector<Bar> > as an example)
map[key] = myVector and map.emplace(key, myVector)
The part I'm not figuring out is the exact internals of operator[]. My understanding so far has been (when the key doesn't exist):
1. Create a new key and the associated empty default vector in place inside the map
2. Return a reference to the associated empty vector
3. Assign myVector to the reference???
Point 3 is the part I couldn't understand: how can you assign a new value to a reference in the first place?
Though I cannot sort out point 3, I think somehow there's just a copy/move required. Assuming C++11 is smart enough to know it's going to be a move operation, is this whole "[]" assignment then already cheaper than insert()? Is it almost equivalent to emplace()? ---- default construction and then moving the content over, versus constructing the vector with its content directly in place?
There are a lot of differences between the two.
If you use operator[], then the map will default-construct the value. The return value from operator[] is a reference to this default-constructed object, which you then assign to with operator=.
If you use emplace, the map will directly construct the value with the parameters you provide.
So the operator[] method will always use two-stage construction. If the default constructor is slow, or if copy/move construction is faster than copy/move assignment, then it could be problematic.
However, emplace will not replace the value if the provided key already exists. Whereas operator[] followed by operator= will always replace the value, whether there was one there or not.
There are other differences too. If copying/moving throws, emplace guarantees that the map will not be changed. By contrast, operator[] will always insert a default constructed element. So if the later copy/move assignment fails, then the map has already been changed. That key will exist with a default constructed value_type.
Really, performance is not the first thing you should be thinking about when deciding which one to use. You need to focus first on whether it has the desired behavior.
C++17 will provide insert_or_assign, which has the effect of map[] = v;, but with the exception safety of insert/emplace.
how can you assign a new value to a reference in the first place?
It's fundamentally no different from assigning to any non-const reference:
int i = 5;
int &j = i;
j = 30;
i == 30; //This is true.

Can LINQ be used in PowerShell?

I am trying to use LINQ in PowerShell. It seems like this should be entirely possible since PowerShell is built on top of the .NET Framework, but I cannot get it to work. For example, when I try the following (contrived) code:
$data = 0..10
[System.Linq.Enumerable]::Where($data, { param($x) $x -gt 5 })
I get the following error:
Cannot find an overload for "Where" and the argument count: "2".
Never mind the fact that this could be accomplished with Where-Object. The point of this question is not to find an idiomatic way of doing this one operation in PowerShell. Some tasks would be light-years easier to do in PowerShell if I could use LINQ.
The problem with your code is that PowerShell cannot decide to which specific delegate type the ScriptBlock instance ({ ... }) should be cast.
So it isn't able to choose a type-concrete delegate instantiation for the generic 2nd parameter of the Where method. And it also doesn't have syntax to specify a generic parameter explicitly. To resolve this problem, you need to cast the ScriptBlock instance to the right delegate type yourself:
$data = 0..10
[System.Linq.Enumerable]::Where($data, [Func[object,bool]]{ param($x) $x -gt 5 })
Why does [Func[object, bool]] work, but [Func[int, bool]] does not?
Because your $data is [object[]], not [int[]], given that PowerShell creates [object[]] arrays by default; you can, however, construct [int[]] instances explicitly:
$intdata = [int[]]$data
[System.Linq.Enumerable]::Where($intdata, [Func[int,bool]]{ param($x) $x -gt 5 })
To complement PetSerAl's helpful answer with a broader answer to match the question's generic title:
Note: The following applies up to at least PowerShell 7.2. Direct support for LINQ - with syntax comparable to the one in C# - is being discussed for a future version of PowerShell Core in GitHub issue #2226.
Using LINQ in PowerShell:
You need PowerShell v3 or higher.
You cannot call the LINQ extension methods directly on collection instances and instead must invoke the LINQ methods as static methods of the [System.Linq.Enumerable] type to which you pass the input collection as the first argument.
Having to do so takes away the fluidity of the LINQ API, because method chaining is no longer an option. Instead, you must nest static calls, in reverse order.
E.g., instead of $inputCollection.Where(...).OrderBy(...) you must write [Linq.Enumerable]::OrderBy([Linq.Enumerable]::Where($inputCollection, ...), ...)
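For instance, a runnable sketch of such nesting (filter, then sort; the data is arbitrary):
$names = 'carol', 'alice', 'dave', 'bob'
# C#-style $names.Where(...).OrderBy(...) becomes, in reverse order:
[Linq.Enumerable]::OrderBy(
    [Linq.Enumerable]::Where($names, [Func[object, bool]] { $Args[0].Length -le 4 }),
    [Func[object, object]] { $Args[0] }
)  # -> 'bob', 'dave'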
Helper functions and classes:
Some methods, such as .Select(), have parameters that accept generic Func<> delegates (e.g., Func<T,TResult>), which can be created using PowerShell code, via a cast applied to a script block; e.g.:
[Func[object, bool]] { $Args[0].ToString() -eq 'foo' }
The first generic type parameter of Func<> delegates must match the type of the elements of the input collection; keep in mind that PowerShell creates [object[]] arrays by default.
Some methods, such as .Contains() and .OrderBy(), have parameters that accept objects implementing specific interfaces, such as IEqualityComparer<T> and IComparer<T>; additionally, input types may need to implement IEquatable<T> in order for comparisons to work as intended, such as with .Distinct(). All of these require compiled classes written, typically, in C# (though you can create them from PowerShell by passing a string with embedded C# code to the Add-Type cmdlet); in PSv5+, however, you may also use custom PowerShell classes, with some limitations, as sketched below.
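For instance, a minimal PSv5+ sketch of a custom PowerShell class implementing IComparer<T>, usable with .OrderBy() (the class name and sort logic here are made up for illustration):
# Sorts by string length first, then alphabetically, ignoring case.
class LengthComparer : System.Collections.Generic.IComparer[string] {
    [int] Compare([string] $x, [string] $y) {
        $byLength = $x.Length.CompareTo($y.Length)
        if ($byLength -ne 0) { return $byLength }
        return [string]::Compare($x, $y, $true)  # $true = ignore case
    }
}
$words = [string[]] ('pear', 'fig', 'apple')
[Linq.Enumerable]::OrderBy($words, [Func[string, string]] { $Args[0] }, [LengthComparer]::new())
# -> 'fig', 'pear', 'apple'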
Generic methods:
Some LINQ methods themselves are generic and therefore require one or more type arguments.
In PowerShell (Core) 7.2- and Windows PowerShell, PowerShell cannot directly call such methods and must use reflection instead, because it only supports inferring type arguments, which cannot be done in this case; e.g.:
# Obtain a [string]-instantiated method of OfType<T>.
$ofTypeString = [Linq.Enumerable].GetMethod("OfType").MakeGenericMethod([string])
# Output only [string] elements in the collection.
# Note how the array must be nested for the method signature to be recognized.
PS> $ofTypeString.Invoke($null, (, ('abc', 12, 'def')))
abc
def
For a more elaborate example, see this answer.
In PowerShell (Core) 7.3+, you now have the option of specifying type arguments explicitly (see the conceptual about_Calling_Generic_Methods help topic); e.g.:
# Output only [string] elements in the collection.
# Note the need to enclose the input array in (...)
# -> 'abc', 'def'
[Linq.Enumerable]::OfType[string](('abc', 12, 'def'))
The LINQ methods return a lazy enumerable rather than an actual collection; that is, what is returned isn't the actual data yet, but something that will produce the data when enumerated.
In contexts where enumeration is automatically performed, notably in the pipeline, you'll be able to use the enumerable as if it were a collection.
However, since the enumerable isn't itself a collection, you cannot get the result count by invoking .Count nor can you index into the iterator; however, you can use member-access enumeration (extracting the values of a property of the objects being enumerated).
If you do need the results as a static array to get the usual collection behavior, wrap the invocation in [Linq.Enumerable]::ToArray(...).
Similar methods that return different data structures exist, such as ::ToList().
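A minimal sketch of the difference between the lazy enumerable and a materialized array:
$lazy = [Linq.Enumerable]::Where([int[]] (1..5), [Func[int, bool]] { $Args[0] -gt 2 })
$lazy | ForEach-Object { $_ }   # the pipeline enumerates it: 3, 4, 5
# For indexing and a reliable .Count, materialize it first:
$arr = [Linq.Enumerable]::ToArray($lazy)
$arr.Count                      # -> 3
$arr[1]                         # -> 4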
For an advanced example, see this answer.
For an overview of all LINQ methods including examples, see this great article.
In short: using LINQ from PowerShell is cumbersome and is only worth the effort if any of the following apply:
you need advanced query features that PowerShell's cmdlets cannot provide.
performance is paramount - see this article.
If you want to achieve LINQ-like functionality, PowerShell has some cmdlets and functions, for instance: Select-Object, Where-Object, Sort-Object, Group-Object. It has cmdlets for most LINQ features, such as projection, restriction, ordering, grouping, and partitioning.
See Powershell One-Liners: Collections and LINQ.
For more details on using Linq and possibly how to make it easier, the article LINQ Through Powershell may be helpful.
I came across LINQ when looking for a stable sort in PowerShell (stable: if the property to sort by has the same value on two (or more) elements, their original order is preserved). Sort-Object has a -Stable switch, but only in PS 6.1+. Also, the Sort() implementations in the generic collections in .NET are not stable, so I turned to LINQ, whose documentation says OrderBy is stable.
Here's my (Test-)Code:
# Getting a stable sort in PowerShell, using LINQs OrderBy
# Testdata
# Generate List to Order and insert Data there. o will be sequential Number (original Index), i will be Property to sort for (with duplicates)
$list = [System.Collections.Generic.List[object]]::new()
foreach ($i in 1..10000) {
    $list.Add([PSCustomObject]@{ o = $i; i = $i % 50 })
}
# Sort Data
# Order the objects using LINQ. Note that OrderBy does not sort immediately; it uses deferred execution, so it sorts only when GetEnumerator is called.
$propertyToSortBy = "i" # if wanting to sort by another property, set its name here
$scriptBlock = [Scriptblock]::Create("param(`$x) `$x.$propertyToSortBy")
$resInter = [System.Linq.Enumerable]::OrderBy($list, [Func[object,object]]$scriptBlock )
# $resInter.GetEnumerator() | Out-Null
# $resInter is of Type System.Linq.OrderedEnumerable<...>. We'll copy results to a new Generic List
$res = [System.Collections.Generic.List[object]]::new()
foreach ($elem in $resInter.GetEnumerator()) {
    $res.Add($elem)
}
# Validation
# Check results: if the property to sort by is the same as in the previous record, but the previous sequence number is higher, then the sort was not stable
$propertyToSortBy = "i" ; $originalOrderProp = "o"
for ($i = 1; $i -lt $res.Count; $i++) {
    if (($res[$i-1].$propertyToSortBy -eq $res[$i].$propertyToSortBy) -and ($res[$i-1].$originalOrderProp -gt $res[$i].$originalOrderProp)) {
        Write-Host "Error on line $i - Sort is not stable! $($res[$i]), Previous: $($res[$i-1])"
    }
}
There is a way to make the call read more fluently: PowerShell's intrinsic .Where() method, which is available on all collections in v4+ and can be called directly on the variable:
$b.Where({$_ -gt 0})
Here $b is an array of bytes, and the goal is to get all bytes that are greater than 0. Works perfectly. Note, however, that this is PowerShell's built-in method, not the LINQ extension method: PowerShell has no syntax for calling extension methods instance-style, and a using namespace System.Linq statement does not change that. What using namespace System.Linq does buy you is a shorter type literal for the static calls:
using namespace System.Linq
[Enumerable]::Where([byte[]] $b, [Func[byte, bool]] { $Args[0] -gt 0 })

Using function arguments as local variables

Something like this (yes, this doesn't deal with some edge cases - that's not the point):
int CountDigits(int num) {
int count = 1;
while (num >= 10) {
count++;
num /= 10;
}
return count;
}
What's your opinion about this? That is, using function arguments as local variables.
Both are placed on the stack and are pretty much identical performance-wise; I'm wondering about the best-practices aspect of this.
I feel like an idiot when I add an additional and quite redundant line to that function consisting of int numCopy = num; however, it does bug me.
What do you think? Should this be avoided?
As a general rule, I wouldn't use a function parameter as a local processing variable, i.e. I treat function parameters as read-only.
In my mind, intuitively understandable code is paramount for maintainability, and modifying a function parameter to use as a local processing variable tends to run counter to that goal. I have come to expect that a parameter will have the same value in the middle and at the bottom of a method as it does at the top. Plus, an aptly named local processing variable may improve understandability.
Still, as @Stewart says, this rule is more or less important depending on the length and complexity of the function. For short, simple functions like the one you show, simply using the parameter itself may be easier to understand than introducing a new local variable (very subjective).
Nevertheless, if I were to write something as simple as countDigits(), I'd tend to use a remainingBalance local processing variable in lieu of modifying the num parameter as part of local processing - just seems clearer to me.
Sometimes, I will modify a local parameter at the beginning of a method to normalize the parameter:
void saveName(String name) {
name = (name != null ? name.trim() : "");
...
}
I rationalize that this is okay because:
a. it is easy to see at the top of the method,
b. the parameter maintains its original conceptual intent, and
c. the parameter is stable for the rest of the method
Then again, half the time, I'm just as apt to use a local variable anyway, just to get a couple of extra finals in there (okay, that's a bad reason, but I like final):
void saveName(final String name) {
final String normalizedName = (name != null ? name.trim() : "");
...
}
If, 99% of the time, the code leaves function parameters unmodified (i.e., mutating parameters are unintuitive or unexpected for this code base), then, during that other 1% of the time, dropping a quick comment about a mutating parameter at the top of a long/complex function could be a big boon to understandability:
int CountDigits(int num) {
// num is consumed
int count = 1;
while (num >= 10) {
count++;
num /= 10;
}
return count;
}
P.S. :-)
parameters vs arguments
http://en.wikipedia.org/wiki/Parameter_(computer_science)#Parameters_and_arguments
These two terms are sometimes loosely used interchangeably; in particular, "argument" is sometimes used in place of "parameter". Nevertheless, there is a difference. Properly, parameters appear in procedure definitions; arguments appear in procedure calls.
So,
int foo(int bar)
bar is a parameter.
int x = 5
int y = foo(x)
The value of x is the argument for the bar parameter.
It always feels a little funny to me when I do this, but that's not really a good reason to avoid it.
One reason you might potentially want to avoid it is for debugging purposes. Being able to tell the difference between "scratchpad" variables and the input to the function can be very useful when you're halfway through debugging.
I can't say it's something that comes up very often in my experience - and often you can find that it's worth introducing another variable just for the sake of having a different name, but if the code which is otherwise cleanest ends up changing the value of the variable, then so be it.
One situation where this can come up and be entirely reasonable is where you've got some value meaning "use the default" (typically a null reference in a language like Java or C#). In that case I think it's entirely reasonable to modify the value of the parameter to the "real" default value. This is particularly useful in C# 4 where you can have optional parameters, but the default value has to be a constant:
For example:
public static void WriteText(string file, string text, Encoding encoding = null)
{
// Null means "use the default" which we would document to be UTF-8
encoding = encoding ?? Encoding.UTF8;
// Rest of code here
}
About C and C++:
My opinion is that using the parameter as a local variable of the function is fine because it is a local variable already. Why then not use it as such?
I feel silly too when copying the parameter into a new local variable just to have a modifiable variable to work with.
But I think this is pretty much a personal opinion. Do it as you like. If you feel silly copying the parameter just because of this, that indicates you don't like the practice, and then you shouldn't do it.
If I don't need a copy of the original value, I don't declare a new variable.
IMO, I don't think mutating parameter values is a bad practice in general; it depends on how you're going to use it in your code.
My team coding standard recommends against this because it can get out of hand. To my mind for a function like the one you show, it doesn't hurt because everyone can see what is going on. The problem is that with time functions get longer, and they get bug fixes in them. As soon as a function is more than one screen full of code, this starts to get confusing which is why our coding standard bans it.
The compiler ought to be able to get rid of the redundant variable quite easily, so it has no efficiency impact. It is probably just between you and your code reviewer whether this is OK or not.
I would generally not change the parameter value within the function. If at some point later in the function you need to refer to the original value, you still have it. In your simple case there is no problem, but if you add more code later, you may refer to 'num' without realizing it has been changed.
The code needs to be as self-sufficient as possible. What I mean by that is that you now have a dependency on what is being passed in as part of your algorithm. If another member of your team decides to change this to pass-by-reference, then you might have big problems.
The best practice is definitely to copy the inbound parameters if you expect them to be immutable.
I typically don't modify function parameters, unless they're pointers, in which case I might alter the value that's pointed to.
I think the best practice here varies by language. For example, in Perl you can localize any variable, or even part of a variable, to a local scope, so that changing it in that scope will not have any effect outside of it:
sub my_function
{
    my ($arg1, $arg2) = @_;         # copy the arguments off the stack into lexical variables
    $arg1++;                        # $arg1 is already a private copy, so the caller never sees this change
    local $arg2->{key1};            # localize only the key1 portion of the hashref referenced by $arg2
    $arg2->{key1}->{key2} = 'foo';  # this change is undone when the function returns
}
Occasionally I have been bitten by forgetting to localize a data structure that was passed by reference to a function and that I changed inside the function. Conversely, I have also returned a data structure as a function result that was shared among multiple systems, and the caller then proceeded to change the data by mistake, affecting those other systems in a difficult-to-trace problem usually called action at a distance. The best thing to do here is to make a clone of the data before returning it*, or to make it read-only**.
* In Perl, see the function dclone() in the built-in Storable module.
** In Perl, see lock_hash() or lock_hash_ref() in the built-in Hash::Util module.
