Why is a PowerShell workflow so much slower than a regular function with the same functionality? - performance

workflow test1 {
    parallel {
        1
        2
        Get-Process
        4
        5
    }
}
function test2 {
    1
    2
    Get-Process
    4
    5
}
When running test1 and test2, test1 is noticeably slower than test2, and Measure-Command confirms this. Why does this happen? In what cases should a workflow be used?
I work on Windows 7, PowerShell 5.1.

Related

Laravel 5.7 Database Design Layout / Average from Collection

I have a situation where each Order can have Feedback. If the product is physical, the Feedback can have many packaging_feedbacks. The packaging_feedbacks are supposed to be a relation to the packaging_feedback_details.
Feedback Model
public function packagingFeedbacks()
{
    return $this->hasManyThrough('App\PackagingFeedbackDetail', 'App\PackagingFeedback',
        'feedback_id', 'id', 'id', 'user_selection');
}
packaging_feedback_details
id | type_id (used to group the "names" for each feedback option) | name
 1 | 0 | well packed
 2 | 0 | bad packaging
 3 | 1 | fast shipping
 4 | 1 | express delivery
packaging_feedbacks
id | feedback_id | user_selection (pointing to the ID of packaging_feedback_details)
 1 | 1 |  2
 2 | 1 |  6
 3 | 1 |  7
 4 | 1 | 12
 5 | 1 | 15
 6 | 1 | 17
 7 | 2 |  1
 8 | 2 |  6
 9 | 2 |  7
10 | 2 | 12
11 | 2 | 15
12 | 2 | 17
13 | 3 |  1
14 | 3 |  6
15 | 3 |  7
16 | 3 | 12
17 | 3 | 15
18 | 3 | 17
Now I would like to be able to get the average selection of the users for a physical product. I started by using:
$result = Product::with('userFeedbacks.packagingFeedbacks')->where('id', 1)->first();
$collection = collect();
foreach ($result->userFeedbacks as $key) {
    foreach ($key->packagingFeedbacks as $skey) {
        $collection->push($skey);
    }
}
foreach ($collection->groupBy('type_id') as $key) {
    echo($key->average('type_id'));
}
But this does not return what I need: averaging type_id just gives back the type_id itself, not the most common selection within each group. Is there a better way to do this? And is my database design, in general, a sensible way to handle this?
The type of average you're looking for here is the mode. Laravel's collection instances have a mode() method, introduced in 5.2, which, when given a key, returns an array containing the highest-occurring value for that key.
If I have understood your question correctly, this should give you what you're after:
$result->userFeedbacks
    ->flatMap->packagingFeedbacks
    ->groupBy('type_id')
    ->map->mode('id');
The above takes advantage of flatMap() and higher order messages on collections.
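The underlying computation (flatten the nested selections, group them by type, take the most frequent value in each group) is language-independent. Here is a minimal sketch in Python for illustration; the function name and the sample data are made up for the example:

```python
from collections import Counter, defaultdict

def most_common_per_type(selections):
    """Group (type_id, detail_id) pairs by type, then return the
    most frequent detail_id within each group (i.e. the mode)."""
    groups = defaultdict(list)
    for type_id, detail_id in selections:
        groups[type_id].append(detail_id)
    return {t: Counter(ids).most_common(1)[0][0] for t, ids in groups.items()}

# Three users picked detail 6 for type 0; for type 1, two picked 7 and one picked 9.
picks = [(0, 6), (0, 6), (0, 6), (1, 7), (1, 7), (1, 9)]
print(most_common_per_type(picks))  # {0: 6, 1: 7}
```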

Limiting the CPU utilization of a user in windows

I want to know if there is any way to limit CPU usage by user name in Windows. For example, there are 8 cores and I want to limit the global CPU usage of a user to 6, so he cannot run more than 6 serial jobs (each using one core).
In Linux, this can be done via scripting, but I haven't seen anything similar, even with PowerShell scripts. Does that mean it cannot be done?
The keyword for this is Affinity.
Affinity is a bitmask; bit 0 corresponds to the first core. Written with the first core's bit on the left:
10000000 = first core
01000000 = second core
11000000 = first and second core
00100000 = third core
10100000 = first and third core
11100000 = first, second and third core
function Set-Affinity([string]$Username, [int[]]$core){
    [int]$affinity = 0
    $core | ForEach-Object { $affinity += [math]::Pow(2, $_) }
    Get-Process -IncludeUserName | Where-Object { $_.UserName -eq $Username } | ForEach-Object {
        $_.ProcessorAffinity = $affinity
    }
}
Set-Affinity -username "TESTDOMAIN\TESTUSER" -core 0,1,2,3
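The mask arithmetic itself is independent of PowerShell: each zero-based core index contributes 2 to the power of that index, and the contributions are combined. A small illustration in Python (function name is mine, not part of any API):

```python
def affinity_mask(cores):
    """Build an affinity bitmask from zero-based core indices
    by OR-ing in 1 << index for each requested core."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

print(affinity_mask([0, 1, 2, 3]))  # 15: the lowest four bits set
print(bin(affinity_mask([0, 2])))   # 0b101: first and third core
```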

WINBUGS : adding time and product fixed effects in a hierarchical data

I am working on hierarchical panel data using WinBUGS. Assume data on school performance: the outcome logs, with independent variables logp and rank. All schools are divided into three categories (cat) and I need a beta coefficient for each category (thus HLM). I want to account for time-specific and school-specific effects in the model. One way would be to add dummy variables to the list of variables under mu[i], but that would get messy because I have up to 60 schools. I am sure there must be a better way to handle that.
My data looks like the following:
school time logs logp cat rank
1 1 4.2 8.9 1 1
1 2 4.2 8.1 1 2
1 3 3.5 9.2 1 1
2 1 4.1 7.5 1 2
2 2 4.5 6.5 1 2
3 1 5.1 6.6 2 4
3 2 6.2 6.8 3 7
#logs = log(score)
#logp = log(average hours of inputs)
#rank - rank of school
#cat = section red, section blue, section white in school (hierarchies)
My WinBUGS code is given below.
model {
    # N observations
    for (i in 1:n) {
        logs[i] ~ dnorm(mu[i], tau)
        mu[i] <- bcons + bprice*(logp[i]) + brank[cat[i]]*(rank[i])
    }
    # C categories
    for (c in 1:C) {
        brank[c] ~ dnorm(beta, taub)
    }
    # priors
    bcons ~ dnorm(0, 1.0E-6)
    bprice ~ dnorm(0, 1.0E-6)
    bad ~ dnorm(0, 1.0E-6)
    beta ~ dnorm(0, 1.0E-6)
    tau ~ dgamma(0.001, 0.001)
    taub ~ dgamma(0.001, 0.001)
}
As you can see in the data sample above, I have multiple observations per school over time. How can I modify the code to account for time- and school-specific fixed effects? I have used Stata in the past, where the fe, be, and i.time options take care of fixed effects in panel data. But here I am lost.
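One common BUGS idiom for this (sketched here, untested; it assumes index vectors school[i] and time[i] plus counts S and T are supplied with the data) is to add effect terms indexed by school and by time, rather than expanding dummy variables:

```
model {
    for (i in 1:n) {
        logs[i] ~ dnorm(mu[i], tau)
        mu[i] <- bcons + bprice*(logp[i]) + brank[cat[i]]*(rank[i])
                 + u.school[school[i]] + u.time[time[i]]
    }
    # school- and time-specific effects, modelled as random effects,
    # which is the usual BUGS substitute for large dummy-variable sets
    for (s in 1:S) { u.school[s] ~ dnorm(0, tau.school) }
    for (t in 1:T) { u.time[t] ~ dnorm(0, tau.time) }
    tau.school ~ dgamma(0.001, 0.001)
    tau.time ~ dgamma(0.001, 0.001)
    # remaining category loop and priors as in the original model
}
```

This keeps the 60 school effects to a single indexed vector; if strict fixed effects are wanted, one level can be pinned to zero as a reference category.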

Change affinity of process with windows script

In Windows, with
START /node 1 /affinity ff cmd /C "app.exe"
I can set the affinity of app.exe (which cores app.exe may use).
From a Windows script, how can I change the affinity of an already running process?
PowerShell can do this task for you
Get Affinity:
PowerShell "Get-Process app | Select-Object ProcessorAffinity"
Set Affinity:
PowerShell "$Process = Get-Process app; $Process.ProcessorAffinity=255"
Example: (8 Core Processor)
Core # = Value = BitMask
Core 1 = 1 = 00000001
Core 2 = 2 = 00000010
Core 3 = 4 = 00000100
Core 4 = 8 = 00001000
Core 5 = 16 = 00010000
Core 6 = 32 = 00100000
Core 7 = 64 = 01000000
Core 8 = 128 = 10000000
Just add the decimal values together for which core you want to use. 255 = All 8 cores.
All Cores = 255 = 11111111
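Decoding works the same way in reverse: test each bit of the mask. A quick illustration of the arithmetic in Python (helper name is mine):

```python
def cores_from_mask(mask):
    """Return 1-based core numbers for each set bit in an affinity mask."""
    return [bit + 1 for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(cores_from_mask(13))   # [1, 3, 4] -> 1 + 4 + 8 = 13
print(cores_from_mask(255))  # [1, 2, 3, 4, 5, 6, 7, 8], all eight cores
```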
Example Output:
C:\>PowerShell "Get-Process notepad++ | Select-Object ProcessorAffinity"
ProcessorAffinity
-----------------
255
C:\>PowerShell "$Process = Get-Process notepad++; $Process.ProcessorAffinity=13"
C:\>PowerShell "Get-Process notepad++ | Select-Object ProcessorAffinity"
ProcessorAffinity
-----------------
13
C:\>PowerShell "$Process = Get-Process notepad++; $Process.ProcessorAffinity=255"
C:\>
Source:
Here is a nicely detailed post on how to change a process's affinity:
http://www.energizedtech.com/2010/07/powershell-setting-processor-a.html
The accepted answer works, but only for the first process in the list. The solution to that in the comments does not work for me.
To change affinity of all processes with the same name use this:
Powershell "ForEach($PROCESS in GET-PROCESS processname) { $PROCESS.ProcessorAffinity=255}"
Where 255 is the mask as given in the accepted answer.
For anyone else looking for answers to this and not finding any, the solution I found was to use an app called WinAFC (or AffinityChanger). This is a partial GUI, partial command line app that allows you to specify profiles for certain executables, and will poll the process list for them. If it finds matching processes, it will change the affinity of those processes according to the settings in the loaded profile.
There is some documentation here: http://affinitychanger.sourceforge.net/
For my purposes, I created a profile that looked like this:
TestMode = 0
TimeInterval = 1
*\convert.exe := PAIR0+PAIR1
This profile sets any convert.exe process to use the first two CPU core pairs (CPU0, CPU1, CPU2, and CPU3), polling every second. TestMode is a toggle that allows you to see if your profile is working without actually setting affinities.
Hope someone finds this useful!
If you really like enums, you can do it this way. ProcessorAffinity is an IntPtr, so it takes a little extra type casting.
[Flags()] enum Cores {
    Core1 = 0x0001
    Core2 = 0x0002
    Core3 = 0x0004
    Core4 = 0x0008
    Core5 = 0x0010
    Core6 = 0x0020
    Core7 = 0x0040
    Core8 = 0x0080
}
$a = Get-Process notepad
[Cores][int]$a.ProcessorAffinity
# displays: Core1, Core2, Core3, Core4
$a.ProcessorAffinity = [int][Cores]'Core1, Core2, Core3, Core4'
wmic process where name="some.exe" call setpriority ProcessIDLevel
I think these are the priority levels. You can also use the PID instead of the process name.

Using Task with .Net 4.0 vs .Net 4.5?

When I ran this piece of code:
private void button1_Click(object sender, EventArgs e)
{
    Start(sender, e);
}

private void Start(object sender, EventArgs e)
{
    for (int i = 0; i < 5; i++)
    {
        System.Threading.Tasks.Task.Factory.StartNew(() => dosomething(i));
        Debug.WriteLine("Called " + i);
    }
    Debug.WriteLine("Finished");
}

public void dosomething(int i)
{
    Debug.WriteLine("Enters " + i);
    lock (this)
    {
        Debug.WriteLine("Working " + i);
        Thread.Sleep(100);
    }
    Debug.WriteLine("Done " + i);
}
the output differs between .NET 4.0 and 4.5. With 4.0, the number 5 is repeated; I can see the reason: the value of i has already moved to 5 before some of the tasks executed. But the same code on 4.5 shows different output.
(output from VS 2010, .NET 4.0)
Called 0
Called 1
Enters 1
Working 1
Called 2
Called 3
Called 4
Finished
Enters 0
Done 1
Enters 5
Working 0
Working 5
Done 0
**Enters 5
Working 5
Done 5
Enters 5
Done 5
Working 5
Done 5**
but when I ran it with .NET 4.5 (VS 2011 beta), the result is:
(output from VS 2011 beta, .NET 4.5)
Enters 0
Working 0
Called 0
Called 1
Enters 2
Called 2
Enters 2
Enters 3
Called 3
Called 4
Finished
Done 0
Working 2
Enters 5
Done 2
Working 3
Done 3
Working 5
Done 5
Working 2
Done 2
I couldn't find any documented changes to Task in CLR 4.5. Can anyone point me to what has changed in .NET 4.5, please?
Your code has a race condition. Let's say the loop finishes executing before any of the tasks gets started. This is entirely possible.
Then i will have a value of 5 in all tasks. This is the bug.
Solution: Copy i to a loop-local variable and use this local in the task's lambda.
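The same capture bug exists in any language with closures over mutable loop variables. A sketch in Python for illustration (the fix mirrors the loop-local copy suggested above; here the copy is made via a default argument):

```python
def make_tasks_buggy():
    """All lambdas close over the same variable i, so every one of
    them sees its final value once the loop has finished."""
    return [lambda: i for i in range(5)]

def make_tasks_fixed():
    """Copying i into a per-iteration default argument captures the
    value that was current at that iteration."""
    return [lambda i=i: i for i in range(5)]

print([t() for t in make_tasks_buggy()])  # [4, 4, 4, 4, 4]
print([t() for t in make_tasks_fixed()])  # [0, 1, 2, 3, 4]
```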
Your code has a race condition. That means it can behave in various ways, depending on the exact order of operations.
And any small change may affect that order, so it's not unexpected that your code behaves differently under different versions of the framework. Actually, I would expect it to behave differently across multiple runs even on the same version of .NET.