A while ago I was googling to find out how I could make a batch menu with a more "professional" look, instead of building outlines around a menu with symbols such as:
|=====|
|-----|
|_____|
but I had no luck.
Today I randomly found this article:
https://web.archive.org/web/20151204182221/https://http-server.carleton.ca/~dmcfet/menu.html
It explains that I can do this using MS-DOS (edit.com). But my computer is 64-bit Windows 10, so I don't have edit.com. How could I make this kind of menu look by hand (printing the special characters shown on the left side of the header "STEP 3, Lines, Lines, Lines.")?
Here's a batch + PowerShell menu maker I've been working on. Set the values in the batch portion, and the PowerShell portion will auto-resize and reposition the menu as needed. When a selection is made, the console buffer restores its former contents, effectively vanishing the menu.
Here's the code. Save it with a .bat extension.
<# : Batch portion
@echo off & setlocal enabledelayedexpansion

set "menu[0]=Format C:"
set "menu[1]=Send spam to boss"
set "menu[2]=Truncate database *"
set "menu[3]=Randomize user password"
set "menu[4]=Download Dilbert"
set "menu[5]=Hack local AD"

set "default=0"

powershell -noprofile "iex (gc \"%~f0\" | out-string)"
echo You chose !menu[%ERRORLEVEL%]!.

goto :EOF
: end batch / begin PowerShell hybrid chimera #>

$menutitle = "=== MENU ==="
$menuprompt = "Use the arrow keys. Hit Enter to select."
$maxlen = $menuprompt.length + 6

# Collect the menu items exported by the batch portion as environment variables.
$menu = gci env: | ?{ $_.Name -match "^menu\[\d+\]$" } | %{
    $_.Value.trim()
    $len = $_.Value.trim().Length + 6
    if ($len -gt $maxlen) { $maxlen = $len }
}

[int]$selection = $env:default
$h = $Host.UI.RawUI.WindowSize.Height
$w = $Host.UI.RawUI.WindowSize.Width
$xpos = [math]::floor(($w - ($maxlen + 5)) / 2)
$ypos = [math]::floor(($h - ($menu.Length + 4)) / 3)

# Save the region of the console buffer the menu will cover, so it can be restored later.
$offY = [console]::WindowTop
$rect = New-Object Management.Automation.Host.Rectangle `
    0,$offY,($w - 1),($offY+$ypos+$menu.length+4)
$buffer = $Host.UI.RawUI.GetBufferContents($rect)

function destroy {
    $coords = New-Object Management.Automation.Host.Coordinates 0,$offY
    $Host.UI.RawUI.SetBufferContents($coords,$buffer)
}

function getKey {
    while (-not ((37..40 + 13 + 48..(47 + $menu.length)) -contains $x)) {
        $x = $Host.UI.RawUI.ReadKey('NoEcho,IncludeKeyDown').VirtualKeyCode
    }
    $x
}

# http://goo.gl/IAmdR6
function WriteTo-Pos ([string]$str, [int]$x = 0, [int]$y = 0,
        [string]$bgc = [console]::BackgroundColor, [string]$fgc = [Console]::ForegroundColor) {
    if ($x -ge 0 -and $y -ge 0 -and $x -le [Console]::WindowWidth -and
            $y -le [Console]::WindowHeight) {
        $saveY = [console]::CursorTop
        $offY = [console]::WindowTop
        [console]::setcursorposition($x,$offY+$y)
        Write-Host $str -b $bgc -f $fgc -nonewline
        [console]::setcursorposition(0,$saveY)
    }
}

function center ([string]$what) {
    $what = " $what "
    $lpad = " " * [math]::max([math]::floor(($maxlen - $what.length) / 2), 0)
    $rpad = " " * [math]::max(($maxlen - $what.length - $lpad.length), 0)
    WriteTo-Pos "$lpad $what $rpad" $xpos $line blue yellow
}

function menu {
    $line = $ypos
    center $menutitle
    $line++
    center " "
    $line++
    for ($i=0; $item = $menu[$i]; $i++) {
        # write-host $xpad -nonewline
        $rtpad = " " * ($maxlen - $item.length)
        if ($i -eq $selection) {
            WriteTo-Pos " > $item <$rtpad" $xpos ($line++) yellow blue
        } else {
            WriteTo-Pos " $i`: $item $rtpad" $xpos ($line++) blue yellow
        }
    }
    center " "
    $line++
    center $menuprompt
    1
}

while (menu) {
    [int]$key = getKey
    switch ($key) {
        37 {} # left arrow (ignored)
        38 { if ($selection) { $selection-- }; break } # up arrow
        39 {} # right arrow (ignored)
        40 { if ($selection -lt ($menu.length - 1)) { $selection++ }; break } # down arrow
        # number key or Enter
        default { if ($key -gt 13) { $selection = $key - 48 }; destroy; exit($selection) }
    }
}
FIX:
In order to create a menu like this, you will need an editor such as Notepad++.
Open a new Notepad++ file, then go to the Encoding menu and select the Western European OEM-US character set.
Then simply go here: https://en.wikipedia.org/wiki/Box-drawing_character
and copy-paste the symbols you want for your menu into Notepad++ and save.
Example:
Displayed in Notepad++:
echo ╔════════════════════════════╗
echo ║Bill register/database Menu ║
echo ╠════════════════════════════╣
echo ║ 1. -Register a bill ║
echo ║ 2. -Check Bill Info ║
echo ║ 3. Set payment ║
echo ╚════════════════════════════╝
Displayed in normal Notepad:
echo ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»
echo ºBill register/database Menu º
echo ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ͹
echo º 1. -Register a bill º
echo º 2. -Check Bill Info º
echo º 3. Set payment º
echo ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍͼ
So as you can see, it works with normal Notepad too, but it's easier to work with in Notepad++ because it displays what the output will actually look like.
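Note that the console must also be using a matching OEM code page for the raw bytes to render as box-drawing glyphs. A minimal sketch, assuming code page 850 (Western European OEM) - you can set it explicitly at the top of the script:
@echo off
rem Force the Western European OEM code page so the bytes render as line-drawing glyphs
chcp 850 >nul
echo ╔═══════════╗
echo ║ Main menu ║
echo ╚═══════════╝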
It's not necessary to worry about code pages or file encodings at all.
On Windows 10 you can use the DEC Line Drawing character set.
This uses ANSI escape sequences to draw borders.
The border characters are represented by the letters jklmnqtuvwx (see the mapping below).
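For reference, this is the DEC Special Graphics mapping for those letters (recalled from the VT100 documentation, and consistent with the output shown below; worth double-checking):
j = ┘   k = ┐   l = ┌   m = └   n = ┼   q = ─   t = ├   u = ┤   v = ┴   w = ┬   x = │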
for /F "delims=#" %%E in ('"prompt #$E# & for %%E in (1) do rem"') do set "\e=%%E"
REM *** Enable DEC Line Drawing Mode
echo %\e%(0
echo lqqk
echo x x
echo x x
echo mqqnqqk
echo x x
echo mqqj
REM *** Enable ASCII Mode (Default)
echo %\e%(B
The output is
┌──┐
│ │
│ │
└──┼──┐
│ │
└──┘
For reference, when the file is viewed in Notepad as ANSI text (CHCP 1250 / CHCP 1252), the border characters appear as:
É Í » ş Č Ľ
How to modify a string (LINE2 "line number LINE2 is on") in a Windows ASCII text file using PowerShell 5, with search strings that are easy to read and easy to add/modify/delete? This script will parse a 2,500-line file, find 139 instances of the strings, replace them, and overwrite the original in less than 165 ms on average, depending on which method you use. Which method is faster? Which method makes the strings easier to add/modify/delete?
Search for the strings "AROUND LINE {1-9999}" and "LINE2 {1-9999}" and replace {1-9999} with the line number the code is on. The tests were done with a 2,500-line file, not the two-line sample.bat.
sample.bat contains two lines:
ECHO AROUND LINE 5936
TITLE %TIME% DISPLAY TCP-IP SETTINGS LINE2 5937
Method One: Using Get-Content + -replace + Set-Content:
Measure-Command {
    Copy-Item $env:temp\sample9.bat -Destination $env:temp\sample.bat -Force
    (gc $env:temp\sample.bat) | foreach -Begin {$lc = 1} -Process {
        $_ -replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc"
        ++$lc
    } | sc -Encoding Ascii $env:temp\sample.bat
}
Results: 175ms-387ms in ten runs for an average of 215ms.
You modify the search by adding / removing / modifying -replace.
-replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc" -replace 'PLACEMARK \d+', "PLACEMARK $lc"
powershell $env:temp\sample.ps1 $env:temp\sample.bat:
(gc $args[0]) | foreach -Begin {$lc = 1} -Process {
    $_ -replace 'AROUND LINE \d+', "AROUND LINE $lc" -replace 'LINE2 \d+', "LINE2 $lc"
    ++$lc
} | sc -Encoding Ascii $args[0]
Method Two: Using switch and the .NET Framework:
Measure-Command {
    Copy-Item $env:temp\sample9.bat -Destination $env:temp\sample.bat -Force
    $file = "$env:temp\sample.bat"
    $lc = 0
    $updatedLines = switch -Regex ([IO.File]::ReadAllLines($file)) {
        '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
        default { ++$lc; $_ }
    }
    [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
}
Results: 73ms-816ms in ten runs for an average of 175ms.
Method Three: Using switch and the .NET Framework - an optimized version based on a precompiled regex:
Measure-Command {
    Copy-Item $env:temp\sample9.bat -Destination $env:temp\sample.bat -Force
    $file = "$env:temp\sample.bat"
    $regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
    $lc = 0
    $updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
        $lc++
        $m = $regex.Match($line)
        if ($m.Success) {
            $g = $m.Groups
            $g[1].Value + $lc + $g[2].Value
        } else { $line }
    }}
    [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
}
Results: 71ms-236ms in ten runs for an average of 106ms.
Add/Modify/Delete your search string:
AROUND LINE|LINE2|PLACEMARK
AROUND LINE|LINE3
LINE4
powershell $env:temp\sample.ps1 $env:temp\sample.bat:
$file = $args[0]
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$lc = 0
$updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
    $lc++
    $m = $regex.Match($line)
    if ($m.Success) {
        $g = $m.Groups
        $g[1].Value + $lc + $g[2].Value
    } else { $line }
}}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
Editor's note: This is a follow-up question to Iterate a backed up ascii text file, find all instances of {LINE2 1-9999} replace with {LINE2 "line number the code is on"}. Overwrite. Faster?
The evolution of this question from youngest to oldest:
1. 54757890 2. 54737787 3. 54712715 4. 54682186
Update: I've used @mklement0's regex solution.
switch -Regex -File $file {
    '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
    default { ++$lc; $_ }
}
Given that the regex ^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$ contains only 2 capture groups - the part of the line before the number to replace (\d+) and the part of the line after - you must reference these groups with indices 1 and 2 in the automatic $Matches variable in the output (not 2 and 3).
Note that (?:...) is a non-capturing group, so by design it isn't reflected in $Matches.
Instead of reading the file with [IO.File]::ReadAllLines($file), I'm using the -File option with switch, which directly reads the lines from file $file.
The ++$lc inside default { ++$lc; $_ } ensures that the line counter is also incremented for non-matching lines before passing the line at hand through ($_).
Performance notes
You can improve the performance slightly with the following obscure optimization:
# Enclose the switch statement in & { ... } to speed it up slightly.
$updatedLines = & { switch -Regex -File ... }
With high iteration counts (a large number of lines), using a precompiled [regex] instance rather than a string literal that PowerShell converts to a regex behind the scenes can speed things up further - see benchmarks below.
Additionally, if case-sensitive matching is sufficient, you can squeeze out a little more performance by adding the -CaseSensitive option to the switch statement.
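For instance, a case-sensitive variant of the switch from above might look like this (a sketch, reusing the same regex and $file variable as before):
$lc = 0
$updatedLines = & { switch -Regex -CaseSensitive -File $file {
    '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
    default { ++$lc; $_ }
} }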
At a high level, what makes the solution fast is the use of switch -File to process the lines and, generally, the use of .NET types for file I/O rather than cmdlets ([IO.File]::WriteAllLines() in this case, as shown in the question) - see also this related answer.
That said, marsze's answer offers a highly optimized foreach loop approach based on a precompiled regex that is faster with higher iteration counts - it is, however, more verbose.
Benchmarks
The following code compares the performance of this answer's switch approach with marsze's foreach approach.
Note that in order to make the two solutions fully equivalent, the following tweaks were made:
The & { ... } optimization was added to the switch command as well.
The IgnoreCase and CultureInvariant options were added to the foreach approach to match the options PS regexes implicitly use.
Instead of a 6-line sample file, performance is tested with a 600-line, a 3,000-line, and a 30,000-line file respectively, so as to show the effects of the iteration count on performance.
100 runs are being averaged.
Sample results from my Windows 10 machine running Windows PowerShell v5.1 - the absolute times aren't important, but hopefully the relative performance shown in the Factor column is generally representative:
VERBOSE: Averaging 100 runs with a 600-line file of size 0.03 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.023 # switch -Regex -File with regex string literal...
1.16 0.027 # foreach with precompiled regex and [regex].Match...
1.23 0.028 # switch -Regex -File with precompiled regex...
VERBOSE: Averaging 100 runs with a 3000-line file of size 0.15 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.063 # foreach with precompiled regex and [regex].Match...
1.11 0.070 # switch -Regex -File with precompiled regex...
1.15 0.073 # switch -Regex -File with regex string literal...
VERBOSE: Averaging 100 runs with a 30000-line file of size 1.47 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.252 # foreach with precompiled regex and [regex].Match...
1.24 0.313 # switch -Regex -File with precompiled regex...
1.53 0.386 # switch -Regex -File with regex string literal...
Note how at lower iteration counts switch -regex with a string literal is fastest, but at around 1,500 lines the foreach solution with a precompiled [regex] instance starts to get faster; using a precompiled [regex] instance with switch -regex pays off to a lesser degree, only with higher iteration counts.
Benchmark code, using the Time-Command function:
# Sample file content (6 lines)
$fileContent = @'
TITLE %TIME% NO "%zmyapps1%\*.*" ARCHIVE ATTRIBUTE LINE2 1243
TITLE %TIME% DOC/SET YQJ8 LINE2 1887
SET ztitle=%TIME%: WINFOLD LINE2 2557
TITLE %TIME% _*.* IN WINFOLD LINE2 2597
TITLE %TIME% %%ZDATE1%% YQJ25 LINE2 3672
TITLE %TIME% FINISHED. PRESS ANY KEY TO SHUTDOWN ... LINE2 4922
'@

# Determine the full path to a sample file.
# NOTE: Using the *full* path is a *must* when calling .NET methods, because
# the latter generally don't see the same working dir. as PowerShell.
$file = "$PWD/test.bat"

# Note: the input is the number of 6-line blocks to write to the sample file,
# which amounts to 600 vs. 3,000 vs. 30,000 lines.
100, 500, 5000 | % {

    # Create the sample file with the sample content repeated N times.
    $repeatCount = $_
    [IO.File]::WriteAllText($file, $fileContent * $repeatCount)

    # Warm up the file cache and count the lines.
    $lineCount = [IO.File]::ReadAllLines($file).Count

    # Define the commands to compare as an array of script blocks.
    $commands =
        { # switch -Regex -File with regex string literal
            & {
                $i = 0
                $updatedLines = switch -Regex -File $file {
                    '^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$' { $Matches[1] + ++$i + $Matches[2] }
                    default { ++$i; $_ }
                }
                [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
            }
        }, { # switch -Regex -File with precompiled regex
            & {
                $i = 0
                $regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
                $updatedLines = switch -Regex -File $file {
                    $regex { $Matches[1] + ++$i + $Matches[2] }
                    default { ++$i; $_ }
                }
                [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
            }
        }, { # foreach with precompiled regex and [regex].Match
            & {
                $regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
                $i = 0
                $updatedLines = foreach ($line in [IO.File]::ReadLines($file)) {
                    $i++
                    $m = $regex.Match($line)
                    if ($m.Success) {
                        $g = $m.Groups
                        $g[1].Value + $i + $g[2].Value
                    } else { $line }
                }
                [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
            }
        }

    # How many runs to average.
    $runs = 100

    Write-Verbose -vb "Averaging $runs runs with a $lineCount-line file of size $('{0:N2} MB' -f ((Get-Item $file).Length / 1mb))..."
    Time-Command -Count $runs -ScriptBlock $commands | Out-Host
}
Alternative solution:
$regex = [Regex]::new('^(.*? (?:AROUND LINE|LINE2) )\d+(.*)$', 'Compiled, IgnoreCase, CultureInvariant')
$lc = 0
$updatedLines = & {foreach ($line in [IO.File]::ReadLines($file)) {
    $lc++
    $m = $regex.Match($line)
    if ($m.Success) {
        $g = $m.Groups
        $g[1].Value + $lc + $g[2].Value
    } else { $line }
}}
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
This code works. I just want to see how much faster someone can make it work.
Back up your Windows 10 batch file in case something goes wrong. Find all instances of the string {LINE2 1-9999} and replace with {LINE2 "line number the code is on"}. Overwrite, encoding as ASCII.
If _61.bat is:
TITLE %TIME% NO "%zmyapps1%\*.*" ARCHIVE ATTRIBUTE LINE2 1243
TITLE %TIME% DOC/SET YQJ8 LINE2 1887
SET ztitle=%TIME%: WINFOLD LINE2 2557
TITLE %TIME% _*.* IN WINFOLD LINE2 2597
TITLE %TIME% %%ZDATE1%% YQJ25 LINE2 3672
TITLE %TIME% FINISHED. PRESS ANY KEY TO SHUTDOWN ... LINE2 4922
Results:
TITLE %TIME% NO "%zmyapps1%\*.*" ARCHIVE ATTRIBUTE LINE2 1
TITLE %TIME% DOC/SET YQJ8 LINE2 2
SET ztitle=%TIME%: WINFOLD LINE2 3
TITLE %TIME% _*.* IN WINFOLD LINE2 4
TITLE %TIME% %%ZDATE1%% YQJ25 LINE2 5
TITLE %TIME% FINISHED. PRESS ANY KEY TO SHUTDOWN ... LINE2 6
Code:
Copy-Item $env:windir\_61.bat -Destination $env:temp\_61.bat
(gc $env:windir\_61.bat) | foreach -Begin {$lc = 1} -Process {
    $_ -replace "LINE2 \d*", "LINE2 $lc"
    $lc += 1
} | Out-File -Encoding Ascii $env:windir\_61.bat
I expect this to take less than 984 milliseconds. It takes 984 milliseconds. Can you think of anything to speed it up?
The key to better performance in PowerShell code (short of embedding C# code compiled on demand with Add-Type, which may or may not help) is to:
avoid use of cmdlets and the pipeline in general,
especially invocation of a script block ({...}) for each pipeline input object, such as with ForEach-Object and Where-Object
However, it isn't the pipeline per se that is to blame, it is the current inefficient implementation of these cmdlets - see GitHub issue #10982 - and there is a workaround that noticeably improves pipeline performance:
# Faster alternative to:
# 1..10 | ForEach-Object { $_ * 10 }
1..10 | . { process { $_ * 10 } }
# Faster alternative to:
# 1..10 | Where-Object { $_ -gt 5 }
1..10 | . { process { if ($_ -gt 5) { $_ } } }
Avoiding the pipeline requires direct use of .NET framework types as an alternative to cmdlets.
If feasible, use switch statements for array or line-by-line file processing - switch statements generally outperform foreach loops.
To be clear: The pipeline and cmdlets offer clear benefits, so avoiding them should only be done if optimizing performance is a must.
In your case, the following code, which combines the switch statement with direct use of the .NET framework for file I/O, seems to offer the best performance. Note that the input file is read into memory as a whole, as an array of lines, and a copy of that array with the modified lines is created before it is written back to the input file:
$file = "$env:temp\_61.bat" # must be a *full* path.
$lc = 0
$updatedLines = & { switch -Regex -File $file {
    '^(.*? LINE2 )\d+(.*)$' { $Matches[1] + ++$lc + $Matches[2] }
    default { ++$lc; $_ } # pass non-matching lines through
} }
[IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
Note:
Enclosing the switch statement in & { ... } is an obscure performance optimization explained in this answer.
If case-sensitive matching is sufficient, as suggested by the sample input, you can improve performance a little more by adding the -CaseSensitive option to the switch command.
In my tests (see below), this provided a more than 4-fold performance improvement in Windows PowerShell relative to your command.
Here's a performance comparison via the Time-Command function:
The commands compared are:
The switch command from above.
A slightly streamlined version of your own command.
A PowerShell Core v6.1+ alternative that uses the -replace operator with the array of lines as the LHS and a scriptblock as the replacement expression.
Instead of a 6-line sample file, a 6,000-line file is used.
100 runs are being averaged.
It's easy to adjust these parameters.
# Sample file content (6 lines)
$fileContent = @'
TITLE %TIME% NO "%zmyapps1%\*.*" ARCHIVE ATTRIBUTE LINE2 1243
TITLE %TIME% DOC/SET YQJ8 LINE2 1887
SET ztitle=%TIME%: WINFOLD LINE2 2557
TITLE %TIME% _*.* IN WINFOLD LINE2 2597
TITLE %TIME% %%ZDATE1%% YQJ25 LINE2 3672
TITLE %TIME% FINISHED. PRESS ANY KEY TO SHUTDOWN ... LINE2 4922
'@

# Determine the full path to a sample file.
# NOTE: Using the *full* path is a *must* when calling .NET methods, because
# the latter generally don't see the same working dir. as PowerShell.
$file = "$PWD/test.bat"

# Create the sample file with the sample content repeated N times.
$repeatCount = 1000 # -> 6,000 lines
[IO.File]::WriteAllText($file, $fileContent * $repeatCount)

# Warm up the file cache and count the lines.
$lineCount = [IO.File]::ReadAllLines($file).Count

# Define the commands to compare as an array of script blocks.
$commands =
    { # switch -Regex -File + [IO.File]::Read/WriteAllLines()
        $i = 0
        $updatedLines = & { switch -Regex -File $file {
            '^(.*? LINE2 )\d+(.*)$' { $Matches[1] + ++$i + $Matches[2] }
            default { ++$i; $_ }
        } }
        [IO.File]::WriteAllLines($file, $updatedLines, [Text.Encoding]::ASCII)
    },
    { # Get-Content + -replace + Set-Content
        (Get-Content $file) | ForEach-Object -Begin { $i = 1 } -Process {
            $_ -replace "LINE2 \d*", "LINE2 $i"
            ++$i
        } | Set-Content -Encoding Ascii $file
    }

# In PS Core v6.1+, also test -replace with a script-block operand.
if ($PSVersionTable.PSVersion.Major -ge 6 -and $PSVersionTable.PSVersion.Minor -ge 1) {
    $commands +=
        { # -replace with scriptblock + [IO.File]::Read/WriteAllLines()
            $i = 0
            [IO.File]::WriteAllLines($file,
                ([IO.File]::ReadAllLines($file) -replace '(?<= LINE2 )\d+', { (++$i) }),
                [Text.Encoding]::ASCII
            )
        }
} else {
    Write-Warning "Skipping -replace-with-scriptblock command, because it isn't supported in this PS version."
}
# How many runs to average.
$runs = 100
Write-Verbose -vb "Averaging $runs runs with a $lineCount-line file of size $('{0:N2} MB' -f ((Get-Item $file).Length / 1mb))..."
Time-Command -Count $runs -ScriptBlock $commands
Here are sample results from my Windows 10 machine (the absolute timings aren't important, but hopefully the relative performance shown in the Factor column is somewhat representative); the PowerShell Core version used is v6.2.0-preview.4:
# Windows 10, Windows PowerShell v5.1
WARNING: Skipping -replace-with-scriptblock command, because it isn't supported in this PS version.
VERBOSE: Averaging 100 runs with a 6000-line file of size 0.29 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.108 # switch -Regex -File + [IO.File]::Read/WriteAllLines()...
4.22 0.455 # Get-Content + -replace + Set-Content...
# Windows 10, PowerShell Core v6.2.0-preview 4
VERBOSE: Averaging 100 runs with a 6000-line file of size 0.29 MB...
Factor Secs (100-run avg.) Command
------ ------------------- -------
1.00 0.101 # switch -Regex -File + [IO.File]::Read/WriteAllLines()…
1.67 0.169 # -replace with scriptblock + [IO.File]::Read/WriteAllLines()…
4.98 0.503 # Get-Content + -replace + Set-Content…
I have the following code in a CMD shell script.
SET MYdir=%NewPath%\%CUST%\SuppliesTypes
SET "MYsCount=1"
SET /p MYsCount="Number of MYs in project? (default: %MYSCount%): "
for /L %%a in (1,1,%MYsCount%) do (
SET /p MYNums="Enter %%a MY Number: "
call md "%MYdir%\MY_%%MYNums%%"
)
SET "MYsCount="
However, I am converting my code from CMD to PowerShell, and I do not fully understand the correct way to convert it. Here is my attempt, but it's not working - it just jumps right through.
SET MYdir=%NewPath%\%CUST%\Product
SET "MYsCount=1"
SET /p MYsCount="Number of MYs in project? (default: %MYSCount%): "
For ($MYsCount = 1; $MYsCount -eq 10; $MYsCount++){
SET /p MyNums="Enter %%a Product Numbers: "
CALL MD "%MYdir%\%CUST%\Product_%%"
}
SET "$MYsCount="
I've looked at the following sites and articles:
PowerShell Basics: Programming With Loops (Helped validate)
How to do a forloop in a Django template? (Didn't really help)
Windows PowerShell Cookbook, 3rd Edition (Page 170)
I am running this code inside a While-Loop.
Thanks for your help!
You have an interesting amalgam of batch file and PowerShell in your second code block. It is hard to read when some things are in one language and some in another. Let's see if we can't get it all into PowerShell here.
$MYdir = "$NewPath\$CUST\Product"
$MYsCount = 1
$UserMYsCount = Read-Host "Number of MYs in project? (default: $MYsCount)"
# Fall back to the default when the user just presses Enter:
If ([string]::IsNullOrEmpty($UserMYsCount)) {
    $UserMYsCount = $MYsCount
}
For ($i = 1; $i -le $UserMYsCount; $i++) {
    $MyNums = Read-Host "Enter $i Product Numbers"
    New-Item -Path "$MYdir\MY_$MyNums" -ItemType Directory
}
I believe the issue comes from how you are declaring your variables. SET creates environment variables, which PowerShell does not read natively (it exposes them via the $env: drive instead). Below is how I would write your section of code:
$MYDir = "$env:NewPath\$env:CUST\SuppliesTypes"
$MYsCount = 1
$MYsCount = Read-Host -Prompt "Number of MYs in project? (default: $MYsCount)"
foreach ($a in 1..$MYsCount) {
    $MYNums = Read-Host -Prompt "Enter $a Product Numbers"
    New-Item -Path "$MYDir\MY_$MYNums" -ItemType Directory
}
$MYsCount = $null
I used a foreach loop instead of a normal for loop because you were incrementing by one each time, and I have noticed a small performance gain from foreach when the step is not complicated. 1..$variable is shorthand for each number from 1 up to the declared variable (your original batch loop also counted from 1).
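As a quick illustration of the range operator (a throwaway sketch; any integers work):
# .. expands to every integer in the range, inclusive:
foreach ($n in 1..3) { "Iteration $n" }   # prints Iteration 1, Iteration 2, Iteration 3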
If you did want to use a for loop as you mentioned, note that the condition in
For ($MYsCount = 1; $MYsCount -eq 10; $MYsCount++){
is tested before each iteration: since $MYsCount starts at 1, $MYsCount -eq 10 is false immediately and the body never runs - which is likely why your version "jumps right through". The condition should be a bound, not an equality test.
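A corrected sketch that runs from 1 through 10 and always terminates uses -le for the condition:
For ($MYsCount = 1; $MYsCount -le 10; $MYsCount++) {
    # loop body runs for $MYsCount = 1 through 10
}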
I need to retrieve the last n lines of huge files (1-4 Gb), in Windows 7.
Due to corporate restrictions, I cannot run any command that is not built-in.
The problem is that all solutions I found appear to read the whole file, so they are extremely slow.
Can this be accomplished, fast?
Notes:
I managed to get the first n lines, fast.
It is ok if I get the last n bytes. (I used this https://stackoverflow.com/a/18936628/2707864 for the first n bytes).
Solutions here Unix tail equivalent command in Windows Powershell did not work.
Using -wait does not make it fast. I do not have -tail (and I do not know if it will work fast).
PS: There are quite a few related questions for head and tail, but not focused on the issue of speed. Therefore, useful or accepted answers there may not be useful here. E.g.,
Windows equivalent of the 'tail' command
CMD.EXE batch script to display last 10 lines from a txt file
Extract N lines from file using single windows command
https://serverfault.com/questions/490841/how-to-display-the-first-n-lines-of-a-command-output-in-windows-the-equivalent
powershell to get the first x MB of a file
https://superuser.com/questions/859870/windows-equivalent-of-the-head-c-command
If you have PowerShell 3 or higher, you can use the -Tail parameter for Get-Content to get the last n lines.
Get-content -tail 5 PATH_TO_FILE;
On a 34MB text file on my local SSD, this returned in 1 millisecond vs. 8.5 seconds for get-content |select -last 5
How about this (reads last 8 bytes for demo):
$fpath = "C:\10GBfile.dat"
$fs = [IO.File]::OpenRead($fpath)
$fs.Seek(-8, 'End') | Out-Null
for ($i = 0; $i -lt 8; $i++)
{
    $fs.ReadByte()
}
$fs.Close() # release the file handle when done
UPDATE. To interpret the bytes as a string (but be sure to select the correct encoding - UTF8 is used here):
$N = 8
$fpath = "C:\10GBfile.dat"
$fs = [IO.File]::OpenRead($fpath)
$fs.Seek(-$N, [System.IO.SeekOrigin]::End) | Out-Null
$buffer = new-object Byte[] $N
$fs.Read($buffer, 0, $N) | Out-Null
$fs.Close()
[System.Text.Encoding]::UTF8.GetString($buffer)
UPDATE 2. To read last M lines, we'll be reading the file by portions until there are more than M newline char sequences in the result:
$M = 3
$fpath = "C:\10GBfile.dat"

$result = ""
$seq = "`r`n"
$buffer_size = 10
$buffer = new-object Byte[] $buffer_size

$fs = [IO.File]::OpenRead($fpath)
while (([regex]::Matches($result, $seq)).Count -lt $M)
{
    $fs.Seek(-($result.Length + $buffer_size), [System.IO.SeekOrigin]::End) | Out-Null
    $fs.Read($buffer, 0, $buffer_size) | Out-Null
    $result = [System.Text.Encoding]::UTF8.GetString($buffer) + $result
}
$fs.Close()

($result -split $seq) | Select -Last $M
Try playing with a bigger $buffer_size - ideally it equals the expected average line length, to make fewer disk operations. Also pay attention to $seq - the line separator could be \r\n or just \n.
This is very dirty code without any error handling or optimizations.
When the file is already open in another process, [IO.File]::OpenRead can fail with:
exception calling "OpenRead" with "1" argument(s): "The process cannot access the file..."
In that case it's better to use Get-Content $fpath -Tail 10.
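Alternatively - a sketch, and only workable if the writing process opened the file with shared read access - you can construct the stream yourself with FileShare.ReadWrite instead of calling [IO.File]::OpenRead:
# Open read-only while tolerating another process holding the file open for writing:
$fs = New-Object IO.FileStream($fpath, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite)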
This is not an answer, but a large comment as a reply to sancho.s' answer.
When you want to use small PowerShell scripts from a batch file, I suggest the method below, which is simpler and keeps all the code in the same batch file:
@PowerShell ^
$fpath = %2; ^
$fs = [IO.File]::OpenRead($fpath); ^
$fs.Seek(-%1, 'End') ^| Out-Null; ^
$mystr = ''; ^
for ($i = 0; $i -lt %1; $i++) ^
{ ^
$mystr = ($mystr) + ([char[]]($fs.ReadByte())); ^
} ^
Write-Host $mystr
%End PowerShell%
With the awesome answer by Aziz Kabyshev, which solves the issue of speed, and with some googling, I ended up using this script:
$fpath = $Args[1]
$fs = [IO.File]::OpenRead($fpath)
$fs.Seek(-$Args[0], 'End') | Out-Null
$mystr = ''
for ($i = 0; $i -lt $Args[0]; $i++)
{
$mystr = ($mystr) + ([char[]]($fs.ReadByte()))
}
$fs.Close()
Write-Host $mystr
which I call from a batch file containing
@PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '.\myscript.ps1' %1 %2"
(thanks to How to run a PowerShell script from a batch file).
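Hypothetical usage, assuming the batch file is saved as tail.bat next to myscript.ps1: the first argument is the byte count (passed to $Args[0]), the second the file path (passed to $Args[1]), so this prints the last 200 bytes of huge.log:
tail.bat 200 C:\logs\huge.log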
Get last n bytes of a file:
set file="C:\Covid.mp4"
set n=7
copy /b %file% tmp
for %i in (tmp) do set /a m=%~zi-%n%
FSUTIL file seteof tmp %m%
fsutil file createnew temp 1
FSUTIL file seteof temp %n%
type temp >> tmp
fc /b tmp %file% | more +1 > temp
REM problem parsing file with byte offsets in hex from fc, to be converted to decimal offsets before output
type nul > tmp
for /f "tokens=1-3 delims=: " %i in (temp) do set /a 0x%i >> tmp & set /p=": " <nul>> tmp & echo %j %k >> tmp
set /a n=%m%+%n%-1
REM output
type nul > temp
for /l %j in (%m%,1,%n%) do (find "%j: "< tmp || echo doh: la 00)>> temp
(for /f "tokens=3" %i in (temp) do set /p=%i <nul) & del tmp & del temp
Tested in Win 10 cmd on a Surface Laptop 1.
Result: 1.43 GB file processed in 10 seconds
I've got a CSV file from HR with approx. 1000 lines (employees) that I feed to AD with PowerShell.
This works, but I am a bit uncertain whether I am doing it the right way.
These are my major concerns:
I am setting the attributes one at a time. Should I put the "changes" into some kind of array/hashtable/object and apply them all at once at the end of the script? But how? "New-Object"?
Should I use functions? But how can I return values (and continue based on the result from the function)?
All programming hints and corrections would be GREATLY appreciated. I really appreciate this wonderful community of knowledgeable people, so let me have it. If you have the time, please tell me how I can do this better.
This is my code:
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin -ErrorAction silentlycontinue
Add-PSSnapin quest.activeroles.admanagement -ErrorAction silentlycontinue
$file = "\Scripts\employees.csv" # Location of the input file
$file2 = "\Scripts\employees2.csv" # Temp file
$logfile = "\Scripts\logfile.txt" # log file
remove-item $logfile -Force -ErrorAction SilentlyContinue
Get-Content $file | Out-File -Encoding UTF8 $file2 # Convert to UTF8 (we don't touch the original inputfile)
$ListEmployees = Import-Csv $file2 -Delimiter ";" # Import the file to CSV
foreach ($ListEmployee in $ListEmployees) {
$ListDisplayName = $ListEmployee.firstname + " " + $ListEmployee.lastname
if($ADemployee = Get-QADUser -displayname $ListDisplayName -IncludedProperties employeeid )
{
## CHECK NAME
if($($ADEmployee.displayname) -eq $($ListDisplayName))
{
echo "MATCH: $($ADEmployee.displayname)"
}
## CHECK COMPANY
if($($ADEmployee.company) -ne $($ListEmployee.company))
{
echo " CHANGE - Company: '$($ADEmployee.company)' to '$($ListEmployee.company)'"
Set-QADUser -identity $($ADEmployee.samaccountname) -Company $($ListEmployee.company) -WhatIf
}
else
{
echo " OK - Company : no change '$($ListEmployee.company)'"
}
## CHECK OFFICE
if($($ADEmployee.office) -ne $($ListEmployee.office))
{
echo " CHANGE - Office '$($ADEmployee.office)' to '$($ListEmployee.office)'"
Set-QADUser -identity $($ADEmployee.samaccountname) -Office $($ListEmployee.Office) -WhatIf
}
else
{
echo " OK - Office : no change '$($ListEmployee.office)'"
}
## CHECK MOBILE
if( $listemployee.mobile -match '\S' )
{
if($($ADEmployee.mobile) -ne $($ListEmployee.mobile))
{
echo " CHANGE - Mobile : '$($ADEmployee.mobile)' to '$($ListEmployee.mobile)'"
Set-QADUser -identity $($ADEmployee.samaccountname) -Mobile $($ListEmployee.mobile) -WhatIf
}
else
{
echo " OK - Mobile : no change '$($ListEmployee.mobile)'"
}
}
## CHECK EMPLOYEEID
if($($ADEmployee.employeeid) -ne $($ListEmployee.employeeid))
{
echo " CHANGE - EmployeeID: '$($ADEmployee.employeeid)' to '$($ListEmployee.employeeid)'"
Set-QADUser -identity $($ADEmployee.samaccountname) -ObjectAttributes @{employeeID = $($ListEmployee.employeeid)} -WhatIf
}
else
{
echo " OK - EmployeeID : no change '$($ListEmployee.employeeid)'"
}
$match++
}
else
{
if($EXContact = Get-Contact $ListDisplayName -ErrorAction SilentlyContinue)
{
echo "MATCH CONTACT: $ListDisplayName (contact)"
## CHECK MOBILE
if( $listemployee.mobile -match '\S' )
{
if($($EXContact.Mobilephone) -ne $($ListEmployee.mobile))
{
echo " CHANGE - Mobile : '$($EXContact.Mobilephone)' to '$($ListEmployee.mobile)'"
}
else
{
echo " OK - Mobile ; No change ($($ListEmployee.mobile))"
}
}
## CHECK COMPANY
if($($EXContact.company) -ne $($ListEmployee.company))
{
echo " CHANGE - Company: '$($EXContact.company)' to '$($ListEmployee.company)'"
}
else
{
echo " OK - Company : No change($($ListEmployee.company))"
}
## CHECK OFFICE
if($($EXContact.office) -ne $($ListEmployee.office))
{
echo " CHANGE - Office '$($EXContact.office)' to '$($ListEmployee.office)'"
}
else
{
echo " OK - Office : No Change($($ListEmployee.office))"
}
$contactmatch++
}
else
{
echo "$ListDisplayName" | Out-File $logfile -Append
echo "NO MATCH: $ListDisplayName"
$nomatch++
}
}
$i++
}
echo " "
echo "List contains $i accounts"
echo "Accounts: $match matches"
echo "Contacts: $contactmatch"
echo "No Match: $nomatch"
And: if you think this is cr*p, tell me! I'd rather hear it from you than have you stay silent just to be polite! I am "quite" new to this, so I deserve it. :)
Something that seems odd about the whole thing is using display name as your identity reference. As an identity reference, it's both volatile and potentially ambiguous in AD, and it seems a poor choice to drive a maintenance script.
Here is my opinion:
1) I really think the problem @mjolinor points out is important, and you will run into trouble (that is, need human checks) if you don't use one of the identity attributes fixed by Microsoft (samAccountName, userPrincipalName, or better, objectGuid, objectSid, ...) as the key to find your users in Active Directory.
If that's not possible, you can perhaps build a filter on top of multiple attributes. If your CSV comes from another LDAP directory, you can perhaps integrate its unique ID into your schema (in that case, see the Microsoft Services for UNIX 3.5 (MSFU 3.5) schema extensions to Active Directory).
2) Once you find one of your CSV entries in Active Directory, you check each attribute and then replace, one by one, the ones in AD with the ones from your CSV.
My advice here is to collect all the differences between your CSV and the AD entry, and then make a single change to the directory: on the first difference found, change all the differing attributes in one command. I don't know how Set-QADUser is implemented, but in the low-level layers all the attribute replacements can be done in one shot (LDAP_REPLACE, or a single ADSI commit) - see the sketch below.
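A minimal sketch of that idea, reusing the Quest cmdlets and the -ObjectAttributes hashtable syntax already shown in the question (the LDAP attribute names are assumptions; verify them against your schema):
$changes = @{}
if ($ADEmployee.company -ne $ListEmployee.company) { $changes['company'] = $ListEmployee.company }
if ($ADEmployee.mobile -ne $ListEmployee.mobile) { $changes['mobile'] = $ListEmployee.mobile }
if ($ADEmployee.employeeid -ne $ListEmployee.employeeid) { $changes['employeeID'] = $ListEmployee.employeeid }
# One directory write per user instead of one per attribute:
if ($changes.Count -gt 0) {
    Set-QADUser -Identity $ADEmployee.samaccountname -ObjectAttributes $changes -WhatIf
}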
3) Just a remark: beginning with PowerShell v2 (Windows 7, Windows Server 2008 R2), Microsoft provides a native Active Directory module.
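With that module, the same one-shot update might look like this (a sketch; assumes the RSAT ActiveDirectory module is installed and reuses the $changes hashtable from the previous sketch):
Import-Module ActiveDirectory
# -Replace takes a hashtable of attribute/value pairs and writes them in one operation:
Set-ADUser -Identity $ADEmployee.samaccountname -Replace $changes -WhatIf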