Convert .etl files to .txt in Windows PowerShell

I am trying to convert an .etl file into a .txt file.
Right now, I am using the following command to get .txt files from the .etl file:
Get-WinEvent -Path $Path -Oldest -ErrorAction SilentlyContinue -ErrorVariable errors |
    ForEach-Object { "{0},{1},{2},{3},{4}" -f $_.TimeCreated.ToString("yyyy-MM-ddTHH:mm:ss.ffffff"), $_.Id, $_.Level, $_.ProviderName, $_.Message } |
    Add-Content -Path $LogFilePath
However, the .etl file is quite large, and the conversion takes about an hour to complete.
I was wondering if there is any other way to convert these .etl files into .txt files without so much overhead.
I tried looking into the tracerpt tool; however, it only converts .etl files into .csv/.xml files.

Perhaps this is not the answer you're looking for, but I recommend it nonetheless.
Unfortunately, there is no out-of-the-box .NET way of doing this.
Event tracing can be involved, but if you figure it out, you will gain in performance.
Microsoft has a couple of examples of how to read .etl files using C++ and the native APIs.
Check this out:
https://learn.microsoft.com/en-us/windows/win32/etw/using-tdhformatproperty-to-consume-event-data

The .NET System.IO calls seem to be faster than Get-Content and Set-Content. Doing a straight CSV-to-TXT conversion was almost twice as fast, and the same should hold for any other format you throw at it:
# $file1 is a 71,070 line CSV
$file1 = "C:\Users\Username\Desktop\_test.csv"
$file2 = "C:\Users\Username\Desktop\_test.txt"
#### Test 1 - Get-Content -Raw
$start1 = Get-Date
Get-Content $file1 -Raw | Set-Content $file2
$end1 = Get-Date
#### Test 2 - .NET's System.IO
$start2 = Get-Date
$txt = [System.IO.File]::ReadAllText("$file1")
[System.IO.File]::WriteAllText("$file2", $txt)
$end2 = Get-Date
New-TimeSpan -Start $start1 -End $end1
New-TimeSpan -Start $start2 -End $end2
# TotalMilliseconds : Attempt [1] 109.3493 [2] 93.7438 [3] 78.1255
# TotalMilliseconds : Attempt [1] 49.4906 [2] 46.8493 [3] 46.8738
.NET was also faster when doing string manipulations such as modifying the time and date format as in your example. For a better breakdown, check out the related Stack Overflow questions and answers on the topic.
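If the bottleneck is per-line pipeline overhead on a very large file, a streaming variant of the same idea may help. This is a minimal sketch (reusing the $file1/$file2 paths from above; the per-line transform is left as a placeholder) that copies line by line via System.IO.StreamReader/StreamWriter instead of loading everything into memory:
# Streaming sketch: process one line at a time via System.IO,
# avoiding both pipeline overhead and reading the whole file at once.
$reader = [System.IO.StreamReader]::new($file1)
$writer = [System.IO.StreamWriter]::new($file2)
try {
    while ($null -ne ($line = $reader.ReadLine())) {
        # Transform $line here if needed (e.g. reformat the date column)
        $writer.WriteLine($line)
    }
}
finally {
    $reader.Close()
    $writer.Close()
}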

Related

Extract value from key-value pair using PowerShell

I have a file containing key-value data. I need to get the value of a specific key from that file. On Linux, I would use this command:
File:
key1=val1
key2=val2
..
Command:
cat path/file | grep 'key1' | awk -F '=' '{print $2}'
Output:
val1
I want to achieve the same output on Windows as well. I don't have any experience working in PowerShell, but I tried the following:
Get-Content "path/file" | Select-String -Pattern 'key1' -AllMatches
But I'm getting output like this:
key1=val1
What am I doing wrong here?
<# requires PowerShell version 5.1 or later; create the sample file first:
@'
key1=val1
key2=val2
'@ | Out-File d:\temp.txt
#>
(Get-Content d:\temp.txt | ConvertFrom-StringData).key1
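Note that piping Get-Content line by line hands ConvertFrom-StringData one line at a time, producing one hashtable per line. An equivalent sketch that reads the file as a single multi-line string yields a single hashtable instead:
# Reading the whole file as one string gives back a single hashtable.
$data = Get-Content -Raw d:\temp.txt | ConvertFrom-StringData
$data.key1   # -> val1
$data.key2   # -> val2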
Note:
With your specific input format (key=value lines), Алексей Семенов's helpful answer offers the simplest solution, using ConvertFrom-StringData; note that it ignores whitespace around = and trailing whitespace after the value.
The answer below focuses generally on how to implement grep and awk-like functionality in PowerShell.
It is not the direct equivalent of your approach, but a faster and PowerShell-idiomatic solution using a switch statement:
# Create a sample file
@'
key1=val1
key2=val2
'@ > sample.txt
# -> 'val1'
switch -Regex -File ./sample.txt { '^\s*key1=(.*)' { $Matches[1]; break } }
The -Regex option implicitly performs a -match operation on each line of the input file (thanks to -File), and the results are available in the automatic $Matches variable.
$Matches[1] therefore returns what the first (and only) capture group ((...)) in the regex matched; break stops processing instantly.
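To see what $Matches contains after a successful match, you can run the -match operation by hand:
'key1=val1' -match '^\s*key1=(.*)'   # -> $True
$Matches[0]   # whole match: 'key1=val1'
$Matches[1]   # capture group 1: 'val1'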
A more concise, but slower option is to combine the -match and -split operators, but note that this will only work as intended if only one line matches:
((Get-Content ./sample.txt) -match '^\s*key1=' -split '=')[1]
Also note that this invariably involves reading the entire file, by loading all lines into an array up front via Get-Content.
A comparatively slow version - due to using a cmdlet and thereby implicitly the pipeline - that fixes your attempt:
(Select-String -List '^\s*key1=(.*)' ./sample.txt).Matches[0].Groups[1].Value
Note:
Select-String outputs wrapper objects of type Microsoft.PowerShell.Commands.MatchInfo, which wrap metadata around the matching strings rather than returning them directly (the way grep does). .Matches is the property that contains the details of the match, which allows accessing what the capture group ((...)) in the regex captured, but it's not exactly obvious how to access that information.
The -List switch ensures that processing stops at the first match, but note that this only works with a direct file argument rather than with piping a file's lines individually via Get-Content.
Note that -AllMatches is for finding multiple matches in a single line (input object), and therefore not necessary here.
Another slow solution that uses ForEach-Object with a script block in which each line is -split into the key and value part, as suggested by Jeroen Mostert:
Get-Content ./sample.txt | ForEach-Object {
$key, $val = $_ -split '='
if ($key -eq 'key1') { $val }
}
Caveat: This invariably processes all lines, even after the key of interest was found.
To prevent that, you can append | Select-Object -First 1 to the command.
Unfortunately, as of PowerShell 7.1 there is no way to directly exit a pipeline on demand from a script block; see long-standing GitHub feature request #3821.
Note that break does not work as intended - except if you wrap your pipeline in a dummy loop statement (such as do { ... } while ($false)) to break out of.
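For illustration, here is that dummy-loop trick applied to the ForEach-Object solution above; break terminates the whole pipeline and is then caught by the enclosing do/while:
do {
    Get-Content ./sample.txt | ForEach-Object {
        $key, $val = $_ -split '='
        if ($key -eq 'key1') { $val; break }   # break exits the entire pipeline
    }
} while ($false)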

How to rename multiple files with existing Unix time in file name?

While I know how I would do this in PHP, it doesn't make sense to install IIS and PHP just to get this done.
I have a folder D:\Data that has several subfolders in it. These folders contain files which are backups created with a program that adds a time stamp to the name to allow multiple copies of the file to be backed up.
These files need to be named:
usera.dat
But they are named currently:
usera.dat.17383947323.dat
In PHP, I would load the file name into a string, explode the string on ".", and then rename the file using the [0] and [3] elements, i.e. (without the loop to read the directories):
$filename = $existing_file;
$explodename = explode(".",$filename);
$newfilename = $explodename[0] . "." . $explodename[3];
rename($filename, $newfilename);
Does anyone have any recommendation on how to do this with PowerShell or a batch file looping over all the subfolders in D:\Data?
Right now I am manually editing each file name to remove the extra Unix timestamp and .dat part.
Translating this from PHP to PowerShell should be a breeze; let's give it a try:
$files = Get-ChildItem -Filter *.dat.*.dat
foreach ($file in $files) {
    $filename = $file.Name
    $explodename = $filename.Split('.')
    $newfilename = "$($explodename[0]).$($explodename[3])"
    Rename-Item $file.FullName -NewName $newfilename
}
As shown above:
PowerShell does not have an explode() function, but we can call the String.Split() method on any string and get a string array back.
. is not a string concatenation operator in PowerShell, but we can use subexpressions $(...) inside an expandable string (see the snippet after this list).
The Rename-Item cmdlet takes care of the renaming.
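To see the Split() and subexpression parts in isolation, using the sample file name from the question:
$explodename = 'usera.dat.17383947323.dat'.Split('.')
"$($explodename[0]).$($explodename[3])"   # -> usera.dat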
A more PowerShell-idiomatic solution would be to leverage the pipeline though:
Get-ChildItem -Filter *.dat.*.dat | Rename-Item -NewName {$explodename = $_.Name.Split('.');"$($explodename[0]).$($explodename[3])"}
You could also use a regex pattern in place of the Split() method and string concatenation:
Get-ChildItem -Filter *.dat.*.dat | Rename-Item -NewName {$_.Name -replace '^([^\.]+)\..*\..*\.([^\.]+)$','$1.$2'}
Or do the concatenation with the -join operator:
Get-ChildItem -Filter *.dat.*.dat | Rename-Item -NewName {$_.Name.Split('.')[0,3] -join '.'}
Whatever you fancy, Get-ChildItem and Rename-Item are definitely the commands you'd want to use here.
One-liner in batch from the prompt:
@FOR /r "U:\sourcedir" %a IN (*) DO @FOR %b IN ("%~na") DO @FOR %c IN ("%~nb") DO @IF "%~xc" neq "" ECHO REN "%a" "%~nxc"
Note: echoes the rename command for testing. Remove the echo keyword after testing to execute the rename.
I used u:\sourcedir as a test directory.
Translation: for each file in the subtree (%a), take the name part only and assign it to %b; repeat for %c; and if the result still has an extension part, do the rename.
Mathias R. Jessen's answer contains many helpful pointers, but it seems that the simpler and more robust approach would be to simply drop the final 2 extensions, assuming that all files in the folder tree have these 2 extraneous extensions:
Get-ChildItem -File -Recurse D:\Data |
Rename-Item -NewName { $_.Name -replace '(\.[^.]*){2}$' }
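As a precaution, Rename-Item supports the common -WhatIf parameter, so you can preview the renames before committing to them:
# Preview only; remove -WhatIf to actually rename.
Get-ChildItem -File -Recurse D:\Data |
  Rename-Item -NewName { $_.Name -replace '(\.[^.]*){2}$' } -WhatIf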

Insert timestamp into filename when exporting into .csv

I am trying to insert a timestamp into the file name of an output file generated by a PowerShell script.
So instead of filelist.csv, the output file should be named with an actual timestamp (the output file's modification time), like YYYYMMDD_HHMMSS_filelist.csv.
Get-ChildItem -Recurse -File | Select-Object @{n="Hash";e={Get-FileHash -Algorithm MD5 -Path $_.FullName | Select-Object -ExpandProperty Hash}},LastWriteTime,Length,FullName | Export-Csv filelist.csv -NoTypeInformation
Any suggestions as to what code is missing from the above line to timestamp the output file?
Change:
export-csv filelist.csv -notypeinformation
To:
Export-Csv "$((Get-Date).ToString("yyyyMMdd_HHmmss"))_filelist.csv" -NoTypeInformation
Edit - .ToString() and -UFormat, generally
When using .ToString() or -Format/-UFormat, you are passing a DateTime object and getting back a formatted string. In both cases you have to specify the format of the string you are after. The way this format is specified differs.
Check out this list of .NET format strings that can be used with ToString()/-Format. Note how e.g. MM is months 01-12, M is months 1-12 and mm is minutes 00-59.
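For instance, both of these produce the same result (example output shown):
(Get-Date).ToString('yyyyMMdd_HHmmss')   # e.g. 20240101_093045
Get-Date -Format 'yyyyMMdd_HHmmss'       # same .NET format specifiers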
-UFormat is documented to use UNIX formatting. I'm not familiar with this at all, but the Notes section goes into detail. You can see there that %m is the month while %M is the minute.
In your file name, do as follows:
Export-Csv -Path "$(Get-Date -UFormat '%Y%m%d_%H%M%S')_filelist.csv" -NoTypeInformation
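Putting it together with the command from the question, a sketch that computes the timestamp once and reuses it:
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
Get-ChildItem -Recurse -File |
  Select-Object @{n="Hash";e={Get-FileHash -Algorithm MD5 -Path $_.FullName | Select-Object -ExpandProperty Hash}},LastWriteTime,Length,FullName |
  Export-Csv "${stamp}_filelist.csv" -NoTypeInformation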

How do I prevent this infinite loop in PowerShell?

Say I have several text files that contain the word 'not' and I want to find them and create a file containing the matches. In Linux, I might do
grep -r not *.txt > found_nots.txt
This works fine. In PowerShell, the following echos what I want to the screen
get-childitem *.txt -recurse | select-string not
However, if I pipe this to a file:
get-childitem *.txt -recurse | select-string not > found_nots.txt
It runs for ages. I eventually press Ctrl-C to exit and take a look at the found_nots.txt file, which is truly huge. It looks as though PowerShell includes the output file as one of the files to search: every time it adds more content, it finds more to add.
How can I stop this behavior and make it behave more like the Unix version?
Use the -Exclude option.
get-childitem *.txt -Exclude 'found_nots.txt' -recurse | select-string not > found_nots.txt
Another easy solution is to give the output file a different extension, so the *.txt filter no longer matches it.
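For example, writing to a .log file keeps the output out of the search set entirely:
Get-ChildItem *.txt -Recurse | Select-String not > found_nots.log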

Replacing text in app.config using a PowerShell script

I am trying a simple replace script to replace text in an app.config file, but it just runs and does nothing:
$old = 'Z:\gene'
$new = 'z:\gene\scripts'
Get-ChildItem z:\gene\scripts\Test\App.config -Recurse | Where {$_ -is [IO.FileInfo]} |
% {
    (Get-Content $_.FullName) -replace $old, $new | Set-Content $_.FullName
    Write-Host "Processed: $($_.FullName)"
}
Any idea what I am doing wrong? The same script works fine for a .txt file.
Thanks
App.config is XML-formatted, but it's a text file as well, so it should work the same. My guess is that the values you are working on are different and the pattern is not hitting. If you rename the file to app.txt, does it work? You might also consider using NAnt's xmlpoke task if you are running from a NAnt script.
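One more thing worth checking (an assumption, since the question doesn't show the config contents): the -replace operator treats its first operand as a regular expression, so the backslashes in 'Z:\gene' are interpreted as regex escape sequences rather than literal characters. Escaping the pattern rules that out:
# Escape the search string so its backslashes are matched literally.
$old = [regex]::Escape('Z:\gene')
$new = 'z:\gene\scripts'
(Get-Content z:\gene\scripts\Test\App.config) -replace $old, $new |
    Set-Content z:\gene\scripts\Test\App.config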
