How to read length of video file from cmd

I have a bunch of mp4 files in a folder and I want to create a text file with all the names and the length of the files as in:
01_Welcome.mp4 00.01.23
02_Tools.mp4 00.03.12
I know how to read the names of the files, but how do I get the length attribute? When I click a file, the length appears in the status bar, so there should be a way to read that property. And I would like to do it from the command line, not through a third-party package.

On Ubuntu you can do:
ffmpeg -i myvideo 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//
But on Windows, MediaInfo is one option.

In Windows PowerShell you can do the following to extract the length of a single media file:
$Folder = 'C:\Path\To\Parent\Folder'
$File = 'Video.mp4'
$LengthColumn = 27   # Index of the 'Length' detail column; this can vary between Windows versions
$objShell = New-Object -ComObject Shell.Application
$objFolder = $objShell.Namespace($Folder)
$objFile = $objFolder.ParseName($File)
$Length = $objFolder.GetDetailsOf($objFile, $LengthColumn)
Iteration over the folder content is left as an exercise for the reader.
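For completeness, the iteration might look something like this (an untested sketch using the same Shell.Application approach; lengths.txt is an illustrative output name):
$Folder = 'C:\Path\To\Parent\Folder'
$LengthColumn = 27
$objShell = New-Object -ComObject Shell.Application
$objFolder = $objShell.Namespace($Folder)
foreach ($objFile in $objFolder.Items()) {
    if ($objFile.Path -like '*.mp4') {
        $Length = $objFolder.GetDetailsOf($objFile, $LengthColumn)
        "$($objFile.Name) $Length" | Add-Content (Join-Path $Folder 'lengths.txt')
    }
}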

Related

Bash script to PowerShell NET-SNMP

I have an issue converting a script I wrote in Bash to PowerShell. The script is the following:
#! /bin/sh
for IPVAR in 172.27.41.202 172.27.41.203
do
TIEMPO=$(date +"%m-%d-%y")
FILENAME=${IPVAR}_${TIEMPO}
date +"%c" >> $FILENAME.txt
snmpget -v 2c -c public $IPVAR -mALL 1.3.6.1.4.1.41413.1.1.0 1.3.6.1.4.1.41413.1.4.0 1.3.6.1.4.1.41413.1.2.0 1.3.6.1.4.1.41413.1.3.0 1.3.6.1.4.1.41413.10.3.4.1.1.1 1.3.6.1.4.1.41413.10.3.4.1.2.1 1.3.6.1.4.1.41413.10.3.4.1.3.1 1.3.6.1.4.1.41413.10.3.4.1.4.1 1.3.6.1.4.1.41413.10.3.4.1.5.1 1.3.6.1.4.1.41413.10.3.4.1.6.1 1.3.6.1.4.1.41413.10.3.4.1.7.1 1.3.6.1.4.1.41413.10.3.4.1.8.1 1.3.6.1.4.1.41413.10.3.4.1.9.1 1.3.6.1.4.1.41413.10.3.4.1.10.1 >> $FILENAME.txt
done
In my Linux environment it works fine, but I installed NET-SNMP on a Windows Server, because that is where we need the files to be, and I can't seem to make it work. I did this:
$IPS = (10.96.90.2)
$TIEMPO = get-date -f yyyy-MM-dd
Foreach ($IPVAR in $IPS) {snmpget -v 2c -c public -m ALL $IPVAR 1.3.6.1.4.1.41413.1.1.0 1.3.6.1.4.1.41413.1.4.0 1.3.6.1.4.1.41413.1.2.0 1.3.6.1.4.1.41413.1.3.0 1.3.6.1.4.1.41413.10.3.4.1.1.1 1.3.6.1.4.1.41413.10.3.4.1.2.1 1.3.6.1.4.1.41413.10.3.4.1.3.1 1.3.6.1.4.1.41413.10.3.4.1.4.1 1.3.6.1.4.1.41413.10.3.4.1.5.1 1.3.6.1.4.1.41413.10.3.4.1.6.1 1.3.6.1.4.1.41413.10.3.4.1.7.1 1.3.6.1.4.1.41413.10.3.4.1.8.1 1.3.6.1.4.1.41413.10.3.4.1.9.1 1.3.6.1.4.1.41413.10.3.4.1.10.1 >> "$IPVAR_$TIEMPO".txt}
If I run only the "snmpget" command it works fine, but I'm having trouble with the scripting part here.
Hope you can help me.
Regards,
Try the code below (this hasn't been tested as I don't have snmpget, but the method works with other command line apps):
$IPS = @('172.27.41.202', '172.27.41.203')
$IPS | ForEach-Object {
$snmpgetParams = @(
'-v', '2c', '-c', 'public', '-m', 'ALL', $_, '1.3.6.1.4.1.41413.1.1.0', '1.3.6.1.4.1.41413.1.4.0', '1.3.6.1.4.1.41413.1.2.0', '1.3.6.1.4.1.41413.1.3.0', '1.3.6.1.4.1.41413.10.3.4.1.1.1', '1.3.6.1.4.1.41413.10.3.4.1.2.1', '1.3.6.1.4.1.41413.10.3.4.1.3.1', '1.3.6.1.4.1.41413.10.3.4.1.4.1', '1.3.6.1.4.1.41413.10.3.4.1.5.1', '1.3.6.1.4.1.41413.10.3.4.1.6.1', '1.3.6.1.4.1.41413.10.3.4.1.7.1', '1.3.6.1.4.1.41413.10.3.4.1.8.1', '1.3.6.1.4.1.41413.10.3.4.1.9.1', '1.3.6.1.4.1.41413.10.3.4.1.10.1'
)
$TIEMPO = Get-Date -f yyyy-MM-dd
$FILENAME="$_`_$TIEMPO`.txt"
snmpget @snmpgetParams | Set-Content $FILENAME -Force
}
}
Line 1 declares an array of IP addresses.
Line 2 starts a foreach loop which will iterate through each IP in the $IPS array.
Lines 3,4,5 create an array of parameters to pass to the snmpget command, with each OID as a separate element so snmpget receives them as individual arguments. The $_ parameter is the current IP address within the loop.
Line 7 sets the $TIEMPO variable with the date.
Line 8 sets the $FILENAME variable with the IP address, followed by an underscore, followed by the date. The backticks ` tell PowerShell to not treat the following characters as part of the preceding variable name. An example filename: 172.27.41.202_2016-08-31.txt
Line 10 calls the snmpget command. The @snmpgetParams syntax 'splats' the parameter array. The output is piped into the Set-Content command, which, with the Force option, creates or overwrites the file contents for that IP & date.
Line 11 closes the loop.
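As a quick illustration of splatting with a more familiar command (a minimal sketch, not part of the original answer), each array element becomes a separate positional argument:
$copyParams = @('source.txt', 'destination.txt')   # positional arguments for Copy-Item
Copy-Item @copyParams                              # same as: Copy-Item source.txt destination.txt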

Powershell - Read a single text file and sort contents to multiple files based on text within the line

I'm looking for some direction on how to read a file line by line, then copy each line to a newly created file based on a search criterion. Since my description is probably poor, I've tried to illustrate below:
Single Text File Sample:
Name=N0060093G
Name=N0060093H
Name=N400205PW
Name=N400205PX
Name=N966O85Q0
Name=N966O85Q1
The script would read each line and use the three digits ("###") after "Name=N" to create a new file named after that identifier, then copy each matching line to the new file. So, lines "Name=N0060093G" and "Name=N0060093H" would go to "006.txt"; "Name=N400205PW" and "Name=N400205PX" would be written to "400.txt", etc.
A RegEx-style approach:
$File = 'test.txt'
Get-Content $File | ForEach {
    If ($_ -match '^Name\=N(?<filename>\d{3}).*') {
        # $Matches.filename holds the three captured digits, e.g. '006'
        $_ | Out-File -Append "$($Matches.Filename).txt" -WhatIf
    }
}
Remove the -WhatIf switch once the previewed file names look right.
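If the input file is large, a variation (an untested sketch, relying on the same Name=N### assumption) that groups lines first, so each output file is written only once:
$lines = Get-Content 'test.txt' | Where-Object { $_ -match '^Name=N\d{3}' }
$lines | Group-Object { $_.Substring(6, 3) } | ForEach-Object {
    # $_.Name is the three-digit key; $_.Group is every line that shares it
    $_.Group | Set-Content "$($_.Name).txt"
}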

Powershell: Count instances of strings in a file using a list

I am trying to count, in an efficient way, the number of times each string in "file1" (strings vary from 40 to 400+ characters) occurs in "file2". file1 has about 2k lines and file2 has about 130k lines. I currently have a Unix solution that does it in about 2 minutes in a VM and about 5 in Cygwin, but I am trying to do it with PowerShell or Python, since the files are on Windows and I use the output in Excel and with automation (AutoIt).
I have a solution, but it takes WAY too long (in about the same time that Cygwin finished all 2k lines, I had only gotten through 40-50 lines in PowerShell!).
Although I haven't prepared a solution yet, I am open to using Python as well if there is a solution that can be fast and accurate.
Here is the Unix Code:
while read SEARCH_STRING;
do printf "%s$" "${SEARCH_STRING}";
grep -Fc "${SEARCH_STRING}" file2.csv;
done < file1.csv | tee -a output.txt;
And here is the Powershell code I currently have
$Target = Get-Content .\file1.csv
Foreach ($line in $Target){
#Just to keep strings small, since I found that not all
#strings were being compared correctly if they were 250+ chars
$line = $line.Substring(0,180)
$Coll = Get-Content .\file2.csv | Select-string -pattern "$line"
$cnt = $Coll | measure
$cnt.count
}
Any ideas or suggestions would help.
Thanks.
EDIT
I'm trying a modified solution suggested by C.B.
del .\output.txt
$Target = Get-Content .\file1.csv
$file= [System.IO.File]::ReadAllText( "C:\temp\file2.csv" )
Foreach ($line in $Target){
$line = [string]$line.Substring(0, $line.length/2)
$cnt = [regex]::matches( [string]$file, $line).count >> ".\output.txt"
}
But since the strings in file1 vary in length, I kept getting OutOfBound exceptions from the Substring function, so I halved (/2) the input string to try to get a match. And when I halve them, if a string contains an open parenthesis, I get this:
Exception calling "Matches" with "2" argument(s): "parsing "CVE-2013-0796,04/02/2013,MFSA2013-35 SeaMonkey: WebGL
crash with Mesa graphics driver on Linux (C" - Not enough )'s."
At C:\temp\script_test.ps1:6 char:5
+ $cnt = [regex]::matches( [string]$file, $line).count >> ".\output.txt ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : ArgumentException
I don't know if there is a way to raise the input limit in PowerShell (my biggest string at the moment is 406 characters, but it could be bigger in the future), or whether I should just give up and try a Python solution.
Thoughts?
EDIT
Thanks to @C.B. I got the correct answer, and it matches the output of the Bash script perfectly. Here is the full code that outputs results to a text file:
$Target = Get-Content .\file1.csv
$file= [System.IO.File]::ReadAllText( "C:\temp\file2.csv" )
Foreach ($line in $Target){
$cnt = [regex]::matches( $file, [regex]::escape($line)).count >> ".\output.txt"
}
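If you also want each search string printed next to its count, mirroring the Bash output format, a small variation (an untested sketch) would be:
$Target = Get-Content .\file1.csv
$file = [System.IO.File]::ReadAllText( "C:\temp\file2.csv" )
Foreach ($line in $Target){
    $cnt = [regex]::matches( $file, [regex]::escape($line)).count
    "$line $cnt" | Add-Content .\output.txt
}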
Give this a try:
$Target = Get-Content .\file1.csv
$file= [System.IO.File]::ReadAllText( "c:\test\file2.csv" )
Foreach ($line in $Target){
$line = $line.Substring(0,180)
$cnt = [regex]::matches( $file, [regex]::escape($line)).count
}
One issue with your script is that you read file2.csv over and over again, for each line from file1.csv. Reading the file just once and storing the content in a variable should significantly speed things up. Try this:
$f2 = Get-Content .\file2.csv
foreach ($line in (gc .\file1.csv)) {
$line = $line.Substring(0,180)
@($f2 | ? { $_ -match $line }).Count
}
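Note that -match treats $line as a regular expression, so if the search strings contain regex metacharacters (as the parenthesis error above shows), escape them first:
@($f2 | ? { $_ -match [regex]::Escape($line) }).Count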

Searching Multiple Strings in Huge log files

PowerShell question:
Currently I have 5-10 log files, each about 20-25 GB, and I need to search through each of them to check whether any of 900 different search parameters match. I have written a basic PowerShell script that searches a whole log file for one search parameter; if it matches, it dumps the results into a separate text file. The problem is that it is pretty slow. I was wondering if there is a way to speed this up, either by making it search for all 900 parameters at once or by only looking through the log once. Any help would be good, even if it's just improving the script.
Basic overview:
1 csv file with all the 900 items listed under an "item" column
1 log file (.txt)
1 result file (.txt)
1 ps1 file
Here is the code I have in the PS1 file:
$search = "filepath to csv file"
$log = "filepath to log file"
$result = "file path to result text file"
$list = Import-Csv $search
foreach ($address in $list) {
    Get-Content $log | Select-String $address.item | Add-Content $result
    # Below is just a rudimentary counter of how far through the search it is
    $i = $i + 1
    echo $i
}
900 search terms is quite a large group. Can you reduce its size by using regular expressions? A trivial solution is based on reading the file row by row and looking for matches. Set up a collection that contains regexes or literal strings for the search terms, like so:
$terms = @("Keyword[12]", "KeywordA", "KeyphraseOne") # Array of regexps
$src = "path-to-some-huge-file" # Path to the file
$reader = new-object IO.StreamReader($src) # Stream reader to file
while(($line = $reader.ReadLine()) -ne $null){ # Read one row at a time
foreach($t in $terms) { # For each search term...
if($line -match $t) { # check if the line read is a match...
$("Hit: {0} ({1})" -f $line, $t) # and print match
}
}
}
$reader.Close() # Close the reader
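To take the "all 900 parameters at once" idea further, the terms could be combined into a single alternation pattern so each line is tested only once. A sketch, assuming the CSV layout from the question and treating the terms as literal strings:
$terms = Import-Csv 'path-to-csv-file' | ForEach-Object { [regex]::Escape($_.item) }
$pattern = [regex]($terms -join '|')               # One combined pattern for all terms
$reader = New-Object IO.StreamReader('path-to-some-huge-file')
while (($line = $reader.ReadLine()) -ne $null) {
    if ($pattern.IsMatch($line)) {                 # One regex test per line
        $line | Add-Content 'path-to-result-file'
    }
}
$reader.Close()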
Surely this is going to be incredibly painful for any parser just based on the file sizes you have there, but if your log files are in a standard format (for example, IIS log files) then you could consider using a log-parsing app such as Log Parser Studio instead of PowerShell.

UNIX format files with Powershell

How do you create a Unix-format file in PowerShell? I am using the following to create a file, but it always creates it in the Windows format.
"hello world" | out-file -filepath test.txt -append
As I understand it, the CRLF newline characters make it a Windows-format file, whereas the Unix format needs only an LF at the end of each line. I tried replacing the CRLF as follows, but it didn't work:
"hello world" | %{ $_.Replace("`r`n","`n") } | out-file -filepath test.txt -append
There is a Cmdlet in the PowerShell Community Extensions called ConvertTo-UnixLineEnding
One ugly-looking answer is (taking input from dos.txt outputting to unix.txt):
[string]::Join( "`n", (gc dos.txt)) | sc unix.txt
but I would really like to be able to make Set-Content do this by itself, and this solution does not stream, so it does not work well on large files...
And this solution will end the file with a DOS line ending as well, so it is not 100%.
I've found that the solution posted above,
sc unix.txt ([byte[]][char[]] "$contenttext") -Encoding Byte
fails on encoding conversions in some cases. So here is yet another solution (a bit more verbose, but it works directly with bytes):
function ConvertTo-LinuxLineEndings($path) {
    $oldBytes = [io.file]::ReadAllBytes($path)
    if (!$oldBytes.Length) {
        return;
    }
    [byte[]]$newBytes = @()
    [byte[]]::Resize([ref]$newBytes, $oldBytes.Length)
    $newLength = 0
    # Copy every byte except the CR of each CRLF pair
    for ($i = 0; $i -lt $oldBytes.Length - 1; $i++) {
        if (($oldBytes[$i] -eq [byte][char]"`r") -and ($oldBytes[$i + 1] -eq [byte][char]"`n")) {
            continue;
        }
        $newBytes[$newLength++] = $oldBytes[$i]
    }
    # The loop stops one byte early, so append the final byte
    $newBytes[$newLength++] = $oldBytes[$oldBytes.Length - 1]
    [byte[]]::Resize([ref]$newBytes, $newLength)
    [io.file]::WriteAllBytes($path, $newBytes)
}
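Usage (the path is illustrative):
ConvertTo-LinuxLineEndings 'C:\temp\unix.txt'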
Make your file in the Windows CRLF format, then convert all lines to Unix format in a new file:
$streamWriter = New-Object System.IO.StreamWriter("\\wsl.localhost\Ubuntu\home\user1\.bashrc2")
$streamWriter.NewLine = "`n"
gc "\\wsl.localhost\Ubuntu\home\user1\.bashrc" | % {$streamWriter.WriteLine($_)}
$streamWriter.Flush()
$streamWriter.Close()
Not a one-liner, but it works for all lines, including the EOF; the new file now shows as Unix format in Notepad on Windows 11.
Delete the original file and rename the new file to the original, if you like:
ri "\\wsl.localhost\Ubuntu\home\user1\.bashrc" -Force
rni "\\wsl.localhost\Ubuntu\home\user1\.bashrc2" "\\wsl.localhost\Ubuntu\home\user1\.bashrc"
Two more examples of how you can replace CRLF with LF:
Example:
(Get-Content -Raw test.txt) -replace "`r`n","`n" | Set-Content test.txt -NoNewline
Example:
[IO.File]::WriteAllText('C:\test.txt', ([IO.File]::ReadAllText('C:\test.txt') -replace "`r`n","`n"))
Be aware, this really does just replace CRLF with LF. You might need to add a trailing LF if your Windows file does not end with a trailing CRLF.
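For instance, a guard like this (a minimal sketch building on the first example) ensures the result ends with a newline:
$text = (Get-Content -Raw test.txt) -replace "`r`n","`n"
if (-not $text.EndsWith("`n")) { $text += "`n" }   # add the trailing LF if missing
Set-Content test.txt -NoNewline -Value $text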
