I'm using FromFile to load the image from a file, and it throws the following error for the PNGs on the FromFile line:
Exception calling "FromFile" with "1" argument(s): "The given path's
format is not supported."
So I'm trying to convert the PNGs to JPG (see the convert line above FromFile below), but all the examples I see (that seem usable) save the file to disk. I don't want to save the file in the directory; all I need is the image in a format that FromFile can use, like in this example. I saw ConvertTo-Jpeg, but I don't think it's a standard PowerShell module, and I don't see how to install it.
I saw this link, but I don't think that would leave the image in the format FromFile needs.
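What I'm hoping for is an in-memory re-encode along these lines (just a sketch I put together from the System.Drawing docs; the MemoryStream part is my own guess and untested):
Add-Type -AssemblyName System.Drawing
# Re-encode the PNG as JPEG in memory so nothing extra is written to the directory
$png    = [System.Drawing.Image]::FromFile($imageFile2.FullName)
$stream = New-Object System.IO.MemoryStream
$png.Save($stream, [System.Drawing.Imaging.ImageFormat]::Jpeg)
$png.Dispose()
$stream.Position = 0
$image = [System.Drawing.Image]::FromStream($stream)   # keep $stream open while $image is in use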
This is my code:
$imageFile2 = Get-ChildItem -Recurse -Path $ImageFullBasePath -Include @("*.bmp","*.jpg","*.png") | Where-Object {$_.Name -match "$($pictureName)"} #$imageFile | Select-String -Pattern '$($pictureName)' -AllMatches
Write-Host $imageFile2
if($imageFile2.Exists)
{
if($imageFile2 -Match "png")
{
$imageFile2 | .\ConvertTo-Jpeg #I don't think this will work with FromFile below
}
$image = [System.Drawing.Image]::FromFile($imageFile2)
}
else {
Write-Host "$($imageFile2) does not exist"
}
And then I put it in excel:
$xlsx = $result | Export-Excel -Path $outFilePath -WorksheetName $errCode -Autosize -AutoFilter -FreezeTopRow -BoldTopRow -PassThru # -ClearSheet can't ClearSheet every time or it clears previous data ###left off
$ws = $xlsx.Workbook.Worksheets[$errCode]
$ws.Dimension.Columns #number of columns
$tempRowCount = $ws.Dimension.Rows #number of rows
#only change width of 3rd column
$ws.Column(3).Width
$ws.Column(3).Width = 100
#Change all row heights
for ($row = 2 ;( $row -le $tempRowCount ); $row++)
{
#Write-Host $($ws.Dimension.Rows)
#Write-Host $($row)
$ws.Row($row).Height
$ws.Row($row).Height = 150
#place the image in spreadsheet
#https://github.com/dfinke/ImportExcel/issues/1041 https://github.com/dfinke/ImportExcel/issues/993
$drawingName = "$($row.PictureID)_Col3_$($row)" #Name_ColumnIndex_RowIndex
Write-Host $image
$picture = $ws.Drawings.AddPicture("$drawingName",$image)
$picture.SetPosition($row - 1, 0, 3 - 1, 0)
if($ws.Row($row).Height -lt $image.Height * (375/500)) {
$ws.Row($row).Height = $image.Height * (375/500)
}
if($ws.Column(3).Width -lt $image.Width * (17/120)){
$ws.Column(3).Width = $image.Width * (17/120)
}
}
Update:
I just wanted to reiterate that FromFile can't be used here for a PNG image, so loading the image the way Hey Scripting Guy does, like this, doesn't work:
$image = [drawing.image]::FromFile($imageFile2)
I figured out that the $imageFile2 path had two filenames in it; two files must have met the Get-ChildItem/Where-Object -match criteria. The images look identical but have similar names, so they're easy to deal with. After I split the names apart, FromFile works fine.
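For reference, narrowing the result down to one file is what made FromFile happy (Select-Object -First 1 is my shorthand here for the split I actually did):
# Take only one of the matching files so FromFile gets a single valid path
$imageFile2 = Get-ChildItem -Recurse -Path $ImageFullBasePath -Include @("*.bmp","*.jpg","*.png") |
    Where-Object { $_.Name -match $pictureName } |
    Select-Object -First 1
$image = [System.Drawing.Image]::FromFile($imageFile2.FullName)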
I have a problem with PowerShell performance while searching a 40 GB log file.
I need to check whether any of 1000 email addresses appear in this 40 GB file. This would take 180 hours :D Any ideas?
$logFolder = "H:\log.txt"
$adressen= Get-Content H:\Adressen.txt
$ergebnis = @()
foreach ($adr in $adressen){
$suche = Select-String -Path $logFolder -Pattern "\[\(\'from\'\,.*$adr.*\'\)\]" -List
$aktiv= $false
$adr
if ($suche){
$aktiv = $true
}
if ($aktiv -eq $true){
$ergebnis+=$adr + ";Ja"
}
else{
$ergebnis+=$adr + ";Nein"
}
}
$ergebnis |Out-File H:\output.txt
Don't read the file 1000 times.
Build a regex pattern with all 1000 addresses (it's going to be a huge line, but hey, much smaller than 40 GB). Like:
$Pattern = "\[\(\'from\'\,.*$( $adressen -join '|' ).*\'\)\]"
Then do your Select-String once and save the result, so you can do an address-by-address search in it. Hopefully the result will be much smaller than 40 GB, and the whole thing should be much faster.
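A sketch of how that could look end to end (the [regex]::Escape calls and the non-capturing group are my additions, so treat this as a starting point rather than a drop-in replacement):
$adressen = Get-Content H:\Adressen.txt
# Build one combined pattern: escape each address and join them with '|'
$escaped = $adressen | ForEach-Object { [regex]::Escape($_) }
$Pattern = "\[\(\'from\'\,.*(?:$( $escaped -join '|' )).*\'\)\]"
# Single pass over the 40 GB file; keep only the matching lines
$treffer = (Select-String -Path H:\log.txt -Pattern $Pattern).Line
# Address-by-address check against the (hopefully much smaller) match list
$ergebnis = foreach ($adr in $adressen) {
    if ($treffer -match [regex]::Escape($adr)) { "$adr;Ja" } else { "$adr;Nein" }
}
$ergebnis | Out-File H:\output.txt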
As mentioned in the comments, replace
$ergebnis = @()
with
$ergebnis = New-Object System.Collections.ArrayList
and
$ergebnis+=$adr + ";Ja"
with
$ergebnis.add("$adr;Ja")
or, respectively,
$ergebnis.add("$adr;Nein")
This will speed up your script quite a bit.
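One extra note (my addition, not part of the original suggestion): ArrayList.Add() returns the index of the newly added item, which otherwise ends up in the output stream, so it's common to discard it:
# Discard the index that ArrayList.Add() returns so it doesn't pollute the output
[void]$ergebnis.Add("$adr;Ja")
$null = $ergebnis.Add("$adr;Nein")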
I need to get the first 35 rows (including empty rows) from column A into a variable.
I looked on the internet but I cannot find the answer anywhere. $data = $worksheet.Range("A1:A35").Text returns only cell A1. I tried with Cells.Item etc. but with no success. Does anyone know how to extract the cell range A1:A35 from Excel into a variable and save it to a text file? Thanks in advance.
$excel = New-Object -ComObject Excel.Application
$workbook = $excel.workbooks.open("*PATH_TO_THE_FILE*.xlsx")
$worksheet = $workbook.sheets.item("MatrixFill")
$data = $worksheet.Range("A1:A35").text
$excel.Quit()
The Text property you are trying to access is actually an object, so you have to treat it as such. Also, for ranges you will need to use a , instead of a :. The code below will give you what you need; it worked for me when I tested it.
$excel = New-Object -ComObject Excel.Application
$workbook = $excel.workbooks.open("*PATH_TO_THE_FILE*.xlsx")
$worksheet = $workbook.sheets.item("MatrixFill")
$worksheet.Range("A1","A35") | select -expand text |out-file "textfilename.txt"
$excel.Quit()
or
$excel = New-Object -ComObject Excel.Application
$workbook = $excel.workbooks.open("*PATH_TO_THE_FILE*.xlsx")
$worksheet = $workbook.sheets.item("MatrixFill")
$data = $worksheet.Range("A1","A35") | select -expand text
$data | out-file "textfilename.txt"
$excel.Quit()
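As a side note (my addition, not part of the original answer), the Excel COM process sometimes keeps running after Quit(); explicitly releasing the COM references usually takes care of that:
# Release the COM references so the background EXCEL.EXE process can exit
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($worksheet) | Out-Null
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($workbook) | Out-Null
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel) | Out-Null
[GC]::Collect()
[GC]::WaitForPendingFinalizers()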
I'm using Export-Csv to export [pscustomobject]s, and then a second function to convert that CSV into an xlsx, which works perfectly. But what if I wanted to export to a second worksheet and rename it to something different?
I know Export-Csv doesn't support multiple worksheets.
Function SaveAsXLXS
{
#Hide Old File
(Get-Item $ResultsFilePath -Force).Attributes = "Hidden"
#Opens Old File
$Excel = New-Object -ComObject Excel.Application
$Workbook = $Excel.Workbooks.Open($ResultsFilePath)
#Formating
if ($GroupsTab.IsSelected -or $OrgBoxesTab.IsSelected)
{
$Workbook.Worksheets.Item(1).Columns.Item(1).Font.Bold = $True
$Workbook.Worksheets.Item(1).Columns.Item(1).Font.Size = 12
}
$Workbook.Worksheets.Item(1).Rows.Item(1).Font.Bold = $True
$Workbook.Worksheets.Item(1).Rows.Item(1).Font.Size = 15
$Workbook.Worksheets.Item(1).UsedRange.EntireColumn.Autofit()
#Creates Name for New File
$ExcelOut = $ResultsFilePath -replace '\.csv$', '.xlsx'
$dir = Split-Path $ExcelOut
$FilePathBase = $(Split-Path $ExcelOut -Leaf) -replace '\.xlsx$'
$FilePath = $ExcelOut
$n = 1
while (Test-Path $FilePath) {
$FilePath = Join-Path $dir $($FilePathBase + "-$n" + '.xlsx')
$n++
}
#Saves New File
$Workbook.SaveAs($FilePath, 51)
#Exits Old File
$Excel.Quit()
#Removes Old File
Remove-Item $ResultsFilePath -Force
}
You're opening the CSV as a new workbook, so you just need to open the workbook to which you want to add it as well and move/copy the sheet.
...
$Workbook = $Excel.workbooks.open($ResultsFilePath)
...
$wb2 = $Excel.Workbooks.Open('C:\path\to\other.xlsx')
$Workbook.Sheets.Item(1).Name = 'whatever' # rename sheet
$Workbook.Sheets.Item(1).Copy($wb2.Sheets.Item(1)) # copy sheet
$Workbook.Close($false) # close CSV without saving
$wb2.Save() # save workbook
$wb2.Close() # close workbook
Of course, if you want to insert multiple CSVs into a workbook you'd open the xlsx file just once and save/close it after all sheets were inserted.
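A rough sketch of that loop (the folder and workbook paths here are placeholders of my own, not taken from the question):
$wb2 = $Excel.Workbooks.Open('C:\path\to\other.xlsx')        # open the destination once
foreach ($csv in Get-ChildItem 'C:\path\to\csvs' -Filter *.csv) {
    $csvWb = $Excel.Workbooks.Open($csv.FullName)            # each CSV opens as its own workbook
    $csvWb.Sheets.Item(1).Name = $csv.BaseName               # name the sheet after the file
    $csvWb.Sheets.Item(1).Copy($wb2.Sheets.Item(1))          # copy into the destination workbook
    $csvWb.Close($false)                                     # close the CSV without saving
}
$wb2.Save()                                                  # save/close only after all sheets are in
$wb2.Close()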
If you want to insert sheets from a CSV after a particular sheet in the destination workbook, change the Copy() call to something like this:
$Workbook.Sheets.Item(1).Copy([Type]::Missing, $wb2.Sheets.Item(3))
OK, so we have a manual process that runs a query through PL/SQL Developer and then exports the result to CSV.
I am trying to automate that process using PowerShell, since we are working in a Windows environment.
I have created two files that seem to be exact duplicates from the automated and manual processes, but they don't behave the same, so I assume I am missing some hidden characters, but I can't find them or figure out how to remove them.
The most obvious example of them behaving differently is opening them in Excel. The manual file opens in Excel with each field automatically placed in its own separate column. The automated file instead puts everything into one column.
Can anybody shed some light? I am hoping that resolving this, or at least getting some info, will help with the bigger problem of it not processing correctly.
Thanks.
Example: one column
"rownum","year","month","batch","facility","transfer_facility","trans_dt","meter","ticket","trans_product","trans","shipper","customer","supplier","broker","origin","destination","quantity"
Example: separate columns
"","ROWNUM","RPT_YR","RPT_MO","BATCH_NBR","FACILITY_CD","TRANSFER_FACILITY_CD","TRANS_DT","METER_NBR","TKT_NBR","TRANS_PRODUCT_CD","TRANS_CD","SHIPPER_CD","CUSTOMER_NBR","SUPPLIER_NBR","BROKER_CD","ORIGIN_CD","DESTINATION_CD","NET_QTY"
$connectionstring = "Data Source=database;User Id=user;Password=password"
$connection = New-Object System.Data.OracleClient.OracleConnection($connectionstring)
$command = New-Object System.Data.OracleClient.OracleCommand($query, $connection)
$connection.Open()
Write-Host -ForegroundColor Black " Opening Oracle Connection"
Start-Sleep -Seconds 2
#Getting data from oracle
Write-Host
Write-Host -ForegroundColor Black "Getting data from Oracle"
$Oracle_data=$command.ExecuteReader()
Start-Sleep -Seconds 2
if ($Oracle_data.read()){
Write-Host -ForegroundColor Green "Connection Success"
while ($Oracle_data.read()) {
#Variables for recordset
$rownum = $Oracle_data.GetDecimal(0)
$rpt_yr = $Oracle_data.GetDecimal(1)
$rpt_mo = $Oracle_data.GetDecimal(2)
$batch_nbr = $Oracle_data.GetString(3)
$facility_cd = $Oracle_data.GetString(4)
$transfer_facility_cd = $Oracle_data.GetString(5)
$trans_dt = $Oracle_data.GetDateTime(6)
$meter_nbr = $Oracle_data.GetString(7)
$tkt_nbr = $Oracle_data.GetString(8)
$trans_product_cd = $Oracle_data.GetString(9)
$trans_cd = $Oracle_data.GetString(10)
$shipper_cd = $Oracle_data.GetString(11)
$customer_nbr = $Oracle_data.GetString(12)
$supplier_nbr = $Oracle_data.GetString(13)
$broker_cd = $Oracle_data.GetString(14)
$origin_cd = $Oracle_data.GetString(15)
$destination_cd = $Oracle_data.GetString(16)
$net_qty = $Oracle_data.GetDecimal(17)
#Define new file
$filename = "Pipeline" #Get-Date -UFormat "%b%Y"
$filename = $filename + ".csv"
$fileLocation = $newdir + "\" + $filename
$fileExists = Test-Path $fileLocation
#Create object to hold record
$obj = new-object psobject -prop @{
rownum = $rownum
year = $rpt_yr
month = $rpt_mo
batch = $batch_nbr
facility = $facility_cd
transfer_facility = $transfer_facility_cd
trans_dt = $trans_dt
meter = $meter_nbr
ticket = $tkt_nbr
trans_product = $trans_product_cd
trans = $trans_cd
shipper = $shipper_cd
customer = $customer_nbr
supplier = $supplier_nbr
broker = $broker_cd
origin = $origin_cd
destination = $destination_cd
quantity = $net_qty
}
$records += $obj
}
}else {
Write-Host -ForegroundColor Red " Connection Failed"
}
#Write records to file with headers
$records | Select-Object rownum,year,month,batch,facility,transfer_facility,trans_dt,meter,ticket,trans_product,trans,shipper,customer,supplier,broker,origin,destination,quantity |
ConvertTo-Csv |
Select -Skip 1|
Out-File $fileLocation
Why are you skipping the first row (usually the headers)? Also, try using Export-Csv instead:
#Write records to file with headers
$records | Select-Object rownum, year, month, batch, facility, transfer_facility, trans_dt, meter, ticket, trans_product, trans, shipper, customer, supplier, broker, origin, destination, quantity |
Export-Csv $fileLocation -NoTypeInformation
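If you'd rather keep the ConvertTo-Csv pipeline, it may also be worth forcing a single-byte encoding: Out-File in Windows PowerShell defaults to UTF-16, which Excel tends to dump into a single column. This is my guess at the cause, not something verified against your files:
#Write records to file with headers, with an explicit UTF-8 encoding
$records | Select-Object rownum, year, month, batch, facility, transfer_facility, trans_dt, meter, ticket, trans_product, trans, shipper, customer, supplier, broker, origin, destination, quantity |
ConvertTo-Csv -NoTypeInformation |
Out-File $fileLocation -Encoding utf8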