Using PowerShell to export big output from Oracle to CSV

I need to export quite a big CSV file from Oracle once a week.
I tried two approaches:
1. Adapter.Fill(dataset)
2. Looping through columns and rows to save into a CSV file one line at a time.
The first one runs out of memory (the server machine has only 4 GB of RAM); the second one takes about an hour, as there are over 4 million rows to export.
Here's code #1:
#Your query. It cannot contain any double quotes otherwise it will break.
$query = "SELECT manycolumns FROM somequery"
#Oracle login credentials and other variables
$username = "username"
$password = "password"
$datasource = "database address"
$output = "\\NetworkLocation\Sales.csv"
#Creates a blank CSV file and makes sure it's in ASCII
Out-File $output -Force ascii
#This will look for the "Oracle.ManagedDataAccess.dll" file inside the "C:\Oracle" folder. We usually have two versions of Oracle installed, so the adapter can be in different locations. Needs changing if Oracle is installed elsewhere.
$location = Get-ChildItem -Path C:\Oracle -Filter Oracle.ManagedDataAccess.dll -Recurse -ErrorAction SilentlyContinue -Force
#Establishes connection to Oracle using the DLL file
Add-Type -Path $location.FullName
$connectionString = 'User Id=' + $username + ';Password=' + $password + ';Data Source=' + $datasource
$connection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($connectionString)
$connection.open()
$command=$connection.CreateCommand()
$command.CommandText=$query
#Creates a table in memory and fills it with results from the query. Then, export the virtual table into CSV.
$DataSet = New-Object System.Data.DataSet
$Adapter = New-Object Oracle.ManagedDataAccess.Client.OracleDataAdapter($command)
$Adapter.Fill($DataSet)
$DataSet.Tables[0] | Export-Csv $output -NoTypeInformation
$connection.Close()
And here's code #2:
#Your query. It cannot contain any double quotes otherwise it will break.
$query = "SELECT manycolumns FROM somequery"
#Oracle login credentials and other variables
$username = "username"
$password = "password"
$datasource = "database address"
$output = "\\NetworkLocation\Sales.csv"
$tempfile = $env:TEMP + "\Temp.csv"
#Creates a blank CSV file and makes sure it's in ASCII
Out-File $tempfile -Force ascii
#This will look for the "Oracle.ManagedDataAccess.dll" file inside the "C:\Oracle" folder. Needs changing if Oracle is installed elsewhere.
$location = Get-ChildItem -Path C:\Oracle -Filter Oracle.ManagedDataAccess.dll -Recurse -ErrorAction SilentlyContinue -Force
#Establishes connection to Oracle using the DLL file
Add-Type -Path $location.FullName
$connectionString = 'User Id=' + $username + ';Password=' + $password + ';Data Source=' + $datasource
$connection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($connectionString)
$connection.open()
$command=$connection.CreateCommand()
$command.CommandText=$query
#Reads results row by row, column by column. This way you don't have to specify how many columns there are.
$reader = $command.ExecuteReader()
while($reader.Read()) {
$props = @{}
for($i = 0; $i -lt $reader.FieldCount; $i+=1) {
$name = $reader.GetName($i)
$value = $reader.item($i)
$props.Add($name, $value)
}
#Exports each line to the CSV file. Works best when the file is on a local drive, as it saves after each line.
new-object PSObject -Property $props | Export-Csv $tempfile -NoTypeInformation -Append
}
Move-Item $tempfile $output -Force
$connection.Close()
Ideally, I would like to use the first approach, as it is way faster than the second one, while somehow avoiding running out of memory.
Do you guys and gals know if there's some way to "fill" the first 1 million records, append them to the CSV, clear the DataSet table, fill the next 1 million, etc.? After the code finishes running, the CSV weighs ~1.3 GB, but while it runs even 8 GB of memory is not enough (my laptop has 8 GB but the server only has 4 GB, and it really hits it hard).
Any tips will be appreciated.

In the *nix community we love one-liners!
You can set markup to 'csv on' in sqlplus (>= 12)
Create the query file
cat > query.sql <<EOF
set head off
set feed off
set timing off
set trimspool on
set term off
spool output.csv
select
object_id,
owner,
object_name,
object_type,
status,
created,
last_ddl_time
from dba_objects;
spool off
exit;
EOF
Spool the output.csv like this:
sqlplus -s -m "CSV ON DELIM ',' QUOTE ON" user/password@\"localhost:1521/<my_service>\" @query.sql
Another option is SQLcl (the SQL Developer CLI tool; binary name 'sql', renamed by me to 'sqlcl').
Create the query file (Note! term on|off)
cat > query.sql <<EOF
set head off
set feed off
set timing off
set term off
set trimspool on
set sqlformat csv
spool output.csv
select
object_id,
owner,
object_name,
object_type,
status,
created,
last_ddl_time
from dba_objects
where rownum < 5;
spool off
exit;
EOF
Spool the output.csv like this:
sqlcl -s system/oracle@\"localhost:1521/XEPDB1\" @query.sql
Voilà!
cat output.csv
9,"SYS","I_FILE#_BLOCK#","INDEX","VALID",18.10.2018 07:49:04,18.10.2018 07:49:04
38,"SYS","I_OBJ3","INDEX","VALID",18.10.2018 07:49:04,18.10.2018 07:49:04
45,"SYS","I_TS1","INDEX","VALID",18.10.2018 07:49:04,18.10.2018 07:49:04
51,"SYS","I_CON1","INDEX","VALID",18.10.2018 07:49:04,18.10.2018 07:49:04
And the winner is sqlplus for 77k rows! (with the rownum < 5 filter removed)
time sqlcl -s system/oracle@\"localhost:1521/XEPDB1\" @query.sql
real 0m23.776s
user 0m39.542s
sys 0m1.293s
time sqlplus -s -m "CSV ON DELIM ',' QUOTE ON" system/oracle@localhost/XEPDB1 @query.sql
real 0m3.066s
user 0m0.700s
sys 0m0.265s
wc -l output.csv
77480 output.csv
You can experiment with formats in SQL Developer.
select /*CSV|HTML|JSON|TEXT|<TONSOFOTHERFORMATS>*/ * from dba_objects;
If you are loading CSV into the database, this tool will do it!
https://github.com/csv2db/csv2db
Best of luck!

Thank you all for the responses; I learned about Oracle scripts and SQL*Plus, which I never knew existed. I will probably use them in the future, but I guess I will have to update my Oracle Developer package.
I have found a way to edit my code to work using the documentation here:
https://docs.oracle.com/database/121/ODPNT/OracleDataAdapterClass.htm#i1002865
It's not perfect, as it pauses every 1 million rows, saves the output, and re-runs the query, which re-evaluates it (the one I'm running takes about 1-2 minutes to evaluate).
It's basically the same as running the code x times (where x is the ceiling of the number of rows in millions), doing "FETCH FIRST 1'000'000 ROWS ONLY", then "OFFSET 1'000'000 ROWS FETCH NEXT 1'000'000 ROWS ONLY", etc., and appending each chunk to the bottom of the CSV.
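For illustration, the manual equivalent would look something like this (a sketch against the placeholder query; "somekey" is a hypothetical stable sort key, which OFFSET paging needs in order to return consistent pages):
SELECT manycolumns FROM somequery ORDER BY somekey FETCH FIRST 1000000 ROWS ONLY;
SELECT manycolumns FROM somequery ORDER BY somekey OFFSET 1000000 ROWS FETCH NEXT 1000000 ROWS ONLY;
-- ...and so on, until a page comes back with fewer than 1000000 rows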
Here's the code:
#Your query. It cannot contain any double quotes otherwise it will break.
$query = "SELECT
A lot of columns
FROM
a lot of tables joined together
WHERE
a lot of conditions
"
#Oracle login credentials and other variables
$username = "myusername"
$password = "mypassword"
$datasource = "TNSnameofmyDatasource"
$output = "$env:USERPROFILE\desktop\Sales.csv"
#Creates a blank CSV file and makes sure it's in ASCII, as that's what the output of my query is
Out-File $output -Force ascii
#This will look for the "Oracle.ManagedDataAccess.dll" file inside the "C:\Oracle" folder. Needs changing if Oracle is installed elsewhere.
$location = Get-ChildItem -Path C:\Oracle -Filter Oracle.ManagedDataAccess.dll -Recurse -ErrorAction SilentlyContinue -Force
#Establishes connection to Oracle using the DLL file
Add-Type -Path $location.FullName
$connectionString = 'User Id=' + $username + ';Password=' + $password + ';Data Source=' + $datasource
$connection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($connectionString)
$connection.open()
$command=$connection.CreateCommand()
$command.CommandText=$query
#Creates a table in memory to be filled up with results from the query using ODAC
$DataSet = New-Object System.Data.DataSet
$Adapter = New-Object Oracle.ManagedDataAccess.Client.OracleDataAdapter($command)
#Declaring variables for the loop
$fromrecord = 0
$numberofrecords = 1000000
$timesrun = 0
#Loop as long as the number of rows in the virtual table equals the specified $numberofrecords
while(($timesrun -eq 0) -or ($DataSet.Tables[0].Rows.Count -eq $numberofrecords))
{
$DataSet.Clear()
$Adapter.Fill($DataSet,$fromrecord,$numberofrecords,'*') | Out-Null #Suppresses writing to console the number of rows filled
Write-progress "Saved: $fromrecord Rows"
$DataSet.Tables[0] | Export-Csv $output -Append -NoTypeInformation
$fromrecord=$fromrecord+$numberofrecords
$timesrun++
}
$connection.Close()
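For completeness, here is an untested sketch of a middle road that avoids both the DataSet memory usage and re-running the query: keep code #2's single OracleDataReader, but write lines through a .NET StreamWriter instead of calling Export-Csv once per row ($command and $output are reused from above; the naive quoting assumes the data itself contains no double quotes or line breaks):
$reader = $command.ExecuteReader()
$writer = New-Object System.IO.StreamWriter($output, $false, [System.Text.Encoding]::ASCII)
#Header row, quoted the way Export-Csv would write it
$header = for ($i = 0; $i -lt $reader.FieldCount; $i++) { '"' + $reader.GetName($i) + '"' }
$writer.WriteLine($header -join ',')
while ($reader.Read()) {
#Naive quoting; embedded double quotes in the data would need doubling
$values = for ($i = 0; $i -lt $reader.FieldCount; $i++) { '"' + [string]$reader.GetValue($i) + '"' }
$writer.WriteLine($values -join ',')
}
$writer.Close()
$reader.Close()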

Related

How to display the output (query result) from an update query which is executed by a powershell script?

On a daily basis, I need to update a certain number of records in a DB.
To update this DB, I am using Merge --> Select --> Update sequentially.
But I need to display the output from this update statement (in a log file).
Code: update_status.ps1
$FilePath = $HOME+"\bin\ORACLE_CONNECTION_HOME\oracle_config.properties"
$SID=Select-String -Pattern "oracle_SID" -Path $FilePath
$Data_Source=$SID.ToString().split('=')[1]
$user_name=Select-String -Pattern "oracle_user_name" -Path $FilePath
$User=$user_name.ToString().split('=')[1]
$user_password=Select-String -Pattern "oracle_user_password" -Path $FilePath
$Pwd=$user_password.ToString().split('=')[1]
$connectionString= "Data Source=$Data_Source;User Id=$User;Password=$Pwd;Integrated Security=no"
[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | Out-Null
$connection = New-Object System.Data.OracleClient.OracleConnection($connectionString)
function Oracle_Connection ( $query)
{
$connectionString= "Data Source=$Data_Source;User Id=$User;Password=$Pwd;Integrated Security=no"
[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | Out-Null
$connection = New-Object System.Data.OracleClient.OracleConnection($connectionString)
$queryString = $query
$command = New-Object System.Data.OracleClient.OracleCommand($queryString, $connection)
$connection.Open()
$dataset = New-Object System.Data.DataTable
$oracleadapter = New-Object System.Data.OracleClient.OracleDataAdapter $command
$resultcount = $oracleadapter.fill($dataset)
$result = $command.ExecuteScalar()
Write-Host $result
$connection.Close()
}
function Update_p2c ($p2c, $c2p)
{
Write-Host "Updating P2C"
$query_sub_p2c ="MERGE INTO TABLE TB USING (SELECT ...) src ON ( NAME = src.NAME) WHEN MATCHED THEN UPDATE SET TB.P2C = src.ID";
Oracle_Connection $query_p2c
if ($resultcount -gt 0) { Write-Host "$resultcount rows were updated"} else {Write-Host "No rows were updated"}
}
##Initial setup completed.
#Defining Source and Target variables used in functions
$p2c = 'P2C'
$c2p = 'C2P'
Update_p2c -p2c $p2c -c2p $c2p
##End
Result:
PS D:\
Updating P2C
No rows were updated
However, when I run the select & update manually in the DB, I can see the rows getting selected as well as updated.
This script is triggered by a .bat file in Task Scheduler, and it generates a log file.
bat file:
pushd "%~dp0"
start /B /WAIT powershell -File "D:\bin\update_status.ps1" >> D:\log\update_status_%USERNAME%_%date%_log.log 2>&1
exit
My requirement is: I need to get the output from the update ("so-and-so rows updated") into the log file. Even if no rows get updated, it should say so.
Please let me know if my ask is not clear.
Any help would be greatly appreciated :)
In principle you do not need a batch file to run a PowerShell script from the Task Scheduler.
Task -> Actions -> Program/script = powershell.exe
Task -> Actions -> Add arguments = -File "D:\bin\update_status.ps1"
If the data you are looking for is stored in the variable $result, simply write it to the logfile:
$result | set-content "D:\log\update_status_$($env:username)_$(get-date -format 'yyyy-dd-MM')_log.log"
Note that $env:username will always be the identity the scheduled task runs as.
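A side note on why "No rows were updated" appears (an assumption based on the script shown): $resultcount is assigned inside Oracle_Connection, so it is never visible in Update_p2c, and Update_p2c also passes the undefined $query_p2c instead of $query_sub_p2c. For MERGE/UPDATE statements, ExecuteNonQuery returns the number of affected rows, so a hedged sketch along those lines would return the count from the function:
function Oracle_Connection ($query)
{
$connection = New-Object System.Data.OracleClient.OracleConnection($connectionString)
$command = New-Object System.Data.OracleClient.OracleCommand($query, $connection)
$connection.Open()
# ExecuteNonQuery returns the number of rows affected by MERGE/UPDATE/DELETE
$rowsAffected = $command.ExecuteNonQuery()
$connection.Close()
return $rowsAffected
}
$resultcount = Oracle_Connection $query_sub_p2c
if ($resultcount -gt 0) { "$resultcount rows were updated" } else { "No rows were updated" }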

Compare columns between 2 files and delete non common columns using Powershell

I have a bunch of files in folder A and their corresponding metadata files in folder B. I want to loop through the data files and check whether the columns are the same in the metadata file (since incoming data files could have new columns added at any position without notice). If the columns in both files match, no action is to be taken. If the data file has more columns than the metadata file, those columns should be deleted from the incoming data file. Any help would be appreciated. Thanks!
Data file is ps_job.txt
“empid”|”name”|”deptid”|”zipcode”|”salary”|”gender”
“1”|”Tom”|”10″|”11111″|”1000″|”M”
“2”|”Ann”|”20″|”22222″|”2000″|”F”
Meta data file is ps_job_metadata.dat
“empid”|”name”|”zipcode”|”salary”
I would like my output to be
“empid”|”name”|”zipcode”|”salary”
“1”|”Tom”|”11111″|”1000″
“2”|”Ann”|”22222″|”2000″
That's a seemingly simple question with a very complicated answer. However, I've broken down the code for what you will need to do. Here are the steps that need to happen in order for PowerShell to do everything you're asking of it.
Read the .dat file
Save the .dat data into an object
Read the .txt file
Save the .txt header into an object
Check for the differences
Delete the old text file (that had too many columns)
Create a new text file with the new columns
I've made some assumptions in how this looks. However, with the way I've structured the code, it should be easy enough to make modifications as necessary if my assumptions are wrong. Here are my assumptions:
The text file will always have all of the columns that the DAT file has (even though it will sometimes have more)
The dat file is structured like a text file and can be directly imported into powershell.
And here is the code, with comments. I've done my best to explain the purpose of each section, but I've written this with the expectation that you have a basic knowledge of powershell, especially arrays. If you have questions I'll do my best to answer, though I'll ask that you refer to the section of code you have questions on.
###
### The paths. I'm sure you will have multiples of each file. However, I didn't want to attempt to pull in
### the files with this sample code as it can vary so much in your environment.
###
$dat = "C:\StackOverflow\thingy.dat"
$txt = "C:\stackoverflow\ps_job.txt"
###
### This is the section to process the DAT file
###
# This will read the file and put it in a variable
$dat_raw = get-content -Path $dat
# Now, let's separate out the punctuation and give us our object
$dat_array = $dat_raw.split("|")
$dat_object = @()
foreach ($thing in $dat_array)
{
$dat_object+=$thing.Replace("""","")
}
###
### This is the section to process the TXT file
###
# This will read the file and put it into a variable
$txt_raw = get-content -Path $txt
# Now, let's separate out the punctuation and give us our object
$txt_header_array = $txt_raw[0].split("|")
$txt_header_object = @()
foreach ($thing in $txt_header_array)
{
$txt_header_object += $thing.Replace("""","")
}
###
### Now, let's figure out which columns we're eliminating (if any)
###
$x = 0
$total = $txt_header_object.count
$to_keep = @()
While ($x -lt $total)
{
if ($dat_object -contains $txt_header_object[$x])
{
$to_keep += $x
}
$x++
}
### Now that we know which columns to keep, we can apply the changes to each line of the text file.
### We will save each line to a new variable. Then, once we have the new variable, we will delete
### the existing file and replace it with a new file that has only the data we want. Note, we will
### only run this code if there's a difference in the files.
if ($total -ne $to_keep.count)
{
### This first section will go line by line and 'fix' the number of columns
$new_text_file = @()
foreach ($line in $txt_raw)
{
if ($line.Length -gt 0)
{
# Blank out the array each time
$line_array = @()
foreach ($number in $to_keep)
{
$line_array += ($line.split("|"))[$number]
}
$new_text_file += $line_array -join "|"
}
else
{
$new_text_file +=""
}
}
### This second section will delete the original file and replace it with our good
### file that has been created.
Remove-item -Path $txt
$new_text_file | out-file -FilePath $txt
}
This small example can be a starting point for your solution:
$ps_job = Import-Csv D:\ps_job.txt -Delimiter '|'
$ps_job_metadata = (Get-Content D:\ps_job_metadata.txt) -split '\|' -replace '"'
# The data file's column names, to compare against the metadata columns
$column = $ps_job[0].psobject.Properties.Name
foreach( $d in (Compare-Object $column $ps_job_metadata))
{
if($d.SideIndicator -eq '<=')
{
$ps_job | %{ $_.psobject.Properties.Remove($d.InputObject) }
}
}
$ps_job | Export-Csv -Path D:\output.txt -Delimiter '|' -NoTypeInformation
I tried this and it works.
$outputFile = "C:\Script_test\ps_job_mod.dat"
$sample = Import-Csv -Path "C:\Script_test\ps_job.dat" -Delimiter '|'
$metadataLine = Get-Content -Path "C:\Script_test\ps_job_metadata.txt" -First 1
$desiredColumns = $metadataLine.Split("|").Replace("`"","")
$sample | select $desiredColumns | Export-Csv $outputFile -Encoding UTF8 -NoTypeInformation -Delimiter '|'
Please note that the smart quotes are inconsistent across the rows and there are empty lines between the rows (I highly recommend reformatting/updating your question).
Anyways, as long as the quoting of the header is consistent between the two (ps_job.txt and ps_job_metadata.dat) files:
# $JobTxt = Get-Content .\ps_job.txt
$JobTxt = @'
“empid”|”name”|”deptid”|”zipcode”|”salary”|”gender”
“1”|”Tom”|”10″|”11111″|”1000″|”M”
“2”|”Ann”|”20″|”22222″|”2000″|”F”
'@
# $MetaDataTxt = Get-Content .\ps_job_metadata.dat
$MetaDataTxt = @'
“empid”|”name”|”zipcode”|”salary”
'@
$Job = ConvertFrom-Csv -Delimiter '|' $JobTxt
$MetaData = ConvertFrom-Csv -Delimiter '|' (@($MetaDataTxt) + 'x|')
$Job | Select-Object $MetaData.PSObject.Properties.Name
“empid” ”name” ”zipcode” ”salary”
------- ------ --------- --------
“1” ”Tom” ”11111″ ”1000″
“2” ”Ann” ”22222″ ”2000″
Here's the same answer I posted to your question on Powershell.org
$jobfile = "ps_job.dat"
$metafile = "ps_job_metadata.dat"
$outputfile = "some_file.csv"
$meta = ((Get-Content $metafile -First 1 -Encoding UTF8) -split '\|')
Class ColumnSelector : System.Collections.Specialized.OrderedDictionary {
Select($line,$meta)
{
$meta | foreach{$this.add($_,(iex "`$line.$_"))}
}
ColumnSelector($line,$meta)
{
$this.select($line,$meta)
}
}
import-csv $jobfile -Delimiter '|' |
foreach{[pscustomobject]([columnselector]::new($_,$meta))} |
Export-CSV $outputfile -Encoding UTF8 -NoTypeInformation -Delimiter '|'
Output
PS C:\>Get-Content $outputfile
"empid"|"name"|"zipcode"|"salary"
"1"|"Tom"|"11111"|"1000"
"2"|"Ann"|"22222"|"2000"
Provided you want to keep those curly quotes and your code page and console font support all the characters, you can do the following:
# Create array of properties delimited by |
$headers = (Get-Content .\ps_job_metadata.dat -Encoding UTF8) -split '\|'
Import-Csv ps_job.dat -Delimiter '|' -Encoding utf8 | Select-Object $headers

Powershell to Audit Folder Permissions Showing Group Membership

Background: I've been using Netwrix to audit permissions on network shares for a few years now, and it's only ever worked smoothly one time, so I've decided to move on to an automated PowerShell script. I've run into a block: when I try to parse out the group members, it doesn't like the network name in front of the group name (TBANK). I also need to take the next step of showing just the name instead of the whole output of Get-ADGroupMember. Any help would be greatly appreciated, as I'm very new to scripting with PowerShell. Current script below:
$OutFile = "C:\users\user1\Desktop\test.csv" # Insert folder path where you want to save your file and its name
$Header = "Folder Path,IdentityReference, Members,AccessControlType,IsInherited,InheritanceFlags,PropagationFlags"
$FileExist = Test-Path $OutFile
If ($FileExist -eq $True) {Del $OutFile}
Add-Content -Value $Header -Path $OutFile
$Folder = "\\server1.tbank.com\share1"
$ACLs = get-acl $Folder | ForEach-Object { $_.Access }
Foreach ($ACL in $ACLs){
$ID = $ACL.IdentityReference
$ID = $ID -replace 'TBANK\' , ''
$ACType = $ACL.AccessControlType
$ACInher = $ACL.IsInherited
$ACInherFlags = $ACL.InheritanceFlags
$ACProp = $ACL.PropagationFlags
$Members = get-adgroupmember $ID.
$OutInfo = $Folder + "," + $ID + "," + $Members + "," + $ACType + "," + $ACInher + "," + $ACInherFlags + "," + $ACProp
Add-Content -Value $OutInfo -Path $OutFile
}
First of all, there is a much better way to output a CSV file than by trying to write each row yourself (with the risk of missing required quotes), called Export-Csv.
To use that cmdlet, you will need to create an array of objects, which is not hard to do.
$OutFile = "C:\users\user1\Desktop\test.csv" # Insert folder path where you want to save your file and its name
$Folder = "\\server1.tbank.com\share1"
# get the Acl.Access for the folder, loop through and collect PSObjects in variable $result
$result = (Get-Acl -Path $Folder).Access | ForEach-Object {
# -replace uses regex, so you need to anchor to the beginning of
# the string with '^' and escape the backslash by doubling it
$id = $_.IdentityReference -replace '^TBANK\\' # remove the starting string "TBANK\"
# Get-ADGroupMember can return users, groups, and computers. If you only want users, do this:
# $members = (Get-ADGroupMember -Identity $id | Where-Object { $_.objectClass -eq 'user'}).name -join ', '
$members = (Get-ADGroupMember -Identity $id).name -join ', '
# output an object with all properties you need
[PsCustomObject]@{
'Folder Path' = $Folder
'IdentityReference' = $id
'Members' = $members
'AccessControlType' = $_.AccessControlType
'IsInherited' = $_.IsInherited
'InheritanceFlags' = $_.InheritanceFlags -join ', '
'PropagationFlags' = $_.PropagationFlags -join ', '
}
}
# output on screen
$result | Format-List
# output to CSV file
$result | Export-Csv -Path $OutFile -Force -UseCulture -NoTypeInformation
I've added a lot of inline comments to hopefully make things clear for you.
The -UseCulture switch on the Export-Csv line makes sure the field delimiter used matches what is set on your system as the list separator. This helps when opening the CSV file in Excel.
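You can check what that separator is on your machine:
(Get-Culture).TextInfo.ListSeparator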
P.S. The Get-ADGroupMember cmdlet also has a switch called -Recursive. With that, it will also get the members of groups nested inside groups.
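For example, the line from the code above could become (filtered to users only, with nested groups expanded):
$members = (Get-ADGroupMember -Identity $id -Recursive | Where-Object { $_.objectClass -eq 'user' }).name -join ', '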

Rename files based on SQL query

I have a table with filenames in it and also a userID for the person whose file it is. What I need to do is rename the file so that the userID is in the name of the file. The file right now might be 01234_main1.3gp and the user might be 987654 so I would want it to become 987654_main1.3gp.
What I have below renames the files to System.Data.DataRow+_main1.3gp. I know the variable works; immediately before the rename command I do a Write-Output and the correct number sequence is there.
$items = Get-ChildItem -Name -Path 'c:\main1'
foreach ($item in $items) {
$Query = #"
SELECT [su_ID] as $SU_ID
FROM [database].[dbo].[table]
WHERE (assetLog_m1audio = '$item')
"#
Invoke-Sqlcmd -ServerInstance surveyname -Database database -Query $Query
Write-Output $su_ID
Rename-Item C:\main1\$item $su_ID+'_main1.3gp'
}
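For what it's worth, a hedged sketch of a likely fix (keeping the server, database, and column names from the question): Invoke-Sqlcmd returns DataRow objects, so the value has to be read from the row's su_ID column, and the new name has to be built as a single string:
$items = Get-ChildItem -Name -Path 'c:\main1'
foreach ($item in $items) {
$Query = @"
SELECT [su_ID]
FROM [database].[dbo].[table]
WHERE (assetLog_m1audio = '$item')
"@
# Invoke-Sqlcmd returns DataRow objects; pull the su_ID column out of the result
$row = Invoke-Sqlcmd -ServerInstance surveyname -Database database -Query $Query
$su_ID = $row.su_ID
Rename-Item "C:\main1\$item" ($su_ID + '_main1.3gp')
}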

SQLPackage.exe filename with the date

I am trying to automate a backup of an Azure database to my local machine using SQLPackage.exe. I am trying to add the date onto the filename so that every night it doesn't get overwritten.
The following line will pick up the date but will then stop the backup running with the error shown below
CMD
"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /Action:Export /ssn:SERVER_NAME_HERE /sdn:DATABASE_NAME /su:USERNAME /sp:PASSWORD /tf:C:\Users\William\Desktop\BackupTest\BACKUPFILE'%date%'.bacpac
ERROR
*** Unrecognized command line argument '23/06/2017'.bacpac'.
I have tried using
+%date%+
+%date
And other options but no luck. Can anyone suggest anything?
More fundamentally, it is not recommended to use a bacpac to back up a database. A bacpac is for loading and moving data in and out of Azure on demand.
SQL DB on Azure has its backup service on by default, so scheduled backups are already provided by the service.
In addition, to properly make a bacpac, the database needs to be copied first and the bacpac then made from the copy. Otherwise transactional consistency is not guaranteed, and importing the bacpac can fail in the worst case.
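For example, the copy can be made with T-SQL first (hypothetical database names; run against the logical server's master database):
CREATE DATABASE MyDatabase_Copy AS COPY OF MyDatabase;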
You can add the date using PowerShell, as shown in the example below.
Param(
[Parameter(Position=0,Mandatory=$true)]
[string]$ServerName
)
cls
try {
if((Get-PSSnapin -Name SQlServerCmdletSnapin100 -ErrorAction SilentlyContinue) -eq $null){
Add-PSSnapin SQlServerCmdletSnapin100
}
}
catch {
Write-Error "This script requires the SQLServerCmdletSnapIn100 snapin"
exit
}
$script_path = Split-Path -Parent $MyInvocation.MyCommand.Definition
$sql = "
SELECT name
FROM sys.databases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb','distribution')
"
$data = Invoke-sqlcmd -Query $sql -ServerInstance $ServerName -Database master
$data | ForEach-Object {
$DatabaseName = $_.name
$now = Get-Date -Format 'yyyyMMdd_HHmmss' # formatted so the timestamp is valid in a file name
#
# Run sqlpackage
#
&"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" `
/Action:extract `
/SourceServerName:$ServerName `
/SourceDatabaseName:$DatabaseName `
/TargetFile:$script_path\DACPACs\$DatabaseName$now.dacpac `
/p:ExtractReferencedServerScopedElements=False `
/p:IgnorePermissions=False
}
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com
