I have 2000+ records in my table, and the table has CLOB data.
I want to save each record as a separate file on my local machine.
I am using PL/SQL Developer and an Oracle 11g database.
There is a way to do this if you have access to the file system of your database server.
You can find the solution here.
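A minimal sketch of that approach, assuming a directory object named CLOB_DIR has been created on the database server (CREATE DIRECTORY plus read/write grants) and that the table and columns are called MY_TABLE, ID and CLOB_COL; adjust those names to your schema. Note that the files end up on the database server's file system, not on your local machine:

DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer VARCHAR2(32767);
  l_amount PLS_INTEGER;
  l_pos    PLS_INTEGER;
  l_len    PLS_INTEGER;
BEGIN
  FOR rec IN (SELECT id, clob_col FROM my_table) LOOP
    -- one file per record, named after the key column; 'wb' + PUT_RAW avoids line-length limits
    l_file := UTL_FILE.FOPEN('CLOB_DIR', 'record_' || rec.id || '.txt', 'wb', 32767);
    l_pos  := 1;
    l_len  := DBMS_LOB.GETLENGTH(rec.clob_col);
    WHILE l_pos <= l_len LOOP
      l_amount := 8000;  -- characters per chunk, kept small to stay under the 32767-byte RAW limit
      DBMS_LOB.READ(rec.clob_col, l_amount, l_pos, l_buffer);
      UTL_FILE.PUT_RAW(l_file, UTL_RAW.CAST_TO_RAW(l_buffer), TRUE);
      l_pos := l_pos + l_amount;
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
  END LOOP;
END;
/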
I've been tasked with figuring out a way to get data from Oracle and store it in a SQLite database. The backstory: we currently use SQLite for local storage in a mobile application, and we populate that data via a file download; because the data set is large, it can take up to 5 minutes to populate the database. An easy solution for us would be to build the table on the server and download it via HTTP. The data is currently stored in an Oracle database on the server. My question is: is it possible to create a DBLink from Oracle to SQLite to insert the data into the SQLite database on the server? If this is not possible, are there any other solutions that would achieve this?
Thanks
I have to insert table data from the test server database into the prod server database in Oracle. Both table definitions are the same.
You could use the Data Pump feature. It is a server-based bulk data movement infrastructure.
Run EXPDP against the source database to create a dump file on the source server.
Move the file to the destination server.
Run IMPDP on the destination database.
Look at a few examples here.
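A rough sketch of those three steps from the command line, assuming a directory object named DP_DIR exists on both databases (CREATE DIRECTORY dp_dir AS '/some/path') and that the schema and table names below are placeholders:

On the source server:
expdp scott/password tables=my_table directory=DP_DIR dumpfile=my_table.dmp logfile=exp_my_table.log

Copy my_table.dmp into the directory behind DP_DIR on the destination server, then:
impdp scott/password tables=my_table directory=DP_DIR dumpfile=my_table.dmp logfile=imp_my_table.log table_exists_action=append

table_exists_action=append loads the rows into the existing prod table instead of trying to recreate it.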
I have an Oracle view containing a very large amount of data, and I want to migrate this data into a table in a Greenplum database. Is there any way I can write a query in PostgreSQL to fetch that Oracle view's data?
If that is not possible with a query in PostgreSQL, kindly suggest some way to access the Oracle view from a Linux server, so that I can create a data file from that Oracle view on my Linux server and load that file via gpfdist into a Greenplum table.
NOTE: the Oracle view is from a third party; I only have access to view that data (I have all the connection info), and I can access that view via SQL Developer.
NOTE: Exporting the data from SQL Developer to my local machine is not feasible here, as the data is very large.
Thanks,
Sunny
The last time I used Greenplum (3 years ago) I don't think there were any untrusted languages like plperlu, so fetching directly from Oracle from within Greenplum might not be possible. If the data has a primary key, are you able to fetch in batches, compress it, then ship it to Greenplum?
Do you have a Greenplum support contract? If so, you could also try them if you haven't already: https://sso.emc.com/sso/login.htm
I recall that gpfdist can be configured to fetch from remote servers with a bit of fiddling, so if you are able to copy out the Oracle data to disk, you could fetch it using gpfdist without any intermediary steps.
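As a rough sketch of the gpfdist route, assuming you have already dumped the view to a CSV file (my_view.csv), that gpfdist runs on a host called etl-host, and that the column list matches your extract; all of these names are placeholders:

On the host holding the file:
gpfdist -d /data/oracle_dumps -p 8081 &

In Greenplum:
CREATE EXTERNAL TABLE staging_my_view_ext (
    id       integer,
    some_col text
)
LOCATION ('gpfdist://etl-host:8081/my_view.csv')
FORMAT 'CSV' (DELIMITER ',');

INSERT INTO my_target_table SELECT * FROM staging_my_view_ext;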
I have an Oracle server which is very heavily loaded. I want to select a big table with several million rows of data and download the data using SQL Developer. How can I do this with little performance impact on the production database?
Use Oracle's export tool (exp) for the most efficient dump of data from a table.
Example:
exp user/password tables=mytable query="optional where clause here"
Note that you need the Oracle client installed to use it.
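If the where clause contains spaces or quotes, shell escaping gets awkward; a parameter file avoids that (the file name export.par and its contents below are just an example):

exp parfile=export.par

where export.par contains something like:

userid=user/password
tables=mytable
query="where created_date >= sysdate - 7"
file=mytable.dmp
log=mytable_exp.log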
I have SampleApplite.rpd as a sample repository from OBIEE 11g; in that repository there are some tables, for example samp_targets_f.
I want to check the data inside those tables, so in TOAD I logged in as SYS, but I couldn't find the tables belonging to SampleApplite.rpd.
I am trying this on my localhost for practice, so I opened the repository offline.
How can I access those tables from TOAD? Could I try to log in to another schema?
Thanks
The tables are actually XML files in the BI Server file system: BI_EE_HOME/sample/SampleAppFiles/data
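Since they are not Oracle tables, you won't find them under any schema in TOAD; the quickest check is to open the XML files in that directory directly, e.g.:

ls BI_EE_HOME/sample/SampleAppFiles/data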