I have an Oracle 8.1.7 Server running on Windows 2000 Advanced Server in a Virtual Machine. We are currently using MS Virtual Server to host this. (The allocated hardware is powerful enough - we have 3.5GB RAM assigned, and a single 2GHz processor core, more than most servers in 1999)
One of the limitations of Virtual Server is the maximum size of a Virtual Hard Disk (127GB), and the database I'm trying to import is 143GB.
To get round this problem, I'm trying to create the DB Datafiles on the physical HDD, which has sufficient space.
My problem is that I'm having difficulty creating a database instance on a network share.
Does anyone know how I can do this while retaining my youthful good looks (and hair!)?
Cheers,
Brian
You need the account your Oracle service is started under to have access to the network share.
Can't say it's a good idea to create an Oracle datafile on a network share, but it's a viable solution if you don't mess much with your datafiles and share accessibility.
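For illustration, a minimal sketch of what that could look like once the service account has access (the share path, tablespace name and sizes are made up):

    -- Hypothetical example: place a new tablespace's datafile on a UNC path.
    -- The Windows account that runs the OracleService<SID> service needs
    -- read/write permission on \\fileserver\orashare.
    CREATE TABLESPACE import_data
      DATAFILE '\\fileserver\orashare\import_data01.dbf' SIZE 2000M
      AUTOEXTEND ON NEXT 500M MAXSIZE UNLIMITED;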
You say 'import'. If you are using exp/imp, one option may be to only import individual users or tables, and slim them down individually.
Also, the size of an exp dump file doesn't correlate directly with the size of the database. A 140GB export file may result in a much smaller database (or, conversely, a larger one, since the dump file contains only the index definitions, not the built indexes). Even a database whose datafiles total 140GB could come out smaller if those datafiles contain a lot of unused space.
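As a rough sketch, importing just one schema or a couple of tables from the dump would look something like this (user, table and file names are placeholders):

    rem Import only one schema from the full dump
    imp system/manager FILE=full_export.dmp LOG=imp_scott.log FROMUSER=scott TOUSER=scott

    rem Or only a few tables from that schema
    imp system/manager FILE=full_export.dmp LOG=imp_tables.log FROMUSER=scott TOUSER=scott TABLES=(orders,order_items)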
We are thinking about migrating our Artifactory from disk filestore to storing all the artifacts as BLOBs in an Oracle DB
https://www.jfrog.com/confluence/display/JFROG/Oracle
Unfortunately, there's not much info about this practice, so my question is: has anyone done it? My main concern is performance. Is it as fast as local filestore?
Technically, it's possible to use the database for full metadata and binary storage in Artifactory, but this is not recommended.
From the best practices for managing your Artifactory filestore:
DATABASE
By default, Artifactory separates the binaries from their metadata. Binaries are stored in the filestore and their metadata is stored in a database. While it's possible to store the binaries in the database as a BLOB, it is not recommended because databases have limitations when storing large files and this can lead to performance degradation and negatively impact operational costs.
So it's not recommended; a database is usually slower than file or object storage for serving binaries.
Your best bet is to simply test it and see if it meets your required KPIs.
It's possible to store binary files within an Oracle DB, but it's not something I'd recommend doing for a massive amount of files.
A much better practice is to store the files within storage and use the database for the file paths.
The reason I wouldn't recommend using Oracle for this is retrieval speed: you will put strain on the database, and it may end up slowing down slightly or even significantly.
The most I'd store in a database are single files directly linked to an entry and not retrieved often, such as documentation in a .pdf file.
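To illustrate the "paths in the database, files in storage" approach, a minimal sketch (table and column names are made up):

    -- Only a reference to the file is stored; the binary lives on the filesystem
    CREATE TABLE document (
      doc_id     NUMBER         PRIMARY KEY,
      file_path  VARCHAR2(4000) NOT NULL,    -- e.g. /data/files/2024/01/report.pdf
      mime_type  VARCHAR2(100),
      created_at DATE           DEFAULT SYSDATE
    );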
I have a remote server running a huge Oracle 11g database in a Docker container that I need to export/import onto my local machine.
I have already tried 2 approaches:
an attempt to copy the DB through SQL Developer led to a timeout exception
saving the running container into an image and loading it afterwards also didn't help, as an initialization error occurred; the resulting image.tar was 10.5GB
Since Oracle is commonly used in production environments and is designed to cope with large amounts of data, I'm sure there must be a clear off-the-shelf solution for exporting/importing a DB from one host to another.
Could you give me any ideas, please?
So I have this task: export a full database. There is a remote machine on which an Oracle 11g server is running. It is low on disk space, so exporting with expdp to that machine won't work.
Also, I do not have an Oracle server on my local computer, so exporting over a network link will not work for me. I used exp instead, but it has already been 4 days since I started the export to my local disk (~380 GB so far), and I still need the dump file of the database.
P.S. I can connect to the remote machine using RDP, so if there are options that would allow me to export the database dump over RDP, I would appreciate it if you could point me to where to look.
I tried to search everywhere, even in different languages.
You can create a nfs/cifs share on your system, mount it on the server and use datapump to export the data directly on the mounted share. You will still be limited by the throughput of the network link however.
You should also check the network connectivity between your machine and the server - there is no fast way to transfer 1TB through a 100Mbps network, for example.
Also, when using Windows Remote Desktop, there is a way for the remote machine to mount and access your local drives, so you can also run a Data Pump export onto such a mapped drive. Just make sure not to disconnect the Remote Desktop session. But it will most probably be very slow.
Since you need to move a lot of data, I suggest that you consider using some kind of NAS or SAN storage, if available. And again - the most likely bottleneck will be the network.
Also, to speed up the export, consider excluding statistics and synonyms. In earlier 11g versions Data Pump had issues exporting statistics, and excluding them sped up the process significantly.
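A rough sketch of such an export once the share is mounted on the server (directory name, mount point and credentials are placeholders; on Windows the directory path would be a drive letter or a \\tsclient\... path for an RDP-mapped drive):

    -- As a DBA on the server: point a directory object at the mounted share
    CREATE OR REPLACE DIRECTORY dump_share AS '/mnt/client_share';
    GRANT READ, WRITE ON DIRECTORY dump_share TO system;

Then run Data Pump against that directory:

    expdp system/password FULL=Y DIRECTORY=dump_share DUMPFILE=full_%U.dmp LOGFILE=full_export.log PARALLEL=4 EXCLUDE=STATISTICS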
In my current workplace, an existing app is being replaced by a new 3rd-party app. The database of the existing app, in Oracle 10g, needs to be migrated. The existing app stored various documents as BLOBs. Per the new app's data model, the documents are stored as files. I am tasked with converting the existing BLOBs to files.
There are around 5 million records amounting to a total of 1 TB.
I am wondering if we can leverage the idea of Oracle SecureFile in this process. We do have some Oracle 11g environments available. This is my idea:
1) Import the existing 10g BLOBs into 11g SecureFiles.
2) Convert the Oracle SecureFiles (DBFS) to Windows file system (CIFS?).
The advantage with this idea is that the BLOB to File conversion process would be native and is taken care of by Oracle (in other words, performant, tested and exception-handled process). I have no clue about the file system conversion though.
Experts, is this a feasible idea? Don't know if this helps... but the new app is on Oracle 11gR2.
You can convert the BLOBs to documents and drop them into a DBFS. If you define the DBFS to use SecureFiles (recommended) and use filesystem-like logging during the initial load, you get a well-performing filesystem, comparable to NFS performance.
The problem with the Windows environment is that you cannot mount a DBFS natively on Windows (AFAIK). You could, however, mount it on Linux and pass it through to CIFS. Not exactly an ideal solution, but maybe usable as a workaround until a native DBFS mount becomes available on Windows.
Filesystem-like logging is good for performance, not for recovery or feeding standby databases, because only the file metadata is logged, not the contents. You should account for this in your recovery process, or switch to full logging after the initial load/conversion completes. That would be my preference.
DBFS is great; combined with Advanced Compression it can save quite a lot of space.
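A small sketch of the SecureFile options mentioned above (table name and options are made up for illustration; the DBFS filesystem itself is normally created with the dbfs_create_filesystem*.sql scripts under $ORACLE_HOME/rdbms/admin):

    -- SecureFile LOB with filesystem-like logging for a fast initial load;
    -- only LOB metadata goes to redo, so the contents are not recoverable
    -- from redo and will not reach a standby database.
    CREATE TABLE doc_staging (
      doc_id   NUMBER PRIMARY KEY,
      doc_body BLOB
    )
    LOB (doc_body) STORE AS SECUREFILE (
      FILESYSTEM_LIKE LOGGING
      COMPRESS MEDIUM          -- requires the Advanced Compression option
    );

    -- After the initial load/conversion, switch back to full logging:
    ALTER TABLE doc_staging MODIFY LOB (doc_body) (NOCACHE LOGGING);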
I have been given a task to install Oracle 11g on a Dell T110 server with 16GB RAM and 1.5TB of disk space, running Oracle Enterprise Linux 5 (RH kernel) x86-64. I have successfully installed one DB instance on this machine, and now I need to create another 3 instances on this server. I believe I need 4 different Oracle SIDs for this. I am not very familiar with Linux, and even installing one instance took nearly 3/4 of a day. I need your help to create 3 more instances of this DB; please provide the commands I should execute in the shell. Would the problem be solved if I created 3 more Linux users and installed the Oracle DB again for each user with a different SID?
First off, are you certain that you need four separate database instances rather than four schemas in a single database? Ideally, you'd have only one Oracle database per server but would run multiple applications in multiple schemas in that single database. What other products (e.g. SQL Server) refer to as a "database" roughly corresponds to a schema in Oracle.
Assuming you do need four separate database instances, you don't need separate operating system users. You can use the same operating system user you used previously (generally "oracle"). When you're doing the install, you just need to choose a different Oracle Home for each database. I'm also assuming that you are setting the memory-related parameters appropriately during each install rather than relying on the defaults, which would cause each database to try to use a substantial fraction of the 16 GB of physical RAM you have available. You'd need to ensure that the total memory allocated to all four database instances, plus whatever RAM is required by the operating system and any other applications on this server, is less than the 16 GB of physical RAM available.
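If it helps, here is a rough sketch of creating an additional database with DBCA in silent mode (the Oracle Home path, SID, passwords and memory figure are placeholders; run it as the same "oracle" OS user):

    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH

    # Creates a new database/instance with its own SID; repeat with a different -sid
    dbca -silent -createDatabase \
         -templateName General_Purpose.dbc \
         -gdbname orcl2 -sid orcl2 \
         -sysPassword change_me -systemPassword change_me \
         -storageType FS -datafileDestination /u01/oradata \
         -characterSet AL32UTF8 \
         -totalMemory 2048

Size -totalMemory (in MB) so that all four instances together stay well under the 16 GB of physical RAM.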