How to increase the TEMP tablespace size in Oracle?

Currently my Oracle 11g TEMP tablespace is 34 GB. I need to increase it to a larger value (45 GB).
I tried the following SQL command to increase the temp tablespace:
ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp01.dbf' SIZE 45G;
The error:
SQL Error: ORA-01144: File size (5536951 blocks) exceeds maximum of
4194303 blocks
01144. 00000 - "File size (%s blocks) exceeds maximum of %s blocks"
*Cause: Specified file size is larger than maximum allowable size value.
*Action: Specify a smaller size.
SELECT value FROM v$parameter WHERE name = 'db_block_size';
The "db_block_size" value is 8192
How do I determine the maximum file size allowed for my db_block_size, and hence the corresponding TEMP tablespace size?
How do I increase the TEMP tablespace?

The error message is pretty clear: the maximum file size is 4194303 blocks. If you multiply that out:
4194303 blocks * 8192 bytes/block / 1024^3 = 32 GB
So you're limited to individual data/temp files of up to 32 GB. You can, however, have thousands of files in a tablespace. So you could have a 32 GB temp file and another 13 GB temp file, or two 22.5 GB temp files, or nine 5 GB temp files.
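A minimal sketch of the first option above (the file paths are hypothetical; adjust them to your layout):
ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp02.dbf' SIZE 32G;  -- hypothetical path
ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp03.dbf' SIZE 13G;  -- hypothetical path
You can then verify the tempfiles and their sizes with:
SELECT file_name, ROUND(bytes/1024/1024/1024) AS gb FROM dba_temp_files;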

Related

Block size effect in Hadoop

I am working on Hadoop Apache 2.7.1, and I am adding files whose size doesn't exceed 100 KB.
If I configure the block size to be 1 MB, or leave it at the default value of 128 MB, that won't affect my files, because each will be saved in one block only, and one block will be retrieved when we download the file.
But what will be the difference in block storage size? I mean, does storing files with a 1 MB block size differ from storing them with a 128 MB block size when the files are smaller than 1 MB?
That is, when a 1 MB file is stored in a block of size 128 MB, will it reserve the whole block, so that the block cannot be used for other files? Or will the empty space be used for other files, with a pointer referring to each file's start location in the block?
I found no difference in upload and download times. Are there any other points that I have to consider?
I am going to cite the (now discontinued) SO documentation for this, written by me, because why not.
Say, for example, you have a file of size 1024 MB. If your block size is 128 MB, you will get 8 blocks of 128 MB each. This means that your namenode will need to store metadata for 8 x 3 = 24 blocks (3 being the replication factor).
Consider the same scenario with a block size of 4 KB. It will result in 1 GB / 4 KB = 262,144 blocks, and that will require the namenode to keep metadata for 786,432 block replicas for just a 1 GB file. Since all this metadata is stored in memory, a larger block size is preferred, to save the NameNode that bit of extra load.

Isn't the default block size in HDFS the minimum file size?

HDFS has a default block size of 64 MB. So, does that mean the minimum size of a file in HDFS is 64 MB?
i.e. if we create/copy a file which is less than 64 MB in size (say 5 bytes), then my assumption is that the actual size of that file in HDFS is one block, i.e. 64 MB. But when I copy a 5-byte file to HDFS and look at its size (through the ls command), I still see the size of that file as 5 bytes. Shouldn't that be 64 MB?
Or is the ls command showing the size of the data in the file instead of the block size of the file on HDFS?
The default HDFS block size does not mean that a block will use all the space we have specified (i.e. 64 MB). If the data is larger than 64 MB, it will be split into blocks (data size / 64 MB), and that number of blocks will be created.
If you run the ls command, it will only show you the space currently in use.
For example: I uploaded a test.txt file to HDFS with the block size set to 128 MB and replication set to 2, but the actual file size is only 193 B:
Permission   Owner   Group       Size   Last Modified           Replication  Block Size  Name
-rw-r--r--   hduser  supergroup  193 B  10/27/2016, 2:58:41 PM  2            128 MB      test.txt
The default block size is the maximum size of a block. Each file consists of blocks, which are distributed (and replicated) to different datanodes on HDFS. The namenode knows which blocks constitute a file and where to find them.
If a file exceeds 64 MB (128 MB in newer versions), it cannot be written using a single block; it will need at least two.
Of course, if it is less than 64 MB, it can be written in a single block, which will occupy only as much space as necessary (less than 64 MB).
After all, it doesn't make sense for a 5-byte file to occupy 64 MB.

How to create a tablespace in Oracle with an initial size of 10 MB and let it grow all the way to the capacity of the disk?

How do I create a tablespace in Oracle with an initial data file size of 10 MB and let it grow all the way to the capacity of the disk? So if the disk is 100 GB, the tablespace should be able to grow to 100 GB without any change in the configuration.
Use MAXSIZE (though IMHO it's not a good idea to fill your disk):
CREATE BIGFILE TABLESPACE BIGTBS1 DATAFILE '/u01/app/oracle/oradata/OCM/bigtbs01.dbf'
SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100G;
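If you really don't want a fixed cap, AUTOEXTEND also accepts MAXSIZE UNLIMITED, in which case growth is bounded only by the datafile size limit and the disk itself. A sketch with a hypothetical path:
CREATE BIGFILE TABLESPACE BIGTBS2 DATAFILE '/u01/app/oracle/oradata/OCM/bigtbs02.dbf'  -- hypothetical path
SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;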

What is the maximum space limit per table in Oracle 10g and 11g?

I have a production table. Rows in the table are updated and deleted daily. So I want to know how much space is allowed per table in Oracle 10g and 11g. If we exceed the limit, how do we fix the problem?
Practically, there is no limit. At least not one that any human is likely to hit.
If you want to be pedantic, a bigfile tablespace can be up to 32 TB in size if the database is using 8k blocks or 128 TB if the database is using 32k blocks. A table can have up to 1024k partitions. Each partition could be in a different tablespace but you can only have 64k tablespaces. So if you have 64k partitions and each partition in a separate bigfile tablespace with nothing else in the tablespace, you could have up to 128 TB * 64k = 8192 PB = 8 EB (exabytes). That's roughly 1,000 times all the data stored in the Library of Congress. If your table is that large, you've done something extraordinarily wrong.
Each version of the database has a couple of sections in the Oracle Reference on the physical limits and on the logical limits of the database that you can use to answer these sorts of questions.
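If the practical question is how much space a given table currently occupies, a quick sketch against the data dictionary (MY_TABLE is a placeholder):
SELECT segment_name, ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
FROM dba_segments
WHERE segment_name = 'MY_TABLE'  -- placeholder table name
GROUP BY segment_name;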

Is an Oracle tablespace the same as disk space, conceptually?

Oracle has the concept of a tablespace. Does a tablespace resemble actual physical storage space on the disk? If this is true, and I delete a million records from the database, will the space on the disk be freed up immediately?
No and no.
A tablespace is made up of one or more datafiles. A datafile comes closest to a disk. The data in a tablespace is striped over its datafiles. If you delete rows from a table, there will be free space in that table. If you then shrink that table, the free space in the tablespace grows, and the same happens in the supporting datafiles.
If you are lucky and your table's data was at the end of the datafile(s), you might also be able to shrink the datafile, after which you end up with more free space at the OS level.
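A minimal sketch of that shrink step (my_table is a hypothetical table name; a segment shrink requires row movement to be enabled first):
ALTER TABLE my_table ENABLE ROW MOVEMENT;  -- my_table is hypothetical
ALTER TABLE my_table SHRINK SPACE;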
Yes, a tablespace represents actual physical storage space on the disks.
No, if you delete a million records then no space will be freed.
A tablespace is a collection of datafiles on the disk that are pre-sized; by removing data from one of these datafiles you don't free up any space because you haven't reduced the size of the tablespace. It has a predetermined size that has not changed.
You can manually resize a datafile but only if the amount of data stored in that file (not the tablespace) is less than the size you want to reduce it to. To use Oracle's example syntax:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf'
RESIZE 100M;
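Before resizing, you can check the current sizes and autoextend settings of a tablespace's datafiles, for example (USERS is just an example tablespace name):
SELECT file_name, ROUND(bytes/1024/1024) AS mb, autoextensible
FROM dba_data_files
WHERE tablespace_name = 'USERS';  -- example tablespace name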
This is not something to be playing around with. There are often good reasons for the size of datafiles and tablespaces. You should talk to your DBA before doing anything.
