Where does an external table retain data? [Greenplum DB] - greenplum

When I query an external table whose LOCATION is HDFS, I don't understand where Greenplum retains the data (including temporary and cached data used during processing).
Is there any rule for how data is held in Greenplum?
For instance:
1. A lot of data: GP's HDD
2. A little data: GP's memory
3. It is not retained in GP at all; GP just displays it.
4. etc.

Data from external tables is only held in memory, not stored on disk (except temporarily if swapping is required). If you want to store the data permanently, you can use:
INSERT INTO <internal table> SELECT * FROM <external table>;
Future queries can then use the internal table for better performance.
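As a concrete sketch (the table names, the HDFS path, and the gphdfs protocol are assumptions for illustration; newer Greenplum releases use the pxf protocol instead), materializing external data into a regular table could look like this:
-- hypothetical external table over an HDFS path; Greenplum only reads it at query time
CREATE EXTERNAL TABLE ext_sales (id int, amount numeric)
LOCATION ('gphdfs://namenode:8020/data/sales')
FORMAT 'TEXT' (DELIMITER '|');
-- copy the rows into a regular table stored on the Greenplum segments
CREATE TABLE sales (id int, amount numeric) DISTRIBUTED BY (id);
INSERT INTO sales SELECT * FROM ext_sales;
After the INSERT, the data lives on the segments' disks like any other table, and queries against sales no longer touch HDFS.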

Related

In Greenplum where is data temporarily inserted into tables within a function actually stored?

Our project has a database function that stores millions of records from queries into intermediate tables as part of its steps, and then joins those intermediate tables to get the final results, which are stored in a final table. The intermediate tables are standard database tables in the schema (our security team does not allow tables to be created dynamically), and the intermediate tables are truncated before the function returns. When determining the amount of database storage space that our project needs, how do these intermediate tables figure in? Do I need to plan for the worst case of how much data is stored in the final table plus these intermediate tables while the functions are executing, or, since the data in the intermediate tables does not remain once the function returns, can I leave that data out when calculating the overall storage needs? I know that those millions of records are stored somewhere before the table is truncated and the function returns, but I was looking for a definitive answer.
Thanks in advance
You should plan for the worst case of the intermediate table size. An intermediate table is a regular table and stores data the same way as your target table: once you have inserted data into it, that data resides on the same HDDs (depending on filespace settings, but usually they are the same).
When you truncate the table, its data files are removed from the filesystem and the table metadata is updated in pg_class.
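If you want to see this for yourself, one way (a sketch; intermediate_results is a hypothetical table name) is to watch the relfilenode in pg_class, which identifies the table's data files on disk and is replaced by TRUNCATE:
SELECT relname, relfilenode FROM pg_class WHERE relname = 'intermediate_results';
TRUNCATE TABLE intermediate_results;
-- the relfilenode has changed: the old data files are unlinked and empty ones take their place
SELECT relname, relfilenode FROM pg_class WHERE relname = 'intermediate_results';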

Oracle PL/SQL Table memory usage vs temporary tables

I am incrementally pulling records from a Web Service. I need to set the status of any records in our local table that were not returned by the Web Service to 'DELETED';
I had intended to store the complete list of record IDs in a PL/SQL table and then perform a single UPDATE statement based on that.
I now estimate the memory usage of the record set to be ~615MB. How does the memory footprint of a PL/SQL table compare to using a global temporary table instead? Would it use a different part of Oracle's memory, PGA vs SGA for instance?
Performance isn't a major concern because this nightly job already runs in Production in an acceptable amount of time. I don't believe adding the 'DELETED' status piece will increase the run duration enough to affect users.
A global temporary table would not use memory in that way. It stores the values in a temporary segment to which only your session has access, and which is dropped when no longer needed.
Have you considered what happens if your session disconnects? In either method you lose the values you have accumulated. You might like to just use a regular table.
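A rough sketch of the global temporary table approach (ws_returned_ids, local_records, and the column names are made up for illustration):
-- session-private staging table; rows live in a temporary segment, not in PGA memory
CREATE GLOBAL TEMPORARY TABLE ws_returned_ids (
  record_id NUMBER PRIMARY KEY
) ON COMMIT PRESERVE ROWS;
-- after bulk-loading the IDs returned by the web service:
UPDATE local_records lr
   SET lr.status = 'DELETED'
 WHERE NOT EXISTS (SELECT 1 FROM ws_returned_ids w WHERE w.record_id = lr.record_id);
Note that the CREATE is a one-time DDL step; each session simply inserts into and reads its own private copy of the rows.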

Difference between Hive internal tables and external tables?

Can anyone tell me the difference between Hive's external tables and internal tables?
I know the difference shows up when dropping the table, but I don't understand what is meant by "the data and metadata are deleted for internal tables, and only the metadata is deleted for external tables".
Can anyone explain it to me in terms of nodes, please?
Hive has a relational database on the master node that it uses to keep track of state.
For instance, when you run CREATE TABLE FOO(foo string) LOCATION 'hdfs://tmp/';, the table schema is stored in that database.
If you have a partitioned table, the partitions are stored in the database as well (this allows Hive to use the list of partitions without going to the filesystem to find them, etc.). These sorts of things are the 'metadata'.
When you drop an internal table, it drops the data, and it also drops the metadata.
When you drop an external table, it only drops the metadata. That means Hive is now ignorant of that data. It does not touch the data itself.
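A minimal sketch of that behaviour (the table names and the HDFS path are hypothetical):
-- managed (internal) table: data files go under the warehouse directory
CREATE TABLE logs_managed (msg STRING);
-- external table: data stays at the given path, Hive only records the schema/location
CREATE EXTERNAL TABLE logs_external (msg STRING)
LOCATION 'hdfs:///data/logs';
DROP TABLE logs_managed;    -- removes the metadata AND the files under the warehouse dir
DROP TABLE logs_external;   -- removes the metadata only; hdfs:///data/logs is untouched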
Hive tables can be created as EXTERNAL or INTERNAL. This is a choice that affects how data is loaded, controlled, and managed.
Use EXTERNAL tables when:
The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn't lock the files.
Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schemas (tables or views) at a single data set or if you are iterating through various possible schemas.
You want to use a custom location such as ASV.
Hive should not own the data or control settings, directories, etc.; you have another program or process that will do those things.
You are not creating a table based on an existing table (AS SELECT).
Use INTERNAL tables when:
The data is temporary.
You want Hive to completely manage the lifecycle of the table and data.
To answer your question:
For external tables, Hive stores the data in the LOCATION specified during creation of the table (generally not in the warehouse directory). If the external table is dropped, the table metadata is deleted but not the data.
For internal tables, Hive stores the data in its warehouse directory. If the table is dropped, then both the table metadata and the data are deleted.
For your reference, the difference between internal and external tables:
For external tables -
An external table stores its files on the HDFS server, but the table is not completely tied to the source file.
If you delete an external table, the file still remains on the HDFS server.
For example, if you create an external table called “table_test” in Hive using Hive QL and link the table to file “file”, then deleting “table_test” from Hive will not delete “file” from HDFS.
External table files are accessible to anyone who has access to the HDFS file structure, so security needs to be managed at the HDFS file/folder level.
Metadata is maintained on the master node, and deleting an external table from Hive only deletes the metadata, not the data/file.
For internal tables -
Internal tables are stored in a directory based on the hive.metastore.warehouse.dir setting; by default this is “/user/hive/warehouse”. You can change it by updating the location in the config file.
Deleting the table deletes the metadata and the data from the master node and HDFS respectively.
Internal table file security is controlled solely via Hive, so security needs to be managed within Hive, probably at the schema level (depending on the organization).
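If you want to check where a particular table's data actually lives, and what the warehouse directory is currently set to, something like the following works in the Hive shell (my_table is a placeholder name):
-- the "Location:" field in the output shows the table's HDFS directory
DESCRIBE FORMATTED my_table;
-- print the current warehouse directory (configured in hive-site.xml)
SET hive.metastore.warehouse.dir;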
Hive may have internal or external tables; this is a choice that affects how data is loaded, controlled, and managed.
Use EXTERNAL tables when:
The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn't lock the files.
Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schemas (tables or views) at a single data set or if you are iterating through various possible schemas.
Hive should not own the data or control settings, directories, etc.; you may have another program or process that will do those things.
You are not creating a table based on an existing table (AS SELECT).
Use INTERNAL tables when:
The data is temporary.
You want Hive to completely manage the life-cycle of the table and data.
Sources:
HDInsight: Hive Internal and External Tables Intro
Internal & external tables in Hadoop- HIVE
An internal table's data is stored in the warehouse folder, whereas an external table's data is stored at the location you specified during table creation.
So when you delete an internal table, it deletes the schema as well as the data under the warehouse folder, but for an external table it's only the schema that you lose.
So if you want an external table back after deleting it, you can create a table with the same schema again and point it to the original data location. Hope it is clear now.
The only difference in behaviour (not the intended usage), based on my limited research and testing so far (using Hive 1.1.0-cdh5.12.0), seems to be that when a table is dropped
the data of the Internal (Managed) tables gets deleted from the HDFS file system
while the data of the External tables does NOT get deleted from the HDFS file system.
(NOTE: See the section 'Managed and External Tables' in https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL, which lists some other differences that I did not completely understand.)
I believe Hive chooses the location where it needs to create the table based on the following precedence from top to bottom
Location defined during the Table Creation
Location defined in the Database/Schema Creation in which the table is created.
Default Hive warehouse directory (property hive.metastore.warehouse.dir in hive-site.xml)
When the "Location" option is not used during the "creation of a hive table", the above precedence rule is used. This is applicable for both Internal and External tables. This means an Internal table does not necessarily have to reside in the Warehouse directory and can reside anywhere else.
Note: I might have missed some scenarios, but based on my limited exploration, the behaviour of both Internal and External tables seems to be the same except for the one difference (data deletion) described above. I tried the following scenarios for both Internal and External tables.
Creating table with and without Location option
Creating table with and without Partition Option
Adding new data using the Hive Load and Insert Statements
Adding data files to the table location outside of Hive (using HDFS commands) and refreshing the table using the "MSCK REPAIR TABLE" command
Dropping the tables
For external tables, if you drop the table, it deletes only the schema of the table; the table data still exists in its physical location. So to delete the data, use hadoop fs -rmr on the table's data path.
For managed tables, Hive has full control over the table. For external tables, the users keep control over the data.
INTERNAL: the table is created first and the data is loaded later.
EXTERNAL: the data is already present and the table is created on top of it.
Internal tables are useful if you want Hive to manage the complete lifecycle of your data including the deletion, whereas external tables are useful when the files are being used outside of Hive.
An external Hive table has the advantage that it does not remove the files when we drop the table; we can also set row formats with different settings, like SERDE or DELIMITED.
Also keep in mind that Hive is a big data warehouse. When you want to drop a table you don't want to lose gigabytes or terabytes of data. Generating, moving and copying data at that scale can be time consuming.
When you drop a 'managed' table, Hive will also trash its data.
When you drop an 'external' table, only the schema definition is removed from the Hive metastore. The data on HDFS still remains.
Consider this scenario, which best suits an external table:
A MapReduce (MR) job filters a huge log file to spit out n sub log files (e.g. each sub log file contains a specific message type), and the output, i.e. the n sub log files, is stored in HDFS.
These log files are to be loaded into Hive tables for further analytics. In this scenario I would recommend external table(s), because the actual log files are generated and owned by an external process (the MR job); besides, you can avoid the additional step of loading each generated log file into the respective Hive table.
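For example (the schema, delimiter, and output path are assumptions about what such an MR job might produce), one of the sub log files could be exposed like this:
-- external table over the directory where the MR job wrote one message type
CREATE EXTERNAL TABLE error_logs (
  log_ts  STRING,
  message STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 'hdfs:///output/mr-job/error-logs';
No LOAD step is needed; as soon as the table is created, the files already sitting in that directory are queryable.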
The best use case for an external table in Hive is when you want to create the table from an existing file, such as a CSV or text file.
Both internal and external tables are Hive tables; the only difference is the ownership of the data. The commands for creating both kinds of tables are shown below; only an additional EXTERNAL keyword is needed for external table creation. Both can be created/deleted/modified using SQL statements.
In the case of internal tables, both the table and the data contained in it are managed by Hive. That is, we can add/delete/modify any data using Hive. When we DROP the table, the data is deleted along with it.
E.g.: CREATE TABLE tweets (text STRING, words INT, length INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
In the case of external tables, only the table is managed by Hive. The data present in these tables can come from any storage location, such as HDFS. We can't add/delete/modify the data in these tables; we can only use it via SELECT statements. When we DROP the table, only the table gets deleted and not the data contained in it. This is why it's said that only the metadata gets deleted. When we create EXTERNAL tables, we need to specify the location of the data.
Eg: CREATE EXTERNAL TABLE tweets (text STRING, words INT, length INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/user/hive/warehouse/tweets';
Hive stores only the metadata in the metastore and the original data outside of Hive. When we use an external table we can specify a LOCATION ''; because of this, our original data won't be affected when we drop the table.
When there is data already in HDFS, an external Hive table can be created to describe the data. It is called EXTERNAL because the data in the external table is specified in the LOCATION properties instead of the default warehouse directory.
When keeping data in the internal tables, Hive fully manages the life cycle of the table and data. This means the data is removed once the internal table is dropped. If the external table is dropped, the table metadata is deleted but the data is kept. Most of the time, an external table is preferred to avoid deleting data along with tables by mistake.
For managed tables, Hive controls the lifecycle of their data. Hive stores the data for managed tables in a sub-directory under the directory defined by hive.metastore.warehouse.dir by default.
When we drop a managed table, Hive deletes the data in the table. But managed tables are less convenient for sharing with other tools. For example, let's say we have data that is created and used primarily by Pig, and we want to run some queries against it without giving Hive ownership of the data.
In that case, an external table is defined that points to that data but doesn't take ownership of it.
In Hive We can also create an external table. It tells Hive to refer to the data that is at an existing location outside the warehouse directory.
Dropping External tables will delete metadata but not the data.
I would like to add that internal tables are used when the data needs to be updated or some rows need to be deleted, because ACID properties are supported on internal tables but not on external tables.
Please ensure that there is a backup of the data in an internal table, because if an internal table is dropped then the data will also be lost.
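For reference, a sketch of a transactional (ACID) managed table; the exact requirements vary by Hive version (older versions also require bucketing and the transaction manager to be enabled), and the table and column names here are made up:
-- ACID requires a managed table stored as ORC with transactional=true
CREATE TABLE customer_status (
  id     INT,
  status STRING
)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
UPDATE customer_status SET status = 'INACTIVE' WHERE id = 42;
DELETE FROM customer_status WHERE status = 'INACTIVE';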
In simple words, there are two things:
Hive manages things inside its warehouse, i.e. it will not delete data outside the warehouse.
When we delete a table:
1) For internal tables, the data is managed internally in the warehouse, so it will be deleted.
2) For external tables, the data is managed externally, outside the warehouse, so it can't be deleted, and clients other than Hive can also use it.

How to create partition on remote tablespace in Oracle?

I'm a noob in Oracle. Is it possible to partition a table onto a remote server using a db link? Is it possible at all?
I'm trying something like this:
CREATE TABLE Test (
  TestID integer not null,
  Name varchar2(20) not null
)
PARTITION BY LIST (TestID)
( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
  PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer );
thank you
No, that will not work. You're confusing an instance with a database.
A database is the physical storage of the data and metadata. The number of disks you use and the location of the disks are yours to manage. You can put indexes in one place and data in another; you can put some data on local drives and some on mounted drives. That's a database.
An Instance is the memory structures and computer processes which access a database and make it possible to query it, write to it, update it, etc.
When you say @DB_LINK... you're saying "That set of memory and CPU processes".
When you say Tablespace, you're saying "On those files, on those disks"
If you want to store data on the same drives where your @dblink target stores its data, then mount that drive and build a new tablespace there.
If you're trying to OPEN the database with more than one instance, that's called RAC and it's a bit over your head. <-- I say this because you have to have these concepts mastered before you'd ever consider RAC.
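A sketch of the mount-and-tablespace suggestion (the mount point and sizes are placeholders): once the remote volume is mounted locally, you create an ordinary tablespace on it and point the partition at that tablespace instead of at a db link:
-- tablespace whose datafile lives on the remotely mounted volume
CREATE TABLESPACE tbspc_remote
  DATAFILE '/mnt/remote_storage/tbspc_remote01.dbf' SIZE 500M AUTOEXTEND ON;
CREATE TABLE Test (
  TestID integer not null,
  Name varchar2(20) not null
)
PARTITION BY LIST (TestID)
( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
  PARTITION testPart2 VALUES (2) TABLESPACE tbspc_remote );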
It's hard to say that something is impossible, but based on the syntax diagram for CREATE TABLE this doesn't look possible. In the SELECT statement you can see the dblink syntax (@dblink): http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_10002.htm#i2126073 But for partitioning storage there is no such remote syntax: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#CJADDEEH
Perhaps the following would be a reasonable starting point:
Have a database at the "home office" to contain the common data.
Have a "local" database at each branch to store branch-specific data, with links to the "home office" database to access the common data.
To help eliminate the "single point of failure" which could occur if the central database was to go down or communications were to be lost, you might try replicating the common data from the central database to the branch databases so that each branch has a complete copy of the common data which could be updated on some sort of regular schedule.
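A minimal sketch of that replication idea, assuming a hypothetical db link named home_office and a shared table common_data (a complete refresh once a day; fast refresh and the exact schedule would depend on your setup):
-- link from the branch database to the home-office database
CREATE DATABASE LINK home_office
  CONNECT TO app_user IDENTIFIED BY app_pwd USING 'HOMEOFFICE';
-- local, periodically refreshed copy of the shared data
CREATE MATERIALIZED VIEW common_data_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
AS SELECT * FROM common_data@home_office;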
Share and enjoy.

PostgreSQL temporary tables

I need to perform a query 2.5 million times. This query generates some rows which I need to AVG(column) and then use this AVG to filter the table from all values below average. I then need to INSERT these filtered results into a table.
The only way to do such a thing with reasonable efficiency seems to be by creating a TEMPORARY TABLE for each query-postmaster python-thread. I am just hoping these TEMPORARY TABLEs will not be persisted to the hard drive (at all) and will remain in memory (RAM), unless they run out of working memory, of course.
I would like to know if a TEMPORARY TABLE will incur disk writes (which would interfere with the INSERTs, i.e. slow the whole process down).
Please note that, in Postgres, the default behaviour for temporary tables is that they are not automatically dropped, and data is persisted on commit. See ON COMMIT.
Temporary table are, however, dropped at the end of a database session:
Temporary tables are automatically dropped at the end of a session, or
optionally at the end of the current transaction.
There are multiple considerations you have to take into account:
If you do want to explicitly DROP a temporary table at the end of a transaction, create it with the CREATE TEMPORARY TABLE ... ON COMMIT DROP syntax.
In the presence of connection pooling, a database session may span multiple client sessions; to avoid clashes in CREATE, you should drop your temporary tables -- either prior to returning a connection to the pool (e.g. by doing everything inside a transaction and using the ON COMMIT DROP creation syntax), or on an as-needed basis (by preceding any CREATE TEMPORARY TABLE statement with a corresponding DROP TABLE IF EXISTS, which has the advantage of also working outside transactions e.g. if the connection is used in auto-commit mode.)
While the temporary table is in use, how much of it will fit in memory before overflowing on to disk? See the temp_buffers option in postgresql.conf
Anything else I should worry about when working often with temp tables? A VACUUM is recommended after you have DROPped temporary tables, to clean up any dead tuples from the catalog. Postgres will automatically vacuum every 3 minutes or so for you when using the default settings (autovacuum).
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, then it is a good idea to create appropriate indices and issue an ANALYZE on the temp table in question after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows and this may result in poor performance should the temp table actually contain millions of rows.
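Putting the above together, a sketch of one worker's transaction might look like this (source_table, results, the batch filter, and some_column are placeholders for your actual query):
BEGIN;
-- scratch table, private to this session, dropped automatically at COMMIT
CREATE TEMPORARY TABLE tmp_rows ON COMMIT DROP AS
  SELECT * FROM source_table WHERE batch_id = 42;
-- give the planner real row counts before querying the temp table again
ANALYZE tmp_rows;
INSERT INTO results
SELECT * FROM tmp_rows
WHERE some_column >= (SELECT AVG(some_column) FROM tmp_rows);
COMMIT;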
Temporary tables provide only one guarantee - they are dropped at the end of the session. For a small table you'll probably have most of your data in the backing store. For a large table I guarantee that data will be flushed to disk periodically as the database engine needs more working space for other requests.
EDIT:
If you're absolutely in need of RAM-only temporary tables, you can create a tablespace for your database on a RAM disk (/dev/shm works). This reduces the amount of disk IO, but beware that it is currently not possible to do this without a physical disk write; the DB engine will flush the table list to stable storage when you create the temporary table.
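If you do go that route, a sketch (the directory is an example and must already exist with the right ownership; also note that a tablespace on /dev/shm will not survive a reboot, which can leave the catalog referencing a missing directory):
-- tablespace backed by a RAM disk
CREATE TABLESPACE ramspace LOCATION '/dev/shm/pg_ram_tablespace';
-- have this session place temporary tables (and temp spill files) in it
SET temp_tablespaces = 'ramspace';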
