BigQuery: inserting rows, but nothing is written

I am inserting a list of rows using the tabledata().insertAll method of the bigquery object. The call returns no errors, yet my tables still contain no data.
Could this be a permissions problem? If so, why are no errors returned?
Thanks

This can happen if you do the insert right after deleting and re-creating the table.
The streaming buffer of a deleted table is not discarded at the moment the table is deleted, so new inserts can still be delivered to this old, orphaned buffer.
From the BigQuery documentation:
deleting and/or recreating a table may create a period of time where streaming inserts are effectively delivered to the old table and will not be present in the newly created table.
And in this case, no errors would be returned.
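If you need to repopulate a table right after recreating it, one way to sidestep the streaming buffer entirely is to use a DML INSERT instead of tabledata().insertAll, since DML runs through the query engine rather than the streaming buffer. A minimal sketch in standard SQL, with hypothetical dataset, table, and column names:

-- Hypothetical names; a DML INSERT is not routed through the streaming
-- buffer, so it is unaffected by the delete/recreate window described above.
INSERT INTO `mydataset.mytable` (id, name)
VALUES (1, 'first row'),
       (2, 'second row');

Recreating the table under a different name also avoids the stale buffer, since old buffered rows can no longer be matched to the new table.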
References:
https://cloud.google.com/bigquery/troubleshooting-errors#metadata-errors-for-streaming-inserts
https://github.com/GoogleCloudPlatform/google-cloud-php/issues/871#issuecomment-361339800
https://cloud.google.com/bigquery/streaming-data-into-bigquery

Is there any time limit on the records in the user_tab_modifications table?

I'm using the user_tab_modifications table to monitor changes to all of my tables in the DB, but sometimes the records disappear.
For example, I updated the data in table A and ran the following SQL to flush user_tab_modifications so that I could see the latest information there.
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
Then
SELECT * FROM USER_TAB_MODIFICATIONS;
So I can see the record for table A in there.
But then I found that the record for table A disappeared after about a minute, even though I didn't do anything in Oracle.
(The other records in user_tab_modifications do not change; no problems there.)
Why is that, and is there a setting I can change to make sure those records do not disappear? Thank you.
From the documentation:
USER_TAB_MODIFICATIONS describes modifications to all tables owned by the current user that have been modified since the last time statistics were gathered on the tables.
You might want to check whether a statistics-gathering process ran in the background on the table in question between the time the changes were made and the time you saw the record disappear.
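One quick way to check (a sketch, assuming the table really is named A) is to see whether LAST_ANALYZED advanced around the time the row vanished:

-- If LAST_ANALYZED moved forward at about the time the
-- USER_TAB_MODIFICATIONS row disappeared, a background
-- statistics-gathering job is the likely culprit.
SELECT table_name,
       TO_CHAR(last_analyzed, 'YYYY-MM-DD HH24:MI:SS') AS last_analyzed
FROM   user_tables
WHERE  table_name = 'A';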

How to safely update a Hive external table

I have an external Hive table and I would like to refresh the data files on a daily basis. What is the recommended way to do this?
If I just overwrite the files, and we are unlucky enough to have other Hive queries executing in parallel against this table, what will happen to those queries? Will they just fail? Will my HDFS operations fail? Or will they block until the queries complete?
If availability is a concern and space isn't an issue, you can do the following (a code sketch follows the steps):
Make a synonym for the external table. Make sure all queries use this synonym when accessing the table.
When loading new data, load it to a new table with a different name.
When the load is complete, point the synonym to the newly loaded table.
After an appropriate length of time (long enough for any running queries to finish), drop the previous table.
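Hive has no synonym object as such; a view is the usual stand-in. A sketch of the steps above, with hypothetical, date-suffixed table names:

-- One-time setup: all readers query the view, never the backing tables.
CREATE VIEW sales_current AS SELECT * FROM sales_20240101;

-- Daily refresh: load into a fresh table, then repoint the view.
CREATE TABLE sales_20240102 LIKE sales_20240101;
-- ... load the new data files into sales_20240102 ...
ALTER VIEW sales_current AS SELECT * FROM sales_20240102;

-- Once in-flight queries have had time to finish, drop the old copy.
DROP TABLE sales_20240101;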
First of all, any access to a table may take one of two types of locks:
an exclusive lock (while data is being written) and a shared lock (while data is being read).
So while an INSERT OVERWRITE is adding data to the table, any other queries against it won't execute, because the insert holds an exclusive lock on it; once the INSERT OVERWRITE completes, the table can be accessed again.
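Assuming concurrency support is enabled (hive.support.concurrency=true), you can see these locks for yourself while the insert runs; the table name here is hypothetical:

-- An INSERT OVERWRITE shows up as EXCLUSIVE, plain SELECTs as SHARED.
SHOW LOCKS my_table;
SHOW LOCKS my_table EXTENDED;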
Please refer to the following link:
https://cwiki.apache.org/confluence/display/Hive/Locking

ORA-08103: object no longer exists - insert query fails

I'm encountering this error multiple times, but it appears to be random.
I perform an INSERT query where I attempt to insert a BLOB file into a designated table.
I do not know if there's a connection between the BLOB and the error.
Worth mentioning that the table is partitioned.
Here is the complete query:
INSERT INTO COLLECTION_BLOB_T
(OBJINST_ID, COLINF_ID, COLINF_PARTNO, BINARY_FILE_NAME, BINARY_FILE_SIZE, BINARY_FILE)
VALUES (:p1, :p2, :p3, :p4, :p5, EMPTY_BLOB());
This is the only INSERT/UPDATE into this table in the entire application.
So I doubt that any other query is locking it, and the error is not about a locked resource.
What can be the cause?
As I've mentioned, this appears to occur randomly.
Thanks in advance.
As I've mentioned, the table is partitioned; the partitions change between midnight and 3:00 AM, and in some instances that is when the error occurs.
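That timing fits the usual cause of ORA-08103: DDL that drops or rebuilds a segment while another session still references the old data object. A hedged sketch of the collision, with a hypothetical nightly maintenance job:

-- Session 1 (application): executes the INSERT above, holding a cursor
-- that references the current data object of the target partition.
-- Session 2 (hypothetical nightly job, midnight to 3:00 AM):
ALTER TABLE collection_blob_t DROP PARTITION p_old;
ALTER TABLE collection_blob_t ADD PARTITION p_new
  VALUES LESS THAN (TO_DATE('2024-01-03', 'YYYY-MM-DD'));
-- If session 1's cursor still points at the dropped segment, its next
-- execution can raise ORA-08103: object no longer exists.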

Oracle Clear Cached Sequence

I have a very simple table that has an ID (generated by a sequence) and a NAME. I inserted a couple of rows, which got cached, but after a while I wanted to remove them because I wanted to redo my table, so I issued a couple of DELETE statements to remove all the records (I don't have the privileges to do a TRUNCATE).
After deleting the old rows, I again inserted a couple other records but I didn't bother resetting the sequence.
In PHP, when I SELECT everything from that table, I still get the old deleted rows. But in PL/SQL, when I SELECT from that table, it only shows me the new records.
Is the problematic cache on the PHP side or the Oracle side? If it's on the Oracle side, how do I clear it out?
Thanks!

How to update/insert a table without creating a new table (temporary or otherwise)

Background: My team has an ETL job that updates an aggregate table. Each row contains data for a particular date, but a row can and will get updated after that date (which means any row can contain data from multiple job runs). This ETL job missed some data for one day last week and now I need to backfill it.
Problem: I have the missing data, and my plan was to dump it into a temporary table and then merge it with the agg table. That way I can handle both the case where the agg table already contains a row for that date (update) and the case where a new row needs to be added (insert). But I don't have sufficient permissions to create a temp table, and I'd prefer not to involve the DBA.
Question: Can I get insert/update behavior without creating a temporary table? (This is Oracle SQL, by the way.)
Edit: The data is coming from a tsv file.
Why do you want to avoid involving the DBA? The DBA should have full knowledge of what's going on in the database, as they are ultimately responsible for the condition of the data within it. So you shouldn't be playing sneaky commando with them.
As you have a file of missing data, the easiest way to present it to the database is with an external table. This requires the creation of the table and probably a directory object as well. You will need the DBA's help with this task.
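For reference, the external-table route looks roughly like this (a sketch with hypothetical names; creating the directory object and the table is precisely the part that needs the DBA):

-- Requires CREATE ANY DIRECTORY or a DBA: maps a server-side path.
CREATE DIRECTORY etl_dir AS '/data/incoming';

-- External table over the TSV; columns kept as text, converted at query time.
CREATE TABLE missing_day_ext (
  row_date   VARCHAR2(10),
  key_col    VARCHAR2(30),
  metric_val VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY etl_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY 0X'09'      -- tab
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('missing_day.tsv')
)
REJECT LIMIT UNLIMITED;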
The only way to avoid creating database objects is to convert your TSV file into a series of DML statements. An IDE which supports regex and/or can record macros will prove invaluable here. I like TextPad; other editors are available.
The DML statement for doing upserts in Oracle is the MERGE statement. The one thing you need to watch for is recency. Your missing data comes from last week. If a row already exists, it may have been added or amended in the intervening period. You must write your MERGE statement so it does not overwrite more recent data with the older stuff. Hopefully your table has useful metadata columns such as DATE_CREATED and LAST_UPDATED.
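A sketch of one such generated statement, with hypothetical table and column names: each TSV line becomes a MERGE sourced from dual, and the extra predicate on LAST_UPDATED keeps the older backfill from clobbering anything amended since:

MERGE INTO agg_table t
USING (SELECT DATE '2024-01-05' AS row_date,   -- values pasted from one TSV line
              42                AS key_col,
              100               AS metric_val
       FROM dual) s
ON (t.row_date = s.row_date AND t.key_col = s.key_col)
WHEN MATCHED THEN
  UPDATE SET t.metric_val   = s.metric_val,
             t.last_updated = SYSDATE
  WHERE  t.last_updated < DATE '2024-01-06'    -- don't overwrite newer data
WHEN NOT MATCHED THEN
  INSERT (row_date, key_col, metric_val, date_created, last_updated)
  VALUES (s.row_date, s.key_col, s.metric_val, SYSDATE, SYSDATE);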
