Loaded query tables either show an error on refresh or leave many blank rows between tables - powerquery

Dears,
I really need your help with this one as it's driving me crazy. I have 3 queries loaded as tables, one under another, in my Excel workbook.
When I choose the properties option "Insert cells for new data, delete unused cells", the refresh shows an error message (it would move cells in a table on your worksheet).
When I go for the second option, "Insert entire rows for new data", it refreshes but leaves many blank rows between each table and the next.
What should I do to refresh all the queries in the workbook without messing up the layout? I just need to leave one empty row between each loaded table.

Related

Create workflow first time insert then update

I'm using Informatica PowerCenter 9.1.0 and, to put it simply, I have two identical tables as source (table A) and target (table B). The columns are ID and EMAIL.
I need to make a workflow where, the very first time it runs, all the records are copied from table A to B.
Then, every day, I need to update in the target table B the rows modified in A (the email can change). If a record is deleted from the source table, I still want to see it in the target table.
I used these values:
Treat source rows as: "Insert"
Then, in the Mapping tab, I checked the attributes "Insert" and "Update as Update".
The first time, I get all the records in the target table, but if some emails change after a few days, I see no update. I still see the email that was inserted the first time.
I changed the value of "Treat source rows as" to "Update", but on the first run (table B is empty) it copies no rows.
Is it possible to have a workflow that inserts all the rows on the first run and then updates the records on subsequent runs, without changing the "Treat source rows as" value?
Select the option "Update else Insert" in the Mapping tab, and keep "Treat source rows as" set to "Update".

Oracle APEX automatic column add/remove in region

I have made a view whose columns change frequently, built with dynamic SQL. I use a pivot to turn rows into columns and display the view in an interactive grid. The SQL query that is executed is:
select * from <DB>.<VIEWNAME>
On refresh it updates the ROWS in the grid, but not the COLUMNS. The select * does not take column changes into account, BUT if I alter the SQL query by adding a space (or anything else) and then save the page in Page Designer, the columns sync up with the view.
Does someone know a good solution to my problem? Where can I find the procedure that executes this refresh? If I know where it is, I can possibly use it after the insertion of a column (or a delete/update). Any tips? Warning: I am a total novice in Oracle APEX and SQL Developer.
Thanks in advance!
This is the wrong way to go about this. In Apex, and in Oracle in general, columns are determined when the query is parsed. If you change the underlying structure, your query has to be reparsed and only then do the columns change.
Think about it. If the first column in your result set was a DATE and you had your Apex column attributes set up to format and display that data, and then your query changed that column to a NUMBER, it's not clear what would happen.
What you probably want to do is create your region based on a function that returns a SQL query as a VARCHAR2. (I think you can do this in 18.x; I'm still mostly using 5.2.) Your function gets parsed when the region is displayed. You can even use another function to return a colon-separated list of column headers if the names are dynamic.
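A minimal sketch of such a function (the function, schema, and view names here are placeholders; the region would use a source type that accepts a function returning a SQL query):
create or replace function get_grid_query
  return varchar2
is
begin
  -- the returned text is parsed when the region is displayed,
  -- so columns added to the view are picked up at that point
  return 'select * from my_schema.my_pivot_view';
end;
/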

unique constraint violated error performance

I am inserting millions of records into a table, an operation that will take hours or maybe a day. After 2 hours the connection from my PC was dropped, so I want to repeat the insert from the start.
My Question
Which is faster: truncating the table and repeating the insert from scratch, or creating a primary key and continuing, even though a 'unique constraint violated' error will be raised for every record that was already inserted during the first 2 hours?
Truncating the table (if it is a full refresh) is the best option, hands down. There is also a SKIP parameter if you use Oracle's SQL*Loader utility. Let me explain to some extent!
Also try loading the table with SQL*Loader using the DIRECT load option, which means loading the table by writing directly into the data blocks instead of using conventional INSERT statements.
With this kind of loading you can enable UNRECOVERABLE, which means little or no redo is written, so the load is very fast, more than 70% faster than conventional INSERTs.
The downside of this kind of loading is that ALL indexes on the table (NOT NULL constraints excepted) are made UNUSABLE before the load starts, and the data is then loaded. On successful completion, SQL*Loader tries to re-enable each index by rebuilding it. So if, for any reason, the load is interrupted, the error messages are logged properly and the indexes are left UNUSABLE.
More details on DIRECT/CONVENTIONAL loading: here.
Alternatively, SQL*Loader can use conventional loading, which means it generates the chunk of INSERTs from the file and processes them. In this type of loading, all the indexes are left as they are and the table remains unharmed.
If any error happens, SQL*Loader logs a SKIP value, which means that on the next run, if you specify that number, the table will be loaded from that point of the file.
More details on SQL*Loader: here.
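For illustration, a rough sketch of what the control file and command line might look like (the table, column, and file names are made up for this example, and skip=250000 stands in for whatever record count the previous run reached):

UNRECOVERABLE LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ','
(col1, col2)

sqlldr userid=scott/tiger control=load.ctl direct=true skip=250000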
Not sure how you are loading your table, but this is a classic situation where you should use an Oracle external table.
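For example, a minimal sketch of the external-table approach, assuming the data sits in a comma-separated file (all directory, table, and column names here are placeholders):

create or replace directory load_dir as '/u01/app/load';

create table staging_ext (
  id    number,
  email varchar2(200)
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('data.csv')
);

-- load in one pass; the APPEND hint requests a direct-path insert
insert /*+ append */ into target_table
select id, email from staging_ext;
commit;

If the load is interrupted, you can simply truncate target_table and re-run the INSERT ... SELECT.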

prevent last record in access table to be deleted

I made a login form for my Access database, but how do I prevent any user from deleting the last record?
Example: if there are two or more records in login_table, the user can delete all of the records except the last one.
There are many ways to do that:
1. Add a check in your server-side code that verifies whether only one record remains at the time the records are deleted.
2. Create a trigger on the table which prevents the user from deleting the last record.
Probably the easiest way to accomplish this in Access 2013 would be to create a "Before Delete" data macro that looks like this:
If DCount("*","Table1") < 2 Then
    RaiseError
        Error Number: 1
        Error Description: You cannot delete the last remaining record in this table.
End If
To create this data macro, open the table in Design View, then on the "Design" tab of the ribbon click "Create Data Macros" and choose "Before Delete". (Remember to replace "Table1" with the actual name of your table.)
The previous record is saved in the table so it can be deleted. The record being entered is not actually saved in the table until the form is closed or an action is taken to enter another record.
On an entry form I create a duplicate table with the same fields. The entry form places the data temporarily into the first table. Then I create two queries: one appends the data from the temporary table to the secondary table, and the second clears the first table, making it ready for new data entry. The record has to be saved before the two queries run, so I create a macro that performs the actions in sequence: 1. save the record, 2. copy the data to the second table, 3. clear the first table.
You will have better control over the data.
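A rough sketch of what those two queries could look like in Access SQL (the table and field names here are just placeholders):

Append query (copies the newly entered rows into the permanent table):
INSERT INTO tblLoginPermanent ( UserName, PasswordHash )
SELECT UserName, PasswordHash
FROM tblEntryTemp;

Delete query (clears the temporary entry table):
DELETE * FROM tblEntryTemp;

The macro then runs the save-record action followed by these two queries in order.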

create index before adding columns vs. create index after adding columns - does it matter?

In Oracle 10g, does it matter what order create index and alter table comes in?
Say I have a query Q with a WHERE clause on column C in table T. Now I perform one of the following scenarios:
1. Create index I(C) and then add columns X, Y, Z.
2. Add columns X, Y, Z, then create index I(C).
Q is 'select * from T where C = whatever'
Between 1 and 2, will there be a significant difference in the performance of Q on table T when T contains a very large number of rows?
I personally make it a practice to do #2 but others seem to have a different opinion.
thanks
It makes no difference if you add columns to a table before or after creating an index. The optimizer should pick the same plan for the query and the execution time should be unchanged.
Depending on the physical storage parameters of the table, it is possible that adding the additional columns and populating them with data may force quite a bit of row migration to take place. That row migration will generate changes to the indexes on the table. If the index exists when you are populating the three new columns with data, it is possible that populating the data in X, Y, and Z will take a bit longer because of the additional index maintenance.
If you add columns without populating them, then it is pretty quick as it is just a metadata change. Adding an index does require the table to be read (or potentially another index) so that can be very time consuming and of much greater impact than the simple metadata change of recording the new index details.
If the new columns are going to be populated as part of the ALTER TABLE, it is a different matter.
The database may undergo an unplanned shutdown during the course of adding that data to every row of the table.
The server memory may not have room to record every row changed in that table.
Therefore those row changes may be written to the datafiles before the commit, and are therefore written as dirty blocks.
The next read of those blocks, after the ALTER TABLE has successfully completed, will do a delayed block cleanout (i.e. record the fact that the change has been committed).
If you add the columns (with data) first, then the create index will (probably) read the table and do the added work of the delayed block cleanout.
If you create the index first and then add the columns, the create index may be faster, but the delayed block cleanout won't happen and that housekeeping will be picked up by the application later (potentially by the select * from T where C = whatever).
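For concreteness, a sketch of the two orderings being compared (object names follow the question; the DEFAULT values are only there to illustrate populating the new columns as part of the ALTER TABLE):

-- Scenario 1: create the index first, then add and populate the new columns
create index i_c on t (c);
alter table t add (x number default 0, y number default 0, z number default 0);

-- Scenario 2: add and populate the new columns first, then create the index
alter table t add (x number default 0, y number default 0, z number default 0);
create index i_c on t (c);

-- The query whose performance is being compared
select * from t where c = :val;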
