Insert into not working but query does [closed] - oracle

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
My issue is that I have an INSERT INTO inside a stored procedure, and sometimes it fails with "No data found", even though I know there is data that can be selected with that criteria.
The weird part is that I'm sure the data is there: I use dbms_output.put_line to print the query to the console with the values of the variables substituted in, so I know it is exactly the same query the stored procedure executes, and if I run the printed query it does return data.
Any idea what's happening?
Thank you.

I managed to find what was happening. Oracle seemed to have trouble interpreting the dates: the query inside the procedure was receiving some dates and, even though Oracle itself had generated them, the implicit conversion did not behave as expected. So I used
TO_CHAR(b, 'YYYY/MM/DD') to compare the dates at the start of the query, and the problem was solved.
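For illustration, a minimal sketch of that workaround (the ORDERS table, ORDER_DATE column and the literal date are made-up names, not the original code):

DECLARE
  p_from_date DATE := DATE '2015-06-01';   -- the date value the procedure receives
  v_count     PLS_INTEGER;
BEGIN
  -- compare both sides through an explicit format mask instead of relying
  -- on implicit DATE conversion / session NLS settings
  SELECT COUNT(*)
    INTO v_count
    FROM orders
   WHERE TO_CHAR(order_date, 'YYYY/MM/DD') = TO_CHAR(p_from_date, 'YYYY/MM/DD');
  DBMS_OUTPUT.PUT_LINE('rows found: ' || v_count);
END;
/

Comparing with TRUNC(order_date) = TRUNC(p_from_date) is another common way to ignore the time component.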

Related

running an oracle sql command without waiting for result [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I have an Oracle database which I am accessing from Delphi with an ODAC component.
I would like to populate a table using a SELECT statement and don't want to wait for the SQL to complete before moving on to the next Delphi command.
I have tried using TOraSQL with non-blocking set to true, but although the program moves on without any delay, the SQL doesn't populate the table. Any ideas?
I don't have any Delphi-related ideas (as I don't know it), but, as far as Oracle is concerned, you could:
put that code into a stored procedure
schedule a job (using DBMS_SCHEDULER, or the older but simpler DBMS_JOB) from Delphi to run right now, as sketched below
the job (i.e. the procedure) would run in the background, while ...
... your Delphi code would go on
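A minimal sketch of that idea, assuming the long-running INSERT ... SELECT has been wrapped in a stored procedure called LOAD_MY_TABLE (a made-up name); Delphi only needs to execute this short anonymous block, and the job then runs in the background:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'LOAD_MY_TABLE_JOB',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'LOAD_MY_TABLE',
    start_date => SYSTIMESTAMP,   -- start right away
    enabled    => TRUE,
    auto_drop  => TRUE);          -- drop the job definition once it has run
END;
/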

Export all the tables at once with data from oracle sql developer [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I want to export all 100 tables with data from one schema at once from Oracle SQL Developer, the same way we export one table and it gets saved where we want as an Excel file. Is there any way to do this, instead of exporting one table at a time with data?
There's Data Pump Export (and Import) which does that. However, the result is a DMP file, which is certainly not recognizable by Excel; think of it as a binary file readable only by Data Pump Import.
So, if you want Excel files (actually, CSV format), you'll have to either export them one by one (what a tedious job!) or write your own PL/SQL procedure using the UTL_FILE package. Note that (generally speaking) the result resides in a directory located on the database server, not your local PC, so you'll have to talk to your DBA about it. It shouldn't be a problem (in my opinion); you should be granted read/write access to a directory designated for such purposes.
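A very rough sketch of such a procedure for a single table, assuming a directory object named EXP_DIR already exists and you have read/write on it (table and column names are just examples); looping over USER_TABLES with dynamic SQL would extend it to all 100 tables:

DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  -- open the target CSV file on the database server
  v_file := UTL_FILE.FOPEN('EXP_DIR', 'employees.csv', 'w', 32767);
  FOR r IN (SELECT employee_id, last_name, salary FROM employees) LOOP
    UTL_FILE.PUT_LINE(v_file,
      r.employee_id || ',' || r.last_name || ',' || r.salary);
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
END;
/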
Tools, Database Export
Select your tables. Select your output method (Excel), hit go.
Bigger question, what are you gonna do with these 100 Excel files?
Also, how big are these tables? Exporting to CSV might be better, but again we don't know why you want Excel files...
Finally, if you want to take this data and use it to put in another Oracle Database at some point, you should be using Data Pump.
You can try writing a scheduler for this task using PL/SQL.
Use the Oracle documentation for help.

Creating event-driven SQL scripts [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I am creating a database that stores GPS data. As soon as the database is updated with a data point, I want the server to check whether that point is within a certain area and send a message or update another database (I haven't decided what action it should take yet). Is this kind of event-driven operation possible in PL/SQL? I am only familiar with passive querying and running scheduled scripts.
Yes, there is such a feature: database triggers. On insert or update of the data (actually there are many more event types) you can check whether some conditions are met and call a PL/SQL procedure to handle the event.
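A minimal sketch of such a trigger, assuming a GPS_POINTS table with ID, LATITUDE and LONGITUDE columns and an ALERTS table to record matches (all names and the bounding box are made up):

CREATE OR REPLACE TRIGGER trg_gps_point_check
AFTER INSERT ON gps_points
FOR EACH ROW
BEGIN
  -- check whether the new point falls inside the watched rectangle
  IF :NEW.latitude BETWEEN 40.0 AND 41.0
     AND :NEW.longitude BETWEEN -74.5 AND -73.5 THEN
    INSERT INTO alerts (point_id, created_at, message)
    VALUES (:NEW.id, SYSDATE, 'Point inside watched area');
  END IF;
END;
/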

using materialised views to fix bugs and reduce code [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The application I'm working on has a legacy problem: two tables, ADULT and CHILD, were created in an Oracle 11g DB.
This has led to a number of related tables that have a field for both ADULT and CHILD, with no FK applied.
Bugs have arisen where poor development mapped relationships to the wrong field.
Our technical architect plans to merge the ADULT and CHILD tables into a new ADULT_CHILD table and create materialised views in place of the tables. The plan is also to create a new id value and replace the id values in all associated tables, so even if the PL/SQL/APEX code maps to the wrong field the data mapping will still be correct.
The reasoning behind this solution is that it does not require changing any other code.
My opinion is that this is a fudge, but my background is more Java/.NET OO.
What arguments can I use to convince the architect that this is wrong and not a real solution? I'm concerned we are creating a more complex solution and that performance will be an issue.
Thanks for any pointers
While it may be a needed solution, it might also create new issues. If you really do need an MV that is up to date at all times, you need on-commit refresh, and that in turn tends to make all updates sequential: all processes writing to it wait in line for the one updating the table to commit. The table, not the row.
So it is prudent to test the approach with realistic loads. Why does it have to become a single table? Could they not stay separate, with an FK added? If you need more control over the updates, rename the tables and put views with INSTEAD OF triggers in their place, as sketched below.
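A rough sketch of that last idea for one of the tables (column names are invented; the real mapping logic would live in the trigger body):

ALTER TABLE adult RENAME TO adult_base;

CREATE OR REPLACE VIEW adult AS
  SELECT id, name FROM adult_base;

CREATE OR REPLACE TRIGGER trg_adult_ins
INSTEAD OF INSERT ON adult
FOR EACH ROW
BEGIN
  -- existing code keeps inserting into "ADULT"; the trigger decides
  -- what actually happens underneath
  INSERT INTO adult_base (id, name) VALUES (:NEW.id, :NEW.name);
END;
/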

Why oracle does not have autoincrement feature for primary keys? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Can someone enlighten me on why Oracle does not support an autoincrement feature for primary keys?
I know the same effect can be achieved with a sequence and a trigger, but why didn't Oracle introduce an autoincrement keyword which would internally create the sequence and trigger? I bet the people at Oracle have definitely thought about this. There must be some reason for not providing this feature. Any thoughts?
It may just be terminology.
'AUTOINCREMENT' implies that record '103' will get created between records '102' and '104'. In clustered environments, that isn't necessarily the case for sequences. One node may insert '100', '101', '102' while the other node is inserting '110', '111', '112', so the records are 'out of order'. [Of course, the term 'sequence' has the same implication.]
If you choose not to follow the sequence model, then you introduce locking and serialization issues. Do you force an insert to wait for the commit/rollback of another insert before determining what the next value is, or do you accept that, if a transaction rolls back, you get gaps in the keys?
Then there's the issue of what you do if someone wants to insert a row into the table with a specific value for that field (i.e. is it allowed, or does it work like a DEFAULT), or if someone tries to update it. If someone inserts '101', does the autoincrement 'jump' to '102', or do you risk attempted duplicate values?
It can have implications for their IMP utilities and direct path writes and backwards compatibility.
I'm not saying it couldn't be done. But I suspect in the end someone has looked at it and decided that they can spend the development time better elsewhere.
Edit to add:
In Oracle 12.1, support for an IDENTITY column was added.
"The identity column will be assigned an increasing or decreasing integer value from a sequence generator for each subsequent INSERT statement. You can use the identity_options clause to configure the sequence generator."
https://docs.oracle.com/database/121/SQLRF/statements_7002.htm#CJAHJHJC
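For example (table and column names are illustrative):

CREATE TABLE t (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,
  name VARCHAR2(100)
);

INSERT INTO t (name) VALUES ('first row');   -- id is assigned automatically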
This has been a bone of contention for quite some time between the various DB camps. For a database system as polished and well-built as Oracle, it still stuns me that it requires so much code and effort to enable this commonly-used and valuable feature.
I recommend just putting some kind of incremental-primary-key builder/function/tool in your toolkit and having it handy for Oracle work. And write your congressman and tell him how badly they need to make this feature available from the GUI or with a single line of SQL!
Because it has sequences, which can do everything autoincrement does, and then some.
Many have complained of this, but the answer generally is that you can create one easily enough with a sequence and a trigger.
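The classic pre-12c pattern that answer refers to looks roughly like this (names are examples):

CREATE TABLE people (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(100)
);

CREATE SEQUENCE people_seq;

CREATE OR REPLACE TRIGGER people_bi
BEFORE INSERT ON people
FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
  :NEW.id := people_seq.NEXTVAL;   -- 11g syntax; older versions need SELECT people_seq.NEXTVAL INTO :NEW.id FROM dual
END;
/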
Sequences can get out of sync easily (someone inserts a record manually in the database without updating the sequence). Oracle should have implemented this ages ago!
Sequences are easy to use, but not as easy as autoincrement (they require an extra bit of coding).
