I am facing a weird problem. I have a table (observation_measurement) in an Oracle DB and it has many fields. One field is observation_name; it stores different measurements, together with their values, loaded from a text file.
For example, observation_name stores four measurements a, b, c, d (the names of the measurements) and their corresponding values 1, 2, 3, 4.
Later the same text file is read again. This time the file contains only three measurements a, b, d (c is missing) with values 7, 8, 9, and these are stored in the table. So if I ask for the latest values of all observation_names, I should get a=7, b=8, c=null, d=9. But I am getting
a=7, b=8, c=3, d=9. I don't know why it still returns the old value for measurement c.
Any ideas?
NULL has to be handled specially in Oracle, e.g. with IS NULL or IS NOT NULL.
Presumably your update logic involves some validation on that column and leaves NULL values untreated.
Since that validation fails for NULL, the old value is retained in the table.
Could you please update your question with the query you use to UPDATE the table?
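One common way this happens (purely a sketch, since we haven't seen your query; the staging table new_file_load and the column observation_value are assumptions): a MERGE keyed on observation_name only touches measurements present in the new file, so a measurement missing from the load simply keeps its previous value.

    -- Hypothetical sketch: only measurements present in the new load are updated,
    -- so 'c' keeps its old value (3) because nothing in the statement touches it.
    MERGE INTO observation_measurement om
    USING new_file_load nf                        -- assumed staging table for the parsed file
       ON (om.observation_name = nf.observation_name)
    WHEN MATCHED THEN
      UPDATE SET om.observation_value = nf.observation_value
    WHEN NOT MATCHED THEN
      INSERT (observation_name, observation_value)
      VALUES (nf.observation_name, nf.observation_value);

    -- To get c = NULL you would have to explicitly null out measurements
    -- that are absent from the latest load:
    UPDATE observation_measurement
       SET observation_value = NULL
     WHERE observation_name NOT IN (SELECT observation_name FROM new_file_load);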
My first task is to add two new columns to a table: the first column stores the values of the M and X fields in a single column (as one unit, with a pipe separator), and the second column stores the O and Z field values in a single column (again as one unit, with a pipe separator).
The second task: after selecting agency and external letter rating (shown in the image) from the drop-down and saving the form, the values from fields M and X should move to N and Y, and these values should be stored in the table columns created in task one. If the form is saved again, the values should move on to the O and Z fields on the form, and so on.
Can anyone help me with how to proceed? I also don't know how to split a column value into pieces and display them on the form.
It would be even better if you could propose a different approach that achieves the same thing.
Adding columns:
That's a bad idea. Concatenating values is easy, and so is storing them in a column. But then, in the next step, you have to split those values back into two values (columns? rows?) to join them to something else and produce a result. Can you do it? Sure. Should you? No.
What to do? If you want to store 4 values, then add 4 columns to the table.
Alternatively, see whether you can create a master-detail relationship between two tables, i.e. create a new table (with a foreign key to the existing table) with two additional columns:
one that says which field the stored value relates to (M or Y)
the value itself
It looks like more work, but it should pay off in the future.
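A minimal sketch of that detail table, assuming the existing table is called rating_form with a numeric id primary key (all names here are assumptions):

    -- One row per stored value instead of pipe-separated strings.
    CREATE TABLE rating_history (
      rating_id    NUMBER        PRIMARY KEY,       -- populate from a sequence
      form_id      NUMBER        NOT NULL REFERENCES rating_form (id),
      source_field VARCHAR2(10)  NOT NULL,          -- which form field the value came from (M, X, O, Z, ...)
      field_value  VARCHAR2(200)
    );

Reading the "previous" values back into the form is then a simple query per source_field, with no string splitting involved.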
Layout:
That really looks like a tabular form, which only supports what I said above. You can't "dynamically" add rows (and even if you could, that's something you should avoid, because you'd have to add - actually, display - separate items rather than rows that share the same item name).
I need to save the before and after values of certain fields of an items table to an items_log table. Changes are saved by an after-update trigger on the items table.
Some of the items table columns are varchar2 type and some are number(*) type.
What is the better approach? Saving to two separate pairs of before/after fields (one NUMBER pair and one VARCHAR2 pair)? Or conserving space by saving everything to a single pair of before/after VARCHAR2 fields?
The purpose of this log table is to record which user changed a field and the before and after values.
Could saving a float value to a string field lead to an unexpected deviation from the original value?
Thanks in advance
"What is the better approach?"
There is no "better" approach. There is only an approach that's good enough for your application. If your table will have a few thousand rows in it, it doesn't really matter. If your table will have a few million rows, then space may be more of a concern.
If your goal is to display to a user what changes occurred to your item and it's not going to see a lot of activity, storing everything as a varchar may be good enough. You probably don't want to store rows for fields that did not change.
I often use APC's approach. The items_log table has the same columns as the items table, plus a history id, timestamp, action (I, U, or D), and user. Everything is maintained by a trigger. There are also built-in Oracle auditing features that can do the auditing for you.
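A minimal sketch of that shadow-table trigger, assuming an items table with columns item_id and item_name and a matching items_log table (all names and columns here are assumptions, not your actual schema):

    CREATE OR REPLACE TRIGGER items_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON items
    FOR EACH ROW
    DECLARE
      v_action CHAR(1);
    BEGIN
      v_action := CASE WHEN INSERTING THEN 'I'
                       WHEN UPDATING  THEN 'U'
                       ELSE 'D' END;
      IF DELETING THEN
        -- on delete only the :OLD values exist
        INSERT INTO items_log (item_id, item_name, action, changed_by, changed_at)
        VALUES (:OLD.item_id, :OLD.item_name, v_action, USER, SYSTIMESTAMP);
      ELSE
        -- inserts and updates log the new state of the row
        INSERT INTO items_log (item_id, item_name, action, changed_by, changed_at)
        VALUES (:NEW.item_id, :NEW.item_name, v_action, USER, SYSTIMESTAMP);
      END IF;
    END;
    /

Because the log table mirrors the items columns, NUMBER values stay NUMBER and VARCHAR2 values stay VARCHAR2, which sidesteps the float-to-string question entirely.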
In Cassandra, a row can be very long and store time-ordered data. For example, one row could look like the following:
RowKey: "weather"
name=2013-01-02:temperature, value=90,
name=2013-01-02:humidity, value=23,
name=2013-01-02:rain, value=false,
name=2013-01-03:temperature, value=91,
name=2013-01-03:humidity, value=24,
name=2013-01-03:rain, value=false,
name=2013-01-04:temperature, value=90,
name=2013-01-04:humidity, value=23,
name=2013-01-04:rain, value=false.
9 columns of 3 days' weather info.
Time is the key within this row, so the columns in the row are ordered by time.
My question is: is there any way to run a query like "what is the humidity value of the last/first day in this row"? I know I could use an ORDER BY clause in CQL, but since this row is already sorted by time, there should be a way to just get the first/last column directly instead of sorting again. Or does Cassandra already optimize the ORDER BY under the hood?
Another way I can think of is storing an extra column in this row, called "last_time_stamp", that is updated every time new data is inserted. But that would require one more update for every insert of new weather data.
Thanks for any suggestion!:)
Without seeing more of your actual table, I suggest using a timestamp (or timeuuid if collisions are possible) as the second component of a compound primary key. With that, you can get the last "row" with ORDER BY t DESC LIMIT 1.
You could also change the clustering order in your schema to order it naturally for "last N" queries.
Please see examples and linked resource in this answer.
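A minimal CQL sketch of the idea, assuming a table shaped like the weather row above (the table and column names are assumptions):

    -- "day" is the clustering column, so columns are stored in time order.
    CREATE TABLE weather (
      rowkey      text,
      day         timestamp,
      temperature int,
      humidity    int,
      rain        boolean,
      PRIMARY KEY (rowkey, day)
    ) WITH CLUSTERING ORDER BY (day DESC);   -- newest first: "last N" reads are cheap

    -- Humidity of the most recent day (no sort needed, it is the first stored column):
    SELECT humidity FROM weather WHERE rowkey = 'weather' LIMIT 1;

    -- Humidity of the first day (reverse the stored order for this one query):
    SELECT humidity FROM weather WHERE rowkey = 'weather' ORDER BY day ASC LIMIT 1;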
Is there any built-in function/procedure in Oracle 11g to simply transform a standard normalized table into a nested table? Or does anyone have an idea for the following problem?
Reason: all of our tables are normalized in such a way that if a certain attribute has two or more values for one record, the whole record (except that attribute value) has to be stored redundantly. That is, if my person table has an attribute "ID_CARD_NUMBER" and one person has two nationalities (and two ID cards), there is a second record with redundant attribute values except for "ID_CARD_NUMBER". Moving such an attribute out into an extra table is NOT an option, and I also don't want to denormalize in a way where one attribute value is a concatenation of previously separate attribute values.
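There is no single built-in "normalize to nested table" procedure, but as a hedged sketch of one direction to explore (the type, table, and column names here are assumptions): the COLLECT aggregate, available in 11g, can fold the redundant rows into one row per person with a nested table of ID card numbers.

    -- Nested table type to hold multiple ID card numbers per person.
    CREATE TYPE id_card_tab AS TABLE OF VARCHAR2(30);
    /
    -- Collapse the redundant person rows into one row per person.
    SELECT p.person_id,
           MIN(p.last_name) AS last_name,                              -- identical on every redundant row
           CAST(COLLECT(p.id_card_number) AS id_card_tab) AS id_cards
    FROM   person p
    GROUP  BY p.person_id;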
I have two files, A and B. The records in both files share the same format, and the first n characters of a record are its unique identifier. The records are fixed length and consist of m fields (field1, field2, field3, ... fieldm). File B contains new records as well as records from file A that have changed. How can I use CloverETL to determine which fields have changed in a record that appears in both files?
Also, how can I gather metrics on the frequency of changes for individual fields? For example, I would like to know how many records had changes in fieldm.
This is a typical example of the Slowly Changing Dimension problem. A solution with CloverETL is described on their blog: Building Data Warehouse with CloverETL: Slowly Changing Dimension Type 1 and Building Data Warehouse with CloverETL: Slowly Changing Dimension Type 2.