Cassandra JDBC interface

I am trying to perform CRUD operations on a Cassandra database, but I ran into a problem: the Cassandra JDBC driver requires a predefined column family definition in the database, created using the CREATE COLUMNFAMILY CQL command.
Moreover, an INSERT statement also fails if I try to insert a column/value pair other than those specified in the earlier CREATE COLUMNFAMILY statement.
I want to know if this is the usual behaviour or if something is going wrong.
Any help will be appreciated.

As you are using CQL, you have to create the column definitions first. The second half does sound like a problem, though, because in Cassandra it should be possible to insert any key/value at any time, whether it is defined or not.
Can you show your code here so that I can trace the problem?
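For reference, a minimal CQL sketch of the expected workflow (the keyspace is assumed to exist; the column family and column names here are hypothetical):

CREATE COLUMNFAMILY users (user_id text PRIMARY KEY, name text);
INSERT INTO users (user_id, name) VALUES ('u1', 'Alice');
-- Inserting a column that is not in the definition fails over CQL/JDBC;
-- add it to the schema first, then insert:
ALTER COLUMNFAMILY users ADD email text;
INSERT INTO users (user_id, email) VALUES ('u1', 'alice@example.com');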

Related

Oracle Data Integrator (ODI 12.2.1) - load plan record count issue

I came across a scenario in my project. I am loading data from a file to a table using ODI, running my interfaces through a load plan. I have 1000 records in my source file, and I am also getting 1000 records in the target table, but when I check the ODI load plan execution log it shows the number of inserts as 2000. Can anyone please help, or is it an ODI bug?
The number of inserts does not only show the inserts into the target table but also all the inserts happening in temporary tables. Depending on the knowledge modules (KMs) used in an interface, ODI might load data into a C$_ table (LKM) or an I$_ table (IKM/CKM). The rows loaded into these tables are also counted.
You can look at the code generated in the Operator to check if your KMs are using these temporary tables. You can also simulate an execution to see the code that would be generated.
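As a simplified illustration of why the count doubles, the generated code can contain two inserts per source row, along these lines (table and column names are hypothetical; the actual steps depend on your KMs):

-- KM step 1: load the flow table (counted as 1000 inserts)
INSERT INTO I$_TARGET_TABLE (col1, col2)
SELECT col1, col2 FROM C$_SOURCE_FILE;

-- KM step 2: load the target table (counted as another 1000 inserts)
INSERT INTO TARGET_TABLE (col1, col2)
SELECT col1, col2 FROM I$_TARGET_TABLE;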

Informatica: execute SQL in a SQL transformation

Background: I am really new to this. Informatica Developer for PowerCenter Express, Version 9.6.1 HotFix 2.
I want to execute a T-SQL statement as one step in a workflow:
truncate table dbo.stage_customer
I tried creating a mapping and adding a SQL transformation to it, entering the above query in the SQL query window. I added the mapping to a workflow consisting of just the start, the mapping, and the end. When I validate the flow I get this error:
The group [Input] in transformation xxx must have at least one port
I have no idea what ports are needed since this (the truncate statement) basically doesn't need input or output.
Use your query "truncate table dbo.stage_customer" in the Pre-SQL command.
As Aswin suggested, use the built-in option in the session properties.
However, in production environments the user may not have TRUNCATE TABLE access for the table in the database. In that case, the Informatica workflow will fail if you check the truncate target table option. It is good to have a stored procedure that truncates the target table and to call that stored procedure from the Informatica mapping, to avoid workflow failures when the user has no truncate access to the database.
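A minimal T-SQL sketch of such a stored procedure (the procedure name is hypothetical; EXECUTE AS OWNER lets a caller with only EXECUTE rights on the procedure perform the truncate without holding ALTER on the table itself):

CREATE PROCEDURE dbo.usp_truncate_stage_customer
WITH EXECUTE AS OWNER
AS
BEGIN
    -- runs with the owner's rights, so the caller does not need ALTER on the table
    TRUNCATE TABLE dbo.stage_customer;
END;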
If you would like to truncate a target table before loading, why don't you use the built-in option present in the session properties?
Go to Workflow Manager -> open the session -> Mapping tab -> click on the target table listed on the left side -> enable the "Truncate target table option" property.
To answer your question: I think you have to connect at least one input and one output port to the SQL transformation (because it is not unconnected). Just create dummy ports and try again.
Try this article - click here.

Why does Phoenix always add an extra column (named _0) to HBase when I execute an UPSERT command?

When I execute an UPSERT command in Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it, like this:
ROW COLUMN+CELL
abc column=F:A, timestamp=1451305685300, value=123
abc column=F:_0, timestamp=1451305685300, value=  # I want to avoid generating this row
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
Regarding your question of whether that is avoidable:
You could work around the problem by adding the following statements at the end of your SQL:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but then access it with another system that is not aware of this Phoenix-specific dummy value.
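For instance, with the table from the scan output above (assuming it is named TEST and uses column family F; substitute your real names):

ALTER TABLE "TEST" ADD "F"."_0" VARCHAR(1);
ALTER TABLE "TEST" DROP COLUMN "F"."_0";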

Can Oracle database transactions help in this scenario?

I am working with an Oracle database (11g Release 2). Imagine multiple connections doing the following simultaneously:
Start transaction
Check if a specific value exists in a table of unique values
If the value does not exist, insert it
Commit transaction
It seems to me that the only way to prevent conflicts is to block connections from performing the above 4-step sequence while any other connection is currently performing the 4-step sequence.
Can transactions achieve this kind of broad locking/blocking in Oracle?
Thanks in advance for your answers and advice on how to best deal with this scenario.
Add a unique constraint, and implement an exception handler to get the next sequence value and try again.
This is assuming you're using PL/SQL.
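A minimal PL/SQL sketch of that pattern (table, column, and variable names are hypothetical):

BEGIN
  INSERT INTO unique_values (val) VALUES (p_val);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    NULL; -- another session inserted the value first; handle or retry here
END;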
An alternative would be using an Oracle sequence with a cache size of 1. This will also help avoid gaps in the sequence.
Another option: SELECT * FROM table_name FOR UPDATE, which locks the rows so that other sessions issuing the same statement block until the transaction ends...
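A sketch of that approach (the table name is hypothetical; note that in Oracle this blocks other writers and other FOR UPDATE readers on the locked rows, not plain SELECTs):

SELECT * FROM unique_values FOR UPDATE;
-- check for the value and insert it here, then COMMIT to release the locks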

Apply actions from one SQL table to another using Message Broker

I want to use WebSphere Message Broker to capture the actions that any user performs on a table and apply them to another table.
Example:
1. A user inserts a record into Table x in TestDB.
2. Message Broker takes this newly added record and adds it to Table y in TestDB.
Could you provide detailed information? Thanks in advance.
I suggest you instead add an insert trigger to Table x; this is a normal job for database triggers.
It would help if you updated your question with the DBMS and OS you are using.
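As a rough illustration of the trigger approach (syntax varies by DBMS; MySQL-style syntax and hypothetical table/column names are assumed here):

CREATE TRIGGER trg_copy_x_to_y
AFTER INSERT ON x
FOR EACH ROW
  INSERT INTO y (id, payload) VALUES (NEW.id, NEW.payload);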
You could approach this using a DatabaseInput node to fire a message when the table is updated, and then use normal ESQL or a Mapping node to do the insert into the target table.
The key thing is going to be configuring your DatabaseInput node correctly; you can get more info on how to do that here:
http://publib.boulder.ibm.com/infocenter/wmbhelp/v8r0m0/topic/com.ibm.etools.mft.doc/bc34041_.htm
