How to create tables and columns in Accumulo - Hadoop

I am new to Accumulo. I don't know how to create a table, or columns, in Accumulo. Can anyone show me a sample of how to create a table and columns in Accumulo? Thanks in advance.

I think you really need to re-read the user manual at https://accumulo.apache.org/1.7/accumulo_user_manual.html
Accumulo doesn't have columns in the traditional relational-database sense. Instead of a table with a fixed schema of columns, you have rows made up of key/value entries, where each key carries a Column Family and a Column Qualifier. This is a fairly significant difference from what you are trying to do. Again, re-read the user manual and then re-ask/reformat your question.

Are you trying to convert a relational table to Accumulo? That's fairly easy: use the primary key as your row ID, the field name as your column family, leave the column qualifier blank, and finally use the field value as your value (a sketch follows below).
There is no open source GUI tool that I know of for viewing Accumulo data.
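A minimal sketch of that mapping using the Accumulo 1.7 Java API. The instance name, ZooKeeper address, credentials, table name and the sample fields below are all placeholders you would replace with your own:

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;

    public class CreateTableExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details: replace with your instance, ZooKeepers and user.
            Connector conn = new ZooKeeperInstance("myInstance", "zkhost:2181")
                    .getConnector("root", new PasswordToken("secret"));

            // Tables are created explicitly; "columns" are never declared anywhere,
            // they simply appear when you write entries that use them.
            if (!conn.tableOperations().exists("people")) {
                conn.tableOperations().create("people");
            }

            // One relational row becomes one Accumulo row: the primary key is the row ID,
            // each field name is a column family, the qualifier is left blank,
            // and the field value is the value.
            BatchWriter writer = conn.createBatchWriter("people", new BatchWriterConfig());
            Mutation m = new Mutation("1001");                        // primary key -> row ID
            m.put("name", "", new Value("Alice".getBytes()));         // field name -> column family
            m.put("age",  "", new Value("42".getBytes()));
            writer.addMutation(m);
            writer.close();
        }
    }

(The table creation step can also be done interactively with the createtable command in the Accumulo shell.)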

Related

Check all table columns for a value

Ok, tricky question: I am trying to figure out where a database schema is storing a particular pointer. I know the pointer value; I just don't know what table or column it is in. The pointer is 123123123. How do I check all table columns to see if any of them hold that value?
Thanks.
In H2 you can use full text search, but then you would need to add all of the tables to the search scope and index them.
If you only need to index primary keys it might be better, but you still have to come up with an individual FT_CREATE_INDEX() call for each table. You can automate this from most languages or with an ETL tool (like Scriptella); a rough sketch follows below.
If you have enough disk space, you could dump the database to a SQL file and search it with a viewer for big files like glogg.
The advantage of the first solution is that it needs no external tools, but you have to work out a specific indexing script for every existing or new table. The second solution is a one-time fix.
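For what it's worth, here is a rough sketch of automating that with plain JDBC against H2. The JDBC URL and the PUBLIC schema are assumptions, the search term is the 123123123 from the question, and indexing every table can take a while and a fair amount of disk:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class FindValueEverywhere {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection("jdbc:h2:~/mydb", "sa", "")) {
                try (Statement st = c.createStatement()) {
                    // Register and initialise H2's built-in full text support.
                    st.execute("CREATE ALIAS IF NOT EXISTS FT_INIT FOR \"org.h2.fulltext.FullText.init\"");
                    st.execute("CALL FT_INIT()");
                }

                // Collect every table in the PUBLIC schema, then index each one
                // (this is the per-table FT_CREATE_INDEX work mentioned above).
                // Note: the exact TABLE_TYPE value can differ between H2 versions.
                List<String> tables = new ArrayList<>();
                try (Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES " +
                             "WHERE TABLE_SCHEMA = 'PUBLIC' AND TABLE_TYPE = 'TABLE'")) {
                    while (rs.next()) {
                        tables.add(rs.getString(1));
                    }
                }
                try (Statement st = c.createStatement()) {
                    for (String table : tables) {
                        st.execute("CALL FT_CREATE_INDEX('PUBLIC', '" + table + "', NULL)");
                    }
                }

                // Search for the pointer value across everything that was indexed.
                try (Statement st = c.createStatement();
                     ResultSet hits = st.executeQuery("SELECT QUERY FROM FT_SEARCH('123123123', 0, 0)")) {
                    while (hits.next()) {
                        // QUERY is a WHERE-clause fragment naming the table that matched.
                        System.out.println(hits.getString("QUERY"));
                    }
                }
            }
        }
    }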
I use SQL Search from RedGate. It's free and it helps you find any text anywhere in the database.
https://www.red-gate.com/products/?gclid=CjwKEAjwiYG9BRCkgK-G45S323oSJABnykKAE7IH_EMhnmq7OdLdXljfIkdGZrDD6OnOrT4VB0agahoCVn3w_wcB

Simulating a columnar store using cluster tables

I have a client who mostly runs calculations on a single column across many rows of a table (a different column each time), which is a classic workload for a columnar DB.
The problem is that he is using Oracle, so what I thought of doing was to build a set of cluster tables, each with just one column besides the PK, and in this way let him work in a pseudo-columnar model.
What are your thoughts on the subject?
Will it even work as expected, or am I just forcing the solution here?
Thanks,
Daniel
I didn't test it in the end, but I did get close to columnar ("vertical") performance using a sorted hash cluster.
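I haven't verified this against a live instance, but modelled on the sorted hash cluster example in Oracle's documentation the setup looks roughly like this (the cluster, table and column names are invented, the SIZE/HASHKEYS values need tuning, and the connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SortedHashClusterSketch {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement st = c.createStatement()) {

                // The cluster key is hashed (no index lookup), and rows for a given key
                // are kept physically ordered on the SORT column.
                st.execute(
                    "CREATE CLUSTER reading_cluster ( " +
                    "  sensor_id NUMBER, " +
                    "  taken_at  NUMBER SORT ) " +
                    "HASHKEYS 10000 HASH IS sensor_id SIZE 256");

                // The table lives inside the cluster, so scanning one sensor's readings
                // touches far fewer blocks than a scan over a plain heap table would.
                st.execute(
                    "CREATE TABLE sensor_readings ( " +
                    "  sensor_id NUMBER, " +
                    "  taken_at  NUMBER SORT, " +
                    "  reading   NUMBER ) " +
                    "CLUSTER reading_cluster (sensor_id, taken_at)");
            }
        }
    }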

Oracle copy data between databases honoring relationships

I have a question regarding the Oracle COPY command:
Is it possible to copy data between databases (where the structure is the same) and honor relationships in one go, without(!) writing procedures?
To be more precise:
Table B refers (by B.FK) to table A (A.PK) by a foreign key (B.FK -> A.PK; no relationship information is stored in the db itself). The keys are generated by a sequence, which is used to create the PK for all tables.
So how do I copy tables A and B while keeping the relationship intact, and use the target DB's sequence to generate new primary keys for the copied data (I cannot use the "original" PK values, as they might already be used in the same table for a different dataset)?
I doubt that the copy command is capable of handling this situation, but what is the way to achieve the desired behavior?
Thanks
Matthias
Oracle has several different ways of moving data from one database to another, of which the SQL*Plus copy command is the most basic and the least satisfactory. Writing your own replication routine (as @OldProgrammer suggests) isn't much better.
You're using 11g, so move into the 21st century by using the built-in Streams functionality.
There is no way to synchronize sequences across databases. There is a workaround, which is explained by the inestimable Tom Kyte.
I generally prefer DB links, and then use SQL INSERT statements to copy the data over.
In your scenario, first insert the data of table A over the DB link, and then table B. If you try it the other way round, you will get an error.
For info on DB links, you can check this page: http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm
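A hedged sketch of that approach over JDBC, run while connected to the target database. The DB link name source_db, the table and column names, the sequence pk_seq and the scratch mapping table are all assumptions; the map is what keeps B.FK pointing at the freshly generated A.PK values:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CopyOverDbLink {
        public static void main(String[] args) throws Exception {
            // Connect to the TARGET database; source_db is a DB link pointing at the source.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//targethost:1521/orcl", "app", "secret");
                 Statement st = c.createStatement()) {

                // Scratch table mapping every old A.PK to a freshly generated target key.
                st.execute("CREATE TABLE a_key_map (old_pk NUMBER, new_pk NUMBER)");
                st.execute("INSERT INTO a_key_map (old_pk, new_pk) " +
                           "SELECT a.pk, pk_seq.NEXTVAL FROM a@source_db a");

                // Copy the parent rows first, using the new keys ...
                st.execute("INSERT INTO a (pk, name) " +
                           "SELECT m.new_pk, s.name " +
                           "FROM a@source_db s JOIN a_key_map m ON m.old_pk = s.pk");

                // ... then the child rows, translating B.FK through the map so it
                // still points at the right parent.
                st.execute("INSERT INTO b (pk, fk, payload) " +
                           "SELECT pk_seq.NEXTVAL, m.new_pk, s.payload " +
                           "FROM b@source_db s JOIN a_key_map m ON m.old_pk = s.fk");

                st.execute("DROP TABLE a_key_map");
            }
        }
    }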

Is there a way to find when a value was inserted into a particular cell on an Oracle database?

There is a specific row in a table that I would like to research. I'd like to know when this row was inserted, when its individual fields were modified, the various values each field in this row might have had, etc. In short, it's an audit. Is it possible? How?
I'm using Oracle 11g
You can enable auditing. If this is after the fact, then no, I don't think there's much you can do.
You can try LogMiner after the fact, but that depends on whether you've got access to the necessary redo log files.
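If you do want an audit trail going forward, standard auditing looks roughly like this. It assumes the AUDIT_TRAIL initialization parameter is set to DB (or DB,EXTENDED for the SQL text) and that you connect with a suitably privileged account; the schema and table names are placeholders. Note that it records who changed the table and when, not the old column values:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EnableTableAuditing {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "secret");
                 Statement st = c.createStatement()) {

                // Start auditing DML on the table of interest (from now on only).
                st.execute("AUDIT INSERT, UPDATE, DELETE ON hr.employees BY ACCESS");

                // Later: see who touched it and when. SQL_TEXT is only populated
                // when AUDIT_TRAIL is set to DB,EXTENDED.
                try (ResultSet rs = st.executeQuery(
                        "SELECT username, timestamp, action_name, sql_text " +
                        "FROM dba_audit_trail " +
                        "WHERE owner = 'HR' AND obj_name = 'EMPLOYEES' " +
                        "ORDER BY timestamp")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " " + rs.getTimestamp(2)
                                + " " + rs.getString(3) + " " + rs.getString(4));
                    }
                }
            }
        }
    }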

Full text indexing on a column in a foreign table

My entire database is in InnoDB. I love the features, hands down. However, it doesn't allow full text indexing on TEXT-type columns. So I have to take the TEXT column from my main (InnoDB) table, move it into a MyISAM table, and reference back to the original table. But because MyISAM doesn't allow FK constraints, I realize I've created a potential weakness: if the key in the original table changes, the change won't cascade down into the MyISAM table. Vice versa, if I create an FK link from the original table to the MyISAM table and the MyISAM row is deleted, then I am left pointing at a nonexistent entry. The data consistency check is simply not there.
In short, InnoDB got me too comfortable with, and dependent on, FK constraints for my own good.
I would consider not using the MyISAM full text indexing at all, and instead using a proper search engine alongside your DB. Lucene/Solr, Sphinx and Xapian seem to be the leading choices (I've only used Lucene/Solr myself).
see this question for more :)
edit: also this question.
If you are using some sort of framework, the framework can control the referential integrity for you. CakePHP does a nice job of this with their Model classes.
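For completeness, the side-table setup the question describes looks roughly like this over JDBC against MySQL. The table and column names are made up, and, as discussed above, nothing in the database enforces consistency between the two tables; the application (or triggers) has to keep them in sync:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FulltextSideTable {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "app", "secret")) {

                try (Statement st = c.createStatement()) {
                    // MyISAM side table carrying only the searchable text,
                    // keyed by the InnoDB article's primary key. No FK is possible here.
                    st.execute("CREATE TABLE IF NOT EXISTS article_search ( " +
                               "  article_id INT NOT NULL PRIMARY KEY, " +
                               "  body TEXT, " +
                               "  FULLTEXT KEY ft_body (body) " +
                               ") ENGINE=MyISAM");
                }

                // Search the MyISAM table, then join back to the InnoDB master table.
                String sql = "SELECT a.id, a.title " +
                             "FROM article_search s " +
                             "JOIN article a ON a.id = s.article_id " +
                             "WHERE MATCH(s.body) AGAINST (?)";
                try (PreparedStatement ps = c.prepareStatement(sql)) {
                    ps.setString(1, "some search words");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1) + " " + rs.getString(2));
                        }
                    }
                }
            }
        }
    }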
