How can I retrieve the column names of a table in Apache Derby, via an SQL query?

I want to retrieve the column names of a table (e.g., MAIN_ENGINE_DATA), under a specific schema (e.g., APP) using an SQL query.
How can I achieve this in Apache Derby?

OK, I found the solution. The SQL query looks like this:
SELECT COLUMNNAME
FROM SYS.SYSCOLUMNS
INNER JOIN SYS.SYSTABLES ON SYS.SYSCOLUMNS.REFERENCEID = SYS.SYSTABLES.TABLEID
WHERE TABLENAME = 'MAIN_ENGINE_DATA'
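Since the question also asks to restrict the lookup to a specific schema (APP), the query can be extended with a join to SYS.SYSSCHEMAS; a sketch:

```sql
-- Filter by schema as well, via SYS.SYSSCHEMAS;
-- ORDER BY COLUMNNUMBER returns the columns in table order.
SELECT c.COLUMNNAME
FROM SYS.SYSCOLUMNS c
JOIN SYS.SYSTABLES t ON c.REFERENCEID = t.TABLEID
JOIN SYS.SYSSCHEMAS s ON t.SCHEMAID = s.SCHEMAID
WHERE s.SCHEMANAME = 'APP'
  AND t.TABLENAME = 'MAIN_ENGINE_DATA'
ORDER BY c.COLUMNNUMBER;
```

Without the schema filter, the original query can return columns from identically named tables in different schemas.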

Related

PL/SQL: is there a way to get column DDL?

I use dbms_metadata.get_ddl(...) to get object DDL, but it does not generate DDL for individual columns.
Is there a way to get column DDL in Oracle 12?
Thanks
I think the following query may be useful to you.
SELECT *
FROM ADMIN.DDL_HISTORY_LOG L
WHERE L.OBJECT_TYPE = 'TABLE'
AND L.DDL = 'ALTER'
AND L.OBJECT_NAME = 'TABLE_NAME' --just change table name here
AND UPPER(L.DDL_SQL) LIKE '%ALTER%TABLE%ADD%'
AND UPPER(L.DDL_SQL) NOT LIKE '%ADD%CONSTRAINT%'
This table holds all historic DDL statements, so if you specify your table name you can get every DDL statement issued against it in the past. (Note that ADMIN.DDL_HISTORY_LOG is a custom DDL-audit table, not a standard Oracle dictionary view, so this only works if your database maintains one.)
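If no such audit table exists, a rough per-column DDL fragment can also be reconstructed from the standard dictionary views, since DBMS_METADATA.GET_DDL only works at object level; a simplified sketch (the datatype formatting does not cover every Oracle type):

```sql
-- Sketch: rebuild a per-column DDL fragment from ALL_TAB_COLUMNS.
-- The datatype formatting below is simplified and does not cover
-- every Oracle type.
SELECT column_name || ' ' || data_type ||
       CASE
         WHEN data_type IN ('VARCHAR2', 'CHAR', 'NVARCHAR2', 'NCHAR')
           THEN '(' || data_length || ')'
         WHEN data_type = 'NUMBER' AND data_precision IS NOT NULL
           THEN '(' || data_precision || ',' || data_scale || ')'
         ELSE ''
       END ||
       CASE WHEN nullable = 'N' THEN ' NOT NULL' ELSE '' END AS column_ddl
FROM   all_tab_columns
WHERE  table_name = 'TABLE_NAME'   -- just change table name here
ORDER  BY column_id;
```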

Dynamic query using spring JPARepository

I am using JPA 2.1 with Spring 4.x.x in my project. I have to create a dynamic query based on column names.
For example:
Using the values A, B, C from table1.col1, I have to build a dynamic query on table2 that returns all of table2's data:
String s = "select A,B,C from table2 where Enable='Y'";
and pass that value to the repository. But I am not able to create a dynamic query.
How can I create a dynamic query in JPA using column names, so that I get the desired output from table2?

What's the SparkSQL query to write into a JDBC table?

For SQL queries in Spark:
For reads, we can read from JDBC with
CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (dbtable ...);
For writes, what is the query that writes data to a remote JDBC table using SQL?
NOTE: I want it to be a SQL query.
Please provide the pure "SQL query" that can write to JDBC when using HiveContext.sql(...) in SparkSQL.
An INSERT OVERWRITE TABLE will write to your database using the JDBC connection:
DROP TABLE IF EXISTS jdbcTemp;
CREATE TABLE jdbcTemp
USING org.apache.spark.sql.jdbc
OPTIONS (...);
INSERT OVERWRITE TABLE jdbcTemp
SELECT * FROM my_spark_data;
DROP TABLE jdbcTemp;
You can write the DataFrame with JDBC similar to the following:
df.write.jdbc(url, "TEST.BASICCREATETEST", new Properties)
Yes, you can. If you want to save a DataFrame into an existing table you can use
df.insertIntoJDBC(url, table, overwrite)
and if you want to create a new table to save this DataFrame, you can use
df.createJDBCTable(url, table, allowExisting)
(Note that insertIntoJDBC and createJDBCTable were deprecated in Spark 1.4 and later removed in favour of df.write.jdbc.)

How to extract table attributes like column name, datatype, and nullable for all tables in the database, in Oracle PL/SQL

Is there any query that can be used to retrieve the tables and their column attributes (column name, datatype, nullable, etc.) for all the tables in the database,
for Oracle PL/SQL?
The Oracle SQL you need would be the following (run as a user with access to the DBA views, such as SYS):
select owner, table_name, column_name, data_type, nullable
from dba_tab_columns;
If you do a DESC dba_tab_columns you will get a list of many more columns which may be of interest to you as part of your result set.
You can use a SQL tool (e.g. SQL*Plus) to run this query, or you can use PL/SQL to call it, put the results in PL/SQL variables, and print them out via DBMS_OUTPUT.PUT_LINE().
HTH
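If you don't have access to the DBA views, a similar query against ALL_TAB_COLUMNS covers every table the current user can see; a sketch:

```sql
-- ALL_TAB_COLUMNS needs no DBA privileges; it lists the columns of
-- all tables accessible to the current user.
SELECT owner, table_name, column_name, data_type, nullable
FROM   all_tab_columns
ORDER  BY owner, table_name, column_id;
```

There is also USER_TAB_COLUMNS, which restricts the result to tables owned by the current user (and therefore has no OWNER column).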

How to "insert into values" using Hive, Pig, or MapReduce?

I am new to Hadoop and big data concepts. I am using the Hortonworks sandbox and trying to manipulate the values of a CSV file. I imported the file using the file browser and created a table in Hive to run some queries. What I actually want is an "insert into values" style query that selects some rows, changes column values (for example, converting a string to a binary 0 or 1), and inserts them into a new table. An SQL-like query could be something like this:
Insert into table1 (id, name, '01')
select id, name, graduated
from table2
where university = 'aaa'
Unfortunately Hive cannot insert (constant) values without importing them from a file, and I don't know how to solve this problem using Hive, Pig, or even MapReduce scripts.
Please help me find the solution; I really need it.
Thanks in advance.
In Hive,
CREATE TABLE table1 as SELECT id, name, graduated FROM table2
WHERE university = 'aaa'
should create a new table with the results of the query.
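If the goal is to replace a column with a constant or recoded value, the expression can go straight into the SELECT list of the CREATE TABLE AS statement; a sketch (the column names and the 'yes' value are assumptions based on the question):

```sql
-- Sketch: constants and derived values can be produced directly in
-- the SELECT list of a CTAS statement, no file import needed.
CREATE TABLE table1 AS
SELECT id,
       name,
       CASE WHEN graduated = 'yes' THEN '1' ELSE '0' END AS graduated
FROM table2
WHERE university = 'aaa';
```

The same SELECT can also feed an existing table via INSERT INTO table1 SELECT ... in recent Hive versions.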