How to directly run a query using MyBatis? - spring

I want to use the query 'ANALYZE TABLE {tableName}', but I think MyBatis only supports CRUD statements.
How can I run 'ANALYZE TABLE' in MyBatis?

Just declare it as a normal select and specify Map as the return type. Note that ${} string substitution (rather than #{} parameter binding) is needed here, because a table name cannot be a bound parameter.

@Select("analyze table ${tableName}")
Map<String, Object> analyzeTable(@Param("tableName") String tableName);

@Test
public void testAnalyzeTable() {
  try (SqlSession sqlSession = sqlSessionFactory.openSession()) {
    Mapper mapper = sqlSession.getMapper(Mapper.class);
    Map<String, Object> result = mapper.analyzeTable("users");
    assertEquals("test.users", result.get("Table"));
    assertEquals("analyze", result.get("Op"));
    assertEquals("status", result.get("Msg_type"));
    assertEquals("OK", result.get("Msg_text"));
  }
}
Tested using...
MariaDB 10.4.10
MariaDB Connector/J 2.5.4
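If you prefer XML mappers over annotations, the equivalent mapping (a sketch under the same assumptions, untested here) would be:

<select id="analyzeTable" resultType="map">
  analyze table ${tableName}
</select>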

Related

How do you map the output of a Spring stored procedure execute?

I am using Spring and stored procedures to retrieve data from a MySQL database. I have the stored procedure and its parameters working OK, but I'm having problems mapping the result set. At the moment I have some truly ugly code to get the values, and I'm sure there has to be a better, cleaner, more elegant way. Can anyone guide me to a better solution?
After the stored procedure class, I have:
List<String> outList = new ArrayList<String>();
Map<String, Object> outMap = execute(parameters_map);
List list = (List) outMap.get("#result-set-1");
for (Object object : list) {
  Map map2 = (Map) object;
  outList.add((String) map2.get("runname"));
}
return outList;
runname is the column from the database query.
Is there a better way to achieve this?
Example from the Spring docs using a RowMapper:

public class JdbcActorDao implements ActorDao {

  private SimpleJdbcCall procReadAllActors;

  public void setDataSource(DataSource dataSource) {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.setResultsMapCaseInsensitive(true);
    this.procReadAllActors = new SimpleJdbcCall(jdbcTemplate)
        .withProcedureName("read_all_actors")
        .returningResultSet("actors",
            BeanPropertyRowMapper.newInstance(Actor.class));
  }

  public List getActorsList() {
    Map m = procReadAllActors.execute(new HashMap<String, Object>(0));
    return (List) m.get("actors");
  }

  // ... additional methods
}
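BeanPropertyRowMapper maps result-set columns onto bean properties by name, so the example assumes an Actor bean along these lines (not shown in the excerpt above; a minimal sketch):

public class Actor {

  private Long id;
  private String firstName;
  private String lastName;

  public Long getId() { return id; }
  public void setId(Long id) { this.id = id; }
  public String getFirstName() { return firstName; }
  public void setFirstName(String firstName) { this.firstName = firstName; }
  public String getLastName() { return lastName; }
  public void setLastName(String lastName) { this.lastName = lastName; }
}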
It took a while to interpret the Spring docs but I finally got there.
My solution:
SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(jdbcTemplate)
    .withProcedureName("DistinctRunNames")
    .withoutProcedureColumnMetaDataAccess();
simpleJdbcCall.addDeclaredParameter(new SqlParameter("environment", Types.VARCHAR));
simpleJdbcCall.addDeclaredParameter(new SqlParameter("username", Types.VARCHAR));
simpleJdbcCall.addDeclaredParameter(new SqlParameter("test_suite", Types.VARCHAR));

SqlParameterSource parameters = new MapSqlParameterSource()
    .addValue("environment", environment)
    .addValue("username", username)
    .addValue("test_suite", testSuite);

Map map = simpleJdbcCall.returningResultSet("runnames", new ParameterizedRowMapper<RunNameBean>() {
  public RunNameBean mapRow(ResultSet rs, int rowNum) throws SQLException {
    RunNameBean runNameBean = new RunNameBean();
    runNameBean.setName(rs.getString("runname"));
    return runNameBean;
  }
}).execute(parameters);

return (List) map.get("runnames");
Had problems with the expected parameters versus the actual ones, so I had to break up the simpleJdbcCall builder chain. It maps the results into a list beautifully.
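For reference, the RunNameBean used in the row mapper is not shown in the post; a minimal assumed shape is:

public class RunNameBean {

  private String name;

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
}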
Thank you for the answers, they helped me learn about Spring mapping.

Does hive jdbc support java.sql.PreparedStatement?

I am trying to query Hive using java.sql.PreparedStatement and getting an empty result set, while the same query gives a proper result set when executed using java.sql.Statement. I am using the hive-jdbc 1.2.2 jar, and the Hive server is part of a Hortonworks HDP stack.
Yes, it does:
public class HivePreparedStatement extends HiveStatement implements java.sql.PreparedStatement
As can be seen, Hive internally implements the JDBC interface PreparedStatement, and thus the driver supports this JDBC feature.
For reference see: https://hive.apache.org/javadocs/r1.2.2/api/org/apache/hive/jdbc/HivePreparedStatement.html
Hope it helps.
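For comparison, a minimal sketch of standard parameterized querying through the driver (the HiveServer2 URL, credentials, and table name here are placeholder assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class HivePreparedStatementDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder HiveServer2 URL and table; adjust for your cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "user", "");
         PreparedStatement ps = conn.prepareStatement("SELECT name FROM my_table WHERE id = ?")) {
      ps.setInt(1, 42);
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getString("name"));
        }
      }
    }
  }
}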
Only formally. Parameter "binding" in the Hive driver happens client-side: the values are spliced into the SQL text before it is sent to the server, so there is no true server-side prepared statement. See the source:
https://github.com/apache/hive/blob/ab4c53de82d4aaa33706510441167f2df55df15e/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java#L116
private String updateSql(String sql, HashMap<Integer, String> parameters) throws SQLException {
  List<String> parts = this.splitSqlStatement(sql);
  StringBuilder newSql = new StringBuilder((String) parts.get(0));
  for (int i = 1; i < parts.size(); ++i) {
    if (!parameters.containsKey(i)) {
      throw new SQLException("Parameter #" + i + " is unset");
    }
    newSql.append((String) parameters.get(i));
    newSql.append((String) parts.get(i));
  }
  return newSql.toString();
}

Calling Stored Procedure using Spring Data JPA

I want to know whether it is possible to call a stored procedure that returns a result set and has multiple OUT parameters using Spring Data JPA.
I found a GitHub issue for the same: https://github.com/spring-projects/spring-data-examples/issues/80
If it has been resolved, could someone provide an example with Spring Boot?
The way I've accomplished this in the past is to add custom behavior to a Spring Data JPA repository (link). Inside that I get the EntityManager and use java.sql.Connection and CallableStatement directly.
Edit: adding high-level sample code. The sample assumes you are using Hibernate, but the idea should be applicable to other providers as well.
Assuming you have an EntityRepositoryCustom fragment:

public interface EntityRepositoryCustom {
  Result storedProcCall(Input input);
}

public class EntityRepositoryImpl implements EntityRepositoryCustom {

  @PersistenceContext
  private EntityManager em;

  @Override
  public Result storedProcCall(Input input) {
    final Result result = new Result();
    Session session = getSession();
    // Instead of an anonymous class you could move this out to a
    // private static class that implements org.hibernate.jdbc.Work.
    session.doWork(new Work() {
      @Override
      public void execute(Connection connection) throws SQLException {
        CallableStatement cs = null;
        try {
          cs = connection.prepareCall("{call some_stored_proc(?, ?, ?, ?)}");
          cs.setString(1, "");
          cs.setString(2, "");
          cs.registerOutParameter(3, Types.VARCHAR);
          cs.registerOutParameter(4, Types.VARCHAR);
          cs.execute();
          // Get values from the output parameters and set fields on the return object.
          result.setSomeField1(cs.getString(3));
          result.setSomeField2(cs.getString(4));
        } finally {
          if (cs != null) {
            cs.close();
          }
        }
      }
    });
    return result;
  }

  private Session getSession() {
    // Get the Hibernate Session from the EntityManager.
    return em.unwrap(org.hibernate.Session.class);
  }
}
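To plug the custom fragment into Spring Data, the main repository interface extends both (the entity and ID types here are assumptions mirroring the sketch above):

import org.springframework.data.jpa.repository.JpaRepository;

public interface EntityRepository extends JpaRepository<Entity, Long>, EntityRepositoryCustom {
  // Spring Data picks up EntityRepositoryImpl by the "Impl" naming convention,
  // so storedProcCall is available alongside the generated CRUD methods.
}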

Integrating Spark SQL and Apache Drill through JDBC

I would like to create a Spark SQL DataFrame from the results of a query performed over CSV data (on HDFS) with Apache Drill. I successfully configured Spark SQL to make it connect to Drill via JDBC:
Map<String, String> connectionOptions = new HashMap<String, String>();
connectionOptions.put("url", args[0]);
connectionOptions.put("dbtable", args[1]);
connectionOptions.put("driver", "org.apache.drill.jdbc.Driver");
DataFrame logs = sqlc.read().format("jdbc").options(connectionOptions).load();
Spark SQL performs two queries: the first one to get the schema, and the second one to retrieve the actual data:
SELECT * FROM (SELECT * FROM dfs.output.`my_view`) WHERE 1=0
SELECT "field1","field2","field3" FROM (SELECT * FROM dfs.output.`my_view`)
The first one is successful, but in the second one Spark encloses fields within double quotes, which is something that Drill doesn't support, so the query fails.
Has anyone managed to get this integration working?
Thank you!
You can add a JdbcDialect for this and register the dialect before using the JDBC connector:
case object DrillDialect extends JdbcDialect {
  def canHandle(url: String): Boolean = url.startsWith("jdbc:drill:")

  override def quoteIdentifier(colName: java.lang.String): java.lang.String = {
    return colName
  }

  def instance = this
}
JdbcDialects.registerDialect(DrillDialect)
This is how the accepted answer code looks in Java:
import org.apache.spark.sql.jdbc.JdbcDialect;

public class DrillDialect extends JdbcDialect {

  @Override
  public String quoteIdentifier(String colName) {
    return colName;
  }

  public boolean canHandle(String url) {
    return url.startsWith("jdbc:drill:");
  }
}
Before creating the Spark Session register the Dialect:
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.jdbc.JdbcDialects;
public static void main(String[] args) {
  JdbcDialects.registerDialect(new DrillDialect());

  SparkSession spark = SparkSession
      .builder()
      .appName("Drill Dialect")
      .getOrCreate();

  // More Spark code here...

  spark.stop();
}
Tried and tested with Spark 2.3.2 and Drill 1.16.0. Hope it helps you too!

Retrieve values from database using JdbcTemplate into a HashMap

I am using JdbcTemplate to get data from the database in Spring MVC.
My query is:
SELECT COUNT(ITEM_TBL.MEETING_ID), ITEM_TBL.REG_EMAIL
FROM ITEM_TBL, MEETINGS_TBL
WHERE ITEM_TBL.MEETING_ID = MEETINGS_TBL.MEETING_ID
GROUP BY ITEM_TBL.REG_EMAIL

This returns rows like:
11 nishant@gmail.com
12 abhilasha@yahoo.com
13 shiwani@in.com
I want to store these values into a HashMap. Can you please help me with how to do this using JdbcTemplate?
Thanks
You need a ResultSetExtractor.
You can achieve that using the code below (note that the aggregate column is aliased as MEETING_COUNT so it can be read by name):

String sql = "SELECT COUNT(ITEM_TBL.MEETING_ID) AS MEETING_COUNT, ITEM_TBL.REG_EMAIL "
    + "FROM ITEM_TBL, MEETINGS_TBL "
    + "WHERE ITEM_TBL.MEETING_ID = MEETINGS_TBL.MEETING_ID "
    + "GROUP BY ITEM_TBL.REG_EMAIL";

ResultSetExtractor<Map<String, String>> mapExtractor = new ResultSetExtractor<Map<String, String>>() {
  public Map<String, String> extractData(ResultSet rs) throws SQLException {
    Map<String, String> mapOfKeys = new HashMap<String, String>();
    while (rs.next()) {
      String key = rs.getString("MEETING_COUNT");
      String obj = rs.getString("REG_EMAIL");
      // Build the map entry from the result set.
      mapOfKeys.put(key, obj);
    }
    return mapOfKeys;
  }
};

Map<String, String> map = jdbcTemplate.query(sql, mapExtractor);
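Since ResultSetExtractor is a single-method interface, on Java 8+ the same extractor can be written more compactly as a lambda (a sketch of the same logic):

ResultSetExtractor<Map<String, String>> extractor = rs -> {
  Map<String, String> mapOfKeys = new HashMap<>();
  while (rs.next()) {
    mapOfKeys.put(rs.getString("MEETING_COUNT"), rs.getString("REG_EMAIL"));
  }
  return mapOfKeys;
};
Map<String, String> map = jdbcTemplate.query(sql, extractor);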
