After finishing some integration tests, I found that my expected H2 database files did not exist.
With a URL of "jdbc:h2:/tmp/casper" I expected to have a /tmp/casper.mv.db file; however, there was none.
The reason is that while initializing the database I had executed "drop all objects delete files". That statement marks the database files for deletion, so after all my work the database disappeared when the datasource was closed at the end of the test.
A demonstration is in my answer to this question.
package org.javautil.h2;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.sql.Connection;
import java.sql.Statement;

import org.junit.Test;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class H2DropAllObjectsTest {

    @Test
    public void casper() throws Exception {
        final HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:/tmp/casper");
        config.setUsername("sr");
        config.setPassword("tutorial");
        config.setAutoCommit(true);
        HikariDataSource dataSource = new HikariDataSource(config);
        Connection connection = dataSource.getConnection();

        File f = new File("/tmp/casper.mv.db");
        assertTrue(f.exists());

        Statement s = connection.createStatement();
        s.execute("drop all objects delete files");
        assertTrue(f.exists());

        s.execute("create table a (b number(9))");
        /* do a lot of work */
        connection.commit();
        s.close();
        connection.close();
        assertTrue(f.exists());

        dataSource.close();
        assertFalse(f.exists());
    }
}
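Note the sequence of assertions: the file still exists after connection.close() and only disappears at dataSource.close(), because H2 removes the files once the last connection to the database is closed.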
Just handle your database with update; don't use create/drop:
dataSource:
    dbCreate: update
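If you do need to clear the schema between tests, a minimal sketch (my suggestion, reusing the datasource from the test above): run "drop all objects" without the "delete files" clause, which drops the tables but leaves /tmp/casper.mv.db on disk.

try (Connection conn = dataSource.getConnection();
     Statement st = conn.createStatement()) {
    // clears tables, views, sequences, etc., but does not
    // schedule the underlying database files for deletion
    st.execute("drop all objects");
}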
I'm trying to call a Google BigQuery stored procedure (Routine) using Spring Boot. I tried all the methods of the Routines API to extract data; however, it didn't help.
Has anyone ever created and called a BigQuery stored procedure (Routine) from Spring Boot? If so, how?
public static Boolean executeInsertQuery(String query, TableId tableId, String jobName) {
    log.info("Starting {} truncate query", jobName);
    BigQuery bigquery = GCPConfig.getBigQuery(); // bqClient
    // query configuration
    QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
            .setUseLegacySql(false)
            .setAllowLargeResults(true)
            .setDestinationTable(tableId)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .build();
    try {
        // build the query job
        QueryJob queryJob = new QueryJob.Builder(queryConfig).bigQuery(bigquery).jobName(jobName).build();
        QueryJob.Result result = queryJob.execute();
    } catch (JobException e) {
        log.error("{} unsuccessful. job id: {}, job name: {}. exception: {}", jobName, e.getJobId(),
                e.getJobName(), e.toString());
        return false;
    }
    return true; // missing in the original snippet; assumed intent is to report success
}
package ops.google.com;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

import java.io.File;
import java.io.FileInputStream;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class SelectFromBigQueryFunction {

    private static final Logger logger = LogManager.getLogger(SelectFromBigQueryFunction.class);

    public boolean tableSelectFromJoin(String key_path) {
        String projectID = "ProjectID";
        String datasetName = "DataSetName";
        String tableName1 = "sample_attribute_type";
        String tableName2 = "sample_attribute_value";
        boolean status = false;
        try {
            // Call a BQ function/routine, function name -> bq_function_name
            // String query = String.format("SELECT DataSetName.bq_function_name(1, 1)");
            // Call a BQ stored procedure, procedure name -> bq_stored_procedure_name
            String query = String.format("CALL DataSetName.bq_stored_procedure_name()");
            File credentialsPath = new File(key_path);
            FileInputStream serviceAccountStream = new FileInputStream(credentialsPath);
            GoogleCredentials credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
            // Initialize the client that will be used to send requests. This client only
            // needs to be created once and can be reused for multiple requests.
            BigQuery bigquery = BigQueryOptions.newBuilder()
                    .setProjectId(projectID)
                    .setCredentials(credentials)
                    .build().getService();
            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
            TableResult results = bigquery.query(queryConfig);
            results.iterateAll().forEach(row -> row.forEach(val -> System.out.printf("%s,", val.toString())));
            logger.info("Query performed successfully.");
            status = true;
        } catch (BigQueryException | InterruptedException e) {
            logger.error("Query not performed \n" + e.toString());
        } catch (Exception e) {
            logger.error("Some Exception \n" + e.toString());
        }
        return status;
    }
}
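For completeness, a hypothetical invocation (the key path is an assumption, not from the original answer):

SelectFromBigQueryFunction fn = new SelectFromBigQueryFunction();
boolean ok = fn.tableSelectFromJoin("/path/to/service-account-key.json");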
Recently I started working on BDD using JBehave.
So far, if I run using Maven, my Maven project builds successfully. It then gets into the story file, but it does not proceed further.
I tried running with JUnit, but I get the same result.
I think my problem is with the executor (runner) file.
I searched many sites, even jbehave.org and many Stack Overflow questions, but in vain.
Please help me out of this problem. Let me know if you need any additional information.
I have spent a lot of time trying to rectify this but couldn't find the solution.
Here is my runner file:
package runnerFile;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.LoadFromClasspath;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.junit.JUnitStory;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.jbehave.core.steps.ScanningStepsFactory;
import org.jbehave.core.steps.Steps;

public class TestRunner extends JUnitStories {

    @Override
    public Configuration configuration() {
        return new MostUsefulConfiguration()
                .useStoryLoader(
                        new LoadFromClasspath(this.getClass().getClassLoader()))
                .useStoryReporterBuilder(
                        new StoryReporterBuilder()
                                .withDefaultFormats()
                                .withFormats(Format.HTML, Format.CONSOLE)
                                .withRelativeDirectory("jbehave-report"));
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        // ArrayList<Object> stepFileList = new ArrayList<Object>();
        ArrayList<Steps> stepFileList = new ArrayList<Steps>();
        stepFileList.add(new Steps(configuration()));
        return new InstanceStepsFactory(configuration(), stepFileList);
        // return new ScanningStepsFactory(configuration(), "org.jbehave.examples.core.steps",
        //         "my.other.steps").matchingNames(".*Steps").notMatchingNames(".*SkipSteps");
    }

    @Override
    protected List<String> storyPaths() {
        return new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(this.getClass()),
                Arrays.asList("**/TC_2.story"),
                Arrays.asList(""));
    }
}
I kept my story file inside src/test/resources and the step definitions inside src/test/java.
Story (src/test/resources):
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development
Scenario: A scenario is a collection of executable steps of different type
Given I launch the url
When I login with username <Username> and password <Password>
Then I should see the homepage
Examples:
|Username|Password|
|test@gmail.com|test1234|
Step definition (src/test/java):
package definition;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Named;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

import pages.Homepage_Pages;

public class HomePage {

    Homepage_Pages home;

    @Given("I launch the url")
    public void url() {
        home.launchUrl();
    }

    @When("I login with username <Username> and password <Password>")
    public void login(@Named("Username") String Username, @Named("Password") String Password) {
        System.out.println(Username);
    }

    @Then("I should see the homepage")
    public void homePageVerification() {
        System.out.println("Heello");
    }
}
Maven console: (output not included in the post)
Try the following code, which is a stripped-down, simple test runner that does nothing fancy. It simply runs all stories found in sub-folders of the main folder, and it includes all step classes from the defined steps-file location. My original had a lot of those things hard-coded, but I changed them to final Strings, so it should be easy enough to adapt to your situation and run with this file. Obviously, change "com.yourpackage.steps" to whatever package folder you place your steps files in. Hope this helps.
package testrunner;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.embedder.EmbedderControls;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.reporters.CrossReference;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.junit.runner.RunWith;

import de.codecentric.jbehave.junit.monitoring.JUnitReportingRunner;

@RunWith(JUnitReportingRunner.class)
public class TestRunner extends JUnitStories {

    private Configuration configuration;

    public TestRunner() {
        super();
        CrossReference crossReference = new CrossReference();
        configuration = new MostUsefulConfiguration();
        configuration.useStoryReporterBuilder(
                new StoryReporterBuilder().withFormats(Format.HTML, Format.STATS, Format.CONSOLE)
                        .withCodeLocation(CodeLocations.codeLocationFromPath("target/."))
                        .withCrossReference(crossReference));
        EmbedderControls embedderControls = configuredEmbedder().embedderControls();
        embedderControls.doBatch(false);
        embedderControls.doGenerateViewAfterStories(true);
        embedderControls.doSkip(false);
        embedderControls.doVerboseFailures(true);
        embedderControls.doVerboseFiltering(true);
        embedderControls.useThreads(1);
        embedderControls.useStoryTimeouts("1800");
    }

    @Override
    protected List<String> storyPaths() {
        return new StoryFinder().findPaths(CodeLocations.codeLocationFromClass(this.getClass()), "**/*.story", "");
    }

    @Override
    public Configuration configuration() {
        return configuration;
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        final String stepsPackage = "com.yourpackage.steps";
        final String stepsLoc = "src/test/java/" + stepsPackage.replace(".", "/");
        List<Object> stepList = new ArrayList<Object>();
        File steps = new File(stepsLoc);
        File[] fileList = steps.listFiles(); // also returns folders (directories)
        int size = fileList.length;
        for (int i = 0; i < size; i++) {
            if (fileList[i].isFile()) {
                String value = fileList[i].getName().replace(".java", ""); // strip extension
                if (!value.toLowerCase().contains("testrunner")) { // ignore the test runner itself
                    try {
                        Object stepObject = Class.forName(stepsPackage + "." + value).newInstance();
                        stepList.add(stepObject);
                    } catch (InstantiationException e) {
                        e.printStackTrace();
                    } catch (IllegalAccessException e) {
                        e.printStackTrace();
                    } catch (ClassNotFoundException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
        return new InstanceStepsFactory(configuration(), stepList);
    }
}
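Note that JUnitReportingRunner is not part of JBehave core; it comes from the separate jbehave-junit-runner project (groupId de.codecentric), so that artifact needs to be on the test classpath for the @RunWith annotation above to resolve.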
Thanks in advance.
We are loading data into HBase using Java. It's pretty straightforward and works fine when we run the program on the client node (edge node). But we want to run this program remotely, outside the Hadoop cluster but within our network, to load the data.
Is there anything required to do this in terms of security on the Hadoop cluster? When I run the program outside the cluster, it hangs.
Please advise; I greatly appreciate your help.
Thanks
Code here:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

import com.dev.stp.LoadDataConfig;

public class LoadData {

    static String ZKHost;
    static String ZKPort;
    private static Configuration config = null;
    private static String tableName;

    public LoadData() {
        // Set application config
        LoadDataConfig conn = new LoadDataConfig();
        ZKHost = conn.getZKHost();
        ZKPort = conn.getZKPort();
        config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", ZKHost);
        config.set("hbase.zookeeper.property.clientPort", ZKPort);
        config.set("zookeeper.znode.parent", "/hbase-unsecure");
        tableName = "E_DATA";
    }

    // Insert record (in the original post this try block sat directly in the
    // class body; it is wrapped in a method here, with eventId as a parameter,
    // so the class compiles)
    public void insertRecord(String eventId) {
        try {
            HTable table = new HTable(config, tableName);
            Put put = new Put(Bytes.toBytes(eventId));
            put.add(Bytes.toBytes("E_DETAILS"), Bytes.toBytes("E_NAME"), Bytes.toBytes("test data 1"));
            put.add(Bytes.toBytes("E_DETAILS"), Bytes.toBytes("E_TIMESTAMP"), Bytes.toBytes("test data 2"));
            table.put(put);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
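Not an answer from the original thread, but a quick diagnostic sketch using the same old-style HBase client API as above (org.apache.hadoop.hbase.client.HBaseAdmin): checking reachability up front fails fast with an exception instead of hanging inside a put. Remote clients typically also need the cluster's hbase-site.xml settings on the classpath and must be able to resolve the region servers' hostnames.

try {
    // throws MasterNotRunningException / ZooKeeperConnectionException
    // if ZooKeeper or the HBase master cannot be reached
    HBaseAdmin.checkHBaseAvailable(config);
    System.out.println("HBase is reachable");
} catch (Exception e) {
    e.printStackTrace();
}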
I'm developing a Spring-based web application with PostgreSQL as the database. I'm using the JSON data type in PostgreSQL, and I have configured the entity with a Hibernate custom user type to support it.
Now I want to test my DAO objects using an embedded DB. Is there any embedded DB that supports the JSON data type and can be used in a Spring application?
When you use database-specific features, like JSON support in PostgreSQL, for safety you have to use the same type of database for testing. In your case, to test your DAO objects you can:
assume that PostgreSQL is installed on localhost, and make sure that this is the case for all environments where the tests run;
or, even better, try otj-pg-embedded, which downloads and starts PostgreSQL for JUnit tests (I haven't used it in real-life projects).
Update
If you are able to run Docker in your test environment, then instead of embedded databases use a real Postgres via Testcontainers, as sketched below.
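A minimal sketch of the Testcontainers approach (JUnit 4; the image tag and the table layout are my assumptions, not from the original answer):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

public class JsonDaoIT {

    // starts a throwaway PostgreSQL instance in Docker for this test class
    @ClassRule
    public static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

    @Test
    public void jsonColumnRoundTrip() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement stmt = conn.createStatement()) {
            stmt.execute("create table doc (payload json)");
            stmt.execute("insert into doc values ('{\"name\":\"test\"}')");
            try (ResultSet rs = stmt.executeQuery("select payload ->> 'name' from doc")) {
                rs.next();
                System.out.println(rs.getString(1)); // prints: test
            }
        }
    }
}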
package miniCodePrjPkg;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.Map;

import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.MapListHandler;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;

//import com.wix.mysql.EmbeddedMysql;
//import static com.wix.mysql.EmbeddedMysql.anEmbeddedMysql;
//import static com.wix.mysql.ScriptResolver.classPathScript;
//import static com.wix.mysql.distribution.Version.v5_7_latest;

// A lightweight option in the same spirit: dump a collection of maps into
// SQLite as JSON and query it with the JSON1 extension's json_extract()
public class DslQueryCollList {

    public static void main(String[] args) throws Exception {
        // Apache Commons Collections cannot query maps with a JSON-style
        // expression, so build the sample data as a list of maps instead
        Map<String, Object> myMap = Maps.newHashMap(ImmutableMap.of("name", 999999999, "age", 22));
        Map<String, Object> myMap2 = Maps.newHashMap(ImmutableMap.of("name", 8888888, "age", 33));
        List<Map<String, Object>> li =
                new ImmutableList.Builder<Map<String, Object>>().add(myMap).add(myMap2).build();
        System.out.println(li);

        // Embedded MySQL (wix-embedded-mysql) would also work, e.g.:
        // EmbeddedMysql mysqld = anEmbeddedMysql(v5_7_latest)
        //         .addSchema("aschema", classPathScript("iniListCache.sql"))
        //         .start();
        // but it only starts the server; you still need a client to connect,
        // which is more trouble than SQLite.
        String where = "json_extract(jsonfld,'$.age') > 30";
        List<Object> query = queryList(where, li);
        System.out.println(query);
    }

    private static List<Object> queryList(String whereClause, List<Map<String, Object>> li)
            throws ClassNotFoundException, SQLException, JsonProcessingException {
        String sqlQuery = "SELECT * FROM sys_data where " + whereClause;
        Class.forName("org.sqlite.JDBC");
        Connection c = DriverManager.getConnection("jdbc:sqlite:test.db");
        Statement stmt = c.createStatement();
        exeUpdateSafe(stmt, "drop table if exists sys_data");
        exeUpdateSafe(stmt, "CREATE TABLE sys_data (jsonfld json)");
        // serialize each map to JSON and store it in the json column
        for (Map<String, Object> object : li) {
            String jsonstr = new ObjectMapper().writeValueAsString(object);
            exeUpdateSafe(stmt, "insert into sys_data values('" + jsonstr + "');");
        }
        System.out.println(sqlQuery);
        QueryRunner run = new QueryRunner();
        List<Map<String, Object>> rows = run.query(c, sqlQuery, new MapListHandler());
        System.out.println(rows);
        // return just the jsonfld column values
        List<Object> result = Lists.newArrayList();
        for (Map<String, Object> row : rows) {
            result.add(row.get("jsonfld"));
        }
        return result;
    }

    private static void exeUpdateSafe(Statement stmt, String sql) throws SQLException {
        try {
            System.out.println(sql);
            System.out.println(stmt.executeUpdate(sql));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
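One caveat: json_extract depends on SQLite's JSON1 extension, so whether it works depends on how the bundled SQLite was compiled; recent versions of the Xerial sqlite-jdbc driver ship with JSON1 enabled, but older builds may not.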
According to the EJB 3.0 specification: "While an instance is in a transaction, the instance must not attempt to use the resource-manager specific transaction demarcation API (e.g. it must not invoke the commit or rollback method on the java.sql.Connection interface or on the javax.jms.Session interface)" (section 13.3.3 of the specification).
I tried one example where, in a bean-managed transaction, I included java.sql.Connection.commit(). I created a stateless bean in NetBeans as EE5 and deployed it on GlassFish 3.1, and the container did not complain. The bean method updates the database without any errors in the GlassFish log. Is this expected behavior?
Also, no such restriction on using java.sql.Connection.commit() is mentioned in the specification for beans with container-managed transactions.
Thanks
Branislav
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package ejb;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.annotation.Resource;
import javax.ejb.*;
import javax.sql.DataSource;
import javax.transaction.*;

/**
 *
 * @author bane
 */
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class MySession implements MySessionRemote {

    @Resource(name = "SAMPLE")
    private DataSource SAMPLE;

    @Resource
    UserTransaction utx;
    // the two fields above are the new code

    @Override
    public String getResult() {
        return "This is my Session Bean";
    }

    public void doSomething() {
        try {
            Connection conn = SAMPLE.getConnection();
            Statement stmt = conn.createStatement();
            String q = "select * from BOOK";
            String up = "update BOOK set PRICE = PRICE + 1";
            utx.begin();
            int num = stmt.executeUpdate(up);
            System.out.println("num: " + num);
            ResultSet rs = stmt.executeQuery(q);
            // is conn.commit() legal?
            conn.commit();
            String name = null;
            int price = 0;
            while (rs.next()) {
                name = rs.getString(2);
                price = rs.getInt(3);
                System.err.println(name + " , " + price);
            }
            utx.commit();
        } catch (SQLException ex) {
            Logger.getLogger(MySession.class.getName()).log(Level.SEVERE, null, ex);
        } catch (Exception ex) {
            Logger.getLogger(MySession.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    // Add business logic below. (Right-click in editor and choose
    // "Insert Code > Add Business Method")
}
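For comparison, a sketch of what a spec-compliant version of doSomething() could look like (my adaptation, not from the original question): all demarcation goes through the injected UserTransaction, and commit() is never called on the JDBC connection.

public void doSomethingCompliant() {
    try {
        utx.begin();
        try (Connection conn = SAMPLE.getConnection();
             Statement stmt = conn.createStatement()) {
            int num = stmt.executeUpdate("update BOOK set PRICE = PRICE + 1");
            System.out.println("num: " + num);
            try (ResultSet rs = stmt.executeQuery("select * from BOOK")) {
                while (rs.next()) {
                    System.err.println(rs.getString(2) + " , " + rs.getInt(3));
                }
            }
        }
        // the only commit: through the UserTransaction API
        utx.commit();
    } catch (Exception ex) {
        Logger.getLogger(MySession.class.getName()).log(Level.SEVERE, null, ex);
    }
}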