I have a small query regarding the Spring Boot logic for executing SQL files. I know that schema SQL files are executed automatically if they are kept inside the resources directory of the Spring Boot project.
However, I would like to execute them from a different directory. I have written the code below to try and achieve that, but it fails with the error message shown further down.
Code:
public void executeSqlScript() throws SQLException {
    Connection connection = dataSource.getConnection();
    connection.setAutoCommit(false);
    ScriptUtils.executeSqlScript(connection,
            new EncodedResource(new ClassPathResource("C:\\react-file-upload-master\\createConfigurationTableAndData.sql")));
    connection.commit();
}
Error Message : java.io.FileNotFoundException: class path resource [C:/react-file-upload-master/createConfigurationTableAndData.sql] cannot be opened because it does not exist.
It is not reading from my directory. I would be grateful if you could educate me on this. Thank you.
Try using FileSystemResource instead of ClassPathResource and give it the absolute path.
public void executeSqlScript() throws SQLException {
    Connection connection = dataSource.getConnection();
    connection.setAutoCommit(false);
    ScriptUtils.executeSqlScript(connection,
            new FileSystemResource("C:\\react-file-upload-master\\createConfigurationTableAndData.sql"));
    connection.commit();
}
ClassPathResource only resolves locations on the classpath, and your file, presumably, is not on the classpath.
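If you need to control the script encoding, as the original snippet did via EncodedResource, you can still wrap the FileSystemResource the same way. A minimal sketch, assuming the script is UTF-8 encoded:
// Sketch: same call, but reading from the file system with an explicit encoding (UTF-8 assumed here).
ScriptUtils.executeSqlScript(connection,
        new EncodedResource(
                new FileSystemResource("C:\\react-file-upload-master\\createConfigurationTableAndData.sql"),
                "UTF-8"));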
I am new to the Spring Integration framework. Currently I am working on a project which has a requirement to download files to a local directory.
My goal is to complete the tasks below:
1. Download the files to a local directory using Spring Integration.
2. Trigger a batch job, i.e. read the file and extract a specific column's information.
I am able to connect to the SFTP server, but I am having difficulty figuring out how to use the Spring Integration Java DSL to download the files and trigger a batch job.
Below is the code to connect to the SFTP session factory:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    if (sftpPrivateKey != null) {
        factory.setPrivateKey(sftpPrivateKey);
        factory.setPrivateKeyPassphrase(privateKeyPassPhrase);
    } else {
        factory.setPassword(sftpPassword);
    }
    factory.setAllowUnknownKeys(true);
    logger.info("Connecting to SFTP Server " + factory.getSession());
    return new CachingSessionFactory<ChannelSftp.LsEntry>(factory);
}
Below is the code to download the files from the remote server to a local directory:
@Bean
public IntegrationFlowBuilder integrationFlow() {
    return IntegrationFlows.from(Sftp.inboundAdapter(sftpSessionFactory()));
}
I am using the Spring Integration DSL, but I am not able to work out what to code here.
I have tried many possible ways to do this, but I cannot figure out how to proceed with this requirement.
Can anyone help me with how to approach this and, if possible, share some sample code for reference?
The Sftp.inboundAdapter() produces messages with a File as a payload. So, having that IntegrationFlows.from(Sftp.inboundAdapter(sftpSessionFactory())), you can treat the first task as done.
Your problem from here is that you don't build an IntegrationFlow, but rather return that IntegrationFlowBuilder and register it as a @Bean. That's why it doesn't work for you.
You need to continue the flow definition and call its get() at the end to return an IntegrationFlow instance, which is what has to be registered as a bean. If this code flow is a bit confusing, consider implementing an IntegrationFlowAdapter as a @Component.
To trigger a batch job, you should consider using a FileMessageToJobRequest in a .transform() EIP-method and then a JobLaunchingGateway in a .handle() EIP-method.
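Putting those pieces together, a minimal sketch of such a flow might look like the one below. The remote/local directories and poller interval are placeholders, and fileMessageToJobRequest() and jobLaunchingGateway() are assumed to be beans you define as shown in the Spring Batch Integration documentation:
@Bean
public IntegrationFlow sftpToBatchFlow() {
    return IntegrationFlows
            .from(Sftp.inboundAdapter(sftpSessionFactory())
                            .remoteDirectory("/remote/dir")               // assumed remote path
                            .localDirectory(new File("/tmp/sftp-local")), // assumed local path
                    e -> e.poller(Pollers.fixedDelay(5000)))
            .transform(fileMessageToJobRequest())   // File -> JobLaunchRequest
            .handle(jobLaunchingGateway())          // launches the batch job
            .log()                                  // log the resulting JobExecution
            .get();                                 // return the IntegrationFlow to register as a bean
}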
See more info in docs:
https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-inbound
https://docs.spring.io/spring-batch/docs/4.3.x/reference/html/spring-batch-integration.html#spring-batch-integration-configuration
BTW, the last one has a flow sample for exactly this use case.
I have created a microservice in Spring Boot. There is a folder under the resources folder, and a file under that folder, i.e.:
resources
  mycustomfolder
    myfile.txt
I am creating a bean whose field is populated from myfile:
@Value("${file-path}")
private String filePath;

@Bean
public MyBean byBean() throws IOException {
    // read the file path
    String path = ResourceUtils.getURL(filePath).getPath();
    // populate the bean from the file
    MyBean myBean = myservice.populatedMyBean(path);
    return myBean;
}
The filePath value is set in application.properties:
dataload-config-file=src/main/resources/mycustomfolder/myfile.txt
When I execute this Spring Boot app it works fine.
But when I create a jar of it and deploy it with Spring Cloud Data Flow, it gives me an error while creating myBean,
showing this exception cause:
Caused by: java.io.FileNotFoundException: /tmp/spring-cloud-dataflow-4865534318197521357/test-1506882530191/test.process/src/main/resources/mycustomfolder/myfile.txt (No such file or directory)
Why does this work fine normally, but throw an error with Spring Cloud Data Flow?
Spring Cloud Data Flow can only orchestrate Spring Cloud Stream (SCSt) or Spring Cloud Task (SCT) based microservice applications. It is unclear whether your Spring Boot application complies with the previously mentioned frameworks.
Please use the SCSt and SCT samples for reference. If your application does comply with the SCSt/SCT programming model, it would be better to share the source code for review.
I am working on a JDBC connection and I am using Eclipse. I have placed the connector driver, the mysql-connector-java-5.1.6.jar file, in the WebContent/WEB-INF/lib folder. After that I wrote this code to simply create and test the connection between the application and the driver:
import java.lang.ClassNotFoundException;

public class implementation {
    public static void main(String[] arg) {
        try {
            System.out.println("connecting to driver...");
            Class.forName("com.mysql.jdbc.driver");
            System.out.println("Connection Successful");
        } catch (ClassNotFoundException error) {
            System.out.println("Error:" + error.getMessage());
        }
    }
}
When I run this program, I get this error:
connecting to driver...
Error:com.mysql.jdbc.driver
Can you please help me solve this issue? Thank you for giving me your time.
You are getting ClassNotFoundException because the correct driver class name is com.mysql.jdbc.Driver and not com.mysql.jdbc.driver.
The 'D' of Driver is capitalized (standard camel case notation).
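For reference, the corrected call is:
Class.forName("com.mysql.jdbc.Driver");  // capital 'D' in Driver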
Hope this helps.
Add that jar file to the build path of the project:
Right-click on the project --> Build Path --> Configure Build Path --> Add External JARs.
This is needed because you are not running it as a web application, so WebContent/WEB-INF/lib is not on the classpath of your main() run.
Class.forName("com.mysql.jdbc.driver");
By typing driver name manually like above, we are getting ClassNotFoundException because of small spell mistakes
That's why always better to use when the fully qualified class name is the input for method
for example,
Class.forName(Driver.class.getName().toString());
Before this we need to set the mysql-version.jar file into the buid path
I'm new to JDBC. I've installed GlassFish 3.1.1 on CentOS 6.2 and need to use it with an application that connects to an Oracle 11g database on another server. I've read through the documentation for GlassFish and think I understand how to create a JDBC connection pool as well as a JDBC resource. My question is: how do I use this information when coding the Java middle tier to connect to the database?
Currently (with just the GlassFish install and no JDBC configuration), I am relying on the CentOS environment variables for Java (such as CLASSPATH) to allow the web application to use the JDBC drivers. However, I'm getting the following error:
java.lang.NoClassDefFoundError: oracle/jdbc/pool/OracleDataSource
Hence my attempt to create a JDBC connection pool and resource in GlassFish (so the app can use the JDBC driver). My Java file starts out:
import java.sql.*;
import oracle.jdbc.*;
import oracle.jdbc.pool.OracleDataSource;

class JDBCexample {
    public static void main(String args[]) throws SQLException {
        Connection conn;
        Statement stmt;
        ResultSet rset;
        String query;
        String sqlString;
        String person_firstName;
        String person_lastName;
        String person_email;
        int person_salary;

        // connect to database
        OracleDataSource ds = new OracleDataSource();
        ds.setURL("jdbc:oracle:thin:myID/myPWD@192.168.0.1:1521:mySID");
        conn = ds.getConnection();

        // read something in database
        stmt = conn.createStatement();
        query = "SELECT first_name, last_name, email, salary FROM HR.Employees where rownum < 6";
        rset = stmt.executeQuery(query);
        while (rset.next()) {
            person_firstName = rset.getString("first_name");
            person_lastName = rset.getString("last_name");
            person_email = rset.getString("email");
            person_salary = rset.getInt("salary");
            System.out.format(person_firstName + " " + person_lastName + " " + person_email + " %d%n", person_salary);
        }
and so on...
QUESTION: How would I change the above code after I create a JDBC Connection Pool (named: myPool) and a JDBC Resource (named: myDBPool)? If it matters, I'm using Oracle 11.2, CentOS 6.2, GlassFish 3.1.1 with mod_jk and fronted by Apache 2.2 webserver, JDK 1.6. I don't have any clustering or load-balancing.
UPDATE 1: I thought this link was a good reference (see the section titled "Creating a Data Source Instance, Registering with JNDI, and Connecting"). But when I modify the above Java file as follows (just preparing the Java file; I haven't touched GlassFish yet),
// Add these:
import javax.naming.Context;
import javax.naming.InitialContext;

// Change from this:
// connect to database
OracleDataSource ds = new OracleDataSource();
ds.setURL("jdbc:oracle:thin:myID/myPWD@192.168.0.1:1521:mySID");
conn = ds.getConnection();

// To this:
// connect to database
Context ctext = new InitialContext();
OracleDataSource ds = (OracleDataSource) ctext.lookup("jdbc/myDBPool");
conn = ds.getConnection();
I get the errors:
JitterClass.java:67: unreported exception javax.naming.NamingException; must be caught or declared to be thrown
Context ctext = new InitialContext();
^
JitterClass.java:68: unreported exception javax.naming.NamingException; must be caught or declared to be thrown
OracleDataSource ds = (OracleDataSource)ctext.lookup("jdbc/myDBPool");
^
UPDATE 2: I cleared those compile errors using cyril's comments below (to throw all exceptions). Then I created the JDBC connection pool and JDBC resource, and the ping was successful. So then I ran the application from the client and observed the following error:
java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.DataSource40 cannot be cast to oracle.jdbc.pool.OracleDataSource
At this point, if I add an import for javax.sql.DataSource to the program and change this line:
OracleDataSource ds = (OracleDataSource)ctext.lookup("jdbc/myDBPool");
to become this line:
DataSource ds = (DataSource)ctext.lookup("jdbc/myDBPool");
it compiles without errors. But now I'm confused... aren't we supposed to be using OracleDataSource here? Or does GlassFish somehow implement OracleDataSource, since I do see a setting for this connection pool with Datasource Classname set to oracle.jdbc.pool.OracleDataSource? Hoping someone can explain this.
Do pings on your connection pool work?
If not, check your pool configuration w/ http://docs.oracle.com/cd/E18930_01/html/821-2416/beamw.html#beanh
Once pings work and the JDBC Resource is configured, you should be able to access it in your app code through JNDI:
InitialContext context = new InitialContext();
DataSource ds = (DataSource) context.lookup("jdbc/myDBPool"); // or whatever name you used when creating the resource
conn = ds.getConnection();
Hope it helps,
RESPONSE TO UPDATE 1:
That's just the compiler telling you to formally catch or declare a checked exception which may be thrown by JNDI.
For testing purposes, the easiest way out of this (and future errors like this) is to just widen your method signature to throw all exceptions, i.e.:
public static void main(String args[]) throws /*SQL*/Exception {
RESPONSE TO UPDATE 2:
There's no reason to cast JDBC interfaces down to Oracle implementations unless you need to access a custom feature not specified in the JDBC spec. The purpose of a DataSource is to be a factory for Connections, whose API is defined in the JDBC interfaces, so that should be all you need. When you define a connection pool and resource in GlassFish, the app server adds value by wrapping the JDBC driver classes and proxying them seamlessly for you, as long as you stick to import java.sql.*. No need for oracle imports :) The main advantage is that if you ever decide to switch to MySQL or some other data store later on, your code is portable and doesn't need any change.
To add to cyril's good answer:
Instead of the JNDI lookup, you can also use Resource Injection to set up your DataSource:
@Resource(name = "jdbc/Your_DB_Res")
private DataSource ds;
On startup, the application server will then inject the JDBC resource. This section of the Java EE Tutorial has more on that matter.
By using resource injection, you can reduce the amount of boilerplate code. This article introduces the concepts.
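For illustration, a minimal usage sketch of the injected DataSource, assuming the jdbc/myDBPool resource name and the HR.Employees table from the question (JDK 1.6-style JDBC, no Oracle-specific imports):
@Resource(name = "jdbc/myDBPool")
private DataSource ds;

public void printEmployees() throws SQLException {
    Connection conn = ds.getConnection();
    try {
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery(
                "SELECT first_name, last_name FROM HR.Employees WHERE rownum < 6");
        while (rset.next()) {
            System.out.println(rset.getString("first_name") + " " + rset.getString("last_name"));
        }
    } finally {
        conn.close();  // always return the connection to the pool
    }
}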
Besides adding the driver to your classpath, you should try adding the appserv-rt.jar file to your project's build path (the jar is located in GlassFish's lib directory). If you don't want to include all the other jars, you should first create a library containing the appserv-rt jar and then add it to your project's build path.
Using LINQ to SQL and Server Explorer, I mapped to a login-validation stored proc. So I wrote the following code:
ClientReportingDataContext db = new ClientReportingDataContext();
var data = db.ADMIN_LoginValidation(login, password);
It throws an exception on the following line:
public ClientReportingDataContext() :
base(global::System.Configuration.ConfigurationManager.ConnectionStrings["FeedsConnectionString"].ConnectionString, mappingSource)
Exception thrown:
Object reference not set to an instance of an object.
I'm calling this function from a unit test class. I can see FeedsConnectionString in web.config.
I put the web.config in the unit tests folder, and also under debug and debug/bin. Not sure what I'm missing.
Thanks in advance for any advice.
For a unit test,
ConnectionStrings["FeedsConnectionString"].ConnectionString
won't be reading from your web.config file; it will be reading from the application configuration file for the test runner. Therefore, unless you've put FeedsConnectionString in the application configuration file for your test runner,
ConnectionStrings["FeedsConnectionString"]
is null and so
ConnectionStrings["FeedsConnectionString"].ConnectionString
is going to throw a NullReferenceException.
This is why testing and application configuration files don't get along well.
You should consider the following:
public ClientReportingDataContext(string connectionString) :
base(connectionString, mappingSource)
Then inject your connection string in your test.