Package org.apache.flink.api.java.io.jdbc does not exist - maven

I want to use the JDBC connector in an Apache Flink application, but Maven cannot find the Flink JDBC package.
I added the following dependency to my pom.xml in the "build-jar" section:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-jdbc_2.11</artifactId>
<version>1.13.1</version>
</dependency>
The jar files were downloaded by Maven and are available in the local Maven repository.
My code looks like this:
// standard, not relevant flink imports
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
public class BatchLayerExec {
public static void main( final String[] args ) {
//Definition of Strings for the connection to the database
try {
ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
final TypeInformation<?>[] fieldTypes =
new TypeInformation<?>[] { ... };
final RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);
//Define Input Format Builder
JDBCInputFormat.JDBCInputFormatBuilder inputBuilder = JDBCInputFormat
.buildJDBCInputFormat()
.setDrivername(driverName)
.setDBUrl(dbURL + sourceDB)
.setQuery(selectQuery)
.setRowTypeInfo(rowTypeInfo)
.setUsername(dbUser)
.setPassword(dbPassword);
DataSet<Row> sourceTable = environment.createInput(inputBuilder.finish());
// Transformation
// ...
// Print for debugging
transformedTable.print();
// Output transformed data to output table
//Define Output Format Builder
JDBCOutputFormat.JDBCOutputFormatBuilder outputBuilder = JDBCOutputFormat
.buildJDBCOutputFormat()
.setDrivername(driverName)
.setDBUrl(dbURL + sourceDB)
.setQuery(insertQuery)
.setSqlTypes(new int[] { ... })
.setUsername(dbUser)
.setPassword(dbPassword);
//Define dataSink
transformedTable.output(outputBuilder.finish());
environment.execute();
} catch(final Exception e) {
System.out.println(e);
}
}
}
But during the build process with mvn clean package -Pbuild-jar, I get the error message:
package org.apache.flink.api.java.io.jdbc does not exist.
I removed some irrelevant definitions and steps from the code (see comments). Please comment if you need more information.

I found out that the package org.apache.flink.api.java.io.jdbc is deprecated.
Importing the package org.apache.flink.connector.jdbc works.
EDIT
Note that this requires changing the JDBCInputFormat and JDBCOutputFormat classes to JdbcInputFormat and JdbcOutputFormat.
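For anyone hitting the same error, here is a minimal sketch of the adjusted imports and the input-format builder (based on the class names above and the flink-connector-jdbc 1.13.x dependency from the question; the setter names are kept exactly as in the original code):
// Replacement imports for the removed org.apache.flink.api.java.io.jdbc package
import org.apache.flink.connector.jdbc.JdbcInputFormat;
import org.apache.flink.connector.jdbc.JdbcOutputFormat;

// The builder follows the same pattern as before, only with the new casing:
DataSet<Row> sourceTable = environment.createInput(
        JdbcInputFormat.buildJdbcInputFormat()
                .setDrivername(driverName)
                .setDBUrl(dbURL + sourceDB)
                .setQuery(selectQuery)
                .setRowTypeInfo(rowTypeInfo)
                .setUsername(dbUser)
                .setPassword(dbPassword)
                .finish());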

Related

Groovy #CompileStatic and #TypeChecked order, bug or misunderstanding

I started getting a strange failure when compiling a Gradle task class. This is the task I created:
package sample
import groovy.transform.CompileStatic
import groovy.transform.TypeChecked
import org.gradle.api.artifacts.Dependency
import org.gradle.api.provider.Property
import org.gradle.api.tasks.AbstractCopyTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.Internal
import org.gradle.api.tasks.bundling.Zip
import sample.internal.DataSourceXmlConfig
@TypeChecked
@CompileStatic
class DataSource extends Zip {
@Internal
final Property<File> warFile = project.objects.property(File.class)
DataSource() {
warFile.convention(project.provider {
def files = project.configurations.getByName('warApp').fileCollection { Dependency d ->
d.name == (archiveFileName.getOrElse("") - (~/\.[^.]+$/))
}
files.empty ? null : files.first()
})
}
/**
* This function is used to specify the location of data-sources.xml
* and injects it into the archive
* @param dsConf The configuration object used to specify the location of the
* file as well as any extra variables which should be injected into the file
*/
@Input
void dataSourceXml(@DelegatesTo(DataSourceXmlConfig) Closure dsConf) {
filesToUpdate {
DataSourceXmlConfig ds = new DataSourceXmlConfig()
dsConf.delegate = ds
dsConf.resolveStrategy = Closure.DELEGATE_FIRST
dsConf.call()
exclude('**/WEB-INF/classes/data-sources.xml')
from(ds.source) {
if (ds.expansions) {
expand(ds.expansions)
}
into('WEB-INF/classes/')
rename { 'data-sources.xml' }
}
}
}
private def filesToUpdate(@DelegatesTo(AbstractCopyTask) Closure action) {
action.delegate = this
action.resolveStrategy = Closure.DELEGATE_FIRST
if (warFile.isPresent()) {
from(project.zipTree(warFile)) {
action.call()
}
}
}
}
When groovy compiles this class, I get the following error:
Execution failed for task ':buildSrc:compileGroovy'.
BUG! exception in phase 'class generation' in source unit '/tmp/bus-server/buildSrc/src/main/groovy/sample/DataSource.groovy'
At line 28 column 28 On receiver: archiveFileName.getOrElse() with
message: minus and arguments: .[^.]+$ This method should not have
been called. Please try to create a simple example reproducing this
error and file a bug report at
https://issues.apache.org/jira/browse/GROOVY
Gradle version: 5.6
Groovy version: localGroovy() = 2.5.4
tl;dr, is this a bug or am I missing something about how these annotations work?
The first thing I tried was to remove either one of the @TypeChecked and @CompileStatic annotations to see if the error goes away.
This actually fixed the problem right away. Compiling the source with either annotation alone was successful, but it fails when both are present.
I read some questions and answers regarding the use of both annotations, but none of them seemed to suggest that one cannot use both at the same time.
Finally, I tried switching the order of the annotations to see if that helps and to my surprise, it worked! No compilation errors!
This works:
@CompileStatic
@TypeChecked
class DataSource extends Zip { ... }
At this point, I guess my question would be, is this a bug or is there something I am not understanding about the use of both of these annotations? I'm leaning more towards it being a bug just because of the fact that the order made the error message go away.

Error while adding nodes and properties using JCR APIs in Adobe Experience Manager 6.0

While trying to add a node and a property using the JCR APIs, I am getting the following error:
7520 [main] ERROR org.apache.jackrabbit.jcr2spi.hierarchy.ChildNodeEntriesImpl - ChildInfo iterator contains multiple entries with the same name|index or uniqueID -> ignore ChildNodeInfo.
I have added the following dependencies to pom.xml:
<dependency>
<groupId>org.apache.jackrabbit</groupId>
<artifactId>jackrabbit-jcr-commons</artifactId>
<version>2.12.1</version>
</dependency>
<dependency>
<groupId>org.apache.jackrabbit</groupId>
<artifactId>jackrabbit-jcr2dav</artifactId>
<version>2.0-beta6</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.5.8</version>
</dependency>
Java code:
package com.adobe.cq.impl;
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.commons.JcrUtils;
public class GetRepository {
public static void main(String[] args) {
try {
Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
Session session=repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
Node root=session.getRootNode();
Node adobe = root.addNode("adobe");
Node day = adobe.addNode("cq");
day.setProperty("message", "Adobe Experience Manager is part of the Adobe Digital Marketing Suite!");
// Retrieve content
Node node = root.getNode("adobe/cq");
System.out.println(node.getPath());
System.out.println(node.getProperty("message").getString());
// Save the session changes and log out
session.save();
session.logout();
}
catch(Exception e){
e.printStackTrace();
}
}}
Same-name siblings are not allowed in the repository. Going by your code, there is no check whether the node "adobe" already exists below the root node. Hence, if the node was already created and the above code executes a second time, you may face this issue.
Try checking for node availability as shown below.
Node adobe;
if (!root.hasNode("adobe")) {
adobe = root.addNode("adobe");
} else {
adobe = root.getNode("adobe");
}
if (!adobe.hasNode("cq")) {
Node day = adobe.addNode("cq");
}
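Alternatively, JcrUtils from jackrabbit-jcr-commons (already imported in your code) has a get-or-create helper; a small sketch:
// getOrAddNode returns the existing child node if present, otherwise creates it,
// so running the program repeatedly does not attempt to add a same-name sibling.
Node adobe = JcrUtils.getOrAddNode(root, "adobe");
Node day = JcrUtils.getOrAddNode(adobe, "cq");
day.setProperty("message", "Adobe Experience Manager is part of the Adobe Digital Marketing Suite!");
session.save();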

Run SQL script on JDBC connection, minimal approach

Long story short: I want to run a SQL script on an HSQLDB database.
I want to follow a minimalistic approach, which means:
Absolutely no manual parsing of SQL
No additional dependencies except for general utilities. I make the distinction here because, for example, I refuse to pull in iBatis or Hibernate, which are larger-scope frameworks, but I will accept an Apache Commons or Guava-type utils library.
The library MUST BE AVAILABLE ON MAVEN. No small-time pet-project stuff.
(EDIT 12/5/15) Must have the ability to execute a SQL file from the classpath.
To give you some context:
try {
connection = DriverManager.getConnection("jdbc:hsqldb:file:mydb", "sa", "");
// Run script here
} catch (SQLException e) {
throw new RuntimeException("Unable to load database", e);
}
A one-liner would be great. Something like:
FancyUtils.runScript(connection, new File("myFile.sql"));
I did find org.hsqldb.persist.ScriptRunner but it takes a Database object as an argument and I can't seem to figure out how to get an instance. Also, I don't like the description of "Restores the state of a Database", so does that mean my database will be cleared first? That's definitely not what I want.
I just tried using the SqlFile object in SqlTool and it worked for me. The Maven dependency I used was
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>sqltool</artifactId>
<version>2.4.1</version>
</dependency>
The SQL script file I wanted to execute was "C:/Users/Public/test/hsqldbCommands.sql":
INSERT INTO table1 (id, textcol) VALUES (2, 'stuff');
INSERT INTO table1 (id, textcol) VALUES (3, 'more stuff');
and my Java test code was
package hsqldbMaven;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.hsqldb.cmdline.SqlFile;
public class HsqldbMavenMain {
public static void main(String[] args) {
String connUrl = "jdbc:hsqldb:file:C:/Users/Public/test/hsqldb/personal";
String username = "SA";
String password = "";
try (Connection conn = DriverManager.getConnection(connUrl, username, password)) {
// clear out previous test data
try (Statement st = conn.createStatement()) {
st.executeUpdate("DELETE FROM table1 WHERE ID > 1");
}
System.out.println("Before:");
dumpTable(conn);
// execute the commands in the .sql file
SqlFile sf = new SqlFile(new File("C:/Users/Public/test/hsqldbCommands.sql"));
sf.setConnection(conn);
sf.execute();
System.out.println();
System.out.println("After:");
dumpTable(conn);
try (Statement st = conn.createStatement()) {
st.execute("SHUTDOWN");
}
} catch (Exception e) {
e.printStackTrace(System.err);
}
}
private static void dumpTable(Connection conn) throws SQLException {
try (
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("SELECT id, textcol FROM table1")) {
while (rs.next()) {
System.out.printf("%d - %s%n", rs.getInt("id"), rs.getString("textcol"));
}
}
}
}
producing
Before:
1 - Hello world!
After:
1 - Hello world!
2 - stuff
3 - more stuff
Edit: 2018-08-26
If you want to bundle your SQL script file into the project as a resource then see the example in the other answer.
Note also that this approach is not restricted to HSQLDB databases. It can be used for other databases as well (e.g., MySQL, SQL Server).
This uses the SqlTool library, but reads the script directly from the classpath by using the SqlFile class:
try(InputStream inputStream = getClass().getResourceAsStream("/script.sql")) {
SqlFile sqlFile = new SqlFile(new InputStreamReader(inputStream), "init", System.out, "UTF-8", false, new File("."));
sqlFile.setConnection(connection);
sqlFile.execute();
}
Even though the OP explicitly ruled out iBatis, I still want to recommend MyBatis, the iBatis fork by the original creators.
The core library (org.mybatis:mybatis) has no required dependencies (all of its dependencies are optional), and while it is larger than HSQLDB's SqlTool, at about 1.7 MB the binary is not horribly big for most uses, and it is continuously maintained (the last release, 3.5, was last month as of this writing).
You can initialize ScriptRunner with a JDBC Connection, then call runScript(new InputStreamReader(sqlInputStream, StandardCharsets.UTF_8)) to run whatever SQL script you can get an input stream of.
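A minimal sketch of that approach (assuming the org.mybatis:mybatis dependency is on the classpath and the script is bundled as /script.sql; the class name and resource path here are just placeholders):
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.ibatis.jdbc.ScriptRunner;

public class RunScriptExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection("jdbc:hsqldb:file:mydb", "sa", "");
             Reader reader = new InputStreamReader(
                     RunScriptExample.class.getResourceAsStream("/script.sql"),
                     StandardCharsets.UTF_8)) {
            ScriptRunner runner = new ScriptRunner(connection);
            runner.setStopOnError(true); // fail fast instead of logging and continuing
            runner.runScript(reader);    // executes the statements in script.sql
        }
    }
}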

How to update the <latest> tag in maven-metadata-local when installing an artifact?

In the project I am working on we have various multi-module projects being developed in parallel, some of which are dependent on others. Because of this we are using version ranges, e.g. [0.0.1,), for our internal dependencies during development so that we can always work against the latest snapshot versions. (I understand that this isn't considered best practice, but for now at least we are stuck with the current project structure.) We have build profiles set up so that when we perform a release all the version ranges get replaced with RELEASE to compile against the latest released version.
We have to use ranges as opposed to LATEST because when installing an artifact locally, the <latest> tag inside maven-metadata-local.xml is never updated, and so specifying LATEST will get the last version deployed to our Artifactory server. The problem with the ranges though is that the build process seems to have to download all the metadata files for all the versions of an artifact to be able to determine the latest version. As our project goes on we are accumulating more and more versions and artifacts so our builds are taking longer and longer. Specifying LATEST avoids this but means that changes from local artifact installs are generally not picked up.
Is there any way to get the <latest> tag in the maven-metadata-local.xml file to be updated when installing an artifact locally?
If you are working with SNAPSHOTs you don't need version ranges. Apart from that, never use version ranges (only in extremely rare situations): with version ranges your build is not reproducible, which in my opinion should be avoided under any circumstance.
But you can use things like this:
<version>[1.2.3,)</version>
As you already realized, that causes some problems, so I would suggest using the versions-maven-plugin instead to update the project's pom files accordingly:
mvn clean versions:use-latest-versions scm:checkin deploy -Dmessage="update versions" -DperformRelease=true
This can be handled by a CI solution like Jenkins. But I get the impression that you are doing some basic things wrong, in particular if you need to use version ranges.
I had the same problem, so I wrote a maven plugin to handle it for me. It's a pretty extreme workaround, but it does work.
The documentation for creating maven plugins is on The Apache Maven Project. You could just create a plugin project from the command line archetype and add this mojo to your project.
/**
* Inserts a "latest" block into the maven-metadata-local.xml in the user's local
* repository using the currently configured version number.
*
* @version Sep 23, 2013
*/
@Mojo( name = "latest-version", defaultPhase = LifecyclePhase.INSTALL )
public class InstallLatestVersionMojo extends AbstractMojo {
/**
* Location of the .m2 directory
*/
#Parameter( defaultValue = "/${user.home}/.m2/repository", property = "outputDir", required = true )
private File repositoryLocation;
#Parameter( defaultValue = "${project.groupId}", property = "groupId", required = true )
private String groupId;
#Parameter( defaultValue = "${project.artifactId}", property = "artifactId", required = true )
private String artifactId;
/**
* Version to use as the installed version
*/
#Parameter( defaultValue = "${project.version}", property = "version", required = true )
private String version;
public void execute() throws MojoExecutionException, MojoFailureException {
try {
// Fetch the xml file to edit from the user's repository for the project
File installDirectory = getInstallDirectory(repositoryLocation, groupId, artifactId);
File xmlFile = new File(installDirectory, "maven-metadata-local.xml");
Document xml = getXmlDoc(xmlFile);
if (xml != null) {
// Fetch the <latest> node
Node nodeLatest = getNode(xml, "/metadata/versioning/latest");
if (nodeLatest == null) {
// If <latest> does not yet exist, insert it into the <versioning> block before <versions>
nodeLatest = xml.createElement("latest");
Node versioningNode = getNode(xml, "/metadata/versioning");
if (versioningNode != null) {
versioningNode.insertBefore(nodeLatest, getNode(xml, "metadata/versioning/versions"));
}
}
// set the version on the <latest> node to the newly installed version
nodeLatest.setTextContent(version);
// save the xml
save(xmlFile, xml);
}
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private void save(File xmlFile, Document xml) throws TransformerFactoryConfigurationError, TransformerException {
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
Result output = new StreamResult(xmlFile);
Source input = new DOMSource(xml);
transformer.transform(input, output);
}
private Node getNode(Document source, String path) throws XPathExpressionException{
Node ret = null;
XPathExpression xPath = getPath(path);
NodeList nodes = (NodeList) xPath.evaluate(source, XPathConstants.NODESET);
if(nodes.getLength() > 0 ) {
ret = nodes.item(0);
}
return ret;
}
private XPathExpression getPath(String path) throws XPathExpressionException{
XPath xpath = XPathFactory.newInstance().newXPath();
return xpath.compile(path);
}
private File getInstallDirectory(File repositoryLocation, String groupId, String artifactId) {
String group = groupId.replace('.', '/');
return new File(repositoryLocation, group + "/" + artifactId);
}
private Document getXmlDoc(File xmlFile) throws ParserConfigurationException, SAXException, IOException {
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
return dBuilder.parse(xmlFile);
}
}
How about defining those internal dependencies as modules in one reactor pom? That way you'll compile against the compiled sources (in target/classes) instead of against a jar, and you'll always have the latest code.

How can I provide custom logic in a Maven archetype?

I'm interested in creating a Maven archetype, and I think I have most of the basics down. However, one thing I'm stuck on is that sometimes I want to use custom logic to fill in a template. For example, if somebody generates my archetype and specifies the artifactId as hello-world, I'd like to generate a class named HelloWorld that simply prints out "Hello World!" to the console. If another person generates it with artifactId = howdy-there, the generated class would be HowdyThere and it would print out "Howdy There!".
I know that under the covers, Maven's archetype mechanism leverages the Velocity Template Engine, so I read this article on creating custom directives. This seemed to be what I was looking for, so I created a class called HyphenatedToCamelCaseDirective that extends org.apache.velocity.runtime.directive.Directive. In that class, my getName() implementation returns "hyphenatedCamelCase". In my archetype-metadata.xml file, I have the following...
<requiredProperties>
<requiredProperty key="userdirective">
<defaultValue>com.jlarge.HyphenatedToCamelCaseDirective</defaultValue>
</requiredProperty>
</requiredProperties>
My template class looks like this...
package ${package};
public class #hyphenatedToCamelCase('$artifactId') {
// userdirective = $userdirective
public static void main(String[] args) {
System.out.println("#hyphenatedToCamelCase('$artifactId')"));
}
}
After I install my archetype and then do an archetype:generate by specifying artifactId = howdy-there and groupId = f1.f2, the resulting class looks like this...
package f1.f2;
public class #hyphenatedToCamelCase('howdy-there') {
// userdirective = com.jlarge.HyphenatedToCamelCaseDirective
public static void main(String[] args) {
System.out.println("#hyphenatedToCamelCase('howdy-there')"));
}
}
The result shows that even though userdirective is being set the way I expected it to, it's not evaluating the #hyphenatedToCamelCase directives like I was hoping. In the directive class, I have the render method logging a message to System.out, but that message doesn't show up in the console, which leads me to believe that the method never got executed during archetype:generate.
Am I missing something simple here, or is this approach just not the way to go?
The required-properties section of archetype-metadata.xml is used to pass additional properties to the Velocity context; it is not meant for passing Velocity engine configuration. So setting a property called userdirective will only make the variable $userdirective available, and will not add a custom directive to the Velocity engine.
If you look at the source code, the Velocity engine used by the maven-archetype plugin does not depend on any external property source for its configuration. The code that generates the project relies on an implementation of VelocityComponent autowired by the Plexus container.
This is the code where the Velocity engine is initialized:
public void initialize()
throws InitializationException
{
engine = new VelocityEngine();
// avoid "unable to find resource 'VM_global_library.vm' in any resource loader."
engine.setProperty( "velocimacro.library", "" );
engine.setProperty( RuntimeConstants.RUNTIME_LOG_LOGSYSTEM, this );
if ( properties != null )
{
for ( Enumeration e = properties.propertyNames(); e.hasMoreElements(); )
{
String key = e.nextElement().toString();
String value = properties.getProperty( key );
engine.setProperty( key, value );
getLogger().debug( "Setting property: " + key + " => '" + value + "'." );
}
}
try
{
engine.init();
}
catch ( Exception e )
{
throw new InitializationException( "Cannot start the velocity engine: ", e );
}
}
There is a hacky way of adding your custom directive. The properties you see above are read from the components.xml file in the plexus-velocity-1.1.8.jar. So open this file and add your configuration property
<component-set>
<components>
<component>
<role>org.codehaus.plexus.velocity.VelocityComponent</role>
<role-hint>default</role-hint>
<implementation>org.codehaus.plexus.velocity.DefaultVelocityComponent</implementation>
<configuration>
<properties>
<property>
<name>resource.loader</name>
<value>classpath,site</value>
</property>
...
<property>
<name>userdirective</name>
<value>com.jlarge.HyphenatedToCamelCaseDirective</value>
</property>
</properties>
</configuration>
</component>
</components>
</component-set>
Next, add your custom directive class file to this jar and run archetype:generate.
As you can see, this is very fragile, and you will need to figure out a way to distribute this hacked plexus-velocity jar. Depending on what you are planning to use this archetype for, it might be worth the effort.
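For reference, a directive along the lines described in the question might look roughly like this (a hypothetical sketch; the question does not show the actual HyphenatedToCamelCaseDirective implementation):
package com.jlarge;

import java.io.IOException;
import java.io.Writer;

import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.Node;

public class HyphenatedToCamelCaseDirective extends Directive {

    @Override
    public String getName() {
        return "hyphenatedToCamelCase";
    }

    @Override
    public int getType() {
        return LINE; // inline directive, no #end block
    }

    @Override
    public boolean render(InternalContextAdapter context, Writer writer, Node node)
            throws IOException, ResourceNotFoundException, ParseErrorException, MethodInvocationException {
        // Evaluate the single argument, e.g. 'howdy-there'
        String value = String.valueOf(node.jjtGetChild(0).value(context));
        StringBuilder sb = new StringBuilder();
        for (String part : value.split("-")) {
            if (!part.isEmpty()) {
                sb.append(Character.toUpperCase(part.charAt(0))).append(part.substring(1));
            }
        }
        writer.write(sb.toString()); // e.g. HowdyThere
        return true;
    }
}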
