Error while adding nodes and properties using JCR APIs in Adobe Experience Manager 6.0 - Maven

While trying to add a node and a property using the JCR APIs, I am getting the following error:
7520 [main] ERROR org.apache.jackrabbit.jcr2spi.hierarchy.ChildNodeEntriesImpl - ChildInfo iterator contains multiple entries with the same name|index or uniqueID -> ignore ChildNodeInfo.
I have added the following dependencies in pom.xml:
<dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-jcr-commons</artifactId>
    <version>2.12.1</version>
</dependency>
<dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-jcr2dav</artifactId>
    <version>2.0-beta6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.5.8</version>
</dependency>
Java code:
package com.adobe.cq.impl;

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.commons.JcrUtils;

public class GetRepository {

    public static void main(String[] args) {
        try {
            Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
            Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
            Node root = session.getRootNode();
            Node adobe = root.addNode("adobe");
            Node day = adobe.addNode("cq");
            day.setProperty("message", "Adobe Experience Manager is part of the Adobe Digital Marketing Suite!");

            // Retrieve content
            Node node = root.getNode("adobe/cq");
            System.out.println(node.getPath());
            System.out.println(node.getProperty("message").getString());

            // Save the session changes and log out
            session.save();
            session.logout();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Same-name siblings are not allowed in the repository. Going by your code, there is no check whether the node "adobe" is already present below the root node. Hence, if the node was already created and the above code executes a second time, you may face this issue.
Try checking for the node's existence as shown below.
Node adobe;
if (!root.hasNode("adobe")) {
    adobe = root.addNode("adobe");
} else {
    adobe = root.getNode("adobe");
}

Node day;
if (!adobe.hasNode("cq")) {
    day = adobe.addNode("cq");
} else {
    day = adobe.getNode("cq");
}
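Alternatively, since jackrabbit-jcr-commons is already on the classpath, JcrUtils offers a get-or-add helper that collapses the check into a single call. A minimal sketch of the same idea (not a drop-in replacement for the exact code above):
// JcrUtils.getOrAddNode returns the existing child node if present, otherwise creates it.
Node adobe = JcrUtils.getOrAddNode(root, "adobe");
Node day = JcrUtils.getOrAddNode(adobe, "cq");
day.setProperty("message", "Adobe Experience Manager is part of the Adobe Digital Marketing Suite!");
session.save();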

Related

Package org.apache.flink.api.java.io.jdbc does not exist

I want to use the JDBC connector in an Apache Flink application. But Maven doesn't find the Flink JDBC package.
I added the following dependency to my pom.xml in the "build-jar" section:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc_2.11</artifactId>
    <version>1.13.1</version>
</dependency>
The JAR files were downloaded by Maven and are available in the local Maven repository.
My code looks like this:
// standard, not relevant flink imports
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;

public class BatchLayerExec {
    public static void main(final String[] args) {
        // Definition of Strings for the connection to the database
        try {
            ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();

            final TypeInformation<?>[] fieldTypes = new TypeInformation<?>[] { ... };
            final RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);

            // Define input format builder
            JDBCInputFormat.JDBCInputFormatBuilder inputBuilder = JDBCInputFormat
                    .buildJDBCInputFormat()
                    .setDrivername(driverName)
                    .setDBUrl(dbURL + sourceDB)
                    .setQuery(selectQuery)
                    .setRowTypeInfo(rowTypeInfo)
                    .setUsername(dbUser)
                    .setPassword(dbPassword);

            DataSet<Row> sourceTable = environment.createInput(inputBuilder.finish());

            // Transformation
            // ...

            // Print for debugging
            transformedTable.print();

            // Output transformed data to output table
            // Define output format builder
            JDBCOutputFormat.JDBCOutputFormatBuilder outputBuilder = JDBCOutputFormat
                    .buildJDBCOutputFormat()
                    .setDrivername(driverName)
                    .setDBUrl(dbURL + sourceDB)
                    .setQuery(insertQuery)
                    .setSqlTypes(new int[] { ... })
                    .setUsername(dbUser)
                    .setPassword(dbPassword);

            // Define data sink
            transformedTable.output(outputBuilder.finish());

            environment.execute();
        } catch (final Exception e) {
            System.out.println(e);
        }
    }
}
But during the build process with mvn clean package -Pbuild-jar, I get the error message:
package org.apache.flink.api.java.io.jdbc does not exist.
I removed some irrelevant definitions and steps from the code (see comments). Please comment if you need more information.
I found out that the package org.apache.flink.api.java.io.jdbc is deprecated.
Importing the package org.apache.flink.connector.jdbc works.
EDIT
Note that this requires changing the JDBCInputFormat and JDBCOutputFormat classes to JdbcInputFormat and JdbcOutputFormat.
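For reference, here is a rough sketch of what the input-format part looks like after the change, assuming the builder keeps the same method names apart from the class and package renaming; driverName, dbURL, sourceDB, selectQuery, rowTypeInfo, dbUser and dbPassword are the variables from the question's code:
import org.apache.flink.connector.jdbc.JdbcInputFormat;

// Same fluent builder as before, only the package and class casing changed.
JdbcInputFormat inputFormat = JdbcInputFormat
        .buildJdbcInputFormat()
        .setDrivername(driverName)
        .setDBUrl(dbURL + sourceDB)
        .setQuery(selectQuery)
        .setRowTypeInfo(rowTypeInfo)
        .setUsername(dbUser)
        .setPassword(dbPassword)
        .finish();

DataSet<Row> sourceTable = environment.createInput(inputFormat);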

How does the POI Event API read data from Excel and why does it use less RAM?

I am currently writing my bachelor thesis and I am using the POI Event API from Apache. In short, my work is about a more efficient way to read data from Excel.
Developers keep asking me what exactly is meant by the Event API. Unfortunately, I can't find anything on the Apache page about the basic principle.
The following code shows how I use the POI Event API (it is from the Apache example for XSSF and SAX):
import java.io.InputStream;
import java.util.Iterator;

import org.apache.poi.ooxml.util.SAXHelper;
import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.apache.poi.xssf.model.SharedStringsTable;
import org.xml.sax.Attributes;
import org.xml.sax.ContentHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

import javax.xml.parsers.ParserConfigurationException;

public class ExampleEventUserModel {

    public void processOneSheet(String filename) throws Exception {
        OPCPackage pkg = OPCPackage.open(filename);
        XSSFReader r = new XSSFReader(pkg);
        SharedStringsTable sst = r.getSharedStringsTable();
        XMLReader parser = fetchSheetParser(sst);

        // To look up the Sheet Name / Sheet Order / rID,
        // you need to process the core Workbook stream.
        // Normally it's of the form rId# or rSheet#
        InputStream sheet2 = r.getSheet("rId2");
        InputSource sheetSource = new InputSource(sheet2);
        parser.parse(sheetSource);
        sheet2.close();
    }

    public void processAllSheets(String filename) throws Exception {
        OPCPackage pkg = OPCPackage.open(filename);
        XSSFReader r = new XSSFReader(pkg);
        SharedStringsTable sst = r.getSharedStringsTable();
        XMLReader parser = fetchSheetParser(sst);

        Iterator<InputStream> sheets = r.getSheetsData();
        while (sheets.hasNext()) {
            System.out.println("Processing new sheet:\n");
            InputStream sheet = sheets.next();
            InputSource sheetSource = new InputSource(sheet);
            parser.parse(sheetSource);
            sheet.close();
            System.out.println("");
        }
    }

    public XMLReader fetchSheetParser(SharedStringsTable sst) throws SAXException, ParserConfigurationException {
        XMLReader parser = SAXHelper.newXMLReader();
        ContentHandler handler = new SheetHandler(sst);
        parser.setContentHandler(handler);
        return parser;
    }

    /**
     * See org.xml.sax.helpers.DefaultHandler javadocs
     */
    private static class SheetHandler extends DefaultHandler {
        private SharedStringsTable sst;
        private String lastContents;
        private boolean nextIsString;

        private SheetHandler(SharedStringsTable sst) {
            this.sst = sst;
        }

        public void startElement(String uri, String localName, String name,
                                 Attributes attributes) throws SAXException {
            // c => cell
            if (name.equals("c")) {
                // Print the cell reference
                System.out.print(attributes.getValue("r") + " - ");
                // Figure out if the value is an index in the SST
                String cellType = attributes.getValue("t");
                if (cellType != null && cellType.equals("s")) {
                    nextIsString = true;
                } else {
                    nextIsString = false;
                }
            }
            // Clear contents cache
            lastContents = "";
        }

        public void endElement(String uri, String localName, String name)
                throws SAXException {
            // Process the last contents as required.
            // Do now, as characters() may be called more than once
            if (nextIsString) {
                int idx = Integer.parseInt(lastContents);
                lastContents = sst.getItemAt(idx).getString();
                nextIsString = false;
            }
            // v => contents of a cell
            // Output after we've seen the string contents
            if (name.equals("v")) {
                System.out.println(lastContents);
            }
        }

        public void characters(char[] ch, int start, int length) {
            lastContents += new String(ch, start, length);
        }
    }

    public static void main(String[] args) throws Exception {
        ExampleEventUserModel example = new ExampleEventUserModel();
        example.processOneSheet(args[0]);
        example.processAllSheets(args[0]);
    }
}
Can someone please explain to me how the Event API works? Is it the same as the event-based architecture or is it something else?
A *.xlsx file, which is Excel stored in Office Open XML and is what Apache POI handles as XSSF, is a ZIP archive containing the data in XML files within a directory structure. So we can unzip the *.xlsx file and get the data directly from those XML files.
There is /xl/sharedStrings.xml, which holds all the string cell values. There is /xl/workbook.xml, which describes the workbook structure. There are /xl/worksheets/sheet1.xml, /xl/worksheets/sheet2.xml, ..., which store the sheets' data. And there is /xl/styles.xml, which holds the style settings for all cells in the sheets.
By default, when creating an XSSFWorkbook, all those parts of the *.xlsx file become object representations in memory as XSSFWorkbook, XSSFSheet, XSSFRow, XSSFCell, ... and further objects of org.apache.poi.xssf.*.*.
To get an impression of how memory-consuming XSSFSheet, XSSFRow and XSSFCell are, a look into the sources is instructive. Each of those objects contains multiple Lists and Maps as internal members, and of course multiple methods too. Now imagine a sheet having hundreds of thousands of rows, each containing up to hundreds of cells. Each of those rows and cells will be represented by an XSSFRow or an XSSFCell in memory. This is not a criticism of Apache POI, because those objects are necessary if one actually needs to work with them. But if the need is really only getting the content out of the Excel sheet, then those objects are not all necessary. That's the reason for the XSSF and SAX (Event API) approach.
So if the need is only reading data from sheets, one can simply parse the XML of all the /xl/worksheets/sheet[n].xml files without creating memory-consuming objects for every sheet, row and cell.
Parsing XML in event-based mode means that the code runs top-down through the XML and has callback methods defined which get called when the parser detects the start of an element, the end of an element, or character content within an element. The appropriate callback methods then decide what to do on the start, the end, or the character content of an element. So reading the XML file only means running top-down through the file once, handling the events (start, end, character content of an element) and picking up all needed content along the way. Memory consumption is thereby reduced to storing the text data taken from the XML.
XSSF and SAX (Event API) uses the class SheetHandler, which extends DefaultHandler, for this.
But if we are already at the level where we get at the underlying XML data and process it ourselves, then we could go one step further. Plain Java can handle ZIP archives and parse XML, so we would not even need additional libraries at all. See "how read excel file having more than 100000 row in java?", where I have shown this. My code uses the package javax.xml.stream, which also provides an event-based XMLEventReader, but with linear code instead of callbacks. Maybe that code is simpler to understand because it is all in one place.
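To make the "plain Java, no extra libraries" idea concrete, here is a minimal sketch (not the code from the linked answer; the file name test.xlsx and the entry path xl/worksheets/sheet1.xml are assumptions) that streams a sheet's XML straight out of the ZIP archive with javax.xml.stream and prints each cell reference it encounters:
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

import javax.xml.namespace.QName;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.events.Attribute;
import javax.xml.stream.events.XMLEvent;

public class PlainJavaXlsxRead {
    public static void main(String[] args) throws Exception {
        try (ZipFile zip = new ZipFile("test.xlsx")) {
            ZipEntry sheetEntry = zip.getEntry("xl/worksheets/sheet1.xml");
            try (InputStream in = zip.getInputStream(sheetEntry)) {
                XMLEventReader reader = XMLInputFactory.newInstance().createXMLEventReader(in);
                // Linear event loop instead of SAX callbacks.
                while (reader.hasNext()) {
                    XMLEvent event = reader.nextEvent();
                    if (event.isStartElement()
                            && "c".equals(event.asStartElement().getName().getLocalPart())) {
                        // "c" elements are cells; attribute "r" is the cell reference (e.g. A1).
                        Attribute ref = event.asStartElement().getAttributeByName(new QName("r"));
                        if (ref != null) {
                            System.out.println("cell " + ref.getValue());
                        }
                    }
                }
            }
        }
    }
}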
For detecting whether a number format is a date format, and thus whether the formatted cell contains a date/time value, a single Apache POI class, org.apache.poi.ss.usermodel.DateUtil, is used. This is done to simplify the code. Of course, even this class we could have coded ourselves.
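For illustration, that check looks roughly like the sketch below; formatIndex and formatString are placeholder variables that would be read from the cell's style record.
// Hedged sketch: DateUtil decides whether a number format string represents a date/time format.
boolean isDateFormatted = org.apache.poi.ss.usermodel.DateUtil.isADateFormat(formatIndex, formatString);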

Apache VFS2 uriStyle - root absolute path ends with double slash

While working with an FTP server through the VFS2 library, I noticed that I had to enable VFS.setUriStyle(true) so the library would change the working directory to the parent directory of the target file I am operating on (cwd directoryName).
But if UriStyle is enabled, everything is resolved relative to the root, which would not be a problem if the root were not "//".
The class GenericFileName sets the absolutePath of the root to "/", which makes the method getPath() return "/" + getUriTrailer(), which in the case of the root always returns "//". Everything that is resolved relative to // gets two dots prepended to its path.
This means that if I execute the following code:
public class RemoteFileTest {

    public static void main(String[] args) {
        // Options for a RemoteFileObject connection
        VFS.setUriStyle(true);
        FileSystemOptions options = new FileSystemOptions();

        // We are doing an FTP connection, hence we use the FtpFileSystemConfigBuilder.
        // We want to work in passive mode.
        FtpFileSystemConfigBuilder.getInstance().setPassiveMode(options, true);
        FtpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(options, false);
        // DefaultFileSystemConfigBuilder.getInstance().setRootURI(options, "/newRoot/");
        // System.out.println(DefaultFileSystemConfigBuilder.getInstance().getRootURI(options));
        // ftp://localhost:21/

        StaticUserAuthenticator auth = new StaticUserAuthenticator("", "user", "pass");
        try {
            DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator(options, auth);
        } catch (FileSystemException e) {
            e.printStackTrace();
            return;
        }

        // A FileSystemManager creates an abstract FileObject linked to our desired remote file.
        // That link is just simulated and not yet real.
        FileSystemManager manager;
        try {
            manager = VFS.getManager();
        } catch (FileSystemException e) {
            e.printStackTrace();
            return;
        }

        try (FileObject remoteFile = manager.resolveFile("ftp://localhost:21/sub_folder/test.txt", options)) {
            System.out.println("Is Folder " + remoteFile.isFolder());
            System.out.println("Is File " + remoteFile.isFile());
        } catch (FileSystemException e) {
            e.printStackTrace();
            return;
        }
    }
}
I receive this interaction with the FTP server:
USER user
PASS ****
TYPE I
CWD //
SYST
PASV
LIST ..sub_folder/
PWD
CWD ..sub_folder/
I want the interaction to be just like this, but without the two dots in front of the directory.
Kind regards
Barry
Fixed it as described below:
Disabled uriStyle again.
Wrote my own VFS class which creates my custom-written manager.
That manager overrides the FtpFileProvider with my custom one, which simply sets the root to a custom-selected one, producing the desired behaviour.
import org.apache.commons.vfs2.FileName;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystem;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.impl.DefaultFileSystemConfigBuilder;
import org.apache.commons.vfs2.provider.ftp.FtpFileProvider;

public class AdvancedFtpFileProvider extends FtpFileProvider {

    public AdvancedFtpFileProvider() {
        super();
        // setFileNameParser(AdvancedFtpFileNameParser.getInstance());
    }

    @Override
    protected FileObject findFile(FileName name, FileSystemOptions fileSystemOptions) throws FileSystemException {
        // Check in the cache for the file system.
        // getContext().getFileSystemManager().resolveName(...) resolves the configured RootUri
        // relative to the selected root (name.getRoot()). This calls cwd to the selected root
        // and operates from there with relative URLs towards the new root!
        final FileName rootName = getContext().getFileSystemManager().resolveName(
                name.getRoot(), DefaultFileSystemConfigBuilder.getInstance().getRootURI(fileSystemOptions));
        final FileSystem fs = getFileSystem(rootName, fileSystemOptions);

        // Locate the file
        // return fs.resolveFile(name.getPath());
        return fs.resolveFile(name);
    }
}
Came across this question because I was having the same issue with the following:
ftp://user:pass@host//home/user/file.txt
becoming... (note the single slash after 'home')
ftp://user:pass@host/home/user/file.txt
I did this to solve the issue...
// Setup some options, add as many as you need
FileSystemOptions opts = new FileSystemOptions( );
// This line tells VFS to treat the URI as the absolute path and not relative
FtpsFileSystemConfigBuilder.getInstance( ).setUserDirIsRoot( opts, false );
// Retrieve the file from the remote FTP server
FileObject realFileObject = fileSystemManager.resolveFile( fileSystemUri, opts );
I hope this can help someone; if not, it can at least serve as a reference for the next time this stumps me.

I have created a single queue with daily rolling

I have created a single queue with daily rolling. On the next day, I can't read the latest appended message. I found that the tailer index doesn't move to the latest cycle automatically after reading all messages in the previous cycle. By the way, the Java process was shut down at night and restarted the next day.
I use Chronicle Queue V4.52.
Thanks.
This should work; we have tests which show messages being read from one cycle to the next.
Would you be able to include a test which reproduces this? There are quite a few unit tests you can use as examples.
This should now be fixed in the latest version:
<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle-bom</artifactId>
    <version>1.13.15</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
or if you prefer
<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle-queue</artifactId>
    <version>4.5.7</version>
</dependency>
Also see the test case net.openhft.chronicle.queue.impl.single.SingleChronicleQueueTest#testReadingWritingWhenCycleIsSkipped:
@Test
public void testReadingWritingWhenCycleIsSkipped() throws Exception {
    final Path dir = Files.createTempDirectory("demo");
    final RollCycles rollCycle = RollCycles.TEST_SECONDLY;

    // write first message
    try (ChronicleQueue queue = ChronicleQueueBuilder
            .single(dir.toString())
            .rollCycle(rollCycle).build()) {
        queue.acquireAppender().writeText("first message");
    }

    Thread.sleep(2100);

    // write second message
    try (ChronicleQueue queue = ChronicleQueueBuilder
            .single(dir.toString())
            .rollCycle(rollCycle).build()) {
        queue.acquireAppender().writeText("second message");
    }

    // read both messages
    try (ChronicleQueue queue = ChronicleQueueBuilder
            .single(dir.toString())
            .rollCycle(rollCycle).build()) {
        ExcerptTailer tailer = queue.createTailer();
        Assert.assertEquals("first message", tailer.readText());
        Assert.assertEquals("second message", tailer.readText());
    }
}

Run SQL script on JDBC connection, minimal approach

Long story short: I want to run a SQL script on an HSQLDB database.
I want to follow a minimalistic approach, which means:
Absolutely no manual parsing of SQL
No additional dependencies except for general utilities. I make the distinction here because, for example, I refuse to pull in iBatis or Hibernate, which are larger-scope frameworks, but I will accept an Apache Commons or Guava type utils library.
The library MUST BE AVAILABLE ON MAVEN. No small-time pet-project stuff.
(EDIT 12/5/15) Must have the ability to execute SQL file from classpath.
To give you some context:
try {
    connection = DriverManager.getConnection("jdbc:hsqldb:file:mydb", "sa", "");
    // Run script here
} catch (SQLException e) {
    throw new RuntimeException("Unable to load database", e);
}
A one-liner would be great. Something like:
FancyUtils.runScript(connection, new File("myFile.sql"));
I did find org.hsqldb.persist.ScriptRunner but it takes a Database object as an argument and I can't seem to figure out how to get an instance. Also, I don't like the description of "Restores the state of a Database", so does that mean my database will be cleared first? That's definitely not what I want.
I just tried using the SqlFile object in SqlTool and it worked for me. The Maven dependency I used was
<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>sqltool</artifactId>
    <version>2.4.1</version>
</dependency>
The SQL script file I wanted to execute was "C:/Users/Public/test/hsqldbCommands.sql":
INSERT INTO table1 (id, textcol) VALUES (2, 'stuff');
INSERT INTO table1 (id, textcol) VALUES (3, 'more stuff');
and my Java test code was
package hsqldbMaven;

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.hsqldb.cmdline.SqlFile;

public class HsqldbMavenMain {

    public static void main(String[] args) {
        String connUrl = "jdbc:hsqldb:file:C:/Users/Public/test/hsqldb/personal";
        String username = "SA";
        String password = "";
        try (Connection conn = DriverManager.getConnection(connUrl, username, password)) {
            // clear out previous test data
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("DELETE FROM table1 WHERE ID > 1");
            }
            System.out.println("Before:");
            dumpTable(conn);

            // execute the commands in the .sql file
            SqlFile sf = new SqlFile(new File("C:/Users/Public/test/hsqldbCommands.sql"));
            sf.setConnection(conn);
            sf.execute();

            System.out.println();
            System.out.println("After:");
            dumpTable(conn);

            try (Statement st = conn.createStatement()) {
                st.execute("SHUTDOWN");
            }
        } catch (Exception e) {
            e.printStackTrace(System.err);
        }
    }

    private static void dumpTable(Connection conn) throws SQLException {
        try (
                Statement st = conn.createStatement();
                ResultSet rs = st.executeQuery("SELECT id, textcol FROM table1")) {
            while (rs.next()) {
                System.out.printf("%d - %s%n", rs.getInt("id"), rs.getString("textcol"));
            }
        }
    }
}
producing
Before:
1 - Hello world!
After:
1 - Hello world!
2 - stuff
3 - more stuff
Edit: 2018-08-26
If you want to bundle your SQL script file into the project as a resource then see the example in the other answer.
Note also that this approach is not restricted to HSQLDB databases. It can be used for other databases as well (e.g., MySQL, SQL Server).
This uses the SqlTool library, but reads the script directly from the classpath by using the SqlFile class:
try (InputStream inputStream = getClass().getResourceAsStream("/script.sql")) {
    SqlFile sqlFile = new SqlFile(new InputStreamReader(inputStream), "init", System.out, "UTF-8", false, new File("."));
    sqlFile.setConnection(connection);
    sqlFile.execute();
}
Even though iBatis was mentioned by the OP as a non-requirement, I still want to recommend MyBatis - the iBatis fork by the original creators.
The core library (org.mybatis:mybatis) requires no dependencies (all of its dependencies are optional) and while larger than HSQLDB SqlTool, at 1.7MB binary it is not horribly big for most uses and is continuously maintained (the last release, 3.5, was last month as of this writing).
You can initialize a ScriptRunner with a JDBC Connection, then call runScript(new InputStreamReader(sqlInputStream, StandardCharsets.UTF_8)) to run whatever SQL script you can get an input stream of.
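A minimal sketch of that approach using MyBatis's org.apache.ibatis.jdbc.ScriptRunner; the connection URL and the classpath resource name /script.sql are illustrative placeholders:
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.ibatis.jdbc.ScriptRunner;

public class MyBatisScriptExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection("jdbc:hsqldb:file:mydb", "sa", "");
             Reader reader = new InputStreamReader(
                     MyBatisScriptExample.class.getResourceAsStream("/script.sql"),
                     StandardCharsets.UTF_8)) {
            ScriptRunner runner = new ScriptRunner(connection); // wraps the existing JDBC connection
            runner.runScript(reader);                           // parses and executes the statements
        }
    }
}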
