I have a Java 7 Path. I'd like to compute the MD5 of the content of the file represented by that path.
I usually use Guava's hashing mechanism (ByteSource#hash(HashFunction)).
How do I go from a Java 7 Path to a Guava ByteSource so I can compute its MD5? Do I have to go through an intermediary java.io.File?
Yes, I know ByteSource and Path serve the same purpose. But some parts of my application use ByteSource and others use Path.
P.S. I know I could use java.security.DigestInputStream. This question is really an example of how to integrate Guava's ByteSource with Java 7's Path.
You can easily write a ByteSource for a Path yourself. Minimal example:
import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;

public class PathByteSource extends ByteSource {
    private final Path path;

    public PathByteSource(Path path) {
        this.path = path;
    }

    @Override
    public InputStream openStream() throws IOException {
        return java.nio.file.Files.newInputStream(path);
    }
}
It may be prudent to override other methods like size() and read() for more efficiency.
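Usage would then look something like this (a minimal sketch; Hashing.md5() is Guava's MD5 hash function and PathByteSource is the class above):
import com.google.common.hash.HashCode;
import com.google.common.hash.Hashing;
import java.io.IOException;
import java.nio.file.Path;

public class PathMd5 {
    // Computes the MD5 of the file behind the given Path via the custom ByteSource.
    static HashCode md5Of(Path path) throws IOException {
        return new PathByteSource(path).hash(Hashing.md5());
    }
}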
I guess you really have to go through File. Guava works on Java 6 (and is even backported to 5), so it can't refer to classes introduced in Java 7. Are there any problems with using path.toFile()?
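For example, a sketch of that bridge (note that path.toFile() only works for paths on the default filesystem; Files here is Guava's com.google.common.io.Files):
import com.google.common.hash.HashCode;
import com.google.common.hash.Hashing;
import com.google.common.io.Files;
import java.io.IOException;
import java.nio.file.Path;

public class FileMd5 {
    // Bridges a java.nio.file.Path to a Guava ByteSource through java.io.File.
    static HashCode md5Of(Path path) throws IOException {
        return Files.asByteSource(path.toFile()).hash(Hashing.md5());
    }
}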
I'm using OptaPlanner 8.3.0.Final with optaplanner-spring-boot-starter and everything works as expected, except that I can't figure out how to implement a ProblemFactChange.
This question: How is the scoreDirector accessed when using the autowired SolverManager with Optaplanner? mentions autowiring SolverFactory and then using SolverFactory.getScoreDirectorFactory(). But I can't see how to use that to access the solver being used by the wired SolverManager, which I believe is all I need to call addProblemFactChange(), which should then change the problem fact when the solver can do so.
There is an API gap: SolverManager lacks an addProblemFactChange() method.
Vote for it.
Workaround
Without the high-level SolverManager API, the workaround is to use the low-level Solver API instead:
@Autowired
SolverFactory<MySolution> solverFactory;

private Solver<MySolution> solver;

public void runSolver() { // don't call this directly in an HTTP servlet/REST thread
    solver = solverFactory.buildSolver();
    solver.solve(myProblem); // hogs the current thread
}

public void doChange() {
    solver.addProblemFactChange( ... /* do change */ );
}
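For illustration, the change passed to addProblemFactChange() could be sketched roughly as below; MySolution.getFactList() and newFact are hypothetical names, not from the question:
// Sketch only: add a new problem fact through the score director so the
// score stays consistent. getFactList() and newFact are hypothetical.
solver.addProblemFactChange(scoreDirector -> {
    MySolution workingSolution = scoreDirector.getWorkingSolution();
    scoreDirector.beforeProblemFactAdded(newFact);
    workingSolution.getFactList().add(newFact);
    scoreDirector.afterProblemFactAdded(newFact);
    scoreDirector.triggerVariableListeners();
});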
I am writing a Servlet Filter and would like to use one of my Liferay components using @Reference:
package my.filter;
import javax.servlet.Filter;
import my.Compo;
import org.osgi.service.component.annotations.Reference;
public class MyFilter implements Filter {

    @Override
    public void doFilter(...) {
        compo.doTheThing();
    }

    @Reference(unbind = "-")
    protected my.Compo compo;
}
I get this Java compilation error:
annotation type not applicable to this kind of declaration
What am I doing wrong?
Is it maybe impossible to achieve this?
As Miroslav pointed out, @Reference can only be used in an OSGi component, and a servlet filter is not one.
The solution in Liferay 7 is to develop a filter component.
The procedure to do so is explained at http://www.javasavvy.com/liferay-dxp-filter-tutorial/
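A minimal sketch of such a filter component, loosely following that tutorial (the class name, property values, and the injected Compo usage are illustrative assumptions, not taken from the question):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import my.Compo;

// Registered as an OSGi component, so @Reference injection works here.
@Component(
    immediate = true,
    property = {
        "servlet-context-name=",
        "servlet-filter-name=My Filter",
        "url-pattern=/*"
    },
    service = Filter.class
)
public class MyFilterComponent implements Filter {

    @Reference(unbind = "-")
    private Compo compo;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        compo.doTheThing();
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }
}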
You can make a simple filter like: https://www.e-systems.tech/blog/-/blogs/filters-in-liferay-7 and http://www.javasavvy.com/liferay-dxp-filter-tutorial/
But you can also use regular filters, as long as you configure your Liferay web app for that. There are two consequences if you use regular filters, though: you will be outside the OSGi application, and you will have to keep track of this whenever you update your bundle. That is why you should not go with the regular implementation. (Just complementing the OP's answer with the underlying reason to avoid that route.)
I was just starting a new coding project. I may be ahead of myself, but I've gotten kind of stuck. I wanted to implement an Abstract Factory for the GUI, similar to the example on Wikipedia. However, various systems have their own parameters for creating windows. At present I have come up with the following solutions to my dilemma:
1. Create a type which varies based on compiler directives
2. Don't use compiler directives and just put everything in a type that contains every possible data member
3. Create a polymorphic hierarchy and use dynamic casting inside each window function
4. Use some sort of intermediate singleton that holds the information. This seems especially unhelpful and would likely also involve casting.
5. Use a different pattern, such as Builder, instead.
My objective is to create high level interfaces that are uniform, so that creating a window, etc. is the same for all platforms.
I hesitate to do #5 simply because it seems like this would be a common enough problem that there should already be a solution. This is just a toy, so it's more about learning than building a practical application. I know I could use existing code bases, but that wouldn't achieve my real objective.
Thanks in advance.
I think it depends on the situation, but how about using an abstract factory with a builder (inside the factory) and a decorator with some default values for GUI components, where the decorator has the same interface for similar components from different GUI libraries and extends the class from the GUI library?
After reading more I've realized I can use Dependency Injection to create the concrete factory first. Since the entry point knows what kind of factory it's using, that factory can be passed to the client. I can't believe I didn't see it before, but I don't think Dependency Injection "clicked" until now.
I would put the system-specific parameters in the constructor for each abstract factory.
public interface WindowFactory {
    Window build();
}

public class WindowsWindowFactory implements WindowFactory {
    private final String param1, param2, param3;

    public WindowsWindowFactory(String param1, String param2, String param3) {
        this.param1 = param1; this.param2 = param2; this.param3 = param3;
    }

    public Window build() { /* use the Windows-specific params */ return null; }
}

public class LinuxWindowFactory implements WindowFactory {
    private final String param1, param2;

    public LinuxWindowFactory(String param1, String param2) {
        this.param1 = param1; this.param2 = param2;
    }

    public Window build() { /* use the Linux-specific params */ return null; }
}
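A hypothetical client then only depends on the WindowFactory interface; the composition root (as the other answer notes, via dependency injection) decides which concrete factory to hand it. The parameter values below are placeholders:
// Hypothetical client: only the code that constructs Application knows
// which platform-specific factory (and which parameters) is being used.
public class Application {
    private final WindowFactory windowFactory;

    public Application(WindowFactory windowFactory) {
        this.windowFactory = windowFactory;
    }

    public void start() {
        Window window = windowFactory.build();
        // ... use the window through its platform-neutral interface
    }

    public static void main(String[] args) {
        new Application(new LinuxWindowFactory("display", "theme")).start();
    }
}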
I am getting a StackOverflowError while accessing a Hadoop file using Java code.
import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;
public class URLCat
{
    static
    {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception
    {
        InputStream in = null;
        try
        {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
        finally
        {
            IOUtils.closeStream(in);
        }
    }
}
I used Eclipse to debug this code, and I found that the line
in = new URL(args[0]).openStream();
is producing the error.
I am running this code by passing a Hadoop file path, i.e.
hdfs://localhost/user/jay/abc.txt
Exception (pulled from comments):
Exception in thread "main" java.lang.StackOverflowError
at java.nio.Buffer.<init>(Buffer.java:174)
at java.nio.ByteBuffer.<init>(ByteBuffer.java:259)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:52)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:237)
at java.lang.StringCoding.encode(StringCoding.java:272)
at java.lang.String.getBytes(String.java:946)
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
.. stack trace truncated ..
1) This is because of a bug in the FsUrlStreamHandlerFactory class provided by Hadoop. Note that the bug is fixed in the latest jar which contains this class.
2) This class is located in hadoop-common-2.0.0-cdh4.2.1.jar. To understand the problem completely, we have to understand how the java.net.URL class works.
How a URL object works
When we create a new URL using any of its constructors without passing a URLStreamHandler (either by passing null for its value or by calling a constructor which does not take a URLStreamHandler as a parameter), it internally calls a method called getURLStreamHandler(). This method returns the URLStreamHandler object and sets a member variable in the URL class.
This object knows how to construct a connection for a particular scheme like "http", "file", and so on. The URLStreamHandler is constructed by a factory called URLStreamHandlerFactory.
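To illustrate the two paths described above, here is a standalone demo (not Hadoop code; the demo:// scheme and the anonymous handler are made up for the example):
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class UrlHandlerDemo {
    public static void main(String[] args) throws Exception {
        // 1) No handler supplied: URL consults the JVM-wide URLStreamHandlerFactory
        //    (if one was installed) or falls back to the built-in handlers.
        URL viaFactory = new URL("http://example.com/");

        // 2) Handler supplied explicitly: the factory is never consulted.
        //    This mirrors what the class loader does for jar: URLs (see point 4 below).
        URLStreamHandler explicitHandler = new URLStreamHandler() {
            @Override
            protected URLConnection openConnection(URL u) throws IOException {
                throw new IOException("demo handler, not meant to open connections");
            }
        };
        URL viaExplicitHandler = new URL(null, "demo://whatever", explicitHandler);

        System.out.println(viaFactory + " and " + viaExplicitHandler);
    }
}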
3) In the problem example given above, the URLStreamHandlerFactory was set to FsUrlStreamHandlerFactory by calling the following static method:
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
So when we create a new URL, this FsUrlStreamHandlerFactory is used to create the URLStreamHandler object for the new URL by calling its createURLStreamHandler(protocol) method.
This method in turn calls a method called loadFileSystems() of the FileSystem class. The loadFileSystems() method invokes ServiceLoader.load(FileSystem.class), so it tries to read the binary names of the FileSystem implementation classes by searching the META-INF/services/org.apache.hadoop.fs.FileSystem files of all jar files on the classpath and reading their entries.
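For reference, the ServiceLoader mechanism described above boils down to something like this (an illustrative sketch, not Hadoop's actual loadFileSystems() code):
import java.util.ServiceLoader;
import org.apache.hadoop.fs.FileSystem;

public class ListFileSystems {
    public static void main(String[] args) {
        // Scans every META-INF/services/org.apache.hadoop.fs.FileSystem file
        // on the classpath and instantiates the listed implementations.
        ServiceLoader<FileSystem> loader = ServiceLoader.load(FileSystem.class);
        for (FileSystem fs : loader) {
            System.out.println(fs.getClass().getName());
        }
    }
}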
4) Remember that each jar is handled as a URL object, meaning that for each jar a URL object is created by the ClassLoader internally. The class loader supplies the URLStreamHandler object when constructing the URLs for these jars, so these URLs are not affected by the FsUrlStreamHandlerFactory we set, because each URL already has its URLStreamHandler. Since we are dealing with jar files, the class loader sets the URLStreamHandler to one of type sun.net.www.protocol.jar.Handler.
5) Now, in order to read the entries inside the jar files for the FileSystem implementation classes, sun.net.www.protocol.jar.Handler needs to construct a URL object for each entry by calling the URL constructor without a URLStreamHandler object. Since we already defined the URLStreamHandlerFactory as FsUrlStreamHandlerFactory, this calls the createURLStreamHandler(protocol) method again, which recurses indefinitely and leads to the StackOverflowError.
This bug is tracked by the Hadoop committers as HADOOP-9041: https://issues.apache.org/jira/browse/HADOOP-9041.
I know this is somewhat complicated.
So, in short, the solutions to this problem are given below.
1) Use the latest jar hadoop-common-2.0.0-cdh4.2.1.jar which has the fix for this bug
or
2) Put the following statement in the static block before setting the URLStreamHandlerFactory.
static {
    try {
        FileSystem.getFileSystemClass("file", new Configuration());
    } catch (IOException e) {
        throw new ExceptionInInitializerError(e);
    }
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
}
Note that the first statement inside the static block doesn't depend on FsUrlStreamHandlerFactory anymore and uses the default handler for file:// to read the file entries in the META-INF/services/org.apache.hadoop.fs.FileSystem files.
I have a workaround.
It would be great if someone more familiar with the current state of the Hadoop world (Jan 2014) would enlighten us and/or explain the behavior.
I encountered the same StackOverflowError when trying to run URLCat from Hadoop: The Definitive Guide, Third Edition, by Tom White.
I have the problem with Cloudera QuickStart 4.4.0 and 4.3.0,
using both jdk1.6.0_32 and jdk1.6.0_45.
The problem occurs during initialization/class loading of org.apache.hadoop.fs.FileSystem underneath java.net.URL.
There is some kind of recursive exception handling that is kicking in.
I did the best I could to trace it down.
The path leads to java.util.ServiceLoader which then invokes sun.misc.CompoundEnumeration.nextElement()
Unfortunately, the source for sun.misc.CompoundEnumeration is not included in the JDK src.zip ... perhaps an oversight, because it is in the sun.misc package.
In an attempt to trigger the error through another execution path I came up with a workaround ...
You can avoid the conditions that lead to StackOverflowError by invoking org.apache.hadoop.fs.FileSystem.getFileSystemClass(String, Configuration) prior to registering the StreamHandlerFactory.
This can be done by modifying the static initialization block (see original listing above):
static {
    Configuration conf = new Configuration();
    try {
        FileSystem.getFileSystemClass("file", conf);
    } catch (Exception e) {
        throw new RuntimeException(e.getMessage());
    }
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
}
This can also be accomplished by moving the contents of this static block to your main().
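For example, applied to the URLCat listing from the question (with additional imports for org.apache.hadoop.fs.FileSystem and org.apache.hadoop.conf.Configuration), the main() variant might look like this; it is the same workaround, just relocated:
public static void main(String[] args) throws Exception {
    // Prime the FileSystem service loading before the factory is installed.
    FileSystem.getFileSystemClass("file", new Configuration());
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());

    InputStream in = null;
    try {
        in = new URL(args[0]).openStream();
        IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
        IOUtils.closeStream(in);
    }
}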
I found another reference to this error from Aug 2011 on Stack Overflow involving FsUrlStreamHandlerFactory.
I am quite puzzled that more Hadoop newbies have not stumbled onto this problem ... buy the Hadoop book ... download Cloudera QuickStart ... try a very simple example ... FAIL!?
Any insight from more experienced folks would be appreciated.
I am attempting to use DataNucleus with the datanucleus-spatial plugin. I am using annotations for my mappings, and I am attempting this with both PostGIS and Oracle Spatial. I am going back to the tutorials from DataNucleus. What I'm experiencing doesn't make any sense. My development environment is NetBeans 7.x (I've attempted 7.0, 7.2, and 7.3) with Maven 2.2.1. Using the Position class in DataNucleus's tutorial found at http://www.datanucleus.org/products/datanucleus/jdo/guides/spatial_tutorial.html, I find that if I do not include the datanucleus-spatial plugin in my Maven dependencies, it connects to PostGIS or Oracle with no problem and commits the data, the spatial data being stored as a blob (I expected this since no spatial plugins are present). Using PostGIS, the tutorial works just fine.
I modified the Position class by replacing the org.postgis.Point class with oracle.spatial.geometry.JGeometry and pointed my connection to an Oracle server. Without spatial, again the point is stored as a blob. With spatial, I get the following exception:
java.lang.ClassCastException: org.datanucleus.store.rdbms.datasource.dbcp.PoolingDataSource$PoolGuardConnectionWrapper cannot be cast to oracle.jdbc.OracleConnection
The modified class looks like the following:
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import oracle.spatial.geometry.JGeometry;

@PersistenceCapable
public class Position
{
    @PrimaryKey
    private String name;

    @Persistent
    private JGeometry point;

    public Position(String name, double x, double y)
    {
        this(name, JGeometry.createPoint(new double[]{x, y}, 2, 4326));
    }

    public Position(String name, JGeometry point)
    {
        this.name = name;
        this.point = point;
    }

    public String getName()
    {
        return name;
    }

    public JGeometry getPoint()
    {
        return point;
    }

    @Override
    public String toString()
    {
        return "[name] " + name + " [point] " + point;
    }
}
Is there something I'm missing in the fabulous world of DataNucleus Spatial? Why does it fail whenever spatial is added? Do I need the JDO XML file even though I'm annotating? Are there annotations not presented in the tutorial? If the JDO XML file shown in the tutorial is required and is the reason I'm getting these errors, where do I put it? I'm currently 3 weeks behind on my project and am about to switch to Hibernate if this is not fixed soon.
You don't present a stack trace, so it's impossible to tell more than that DBCP is causing the problem, and you could easily enough use any of the other supported connection pools. If some Oracle Connection object cannot be cast to some other JDBC connection, then maybe the Oracle JDBC driver targets a different JDBC version than this version of DBCP does (and some JDBC versions break backwards compatibility). No info in the post confirms or rules that out (the log would tell you some of that). As already said, there are ample other connection pools available.
The DN Spatial tutorial is self-contained and has Download and GitHub links, and that defines where a JDO XML file would go if using it. The tutorial, as provided, works.
Finally, this may be worth a read ...
In order to avoid the "cannot be cast to oracle.jdbc.OracleConnection" error, I suggest using version 3.2.7 of datanucleus-geospatial, which can be found in the Maven Central repository.