multithreading (unable to access global variables like ann) - stanford-nlp

I am trying multithreaded annotation, but the compiler complains about accessing ann
from run(). Do you have any idea?
Properties props = new Properties();
props.setProperty("annotators", "tokenize,ssplit,parse");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Annotation ann = new Annotation("your sentence here");
for (int i = 0; i < 100; ++i) {
    new Thread() {
        @Override
        public void run() {
            pipeline.annotate(ann);
            Tree tree = ann.get(SentencesAnnotation.class).get(0).get(TreeAnnotation.class);
        }
    }.start();
}

This seems to be a Java question; if it is actually specific to stanford-nlp, I might be wrong.
It would help if you posted the exact error you are getting with your code. My guess is that it is about accessing a local variable from an anonymous inner class.
Due to how closures and scoping work in Java, such variables have to be final (or effectively final, depending on the Java version).
Does changing the line to
final Annotation ann = new Annotation("your sentence here");
help?
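For reference, a minimal sketch of the fixed loop (my addition, only illustrating the suggestion above): the captured locals are declared final, which anonymous inner classes require on Java 7 and earlier (Java 8+ accepts effectively final variables). Note that this only addresses the compile error; sharing a single Annotation instance across 100 threads may still be unsafe.

Properties props = new Properties();
props.setProperty("annotators", "tokenize,ssplit,parse");
final StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
final Annotation ann = new Annotation("your sentence here");
for (int i = 0; i < 100; ++i) {
    new Thread() {
        @Override
        public void run() {
            // Both pipeline and ann are final, so the anonymous class may capture them.
            pipeline.annotate(ann);
            Tree tree = ann.get(SentencesAnnotation.class).get(0).get(TreeAnnotation.class);
        }
    }.start();
}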

Related

Sprint Boot 2.0 Applications (Reactive MongoDB) hangs up on startup

I'm new to Spring Data with reactive MongoDB and am having trouble with my data generator. Neither setup method returns. The first one is
@PostConstruct
public void setup() {
    personRepository.deleteAll().block();
    LOG.info("Never happens");
}
The deleteAll() call will block indefinitely. I'm experiencing the same problem when executing this
@PostConstruct
public void setup2() {
    List<Person> personList = new LinkedList<>();
    for (int i = 0; i < 200; i++) {
        personList.add(Person.PersonBuilder.aPerson().uuid(UUID.randomUUID()).name("Name " + i).build());
    }
    personRepository.saveAll(personList).blockLast();
}
It seems like the repository won't close the connection when using block() or blockLast(). In the case of saveAll() many connections are opened but not closed.
Edit: I know this is not really reactive, but I don't want to have to chain everything in this class. In case there is no way this could work, I'm happy to see your suggestions. As I said, I'm fairly new to this topic.
After looking further I found this post.
Then I got the idea to provide a CommandLineRunner Bean and insert the data there. Now everything works as expected. I also updated the repository.
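For reference, a minimal sketch of that approach (my addition; the bean method name and the PersonRepository type are assumptions, the builder calls are taken from the question):

@Bean
public CommandLineRunner seedData(PersonRepository personRepository) {
    return args -> {
        List<Person> personList = new LinkedList<>();
        for (int i = 0; i < 200; i++) {
            personList.add(Person.PersonBuilder.aPerson()
                    .uuid(UUID.randomUUID())
                    .name("Name " + i)
                    .build());
        }
        // Blocking is fine here: the runner executes only after the application
        // context has fully started, so the startup no longer hangs.
        personRepository.deleteAll()
                .thenMany(personRepository.saveAll(personList))
                .blockLast();
    };
}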

Dump Spring boot Configuration

Our Ops guys want the Spring Boot configuration (i.e. all properties) to be dumped to the log file when the app starts. I assume this can be done by injecting the properties with the @ConfigurationProperties annotation and printing them.
The question is whether there is a better or built-in mechanism to achieve this.
Given that there does not seem to be a built-in solution, I tried to cook up my own. Here is what I came up with:
@Component
public class ConfigurationDumper {

    // assumes an SLF4J logger; the original snippet used an undeclared "log" field
    private static final Logger log = LoggerFactory.getLogger(ConfigurationDumper.class);

    @Autowired
    public void init(Environment env) {
        log.info("{}", env);
    }
}
The challenge with this is that it does not print variables that are in my application.yml. Instead, here is what I get:
StandardServletEnvironment
{
activeProfiles=[],
defaultProfiles=[default],
propertySources=[
servletConfigInitParams,
servletContextInitParams,
systemProperties,
systemEnvironment,
random,
applicationConfig: [classpath: /application.yml]
]
}
How can I fix this so as to have all properties loaded and printed?
If you use Actuator, the /env endpoint will give you all the configuration properties set in the ConfigurableEnvironment, and /configprops will give you the list of @ConfigurationProperties beans, but not in the log.
Take a look at the source code of the env endpoint; maybe it will give you an idea of how you could get all the properties you are interested in.
There is no built-in mechanism and it really depends what you mean by "all properties". Do you want only the keys that you actually wrote, or all properties (including defaults)?
For the former, you could easily listen for ApplicationEnvironmentPreparedEvent and log the property sources you're interested in. For the latter, /configprops is indeed a much better/more complete output.
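For the former case, here is a rough sketch of such a listener (my addition, not part of the original answer; register it via SpringApplication.addListeners(...) or a META-INF/spring.factories entry). It prints to stdout because the logging system may not be fully initialized at this early phase:

import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.PropertySource;

public class EnvironmentDumpListener
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        for (PropertySource<?> source : event.getEnvironment().getPropertySources()) {
            if (source instanceof EnumerablePropertySource) {
                for (String name : ((EnumerablePropertySource<?>) source).getPropertyNames()) {
                    // Re-read through the environment so overriding sources win over raw values.
                    System.out.println(name + " = " + event.getEnvironment().getProperty(name));
                }
            }
        }
    }
}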
This logs only the properties configured in *.properties files.
/**
 * Maps given property names to their origin.
 * @return a map where the key is the property name and the value its origin
 */
public Map<String, String> fromWhere() {
    final Map<String, String> mapToLog = new HashMap<>();
    final MutablePropertySources propertySources = env.getPropertySources();
    final Iterator<?> it = propertySources.iterator();
    while (it.hasNext()) {
        final Object object = it.next();
        if (object instanceof MapPropertySource) {
            MapPropertySource propertySource = (MapPropertySource) object;
            String propertySourceName = propertySource.getName();
            if (propertySourceName.contains("properties")) {
                Map<String, Object> sourceMap = propertySource.getSource();
                for (String key : sourceMap.keySet()) {
                    final String envValue = env.getProperty(key);
                    String env2Val = System.getProperty(key);
                    String source = propertySource.getName().contains("file:") ? "FILE" : "JAR";
                    if (envValue != null && envValue.equals(env2Val)) {
                        source = "ENV";
                    }
                    mapToLog.putIfAbsent(key, source);
                }
            }
        }
    }
    return mapToLog;
}
My example output, which shows the property name, its value, and where it comes from (my property values themselves describe their origin):
myprop: fooFromJar from JAR
aPropFromFile: fromExternalConfFile from FILE
mypropEnv: here from vm arg from ENV
ENV means that I passed it with -D to the JVM.
JAR means it comes from the application.properties inside the JAR.
FILE means it comes from an application.properties outside the JAR.

Access all Environment properties as a Map or Properties object

I am using annotations to configure my spring environment like this:
@Configuration
...
@PropertySource("classpath:/config/default.properties")
...
public class GeneralApplicationConfiguration implements WebApplicationInitializer
{
    @Autowired
    Environment env;
}
This leads to my properties from default.properties being part of the Environment. I want to use the @PropertySource mechanism here, because it already provides the possibility to override properties through several fallback layers and different dynamic locations, based on the environment settings (e.g. config_dir location). I just stripped the fallback to make the example easier.
However, my problem now is that I want to configure for example my datasource properties in default.properties. You can pass the settings to the datasource without knowing in detail what settings the datasource expects using
Properties p = ...
datasource.setProperties(p);
However, the problem is, the Environment object is neither a Properties object nor a Map nor anything comparable. From my point of view it is simply not possible to access all values of the environment, because there is no keySet or iterator method or anything comparable.
Properties p <=== Environment env?
Am I missing something? Is it possible to access all entries of the Environment object somehow? If yes, I could map the entries to a Map or Properties object, I could even filter or map them by prefix - create subsets as a standard java Map ... This is what I would like to do. Any suggestions?
You need something like this, maybe it can be improved. This is a first attempt:
...
import org.springframework.core.env.PropertySource;
import org.springframework.core.env.AbstractEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
...
@Configuration
...
@org.springframework.context.annotation.PropertySource("classpath:/config/default.properties")
...
public class GeneralApplicationConfiguration implements WebApplicationInitializer
{
    @Autowired
    Environment env;

    public void someMethod() {
        ...
        Map<String, Object> map = new HashMap<>();
        for (Iterator it = ((AbstractEnvironment) env).getPropertySources().iterator(); it.hasNext(); ) {
            PropertySource propertySource = (PropertySource) it.next();
            if (propertySource instanceof MapPropertySource) {
                map.putAll(((MapPropertySource) propertySource).getSource());
            }
        }
        ...
    }
    ...
Basically, everything from the Environment that's a MapPropertySource (and there are quite a lot of implementations) can be accessed as a Map of properties.
This is an old question, but the accepted answer has a serious flaw. If the Spring Environment object contains any overriding values (as described in Externalized Configuration), there is no guarantee that the map of property values it produces will match those returned from the Environment object. I found that simply iterating through the PropertySources of the Environment did not, in fact, give any overriding values. Instead it produced the original value, the one that should have been overridden.
Here is a better solution. This uses the EnumerablePropertySources of the Environment to iterate through the known property names, but then reads the actual value out of the real Spring environment. This guarantees that the value is the one actually resolved by Spring, including any overriding values.
Properties props = new Properties();
MutablePropertySources propSrcs = ((AbstractEnvironment) springEnv).getPropertySources();
StreamSupport.stream(propSrcs.spliterator(), false)
    .filter(ps -> ps instanceof EnumerablePropertySource)
    .map(ps -> ((EnumerablePropertySource) ps).getPropertyNames())
    .flatMap(Arrays::<String>stream)
    .forEach(propName -> props.setProperty(propName, springEnv.getProperty(propName)));
I had the requirement to retrieve all properties whose key starts with a distinct prefix (e.g. all properties starting with "log4j.appender.") and wrote the following code (using streams and lambdas of Java 8).
public static Map<String, Object> getPropertiesStartingWith(ConfigurableEnvironment aEnv,
                                                            String aKeyPrefix)
{
    Map<String, Object> result = new HashMap<>();
    Map<String, Object> map = getAllProperties(aEnv);
    for (Entry<String, Object> entry : map.entrySet())
    {
        String key = entry.getKey();
        if (key.startsWith(aKeyPrefix))
        {
            result.put(key, entry.getValue());
        }
    }
    return result;
}

public static Map<String, Object> getAllProperties(ConfigurableEnvironment aEnv)
{
    Map<String, Object> result = new HashMap<>();
    aEnv.getPropertySources().forEach(ps -> addAll(result, getAllProperties(ps)));
    return result;
}

public static Map<String, Object> getAllProperties(PropertySource<?> aPropSource)
{
    Map<String, Object> result = new HashMap<>();

    if (aPropSource instanceof CompositePropertySource)
    {
        CompositePropertySource cps = (CompositePropertySource) aPropSource;
        cps.getPropertySources().forEach(ps -> addAll(result, getAllProperties(ps)));
        return result;
    }

    if (aPropSource instanceof EnumerablePropertySource<?>)
    {
        EnumerablePropertySource<?> ps = (EnumerablePropertySource<?>) aPropSource;
        Arrays.asList(ps.getPropertyNames()).forEach(key -> result.put(key, ps.getProperty(key)));
        return result;
    }

    // note: Most descendants of PropertySource are EnumerablePropertySource. There are some
    // few others like JndiPropertySource or StubPropertySource
    myLog.debug("Given PropertySource is instanceof " + aPropSource.getClass().getName()
        + " and cannot be iterated");

    return result;
}

private static void addAll(Map<String, Object> aBase, Map<String, Object> aToBeAdded)
{
    for (Entry<String, Object> entry : aToBeAdded.entrySet())
    {
        if (aBase.containsKey(entry.getKey()))
        {
            continue;
        }
        aBase.put(entry.getKey(), entry.getValue());
    }
}
Note that the starting point is the ConfigurableEnvironment, which is able to return the embedded PropertySources (ConfigurableEnvironment is a direct descendant of Environment). You can autowire it with:
@Autowired
private ConfigurableEnvironment myEnv;
If you are not using very special kinds of property sources (like JndiPropertySource, which is usually not used in Spring autoconfiguration), you can retrieve all properties held in the environment.
The implementation relies on the iteration order which Spring itself provides and takes the first property found; all later-found properties with the same name are discarded. This should ensure the same behaviour as if the environment were asked directly for a property (it returns the first one found).
Note also that the returned properties are not yet resolved if they contain placeholders with the ${...} syntax. If you want to have a particular key resolved you have to ask the Environment directly again:
myEnv.getProperty(key);
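If you want everything resolved in one go, here is a small sketch (my addition, reusing the getAllProperties helper above): collect the keys first, then re-read each one through the Environment so that ${...} placeholders are expanded.

Map<String, Object> resolved = new HashMap<>();
for (String key : getAllProperties(myEnv).keySet()) {
    resolved.put(key, myEnv.getProperty(key));
}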
The original question hinted that it would be nice to be able to filter all the properties based on a prefix. I have just confirmed that this works as of Spring Boot 2.1.1.RELEASE, for Properties or Map<String, String>; I'm sure it has worked for a while now. Interestingly, it does not work without the prefix = qualification, i.e. I do not know how to get the entire environment loaded into a map. As I said, this might actually be what the OP wanted to begin with. The prefix and the following '.' will be stripped off, which might or might not be what one wants:
@ConfigurationProperties(prefix = "abc")
@Bean
public Properties getAsProperties() {
    return new Properties();
}

@Bean
public MyService createService() {
    Properties properties = getAsProperties();
    return new MyService(properties);
}
Postscript: It is indeed possible, and shamefully easy, to get the entire environment. I don't know how this escaped me:
@ConfigurationProperties
@Bean
public Properties getProperties() {
    return new Properties();
}
As noted in this Spring Jira ticket, it is an intentional design decision. But the following code works for me.
public static Map<String, Object> getAllKnownProperties(Environment env) {
    Map<String, Object> rtn = new HashMap<>();
    if (env instanceof ConfigurableEnvironment) {
        for (PropertySource<?> propertySource : ((ConfigurableEnvironment) env).getPropertySources()) {
            if (propertySource instanceof EnumerablePropertySource) {
                for (String key : ((EnumerablePropertySource) propertySource).getPropertyNames()) {
                    rtn.put(key, propertySource.getProperty(key));
                }
            }
        }
    }
    return rtn;
}
Spring does not let you decouple from the Spring Environment via java.util.Properties.
But Properties.load() still works in a Spring Boot application:
Properties p = new Properties();
try (InputStream is = getClass().getResourceAsStream("/my.properties")) {
p.load(is);
}
The other answers have pointed out the solution for the majority of cases involving PropertySources, but none have mentioned that certain property sources cannot be cast into useful types.
One such example is the property source for command line arguments. The class that is used is SimpleCommandLinePropertySource. This private class is returned by a public method, thus making it extremely tricky to access the data inside the object. I had to use reflection in order to read the data and eventually replace the property source.
If anyone out there has a better solution, I would really like to see it; however, this is the only hack I have gotten to work.
Working with Spring Boot 2, I needed to do something similar. Most of the answers above work fine; just beware that at various phases of the application lifecycle the results will be different.
For example, after an ApplicationEnvironmentPreparedEvent any properties inside application.properties are not present. However, after an ApplicationPreparedEvent they are.
For Spring Boot, the accepted answer will overwrite duplicate properties with lower-priority ones. This solution collects the properties into a SortedMap and takes only the highest-priority value for each duplicate property.
final SortedMap<String, Object> sortedMap = new TreeMap<>();
for (final PropertySource<?> propertySource : env.getPropertySources()) {
    if (!(propertySource instanceof EnumerablePropertySource))
        continue;
    for (final String name : ((EnumerablePropertySource<?>) propertySource).getPropertyNames())
        sortedMap.computeIfAbsent(name, propertySource::getProperty);
}
I thought I'd add one more way. In my case I supply this to com.hazelcast.config.XmlConfigBuilder, which only needs java.util.Properties to resolve some properties inside the Hazelcast XML configuration file, i.e. it only calls the getProperty(String) method. So this allowed me to do what I needed:
@RequiredArgsConstructor
public class SpringReadOnlyProperties extends Properties {

    private final org.springframework.core.env.Environment delegate;

    @Override
    public String getProperty(String key) {
        return delegate.getProperty(key);
    }

    @Override
    public String getProperty(String key, String defaultValue) {
        return delegate.getProperty(key, defaultValue);
    }

    @Override
    public synchronized String toString() {
        return getClass().getName() + "{" + delegate + "}";
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        if (!super.equals(o)) return false;
        SpringReadOnlyProperties that = (SpringReadOnlyProperties) o;
        return delegate.equals(that.delegate);
    }

    @Override
    public int hashCode() {
        return Objects.hash(super.hashCode(), delegate);
    }

    private void throwException() {
        throw new RuntimeException("This method is not supported");
    }

    // all methods below throw the exception
    * override all methods *
}
P.S. I ended up not using this specifically for Hazelcast because it only resolves properties for the XML file but not at runtime. Since I also use Spring, I decided to go with a custom org.springframework.cache.interceptor.AbstractCacheResolver#getCacheNames. This resolves properties for both situations, at least if you use properties in cache names.
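In case it helps, here is a rough sketch of that idea (my addition, not the poster's actual code; the class name and the placeholder-in-cache-name convention are assumptions): an AbstractCacheResolver that resolves ${...} placeholders in cache names through the Spring Environment at lookup time.

import java.util.Collection;
import java.util.stream.Collectors;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.AbstractCacheResolver;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.core.env.Environment;

public class PropertyResolvingCacheResolver extends AbstractCacheResolver {

    private final Environment env;

    public PropertyResolvingCacheResolver(CacheManager cacheManager, Environment env) {
        super(cacheManager);
        this.env = env;
    }

    @Override
    protected Collection<String> getCacheNames(CacheOperationInvocationContext<?> context) {
        // e.g. @Cacheable(cacheNames = "books-${app.region}") resolves to "books-eu"
        return context.getOperation().getCacheNames().stream()
                .map(env::resolvePlaceholders)
                .collect(Collectors.toList());
    }
}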
To get ONLY the properties defined in my hibernate.properties file:
@PropertySource(SomeClass.HIBERNATE_PROPERTIES)
public class SomeClass {

    public static final String HIBERNATE_PROPERTIES = "hibernate.properties";

    @Autowired
    private Environment env;

    public void someMethod() {
        final Properties hibProps = asProperties(HIBERNATE_PROPERTIES);
    }

    private Properties asProperties(String fileName) {
        return StreamSupport.stream(
                ((AbstractEnvironment) env).getPropertySources().spliterator(), false)
            .filter(ps -> ps instanceof ResourcePropertySource)
            .map(ps -> (ResourcePropertySource) ps)
            .filter(rps -> rps.getName().contains(fileName))
            .collect(
                Properties::new,
                (props, rps) -> props.putAll(rps.getSource()),
                Properties::putAll);
    }
}
A little helper to analyze the sources of a property, which sometimes drives me crazy. I used this discussion to write SpringConfigurableEnvironment.java on GitHub.
It could be used in a test:
@SpringBootTest
public class SpringConfigurableEnvironmentTest {

    @Autowired
    private ConfigurableEnvironment springEnv;

    @Test
    public void testProperties() {
        SpringConfigurableEnvironment properties = new SpringConfigurableEnvironment(springEnv);
        SpringConfigurableEnvironment.PropertyInfo info = properties.get("profile.env");
        assertEquals("default", info.getValue());
        assertEquals(
            "Config resource 'class path resource [application.properties]' via location 'optional:classpath:/'",
            info.getSourceList().get(0));
    }
}
The answers above have pretty much covered everything, but be aware of values overridden by environment variables. They may have different key values.
For example, if a user overrides my.property[1].value using the environment variable MY_PROPERTY[1]_VALUE, iterating through EnumerablePropertySources.getPropertyNames() would give you both the my.property[1].value and MY_PROPERTY[1]_VALUE keys.
What is even worse: if my.property[1].value is not defined in applications.conf (or applications.yml), a MY_PROPERTY[1]_VALUE environment variable would not give you my.property[1].value but only the MY_PROPERTY[1]_VALUE key from EnumerablePropertySources.getPropertyNames().
So it is the developers' job to cover those properties coming from environment variables. Unfortunately, there is no one-to-one mapping between the environment-variable schema and the normal schema; see the source code of SystemEnvironmentPropertySource. For example, MY_PROPERTY[1]_VALUE could be either my.property[1].value or my-property[1].value.
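As a side note (my addition, not from the answer above): if the goal is simply to read such values without reverse-engineering the environment-variable name mangling, Spring Boot's Binder applies the relaxed binding rules (e.g. MY_PROPERTY_0_VALUE mapping onto my.property[0].value) for you. A hedged sketch, assuming Spring Boot 2.0+ and a hypothetical my.property list:

import java.util.Collections;
import java.util.List;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.core.env.Environment;

public class MyPropertyReader {

    public static class MyProperty {
        private String value;
        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
    }

    // Binds my.property[0].value, MY_PROPERTY_0_VALUE, my-property[0].value, ...
    public List<MyProperty> read(Environment environment) {
        return Binder.get(environment)
                .bind("my.property", Bindable.listOf(MyProperty.class))
                .orElse(Collections.emptyList());
    }
}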

How to write multiple outputs of different formats in a hadoop reducer?

How do you use the MultipleOutputs class in a reducer to write multiple outputs, each of which can have its own unique configuration? There is some documentation in the MultipleOutputs javadoc, but it seems limited to Text outputs. It turns out that MultipleOutputs can handle the output path, key class and value class of each output, but attempts to use output formats that require the use of other configuration properties fail.
(This question has come up several times but my attempts to answer it have been thwarted because the asker actually had a different problem. Since this question has taken more than a few days of investigation for me to answer, I'm answering my own question here as suggested by this Meta Stack Overflow question.)
I've crawled through the MultipleOutputs implementation and found that it doesn't support any OutputFormat that needs configuration beyond the output directory, key class and value class. I tried to write my own MultipleOutputs class, but that failed because it needs to call a private method somewhere in the Hadoop classes.
I'm left with only one workaround that seems to work in all cases and all combinations of output formats and configurations: Write subclasses of the OutputFormat classes that I want to use (these turn out to be reusable). These classes understand that other OutputFormats are in use concurrently and know how to store away their properties. The design exploits the fact that an OutputFormat can be configured with the context just before being asked for its RecordWriter.
I've got this to work with Cassandra's ColumnFamilyOutputFormat:
package com.myorg.hadoop.platform;

import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;

public abstract class ConcurrentColumnFamilyOutputFormat
        extends ColumnFamilyOutputFormat
        implements Configurable {

    private static String[] propertyName = {
        "cassandra.output.keyspace",
        "cassandra.output.keyspace.username",
        "cassandra.output.keyspace.passwd",
        "cassandra.output.columnfamily",
        "cassandra.output.predicate",
        "cassandra.output.thrift.port",
        "cassandra.output.thrift.address",
        "cassandra.output.partitioner.class"
    };

    private Configuration configuration;

    public ConcurrentColumnFamilyOutputFormat() {
        super();
    }

    public Configuration getConf() {
        return configuration;
    }

    public void setConf(Configuration conf) {
        configuration = conf;
        String prefix = "multiple.outputs." + getMultiOutputName() + ".";
        for (int i = 0; i < propertyName.length; i++) {
            String property = prefix + propertyName[i];
            String value = conf.get(property);
            if (value != null) {
                conf.set(propertyName[i], value);
            }
        }
    }

    public void configure(Configuration conf) {
        String prefix = "multiple.outputs." + getMultiOutputName() + ".";
        for (int i = 0; i < propertyName.length; i++) {
            String property = prefix + propertyName[i];
            String value = conf.get(propertyName[i]);
            if (value != null) {
                conf.set(property, value);
            }
        }
    }

    public abstract String getMultiOutputName();
}
For each Cassandra (in this case) output you want for your reducer, you'd have a class:
package com.myorg.multioutput.ReadCrawled;

import com.myorg.hadoop.platform.ConcurrentColumnFamilyOutputFormat;

public class StrongOutputFormat extends ConcurrentColumnFamilyOutputFormat {

    public StrongOutputFormat() {
        super();
    }

    @Override
    public String getMultiOutputName() {
        return "Strong";
    }
}
and you'd configure it in your mapper/reducer configuration class:
// This is how you'd normally configure the ColumnFamilyOutputFormat
ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "Partner", "Strong");
ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "localhost");
ConfigHelper.setOutputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");

// This is how you tell the MultipleOutput-aware OutputFormat that
// it's time to save off the configuration so no other OutputFormat
// steps all over it.
new StrongOutputFormat().configure(job.getConfiguration());

// This is where we add the MultipleOutput-aware ColumnFamilyOutputFormat
// to our set of outputs
MultipleOutputs.addNamedOutput(job, "Strong", StrongOutputFormat.class, ByteBuffer.class, List.class);
Just to give another example, the MultipleOutput subclass for FileOutputFormat uses these properties:
private static String[] propertyName = {
    "mapred.output.compression.type",
    "mapred.output.compression.codec",
    "mapred.output.compress",
    "mapred.output.dir"
};
and would be implemented just like ConcurrentColumnFamilyOutputFormat above, except that it would use the above properties.
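For completeness, here is a hypothetical sketch of that FileOutputFormat variant (my addition, mirroring ConcurrentColumnFamilyOutputFormat above; it is based on TextOutputFormat here, but any FileOutputFormat subclass should work the same way):

import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public abstract class ConcurrentTextOutputFormat<K, V>
        extends TextOutputFormat<K, V>
        implements Configurable {

    private static String[] propertyName = {
        "mapred.output.compression.type",
        "mapred.output.compression.codec",
        "mapred.output.compress",
        "mapred.output.dir"
    };

    private Configuration configuration;

    public Configuration getConf() {
        return configuration;
    }

    // Called when Hadoop instantiates the format, just before the RecordWriter is
    // requested: restore this output's saved values onto the live property names.
    public void setConf(Configuration conf) {
        configuration = conf;
        String prefix = "multiple.outputs." + getMultiOutputName() + ".";
        for (String name : propertyName) {
            String value = conf.get(prefix + name);
            if (value != null) {
                conf.set(name, value);
            }
        }
    }

    // Called once from the job setup code: stash this output's values under a
    // prefixed key so other concurrently used OutputFormats cannot clobber them.
    public void configure(Configuration conf) {
        String prefix = "multiple.outputs." + getMultiOutputName() + ".";
        for (String name : propertyName) {
            String value = conf.get(name);
            if (value != null) {
                conf.set(prefix + name, value);
            }
        }
    }

    public abstract String getMultiOutputName();
}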
I have implemented MultipleOutputs support for Cassandra (see this JIRA ticket), and it is currently scheduled for release in 1.2. If you need it now, you can apply the patch in the ticket. Also check out this presentation on the topic, which gives examples of its usage.

jersey-freemarker

I'm developing a small tool based on jersey and freemarker which will enable designers to test their freemarker templates locally, using some mock objects.
I'm sorry to write here, but I can't find any documentation about it except some code and javadocs.
To do that I did the following:
1 Dependencies:
<dependency>
    <groupId>com.sun.jersey.contribs</groupId>
    <artifactId>jersey-freemarker</artifactId>
    <version>1.9</version>
</dependency>
2 Starting grizzly, telling where to find freemarker templates:
protected static HttpServer startServer() throws IOException {
    System.out.println("Starting grizzly...");
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("com.sun.jersey.freemarker.templateBasePath", "/");
    ResourceConfig rc = new PackagesResourceConfig("resource.package");
    rc.setPropertiesAndFeatures(params);
    HttpServer server = GrizzlyServerFactory.createHttpServer(BASE_URI, rc);
    server.getServerConfiguration().addHttpHandler(
        new StaticHttpHandler("/libs"), "/libs");
    return server;
}
3 Creates the root resource and binds freemarker files:
@Context ResourceConfig resourceConfig;

@Path("{path: ([^\\s]+(\\.(?i)(ftl))$)}")
public Viewable renderFtl(@PathParam("path") String path) throws IOException {
    Viewable view = new Viewable("/" + path);
    return view;
}
Everything works fine, except that the freemarker files are not rendered. I get an empty white page, but the file exists and the debugger enters the renderFtl method correctly.
Do you know how can I do that?
I have read a lot of articles here and around the web, but I only found old posts or articles about Spring integration, and I don't want to integrate Spring because I don't need it.
I really like Jersey; I think it is one of the most complete and powerful frameworks in the Java world, but any time I try to find documentation on specific features or contrib libraries, I'm lost... There is no escape from the groups forums :)
Where can I find complete documentation about it?
Thanks a lot, David
Updates:
Trying to solve this, I understood I cannot use the built-in jersey support, because it needs to use files placed in the resources tree. So what I did is to build the freemarker configuration (in test, for now) directly at runtime and return a StreamingOutput object:
#Path("{path: ([^\\s]+(\\.(?i)(ftl))$)}")
public StreamingOutput renderFtl (#PathParam("path") String path) throws Exception {
Configuration cfg = new Configuration();
// Specify the data source where the template files come from.
// Here I set a file directory for it:
cfg.setDirectoryForTemplateLoading(new File("."));
// Create the root hash
Map<String, Object> root = new HashMap<String, Object>();
Template temp = cfg.getTemplate(path);
return new FTLOutput(root, temp);
}
FTLOutput is here:
This is not good code, but it is for testing only...
class FTLOutput implements StreamingOutput {

    private Object root;
    private Template t;

    public FTLOutput(Object root, Template t) {
        this.root = root;
        this.t = t;
    }

    @Override
    public void write(OutputStream output) throws IOException {
        Writer writer = new OutputStreamWriter(output);
        try {
            t.process(root, writer);
            writer.flush();
        } catch (TemplateException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
I see no evidence of errors while debugging, and freemarker tells me that the template is found and rendered, but jersey still gives me no result...
I really don't know why!
Why are you using Jersey 1.9? 1.11 is already out; you should update if you can.
Have you seen the "freemarker" sample from Jersey? It demonstrates a simple use case of using freemarker with jersey.
Where are your resources?
Templates are being found by calling [LastMatchedResourceClass].getResources(...), so if your templates are not accessible as resources, they can't be rendered correctly. You can check out the Jersey source and place some breakpoints into FreemarkerViewProcessor; it should tell you where exactly the problem is.
