Unexpected transaction.log file created when starting elasticsearch embedded in Tomcat - elasticsearch

I'm using elasticsearch-transport-wares to run Elasticsearch embedded in Tomcat.
Whatever the value of path.logs in elasticsearch.yml, an empty transaction.log file is always created in the directory from which I run the command to start Tomcat.
When I start a non-embedded Elasticsearch server, I don't see a transaction.log file created anywhere.
My questions are:
What is the purpose of this empty transaction.log file?
How can I configure Elasticsearch to choose where this file is stored (just as path.logs decides where the Elasticsearch logs are stored)?
Consequently, when my server is started from a path without write permission, I get the stack trace below (logged at INFO):
2016-01-28 10:26:15,174 | INFO | Initializing elasticsearch Node 'node' [o.a.c.c.C.[.[.[/es-embedded]<localhost-startStop-1>]
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: transaction.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.elasticsearch.common.logging.log4j.Log4jESLoggerFactory.newInstance(Log4jESLoggerFactory.java:39)
at org.elasticsearch.common.logging.ESLoggerFactory.newInstance(ESLoggerFactory.java:74)
at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:66)
at org.elasticsearch.common.logging.Loggers.getLogger(Loggers.java:122)
at org.elasticsearch.common.MacAddressProvider.<clinit>(MacAddressProvider.java:32)
at org.elasticsearch.common.TimeBasedUUIDGenerator.<clinit>(TimeBasedUUIDGenerator.java:41)
at org.elasticsearch.common.Strings.<clinit>(Strings.java:48)
at org.elasticsearch.common.settings.ImmutableSettings.<init>(ImmutableSettings.java:73)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.build(ImmutableSettings.java:1142)
at org.elasticsearch.node.NodeBuilder.settings(NodeBuilder.java:89)
at org.elasticsearch.wares.NodeServlet.init(NodeServlet.java:111)
at javax.servlet.GenericServlet.init(GenericServlet.java:158)
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5267)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5557)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1095)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1930)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

You can change the path.logs setting in the elasticsearch.yml file, add that elasticsearch.yml configuration file to the /WEB-INF folder of your WAR, and it will be picked up.
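For example, a minimal sketch (the path here is just a placeholder):
# WEB-INF/elasticsearch.yml
path.logs: /var/log/es-embedded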
You should be good to go.
As for the transaction.log file, are you sure it's not a file you have configured in your Log4j configuration?
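For example, a Log4j 1.x appender like this, with a relative File path, would create an empty transaction.log in whatever directory the JVM was started from. This is a hypothetical illustration (the appender and logger names are made up), matching the RollingFileAppender in your stack trace:
log4j.appender.txlog=org.apache.log4j.RollingFileAppender
log4j.appender.txlog.File=transaction.log
log4j.appender.txlog.MaxFileSize=10MB
log4j.appender.txlog.layout=org.apache.log4j.PatternLayout
log4j.appender.txlog.layout.ConversionPattern=%d %-5p %c - %m%n
log4j.logger.com.example.tx=INFO, txlog
Changing File to an absolute path, or removing the appender, would stop the file from appearing in the working directory.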

Related

Kafka Streams failed to delete the state directory - DirectoryNotEmptyException

I noticed the exception stream-thread [x-CleanupThread] Failed to delete the state directory in our Kafka Streams application. The application uses a windowed state store, defined as:
StoreBuilder<WindowStore<String, String>> storeBuilder =
    Stores.windowStoreBuilder(
        Stores.persistentWindowStore(
            storeName,
            retentionPeriod,
            retentionWindowSize,
            false),            // retainDuplicates
        Serdes.String(),
        Serdes.String())
    .withCachingEnabled();
This is not a test issue with the topology test driver; it happens in the actual deployed streams application. Every ten minutes it tries to delete the directory and fails with the stack trace below. The directory appears empty when checked with ls -al. I have also tried modifying the permissions of the directory with chmod 777, with no luck.
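For context, the ten-minute cadence matches the Kafka Streams default state.cleanup.delay.ms of 600000 ms. A hypothetical fragment for raising it while investigating (this only changes how often the cleanup runs; it does not fix the underlying DirectoryNotEmptyException):
// how often the cleanup thread tries to remove unused task directories (default 600000 ms)
props.put(StreamsConfig.STATE_CLEANUP_DELAY_MS_CONFIG, 3600000L);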
Stack Trace:
java.nio.file.DirectoryNotEmptyException: /data/consumer/0_17
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$2.postVisitDirectory(Utils.java:763)
at org.apache.kafka.common.utils.Utils$2.postVisitDirectory(Utils.java:746)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:746)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:290)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:253)
at org.apache.kafka.streams.KafkaStreams$2.run(KafkaStreams.java:794)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

How to stream data from kafka avro console to HDFS using kafka-connect-hdfs?

I am trying to run kafka-connect-hdfs, without any success.
I added the following line to .bash_profile and ran source ~/.bash_profile:
export LOG_DIR=~/logs
The quickstart-hdfs.properties configuration file is:
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
hdfs.url=xxx.xxx.xxx.xxx:xxxx # placeholder
flush.size=3
hadoop.conf.dir = /etc/hadoop/conf/
logs.dir = ~/logs
topics.dir = ~/topics
topics=test_hdfs
I am following the quickstart instructions outlined in
https://docs.confluent.io/current/connect/connect-hdfs/docs/hdfs_connector.html
The contents of the connector-avro-stanalone.properties file are:
bootstrap.servers=yyy.yyy.yyy.yyy:yyyy # This is the placeholder for the Kafka broker url with the appropriate port
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
I have the quickstart-hdfs.properties and connector-avro-stanalone.properties in my home directory and I run:
confluent load hdfs-sink -d quickstart-hdfs.properties
I am not sure how the information in the connector-avro-stanalone.properties file in my home directory is being picked up.
When I run confluent log connect, I get the following error:
[2018-04-26 17:36:00,217] INFO Couldn't start HdfsSinkConnector: (io.confluent.connect.hdfs.HdfsSinkTask:90)
org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:56)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:213)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:101)
at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:82)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:267)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:163)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:51)
... 12 more
Caused by: java.io.IOException: No FileSystem for scheme: xxx.xxx.xxx.xxx -> this is the hdfs.url from the quickstart-hdfs.properties file, without the port
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2691)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:420)
at io.confluent.connect.hdfs.storage.HdfsStorage.<init>(HdfsStorage.java:56)
... 17 more
[2018-04-26 17:36:00,217] INFO Shutting down HdfsSinkConnector. (io.confluent.connect.hdfs.HdfsSinkTask:91)
Any help to fix this issue will be greatly appreciated.
Caused by: java.io.IOException: No FileSystem for scheme: xxx.xxx.xxx.xxx
You need hdfs.url=hdfs://xxx.xxx.xxx.xxx:xxxx, i.e. the hdfs:// scheme, along with the namenode port.
Also, you'll want to remove the spaces around the equals signs in the properties files.
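Putting both fixes together, a corrected quickstart-hdfs.properties might look like this (the host and port stay placeholders; /logs and /topics are illustrative HDFS paths, since ~ has no meaning there):
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
hdfs.url=hdfs://xxx.xxx.xxx.xxx:xxxx
flush.size=3
hadoop.conf.dir=/etc/hadoop/conf/
logs.dir=/logs
topics.dir=/topics
topics=test_hdfs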

Spring Boot 1.3.1 and Spring 4.2.4: java.lang.RuntimePermission accessClassInPackage.sun.reflect.annotation

I have upgraded to Spring Boot 1.3.1.RELEASE and Spring Framework 4.2.4.RELEASE and I am running into issues. The Tomcat catalina policy does not grant the security permission for sun.reflect.annotation.
Can anyone tell me why Spring Framework tries to scan system jar files/classes for web servlet annotations? Is there a way to make Spring Boot not scan system files/classes?
22-Dec-2015 14:15:41.523 SEVERE [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/hccore]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725)
at org.apache.catalina.core.ContainerBase.access$000(ContainerBase.java:131)
at org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:153)
at org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:143)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:699)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:945)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1798)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "accessClassInPackage.sun.reflect.annotation")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564)
at sun.reflect.misc.ReflectUtil.checkPackageAccess(ReflectUtil.java:188)
at sun.reflect.misc.ReflectUtil.checkPackageAccess(ReflectUtil.java:164)
at java.lang.reflect.Proxy.getInvocationHandler(Proxy.java:827)
at org.springframework.core.annotation.AnnotationUtils.synthesizeAnnotation(AnnotationUtils.java:1364)
at org.springframework.core.annotation.AnnotationUtils.findAnnotation(AnnotationUtils.java:685)
at org.springframework.core.annotation.AnnotationUtils.findAnnotation(AnnotationUtils.java:660)
at org.springframework.core.annotation.OrderUtils.getOrder(OrderUtils.java:67)
at org.springframework.core.annotation.OrderUtils.getOrder(OrderUtils.java:56)
at org.springframework.core.annotation.AnnotationAwareOrderComparator.findOrder(AnnotationAwareOrderComparator.java:84)
at org.springframework.core.OrderComparator.getOrder(OrderComparator.java:127)
at org.springframework.core.OrderComparator.getOrder(OrderComparator.java:116)
at org.springframework.core.OrderComparator.doCompare(OrderComparator.java:87)
at org.springframework.core.OrderComparator.compare(OrderComparator.java:73)
at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
at java.util.TimSort.sort(TimSort.java:220)
at java.util.Arrays.sort(Arrays.java:1438)
at java.util.List.sort(List.java:478)
at java.util.Collections.sort(Collections.java:175)
at org.springframework.core.annotation.AnnotationAwareOrderComparator.sort(AnnotationAwareOrderComparator.java:116)
at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:171)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5170)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 14 more
This is probably related to your Tomcat policy file. You can find a description of a similar issue and its solution here. Check that you have something like this in your Tomcat policy file:
permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect.annotation";
permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect.annotation.*";
permission java.lang.RuntimePermission "accessDeclaredMembers";
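In catalina.policy these permission lines live inside a grant block; a minimal sketch, assuming the webapp is deployed under webapps/hccore (adjust the codeBase to your deployment):
grant codeBase "file:${catalina.base}/webapps/hccore/-" {
    permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect.annotation";
    permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect.annotation.*";
    permission java.lang.RuntimePermission "accessDeclaredMembers";
};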

Error streaming Twitter data to Hadoop using Flume

I am using Hadoop 1.2.1 on Ubuntu 14.04.
I am trying to stream data from Twitter to HDFS using Flume 1.6.0. I downloaded flume-sources-1.0-SNAPSHOT.jar and put it in the flume/lib folder. I set the path of flume-sources-1.0-SNAPSHOT.jar as FLUME_CLASSPATH in conf/flume-env.sh. This is my Flume agent conf file:
#setting properties of agent
Twitter-agent.sources=source1
Twitter-agent.channels=channel1
Twitter-agent.sinks=sink1
#configuring sources
Twitter-agent.sources.source1.type=com.cloudera.flume.source.TwitterSource
Twitter-agent.sources.source1.channels=channel1
Twitter-agent.sources.source1.consumerKey=<consumer-key>
Twitter-agent.sources.source1.consumerSecret=<consumer Secret>
Twitter-agent.sources.source1.accessToken=<access Toekn>
Twitter-agent.sources.source1.accessTokenSecret=<acess Token Secret>
Twitter-agent.sources.source1.keywords= morning, night, hadoop, bigdata
#configuring channels
Twitter-agent.channels.channel1.type=memory
Twitter-agent.channels.channel1.capacity=10000
Twitter-agent.channels.channel1.transactionCapacity=100
#configuring sinks
Twitter-agent.sinks.sink1.channel=channel1
Twitter-agent.sinks.sink1.type=hdfs
Twitter-agent.sinks.sink1.hdfs.path=flume/twitter/logs
Twitter-agent.sinks.sink1.rollSize=0
Twitter-agent.sinks.sink1.rollCount=1000
Twitter-agent.sinks.sink1.batchSize=100
Twitter-agent.sinks.sink1.fileType=DataStream
Twitter-agent.sinks.sink1.writeFormat=Text
When I run this agent, I get an error like this:
15/06/22 14:14:49 INFO source.DefaultSourceFactory: Creating instance of source source1, type com.cloudera.flume.source.TwitterSource
15/06/22 14:14:49 ERROR node.PollingPropertiesFileConfigurationProvider: Unhandled error
java.lang.NoSuchMethodError: twitter4j.conf.Configuration.isStallWarningsEnabled()Z
at twitter4j.TwitterStreamImpl.<init>(TwitterStreamImpl.java:60)
at twitter4j.TwitterStreamFactory.<clinit>(TwitterStreamFactory.java:40)
at com.cloudera.flume.source.TwitterSource.<init>(TwitterSource.java:64)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:442)
at org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:44)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:322)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:97)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My flume/lib folder already has twitter4j-core-3.0.3.jar
How do I rectify this error?
I found a solution to this issue. Since flume-sources-1.0-SNAPSHOT.jar and twitter4j-stream-3.0.3.jar contain the same FilterQuery class, there is a jar conflict. All twitter4j 3.x.x jars use this class, so it is better to download the version 2.2.6 twitter4j jars (twitter4j-core, twitter4j-stream, twitter4j-media-support) and replace the 3.x.x jars under the flume/lib directory with them.
Run the agent again and the Twitter data will be streamed to HDFS.
Replace
Twitter-agent.sources.source1.type=com.cloudera.flume.source.TwitterSource
with
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
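Note that this line uses the agent name TwitterAgent and the source name Twitter, while your file uses Twitter-agent and source1. If you adopt it, rename the rest of the configuration to match; a partial sketch based on your config above:
TwitterAgent.sources=Twitter
TwitterAgent.channels=channel1
TwitterAgent.sinks=sink1
TwitterAgent.sources.Twitter.type=org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels=channel1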

Error while deploying application in Tomcat 7

Hi, I am getting the following error while deploying an application in Tomcat:
Jul 25, 2013 5:04:43 PM org.apache.catalina.deploy.NamingResources cleanUp
WARNING: Failed to retrieve JNDI naming context for container [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/oms-inter-web]] so no cleanup was performed for that container
javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp].
at org.apache.naming.NamingContext.lookup(NamingContext.java:820)
at org.apache.naming.NamingContext.lookup(NamingContext.java:168)
at org.apache.catalina.deploy.NamingResources.cleanUp(NamingResources.java:988)
at org.apache.catalina.deploy.NamingResources.stopInternal(NamingResources.java:970)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:232)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5494)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:232)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:160)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1595)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1585)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Any help will be appreciated.
It may be because previous applications were not properly undeployed from the Tomcat container.
Stop Tomcat, then
go to the Tomcat configuration directory <Tomcat root>/conf/Catalina/localhost, remove the stale context xml files in that directory, and restart Tomcat.
Note:
1. Make a backup of those xml files before deleting them, so that if this does not work you can restore them.
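A sketch of that cleanup, assuming a default Tomcat layout (the backup directory name is arbitrary):
cd <Tomcat root>/conf/Catalina/localhost
mkdir -p ~/context-xml-backup
mv *.xml ~/context-xml-backup/
Then restart Tomcat.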
To do the same in the NetBeans IDE:
In the NetBeans menu, go to Window -> Services.
In the listed services, expand Servers -> [your Tomcat server] -> Web Applications, then right-click on each application and choose Undeploy.
Note: Tomcat needs to be running for this.
I had exactly the same problem: because the Eclipse Tomcat was using the same working directory as the standalone Tomcat, I could not start the standalone Tomcat properly. The solution was to change the Eclipse Tomcat configuration (Arguments -> Working directory) to something other than Tomcat/webapps.
