/dev/stdout permission denied Tomcat access logs - spring-boot

I'm trying to enable Tomcat access logs in the STS console, but I get an error on startup:
java.io.FileNotFoundException: /dev/stdout.2020-09-02 (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.catalina.valves.AccessLogValve.open(AccessLogValve.java:585)
at org.apache.catalina.valves.AccessLogValve.startInternal(AccessLogValve.java:615)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:182)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:953)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:422)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:793)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:344)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:99)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:84)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:552)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:177)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:164)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:134)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:537)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:762)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:372)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1187)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1176)
at au.com.paycorp.rest.ws.Application.main(Application.java:29)
application.yml:
server:
  port: 50110
  contextPath: /rest
  tomcat:
    accesslog:
      enabled: true
      directory: "/dev"
      prefix: stdout
      buffered: false
      suffix:
      file-date-format:
      pattern: "[ACCESS] %{org.apache.catalina.AccessLog.RemoteAddr}r %l %t %D %F %B %S vcap_request_id:%{X-Vcap-Request-Id}i"
I understand that it does not have permission to write to /dev/stdout, but I'm not sure how to grant it access.

I would say that your issue is not in your code but on your server.
It's evidently Linux or Unix, so change what your user can do with this folder using the commands below.

Give ownership of the /dev folder to the current user with:
sudo chown -R $USER /dev
Change the group owner of the folder in the same way (this assumes a group named after the user):
sudo chgrp -R $USER /dev
This should fix it.
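Before changing ownership, though, it may be worth checking what /dev/stdout actually is: on most Linux systems it is a symlink to the process's own file descriptor 1 rather than a regular file. A quick check, assuming a typical Linux layout:
ls -l /dev/stdout
# lrwxrwxrwx 1 root root 15 ... /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout   # dereference the link to see what fd 1 currently points at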

As you can see, Tomcat is trying to open the file /dev/stdout.2020-09-02 instead of /dev/stdout. The reason is that an empty file-date-format means the default date format for your locale. If you want to disable the addition of a date in the filename you need to use:
rotatable: false
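In application.yml terms that would look something like the following (a minimal sketch; Spring Boot exposes Tomcat's rotatable attribute as server.tomcat.accesslog.rotate, so the exact key depends on your Boot version, which is an assumption here):
server:
  tomcat:
    accesslog:
      enabled: true
      directory: "/dev"
      prefix: stdout
      suffix: ""        # keep the suffix empty so the file name stays exactly "stdout"
      rotate: false     # Boot's name for Tomcat's "rotatable"; no date stamp is appended
      buffered: false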
On the other hand, whether this redirects your logs to the standard output depends on whether the standard output is a file or not: Java does not duplicate file descriptor 1, but calls open on /dev/stdout. Check this answer to understand why it won't work on pipes/sockets.
To achieve what you want you'll need to modify the method AccessLogValve#open:
protected synchronized void open() {
    // Open the current log file
    // If no rotate - no need for dateStamp in fileName
    File pathname = getLogFile(rotatable && !renameOnRotate);
    // Modify HERE: if the resolved log file name is "-", write to stdout instead
    if ("-".equals(pathname.getName())) {
        // writer is a PrintWriter in AccessLogValve, so wrap System.out
        writer = new PrintWriter(new OutputStreamWriter(System.out), true);
        return;
    }
    ...
}
and set - as the name of your logfile.
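With the patch above, the configuration only needs to make the resolved file name come out as a bare "-". A hedged way to express that in the question's application.yml (assuming prefix and suffix are concatenated directly once rotation is off):
server:
  tomcat:
    accesslog:
      enabled: true
      prefix: "-"       # getLogFile() resolves to a file named "-", which the patched open() intercepts
      suffix: ""
      rotate: false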

Related

Docker Compose Volumes - Device Option - Windows - JHipster

I'm trying to configure a JHipster web application (Java Spring Boot + Angular) to use an embedded H2 DB in a production environment.
What I want to achieve is to save the H2-generated DB files on the host machine, so that if I need to back up those files for some reason I have easy access to them.
The application will be deployed using docker compose.
DEV ENVIRONMENT
I'm using a Windows machine to develop this web application.
I have installed Docker Desktop on my machine and it is running on the WSL 2 engine.
JHIPSTER DOCKER IMAGE
The Docker image I am using to deploy the application is the standard one used by the JHipster project. This means that the web application inside Docker runs with UID = 1000.
The user with UID = 1000 doesn't have root access (which is of course correct, for security reasons).
DOCKER COMPOSE AND VOLUME CONFIGURATIONS
The first docker compose configuration I tried was the following one:
version: '3.8'
services:
  app:
    image: scriba_webapp
    environment:
      - ...
    ports:
      - ...
    volumes:
      - scriba-db:/tmp/h2
volumes:
  scriba-db:
but when the web application starts I receive a "permission denied" error. Looking at the logs, the problem seems to occur when the application tries to initialize the DB (that is, creating and writing a file in the /tmp/h2 container directory).
Here the stacktrace:
org.h2.message.DbException: Log file error: "/tmp/h2/db.trace.db", cause: "java.io.FileNotFoundException: /tmp/h2/db.trace.db (Permission denied)" [90034-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.TraceSystem.logWritingError(TraceSystem.java:294)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:315)
at org.h2.message.TraceSystem.writeFile(TraceSystem.java:263)
at org.h2.message.TraceSystem.write(TraceSystem.java:247)
at org.h2.message.Trace.error(Trace.java:180)
at org.h2.engine.Database.setBackgroundException(Database.java:2230)
at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
at org.h2.mvstore.MVStore.handleException(MVStore.java:2877)
at org.h2.mvstore.MVStore.panic(MVStore.java:481)
at org.h2.mvstore.MVStore.<init>(MVStore.java:402)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:3579)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:170)
at org.h2.mvstore.db.MVTableEngine.init(MVTableEngine.java:103)
at org.h2.engine.Database.getPageStore(Database.java:2659)
at org.h2.engine.Database.open(Database.java:675)
at org.h2.engine.Database.openDatabase(Database.java:307)
at org.h2.engine.Database.<init>(Database.java:301)
at org.h2.engine.Engine.openSession(Engine.java:74)
at org.h2.engine.Engine.openSession(Engine.java:192)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:171)
at org.h2.engine.Engine.createSession(Engine.java:166)
at org.h2.engine.Engine.createSession(Engine.java:29)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:340)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:173)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:152)
at org.h2.Driver.connect(Driver.java:69)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:266)
at org.springframework.boot.autoconfigure.liquibase.DataSourceClosingSpringLiquibase.afterPropertiesSet(DataSourceClosingSpringLiquibase.java:46)
at tech.jhipster.config.liquibase.AsyncSpringLiquibase.initDb(AsyncSpringLiquibase.java:118)
at tech.jhipster.config.liquibase.AsyncSpringLiquibase.afterPropertiesSet(AsyncSpringLiquibase.java:103)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1845)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1782)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:602)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1154)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:908)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338)
at it.brainylabs.scriba.ScribaApp.main(ScribaApp.java:69)
Caused by: org.h2.jdbc.JdbcSQLNonTransientException: Log file error: "/tmp/h2/db.trace.db", cause: "java.io.FileNotFoundException: /tmp/h2/db.trace.db (Permission denied)" [90034-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
... 56 more
Caused by: java.io.FileNotFoundException: /tmp/h2/db.trace.db (Permission denied)
at java.base/java.io.FileOutputStream.open0(Native Method)
at java.base/java.io.FileOutputStream.open(Unknown Source)
at java.base/java.io.FileOutputStream.<init>(Unknown Source)
at java.base/java.io.FileOutputStream.<init>(Unknown Source)
at org.h2.store.fs.FilePathDisk.newOutputStream(FilePathDisk.java:306)
at org.h2.store.fs.FileUtils.newOutputStream(FileUtils.java:239)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:311)
... 53 more
Exception in thread "Logback shutdown hook [default]" java.util.ConcurrentModificationException
at java.base/java.util.ArrayList$Itr.checkForComodification(Unknown Source)
at java.base/java.util.ArrayList$Itr.next(Unknown Source)
at ch.qos.logback.classic.LoggerContext.fireOnReset(LoggerContext.java:323)
at ch.qos.logback.classic.LoggerContext.reset(LoggerContext.java:226)
at ch.qos.logback.classic.LoggerContext.stop(LoggerContext.java:348)
at ch.qos.logback.core.hook.ShutdownHookBase.stop(ShutdownHookBase.java:39)
at ch.qos.logback.core.hook.DelayingShutdownHook.run(DelayingShutdownHook.java:57)
at java.base/java.lang.Thread.run(Unknown Source)
The next configuration I tried was using the "old" bind mounts instead of the new docker volumes:
version: '3.8'
services:
  app:
    image: scriba_webapp
    environment:
      - ...
    ports:
      - ...
    volumes:
      - type: bind
        source: D:\...\h2
        target: /tmp/h2
With this everything works fine: the web application starts up correctly and I can see the DB files in the source directory. So, without any other changes to the project, a bind mount works correctly while a Docker volume doesn't.
The final test I did was to configure the docker volume like this:
version: '3.8'
services:
  app:
    image: ...
    environment:
      - ...
    ports:
      - ...
    volumes:
      - scriba-db
volumes:
  scriba-db:
    driver_opts:
      type: "volume"
      o: "uid=1000,rw"
      device: **???**
Since the original error was a "permission denied", I figured I should try to give that user read/write access to the Docker volume, but I didn't manage to do so, since it's not clear to me how the volume option device should be configured for this case.
If I configure the driver_opts like this:
scriba-db:
  driver_opts:
    type: "tmpfs"
    o: "uid=1000,rw"
    device: "tmpfs"
The web application starts correctly, but I can't see the DB files in the Docker volume, since tmpfs (if I understood this correctly) means that the filesystem is just temporary and will be deleted once the container stops.
THE QUESTIONS
How come a bind mount works just like that, while a Docker volume has problems?
Let's say I want to use a Docker volume and save those DB files on the host machine: how should I configure the driver_opts on a Windows machine?
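One pattern worth trying for the second question (a sketch, not verified on Docker Desktop; the device value below is just the bind-mount source from above, and the exact Windows path syntax depends on the Docker Desktop backend) is a local volume whose driver_opts bind it to a host directory, which behaves much like the bind mount that already works:
volumes:
  scriba-db:
    driver: local
    driver_opts:
      type: none            # no filesystem type: treat "device" as an existing directory
      o: bind               # mount option: bind-mount the device path
      device: "D:\...\h2"   # host directory (elided here, as in the question); must already exist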

Spring cloud server Client does not see properties git uri

I have a server and a client, built by following https://spring.io/guides/gs/centralized-configuration/, but my client does not run because of:
java.lang.IllegalStateException: Unable to load config data from 'http://localhost:8888'
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferences(StandardConfigDataLocationResolver.java:128)
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.resolve(StandardConfigDataLocationResolver.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.lambda$resolve$1(ConfigDataLocationResolvers.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:126)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:107)
at org.springframework.boot.context.config.ConfigDataImporter.resolve(ConfigDataImporter.java:101)
at org.springframework.boot.context.config.ConfigDataImporter.resolve(ConfigDataImporter.java:93)
at org.springframework.boot.context.config.ConfigDataImporter.resolveAndLoad(ConfigDataImporter.java:81)
at org.springframework.boot.context.config.ConfigDataEnvironmentContributors.withProcessedImports(ConfigDataEnvironmentContributors.java:121)
at org.springframework.boot.context.config.ConfigDataEnvironment.processInitial(ConfigDataEnvironment.java:242)
at org.springframework.boot.context.config.ConfigDataEnvironment.processAndApply(ConfigDataEnvironment.java:230)
at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:97)
at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:89)
at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEnvironmentPreparedEvent(EnvironmentPostProcessorApplicationListener.java:100)
at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEvent(EnvironmentPostProcessorApplicationListener.java:86)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131)
at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:82)
at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:63)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:117)
at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:111)
at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:62)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:362)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1311)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1300)
at bobryakov.dmitry.ConfigurationClientApplication.main(ConfigurationClientApplication.java:9)
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/'
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferencesForFile(StandardConfigDataLocationResolver.java:214)
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferences(StandardConfigDataLocationResolver.java:125)
... 30 common frames omitted
As I understand it, this is because the client doesn't see its files in the server's spring.cloud.config.server.git.uri (or spring.cloud.config.server.native.search-locations / spring.cloud.config.server.native.searchLocations). I tried different notations:
file:///C:/Users/u_m167y/Desktop/config
file:///C:/Users/u_m167y/Desktop/config/
file://C:/Users/u_m167y/Desktop/config
C:/Users/u_m167y/Desktop/config
C:\\\\Users\\\\u_m167y\\\\Desktop\\\\config\\\\
But nothing works. I have a git repo in that directory containing .git and a-bootiful-client.properties. In the client I have spring.application.name=a-bootiful-client. What is wrong here?
On Windows, you need an extra "/" in the file URL if it is absolute with a drive prefix (for example, file:///${user.home}/config-repo).
Refer to the spring-cloud-config documentation.
EDIT:
The above link contains the following listing, which shows a recipe for creating the git repository in the preceding example:
$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"
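Putting this together, a hedged sketch of both sides' configuration, using the directory from the question and the import mechanism from the referenced guide (the port 8888 is taken from the error message):
# Config server: application.properties
server.port=8888
spring.cloud.config.server.git.uri=file:///C:/Users/u_m167y/Desktop/config
# Client: application.properties
spring.application.name=a-bootiful-client
spring.config.import=optional:configserver:http://localhost:8888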

failed to launch apache.spark.master

Whenever I run the start-master.sh command on my local machine I get the following error. Please help me fix this issue.
Terminal error
This is the error I get in the terminal:
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-2.0.1-bin-hadoop2.6/logs/spark-andani-org.apache.spark.deploy.master.Master-1-andani.sakha.com.out
failed to launch org.apache.spark.deploy.master.Master:
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Log error
If I check the Spark log file, this is the error:
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'sparkMaster' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'sparkMaster' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
The error is because sparkMaster was not able to bind to your internal IP.
Check your /etc/hosts file to see whether it points to the proper hostname; your previous IP address might have changed.
Reconfigure it and run the command once again.
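As a sketch (the hostname is taken from the log file name above; the IP is a placeholder), the check and an explicit-bind fallback might look like:
hostname -f                      # what does this machine think its name is?
getent hosts andani.sakha.com    # compare against the entry in /etc/hosts
# If the entry is stale, fix /etc/hosts, e.g.:
#   192.168.1.42   andani.sakha.com
# Or bind the master to an explicit, currently valid address:
export SPARK_MASTER_HOST=192.168.1.42   # placeholder IP
./sbin/start-master.sh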

Zeppelin won't start with SSL enabled

I'm using Ambari 2.4.0.1 with HDP 2.5 and trying to configure Zeppelin to use SSL. When I set the zeppelin.ssl property to "true" I always get this error when starting the server:
ERROR [2017-01-24 02:13:43,456] ({main} ZeppelinServer.java[main]:118) - Error while running jettyServer
java.io.FileNotFoundException: /etc/zeppelin/2.5.3.0-37/0/null (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.eclipse.jetty.util.resource.FileResource.getInputStream(FileResource.java:290)
at org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:43)
at org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:871)
at org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:273)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:64)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:256)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
I have no idea what file it's trying to look for in /etc/zeppelin/2.5.3.0-37/0/ (note that the path in the error ends in /null).
The zeppelin.ssl.keystore.path is set to conf/keystore and the keystore file is at that location. It's a relative path under /usr/hdp/current/zeppelin-server, and the conf dir there is actually a symlink to /etc/zeppelin/2.5.3.0-37/0/
I have client auth set to false, but set the truststore path nonetheless, and that didn't seem to make any difference.
If I toggle the zeppelin.ssl setting to "false" the server starts normally.
Any thoughts on what could be going on?
OK, in Ambari the tooltip for the keystore path field says it should be a relative path, relative to zeppelin home. But just now on a whim I changed it to the absolute path, and now my server starts in SSL mode. I don't know if the docs are wrong, or if it's a code bug, but it works with an absolute path, so at least I have a path forward.
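For reference, the working settings look roughly like this (a sketch in zeppelin-site terms, assuming the keystore really sits at the symlink target mentioned above):
zeppelin.ssl=true
# absolute path instead of the relative "conf/keystore"
zeppelin.ssl.keystore.path=/usr/hdp/current/zeppelin-server/conf/keystore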

MapReduce ERROR UserGroupInformation - PriviledgedActionException

I am trying to run following MapReduce code in my local machine:
https://github.com/Jeffyrao/warcbase/blob/extract-links/src/main/java/org/warcbase/data/ExtractLinks.java
However, I met this exception:
[main] ERROR UserGroupInformation - PriviledgedActionException as:jeffy (auth:SIMPLE) cause:java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Resource file:/Users/jeffy/Documents/Eclipse/warcbase/map_backup.txt is not publicly accessable and as such cannot be part of the public cache.
Exception in thread "main" java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Resource file:/Users/jeffy/Documents/Eclipse/warcbase/map_backup.txt is not publicly accessable and as such cannot be part of the public cache.
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:144)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:155)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:625)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:391)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:394)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)
at org.warcbase.data.ExtractLinks.run(ExtractLinks.java:254)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.warcbase.data.ExtractLinks.main(ExtractLinks.java:270)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Resource file:/Users/jeffy/Documents/Eclipse/warcbase/map_backup.txt is not publicly accessable and as such cannot be part of the public cache.
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:140)
... 14 more
I think this problem occurs because I am trying to add a file to the DistributedCache (see my code at lines 81-86 and line 235). Any suggestions are welcome. Thanks!
I've run into a similar problem while running a Hadoop 2 job with the DistributedCache in a local environment. It turned out that Hadoop 2 not only verifies that the path itself has public execute and read permissions, but also that all of its ancestor directories have execute permission. So if "/" or "/Users" does not have 755 permissions, the file will still fail to be added to the public cache.
See the method static boolean ancestorsHaveExecutePermissions(FileSystem fs, Path path, LoadingCache<Path, Future<FileStatus>> statCache) in the Hadoop class FSDownload.java.
One solution could be granting execute permission on all of those directories (which sounds unsafe). A better solution is making sure that all resource files to be cached live under /tmp or another folder that has at least 755 permissions by default.
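To see which ancestor directory is missing the execute bit, something like this helps (namei is from util-linux and may not be available on macOS; the path is the one from the question):
namei -l /Users/jeffy/Documents/Eclipse/warcbase/map_backup.txt   # lists permissions of every path component
# Or simply stage the file somewhere world-traversable first:
cp /Users/jeffy/Documents/Eclipse/warcbase/map_backup.txt /tmp/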
I've run into a similar problem. I ran mahout seq2sparse with tf-idf in local mode, and it raised this error:
Exception in thread "main" java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Resource file:/root/title.tfidf/dictionary.file-0 is not publicly accessable and as such cannot be part of the public cache.
I found that the permissions of /root are 750 by default:
drwxr-x---. 12 root root 4096 16:04 root
So I changed the permissions of /root:
chmod 755 /root
and then it worked. So, thanks Yitong.
I had to change the permissions of only my home directory, as follows:
chmod go+rx /home/hadoop
to fix the problem, as / and /home already have rx permissions for group and other users on my system. Here "hadoop" is my Linux login/user name.
