Elasticsearch crashes when data path is updated - elasticsearch

The Elasticsearch setup works fine with the default configuration.
But when I updated the path.data setting in elasticsearch.yml, it crashed with the error below:
[2015-11-19 12:39:56,194][ERROR][bootstrap ] Exception
java.lang.IllegalStateException: Unable to access 'path.data' (/home/hadoop/bigdata/data/elasticsearch)
at org.elasticsearch.bootstrap.Security.addPath(Security.java:197)
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:170)
at org.elasticsearch.bootstrap.Security.configure(Security.java:100)
at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:181)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:159)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Caused by: java.nio.file.AccessDeniedException: /home/hadoop/bigdata/data
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:308)
at java.nio.file.Files.createDirectories(Files.java:702)
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:218)
at org.elasticsearch.bootstrap.Security.addPath(Security.java:195)
... 6 more
I have copied the elasticsearch directory from its /var/lib location with permissions preserved, but no success.
Can anybody please help me resolve this error?
Thanks,
Sanjay Bhosale

This error occurs because the user "elasticsearch" does not have permission to access the folder.
Try making the "elasticsearch" user (the default user Elasticsearch runs as) the owner of the folder using the command below:
sudo chown elasticsearch: /home/hadoop/bigdata/data/elasticsearch

In addition to making the user elasticsearch the owner and group of the data folder, that user also needs execute (+x) permission on every directory along the path to the configured data path (in your case, /home/hadoop/bigdata/data/elasticsearch).
That is, if a parent directory of the Elasticsearch path.data is not owned by the user elasticsearch (as in your case, where the parent folder belongs to the user hadoop), you should check each level of the path and make sure it has the o+x permission bit set (with chmod), so that elasticsearch, as "others", is allowed to traverse it.
I learned this solution from another question: Elasticsearch cannot open log file: Permission denied.
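As a sketch of that fix, the snippet below reproduces the situation in a scratch directory tree standing in for /home/hadoop/bigdata/data/elasticsearch; on the real system you would run the same chmod o+x against /home/hadoop, /home/hadoop/bigdata, and /home/hadoop/bigdata/data:

```shell
# Scratch tree standing in for /home/hadoop/bigdata/data/elasticsearch;
# on the real system, chmod the actual parent directories instead.
base=$(mktemp -d)
mkdir -p "$base/hadoop/bigdata/data/elasticsearch"
chmod 700 "$base/hadoop"                 # parent closed to "others"
# Fix: let "others" (including the elasticsearch user) traverse each level
chmod o+x "$base/hadoop" "$base/hadoop/bigdata" "$base/hadoop/bigdata/data"
stat -c '%A' "$base/hadoop"              # prints drwx-----x
```

Note that o+x alone grants traversal only, not listing (o+r) or writing (o+w), so the hadoop user's files stay private.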

Related

Apache Drill: Local udf directory must be writable for application user error

I'm trying to get Drill up and running on my machine. However, whenever I enter drill-embedded mode (bin/drill-embedded on Bash), I get this error:
Error: Failure in starting embedded Drillbit: java.lang.IllegalStateException: Local udf directory [/tmp/drill/udf/udf/local] must be writable for application user (state=,code=0)
If I try to run a query at this point, it'll give back:
No current connection
Any idea how to fix this? I've tried starting with a clean shell with no luck. Is it a permissions issue?
You have to give the directory /tmp/drill/udf/udf/local write access. Since it is a directory under /tmp, you might need root access (or sudo) to change its permissions. To grant access, use this:
chmod -R 777 /tmp/drill/udf/udf/local
Also make sure the user has at least read and execute permission on the parent directories; otherwise you will get a permission denied error again.
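A minimal sketch of both steps, assuming the udf path from the error message; creating the directory up front also sidesteps the case where it does not exist at all:

```shell
# Create the local udf directory if it is missing (path from the error),
# then open it up so the user running the Drillbit can write to it.
mkdir -p /tmp/drill/udf/udf/local
chmod -R 777 /tmp/drill/udf/udf/local
ls -ld /tmp/drill/udf/udf/local          # should show drwxrwxrwx
```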

Elasticsearch 5.5 - Error when using custom logs directory: Unable to create logger at ''

I am trying to install Elasticsearch as a Windows service. I set the DATA_DIR and LOG_DIR environment variables to change the data and logs paths.
If LOG_DIR does not exist yet and is only one level deep, the directory is created (as expected).
The problem is that when I specify a nested LOG_DIR whose directories don't exist yet, it throws the error:
Unable to create logger at ''
For example:
LOG_DIR=D:/test/logs
If this location doesn’t exist, the error will occur.
Is there any way to tell ES to create the directory recursively?
Thank you!
The logs directory should be created automatically, but Elasticsearch will not create directories recursively; that has to be done by the user.
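So the workaround is to create the nested directory yourself before starting the service. A sketch, with a scratch path standing in for D:/test/logs (on Windows, mkdir in cmd.exe or PowerShell likewise creates missing intermediate directories):

```shell
# Stand-in for LOG_DIR=D:/test/logs from the question; mkdir -p creates
# every missing level so Elasticsearch finds the directory at startup.
LOG_DIR=$(mktemp -d)/test/logs
mkdir -p "$LOG_DIR"
```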

run.as option does not work for users other than the nifi user

I want to run my NiFi application as ec2-user rather than the default nifi user. I changed run.as=ec2-user in bootstrap.conf, but it did not work. NiFi will not start, and I get the following error when starting the service:
./nifi.sh start
nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /opt/nifi/current
Bootstrap Config File: /opt/nifi/current/conf/bootstrap.conf
User Runnug Nifi Application : sudo -u ec2-user
Error: Could not find or load main class org.apache.nifi.bootstrap.RunNiFi
Any pointer to this issue?
This is most likely a file permission problem, which is not covered by installing the service with nifi.sh install. A summary of the required permissions includes:
Read access to the entire distribution in the NIFI_HOME directory
Write access to the NIFI_HOME directory itself - NiFi will create a number of directories and files at runtime including logs, work, state, and various repositories.
Write access to the bin directory
Write access to the conf directory
Write access to the lib directory, and to all of the files in the lib directory
It is certainly possible to narrow the permissions by creating the working directories manually, and by adjusting NiFi's settings to rearrange the directory layout. But the permissions above should get you started.
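The list above can be sketched as follows, with a scratch directory standing in for /opt/nifi/current; on the real install you would typically just hand the whole distribution to the run.as user, as shown in the trailing comment:

```shell
# Scratch NIFI_HOME standing in for /opt/nifi/current.
NIFI_HOME=$(mktemp -d)
mkdir -p "$NIFI_HOME/bin" "$NIFI_HOME/conf" "$NIFI_HOME/lib"
# Read everywhere, write where NiFi needs it; u+rwX sets the execute
# bit on directories only, keeping plain files non-executable.
chmod -R u+rwX "$NIFI_HOME"
# On the real install, as root:
#   chown -R ec2-user:ec2-user /opt/nifi/current
```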

Elasticsearch: Changing data directory on an Amazon Linux Machine

I have installed Elasticsearch on an Amazon Linux machine using the latest rpm package from their website. After that, I attached an EBS volume and created a directory on this volume. I want this directory to be the data directory of Elasticsearch. So, I first started the elasticsearch service with the defaults. Then I created a new directory in the ec2-user home directory:
mkdir my_data
Then I changed the path.data in the /etc/elasticsearch/elasticsearch.yml file to point to this new directory
path.data: /home/ec2-user/my_data
Then I changed the ownership of this directory:
sudo chown -R elasticsearch:elasticsearch /home/ec2-user/my_data
So, currently the permissions look like this
[ec2-user@ip-XXXXXX ~]$ ls -lrt
total 28632
drwxrwxr-x 2 elasticsearch elasticsearch 4096 Feb 4 06:18 my_data
However, when I try to start elasticsearch, I get the error:
Starting elasticsearch: Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/home/ec2-user/my_data)
Likely root cause: java.nio.file.AccessDeniedException: /home/ec2-user/my_data
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:383)
at java.nio.file.Files.createDirectory(Files.java:630)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:734)
at java.nio.file.Files.createDirectories(Files.java:720)
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)
at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:256)
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:212)
at org.elasticsearch.bootstrap.Security.configure(Security.java:118)
at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
[FAILED]
I found it surprising, but in the latest version of Elasticsearch, if you create a data directory inside another user's home, ES is unable to access it, though logically that makes sense too. What I suggest is that you either mount an external hard disk for Elasticsearch or create a data directory under /home/ alongside ec2-user, so your directory has a path like /home/my-data. It will work like a charm. :)
Thanks,
Bharvi
In case this helps anyone with the problem that I was seeing...
This seems to be an oddity with java.nio.file.Files.createDirectories. The doc says "Unlike the createDirectory method, an exception is not thrown if the directory could not be created because it already exists." In your case the folder exists, so you should not get an exception. But the existence check done in UnixFileSystemProvider is via mkdir, which throws an access-denied exception before it would throw an already-exists exception.
The access-denied exception you are seeing, then, does not mean that elasticsearch lacks access to /home/ec2-user/my_data, but rather that it lacks permission to make that directory. So the solution is to fix the permission problem that prevents elasticsearch from creating /home/ec2-user/my_data. In your case, that means making /home/ec2-user writable by elasticsearch, or creating a path like /home/ec2-user/my_data_holder/my_data and making /home/ec2-user/my_data_holder writable by elasticsearch.
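A sketch of the second fix from that answer, with a scratch directory standing in for /home/ec2-user (on the real machine, the chown in the comment would run as root against the actual path):

```shell
# Scratch home standing in for /home/ec2-user.
home=$(mktemp -d)
# Holder directory that elasticsearch can own and write into, so
# Files.createDirectories can mkdir my_data without writing to $home itself.
mkdir -p "$home/my_data_holder/my_data"
chmod 755 "$home"                        # home stays non-writable to others
chmod 775 "$home/my_data_holder"
# On the real machine:
#   sudo chown -R elasticsearch:elasticsearch /home/ec2-user/my_data_holder
```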

While creating a view I got this error: cleartool: Error: Failed to record hostname in storage directory

I am creating a view and I got this error: cleartool: Error: Failed to record hostname in storage directory.
Check that root or the ClearCase administrators group has permission to write to this directory.
I tried all the possible troubleshooting steps using online help and other resources, but no luck. Can anyone help?
You can check the technote "Registering a VOB or creating a new View or VOB reports error: Failed to record hostname"
View Tool
Error creating view - '<view-tag>'
Fail to record hostname " HOST " in storage directory "<path to view storage>.
Check that root or the ClearCase administrators group has permission to write to
this directory.
Unable to create view "<global path to view storage>".
Cause
The cause of the error ultimately stems from the inability of ClearCase to successfully record the hostname in the .hostname file located in the storage directory of the VOB or view.
In addition to the various solutions, check whether that error persists on different clients and for different users.
If not, it is likely linked to your profile.
Check for instance your CLEARCASE_PRIMARY_GROUP and your credmap (credential mapping).
In my case, it was always a case of applying the right fix_prot to the view/vob storage.
For view storage, it was that exact sequence:
alias sfp sudo /usr/atria/etc/utils/fix_prot
sfp -force -rec -chown <owner> -chgrp <ClearCaseUsers> -chmod 775 /path/to/viewStorage/yourView.vws
sfp -force -root -chown <owner> -chgrp <ClearCaseUsers> /path/to/viewStorage/yourView.vws
Replace <owner> and <ClearCaseUsers> by the right owner and group.
On creating a view, other common problems for remotely stored views are:
1) The "clearcase" group on the client and server do not point to the same group. You would need to get clearbug2's of both hosts and compare the albd credentials and the host data in the registry data in the "clearcase_info" directory of the .zip file.
2) You are attempting to create a Unix-hosted view from a Windows client.

Resources