I am trying to mount an S3 bucket on an EC2 instance. To keep things simple, since the bucket has root-only permissions, I am using the root user on the instance. Running
s3fs mybucket ./mybucket/
correctly mounts the bucket, so if I run ls ./mybucket
I get all the directories.
The problem arises when I try to list one of the subdirectories with ls ./mybucket/subdirectory; I get the following error:
ls: reading directory '.': Software caused connection abort
From that moment on, any request will yield the error
ls: cannot open directory '.': Transport endpoint is not connected
Am I doing something wrong? Is there a way to fix this?
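For reference, a sketch of how the mount can be reset and re-run with debug output turned on, assuming a stock s3fs-fuse build:
# detach the dead FUSE endpoint first
fusermount -u ./mybucket
# remount in the foreground with verbose s3fs and libcurl logging
s3fs mybucket ./mybucket/ -f -o dbglevel=info -o curldbg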
I'm trying to migrate a bucket from one MinIO server to another using the mc client. The command I'm using is mc mirror (mc mirror --remove --overwrite --preserve minioproducao/compartilhado minioteste/compartilhado). The command works fine, but I was checking the permissions inside the bucket on both servers and I realized that they are different. For example:
I connected inside both Kubernetes containers and ran ls -l inside the bucket directory.
On the origin it showed: drwxr-xr-x arquivo.JPG (note: it isn't a file, it's a directory, and inside it there are two files: part.1 and xl.meta).
On the destination it showed: -rw-r--r-- arquivo.JPG (it was copied as a file, not a directory like on the origin MinIO server's bucket).
I'm wondering if there is a way to make an exact copy of what's on my origin MinIO server on the other one?
Thank you in advance!
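For reference, a sketch of how the two sides could be compared through the S3 API with mc itself, rather than with ls on the backend disks (assuming the same minioproducao and minioteste aliases as above):
# list objects through the S3 API on each side
mc ls --recursive minioproducao/compartilhado
mc ls --recursive minioteste/compartilhado
# show differences in object names and sizes between the two buckets
mc diff minioproducao/compartilhado minioteste/compartilhado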
I'm trying to get Drill up and running on my machine. However, whenever I start embedded mode (bin/drill-embedded from Bash), I get this error:
Error: Failure in starting embedded Drillbit: java.lang.IllegalStateException: Local udf directory [/tmp/drill/udf/udf/local] must be writable for application user (state=,code=0)
If I try to run a query at this point, it'll give back:
No current connection
Any idea how to fix this? I've tried starting with a clean shell with no luck. Is it a permissions issue?
You have to make the directory /tmp/drill/udf/udf/local writable. Since it is a directory under /tmp, you might need root access (or sudo) to change the permissions. To grant them, use this:
chmod -R 777 /tmp/drill/udf/udf/local
Also make sure the user has at least execute (search) permission on the parent directories; otherwise you will get a permission-denied error again.
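A quick way to check the permissions of every component along the path, assuming util-linux's namei is available:
# print owner, group, and mode for each path component
namei -l /tmp/drill/udf/udf/local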
I have installed Elasticsearch on an Amazon Linux machine using the latest rpm package from their website. After that, I attached an EBS volume and created a directory on this volume. I want this directory to be the data directory of Elasticsearch. So I first started the Elasticsearch service with the defaults, then created a new directory in the ec2-user home directory:
mkdir my_data
Then I changed the path.data in the /etc/elasticsearch/elasticsearch.yml file to point to this new directory
path.data: /home/ec2-user/my_data
Then I changed the ownership of this directory:
sudo chown -R elasticsearch:elasticsearch /home/ec2-user/my_data
So, currently the permissions look like this:
[ec2-user@ip-XXXXXX ~]$ ls -lrt
total 28632
drwxrwxr-x 2 elasticsearch elasticsearch 4096 Feb 4 06:18 my_data
However, when I try to start elasticsearch, I get the error:
Starting elasticsearch: Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/home/ec2-user/my_data)
Likely root cause: java.nio.file.AccessDeniedException: /home/ec2-user/my_data
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:383)
at java.nio.file.Files.createDirectory(Files.java:630)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:734)
at java.nio.file.Files.createDirectories(Files.java:720)
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)
at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:256)
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:212)
at org.elasticsearch.bootstrap.Security.configure(Security.java:118)
at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
[FAILED]
I found it surprising, but in the latest version of Elasticsearch, if you create a data directory inside another user's home, ES is unable to access it. Logically that makes sense, though. I suggest that you either mount an external disk for Elasticsearch or create a data directory under /home/, parallel to ec2-user, so your directory has the path /home/my-data. It will work like a charm. :)
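A minimal sketch of the second option, assuming the elasticsearch user and group created by the rpm package:
# create a data directory at the top level of /home, parallel to ec2-user
sudo mkdir /home/my-data
sudo chown -R elasticsearch:elasticsearch /home/my-data
# then point path.data in /etc/elasticsearch/elasticsearch.yml at it:
# path.data: /home/my-data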
Thanks,
Bharvi
In case this helps anyone with the problem that I was seeing...
This seems to be an oddity with java.nio.file.Files.createDirectories. The doc says "Unlike the createDirectory method, an exception is not thrown if the directory could not be created because it already exists." In your case the folder exists, so you should not get an exception. But the existence check done in UnixFileSystemProvider is via mkdir, which will throw an access-denied exception before it throws an already-exists exception.
The access-denied exception you are seeing is therefore not that Elasticsearch doesn't have access to /home/ec2-user/my_data, but rather that it doesn't have access to make that directory. So the solution is to fix the permission problem that is preventing Elasticsearch from making the directory /home/ec2-user/my_data. For you this would be to make /home/ec2-user writable by elasticsearch, or to create a path like /home/ec2-user/my_data_holder/my_data and then make /home/ec2-user/my_data_holder writable by elasticsearch.
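A sketch of that second option in shell terms (my_data_holder is just the placeholder name from above):
# create a holder directory that the elasticsearch user can write into
sudo mkdir -p /home/ec2-user/my_data_holder
sudo chown elasticsearch:elasticsearch /home/ec2-user/my_data_holder
# elasticsearch can now create my_data itself on startup, so set
# path.data: /home/ec2-user/my_data_holder/my_data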
It's my first time deploying something to AWS using Elastic Beanstalk, and so far I've gotten to the point where I can run eb create. The first time I did this I got Errno 13. Specifically, it got to the point where it tried to create the application version archive and then:
Creating application version archive "app-150423_212419".
ERROR: IOError :: [Errno 13] Permission denied: './.viminfo'
I learned that this is a root access issue and so I followed a step found here that stated I should try the bash command:
sudo chown -R test /home/test
Here test = my user name and home = Users.
This got me to the error ERROR: OSError :: [Errno 2] No such file or directory: './.collab/ext'
I'm really not sure what that directory is supposed to be or why it's trying to access it. How can I choose a proper directory so that I can get things up and running?
eb create will attempt to zip up your entire directory and deploy it to an Elastic Beanstalk environment. I am not sure why certain files seem not to exist (maybe you have some symlinks?).
It also looks as if you might be trying to run eb create in your home directory. Don't do that. In fact, remove the .elasticbeanstalk folder from your home directory right now.
All you need to do is go into your project directory, run eb init, then eb create.
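Roughly (the ~/my-project path is a placeholder for your actual project directory):
# clear out the state that the earlier run left in your home directory
rm -rf ~/.elasticbeanstalk
# initialize and create the environment from the project directory instead
cd ~/my-project
eb init
eb create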
Strange problem. We use a ColdFusion web server to access a NAS. We have tested this issue to the nines and can't figure out the cause.
The problem: we get empty result sets doing a cfdirectory on the share against a known directory with the correct casing. However, we can ls from the CF server as the CF user and see everything without permission errors.
Tests we've tried:
Making a test file to be sure the path we are testing is correct - fail.
Doing a directory listing from Python - works.
Doing a CFFILE read and write from the offending web server to the directory in question - works.
Doing a CFDIRECTORY on a local directory - works.
Doing ls -la on the directory sudo'd to cfuser - works.
Doing ls -la as root on the directory - works.
Changing cf user permissions to root and retrying CFDIRECTORY - fail.
Changing mount to mount as root user and retrying CFDIRECTORY - fail.
chown-ing the files and the parent dir and retrying CFDIRECTORY - fail.
I can only think of a couple of things. First, make sure your casing is correct, since filesystem reads are case-sensitive in CF on Linux.
Secondly, I have not had much luck reading directly from SMB in CF. What has worked for me in the past is mounting a drive using SMB FUSE as a normal mount point and making sure the owner/group matches the CF user.
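The same idea works with the kernel CIFS client instead of FUSE. A rough sketch, where //nas/share, /mnt/nas, and cfuser are placeholders for your own values:
# mount the share so every file appears owned by the CF user
sudo mount -t cifs //nas/share /mnt/nas \
    -o uid=cfuser,gid=cfuser,credentials=/etc/nas-credentials
# /etc/nas-credentials holds username= and password= lines, mode 600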