I am able to mount the OSS bucket on the ECS instance normally:
./ossutil mb oss://<bucketname> --acl=public-read --endpoint=oss-ap-south-1.aliyuncs.com
ossfs <bucket-name> <local-folder-path> -ourl=http://oss-ap-south-1.aliyuncs.com
but now I want to mount a folder inside the OSS bucket to the ECS instance. The goal is to use the same OSS bucket for multiple instances by differentiating them with folders inside the bucket.
How can I do that?
Thanks
In my opinion, in this scenario you could try to work with permissions assigned to a bucket that is mounted into a directory. However, this might simply not work:
https://github.com/aliyun/ossfs/wiki/FAQ-EN#11
https://github.com/aliyun/ossfs/wiki/FAQ-EN#12
The other idea could be a dedicated RAM policy created for each directory. For example:
directory1 -> RAM policy 1: full read-write access to directory1 and read-only access to all other directories.
directory2 -> RAM policy 2: full read-write access to directory2 and read-only access to all other directories.
And so on (a sketch of such a policy follows below).
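A minimal sketch of what such a per-directory RAM policy could look like, assuming a bucket named mybucket and a prefix directory1/ (the bucket name, prefix, and file name are placeholders, and the exact action list may need adjusting for your case):
# Hypothetical policy: read-only on the whole bucket, read-write on directory1/
cat > ram-policy-directory1.json <<'EOF'
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["oss:GetObject", "oss:ListObjects"],
      "Resource": ["acs:oss:*:*:mybucket", "acs:oss:*:*:mybucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["oss:*"],
      "Resource": ["acs:oss:*:*:mybucket/directory1", "acs:oss:*:*:mybucket/directory1/*"]
    }
  ]
}
EOF
# Attach this policy document to a dedicated RAM user (in the RAM console or
# via the aliyun CLI) and use that user's AccessKey in /etc/passwd-ossfs on
# the instance that should own directory1.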
Related
I have a web application using Alibaba Cloud OSS, which is supposed to be synchronized to the cloud from a local folder on a Linux server. I can see the files being uploaded to the Linux server, but they are not synchronized to OSS.
I have reconfigured the entire setup using ossutil and ossfs, but the issue remains.
Below is the error I get when I try to run the command:
ossfs -ourl=http://oss-ap-south-1.aliyuncs.com
ossfs: There is no enough disk space for used as cache(or temporary) directory by ossfs.
Did you follow this guide?
For me, mounting OSS on Linux works when I type on the command line:
ossfs bucketname /mnt/directory -ourl=http://oss-your-region.aliyuncs.com
If your Linux machine is in Alibaba Cloud, you can use the internal endpoint:
-ourl=http://oss-your-region-internal.aliyuncs.com
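Regarding the disk space error itself: ossfs keeps its cache/temporary files on the local disk (by default this is usually /tmp, though that is an assumption worth verifying against the ossfs docs for your version), so it is worth confirming there is enough free space before retrying the mount:
df -h /tmp      # free space where ossfs places cache/temporary files
df -h           # after a successful mount, the ossfs filesystem also shows up here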
You need to mount the OSS bucket to the specified directory as follows to synchronize the Linux server and OSS.
To mount the OSS bucket to the directory:
ossfs bucket mountpoint -ourl=http://oss-your-region.aliyuncs.com
For instance, mount the bucket bucketName to the /tmp/ossfs directory. The AccessKeyId is abcdef, the AccessKeySecret is 123456, and the OSS endpoint is http://oss-cn-hangzhou.aliyuncs.com.
echo bucketName:abcdef:123456 > /etc/passwd-ossfs
chmod 640 /etc/passwd-ossfs
mkdir /tmp/ossfs
ossfs bucketName /tmp/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com
Note: The permissions on /etc/passwd-ossfs must be set correctly (640, as above), otherwise ossfs will reject the credentials file.
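Once mounted, a quick way to confirm that the Linux directory and OSS are actually synchronized is to write a test file through the mount point and list the bucket with ossutil (assuming ossutil is configured and in the current directory; the bucket and endpoint reuse the example above):
echo "hello from ecs" > /tmp/ossfs/test.txt
./ossutil ls oss://bucketName --endpoint=oss-cn-hangzhou.aliyuncs.com
# test.txt should appear in the listing; if it does not, re-check the
# credentials in /etc/passwd-ossfs and the endpoint/region.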
I'm using Serverless to deploy a Lambda function. I need to include an executable bin file, but when it is uploaded I don't have executable permissions, and I can't change the permissions after it is deployed. The only thing I can do is move the file to /tmp and change the permissions there. That works, but it adds a lot of overhead because I have to move the files on every invoke, since /tmp is ephemeral.
I know there is a known issue that Windows and Linux file permissions are different, so if you zip a file on Windows and unzip it on a Linux machine you will have problems with permissions, especially execution, and that is what happens when Serverless deploys the files.
Does anyone have a better workaround for this (rather than "deploy from a windows machine")?
Several articles have been extremely helpful in understanding Docker's volume and data management. These two in particular are excellent:
http://container-solutions.com/understanding-volumes-docker/
http://www.alexecollins.com/docker-persistence/
However, I am not sure if what I am looking for is discussed. Here is my understanding:
When running docker run -v /host/something:/container/something the host files will overlay (but not overwrite) the container files at the specified location. The container will no longer have access to the location's previous files, but instead only have access to the host files at that location.
When defining a VOLUME in a Dockerfile, other containers may share the contents created by the image/container.
The host may also view/modify a Dockerfile volume, but only after discovering the true mountpoint using docker inspect (usually somewhere like /var/lib/docker/vfs/dir/cde167197ccc3e138a14f1a4f7c....). However, this is hairy when Docker has to run inside a VirtualBox VM.
How can I reverse the overlay so that when mounting a volume, the container files take precedence over my host files?
I want to specify a mountpoint where I can easily access the container filesystem. I understand I can use a data container for this, or I can use docker inspect to find the mountpoint, but neither solution is a good solution in this case.
The docker 1.10+ way of sharing files would be through a volume, as in docker volume create.
That means that you can use a data volume directly (you don't need a container dedicated to a data volume).
That way, you can share and mount that volume in a container which will then keep its content in said volume.
That is more in line with how a container works: isolating memory, CPU and filesystem from the host. That is why you cannot "mount a volume and have the container's files take precedence over the host files": it would break that isolation and expose the container's content to the host.
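A minimal sketch of that approach (the image and volume names here are made up for illustration): when an empty named volume is mounted over a path that already contains files in the image, Docker copies those files into the volume at mount time, so the container's content is what you see, and it can then be shared with other containers:
docker volume create appdata
# /container/something already has files baked into the image; on first use
# they are copied into the empty "appdata" volume.
docker run -d --name app1 -v appdata:/container/something myimage
# The same data can be mounted into other containers:
docker run --rm -v appdata:/data alpine ls /data
# Its location on the Docker host can be found with:
docker volume inspect appdata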
Begin your container's script by copying files from a read-only bind mount reflecting the host files to a work location in the container. End the script by copying the necessary results from the container's work location back to the host, using either the same or a different mount point.
As an alternative to the end-of-script copy, run the container without automatically removing it at the end, then run docker cp CONTAINER_NAME:CONTAINER_DIR HOST_DIR, then docker rm CONTAINER_NAME.
As an alternative to copying results back to the host, keep them in a separate "named" volume, provided that the container has it mounted (type=volume,src=datavol,dst=CONTAINER_DIR/work). Use the named volume with other docker run commands to retrieve or use the results.
The input files may be modified on the host during development between repeated runs of the container. Avoid shadowing them with stale files in the named volume; beginning the container script by copying the input files from the host helps with this.
Using a named volume also helps with running the container read-only. (You may still need --tmpfs /tmp for temporary files, or --tmpfs /tmp:exec if some container commands create and run executable code in the temporary location.)
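A rough sketch of this copy-in/copy-out pattern (the image name, workload command, and host paths are placeholders):
docker volume create datavol
# Host input is bind-mounted read-only; results are kept in the named volume.
# ("run-processing" stands in for the real workload command.)
docker run --name job1 \
  -v "$PWD/input:/input:ro" \
  --mount type=volume,src=datavol,dst=/work \
  myimage sh -c 'cp -r /input/. /work/ && run-processing /work'
# Afterwards, copy the results back to the host and remove the container:
docker cp job1:/work ./results
docker rm job1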
If I want to download all the contents of a directory on S3 to my local PC, which command should I use: cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
I just used version 2 of the AWS CLI. For the s3 commands, there is also a --dryrun option now to show you what will happen:
aws s3 cp --dryrun s3://bucket/filename /path/to/dest/folder --recursive
In case you need to use another profile, especially for cross-account access, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then, if you are accessing only a single file:
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is to use the AWS CLI.
Here's a walk-through:
Download and install the AWS CLI on your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both the source and destination paths and copies only the differences (by name, size, etc.).
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
If you have possible matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests, meaning cheaper and possibly faster transfers.
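A simple way to see this difference for yourself is to compare what each command would transfer using --dryrun, which only prints the operations without executing them (bucket and paths below reuse the earlier example names):
aws s3 sync s3://mybucket/dir localdir --dryrun            # lists only new/changed files
aws s3 cp s3://mybucket/dir localdir --recursive --dryrun  # lists everything, every time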
Question: Will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the way to copy/sync while keeping the S3 folder structure? I have many files in different S3 bucket folders, sorted by year, month and day, and I would like to copy them locally with the folder structure preserved.
I've read the docs and a few things still confuse me, mostly related to synced folders and database data.
I want to use the following folder structure on my host machine
ROOT
|- workFolder
||- project1
|||- project1DatabaseAndFiles
|||- project1WebRoot
||- project2
|||- project2DatabaseAndFiles
|||- project2WebRoot
||- project3
|||- project3DatabaseAndFiles
|||- project3WebRoot
And then create VMs where each VM's webroot points to the appropriate projectX/projectXWebRoot folder.
From what I've read, I can only specify one remote Sync DIR. (http://docs.vagrantup.com/v2/synced-folders/). But if I create a new VM I want to specify the project name too, thereby selecting the correct host folder.
Is what I'm describing possible using Vagrant?
If I wanted another developer to use this environment, I'd like for them to have instant access to the database structure/setup etc without having to import any SQL files. Is this possible?
I'm hoping I'm just not understanding Vagrant's purpose, but this seems like a good use of shared VMs to me. Any pointers or articles that might help would be very welcome.
From what I've read, I can only specify one remote Sync DIR.
No, that is not true. You can always add more shared folders. From the manual:
This directive is used to configure shared folders on the virtual machine and may be used multiple times in a Vagrantfile.
This means you can define additional shared folders using:
config.vm.share_folder "name", "/path/on/vm", "path/on/host"
If I wanted another developer to use this environment, I'd like for them to have instant access to the database structure/setup etc without having to import any SQL files. Is this possible?
Yes, you can alter the data storage path of, say, MySQL to store the data on a share on the host, so that the data is not lost when the VM is destroyed.
However, this is not as simple as it sounds. If you're using the MySQL cookbook (again, assuming you're using MySQL), you have to modify it so that the shared folder is mounted with the uid and gid of the mysql user, otherwise that user can't write to it. You can mount a share manually like this:
mount -t vboxsf -o uid=`id -u mysql` -o gid=`id -g mysql` sharename /new/data/dir
Also, if you're using Ubuntu or Debian Wheezy, AppArmor needs to be configured differently for MySQL, as it does not allow writes to the newly configured data directory. This can be done by writing
/new/data/dir r,
/new/data/dir/** rwk,
to /etc/apparmor.d/local/usr.sbin.mysqld. This version of the mysql cookbook supports this behaviour already, so you can look up how it does that.
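For completeness, a rough sketch of the remaining step of pointing MySQL at the new data directory, assuming Debian/Ubuntu-style default paths (/var/lib/mysql and /etc/mysql/my.cnf; the cookbook can normally handle this through its attributes, so treat this as an outline rather than the exact recipe):
service mysql stop
cp -a /var/lib/mysql/. /new/data/dir/                       # copy the existing databases onto the share
sed -i 's|^datadir.*|datadir = /new/data/dir|' /etc/mysql/my.cnf
service mysql start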