Unable to link the ossfs local directory to the cloud - alibaba-cloud

I have a web application that uses Alibaba Cloud OSS, where a local folder on a Linux server is synchronized to the cloud. I can see the files being uploaded to the Linux server, but they are not synchronized to OSS.
I have reconfigured the entire setup using ossutil and ossfs, but I still have the same issue.
Below is the error I get when I run the command:
ossfs -ourl=http://oss-ap-south-1.aliyuncs.com
ossfs: There is no enough disk space for used as cache(or temporary) directory by ossfs.

Did you follow this guide?
For me, mounting OSS on Linux works when I type the following on the command line:
ossfs bucketname /mnt/directory -ourl=http://oss-your-region.aliyuncs.com
If your Linux machine is in Alibaba Cloud, you can use the internal endpoint:
-ourl=http://oss-your-region-internal.aliyuncs.com
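Also note that the error you quoted is not about the endpoint: ossfs is reporting that the filesystem holding its cache/temporary directory does not have enough free space. As a quick check (assuming ossfs is using the default system temp directory, /tmp):
df -h /tmp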

You need to mount the OSS bucket to a local directory as follows to synchronize the Linux server with OSS.
To mount the OSS bucket to the directory:
ossfs bucket mountpoint -ourl=http://oss-your-region.aliyuncs.com
For instance, mount the bucket bucketName to the /tmp/ossfs directory. The AccessKeyId is abcdef, the AccessKeySecret is 123456, and the OSS endpoint is http://oss-cn-hangzhou.aliyuncs.com.
echo bucketName:abcdef:123456 > /etc/passwd-ossfs
chmod 640 /etc/passwd-ossfs
mkdir /tmp/ossfs
ossfs bucketName /tmp/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com
Note: the permissions on /etc/passwd-ossfs must be set as shown above, or ossfs will refuse to read the credentials file.
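After mounting, a quick sanity check (using the example mount point above) to confirm the bucket is actually attached:
mount | grep ossfs
df -h /tmp/ossfs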

Related

How Can I Transfer a File From Google Bucket to Google Compute Engine VM

I have set up a Windows server on Google Cloud. I also have a Google Storage Bucket. I want to transfer a zip to the VM. How can I do this?
I figured this out. Follow the directions to create your VM and storage bucket.
Start your VM and RDP into the server. Then, from within the VM instance, run:
\Google\Cloud SDK>gsutil -m cp -r gs://[bucket]/[your file] C:/users/[computer name]/[location]
Replace [computer name] with your Windows user name, and replace [location] with the destination you want to transfer the file to.
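For example, a filled-in version of that command with a hypothetical bucket, file, and destination path:
gsutil -m cp -r gs://my-backups/site.zip C:\Users\Administrator\Downloads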

How to access local files from a Docker image running Spring Boot using a file URI?

I have a Spring Boot application running in a Docker image. We can access local APIs using host.docker.internal, so is there any way to access local files using a file URL, e.g.:
file:///Users/ayush.singhal/Downloads/arguments.csv
I know about accessing the file by mounting a volume, but I am trying to do it from the program itself, running inside the Docker image.
You can add a mount point using "volumes" in the docker-compose file that points to a folder on the host system. By default, Docker isolates the container from everything else.
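For illustration, here is a rough sketch of the equivalent bind mount on the docker run command line, using the path from the question and a hypothetical image name:
docker run -v /Users/ayush.singhal/Downloads:/data:ro my-springboot-app
The application inside the container could then read the file as file:///data/arguments.csv instead of the host path.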

Move files from S3 to FTP using a bash script

I want to move files from Amazon S3 to FTP using a bash script...
I already tried
rsync --progress -avz -e ssh s3://folder//s3://folder/
Can anyone please suggest the correct command?
Thanks in advance
AWS built sync into their CLI:
aws s3 sync ./localdir s3://mybucket
You can sync your local directory to a remote bucket.
How to install aws cli?
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
If you don't want to take the CLI installation route, you can use Docker: share your local directory with a container that has the AWS CLI and perform the sync from there.
https://hub.docker.com/r/mesosphere/aws-cli/
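For example, a rough sketch of that Docker approach, assuming the image's entrypoint is the aws binary and that your credentials are exported in your environment (directory and bucket names are placeholders):
docker run --rm -v "$PWD/localdir:/data" -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY mesosphere/aws-cli s3 sync /data s3://mybucket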
Hope it helps.
You can't copy objects from S3 that way because S3 is not an SSH service; it is object storage. The easiest way is to mount the S3 bucket. Then you can use it like a normal volume and copy all the files to the target.
You should do that on the target system; otherwise you would have to copy all the files through a third server or computer.
https://www.interserver.net/tips/kb/mount-s3-bucket-centos-ubuntu-using-s3fs/
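A minimal sketch of that mount-then-copy approach with s3fs (bucket name, paths, and credential file location are placeholders):
mkdir -p /mnt/s3bucket
s3fs mybucket /mnt/s3bucket -o passwd_file=${HOME}/.passwd-s3fs
cp -r /mnt/s3bucket/folder /var/ftp/pub/
The credentials file is expected to contain ACCESS_KEY_ID:SECRET_ACCESS_KEY and should be chmod 600.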

Mounting a network folder to a Docker container on Windows 10

I'm trying to mount a network folder into a Docker container on Windows 10 with the following syntax. Using UNC paths does not work. I'm running it under Hyper-V with the stable version of Docker.
docker run -v \\some\windows\network\path:/some/local/container
Previously I was using Docker Toolbox, and I could map a network share to an internal folder with VirtualBox. I've tried adding the network share as a drive, but it doesn't show up as an available drive in the settings panel.
Currently I'm using mklink to mirror a local folder to the network folder, but I'd like to not depend on this as a solution.
Do this with Windows-based containers
See the Microsoft documentation: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
There you'll find information about how to mount a network drive as a volume in a Windows container.
Do this with Linux-based containers
This is currently (as of 2019-11-13) not possible, but you can use a plugin: https://github.com/ContainX/docker-volume-netshare
I haven't used it myself, so I have no experience with it; I just found it during my research and wanted to add it as a potential solution.
Recommended solution
While researching this topic, I came to think you should probably mount the drive from within the container. You can pass the required credentials either via a file or via parameters.
Example for credentials as file
You would need to install the cifs-utils package in the container, add
COPY ./.smbcredentials /.smbcredentials
to the Dockerfile, and then run the following command after the container has started:
sudo mount -t cifs -o file_mode=0600,dir_mode=0755,credentials=/.smbcredentials //192.168.1.XXX/share /mnt
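For completeness, a sketch of how the .smbcredentials file referenced above could be created (values are placeholders):
printf 'username=myuser\npassword=mypassword\ndomain=WORKGROUP\n' > .smbcredentials
chmod 600 .smbcredentials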
Potential duplicate
There was another Stack Overflow thread on this topic here:
Docker add network drive as volume on windows
The answer provided there (https://stackoverflow.com/a/57510166/12338776) didn't work for me though.

Downloading folders from aws s3, cp or sync?

If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
I just used version 2 of the AWS CLI. For the s3 commands, there is also a --dryrun option now to show you what will happen:
aws s3 cp --dryrun s3://bucket/filename /path/to/dest/folder --recursive
In case you need to use another profile, especially cross-account, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then, if you are accessing only a single file:
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
If you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is to use the AWS CLI.
Here's a walk-through:
Download and install the AWS CLI on your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both source and destination paths and copies only the differences (based on name, size, etc.).
The cp --recursive method lists only the source path and copies (overwrites) everything to the destination path.
If you have possible matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests - meaning cheaper and possibly faster.
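If you are unsure which behaviour you want, both subcommands accept --dryrun, so you can compare what each would actually transfer (bucket and paths are placeholders):
aws s3 sync s3://mybucket/dir localdir --dryrun
aws s3 cp s3://mybucket/dir localdir --recursive --dryrun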
Question: Will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the solution to copy/sync while keeping the S3 folder structure? I have many files in different S3 bucket folders sorted by year, month, and day, and I would like to copy them locally with the folder structure preserved.
