Permission problem when mirroring a bucket: migrating a bucket from one MinIO server to another - minio

I'm trying to migrate a bucket from one MinIO server to another using the mc client. The command I'm using is mc mirror (mc mirror --remove --overwrite --preserve minioproducao/compartilhado minioteste/compartilhado). The command works fine, but I was checking some
permissions inside the bucket on both servers and I realized that the permissions are different. For example:
I connected to both Kubernetes containers and ran ls -l inside the bucket directory.
On the origin it shows: drwxr-xr-x arquivo.JPG # note: it isn't a file, it's a directory, and inside it there are two files: part.1 and xl.meta
On the destination it shows: -rw-r--r-- arquivo.JPG # note: it was copied as a file, not as a directory like it is on the origin MinIO server's bucket
I'm wondering if there is a way to make an exact copy of what's on my origin MinIO server on the other one.
Thank you in advance!
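For reference, here is a minimal sketch of how the two buckets could be compared at the object level (rather than through the backend file layout) with mc, assuming the minioproducao and minioteste aliases from the command above are already configured:

# list objects that differ between the two buckets; empty output means they match
mc diff minioproducao/compartilhado minioteste/compartilhado
# compare the metadata of a single object on both servers
mc stat minioproducao/compartilhado/arquivo.JPG
mc stat minioteste/compartilhado/arquivo.JPG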

Related

GCP: use a Cloud Storage file to pull the code from git

I am trying to put a start-up script for a VM in a Cloud Storage file. This Cloud Storage file will contain the pull-related commands.
So the first step is to get an SSH key. I generated one from Bitbucket, but when I went to add the SSH key to the VM metadata, I saw there is already an SSH key there in the metadata.
How can I use this metadata SSH key to pull the repo from Bitbucket? I want to write a shell script that pulls the code, put it in the Cloud Storage file, and then give this file as the startup script for the VM.
I am stuck on how to access the SSH key. I saw somewhere
cat ~/.ssh/id_rsa.pub
I was guessing this file would show the keys, since I am able to see the SSH keys in the VM metadata, but it says the file is not found.
Am I looking in the wrong file?
Thanks,
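As a rough sketch only, a startup script stored in Cloud Storage could look like the following; the key path /root/.ssh/bitbucket_key, the repository address, and the clone target are placeholder names, and the sketch assumes the private key matching the Bitbucket key has already been placed on the VM:

#!/bin/bash
# hypothetical startup script: clone a Bitbucket repo over SSH
mkdir -p /root/.ssh
ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
GIT_SSH_COMMAND="ssh -i /root/.ssh/bitbucket_key -o IdentitiesOnly=yes" \
  git clone git@bitbucket.org:myteam/myrepo.git /opt/myrepo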

Scheduled process to copy files out of S3 into a temp-folder in Ubuntu 18.04

Looking for recommendations for the following scenario:
On an Ubuntu 18.04 server, every minute check for new files in an AWS S3 bucket and fetch only the newest files to a temp folder; at the end of the day, remove them.
It should be automated in bash.
I proposed using AWS S3 event notifications, queues, and Lambda, but it was decided that it is best to keep it simple.
I am looking for recommendations for the steps described below:
For step 1 I was doing aws s3 ls | awk (a function to filter files updated within the last minute),
then I realized it was better to do it with grep; a rough version of that filter is sketched after the list below.
0-Cron job should run from 7:00 to 23:00 every minute (see the crontab sketch below)
1-List the files uploaded to the S3 bucket during the past minute
2-List the files in a temp-encrypted folder on Ubuntu 18.04
3-Check whether the files listed in step 1 are already downloaded to the temp-encrypted folder from step 2
4-If the files are not already downloaded, download the newest files from the S3 bucket into temp-encrypted
5-At the end of the day (23:00), take a record of the last files fetched from S3
6-Run a cleanup script at the end of the day to remove everything in temp-encrypted
I attach a diagram with the intended process and infrastructure design.
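For steps 0 and 1, a rough sketch (the script path and bucket name are placeholders) could look like this; note that aws s3 ls reports the last-modified time in the machine's local timezone, so the filter compares against that:

# step 0: crontab entry, every minute from 07:00 through 23:59
# (trimming the end to exactly 23:00 would need a second entry)
# * 7-23 * * * /usr/local/bin/s3-check.sh
# step 1: list objects modified within the last minute
cutoff=$(date -d '1 minute ago' '+%Y-%m-%d %H:%M:%S')
aws s3 ls s3://my-bucket/ | awk -v c="$cutoff" '($1 " " $2) >= c {print $4}'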
The solution was like this:
1. Change FTPS to SFTP running on Ubuntu 18.04.
2. Change the main ports: randomport1 for SSH and randomport2 for SFTP.
3. Configure SFTP in the sshd_config file.
4. Once everything is working, create the local directory structure.
5. Use a bash script that does the following (a rough sketch is included below):
5.1 List what is in S3 and save it in a variable.
5.2 For each of the files listed in S3, check whether there is a new file that is not present in the mirrored local directory s3-mirror.
5.3 If there is a new file, fetch it, touch a file with empty contents and the same name in the s3-mirror directory, move the encrypted file to SFTP, and remove the fetched S3 file from the mirrored local directory.
5.4 Record successful actions in a log.
So far it works well.
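As a very rough sketch of step 5, with the bucket, directories, and log file as placeholders (and no handling of object names containing spaces):

#!/bin/bash
# hypothetical sketch of steps 5.1-5.4
BUCKET="s3://my-bucket"
MIRROR="/home/user/s3-mirror"        # empty marker files, one per fetched object
SFTP_DIR="/home/sftpuser/incoming"   # where fetched files are handed over to SFTP
LOG="/var/log/s3-fetch.log"

# 5.1 list what is in S3 and save it in a variable
files=$(aws s3 ls "$BUCKET/" | awk '{print $4}')

for f in $files; do
    # 5.2 skip anything already recorded in the local s3-mirror directory
    [ -e "$MIRROR/$f" ] && continue
    # 5.3 fetch the new file, leave an empty marker with the same name, hand the file to SFTP
    aws s3 cp "$BUCKET/$f" "/tmp/$f"
    touch "$MIRROR/$f"
    mv "/tmp/$f" "$SFTP_DIR/$f"
    # 5.4 record the action in a log
    echo "$(date -Is) fetched $f" >> "$LOG"
done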

Moving a directory from local-machine to an instance in GCP

I am trying to move a folder from my local machine to a remote server (a GCP instance):
gcloud compute scp --recurse myDirectory instance-1:~/Folder
It looks like it is uploaded (because I see the files uploading), but when I checked the folder on the remote server there was nothing there. What am I doing wrong?
I have two projects, and I have set up gcloud with the appropriate project.
The answer is inspired by: gcloud compute copy-files succeeds but no files appear
I needed to write username@instance, so this works:
gcloud compute scp --recurse myDirectory username@instance-1:~/Folder

Move files from S3 to FTP using a bash script

I want to move files from Amazon S3 to FTP using a bash script command...
I already tried
rsync --progress -avz -e ssh s3://folder//s3://folder/
Can anyone please suggest the correct command?
Thanks in advance
AWS built sync into their CLI:
aws s3 sync ./localdir s3://mybucket
You can sync your local directory to a remote bucket.
How to install aws cli?
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
If you don't want to take the CLI installation route, you can use Docker to run a container, share your local directory with that container, and perform the sync.
https://hub.docker.com/r/mesosphere/aws-cli/
Hope it helps.
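For instance, something along these lines should work, with the local directory and bucket name as placeholders; the sketch assumes the mesosphere/aws-cli image uses the aws binary as its entrypoint, as its Docker Hub page describes, so only the subcommand is passed:

docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  -v "$PWD/localdir:/project" \
  mesosphere/aws-cli s3 sync /project s3://mybucket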
You can't copy objects from S3 that way, because S3 is not an SSH service; it is object storage. So the easiest way is to mount the S3 bucket. Then you can use it like a normal volume and copy all the files to the target.
You should do that on the target system; otherwise you would have to copy all the files through a third server or computer.
https://www.interserver.net/tips/kb/mount-s3-bucket-centos-ubuntu-using-s3fs/
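A minimal sketch of the mount approach with s3fs, following the linked guide; the bucket name, mount point, and copy target are placeholders, and the access/secret key pair goes into ~/.passwd-s3fs:

sudo apt-get install -y s3fs
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs   # placeholder credentials
chmod 600 ~/.passwd-s3fs
mkdir -p ~/s3bucket
s3fs mybucket ~/s3bucket -o passwd_file=~/.passwd-s3fs
cp -r ~/s3bucket/somefolder /path/used/by/the/ftp/server/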

Downloading folders from aws s3, cp or sync?

If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
I just used version 2 of the AWS CLI. For the s3 commands, there is also a --dryrun option now to show you what will happen:
aws s3 cp s3://bucket/filename /path/to/dest/folder --recursive --dryrun
In case you need to use another profile, especially cross-account, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then if you are accessing only a single file
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In the case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is using the AWS CLI.
Here's a walk-through:
Download and install AWS CLI in your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both the source and destination paths and copies only the differences (name, size, etc.).
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
If you have possible matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests, meaning it is cheaper and possibly faster.
Question: will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the solution to copy/sync while keeping the S3 folder structure? I mean, I have many files in different S3 bucket folders sorted by year, month, and day, and I would like to copy them locally with the folder structure kept.
