Directory names causing errors when setting bucket contents to public? - s3cmd

I am using s3cmd (version 2.0.1) to set all the contents of a bucket/directory as public, but I can't set the objects' ACL to public for buckets/directories named with all lower-case characters. I can only set the bucket contents as public if the bucket name is all upper case or a mix of lower and upper case.
For example, this doesn't work:
I created a new bucket with rclone:
rclone mkdir remotename:name
And then tried to set the contents to public: s3cmd setacl s3://name --acl-public
and got the error: ERROR: S3 error: 405 (MethodNotAllowed)
But this does work:
Create a new bucket: rclone mkdir remotename:NEWNAME
and successfully set the contents of that as public with: s3cmd setacl s3://NEWNAME --acl-public
Is there a limitation to what I can call my buckets/directories, or is there something I'm not understanding about setting bucket contents to public with s3cmd?

Ah, got it fixed. The issue has to do with s3cmd. In my .s3cfg file, on the host_bucket line, I removed the "s" after %(bucket). Once I did this I could create and make public whatever bucket name I wanted.
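For context, host_bucket controls whether s3cmd addresses buckets virtual-host style (the bucket name becomes part of the DNS hostname, which imposes DNS naming restrictions) or path style. A rough sketch of the two forms of that .s3cfg line, with the endpoint hostname as a placeholder:

# virtual-host style (the s3cmd default): the bucket name is embedded in the hostname
host_bucket = %(bucket)s.s3.amazonaws.com

# path style: the bucket name only appears in the request path
host_bucket = s3.amazonaws.com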

Related

Issue getting a file with Laravel

Hi, I do not know what is happening, but I want to get a file from the storage folder and it says that it does not exist, even though it does; I go to the folder and it is there.
I retrieve the file like this:
$path = storage_path('app/public/' . $settlement->support);
$settlement_file = Storage::disk('local')->get($path);
And I get this error:
message: "File not found at path:
home/jysparki/newjis.jisparking.com/storage/app/public/06152617_8-2025.pdf"
If I go to the folder, the file is there, so I wonder what I am doing wrong. Thanks.
If you look at the config/filesystems.php file and find the disks array, you may notice that root under the local disk key is set to storage_path('app'). You are free to change that value, but it means that whenever you call the Storage::disk('local') method, it will look for files relative to storage_path('app'), so there is no need to specify the full path. Try this instead:
$settlement_file = Storage::disk('local')->get('/public/' . $settlement->support);
or
$settlement_file = Storage::disk('public')->get($settlement->support);
as Laravel by default has a public disk which points to the storage/app/public directory.
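For reference, the relevant part of a default config/filesystems.php looks roughly like this (the paths shown are the Laravel defaults):

'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'), // Storage::disk('local') resolves paths relative to this
    ],
    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'), // Storage::disk('public') resolves paths relative to this
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],
],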

Temp file not being deleted

I'm trying to create a temporary file in my pipeline, then use that file in another rule.
For example, I have two rules in a .smk file:
# Unzip adapter-trimmed fastq file
rule unzip_fastq:
    input:
        '{sample}.adapterTrim.round2.fastq.gz'
    output:
        temp('{sample}.adapterTrim.round2.fastq')
    conda:
        '../envs/rep_element.yaml'
    shell:
        'gunzip -c {input[0]} > {output[0]}'

# Run bowtie2 to align to rep elements and parse output
rule parse_bowtie2_output_realtime:
    input:
        '{sample}.adapterTrim.round2.fastq'
    output:
        'rep_element_pipeline/{sample}.fastq.gz.mapped_vs_' + config["ref"]["bt2_index"] + '.sam'
    params:
        bt2=config["ref"]["bt2_index_path"],
        eid=config["ref"]["enst2id"]
    conda:
        '../envs/rep_element.yaml'
    shell:
        'perl ../scripts/parse_bowtie2_output_realtime_includemultifamily.pl '
        '{input[0]} {params.bt2} {output[0]} {params.eid}'
{sample}.adapterTrim.round2.fastq is used once and should ultimately be deleted upon completion. However, I'm finding that this file is uploaded to Amazon S3, even with the addition of temp(). I'm also finding that this file is removed locally, but still persists on S3.
Am I doing this correctly? '{sample}.adapterTrim.round2.fastq' is not currently listed in the rule all of the Snakefile.
We ultimately need to prevent this file from being uploaded to S3, so if there is a way to specify not to upload this file in the rule, that would be useful.
It seems that the snippet in the question is not consistent with actual use, since for S3 files one would need to wrap file names in remote().
However, as a general solution, the documentation contains the following:
The remote() wrapper is mutually-exclusive with the temp() and protected() wrappers.
Hence, if you intend to use a temp file, make sure it is not wrapped in remote(), or explicitly wrap the file in local().
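As an illustration only, here is a minimal sketch of how the two rules could be arranged so that just the final result goes to S3, assuming the (older) snakemake.remote.S3 provider and a placeholder bucket name:

from snakemake.remote.S3 import RemoteProvider as S3RemoteProvider

# Credentials come from the usual AWS environment/config; the bucket name below is a placeholder.
S3 = S3RemoteProvider()

rule unzip_fastq:
    input:
        '{sample}.adapterTrim.round2.fastq.gz'
    output:
        # Plain local path: temp() removes it once downstream rules finish,
        # and since it is not wrapped in S3.remote() it is never uploaded.
        temp('{sample}.adapterTrim.round2.fastq')
    shell:
        'gunzip -c {input[0]} > {output[0]}'

rule parse_bowtie2_output_realtime:
    input:
        '{sample}.adapterTrim.round2.fastq'
    output:
        # Only the final result is wrapped in S3.remote() and therefore uploaded.
        S3.remote('my-bucket/rep_element_pipeline/{sample}.mapped.sam')
    params:
        bt2=config["ref"]["bt2_index_path"],
        eid=config["ref"]["enst2id"]
    shell:
        'perl ../scripts/parse_bowtie2_output_realtime_includemultifamily.pl '
        '{input[0]} {params.bt2} {output[0]} {params.eid}'

# If the workflow is instead run with a default remote provider, mark the intermediate
# file with local() so it is not auto-wrapped in remote() (see the documentation quote above).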

Move files under GCS with renaming

I want to write the following bash script which copies files from one GCS bucket to another with renaming options.
My input folder is gs://test-rtt-integration/result/frd/*.orc
and my destination folder is gs://test-rtt-integration/recent_files/frd
The renaming of the copied file should be done based on the name provided from gs://test-rtt-integration/complex-files/TAN/recent_files/today/frd
once the copy with renaming is done I need to clean gs://test-rtt-integration/result/frd
I tested the following commands, but they are not working properly
NAME = "$(gsutil ls gs://test-rtt-integration/complex-files/TAN/recent_files/today/frd)"
gsutil mv gs://test-rtt-integration/result/frd/*.orc gs://test-rtt-integration/recent_files/frd/$NAME
gsutil rm -rf gs://test-rtt-integration/result/frd
(all .orc files and other files should be deleted)
But this is not working properly, as I have to split NAME on / and take the last piece; if that last piece is called SPLIT, I then have to do gsutil mv gs://test-rtt-integration/result/frd/*.orc gs://test-rtt-integration/recent_files/frd/$SPLIT
Any idea on how to do this?
The question is a little bit confusing. You say that you want to move files from one Google Cloud Storage bucket to another, but all the operations are performed within a single bucket called test-rtt-integration.
However, once you get the file location with the command gsutil ls gs://[BUCKET_NAME]/folder (e.g. gs://[BUCKET_NAME]/folder/[FILENAME].orc), note that the gs://[BUCKET_NAME]/folder/ prefix is the same for every object in the folder, so you can simply strip it and you will be left with just the object name, e.g. [FILENAME].orc.
I am not sure if this is exactly what you are looking for, but I did a little bit of coding myself and I have created a bash script that:
Gets the name of each object in the gs://[BUCKET_NAME]/from bucket folder
Copies all objects from the gs://[BUCKET_NAME]/from bucket folder to the gs://[BUCKET_NAME]/to/ bucket folder
Deletes all objects from the gs://[BUCKET_NAME]/from bucket folder
Inside there are comments that explain how every operation works in detail. If that is not exactly what you are looking for, you can take the basic idea of how it works and implement it in a different way that suits you better. I have tested the script myself in Google Cloud Shell and it is working. The example code can be found on GitHub.
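For the splitting step specifically, here is a rough bash sketch along those lines, using the paths from the question and assuming a single reference file in the naming folder (note that bash assignments must not have spaces around =):

#!/usr/bin/env bash
set -eu

SRC="gs://test-rtt-integration/result/frd"
DST="gs://test-rtt-integration/recent_files/frd"
NAME_SRC="gs://test-rtt-integration/complex-files/TAN/recent_files/today/frd"

# gsutil ls prints full gs:// URLs; basename keeps only the part after the last '/'
name_url="$(gsutil ls "${NAME_SRC}" | head -n 1)"
new_name="$(basename "${name_url}")"

# move the .orc file under the new name, then clean out the source folder
gsutil mv "${SRC}/*.orc" "${DST}/${new_name}"
gsutil rm -rf "${SRC}"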

Which Environment.SpecialFolder.XXX will give me Context.getFilesDir()?

I want to access a folder in which I can save temporary files and write a file there, and where I don't need permission to write to the folder.
I currently use:
string targetBaseDir = Environment.GetFolderPath(
    Environment.SpecialFolder.Personal,
    Environment.SpecialFolderOption.Create
);
This gives me a private directory for which I need no permissions, and it is the directory that corresponds to Context.getFilesDir(). However, as I don't need the files to exist permanently, it would be cleaner if I could save them in the directory that corresponds to Context.getCacheDir(). What do I have to write instead of .Personal to get that directory?
You can use FileSystem.AppDataDirectory via Xamarin.Essentials in your .NET Standard library to obtain the getFilesDir() location on Android.
Example:
var appData = Xamarin.Essentials.FileSystem.AppDataDirectory;
appData equals /data/user/0/com.sushihangover.FormsTestSuite/files
And the Android FilesDir:
Log.Debug("SO", FilesDir.ToString() );
Equals /data/user/0/com.sushihangover.FormsTestSuite/files
"access a folder in which I can save temporary files"
In that case use the Cache dir:
var cacheDir = Xamarin.Essentials.FileSystem.CacheDirectory;
cacheDir would equal /data/user/0/com.sushihangover.FormsTestSuite/cache
re: https://learn.microsoft.com/en-us/xamarin/essentials/
Both Environment.SpecialFolder.Personal and Environment.SpecialFolder.MyDocuments will give you the path of Context.getFilesDir().
However, if you want to save small temporary files, you can make use of the cache directory Context.getCacheDir().
There is no special folder enum for the cache directory, though. You'll have to get the path from the current context via Context.CacheDir.Path, or from Xamarin.Essentials.FileSystem.CacheDirectory if you use the Xamarin.Essentials API (see the short example after the links below).
Read more:
File access with Xamarin.Android with SpecialFolder
Writing cache file in internal storage
File system helpers in Xamarin.Essentials
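As a rough illustration (the file name and contents here are made up), writing a throw-away file into the cache directory with Xamarin.Essentials looks like this:

using System.IO;
using Xamarin.Essentials;

// No runtime storage permission is needed for the app's internal cache directory.
var tmpPath = Path.Combine(FileSystem.CacheDirectory, "scratch.tmp"); // hypothetical file name
File.WriteAllText(tmpPath, "temporary data");

// Android may clear the cache directory when the device runs low on space,
// so don't rely on these files persisting between runs.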

s3cmd sync is remote copying the wrong files to the wrong locations

I've got the following as part of a shell script to copy site files up to an S3 CDN:
for i in "${S3_ASSET_FOLDERS[@]}"; do
s3cmd sync -c /path/to/.s3cfg --recursive --acl-public --no-check-md5 --guess-mime-type --verbose --exclude-from=sync_ignore.txt /path/to/local/${i} s3://my.cdn/path/to/remote/${i}
done
Say S3_ASSET_FOLDERS is:
("one/" "two/")
and say both of those folders contain a file called... "script.js"
and say I've made a change to two/script.js - but not touched one/script.js
Running the above command will first copy the file from /one/ to the correct location, although I've no idea why it thinks it needs to:
INFO: Sending file '/path/to/local/one/script.js', please wait...
File '/path/to/local/one/script.js' stored as 's3://my.cdn/path/to/remote/one/script.js' (13551 bytes in 0.1 seconds, 168.22 kB/s) [1 of 0]
... and then a remote copy operation for the second folder:
remote copy: two/script.js -> script.js
What's it doing? Why?? Those files aren't even similar. Different modified times, different checksums. No relation.
And I end up with an s3 bucket with two incorrect files in. The file in /two/ that should have been updated, hasn't. And the file in /one/ that shouldn't have changed is now overwritten with the contents of /two/script.js
Clearly I'm doing something bizarrely stupid because I don't see anyone else having the same issue. But I've no idea what??
First of all, try to run it without the --no-check-md5 option.
Second, I suggest you pay attention to directory names, specifically trailing slashes.
The s3cmd documentation says:
With directories there is one thing to watch out for – you can either upload the directory and its contents or just the contents. It all depends on how you specify the source.
To upload a directory and keep its name on the remote side, specify the source without the trailing slash.
On the other hand, to upload just the contents, specify the directory with a trailing slash.
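Roughly, for the sync call above that means the difference between these two forms of the source argument (paths shortened from the question):

# uploads the directory itself: objects end up under s3://my.cdn/path/to/remote/one/...
s3cmd sync /path/to/local/one s3://my.cdn/path/to/remote/

# uploads only the contents of one/: objects end up directly under s3://my.cdn/path/to/remote/
s3cmd sync /path/to/local/one/ s3://my.cdn/path/to/remote/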
