How to rename an object in Google Storage bucket through the API?
See also An error attempting to rename a Google Bucket object (Google bug?)
Objects can't be renamed. The best you can do is copy to a new object and delete the original. If the new and old objects are in the same location (which will be true if they're in the same bucket, for example), it will be a metadata-only (no byte copying) operation, and hence fast. However, since it's two operations, it won't be atomic.
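For example, here is a rough sketch of such a rename helper using the Go client library (cloud.google.com/go/storage); the bucket and object names are placeholders:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

// rename copies src to dst and then deletes src. Within the same
// location the copy is metadata-only, but the two steps are not atomic.
func rename(ctx context.Context, client *storage.Client, bucket, oldName, newName string) error {
	src := client.Bucket(bucket).Object(oldName)
	dst := client.Bucket(bucket).Object(newName)
	if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
		return err
	}
	return src.Delete(ctx)
}

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := rename(ctx, client, "my_bucket", "oldprefix/file.txt", "newprefix/file.txt"); err != nil {
		log.Fatal(err)
	}
}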
Not sure if you want to do this programmatically or manually, but the gsutil tool has a mv option which can be used for renaming objects.
gsutil mv gs://my_bucket/oldprefix gs://my_bucket/newprefix
As other posters noted, behind the scenes, this does a copy and delete.
First, use the "rewrite" method to produce a copy of the original object. Then, delete the original object.
Documentation on rewrite: https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite
I am creating a web application in Go.
I have modified my working code so that it can read and write files on both a local filesystem and a bucket of Google Cloud Storage based on a flag.
Basically I included a small package in the middle, and I implemented my-own-pkg.ReadFile, my-own-pkg.WriteFile, and so on...
I have replaced all calls in my code where I read or save files from the local filesystem with calls to my methods.
Finally, these methods contain a simple switch that runs either the standard code to read/write locally or the code to read/write from/to a GCP bucket.
My current problem
In some parts I need to perform a ReadDir to get the list of DirEntries and then cycle through them. I do not want to change my code except for replacing os.ReadDir with my-own-pkg.ReadDir.
So far I understand that there is no native function for this in the GCP module. So I suppose (but here I need your help, because I am just guessing) that I would need an implementation of fs.FS for GCS. It being a new feature of Go 1.16, I guess it's too early to find one.
So I am trying to create simply a my-own-pkg.ReadDir(folderpath) function that does the following:
case "local": { }
case "gcp": {
<Use gcp code sample to list objects in my bucket with Query.Prefix = folderpath and
Query.Delimiter="/"
Then create a slice of my-own-pkg.DirEntry (because fs.DkrEntry is just an interface and so it needs to be implemented... :-( ) and return them.
In order to do so I would also need to implement the fs.DirEntry interface (which in turn requires implementing fs.FileInfo, and maybe something else...).
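Roughly, I imagine the entry type would look something like this minimal sketch (all names are hypothetical, and I haven't tested it):

package myownpkg // stand-in for my-own-pkg

import (
	"io/fs"
	"time"
)

// gcsEntry satisfies fs.DirEntry for one object or prefix in the bucket.
type gcsEntry struct {
	name  string
	isDir bool
	size  int64
	mtime time.Time
}

func (e gcsEntry) Name() string { return e.name }
func (e gcsEntry) IsDir() bool  { return e.isDir }
func (e gcsEntry) Type() fs.FileMode {
	if e.isDir {
		return fs.ModeDir
	}
	return 0
}
func (e gcsEntry) Info() (fs.FileInfo, error) { return gcsFileInfo{e}, nil }

// gcsFileInfo adapts the same data to fs.FileInfo.
type gcsFileInfo struct{ e gcsEntry }

func (fi gcsFileInfo) Name() string       { return fi.e.name }
func (fi gcsFileInfo) Size() int64        { return fi.e.size }
func (fi gcsFileInfo) Mode() fs.FileMode  { return fi.e.Type() }
func (fi gcsFileInfo) ModTime() time.Time { return fi.e.mtime }
func (fi gcsFileInfo) IsDir() bool        { return fi.e.isDir }
func (fi gcsFileInfo) Sys() interface{}   { return nil }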
Question 1) is this the right path to follow to solve my issue or is there a better way?
Question 2) (only) if so, does the GCP method that lists objects with a prefix and a delimiter return just files? I can't see a method that also returns the list of prefixes found
(If I have prefix/file1.txt and prefix/a/file2.txt I would like to get both "file1.txt" and "a" as files and prefixes...)
I hope I was clear enough... This time I can't include my actual code because it's incomplete, but in case it helps I can paste what I can.
NOTE: by the way, Go 1.16 allowed me to solve a similar issue elegantly when dealing with assets either embedded or on the filesystem, thanks to the existing implementation of fs.FS and the related ReadDirFS. It would be great if I could follow the same route 🙂
By the way, I will keep studying and experimenting, so if I am successful I will contribute back as well :-)
I think your abstraction layer is good, but you need to know something about Cloud Storage: directories don't exist.
In fact, all the objects are put at the root of the bucket, /, and the fully qualified name of an object is /path/to/object.file. You can filter on a prefix, which returns all the objects (i.e. files, because directories don't exist) whose names start with that prefix.
It's not a full answer to your question, but I'm sure you can rethink and redesign the rest of your code with this particularity in mind.
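To make this concrete, here is a rough sketch of what the "gcp" branch of your ReadDir could look like with the cloud.google.com/go/storage client (readDirGCS is a hypothetical name, and for brevity it returns plain names rather than fs.DirEntry values). When Query.Delimiter is set, the iterator also yields synthetic entries whose Prefix field is set, one per common prefix, which is how you get both "file1.txt" and "a":

package myownpkg // stand-in for my-own-pkg

import (
	"context"
	"strings"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// readDirGCS lists the immediate children of folderpath in the bucket.
func readDirGCS(ctx context.Context, bucket *storage.BucketHandle, folderpath string) ([]string, error) {
	if folderpath != "" && !strings.HasSuffix(folderpath, "/") {
		folderpath += "/"
	}
	it := bucket.Objects(ctx, &storage.Query{Prefix: folderpath, Delimiter: "/"})
	var names []string
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err
		}
		if attrs.Prefix != "" {
			// Synthetic "directory" entry, e.g. "prefix/a/" -> "a".
			names = append(names, strings.TrimSuffix(strings.TrimPrefix(attrs.Prefix, folderpath), "/"))
		} else {
			// Regular object, e.g. "prefix/file1.txt" -> "file1.txt".
			names = append(names, strings.TrimPrefix(attrs.Name, folderpath))
		}
	}
	return names, nil
}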
I went through the minio-go API documentation but didn't find a solution for this, as objects are listed in alphabetical order.
A hacky way would be to first read all the objects, take the last-modified date from each one, and build a new sorted list, which is not at all feasible for production.
@Siddhanta Rath, one way to handle this is to use the mc tool. The commands mc find --newer and mc find --older will handle this. But internally it will do listObjects and do the sorting for you.
The other approach would be to subscribe to bucket notifications and maintain a list of uploaded objects in a database.
There is no capability to specify sort order in the Amazon S3 API. Your application will need to sort the objects into the desired order.
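For example, with the minio-go v7 client you can collect the listing and sort it client-side; the endpoint, credentials, and bucket name below are placeholders, so treat this as a sketch rather than production code:

package main

import (
	"context"
	"log"
	"sort"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Gather every object, then sort by LastModified, newest first.
	var objects []minio.ObjectInfo
	for obj := range client.ListObjects(context.Background(), "my-bucket", minio.ListObjectsOptions{Recursive: true}) {
		if obj.Err != nil {
			log.Fatal(obj.Err)
		}
		objects = append(objects, obj)
	}
	sort.Slice(objects, func(i, j int) bool {
		return objects[i].LastModified.After(objects[j].LastModified)
	})

	for _, o := range objects {
		log.Println(o.LastModified, o.Key)
	}
}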
I'm using the AWS SDK to delete an object (or objects) from a bucket. The problem is that keys that don't exist still get counted as successfully deleted; shouldn't the SDK raise an error that the key doesn't exist?
The other problem is that an object whose key does exist isn't being removed, but is reported as successfully deleted.
EDIT:
The second problem only seems to occur when the object to be deleted is inside a folder; at the root it gets deleted fine.
The DELETE object operation for Amazon S3 intentionally returns a 200 OK even when the target object did not exist. This is because it is idempotent by design. For this reason, the aws-sdk gem will return a successful response in the same situation.
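If you need to distinguish "deleted" from "never existed", you can check for the key yourself first. The thread uses the Ruby aws-sdk gem, but here is an illustrative sketch with the AWS SDK for Go v2; the bucket and key are placeholders:

package main

import (
	"context"
	"errors"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	bucket, key := "my-bucket", "some/folder/key.txt"

	// HeadObject fails with a NotFound error when the key does not exist.
	if _, err := client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	}); err != nil {
		var nf *types.NotFound
		if errors.As(err, &nf) {
			log.Fatalf("key %q does not exist", key)
		}
		log.Fatal(err)
	}

	// DeleteObject reports success whether or not the key existed.
	if _, err := client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	}); err != nil {
		log.Fatal(err)
	}
}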
A quick clarification on the forward slash: you can have any number of '/' characters at the beginning of your key, but an object with a preceding '/' is different from the object without one. For example:
# public urls for two different objects
http://bucket-name.s3-amazonaws.com/key
http://bucket-name.s3-amazonaws.com//key
Just be consistent on whether you choose to use a slash or not.
Turns out you can't have '/' at the beginning of the key, which I didn't realise. Not sure why it was there, but it was screwing up the key.
I'm trying to add autosave support to the Core Data file wrapper example.
Now, if I have a new/untitled document, writeSafelyToURL is called with the NSAutosaveElsewhereOperation type.
The bad thing is, I get this type in both typical use cases:
- new file: which stores a completely new document by creating the file wrapper and the persistent store file
- save diff: where the file wrapper already exists and only an update is required
Has somebody else already handled this topic, or has somebody already migrated this?
The original sample uses the originalStoreURL to distinguish those two use cases; which solution worked best for you?
Thanks
nsICacheSession has a method openCacheEntry() which returns an existing cache entry. Is there a method such as createCacheEntry() that will create a cache entry? I want to create an XPCOM object that will read files from disk and write them to the Firefox cache.
Thanks
As stated in the MDC nsICache interface reference, opening an entry with ACCESS_WRITE will create the entry if it doesn't already exist (if it does already exist, the entry will be doomed so you can replace it with a new one).
Perhaps a little too late for the original poster, but this can be useful for other people out there.