Parse Files Migration - parse-platform

I have migrated Parse Server and pointed all the client applications to the new standalone Parse Server. I used the parse-files-utils tool to migrate the existing files from Parse to AWS S3. The migration completed properly and I can see the images in my S3 bucket. There is an option to add a prefix to the migrated files, which I have done.
Now, when I check the URLs of the images on the client website, they are unchanged and start with 'tfss', which means they are still being served from the Parse-hosted S3 bucket. What steps do I need to take to make sure the images are served from my S3 bucket?
Do I need to remove the fileKey from the Parse Server configuration, or something else?
The config that I used for the file migration is as follows:
module.exports = {
  applicationId: <APPLICATION ID>,
  masterKey: <MASTER KEY>,
  mongoURL: <NEW MONGODB URL>,
  serverURL: "https://api.parse.com/1",
  filesToTransfer: 'all',
  renameInDatabase: false,
  renameFiles: false,
  aws_accessKeyId: <NEW S3 BUCKET ACCESS KEY>,
  aws_secretAccessKey: <NEW S3 BUCKET SECRET>,
  aws_bucket: <BUCKET NAME>,
  aws_bucketPrefix: "prod_migrated_"
};
Thanks in advance. Please help with further steps.

Without seeing your Parse Server configuration it is a bit hard to know how you have it set up, but here are a few things to check:
If all your files are in S3 and all the clients are pointing to your new Parse Server, then you can remove the fileKey parameter from your Parse Server configuration. This stops Parse Server from formatting the file URLs with the Parse-hosted hostname and fileKey.
Verify that in your Parse Server filesAdapter configuration for S3 you have set proper baseUrl, bucketPrefix and directAccess parameters, as described in the adapter documentation (see the sketch below). The baseUrl should be something similar to https://<BUCKET_NAME>.s3.amazonaws.com.
Verify that you have also set up a bucket policy that grants read access to the objects, so the file URLs can be fetched directly (see the S3 adapter documentation). You can check this by opening one of the images in your S3 bucket in a browser.
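As a rough illustration, here is a minimal sketch of what the relevant part of the Parse Server setup could look like after the migration, assuming the @parse/s3-files-adapter package (formerly parse-server-s3-adapter); the environment variables, region and server domain are placeholders, and the exact option names should be double-checked against the adapter's documentation:
const ParseServer = require('parse-server').ParseServer;
const S3Adapter = require('@parse/s3-files-adapter');

const api = new ParseServer({
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  databaseURI: process.env.MONGODB_URI,
  serverURL: 'https://your-domain.example/parse',   // placeholder
  // no fileKey here, so file URLs are no longer built for the Parse-hosted bucket
  filesAdapter: new S3Adapter({
    bucket: '<BUCKET NAME>',
    region: '<BUCKET REGION>',
    bucketPrefix: 'prod_migrated_',    // must match the prefix used during the migration
    directAccess: true,                // clients fetch files straight from S3
    baseUrl: 'https://<BUCKET NAME>.s3.amazonaws.com',
    s3overrides: {
      accessKeyId: process.env.S3_ACCESS_KEY,
      secretAccessKey: process.env.S3_SECRET_KEY
    }
  })
});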

Related

Move files to BLOB using Automate

I want to move email attachments to Azure Blob Storage using Power Automate. I am aware of the Blob Storage connector, but I can't use it as I don't have the access key.
After searching Google, I managed to find the link below. I need help on how to set the x-ms header and how to choose the folder inside the blob container to upload the file into.
I lack any kind of experience with HTTP and Azure Blob Storage. :(
Please help.
Link: https://powerusers.microsoft.com/t5/Using-Flows/how-to-upload-to-blob-container-via-sas-url/m-p/125756#M3360
After reproducing this on our end, here is how we were able to save files to our blob container using the HTTP connector.
We used "x-ms-blob-type" as a header with a value of "BlockBlob". Make sure you set the URI to the path in your storage account, in the format below.
https://<STORAGE ACCOUNT NAME>.blob.core.windows.net/<CONTAINER NAME>/<FILE NAME><SAS>
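For reference, what the HTTP action in the flow sends is essentially a raw Put Blob request along these lines; the storage account, container, file name and SAS token are placeholders, and the Content-Type is just an example:
PUT https://<STORAGE ACCOUNT NAME>.blob.core.windows.net/<CONTAINER NAME>/<FILE NAME>?<SAS TOKEN> HTTP/1.1
x-ms-blob-type: BlockBlob
Content-Type: application/octet-stream

<attachment content as the request body>
The SAS token (the signed query string) is what authorizes the request, so no storage account access key is needed in the flow itself. Choosing a "folder" inside the container is just a matter of including the folder name in the blob name, e.g. <CONTAINER NAME>/attachments/<FILE NAME>.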

Can I serve files stored in Google Cloud Storage via a http.FileServer in golang?

I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their docs (previously uploaded) and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an HTTP handler:
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL,
then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem:
fileServer := http.StripPrefix("/mydocs/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files:
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am converting the code so that it also works on Google Cloud, I am trying to change it so that files are stored in a bucket (or, better, in "sub-directories" of a bucket, even though these do not properly exist).
How can I modify the code to map the document URL to the object available in the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue...
I apologise in advance for any unclear point...
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS (a rough sketch follows below).
Use http.FS with an fs.FS backed by GCS.
It's possible that an fs.FS implementation for GCS already exists, but you may need to write one.
Alternatively, you can implement http.FileSystem yourself, since it is an interface and can be implemented however you like.
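As a rough sketch of the proxy option, assuming the cloud.google.com/go/storage client and the same goji/pat router as in the question; the bucket name and the "user-specific-folder/" prefix are placeholders for your own layout, and the authorization check is left as a comment:
package main

import (
    "io"
    "log"
    "net/http"

    "cloud.google.com/go/storage"
    "goji.io"
    "goji.io/pat"
)

// DocDownloadHandler streams a single object from a GCS bucket to the client,
// replacing the http.Dir-based file server.
func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()

    // ... check here that the logged-in user may access this document ...

    docname := pat.Param(r, "docname")

    client, err := storage.NewClient(ctx) // uses Application Default Credentials on GCP
    if err != nil {
        http.Error(w, "storage unavailable", http.StatusInternalServerError)
        return
    }
    defer client.Close()

    // "sub-directories" in a bucket are just prefixes in the object name.
    obj := client.Bucket("my-documents-bucket").Object("user-specific-folder/" + docname)
    rd, err := obj.NewReader(ctx)
    if err != nil {
        http.NotFound(w, r)
        return
    }
    defer rd.Close()

    w.Header().Set("Content-Type", rd.Attrs.ContentType)
    io.Copy(w, rd) // stream the object body to the response
}

func main() {
    mymux := goji.NewMux()
    mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
    log.Fatal(http.ListenAndServe(":8080", mymux))
}
In practice you would create the storage.Client once at startup and reuse it instead of creating it per request, and for large or very frequently downloaded files the signed-URL option avoids pushing all the bytes through your own server.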

The requested URI does not represent any resource on the server

I am trying to host a website in Azure Blob Storage, as discussed here.
I have had success with www.mysite.com.au (where mysite is not the real name), which redirects to
http://docs.mysite.com.au/site/index.html (not a real URL)
where docs is a CNAME with the alias being the blob storage name.
The container's public access level is set to Container.
The direct link in Azure is https://mysite.blob.core.windows.net/site/index.html (not the real name).
I am puzzled as to why I cannot go to http://docs.mysite.com.au/site/index.html directly.
When I do this I get an error:
The requested URI does not represent any resource on the server
I think the answer might have to do with working with blobs rather than files, similar to why "subfolders" can't be created in $root.
[Update]
I also ran into this problem when I deleted index.html and then re-uploaded it. I can see the file in Storage Explorer.
I think I will need to revert to an App Service.
For hosting a static website on Azure Blob Storage, you could leverage the root container ($root) and store your files under the root path, as follows:
https://brucchstorage.blob.core.windows.net/index.html
Custom domain: http://brucestorage.conforso.org/index.html
For script and CSS files, you could create another container (e.g. content), then put script files under content/script/ and CSS files under content/css/, or you could create a separate container for each of them:
https://brucchstorage.blob.core.windows.net/content/css/bootstrap.min.css
https://brucchstorage.blob.core.windows.net/content/script/bootstrap.min.js
The requested URI does not represent any resource on the server
AFAIK, a blob in the root container cannot include a forward slash (/) in its name. If you upload a blob into the root container with a / in its name, you will get this error.
I think I must have had the custom domain set incorrectly in Azure. It should have been docs.mysite.com.au (not the real name).

Storing object with Cache-Control header in object storage is unachievable

I uploaded an object with Cache-Control as a parameter, and it does not take effect in the object storage bucket, although it does in an AWS S3 bucket with the same code:
$s3Client->putObject([
    'ACL' => 'public-read',
    'Bucket' => config('filesystems.disks.object-storage.bucket_name'),
    'CacheControl' => 'public, max-age=86400',
    'Key' => $path,
    'SourceFile' => $path,
]);
I don't really understand why the same code does not have the same effect in both cloud buckets, since both use the S3 API.
The uploaded file has a Cache-Control header in AWS S3, but the same file in IBM object storage does not.
How can I correctly set the Cache-Control header on an object-storage file?
IBM object storage currently does not have all the options that AWS S3 has; the valid API operations are listed here: https://ibm-public-cos.github.io/crs-docs/api-reference
As you can see, there is no support for Cache-Control.
It can be done now, at least through the IBM Cloud Object Storage CLI:
ibmcloud cos put-object --bucket bucket-name-here --cache-control "public, max-age=31536000" --body dir/file.jpg --key prefix/file.jpg
Assuming you have the rights to do this, it will result in an object with the appropriate Cache-Control header. There are optional parameters for e.g. Content-Type as well, although it seemed to detect the correct one for a JPG. To replace metadata on existing files you may have to copy from the bucket to the same bucket, as is done here.
Prior to this I created a service credential with HMAC keys and entered the credentials with ibmcloud cos config hmac. You may also need ibmcloud cos config region to set your default region first.
As for the API itself, setCacheControl() [and setHttpExpiresDate()] seem like what you need. For the REST API, you may need to send Cache-Control as part of the PUT request; it has been listed as a "common header" since June 2018. I'm not sure this is exactly how you achieve the goal via REST, but it seems likely, since this is how you set Content-Type.
In the web console I wasn't able to find an equivalent of the way Oracle offers to set Cache-Control headers when selecting files to upload, as it starts uploading immediately upon drag-and-drop using Aspera Connect. (This is unfortunate, as it is a relatively user-friendly way to upload a moderate number of files with paths.)
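For the S3-compatible REST API, the request presumably boils down to something like the sketch below; the endpoint, bucket, key, authorization and content type are all placeholders, so check IBM's API reference for the exact form:
PUT /<BUCKET NAME>/<OBJECT KEY> HTTP/1.1
Host: <COS ENDPOINT>
Authorization: <IAM bearer token or HMAC signature>
Cache-Control: public, max-age=31536000
Content-Type: image/jpeg

<object bytes as the request body>
If that header is honoured by your endpoint, the PHP putObject call from the question, with its 'CacheControl' option, should also work, since the SDK translates that option into the same header.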

Parse Server - Where does it store uploaded files

I am new to Parse Server (implementing it on Heroku and locally).
I have a basic question: when I upload a file using the ParseFile class, it provides me with a URL and a file object. Where is this file being stored?
Is it being stored physically on a file system, or in MongoDB?
Thank you!
I found a collection in MongoDB named fs.files; the files I uploaded were located there. This is MongoDB's GridFS storage, which Parse Server uses by default when no filesAdapter is configured. I assume the Parse URL is generated to point back at the server, which then serves the file out of GridFS.
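For anyone who wants to verify this, fs.files and fs.chunks are the standard GridFS collections and can be inspected from the mongo shell; the collection names below are the GridFS defaults and the projection fields are just examples:
// mongo shell: file metadata records written by Parse Server's default GridFS-based adapter
db.fs.files.find({}, { filename: 1, length: 1, uploadDate: 1 })
// the actual binary data, stored as chunks that reference fs.files._id
db.fs.chunks.count()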
