SurrealDB - store binary files

Is it possible to store binary files (e.g. images) using SurrealDB?
I can't find anything about this in docs.
If not, where can I store images, given that all the other data is stored in SurrealDB?

SurrealDB wasn't designed as a file store. For this purpose you can use, for example, object storage; nearly every cloud provider offers it.
If you want an open-source solution that you can host yourself, check out MinIO object storage - github repo.
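One common pattern is to keep only a reference to the file in the database and put the bytes in object storage. A minimal Python sketch of that split (the bucket name, key scheme, and record shape are my own assumptions, not anything SurrealDB prescribes; the client can be anything with an S3-style `put_object`, e.g. boto3 pointed at MinIO):

```python
import hashlib


def object_key(data: bytes, ext: str) -> str:
    """Derive a content-addressed object key like 'images/<sha256>.png'.
    The naming scheme here is illustrative only."""
    return f"images/{hashlib.sha256(data).hexdigest()}.{ext}"


def store_image(s3_client, bucket: str, data: bytes, ext: str) -> dict:
    """Upload the bytes to S3-compatible storage (e.g. MinIO) and return
    the small metadata record you would keep in the database instead of
    the file itself."""
    key = object_key(data, ext)
    s3_client.put_object(Bucket=bucket, Key=key, Body=data)
    return {"bucket": bucket, "key": key, "size": len(data)}
```

The database then stores only `bucket`, `key`, and whatever metadata you need; the application fetches the actual bytes from object storage by key.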

Related

How to display image from database with Laravel and react hooks

I am new to React. I want to display images with Laravel and React hooks. Is there an easy and fast way to load an image from a Laravel path?
There are several ways to handle displaying images:
Store the file path in the database, then load the image using that path.
Store the image in the database as a BLOB (I don't recommend this approach, because it will increase your database size significantly).
Store the image with AWS or another cloud service.
From my experience, it's better to store only the path in the database and keep the image file on the filesystem, because this performs better than storing the image in the database. The disadvantage of this approach is that you have to back up the images and the database separately, and if you move an image manually, the path in the database will no longer match the file's actual location, which will cause errors.
You can try AWS, since the free tier includes S3 storage.
For further information about using S3 storage, you can read the Laravel documentation on the Filesystem.
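The "path in the database, file on disk" approach is language-agnostic; here is a minimal Python sketch of it, with SQLite standing in for the Laravel database (the table name and schema are illustrative only):

```python
import os
import sqlite3
import tempfile


def save_upload(conn, upload_dir: str, filename: str, data: bytes) -> int:
    """Write the file to disk and store only its path in the database."""
    path = os.path.join(upload_dir, filename)
    with open(path, "wb") as f:
        f.write(data)
    cur = conn.execute("INSERT INTO images (path) VALUES (?)", (path,))
    conn.commit()
    return cur.lastrowid


def load_image(conn, image_id: int) -> bytes:
    """Look up the stored path, then read the file from disk."""
    (path,) = conn.execute(
        "SELECT path FROM images WHERE id = ?", (image_id,)
    ).fetchone()
    with open(path, "rb") as f:
        return f.read()


# Demo: an in-memory database and a temp directory for the files.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT NOT NULL)")
upload_dir = tempfile.mkdtemp()
image_id = save_upload(conn, upload_dir, "avatar.png", b"\x89PNG fake bytes")
```

The caveat from the answer above shows up directly in this sketch: moving the file without updating the `path` column makes `load_image` fail.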

When I have Azure do a CloudBlockBlob.StartCopyAsync(), is there a way to have it verify a checksum?

When I initiate an async copy of a block blob to another storage account using StartCopyAsync, is Azure doing any kind of integrity check for me, or if not, is there a way to have it do so?
I found that I can set the Properties.ContentMD5 property and have the integrity verified when uploading blobs. Is it also verifying during a copy operation?
I searched through the docs and found no mention of an integrity check during an async copy specifically. I found a couple references to AzCopy making integrity checks, and it also has the /CheckMD5 option, which is essentially what I'd like Azure to do after the blob copy.
As far as I know, the Azure Blob SDK is a wrapper around the Azure Blob REST API.
So the SDK's StartCopyAsync method sends a Copy Blob operation (REST API) to the Azure server, which performs the copy on the server side.
In the Copy Blob operation article you can find: "When a blob is copied, the following system properties are copied to the destination blob with the same values".
That list includes the "Content-MD5" property.
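Azure stores Content-MD5 as the Base64-encoded MD5 digest of the blob's content. If you want an AzCopy-style /CheckMD5 verification yourself, you can download the destination blob after the copy completes and re-hash it. A small Python sketch of that check (the function names are mine, not part of any Azure SDK):

```python
import base64
import hashlib


def content_md5(data: bytes) -> str:
    """Base64-encoded MD5 digest: the format Azure uses for Content-MD5."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")


def verify_copy(source_content_md5: str, dest_blob_bytes: bytes) -> bool:
    """Re-hash the downloaded destination blob and compare against the
    Content-MD5 recorded on the source blob."""
    return content_md5(dest_blob_bytes) == source_content_md5
```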

Temporary storage for Azure WebSites

I want to cache some cropped images and serve them without recalculating them in an Azure WebSite. When I used an Azure VM I just stored them on the D drive (the temporary drive), but I don't know where to store them now.
I could use Path.GetTempPath, but I am not sure if this is the best approach.
Can you suggest where I should store my temporary files when serving from an Azure WebSite?
Azure Websites also come with a Temp folder; its path is defined in the environment variable %TEMP%.
You can store your images in App_Data folder in the root of your application or you can use Azure CDN for caching.
You could store the processed content on Azure Blob Storage and serve the content from there.
If what you really want is a cache you can also look into using the Azure Redis Cache.
You can use the Path.GetTempPath() and Path.GetTempFileName() functions for temp file names, but you are limited in terms of space; if you're doing a 10K save for every request and expect 100,000 requests at a time per server, blob storage may be the better choice.
The following sample demonstrates how to save temp files in Azure, both on a local path and in Blob storage.
Doc is here: https://code.msdn.microsoft.com/How-to-store-temp-files-in-d33bbb10
Code is here: https://github.com/Azure-Samples/storage-blob-dotnet-store-temp-files/archive/master.zip
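A cropped-image cache on top of the temp folder can be as simple as hashing the original bytes plus the crop parameters into a file name. A hedged Python sketch of the idea (the cache layout and the `crop_fn` callback are my assumptions, not an Azure API; `tempfile.gettempdir()` stands in for %TEMP%):

```python
import hashlib
import os
import tempfile

# Default cache location under the platform temp directory.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "image-cache")


def cached_crop(original: bytes, crop_spec: str, crop_fn,
                cache_dir: str = CACHE_DIR) -> bytes:
    """Return the cropped image from the cache, computing it only on a miss."""
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.sha256(original + crop_spec.encode()).hexdigest()
    path = os.path.join(cache_dir, key)
    if os.path.exists(path):          # cache hit: serve the stored result
        with open(path, "rb") as f:
            return f.read()
    result = crop_fn(original, crop_spec)  # cache miss: crop and store
    with open(path, "wb") as f:
        f.write(result)
    return result
```

Because the temp folder can be wiped, the cache must be treated as disposable: a miss just recomputes the crop.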

Access to filesystem on AppHarbor

I want to try AppHarbor, but I have an application that stores uploaded files in a certain place on the filesystem. Is it compatible with AppHarbor? Can I store files in the file system and access them later?
(What kind of path can I expect, like c:\blabla or something?)
Thank you.
You can store files on the local filesystem, but the application directory is wiped on each new deployment, so it's not recommended to rely on it for file storage.
Instead we recommend that you use a cloud storage service such as Amazon S3, Google Cloud Storage or similar. There are .NET libraries for both services.
We recently wrote a blog post about uploading files directly to S3 and GCS from the browser that you might want to read.
If you are using a background worker, you need to 'Enable File System Write Access' in the settings of your application.
Then you are permitted to write to Path.GetTempPath().
Sourced from this support question: http://support.appharbor.com/discussions/problems/5868-create-directory-in-background-worker
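In Python terms, writing to the temp path described above looks like this (the AppHarbor-specific part is only that the temp directory is the writable location; the helper name is mine):

```python
import os
import tempfile


def save_to_temp(filename: str, data: bytes) -> str:
    """Write an upload under the temp directory and return its full path.
    Note that on an ephemeral host this location is not durable storage."""
    path = os.path.join(tempfile.gettempdir(), filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```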

Storing images in file system, amazon-s3 datastores

There has been numerous discussions related to storing images (or binary data) in the database or file system (Refer: Storing Images in DB - Yea or Nay?)
We have decided to store the images on the file system and the relevant image-specific metadata in the database in the short term, and to migrate to an Amazon S3-based data store in the future. Note: the data store will be used to store user pictures, photos from group meetings ...
Are there any off-the-shelf, Java-based, open-source frameworks that provide an abstraction for storage and retrieval via HTTP for the above data stores? We wouldn't want to write any code for admin tasks like backups, purging, and maintenance.
Jets3t - http://jets3t.s3.amazonaws.com/index.html
We've used this and it works like a charm for S3.
I'm not sure whether you are looking for a framework that will work for both file-system storage and S3, but S3 is unique enough that I'm not sure such a thing exists. Obviously, with S3, backups and maintenance are handled for you.
