My app uses a Google Cloud Firestore instance. Among the data my app manages there is some classical data (strings, numbers, ...): no problem with that, Firestore handles those use cases easily.
But my app also needs to consume images that are linked to the other data.
So I'm looking for the right solution to manage images. I tried using the "reference" field type in my Firestore instance, but I'm not sure that's the right way...
Is there another solution outside Firestore?
What about Google Cloud Filestore? It seems to be available only from App Engine or a VM...
Disclosure: I work on the Firebase team at Google.
When I want to use both structured and unstructured data in my application, I use Cloud Firestore for the structured data, and Cloud Storage for the unstructured data. I use both of these through their Firebase SDKs, so that I can access the data and files directly from within my application code, or from server-side code (typically running in Cloud Functions).
There is no built-in reference type between Firestore and Storage, so you'll need to manage that yourself. I usually store either the path to the image in Firestore, or the download URL of the image. The choice between these two mostly depends on whether I want the file to be publicly accessible, or whether access needs to be controlled more tightly.
Since there is no managed relationship between Firestore and Storage (or any other Firebase/Google Cloud Platform services), you'll need to manage this yourself. This means that you'll need to write the related data (like the path above), check for its integrity when reading it (and handle corrupt data gracefully), and consider periodically running a script that removes/fixes up corrupt data.
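A minimal sketch of the approach from the previous paragraphs, using the Firebase web SDK (v9 modular API); the collection name, field names, and storage path are assumptions for illustration only.

// Hedged sketch: upload an image to Cloud Storage, then store its path and
// download URL alongside the structured data in Firestore.
import { initializeApp } from "firebase/app";
import { getFirestore, doc, setDoc } from "firebase/firestore";
import { getStorage, ref, uploadBytes, getDownloadURL } from "firebase/storage";

const app = initializeApp({ /* your Firebase config */ });
const db = getFirestore(app);
const storage = getStorage(app);

async function saveProductImage(productId: string, file: Blob): Promise<void> {
  const path = `images/products/${productId}.jpg`; // assumed storage layout
  const imageRef = ref(storage, path);

  await uploadBytes(imageRef, file);
  const url = await getDownloadURL(imageRef);

  await setDoc(
    doc(db, "products", productId),
    {
      imagePath: path, // useful when access is controlled by Storage security rules
      imageUrl: url,   // useful when the file is publicly readable
    },
    { merge: true }
  );
}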
I have a Golang application running in Kubernetes which needs to persist a single string value outside of its memory. In other words, if the application is redeployed or the pod is restarted, the value is not lost. I also need to be able to read and write this value from Go regularly.
What is a good way to do this?
So far I've thought about:
ConfigMap: Would it be considered a misuse to utilize a ConfigMap for this, and are they even persistent?
PersistentVolume: This seems appropriate, but can I store a single value or a file with this, rather than setting up an entire database?
Thank you!
In Kubernetes, you have the following options to store data outside the Pod (or actually to share data between Pods).
Persistent Volume: a shared filesystem, you share data as files
ConfigMap/Secret: Kubernetes-based shared objects; you use the Kubernetes API to store data. Kubernetes uses etcd under the hood and its consensus algorithm is applied to every data change, so the data needs to be small and the performance is not great
3rd-party tool: Hazelcast, Redis, TiKV; you use their client (or REST API) to store your values
I'm not sure about your exact use case, but I'd start with a 3rd-party tool. They are very simple to deploy with Helm charts or Operators. Another option is a Persistent Volume. ConfigMap/Secret I'd treat as a last resort.
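As a rough illustration of the 3rd-party route, here is a hedged sketch of reading and writing a single value in Redis. The question's application is written in Go, where a client such as go-redis would look analogous; this sketch uses TypeScript with ioredis purely to show the pattern, and the connection URL assumes a Redis Service named "redis" in the cluster.

// Hedged sketch: persist a single value in Redis (deployed in the cluster,
// e.g. via a Helm chart). Key name and connection URL are placeholders.
import Redis from "ioredis";

const redis = new Redis("redis://redis:6379");

export async function saveValue(value: string): Promise<void> {
  await redis.set("my-app:single-value", value);
}

export async function loadValue(): Promise<string | null> {
  return redis.get("my-app:single-value");
}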
I'm trying to build a small site that gets its data from a database (currently I use Firebase's Cloud Firestore).
I've built it using Next.js and planned to host it on Vercel. It looks very nice and was working well.
However, the site needs to handle ~1000 small documents - serve, search, and rarely update. In order to reduce calls to the database on every request, which is costly both in time and in database pricing, I thought it would be better if the server could fetch the full list of items when it starts (or on the first request), hold them in memory, and serve data requests from memory.
It worked well on the local dev server, but when I deployed it to Vercel, it didn't work. It seems to force me into serverless mode, where each request is separate, and I can't use a common in-memory cache to serve the data.
Am I missing something and there is a way to achieve something like that with next.js on vercel?
If not, can you recommend other free cloud services that can provide what I'm looking for?
One option could be using FaunaDB and Netlify, as described in this post, but I ended up opening a free Wix site and using Wix Data to store the data. I built an http-functions module to provide access to the data via REST, which also caches frequently used data in memory. Currently it seems to work like a charm!
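For reference, the module-level cache described in the question looks roughly like the sketch below. On a serverless platform such as Vercel the module state only survives while a given instance stays warm, which is why the approach appeared to stop working there; fetchAllDocuments is a hypothetical stand-in for the database query.

// Minimal sketch of an in-memory cache in front of the database.
// On serverless hosting this cache is per-instance and best-effort only.
type Item = Record<string, unknown>;

let cache: { items: Item[]; loadedAt: number } | null = null;
const TTL_MS = 5 * 60 * 1000; // refresh at most every 5 minutes

async function fetchAllDocuments(): Promise<Item[]> {
  // ...query the database here (e.g. the ~1000 Firestore documents)...
  return [];
}

export async function getItems(): Promise<Item[]> {
  if (!cache || Date.now() - cache.loadedAt > TTL_MS) {
    cache = { items: await fetchAllDocuments(), loadedAt: Date.now() };
  }
  return cache.items;
}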
I have a use case where I have created server-side views on Sync Gateway based on a rolling time window of 10 days. Is there a way to pull those directly to my device side?
When I look at the documentation, I see that there's no way these can be replicated directly and one needs to make REST calls:
http://developer.couchbase.com/documentation/mobile/1.2/develop/guides/sync-gateway/accessing-cb-views/index.html
Is that assumption correct?
The other approach I saw was to let all the data be replicated to the client side and then write Couchbase Lite views on the client side using map/reduce functions. Which of the two is the correct approach?
Yes I believe that your assumption is correct - views have to be queried directly via the public REST API. I also believe that your solution for syncing data and then querying it on the client side will also work.
In order to find the "correct approach" I would consider your app needs and deployment workflow:
Using views on the server will require:
Managing (CRUD) the views in SG - similar to managing functions in a database. These would ideally be managed by some deployment / management code.
Clients need to be able to make the API call to the public interface to access view info. This then requires a cache to work offline.
Slicing data locally means that sync will bring down all data and the device will have to perform the search / slice / aggregation previously carried out by the server. This will:
Work offline.
Put a potential extra strain on the app device.
I don't think that there are any easy answers here - ideally views would be synced to the device, but I don't know if that's even possible with the current SG implementation.
(Note 1: the views must be created in Sync Gateway via the admin REST interface, not through the Couchbase web interface.)
(Note 2: I'm a server-side programmer, so this view is tainted.)
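If you do go the server-side route, querying a view from the client is a plain HTTP call against Sync Gateway's public port. A hedged sketch is below; the host, database name, design document, and view name are all placeholders, and the endpoint shape follows the CouchDB-style /{db}/_design/{ddoc}/_view/{view} path described in the SG 1.x documentation linked above.

// Hedged sketch: query a Sync Gateway view over the public REST API.
// All names in the URL are placeholders for illustration.
async function queryServerView(): Promise<unknown[]> {
  const url =
    "http://sync-gateway.example.com:4984/mydb/_design/rollups/_view/last10days";
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`View query failed: ${res.status}`);
  }
  const body = await res.json();
  // Standard view result shape: { total_rows, rows: [{ id, key, value }, ...] }
  return body.rows;
}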
What I ended up doing was writing webhooks, which basically let me have the same docs replicated onto a Couchbase Server. Then I did all the needed aggregations and pushed those to Sync Gateway (which got replicated to the app).
It may or may not be the right approach, but it works for my case...
I'm developing an app extension that needs to share data with the containing app. I created an app group and moved the core data store of the main app to that folder. From the extension I can create the managed object context and save data to the store and I can also access it from the containing app. Now I have two independent applications accessing the same core data store. This sounds like a recipe for disaster to me. Is what I have set up sufficient for sending data from the extension to the containing app or should I look for another way?
In this situation you'll have two entirely independent Core Data stacks accessing the same persistent store file.
Assuming that you're using SQLite, you're fine, at least as far as data integrity. Core Data uses SQLite transactions to save changes and SQLite is fine with multiple processes using the same file. Neither process will corrupt data for the other or mess up the file.
You will have to deal with keeping data current in the app, for example if someone uses the share extension to create new data while the app is running. You won't get anything like NSManagedObjectContextDidSaveNotification in this case. You'll need to find your own way to ensure you get any new updates.
In many cases you can make this almost trivial-- listen for UIApplicationDidBecomeActiveNotification, which will be posted any time your app comes to the foreground. When you get it, check the persistent store for new data and load it.
If you want to get a little more elegant, you could use something like MMWormhole for a sort-of file based IPC between the app and the extension. Then the extension can explicitly inform the app that there's new data, and the app can respond.
Very interesting answer from Tom Harrington. However, I need to mention something about MMWormhole. I've found that MMWormhole uses NSFileCoordinator, and Apple notes that:
Using file coordination in an app extension to access a container shared with its containing app may result in a deadlock in iOS versions 8.1.x and earlier.
Here is what Apple suggests for safe save operations:
You can use CFPreferences, atomic safe save operations on flat files, or SQLite or Core Data to share data in a group container between multiple processes even if one is suspended mid transaction.
Here is the link to the Apple Technical Note TN2408.
I'm toying with Windows Azure, creating an ASP.NET MVC project. I've checked the WAPTK (Windows Azure Platform Training Kit), Google, and here for answers to my question, but I couldn't find any. In an ASP.NET MVC project, in what file do I create containers for cloud storage? (In the WAPTK, there's a hands-on lab that uses WebForms and puts storage containers in the _Default partial class.)
Generally I'd recommend you set up the container access (including the create) in some nicely encapsulated class - preferably hiding behind an interface for easy testability.
I'd recommend:
put this class and its interface in a class library (not in your ASP.NET project)
put the config details in the csdef/cscfg files
if you only plan to use a fixed list of containers, then either:
create these ahead of installing your app - e.g. from a simple command line app
or create these from a call in the init of Global.asax
if you plan to dynamically create containers (e.g. for different users or actions) then create these from Controller/Service code as is required - e.g. when a user signs up or when an action is first performed.
if actions might occur several times and you really don't know if the container will be there or not, then find some way (e.g. an in-memory hashtable or a persistent SQL table) to help ensure that you don't need to continually call CreateIfNotExist - remember that each call to CreateIfNotExist will slow your app down and cost you money (see the sketch after this list)
for "normal" access operations like read/write/delete, these will typically be from Controller code - or from Service code sitting behind a Controller
If in doubt, think of it a bit like "how would I partition up my logic if I was creating folders on a local disk - or on a shared network drive"
Hope that helps a bit.
Stuart
I am not sure if I understand you correctly, but generally speaking files and Azure don't fit together well. All changes stored on the local file system are volatile and only guaranteed to live as long as the current Azure instance. You can however create a blob and mount it as a local drive, which makes the data persistent. This approach has some limitations, since it allows one Azure instance writing and a maximum of 8 readers.
So instead you should probably use blobs rather than files. The problem of knowing which blob to access would then be solved by using Azure table storage to index the blobs.
I recently went to a presentation where the presenter had investigated quite a bit about Azure table storage, and his finding was that limiting partition keys to groups of 100-1000 elements would give the best performance. (Partition keys are used internally by Azure to determine how to group data.)
You should definitely use blob storage for your files. It's not particularly difficult and as this is a new project there is no need to use Azure Drives.
If you are just using blob storage to serve up images for your site then you can reference them with a normal img tag in your HTML, e.g.
<img src="http://myaccountname.blob.core.windows.net/containername/filename">
This will only work if the file or container is public rather than secured. That is fine if you are just serving up static content on HTML pages.
If you want to have secured access to blobs for a secure site then you have to do a couple of things. Firstly, your website will need to know how to access your blobs.
In your ServiceDefinition.csdef file you will need to include
<Setting name="StorageAccount" />
and then add
<Setting name="StorageAccount" value="DefaultEndpointsProtocol=https;
AccountName=[youraccountname];AccountKey=[youraccountkey]" />
to your ServiceConfiguration.cscfg file.
Then you can use the Windows Azure SDK to access that account from within your web role, starting with:
Dim Account As CloudStorageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageAccount"))
Dim blobClient As CloudBlobClient = Account.CreateCloudBlobClient
From there you can
read/write to blobs
delete blobs
list blobs
create time-limited URLs using shared access signatures (see the sketch below)
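As a hedged illustration of the last item, here is what generating a time-limited read URL looks like with the modern TypeScript package @azure/storage-blob (the answer above uses the older .NET SDK); the account, container, and blob names are placeholders.

// Hedged sketch: build a read-only URL for one blob that expires in an hour.
import {
  BlobSASPermissions,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
} from "@azure/storage-blob";

function makeTimeLimitedUrl(accountName: string, accountKey: string): string {
  const credential = new StorageSharedKeyCredential(accountName, accountKey);
  const sas = generateBlobSASQueryParameters(
    {
      containerName: "containername",
      blobName: "filename",
      permissions: BlobSASPermissions.parse("r"),       // read only
      expiresOn: new Date(Date.now() + 60 * 60 * 1000), // one hour from now
    },
    credential
  );
  return `https://${accountName}.blob.core.windows.net/containername/filename?${sas}`;
}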
There is a great resource here from Steve Marx, which, although it is about accessing blob storage from Silverlight (which you are not using), gives you lots of information in one place.
Your question wasn't very specific but this should give you some idea where to start.
@Faester is correct - you will probably need some resource, either table storage or SQL Azure, to store references to these files.