Location of files written by spring-boot application on Google Cloud - spring-boot

I have deployed a Spring Boot application on Google Cloud. This is a web application in which the JS client captures frames from the camera and sends them to the server. The server processes these images and stores them in a directory. The application works perfectly fine locally and runs on Google Cloud as well, but on the cloud I am not able to locate the directory created by the application. Could you please suggest a possible solution?

In the Google App Engine Environment, the local filesystem that your application is deployed to is not writeable. This behavior ensures the security and scalability of your application.
If your application needs to save and write files, I suggest you take a look at this document, where you can find different kinds of solutions. For example, as mentioned in the document:
"App Engine creates a default bucket when you create an app. This bucket provides the first 5GB of storage for free and includes a free quota for Cloud Storage I/O operations. You can create other Cloud Storage buckets, but only the default bucket includes the first 5GB of storage for free."

Related

Has anyone ever used Felix Cloud Storage on Heroku?

Felix Cloud Storage is a Heroku add-on that lets you store files backed by AWS S3.
On the free tier you get up to 100 GB-Mo of shared storage (monthly).
It looks like there is enough space in the shared bucket, so I should be able to upload at least one file.
The issue I have is that when I try to create a shared space, it throws an error:
I wanted to contact the Felix support team, but I could not find any contact information. I was wondering if anyone has ever used felix-cloud in their Heroku app and, if so, what did you do differently?
This is a very rare and special case where it is not feasible to use AWS S3 or another similar solution directly for file storage; it has to be provisioned through a Heroku add-on.

How to connect Spring MVC application and cloud storage

I am building an application on Spring MVC where I need to store users' photos. There are several ways to store files, but they have disadvantages:
in local storage - limited by the host's storage
in the DB - caching issues, database size limits, and the long process of converting images for storage in the DB
I want to ask: is there some way to upload images (or any files) to a cloud service, for example https://i.onthe.io/ or Google Drive, and then load them into my application (on a JSP page)?
There are two steps to upload files into Google Drive from a Spring application.
1. Implement OAuth2 authorization - either by using the Google APIs or Spring OAuth2.
2. Use the Drive API Client Library for Java to upload/download files to Google Drive.
Refer to the Google Drive API Javadoc here.
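As a rough sketch of step 2, assuming step 1 has already produced an authorized Drive client, uploading a photo with the Drive v3 Java client could look like the following (the class and file names are illustrative):

```java
import com.google.api.client.http.InputStreamContent;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;

import java.io.IOException;
import java.io.InputStream;

public class DrivePhotoUploader {

    private final Drive drive; // built in step 1 with OAuth2 credentials

    public DrivePhotoUploader(Drive drive) {
        this.drive = drive;
    }

    public String upload(String name, InputStream photoStream) throws IOException {
        File metadata = new File();
        metadata.setName(name);
        InputStreamContent content = new InputStreamContent("image/jpeg", photoStream);
        // Create the file in Drive and return its id so it can be referenced from a JSP page.
        File uploaded = drive.files().create(metadata, content)
                .setFields("id")
                .execute();
        return uploaded.getId();
    }
}
```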
Spring Cloud AWS may also be able to help you.

How to pipe upload stream of large files to Azure Blob storage via Node.js app on Azure Websites?

I am building a Video service using Azure Media Services and Node.js
Everything went semi-fine until now; however, when I tried to deploy the app to Azure Web Apps for hosting, any large file upload fails with a 404.13 error.
Yes, I know about maxAllowedContentLength, and no, that is NOT a solution. It only goes up to 4GB, which is pathetic - even HDR environment maps can easily exceed that amount these days. I need to enable users to upload files up to 150GB in size. However, when Azure Web Apps receives a multipart request, it appears to buffer it into memory until a certain threshold of either bytes or seconds (upon hitting which, it returns a 404.13, or a 502 if my connection is slow) BEFORE running any of my server logic.
I tried the Transfer-Encoding: chunked header in the server code, but even if that would help, it doesn't actually matter, since Web Apps doesn't let my code run in the first place.
For the record: I am using Sails.js on the backend and Skipper is handling the stream piping to Azure Blob Service. Localhost obviously works just fine regardless of file size. I made a duplicate of this question on the MSDN forums, but those are as slow as always; you can go there to see what I have found so far.
Clientside I am using Ajax FormData to serialize the fields (one text field and one file) and send them, using the progress event to track upload progress.
Is there ANY way to make this work? I just want it to let my serverside logic handle the data stream, without buffering the bloody thing.
Rather than running all this data through your web application, you would be better off having your clients upload directly to a container in your Azure blob storage account.
You will need to enable CORS on your Azure Storage account to support this. Then, in your web application, when a user needs to upload data you would instead generate a SAS token for the storage account container you want the client to upload to and return that to the client. The client would then use the SAS token to upload the file into your storage account.
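The question uses Node.js, but the server-side part of this pattern is the same in any language. A minimal sketch of generating a container-scoped SAS token with the Java azure-storage-blob SDK, assuming a pre-created container named uploads, might look like:

```java
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.sas.BlobContainerSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;

import java.time.OffsetDateTime;

public class SasTokenService {

    // Connection string and container name are placeholders for illustration.
    private final BlobContainerClient container = new BlobServiceClientBuilder()
            .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
            .buildClient()
            .getBlobContainerClient("uploads");

    public String createUploadSas() {
        // Allow the client to create and write blobs, valid for one hour.
        BlobContainerSasPermission permissions = new BlobContainerSasPermission()
                .setCreatePermission(true)
                .setWritePermission(true);
        BlobServiceSasSignatureValues values =
                new BlobServiceSasSignatureValues(OffsetDateTime.now().plusHours(1), permissions);
        // The browser appends this token to the container/blob URL when it uploads.
        return container.generateSas(values);
    }
}
```

The client can then PUT the file (or its blocks, for very large files) directly against the blob URL with the SAS token appended, so the bytes never pass through the web app.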
On the back-end, you could fire off a web job to do whatever processing you need to do on the file after it's been uploaded.
Further details and sample ajax code to do this are available in this blog post from the Azure Storage team.

No permanent filesystem for Heroku?

The app I am currently hosting on Heroku allows users to submit photos. Initially, I was thinking about storing those photos on the filesystem, as storing them in the database is apparently bad practice.
However, it seems there is no permanent filesystem on Heroku, only an ephemeral one. Is this true and, if so, what are my options with regards to storing photos and other files?
It is true. Heroku allows you to create cloud apps, but those cloud apps are not "permanent" - they are instances (or "slugs") that can be replicated multiple times on Amazon's EC2 (that's why scaling is so easy with Heroku). If you push a new version of your app, the slug is recompiled, and any files you had saved to the filesystem in the previous instance are lost.
Your best bet (whether on Heroku or otherwise) is to save user submitted photos to a CDN. Since you are on Heroku, and Heroku uses AWS, I'd recommend Amazon S3, with optionally enabling CloudFront.
This is beneficial not only because it gets around Heroku's ephemeral "limitation", but also because a CDN is much faster, and will provide a better service for your webapp and experience for your users.
Depending on the technology you're using, your best bet is likely to stream the uploads to S3 (Amazon's storage service). You can interact with S3 with a client library to make it simple to post and retrieve the files. Boto is an example client library for Python - they exist for all popular languages.
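For example, a minimal sketch in Java using the AWS SDK for Java v2 (bucket name, key, and region are placeholders) might look like this:

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.io.InputStream;

public class S3PhotoStore {

    private final S3Client s3 = S3Client.builder()
            .region(Region.US_EAST_1) // assumption: use your bucket's region
            .build();

    public void store(String key, InputStream upload, long contentLength) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("my-photo-bucket") // placeholder bucket name
                .key(key)
                .build();
        // Stream the upload straight to S3 instead of writing it to the ephemeral local disk.
        s3.putObject(request, RequestBody.fromInputStream(upload, contentLength));
    }
}
```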
Another thing to keep in mind is that Heroku filesystems are not shared either. This means you'll have to upload the file to S3 from the same application that handles the upload (instead of, say, a worker process). If you can, load the upload into memory, never write it to disk, and post it directly to S3. This will increase the speed of your uploads.
Because Heroku is hosted on AWS, the streams to S3 happen at a very high speed. Keep that in mind when you're developing locally.

Azure MVC ServiceConfiguration.cscfg change after deployment

I've added a setting to ServiceConfiguration.cscfg with the idea that it will allow me to turn on/off a feature of the MVC app. The code correctly reads the setting; however, while running the app in the local dev compute emulator, I don't see the ServiceConfiguration.cscfg file in the .csx directory. I only see the ServiceDefinition.csdef file, which has the key but not the value. I want to change the value.
The idea is that I have a text file I can alter after deploying that will allow me to turn on/off parts of the app by opening text file on Azure and making changes.
I don't want to be dependent on Azure Storage or a hop off the Azure box.
What is the best way to change my own app config setting in azure?
Well,
Your path is correct. ServiceConfiguration.cscfg is one of the places where you can keep service-wide settings. There is one gotcha here: you can't dynamically change the service configuration with the local Azure emulator. If you want to change something in the service configuration, you have to stop your debugging session, change the setting, and start a new session. Only in the live Azure environment can you change the service configuration, and it will be propagated to all instances.
I intentionally bolded service-wide settings. With full IIS mode (available since SDK 1.3) you can have multiple web sites per single Web Role. That would mean multiple applications. Now, I would not want to mix the settings for one of the applications with the settings for the other. That is why I would put application-wide settings in an Azure Table, and your application can query this table every N seconds/minutes, depending on your targeted response time.
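The original application is ASP.NET, but as an illustration of that pattern (one table entity per setting, re-read periodically), a sketch with the Java azure-data-tables client could look like the following; the table name, keys, and property name are assumptions:

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableClientBuilder;
import com.azure.data.tables.models.TableEntity;

public class FeatureSettings {

    // Table name, partition/row keys, and property name are placeholders.
    private final TableClient table = new TableClientBuilder()
            .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
            .tableName("AppSettings")
            .buildClient();

    private volatile boolean featureEnabled;

    // Call this every N seconds/minutes (e.g. from a scheduled task) so that
    // editing the table entity toggles the feature without a redeployment.
    public void refresh() {
        TableEntity entity = table.getEntity("settings", "featureX");
        featureEnabled = Boolean.parseBoolean(String.valueOf(entity.getProperty("Enabled")));
    }

    public boolean isFeatureEnabled() {
        return featureEnabled;
    }
}
```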
I wonder what your thoughts are behind the "I don't want to be dependent on Azure Storage" statement? After all, you are developing an application for the Windows Azure platform. Aren't you going to have any dynamic data? File uploads or file generation or anything like that? Check out the Windows Azure Storage SLA. I don't think Windows Azure Storage (in your case I suggest Tables) would do any harm to your application, especially when your service deployment is in the same geographic region as your storage account.
