Has anyone ever used Felix Cloud Storage on Heroku?

Felix Cloud Storage is a Heroku add-on that lets you store files on AWS S3.
On the free tier you get up to 100 GB-months of shared storage each month.
It looks like there is enough space in the shared bucket, so I should be able
to upload at least one file.
The issue I have is that when I try to create a shared space, it throws an error:
I wanted to contact the Felix support team, but I couldn't find any contact information. I was wondering
if anyone has ever used felix-cloud in their Heroku app and, if so, what did you do differently?
This is a rare and special case: for file storage it's not feasible to use AWS S3 or other similar services directly; it has to go through a Heroku add-on.

Related

Location of files written by spring-boot application on Google Cloud

I have deployed a Spring Boot application on Google Cloud. It is a web application in which the JS client captures frames from the camera and sends them to the server. The server processes these images and stores them in a directory. The application works perfectly fine locally and on Google Cloud, but on the cloud I am not able to locate the directory created by the application. Could you please suggest a possible solution?
In the Google App Engine environment, the local filesystem that your application is deployed to is not writable. This behavior ensures the security and scalability of your application.
If your application needs to save and write files, I suggest you take a look at this document, where you can find different kinds of solutions. For example, as the document mentions:
"App Engine creates a default bucket when you create an app. This bucket provides the first 5GB of storage for free and includes a free quota for Cloud Storage I/O operations. You can create other Cloud Storage buckets, but only the default bucket includes the first 5GB of storage for free."

How to get the configuration of instances deployed in Alibaba Cloud?

Our customer uses Alibaba Cloud to deploy their application. They have rented dozens of VMs/instances. We have been asked to retrieve the instance configuration (i.e. number of cores, memory, network bandwidth, SSD, disk type, zone, etc.) programmatically via an API. We have found the Alibaba open APIs on GitHub. [1]
Could someone point out exactly which API we should call to get the instance configuration?
[1] https://github.com/aliyun/aliyun-openapi-java-sdk/
You can use the DescribeInstanceAttribute API. I find the API Explorer very helpful for debugging APIs: you don't need to install SDKs or CLIs, write any code, or configure access keys. Just log in to your Alibaba Cloud account, fill in some inputs, and click Send Request. Everything is there!
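If you prefer to call it from code, a rough sketch with the Java SDK referenced in [1] might look like the following; the region, credentials, instance ID, and exact getter names are illustrative and may differ between SDK versions:

```java
import com.aliyuncs.DefaultAcsClient;
import com.aliyuncs.IAcsClient;
import com.aliyuncs.ecs.model.v20140526.DescribeInstanceAttributeRequest;
import com.aliyuncs.ecs.model.v20140526.DescribeInstanceAttributeResponse;
import com.aliyuncs.exceptions.ClientException;
import com.aliyuncs.profile.DefaultProfile;

public class InstanceConfig {

    public static void main(String[] args) throws ClientException {
        // Region, access key ID/secret, and instance ID are placeholders.
        DefaultProfile profile = DefaultProfile.getProfile(
                "cn-hangzhou", "<accessKeyId>", "<accessKeySecret>");
        IAcsClient client = new DefaultAcsClient(profile);

        DescribeInstanceAttributeRequest request = new DescribeInstanceAttributeRequest();
        request.setInstanceId("i-xxxxxxxxxxxxxxxx");

        DescribeInstanceAttributeResponse response = client.getAcsResponse(request);

        // Getter names may differ slightly between SDK versions.
        System.out.println("Instance type: " + response.getInstanceType());
        System.out.println("vCPUs:         " + response.getCpu());
        System.out.println("Memory (MB):   " + response.getMemory());
        System.out.println("Zone:          " + response.getZoneId());
    }
}
```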

Transferring between IaaS providers with only raw images of instances and volumes

My system runs on a cloud environment provided by an academic research computing organization. Problem is, it's unreliable, service is bad, and it can be slow.
So I'm considering switching to a different cloud provider, but I want to know whether it is possible to create a new compute instance and volume from just snapshots (raw images) of those instances and volumes. I asked DigitalOcean and they said this isn't possible (I'd have to create a new droplet and reinstall/transfer everything).
I've also emailed AWS but no response yet. If this is possible (just because it seems like the simplest route), are there any recommendations of cloud providers?
My system is running Ubuntu with Apache and MySQL. It hosts a WordPress website, a large database, and a series of Java tools. The instance snapshot is about 20 GB and the storage volume is 250 GB.
Thanks in advance!

Modify the URL of my cloud service

How do I change the URL of a Windows Azure application that is in the cloud? I want to change the URL from xxx.cloudapp.net to yyy.cloudapp.net.
Thanks for your help.
You can't do that! The only way is to create a new cloud service and deploy your package there.
Please note that creating a cloud service is nothing more than reserving the DNS name (xxx.cloudapp.net). You are not charged for creating cloud services; you are only charged when you deploy something to them. So you can create as many as you wish (though I think there is a soft limit on the number of cloud services you can create, so delete the ones you are not planning to use).
When you go to production, I highly suggest using your own custom domain (e.g. www.mycompany.com). For this, please follow the instructions here.

No permanent filesystem for Heroku?

The app I am currently hosting on Heroku allows users to submit photos. Initially, I was thinking about storing those photos on the filesystem, as storing them in the database is apparently bad practice.
However, it seems there is no permanent filesystem on Heroku, only an ephemeral one. Is this true and, if so, what are my options with regards to storing photos and other files?
It is true. Heroku allows you to create cloud apps, but those cloud apps are not "permanent" - they are instances (or "slugs") that can be replicated multiple times on Amazon's EC2 (that's why scaling is so easy with Heroku). If you push a new version of your app, the slug is recompiled, and any files you had saved to the filesystem in the previous instance are lost.
Your best bet (whether on Heroku or otherwise) is to save user-submitted photos to a CDN. Since you are on Heroku, and Heroku uses AWS, I'd recommend Amazon S3, optionally with CloudFront enabled.
This is beneficial not only because it gets around Heroku's ephemeral "limitation", but also because a CDN is much faster and will provide better service for your web app and a better experience for your users.
Depending on the technology you're using, your best bet is likely to stream the uploads to S3 (Amazon's storage service). You can interact with S3 through a client library, which makes it simple to post and retrieve files. Boto is an example client library for Python; they exist for all popular languages.
Another thing to keep in mind is that Heroku filesystems are not shared either. This means you'll have to put the file on S3 from the same application that handles the upload (instead of, say, a worker process). If you can, load the upload into memory, never write it to disk, and post it directly to S3. This will increase the speed of your uploads.
Because Heroku is hosted on AWS, uploads to S3 happen at very high speed. Keep that in mind when you're developing locally.
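As an illustration, here is a minimal sketch of posting an in-memory upload straight to S3 with the AWS SDK for Java (v1); the bucket name is a placeholder and credentials are assumed to come from the environment (e.g. Heroku config vars):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

import java.io.ByteArrayInputStream;

public class PhotoUploader {

    // Reads credentials and region from the environment or instance configuration.
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Placeholder bucket name.
    private static final String BUCKET = "my-app-photos";

    public void upload(String key, byte[] photoBytes, String contentType) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(photoBytes.length);
        metadata.setContentType(contentType);

        // The upload never touches Heroku's ephemeral disk: it goes from memory
        // straight to S3.
        s3.putObject(BUCKET, key, new ByteArrayInputStream(photoBytes), metadata);
    }
}
```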
