I have a unique set-up and I am trying to determine whether Heroku can accommodate it. There is so much marketing around polyglot applications, but I can only actually find one example!
My application consists of:
A website written in Django
A separate Java application, which takes files uploaded by users, parses them, and stores the data in a database
A shared database accessible by both applications
Because these user-uploaded files can be enormous, I want the uploaded file to go directly to the Java application. My preferred architecture is:
The Django-generated webpage displays the upload form.
The form does an AJAX submit to the Java application.
The browser starts polling the database to see if the Java application has inserted the data (a sketch of such a status endpoint follows this list).
Meanwhile, the Java application does its thing with the user-uploaded file and updates the database when it's done.
The Django webpage AJAX-refreshes a div with the results of the user upload once the polling mechanism sees that the upload is complete.
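To make the polling step concrete, here is a minimal sketch of what the status-check endpoint could look like on the Django side. The `UploadResult` model, its fields, and the view name are all hypothetical; the Java application is assumed to insert a row keyed by an upload id once it finishes parsing.

```python
# views.py -- hypothetical Django polling endpoint (names are illustrative only)
from django.http import JsonResponse

from .models import UploadResult  # assumed model populated by the Java app


def upload_status(request, upload_id):
    """Return whether the Java application has finished processing the upload."""
    try:
        result = UploadResult.objects.get(upload_id=upload_id)
    except UploadResult.DoesNotExist:
        # The Java app has not inserted the row yet; the browser keeps polling.
        return JsonResponse({"done": False})
    return JsonResponse({"done": True, "summary": result.summary})
```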
The big issue I can't figure out is whether I can get both the Django and the Java apps either running on the same set of dynos, or on different dynos but under the same domain, to avoid AJAX cross-domain issues. Does Heroku support URL-level routing? For example:
Django application available at http://www.myawesomewebsite.com
Java application available at http://www.myawesomewebsite.com/javaurl/
If this is not possible, does anyone have any ideas for work-arounds? I know I could have the user upload the file to Django and have Django send the request to Java from the server-side instead of the client side, but that's an awful lot of passing around of enormous files.
Thanks so much!
Heroku does not support routing based on the URL path. Polyglot components should exist as their own subdomains and operate in a cross-domain fashion.
As a side note: have you considered uploading directly to S3, instead of uploading to your app on Heroku, which will then (presumably) upload to S3? If you're dealing with cross-domain file uploads, this is worth considering for its high level of scalability.
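If you go the direct-to-S3 route, the usual pattern is for your Django app to hand the browser a short-lived presigned POST and let the browser upload straight to the bucket. A minimal sketch using boto3, where the bucket name, key scheme, and view name are assumptions:

```python
# Hypothetical Django view that issues a presigned POST so the browser can
# upload directly to S3 instead of streaming the file through a dyno.
import boto3
from django.http import JsonResponse

s3 = boto3.client("s3")


def presign_upload(request):
    key = f"uploads/{request.GET['filename']}"  # illustrative key scheme
    post = s3.generate_presigned_post(
        Bucket="my-awesome-bucket",  # assumed bucket name
        Key=key,
        Conditions=[["content-length-range", 0, 5 * 1024 ** 3]],  # cap at 5 GB
        ExpiresIn=3600,  # URL valid for one hour
    )
    # The browser POSTs the file to post["url"] with post["fields"] as form data.
    return JsonResponse(post)
```

The Java application could then pull the object from the bucket (or react to an S3 event notification) rather than receiving the file through either app directly.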
Related
I have an Android app which gets its data from a Firebase Realtime Database. To update the Realtime Database automatically, I've written a Python script which crawls data from a website and processes it, then sends the data to my Firebase Realtime Database using the Admin SDK. I want to store and execute the script on my server so that it runs automatically twice a day. Is it safe to upload my serviceAccountKey.json along with it? If it is not, then how can I achieve my desired functionality?
Yes, it is fine to store the service account JSON file on your own server. That's the intended use case. Just make sure it's not exposed to users in any way.
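For reference, a minimal sketch of what the server-side script could look like, assuming the key file sits next to the script outside any web-served directory; the database URL and data path are placeholders:

```python
# Hypothetical crawler/updater script run twice a day (e.g. from cron).
# The service account key stays on the server's filesystem, never in client code.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-id.firebaseio.com",  # placeholder URL
})


def crawl_site():
    """Placeholder for the actual scraping/processing logic."""
    return {"last_updated": "2024-01-01", "items": []}


data = crawl_site()
db.reference("/crawled_data").set(data)  # overwrite the node with fresh data
```

Restrict filesystem permissions on the key file (e.g. chmod 600) and keep it out of version control.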
I am building a Video service using Azure Media Services and Node.js
Everything went semi-fine until now; however, when I tried to deploy the app to Azure Web Apps for hosting, any large file upload fails with a 404.13 error.
Yes, I know about maxAllowedContentLength, and no, that is NOT a solution. It only goes up to 4 GB, which is pathetic - even HDR environment maps can easily exceed that these days. I need to enable users to upload files up to 150 GB in size. However, when Azure Web Apps receives a multipart request, it appears to buffer it into memory until it hits some threshold of either bytes or seconds (at which point it returns a 404.13, or a 502 if my connection is slow) BEFORE running any of my server logic.
I tried the Transfer-Encoding: chunked header in the server code, but even if that would help, it doesn't actually matter, because Web Apps never lets my code run in the first place.
For the record: I am using Sails.js on the backend, and Skipper handles piping the stream to Azure Blob Storage. Localhost obviously works just fine regardless of file size. I made a duplicate of this question on the MSDN forums, but those are as slow as always; you can go there to see what I have found so far.
Client-side, I am using Ajax FormData to serialize the fields (one text field and one file) and send them, using the progress event to track upload progress.
Is there ANY way to make this work? I just want it to let my server-side logic handle the data stream, without buffering the bloody thing.
Rather than running all this data through your web application, you would be better off having your clients upload directly to a container in your Azure blob storage account.
You will need to enable CORS on your Azure Storage account to support this. Then, in your web application, when a user needs to upload data you would instead generate a SAS token for the storage account container you want the client to upload to and return that to the client. The client would then use the SAS token to upload the file into your storage account.
On the back-end, you could fire off a web job to do whatever processing you need to do on the file after it's been uploaded.
Further details and sample AJAX code to do this are available in this blog post from the Azure Storage team.
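The question is Node-based, but as a language-agnostic illustration of the SAS step, here is a sketch using the Python azure-storage-blob SDK; the account, key, and container are placeholders, and the client then uploads the file straight to Blob Storage using the returned URL:

```python
# Hypothetical endpoint logic: mint a short-lived, write-only SAS for one blob
# so the client uploads the 150 GB file directly to Blob Storage.
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT_NAME = "mystorageaccount"  # placeholder account
ACCOUNT_KEY = "..."                # load from configuration, never hard-code
CONTAINER = "uploads"              # placeholder container


def make_upload_url(blob_name: str) -> str:
    token = generate_blob_sas(
        account_name=ACCOUNT_NAME,
        container_name=CONTAINER,
        blob_name=blob_name,
        account_key=ACCOUNT_KEY,
        permission=BlobSasPermissions(create=True, write=True),  # upload only
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    # The client appends this token to the blob URL and uploads in blocks.
    return f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{blob_name}?{token}"
```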
I have a server side API running on Heroku for one of my iOS apps, implemented as a Ruby Rack app (Sinatra). One of the main things the app does is upload images, which the API then processes for meta info like size and type and then stores in S3. What's the best way to handle this scenario on Heroku since these requests can be very slow as users can be on 3G (or worse)?
Your best option is to upload the images directly to Amazon S3 and then have it ping you with the details of what was uploaded.
https://devcenter.heroku.com/articles/s3#file-uploads
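The API in question is Sinatra, but the shape of the "ping back" step is the same in any language. A Python sketch, assuming the client tells the API which key it uploaded so the API can read the metadata straight from S3 rather than handling the image bytes itself (bucket and table details are placeholders):

```python
# Hypothetical post-upload handler: the client uploads directly to S3 and then
# notifies the API with the object key; the API reads size/type from S3 itself.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # assumed bucket name


def record_upload(key: str) -> dict:
    head = s3.head_object(Bucket=BUCKET, Key=key)
    meta = {
        "key": key,
        "size_bytes": head["ContentLength"],
        "content_type": head["ContentType"],
    }
    # Store `meta` in the database instead of streaming the image through the dyno.
    return meta
```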
I am writing a service in which a user chooses an image from a URL (not my domain), and later he and others can view that image.
I need to save this image to a third party server (S3).
After a lot of wasted time, I found I cannot do it from the client side due to security restrictions (I can't fetch the third-party image data and send it from the client side without alerting the client, which is just bad).
I also do not want to do the uploading on my server, because I run Rails on Heroku and the workers are expensive.
So I thought of two options:
use something like transloadit.com,
or write a service on EC2 that will scan my database, find the rows where the images have not been uploaded yet, and upload them.
I decided to go for EC2 and S3 because the solution I am writing is meant for enterprise, and it seems it will sound better as part of the architecture when presented to customers.
My question is: what setup do I need so I can access the Heroku database from an external service?
Any better ideas on how to solve this?
So you want to effectively write a worker, but instead of doing it on Heroku you want to do it on EC2? That feels like more work.
As for the database, did you see the documentation? It shows how to get the URL.
PS. Did you not find it in the docs?
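As a sketch of that worker running on EC2, assuming the Heroku Postgres URL has been exported as DATABASE_URL (e.g. via `heroku config:get DATABASE_URL`) and that the table and column names below are placeholders:

```python
# Hypothetical worker: find rows whose image has not been copied to S3 yet,
# fetch each image from its source URL, upload it, and mark the row done.
import os

import boto3
import psycopg2
import requests

conn = psycopg2.connect(os.environ["DATABASE_URL"])  # Heroku Postgres URL
s3 = boto3.client("s3")
BUCKET = "my-enterprise-images"  # placeholder bucket

with conn, conn.cursor() as cur:
    cur.execute("SELECT id, source_url FROM images WHERE uploaded = false")
    for row_id, source_url in cur.fetchall():
        body = requests.get(source_url, timeout=30).content
        s3.put_object(Bucket=BUCKET, Key=f"images/{row_id}", Body=body)
        cur.execute("UPDATE images SET uploaded = true WHERE id = %s", (row_id,))
```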
I want to be able to synchronize several text files on a user's PC in real time from my web application. Basically, I want a few data files on the local PC to mirror the state of a user's data in my web application, so that if the web application or the user's internet connection is lost, he can use those data files to get some critical info (possibly using HTML/JavaScript code stored with those files that would run in offline mode against those data files).
I know that Google Gears has a lot of interesting tools for working with offline state, but I'd prefer an even simpler approach in HTML/JavaScript that wouldn't be as reliant on Google Gears. I'd rather use Google Gears just to create those files and slowly keep them in sync with the web application's version of the data throughout the day.
Update on answers:
PersistJS is a good suggestion that I will look into, but I was hoping people would direct me towards really good Google Gears tutorial resources.
You can save data on the browser using PersistJS, which uses the best client-side persistent storage mechanism it can find, supporting:
Flash
Google Gears
HTML 5 storage specs
browser-specific extensions
cookies
When your app reconnects, you can resync. Creating and reading text files is something the browser will generally block your web site from doing.
At the risk of stating the obvious: if you want to store user state locally, aren't cookies the standard way?
Maybe more than one cookie will be needed, but that sounds like the simplest approach.
You're going to need to make an ActiveX control and a Firefox plugin to get these permissions. Short of that, I agree with orip: try using PersistJS.
You can ask the user to download a subversion client that is predefined to interface with your subversion server only. Then write your web application to interface with the subversion service from your side only.
There is a good deal of security risk associated with granting access to a user's file system, so you will want to lock down all possible points of exploitation. Ensure that the user cannot access the Subversion server except through the client that you ask them to install, and that the connection between the application server and the Subversion server is extremely secure, so that the transmission path cannot be compromised and malicious logic loaded onto the application server cannot reach the Subversion server.

I would encrypt the transmission path between those two servers and put the Subversion server behind the firewall separating your network DMZ. I would also suggest a challenge/response mechanism between the application server and the Subversion server, to prevent malicious code from appearing to be legitimate decisions made on the application server. Finally, ensure that data flows only from the application server to the Subversion server, in a unidirectional fashion, because if malicious logic is planted on your application server, then any data that comes from the Subversion server is compromised without even accessing that server.
You could use the FileSystemObject (FSO) through JavaScript; however, it is dependent on Microsoft, as it is an ActiveX control. It would also require permissions in the browser, or perhaps an HTA (HTML Application).
http://www.webreference.com/js/column71/
It's a real security issue, so most avenues are inherently closed down.
The web model was inherently designed not to allow the server to push data down to the client. Things are changing slowly now, though; maybe you could do this with WebSockets?