I often see that a blob URL can take three different formats:
wasb://XXXXXXXXXX@XXXXX.blob.core.windows.net
https://storageaccount.blob.core.windows.net/container/path/to/blob
abfss://container@storageaccount.dfs.core.windows.net/path/to/blob
I know the differences between abfss and wasbs, but I am not quite clear on the use cases for https vs. abfss. When should we use which format?
Does a blob always have both https and abfss formats? In which cases does a blob have an abfss format but not an https format?
I would like to generate a PDF with a QR code for a web-app I'm making. Should I generate the PDF on the server side or the client side?
In my experience, generating on the server has the following advantages:
- More options: more tools/libraries, language support, fonts, etc.
- Storage options: your server can store the PDF in a database/document store if required.
- Security: your stored PDFs are secured on the server and hidden from the browser.
- Reliability: the PDF is generated on the server, stored, and available to the client. The same PDF can live in storage for as long as required and be delivered to the client whenever needed, and you can confirm at any time that the PDF the client has is the same one as stored.
Generating on the client might be quicker, so which path is best depends on the life-cycle of your PDF.
I hope that helps.
I'm creating a web app that uses GraphQL, and the requirement is to handle GraphQL operations over WebSocket. I managed to achieve this by using subscriptions-transport-ws; however, I'm stuck on handling file uploads. I came across streaming files from client to server using socket.io-stream, but that leads to having two separate APIs, one for textual data and one for files. So I was wondering if there is a way to combine this functionality into GraphQL.
I ran into the exact same problem, and my file sizes were such that converting to base64 wasn't a feasible option. I also didn't want to use a separate library outside of GraphQL because that would require substantial setup changes in my server (to handle both GraphQL and non-GraphQL).
Fortunately, the solution ended up being fairly simple. I created two GraphQL clients on the front-end - one for the majority of my traffic exclusively over WebSockets, and another exclusively over HTTP just for operations that involved file uploads.
Now I can simply specify which client I want depending on whether the operation involves file uploads, without complex changes on my server or impacting the real-time benefits of all of my other queries.
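To make that routing concrete, here is a minimal TypeScript sketch of the selection logic. The two client objects are illustrative stand-ins (not any specific library's API) for a WebSocket-configured client and an upload-capable HTTP client:

```typescript
// Hypothetical stand-ins for two real GraphQL clients: one configured
// with a WebSocket link, one with an upload-capable HTTP link.
type GraphQLClient = { name: string };

const wsClient: GraphQLClient = { name: "ws" };     // subscriptions + regular operations
const httpClient: GraphQLClient = { name: "http" }; // file-upload operations only

// Pick the HTTP client whenever any variable is a Blob/File (an upload);
// otherwise stay on the WebSocket client.
function pickClient(variables: Record<string, unknown>): GraphQLClient {
  const hasFile = Object.values(variables).some(
    (v) => typeof Blob !== "undefined" && v instanceof Blob
  );
  return hasFile ? httpClient : wsClient;
}
```

The point of the design is that the decision is made per operation, so everything except uploads keeps the real-time WebSocket path.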
I am also still looking for a proper solution to this, but as an alternative approach you can convert your file data into base64 and pass it as a string. This only works for small files, since a string cannot hold large amounts of data the way a buffer can.
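A minimal sketch of that base64 workaround (the function names are illustrative; the encoded string travels as an ordinary GraphQL string variable):

```typescript
// Encode file bytes to a base64 string that fits in a GraphQL string
// variable. Note base64 inflates the payload by roughly 33%.
function encodeFile(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString("base64");
}

// Server side: decode the string back into raw bytes.
function decodeFile(b64: string): Uint8Array {
  return new Uint8Array(Buffer.from(b64, "base64"));
}
```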
I need to be able to obtain performance metrics, such as page loading times and REST call times, through a Google Chrome extension and a Mozilla Firefox add-on.
The challenge is finding the extension itself: I can't seem to find one with this functionality for both Chrome and Firefox.
Another challenge is that the data obtained from the performance metrics should be exportable so it can be stored in a structured database.
Does anyone know an extension/add-on that meets these requirements?
I don't think you will find such an extension; in particular, storing the results directly into a database will not be possible, as extensions can only access a limited subset of browser functionality (the so-called sandbox).
So instead of looking for an extension, I would recommend investigating the Navigation Timing API.
The output of the API can be converted into, for example, a JSON string using the JSON.stringify() function.
JSON is a near-universal data interchange format, so databases can either consume it directly or you can convert it into whatever you need.
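As an illustration, the serialization step might look like this sketch. The interface lists only a couple of the many fields a real PerformanceNavigationTiming entry carries, and the browser-side collection line is shown only as a comment:

```typescript
// A small subset of the fields on a PerformanceNavigationTiming entry.
interface NavTiming {
  name: string;                    // the page URL
  domContentLoadedEventEnd: number;
  loadEventEnd: number;
}

// Turn collected timing entries into JSON, ready for export to a database.
function serializeTimings(entries: NavTiming[]): string {
  return JSON.stringify(entries);
}

// In a browser context you would collect the entries first, e.g.:
// const entries = performance.getEntriesByType("navigation");
```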
If you're looking for a way to automate everything, you can consider using the Apache JMeter tool as a unified testing solution:
- Browser automation is provided via the WebDriver Sampler.
- Database connectivity is implemented via the JDBC Connection Configuration and JDBC Request elements.
JMeter is by nature a load testing tool, so you can combine the above steps, which check client-side performance, with a backend performance test.
I am making a lot of _all_docs requests to CouchDB's HTTP server. One thing I'm realizing is that the data is not compressed, which results in large file sizes. Even when using limit and skip, the files can sometimes be 10MB each. That doesn't cause any problems for my app, but it does mean that if a connection to our CouchDB server is slower than our office connection, it will go rather slow.
Is there any way I can enable HTTP compression? I am not referring to attachments - just the JSON files.
Also, I am using Windows Server - not Linux/Unix.
Thanks!
There is no direct support in CouchDB, but it has been requested (so voice your support there if you want this included).
That being said, there are a number of options you have. First, you can set up nginx as a reverse proxy and allow it to compress (and possibly cache) responses for you. After a quick search, I found this plugin that you install in CouchDB directly.
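To see why a gzip-capable reverse proxy helps so much here, this rough sketch shows how well repetitive _all_docs-style JSON compresses (the row data is made up, and Node's zlib stands in for nginx's gzip module):

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// Fabricate a repetitive _all_docs-like response body.
const rows = Array.from({ length: 1000 }, (_, i) => ({
  id: `doc-${i}`,
  key: `doc-${i}`,
  value: { rev: "1-abc" },
}));
const body = JSON.stringify({ total_rows: rows.length, rows });

// gzip is the scheme a proxy negotiates via the Accept-Encoding header.
const compressed = gzipSync(Buffer.from(body));

// Decompression restores the original bytes exactly, so clients lose nothing.
const restored = gunzipSync(compressed).toString();
```

Repetitive JSON like this typically shrinks by an order of magnitude, which is exactly the win you'd get from letting nginx compress the responses in transit.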
Another thing is that CouchDB does a pretty solid job of allowing clients to cache reliably. You can leverage this to prevent repeatedly downloading the same large resource.
Parse is turning out to be convenient to use. However, the service does not allow streaming videos to the server. I need a service that allows my users to stream video to the server while they are recording. I was able to find the KickFlip SDK. Does anyone know how I might pair KickFlip with Parse for video streaming? Even if I were to use both services, as opposed to just the KickFlip SDK, how would I coordinate the two? Parse provides a rich social database, but storing a video as a ParseFile is limiting (no streaming and a 10MB maximum).
This should work with a REST client and a normal POST (I would not build your scenario without adding something like Heroku/EC2 alongside the parse.com PaaS architecture, but it will work). I don't know KickFlip, but you must understand the low-level protocol details before you include it in a solution.
In your POST request headers, you would need to include the combination of header values consistent with a client posting a stream over HTTPS to Parse (more here): i.e., no Content-Length header, request chunked transfer encoding, and use HTTP/1.1. On a file POST to Parse, IMO you don't need to worry about a timeout. You will have to append to the end of your posted byte stream the normal HTTP chunked-encoding byte sequence signifying END_OF_DATA.
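For reference, the chunked-encoding framing (including that END_OF_DATA marker) looks like this sketch; real HTTP libraries emit it for you automatically when you send a body without a Content-Length:

```typescript
// Frame one chunk as "<size-in-hex>\r\n<bytes>\r\n" (RFC 7230 §4.1).
function frameChunk(chunk: Buffer): Buffer {
  return Buffer.concat([
    Buffer.from(chunk.length.toString(16) + "\r\n"),
    chunk,
    Buffer.from("\r\n"),
  ]);
}

// The stream ends with a zero-length chunk: the END_OF_DATA marker.
const END_OF_DATA = Buffer.from("0\r\n\r\n");

function chunkedBody(chunks: Buffer[]): Buffer {
  return Buffer.concat([...chunks.map(frameChunk), END_OF_DATA]);
}
```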
A sample curl interaction with the relevant headers would show this, and observe the relevant Parse file-size limitations...