I've set up the pipeline and it works (I followed this documentation: https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-ftp); it downloads the zip file and loads it into blob storage.
However, the resulting zip file is corrupted: it has a slightly different size than the original file.
I set Infer Content Type to Yes. I also tried setting it to No, but that didn't change the result.
I tried both hardcoded and dynamic naming.
I would like to write a regular CSV file to Storage, but what I get is a folder named "sample_file.csv" with 4 files under it. How can I create a normal CSV file from a data frame in Azure Storage Gen2?
I'm happy with any advice or a link to an article.
df.coalesce(1).write.option("header", "true").csv(TargetDirectory + "/sample_file.csv")
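For reference, Spark always writes a directory of part files, so one common workaround is to write to a temporary folder and then move the lone part file to the name you want. A minimal sketch (it assumes the spark session and TargetDirectory from the snippet above; the temp directory name is made up, and the rename uses the Hadoop FileSystem API through the JVM gateway):

# Sketch: write a single partition to a temp directory, then rename the part file.
temp_dir = TargetDirectory + "/_tmp_sample_file"
df.coalesce(1).write.option("header", "true").mode("overwrite").csv(temp_dir)

# Get the Hadoop FileSystem for the target path via the JVM gateway.
hadoop = spark.sparkContext._jvm.org.apache.hadoop
conf = spark.sparkContext._jsc.hadoopConfiguration()
fs = hadoop.fs.Path(temp_dir).getFileSystem(conf)

# Find the single part-*.csv Spark produced and move it into place.
for status in fs.listStatus(hadoop.fs.Path(temp_dir)):
    name = status.getPath().getName()
    if name.startswith("part-") and name.endswith(".csv"):
        fs.rename(status.getPath(), hadoop.fs.Path(TargetDirectory + "/sample_file.csv"))
fs.delete(hadoop.fs.Path(temp_dir), True)  # clean up the temp directory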
So I am trying to upload a file with Celery, which uses Redis, on my Heroku website. I am trying to upload a .exe file that is 20MB in size. Heroku says that on their hobby-dev tier the max memory is 25MB. But when I try to upload the file through Celery (turning it from bytes to base64, decoding it, and sending it to the task function), I get kombu.exceptions.OperationalError: OOM command not allowed when used memory > 'maxmemory'. Keep in mind that when I try to upload, e.g., a 5MB file, it works fine, but 20MB doesn't. I am using Python with the Flask framework.
There are two ways to store files in a DB (Redis is just an in-memory DB). You can either store the blob itself in the DB (for small files, say a few KBs), or you can store the file somewhere else (e.g., on disk) and store only a pointer to the file in the DB.
So for your case, store the file on disk and place only the file pointer in the DB.
The catch here is that Heroku has an ephemeral filesystem that gets erased every 24 hours, or whenever you deploy a new version of the app.
So you'll have to do something like this:
Write a small function to store the file on the local disk (this is temporary storage) and return the path to the file.
Add a task to Celery with the file path, i.e., the parameter to the Celery task will be the file path, not a serialized 20MB blob of data.
The Celery worker process picks up the task you just enqueued when it is free and executes it.
If you need to access the file later, since the local Heroku disk is only temporary, you'll have to place the file in some permanent storage like AWS S3.
(The reason we go through all these hoops instead of placing the file directly in S3 is that access to the local disk is fast, while S3 might be in some other server farm at some other location, and it takes time to save the file there. Your web process might appear slow/stuck if you try to write the file to S3 in your main process.)
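A rough sketch of that flow with Flask, Celery, and boto3 (the task name, bucket name, and handle_upload helper are hypothetical; it also assumes the worker can read the same filesystem the web process wrote to):

import os
import tempfile

import boto3
from celery import Celery

celery_app = Celery("tasks", broker=os.environ["REDIS_URL"])

@celery_app.task
def process_upload(file_path):
    # Runs in the worker: push the file to permanent storage, then clean up.
    s3 = boto3.client("s3")
    s3.upload_file(file_path, "my-bucket", os.path.basename(file_path))
    os.remove(file_path)

# In the Flask view: save the upload to local disk and enqueue only the path,
# so the message that goes through Redis is a short string, not 20MB of data.
def handle_upload(flask_file):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    flask_file.save(path)
    process_upload.delay(path)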
Can anyone give me an idea of how to implement unzipping of a .gz file in a Worker Role? If I write the unzipping code, where should I store the unzipped file (i.e., one text file)? Will it be loaded to some location in Azure? How can I specify the path in a Windows Azure Worker process, like the current executing directory? If this approach does not work, do I need to create one more blob to store the unzipped content, i.e., the txt file?
-mahens
In your Worker Role, it is up to you how the .gz file arrives (e.g., downloaded from Azure Blob storage); once the file is available, you can use GZipStream to compress/uncompress it. You can also find a code sample with Compress and Decompress functions in the discussion linked below.
This SO discussion shares a few tools and code samples explaining how you can unzip a .gz file using C#:
Unzipping a .gz file using C#
When you use the Decompress/Compress code in a Worker Role, you can store the result directly in local storage (as suggested by JcFx) or use a MemoryStream to write it directly to Azure Blob storage.
The following SO article shows how you can use GZipStream to store unzipped content in a MemoryStream and then use the UploadFromStream() API to store it directly in Azure Blob storage:
How do I use GZipStream with System.IO.MemoryStream?
If you don't need to do anything with the unzipped file, then storing it directly in Azure Blob storage is best; however, if you have to do something with the unzipped content, you can save it locally as well and then store it back to Azure Blob storage for further use.
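The samples above are C#, but the pattern itself (decompress the .gz in memory, then upload the result straight to a blob) is the same in any language. As an illustration, here is roughly what it looks like in Python with the azure-storage-blob package; the container and blob names are made up:

import gzip
import io
from azure.storage.blob import BlobServiceClient

def unzip_gz_to_blob(gz_bytes, connection_string):
    # Decompress the .gz payload entirely in memory...
    text_bytes = gzip.decompress(gz_bytes)

    # ...then upload the decompressed content directly to blob storage,
    # mirroring the GZipStream + MemoryStream + UploadFromStream approach.
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container="unzipped", blob="output.txt")
    blob.upload_blob(io.BytesIO(text_bytes), overwrite=True)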
This example, using SharpZipLib, extracts a .gz file to a stream. From there, you could write it to Azure local storage or to blob storage:
http://wiki.sharpdevelop.net/GZip-and-Tar-Samples.ashx
In my WP7 app, I have to store images and an XML file of two types:
1: The first type of files are not updated frequently on the server, so I want to store them permanently in local storage so that whenever the app starts it can access these files from local storage, and whenever these files are updated on the server, the local storage files are updated too. I want these files not to be deleted on application termination.
2: The second type of files are those I want to save in isolated storage temporarily, e.g., the app requests an XML file from the server, I store it locally, and the next time the app requests the same file, it gets it from local storage instead of from the server. These files should be deleted when the application terminates.
How can I do this?
Thanks
1) Isolated Storage is designed to store data that should remain permanent (until the user uninstalls the app). There's example code on MSDN showing how to write and save a file. Any file you save (temp or not) will be kept until the user uninstalls the app or your app deletes the file.
2) For temporary data, you can use the PhoneApplicationService.State property. This data is automatically discarded after your app closes. However, there's a size limit (I believe PhoneApplicationService.State has a limit of 4MB).
Alternatively, if the XML file is too big, you can write it to the Isolated Storage. Then, you can handle your page's Closing event and delete the file from Isolated Storage there using the DeleteFile method.