Windows Azure: Creating a file in a cloud blob container

I am writing a program that will execute in the cloud. The program will generate output that should be written to a file, and the file should be saved in a blob container. I don't have any idea how to do that.
Will this code
FileStream fs = new FileStream(file, FileMode.Create);
StreamWriter sw = new StreamWriter(fs);
generate a file named "file" in the cloud? And if so, how do I then store the content in the blob?

Are you attempting to upload a page blob or a block blob? Usually block blobs are what you want; a page blob is only needed if you are going to create a VM from the blob image.
Something like this works, however. This snippet is taken from the most excellent Blob Transfer Utility; check it out for all your blob upload and download needs. (Just change the type from block to page if you need a VHD.)
public void UploadBlobAsync(ICloudBlob blob, string LocalFile)
{
    // The class currently stores state in class-level variables, so calling UploadBlobAsync
    // or DownloadBlobAsync a second time will cause problems. A better long-term solution
    // would be to better encapsulate the state, but the current solution works for the
    // needs of my primary client.
    // Throw an exception if UploadBlobAsync or DownloadBlobAsync has already been called.
    lock (WorkingLock)
    {
        if (!Working)
            Working = true;
        else
            throw new Exception("BlobTransfer already initiated. Create new BlobTransfer object to initiate a new file transfer.");
    }

    // Attempt to open the file first so that we throw an exception before getting into the async work.
    using (FileStream fstemp = new FileStream(LocalFile, FileMode.Open, FileAccess.Read)) { }

    // Create an async op in order to raise the events back to the client on the correct thread.
    asyncOp = AsyncOperationManager.CreateOperation(blob);

    TransferType = TransferTypeEnum.Upload;
    m_Blob = blob;
    m_FileName = LocalFile;

    var file = new FileInfo(m_FileName);
    long fileSize = file.Length;

    FileStream fs = new FileStream(m_FileName, FileMode.Open, FileAccess.Read, FileShare.Read);
    ProgressStream pstream = new ProgressStream(fs);
    pstream.ProgressChanged += pstream_ProgressChanged;
    pstream.SetLength(fileSize);

    m_Blob.ServiceClient.ParallelOperationThreadCount = 10;
    asyncresult = m_Blob.BeginUploadFromStream(pstream, BlobTransferCompletedCallback, new BlobTransferAsyncState(m_Blob, pstream));
}
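If all you need is to write generated text straight to a new block blob, without the async transfer machinery above, a minimal sketch looks like this (assuming the classic WindowsAzure.Storage client library; connectionString, the container name, the blob name, and programOutput are placeholders):
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Minimal sketch: write an in-memory string as the content of a new block blob.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("output");
container.CreateIfNotExists(); // no-op if the container is already there

CloudBlockBlob blob = container.GetBlockBlobReference("results.txt");
blob.UploadText(programOutput); // the string becomes the blob's content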

Related

Get Azure Batch ResourceFile from existing file in Azure Blob Container

I am following this tutorial to take my first steps with Azure Batch:
https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial/blob/master/BatchDotnetTutorialFfmpeg/Program.cs
There is one method to upload files to a container and to get back a ResourceFile for further processing.
private static ResourceFile UploadFileToContainer(CloudBlobClient blobClient, string containerName, string filePath)
{
    Console.WriteLine("Uploading file {0} to container [{1}]...", filePath, containerName);
    string blobName = Path.GetFileName(filePath);
    filePath = Path.Combine(Environment.CurrentDirectory, filePath);
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blobData = container.GetBlockBlobReference(blobName);
    blobData.UploadFromFileAsync(filePath).Wait();

    // Set the expiry time and permissions for the blob shared access signature.
    // In this case, no start time is specified, so the shared access signature
    // becomes valid immediately.
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2),
        Permissions = SharedAccessBlobPermissions.Read
    };

    // Construct the SAS URL for the blob.
    string sasBlobToken = blobData.GetSharedAccessSignature(sasConstraints);
    string blobSasUri = String.Format("{0}{1}", blobData.Uri, sasBlobToken);
    return ResourceFile.FromUrl(blobSasUri, filePath);
}
This works fine as long as you are willing to upload your files again and again. At some point my upload broke down, and I did not get the ResourceFile object back.
So how can I get a ResourceFile object for a file that has already been uploaded to an existing container?
Regards
Michael
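One way to do this is to reuse the SAS half of the tutorial method above and simply skip the upload. A sketch (the method name is made up, and this is untested against the tutorial):
private static ResourceFile GetResourceFileForExistingBlob(CloudBlobClient blobClient, string containerName, string blobName)
{
    // Reference the blob that is already in the container; nothing is uploaded here.
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blobData = container.GetBlockBlobReference(blobName);

    // Same read-only SAS as in the tutorial method above.
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2),
        Permissions = SharedAccessBlobPermissions.Read
    };
    string sasBlobToken = blobData.GetSharedAccessSignature(sasConstraints);

    return ResourceFile.FromUrl(blobData.Uri + sasBlobToken, blobName);
}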

How to list all children in Google Drive's appfolder and read file contents with Xamarin / c#?

I'm trying to work with text files in the app folder.
Here's my GoogleApiClient constructor:
googleApiClient = new GoogleApiClient.Builder(this)
    .AddApi(DriveClass.API)
    .AddScope(DriveClass.ScopeFile)
    .AddScope(DriveClass.ScopeAppfolder)
    .UseDefaultAccount()
    .AddConnectionCallbacks(this)
    .EnableAutoManage(this, this)
    .Build();
I'm connecting with:
googleApiClient.Connect()
And after:
OnConnected()
I need to list all files inside the app folder. Here's what I got so far:
IDriveFolder appFolder = DriveClass.DriveApi.GetAppFolder(googleApiClient);
IDriveApiMetadataBufferResult result = await appFolder.ListChildrenAsync(googleApiClient);
This gives me the files' metadata. But after that, I don't know how to read them, edit them, or save new files. They are text files created with my app's previous (native) version.
I'm following the Google docs for Drive, but the Xamarin API is a lot different and has no docs or examples. Here's the API I'm using: https://components.xamarin.com/view/googleplayservices-drive
Edit:
Here is an example to read file contents from the guide:
DriveFile file = ...
file.open(mGoogleApiClient, DriveFile.MODE_READ_ONLY, null)
    .setResultCallback(contentsOpenedCallback);
First, I can't find anywhere in the guide what "DriveFile file = ..." means. How do I get this instance? DriveFile seems to be a static class in this API.
I tried:
IDriveFile file = DriveClass.DriveApi.GetFile(googleApiClient, metadata.DriveId);
This has two problems: first, it complains that GetFile is deprecated but doesn't say what to use instead; second, the returned file doesn't have an "open" method.
Any help is appreciated.
The Xamarin binding library wraps the Java Drive library (https://developers.google.com/drive/), so all the guides/examples for the Android-based Drive API apply if you keep in mind the binding's Java-to-C# transformations:
get/set methods -> properties
fields -> properties
listeners -> events
static nested class -> nested class
inner class -> nested class with an instance constructor
So you can list the AppFolder's directories and files by recursing on the Metadata whenever a drive item is a folder; for the guide's open-file snippet quoted in the question, see the mapping sketch just below.
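A sketch of that mapping (metadata here is assumed to be one of the Metadata items returned by ListChildrenAsync):
// Sketch: the Java file.open(..., callback) pattern becomes an awaitable OpenAsync
// call, and the IDriveFile comes from the item's DriveId rather than the
// deprecated DriveClass.DriveApi.GetFile.
IDriveFile file = metadata.DriveId.AsDriveFile();
var result = await file.OpenAsync(_googleApiClient, DriveFile.ModeReadOnly, null);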
Get Directory/File Tree Example:
await Task.Run(async () =>
{
    // Local function is async Task (not async void) so the recursion can be awaited.
    async Task GetFolderMetaData(IDriveFolder folder, int depth)
    {
        var folderMetaData = await folder.ListChildrenAsync(_googleApiClient);
        foreach (var driveItem in folderMetaData.MetadataBuffer)
        {
            Log.Debug(TAG, $"{(driveItem.IsFolder ? "(D)" : "(F)")}:{"".PadLeft(depth, '.')}{driveItem.Title}");
            if (driveItem.IsFolder)
                await GetFolderMetaData(driveItem.DriveId.AsDriveFolder(), depth + 1);
        }
    }
    await GetFolderMetaData(DriveClass.DriveApi.GetAppFolder(_googleApiClient), 0);
});
Output:
[SushiHangover.FlightAvionics] (D):AppDataFolder
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData1.json
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData2.json
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData3.json
[SushiHangover.FlightAvionics] (F):AppConfiguration.json
Write a (Text) File Example:
using (var contentResults = await DriveClass.DriveApi.NewDriveContentsAsync(_googleApiClient))
using (var writer = new OutputStreamWriter(contentResults.DriveContents.OutputStream))
using (var changeSet = new MetadataChangeSet.Builder()
    .SetTitle("AppConfiguration.txt")
    .SetMimeType("text/plain")
    .Build())
{
    writer.Write("StackOverflow Rocks\n");
    writer.Write("StackOverflow Rocks\n");
    writer.Close();
    await DriveClass.DriveApi.GetAppFolder(_googleApiClient).CreateFileAsync(_googleApiClient, changeSet, contentResults.DriveContents);
}
Note: Substitute an IDriveFolder for DriveClass.DriveApi.GetAppFolder to save a file in a subfolder of the AppFolder.
Read a (Text) File Example:
Note: driveItem in the following example is an existing text/plain-based MetaData object that is found by recursing through the Drive contents (see Get Directory/File list above) or via creating a query (Query.Builder) and executing it via DriveClass.DriveApi.QueryAsync.
var fileContexts = new StringBuilder();
using (var results = await driveItem.DriveId.AsDriveFile().OpenAsync(_googleApiClient, DriveFile.ModeReadOnly, null))
using (var inputStream = results.DriveContents.InputStream)
using (var streamReader = new StreamReader(inputStream))
{
    while (streamReader.Peek() >= 0)
        fileContexts.Append(await streamReader.ReadLineAsync());
}
Log.Debug(TAG, fileContexts.ToString());

Dynamics CRM Attachment Data Import Using SDK

I am trying to import attachments (Annotations) into Dynamics CRM using the SDK.
I am not using the Data Import Wizard, and I am not creating Annotation entities individually; instead I am using the Data Import feature programmatically.
I mostly leveraged the DataImport sample from the SDK sample code (SDK\SampleCode\CS\DataManagement\DataImport).
Import import = new Import()
{
    ModeCode = new OptionSetValue((int)ImportModeCode.Create),
    Name = "Data Import"
};
Guid importId = _serviceProxy.Create(import);

_serviceProxy.Create(
    new ColumnMapping()
    {
        ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
        ProcessCode = new OptionSetValue((int)ColumnMappingProcessCode.Process),
        SourceEntityName = sourceEntityName,
        SourceAttributeName = sourceAttributeName,
        TargetEntityName = targetEntityName,
        TargetAttributeName = targetAttributeName
    });
I am getting the error "The reference to the attachment could not be found."
The documentation says the CRM async service will find the physical file on disk and upload it; my question is, where does the async service look for the attachment files?
I tried mapping the documentbody field to the full path of the attachment on the disk, but that still didn't work.
The answer below was provided before the question was edited to clarify that the Data Import feature is being used rather than individual record creation; it is specific to creating records directly via the SDK.
When you are attaching files to an Annotation (Note) record in CRM via the SDK, you do use the documentbody attribute (along with mimetype), but you first have to convert the file contents to Base64.
Something like this:
var myFile = @"C:\Path\To\My\File.pdf";
// Do checks to make sure the file exists...

// Convert the file contents to Base64.
var base64Data = Convert.ToBase64String(System.IO.File.ReadAllBytes(myFile));

var newNote = new Entity("annotation");
// Set subject, regarding object, etc.

// Add the data required for a file attachment.
newNote.Attributes.Add("documentbody", base64Data);
newNote.Attributes.Add("mimetype", "text/plain"); // This mime type seems to work for all file types.

orgService.Create(newNote);
I found the solution in an obscure blog post. I think the documentation is misleading or unclear: the way this actually works makes the notion of the async service picking up physical files from the server disk odd.
Following the same principle as the record data, the attachment contents are sent like the CSV file itself and linked to the same import. The solution is to create an individual, special "internal" ImportFile for each physical attachment and link it to the import that holds the attachment record details.
As you can see below, linking the attachment ImportFile via ImportId and then setting the two properties ProcessCode and FileTypeCode made it all work. Suffice it to say, this method is much more efficient and quicker than creating Annotation records individually.
int idx = 0;
foreach (var line in File.ReadLines(csvFilesPath + "Attachment.csv").Skip(1))
{
    var fileName = line.Split(',')[0].Replace("\"", null);

    // Read the physical attachment and Base64-encode it, just like the Annotation documentbody.
    // (File.ReadAllBytes avoids the partial-read pitfall of a single Stream.Read call.)
    byte[] byteData = File.ReadAllBytes(attachmentsPath + fileName);
    string encodedAttachmentData = System.Convert.ToBase64String(byteData);

    ImportFile importFileAttachment = new ImportFile()
    {
        Content = encodedAttachmentData,
        Name = fileName,
        ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
        UseSystemMap = true,
        ImportId = new EntityReference(Import.EntityLogicalName, importId),
        ProcessCode = new OptionSetValue((int)ImportFileProcessCode.Internal),
        FileTypeCode = new OptionSetValue((int)ImportFileFileTypeCode.Attachment),
        RecordsOwnerId = currentUserRef
    };
    _serviceProxy.Create(importFileAttachment);
    idx++;
}
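With the data ImportFile and the attachment ImportFiles all linked to the same import, the import is then kicked off the same way as in the SDK DataImport sample; a sketch:
// Sketch, following the SDK DataImport sample: parse, transform, then import,
// all against the importId created above.
_serviceProxy.Execute(new ParseImportRequest { ImportId = importId });
_serviceProxy.Execute(new TransformImportRequest { ImportId = importId });
_serviceProxy.Execute(new ImportRecordsImportRequest { ImportId = importId });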

Web API - Setting Response.Content with byte[] / MemoryStream Contents not working properly

My requirement is to use Web API to send a zip file (containing a bunch of files) across the network, without writing it anywhere locally (neither on the server's disk nor the client's). For zipping, I am using DotNetZip (Ionic.Zip.dll).
Code at Server:
public async Task<IHttpActionResult> GenerateZip(Dictionary<string, StringBuilder> fileList)
{
    // fileList is actually a dictionary of "FileName", "FileContent"
    byte[] data;
    using (ZipFile zip = new ZipFile())
    {
        foreach (var item in fileList.ToArray())
        {
            zip.AddEntry(item.Key, item.Value.ToString());
        }
        using (MemoryStream ms = new MemoryStream())
        {
            zip.Save(ms);
            data = ms.ToArray();
        }
    }

    var result = new HttpResponseMessage(HttpStatusCode.OK);
    MemoryStream streams = new MemoryStream(data);
    //, 0, data.Length-1, true, false);
    streams.Position = 0;
    //Encoding UTFEncode = new UTF8Encoding();
    //string res = UTFEncode.GetString(data);
    //result.Content = new StringContent(res, Encoding.UTF8, "application/zip");
    result.Content = new StreamContent(streams);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");
    //result.Content.Headers.ContentLength = data.Length;
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    result.Content.Headers.ContentDisposition.FileName = "test.zip";
    return this.Ok(result);
}
The issue I am facing: after the zip file is downloaded at the client end, its stream contents (the byte[] data in this example) are missing. I get back a test.zip file, but when I locally rename test.zip to test.bin, I see the file contents shown below; they do not contain the Response.Content values. (P.S. I have also tried the MIME type "application/octet-stream". No luck!)
Test.zip aka test.bin’s contents:
{"version":{"major":1,"minor":1,"build":-1,"revision":-1,"majorRevision":-1,"minorRevision":-1},
"content":{"headers":[{"key":"Content-Type","value":["application/zip"]},
{"key":"Content-Disposition","value":["attachment; filename=test.zip"]}]},
"statusCode":200,"reasonPhrase":"OK","headers":[],"isSuccessStatusCode":true}
Can someone please help me with how to set result.Content from a MemoryStream object? (I have seen examples elsewhere that use a FileStream to set result.Content, but I want to use a MemoryStream only!) I am highlighting this because I think the problem lies in how the MemoryStream is attached to result.Content (i.e. in properly saving the stream's contents into the result.Content object).
P.S. I have also gone through Uploading/Downloading Byte Arrays with AngularJS and ASP.NET Web API (and a bunch of other links), but it did not help me much. :(
Any help is greatly appreciated. Thanks a lot in advance. :)
I got my issue solved! All I did was change the return type to HttpResponseMessage and use "return result;" on the last line rather than Ok(result) (i.e. an HttpResponseMessage rather than an OkNegotiatedContentResult). Wrapping the HttpResponseMessage in Ok(...) makes Web API serialize the message object itself as JSON, which is exactly the content that showed up in the downloaded file above.
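Putting that fix together with the zip-building code above, a sketch of the corrected action (same usings and setup as the original; async dropped since nothing is awaited):
public HttpResponseMessage GenerateZip(Dictionary<string, StringBuilder> fileList)
{
    byte[] data;
    using (ZipFile zip = new ZipFile())
    {
        foreach (var item in fileList)
            zip.AddEntry(item.Key, item.Value.ToString());
        using (MemoryStream ms = new MemoryStream())
        {
            zip.Save(ms);
            data = ms.ToArray();
        }
    }

    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(new MemoryStream(data))
    };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = "test.zip"
    };
    return result; // return the message directly, not this.Ok(result)
}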

How can I insert an image with iTextSharp in an existing PDF?

I have an existing PDF, and I can use FdfWriter to fill in text boxes; that works well. Now I have an image. I have read the documentation and looked at many examples, but they all create a new document and insert an image into it. I want to take an existing PDF and insert an image, either into an image field or as the icon image of a button. I have tried, but it corrupts the document.
I need to be able to take an existing document and put an image on it. I do not want to open, read, replace, and delete the original. The original changes over time, and "original" here only means the source file. There are many PDF files like this that need an image.
Thank you for any help.
Edit - I am very thankful for the code below. It works great, but the problem for me is that the existing PDF has digital signatures on it. When the document is copied like this (into result.pdf), those signatures, while still present, end up with a different byte count or some other corruption, so although they show up in result.pdf they carry an icon stating "invalid signature."
In case it matters, I am using a Topaz signature pad to create my signatures, which has its own security. Merely copying the PDF will not corrupt it, but the process below will.
I am trying to put the image on the existing document, not a copy of it, which in this case matters.
Also, by signature I mean handwritten, not a PIN.
Thank you again.
EDIT - Can PdfSignatureAppearance be used for this?
EDIT - I seem to be able to do it with:
var stamper = new PdfStamper(reader, outputPdfStream, '1', true);
(That is the append-mode overload of PdfStamper: the final true makes the stamper write an incremental update instead of rewriting the document, which is what keeps existing signatures valid.)
If you want to change the contents of an existing PDF file and add extra content such as watermarks, page numbers, or extra headers, PdfStamper is the object you need. I have successfully used the following code to insert an image into an existing PDF file at a given absolute position:
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class Program
{
    static void Main(string[] args)
    {
        using (Stream inputPdfStream = new FileStream("input.pdf", FileMode.Open, FileAccess.Read, FileShare.Read))
        using (Stream inputImageStream = new FileStream("some_image.jpg", FileMode.Open, FileAccess.Read, FileShare.Read))
        using (Stream outputPdfStream = new FileStream("result.pdf", FileMode.Create, FileAccess.Write, FileShare.None))
        {
            var reader = new PdfReader(inputPdfStream);
            var stamper = new PdfStamper(reader, outputPdfStream);
            var pdfContentByte = stamper.GetOverContent(1);

            Image image = Image.GetInstance(inputImageStream);
            image.SetAbsolutePosition(100, 100);
            pdfContentByte.AddImage(image);
            stamper.Close();
        }
    }
}
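If the source document carries digital signatures, as in the question's edit, open the stamper in append mode instead; a sketch of the one changed line, under the same setup:
// Sketch: append mode ('\0' keeps the original PDF version; true enables append)
// writes an incremental update, so existing digital signatures stay valid.
var stamper = new PdfStamper(reader, outputPdfStream, '\0', true);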
When you insert the image you have the possibility to resize it; take a look at the transformation matrix in the iTextSharp documentation.
Here is a similar example which inserts an image on the page using the stamper:
Gmane iText Mailing List Post
I could solve my problem by simply adding the following lines to my signing code to add the image:
var image = iTextSharp.text.Image.GetInstance(@"C:\Users\sushil\Documents\sansign.jpg");
appearance.Acro6Layers = true;
appearance.SignatureGraphic = image;
appearance.SignatureRenderingMode = PdfSignatureAppearance.RenderingMode.GRAPHIC_AND_DESCRIPTION;
As I was signing the document with a visible digital signature, I can now have both the image and the digital signature properties side by side.
In .NET 6 (here in a DDD-style solution; declare this in the infrastructure layer), try this:
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

public async Task<string> SignatureToPdf(string pathPdfFile, string pathSignatureImage, string pathOutputName)
{
    // hostingEnvironment is an injected host-environment service exposing ContentRootPath.
    var webRootPath = hostingEnvironment.ContentRootPath;
    if (!File.Exists(Path.Combine(webRootPath, pathPdfFile)))
        return null;

    await using Stream inputPdfStream = new FileStream(Path.Combine(webRootPath, pathPdfFile), FileMode.Open, FileAccess.Read, FileShare.Read);
    await using Stream inputImageStream = new FileStream(Path.Combine(webRootPath, pathSignatureImage), FileMode.Open, FileAccess.Read, FileShare.Read);
    await using Stream outputPdfStream = new FileStream(Path.Combine(webRootPath, pathOutputName), FileMode.Create, FileAccess.Write, FileShare.None);

    var reader = new PdfReader(inputPdfStream);
    var stamper = new PdfStamper(reader, outputPdfStream);
    var pdfContentByte = stamper.GetOverContent(1);

    var image = Image.GetInstance(inputImageStream);
    image.SetAbsolutePosition(100, 100);
    pdfContentByte.AddImage(image);
    stamper.Close();

    return "ok";
}
pdftk can do this. It's not a library but you can easily call it from your code as a .exe.
See stamp and background commands:
http://www.pdflabs.com/docs/pdftk-man-page/
ref: How to do mail merge on top of a PDF?
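For example, a sketch of stamping every page of a document (pdftk's stamp operation takes a PDF as the stamp file, so an image must first be wrapped in a PDF):
pdftk source.pdf stamp stamp.pdf output result.pdf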
