Windows Azure REST API MediaLink

I'm trying to use the Windows Azure REST API to create a virtual machine deployment. However, I've run into a problem when trying to specify an OS image in the following XML file:
<Deployment xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Name>SomeName</Name>
  <DeploymentSlot>Production</DeploymentSlot>
  <Label></Label>
  <RoleList>
    <Role i:type="PersistentVMRole">
      <RoleName>SomeName</RoleName>
      <OsVersion i:nil="true"/>
      <RoleType>PersistentVMRole</RoleType>
      <ConfigurationSets>
        <ConfigurationSet i:type="WindowsProvisioningConfigurationSet">
          <ConfigurationSetType>WindowsProvisioningConfiguration</ConfigurationSetType>
          <ComputerName>SomeName</ComputerName>
          <AdminPassword>XXXXXXXXXX</AdminPassword>
          <EnableAutomaticUpdates>true</EnableAutomaticUpdates>
          <ResetPasswordOnFirstLogon>false</ResetPasswordOnFirstLogon>
        </ConfigurationSet>
        <ConfigurationSet i:type="NetworkConfigurationSet">
          <ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
          <InputEndpoints>
            <InputEndpoint>
              <LocalPort>3389</LocalPort>
              <Name>RemoteDesktop</Name>
              <Protocol>tcp</Protocol>
            </InputEndpoint>
          </InputEndpoints>
        </ConfigurationSet>
      </ConfigurationSets>
      <DataVirtualHardDisks/>
      <Label></Label>
      <OSVirtualHardDisk>
        <MediaLink>???</MediaLink>
        <SourceImageName>???</SourceImageName>
      </OSVirtualHardDisk>
    </Role>
  </RoleList>
</Deployment>
I need the MediaLink (URI of the OS image) and the SourceImageName (canonical name of the OS image). My question is: the web portal provides several predefined images, but I cannot determine their URIs and canonical names. Will I be forced to create my own OS image and upload it to one of the storage services under my Windows Azure account?

To get these parameters, you can perform the List OS Images Service Management API operation on your subscription.
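For illustration, here is a minimal sketch of that call (the endpoint and version header follow the 2013-03-01 Service Management API; the certificate handling mirrors the code further below, and the sketch simply dumps the raw XML so you can pick out the values you need):
private static void ListOsImages(string subscriptionId, X509Certificate2 cert)
{
    // List OS Images operation: GET .../services/images, authenticated with a management certificate.
    string uri = string.Format("https://management.core.windows.net/{0}/services/images", subscriptionId);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "GET";
    request.Headers.Add("x-ms-version", "2013-03-01");
    request.ClientCertificates.Add(cert);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        // The response is an <Images> document; each <OSImage> element carries the
        // canonical <Name> (use it as SourceImageName) and, for user images, a <MediaLink>.
        Console.WriteLine(reader.ReadToEnd());
    }
}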
UPDATE
Please disregard some of my comments below (sorry about those). I was finally able to create a VM using the REST API :). Here are some of the things I learned:
The <MediaLink> element should specify the URL of the VHD off of which your VM will be created. It has to be a URL in one of your storage accounts in the same subscription as your virtual machine's cloud service. So you could specify a URL like https://[yourstorageaccount].blob.core.windows.net/[blobcontainer]/[filename].vhd, where [blobcontainer] is the name of the blob container where you want the API to store the VHD and [filename] is any name you want to give your VHD. What the REST API does is copy the source image specified in <SourceImageName> and save it at the URI specified in the <MediaLink> element.
Make sure that your cloud service and the storage account where your VHD will be stored are in the same data center/affinity group. Furthermore, that data center must support Virtual Machines; it turns out that not all data centers do (a sketch for checking this follows this list).
The order of the XML elements is of utmost importance; moving one element up or down will result in a 400 error.
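As mentioned above, here is a hedged sketch of how you could check which data centers support Virtual Machines using the List Locations operation (the AvailableServices element and the version header are my assumptions from that era of the Service Management API):
private static void ListLocations(string subscriptionId, X509Certificate2 cert)
{
    // List Locations operation: each <Location> reports its <AvailableServices>;
    // look for the value "PersistentVMRole" to find data centers that support Virtual Machines.
    string uri = string.Format("https://management.core.windows.net/{0}/locations", subscriptionId);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "GET";
    request.Headers.Add("x-ms-version", "2013-03-01");
    request.ClientCertificates.Add(cert);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
}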
Based on my experimentation, here's the code:
private static void CreateVirtualMachineDeployment(string subscriptionId, X509Certificate2 cert, string cloudServiceName)
{
    try
    {
        string uri = string.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", subscriptionId, cloudServiceName);
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2013-03-01");
        request.ClientCertificates.Add(cert);
        string requestPayload = @"<Deployment xmlns=""http://schemas.microsoft.com/windowsazure"" xmlns:i=""http://www.w3.org/2001/XMLSchema-instance"">
  <Name>[SomeName]</Name>
  <DeploymentSlot>Production</DeploymentSlot>
  <Label>[SomeLabel]</Label>
  <RoleList>
    <Role i:type=""PersistentVMRole"">
      <RoleName>MyTestRole</RoleName>
      <OsVersion i:nil=""true""/>
      <RoleType>PersistentVMRole</RoleType>
      <ConfigurationSets>
        <ConfigurationSet i:type=""WindowsProvisioningConfigurationSet"">
          <ConfigurationSetType>WindowsProvisioningConfiguration</ConfigurationSetType>
          <ComputerName>[ComputerName]</ComputerName>
          <AdminPassword>[AdminPassword - ensure it's a strong password]</AdminPassword>
          <AdminUsername>[Admin Username]</AdminUsername>
          <EnableAutomaticUpdates>true</EnableAutomaticUpdates>
          <ResetPasswordOnFirstLogon>false</ResetPasswordOnFirstLogon>
        </ConfigurationSet>
        <ConfigurationSet i:type=""NetworkConfigurationSet"">
          <ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
          <InputEndpoints>
            <InputEndpoint>
              <LocalPort>3389</LocalPort>
              <Name>RemoteDesktop</Name>
              <Protocol>tcp</Protocol>
            </InputEndpoint>
          </InputEndpoints>
        </ConfigurationSet>
      </ConfigurationSets>
      <DataVirtualHardDisks/>
      <Label></Label>
      <OSVirtualHardDisk>
        <MediaLink>https://[storageaccount].blob.core.windows.net/vhds/fb83b3509582419d99629ce476bcb5c8__Microsoft-SQL-Server-2012SP1-Web-CY13SU04-SQL11-SP1-CU3-11.0.3350.0.vhd</MediaLink>
        <SourceImageName>fb83b3509582419d99629ce476bcb5c8__Microsoft-SQL-Server-2012SP1-Web-CY13SU04-SQL11-SP1-CU3-11.0.3350.0</SourceImageName>
      </OSVirtualHardDisk>
    </Role>
  </RoleList>
</Deployment>";
        byte[] content = Encoding.UTF8.GetBytes(requestPayload);
        request.ContentLength = content.Length;
        using (var requestStream = request.GetRequestStream())
        {
            requestStream.Write(content, 0, content.Length);
        }
        using (HttpWebResponse resp = (HttpWebResponse)request.GetResponse())
        {
            // A successful call returns an empty body; the deployment itself is created asynchronously.
        }
    }
    catch (WebException webEx)
    {
        using (var streamReader = new StreamReader(webEx.Response.GetResponseStream()))
        {
            string result = streamReader.ReadToEnd();
            Console.WriteLine(result);
        }
    }
}
Hope this helps!

Related

Accessing ADLS Gen2 public container from the C# SDK

I am unable to access a public container using the C# SDK, even though I have enabled "Allow Blob public access" in the storage account configuration.
var fileSystemClient = new DataLakeFileSystemClient(new Uri("https://somestorageaccount.dfs.core.windows.net/public"), new DataLakeClientOptions());
var paths = fileSystemClient.GetPaths();
foreach (var path in paths)
{
    Console.WriteLine(path);
}
This code throws the following exception:
Azure.RequestFailedException: 'Server failed to authenticate the
request. Make sure the value of Authorization header is formed
correctly including the signature.'
Is there anything I can configure to make this work?
I tried this in my environment and got the results below:
Initially, I created an ADLS Gen2 container with the public access level set to Container.
When I tried to access the file through the browser, I got the same error.
When accessing through the file system (DFS) endpoint, files kept in the storage system are not accessible anonymously; it is necessary to authorize access even if the public access level is set. You are getting this error because you are attempting to access the resource without authorization.
If you need to access the files, you need to authorize with a SAS token.
I tried the file URL + SAS token in the browser and was able to access the file.
You can get a SAS token by clicking the file and selecting Generate SAS.
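If you want to do the same from C#, a minimal sketch could look like this (the account name, file system name and SAS string below are placeholders; the Data Lake client accepts a SAS token appended to the file system URI):
// Hypothetical values: account name, file system name and SAS token are placeholders.
string sasToken = "sv=...&sp=rl&sig=..."; // generated from the portal
var fileSystemClient = new DataLakeFileSystemClient(
    new Uri("https://somestorageaccount.dfs.core.windows.net/public?" + sasToken));
foreach (PathItem pathItem in fileSystemClient.GetPaths())
{
    Console.WriteLine(pathItem.Name);
}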
If you need to access the paths of a Data Lake Gen2 file system in C# with the account key, you can use the StorageSharedKeyCredential method described at this link:
string storageAccountName = StorageAccountName;
string storageAccountKey = StorageAccountKey;
Uri serviceUri = StorageAccountUri;
StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(storageAccountName, storageAccountKey);
DataLakeServiceClient serviceClient = new DataLakeServiceClient(serviceUri, sharedKeyCredential);
DataLakeFileSystemClient filesystem = serviceClient.GetFileSystemClient(Randomize("sample-filesystem-list"));
List<string> names = new List<string>();
foreach (PathItem pathItem in filesystem.GetPaths())
{
    names.Add(pathItem.Name);
}
Reference:
java - How to get list of child files/directories having parent DataLakeDirectoryClient class instance - Stack Overflow (answer by Jim Xu).

Tomcat Performance with Spring Boot API for File Upload

I have a Spring Boot API, and one of the endpoints allows users to upload videos. My controller takes the file as a MultipartFile, and I store it in a temp folder accessible to Tomcat. Once it is stored on disk, I push the video to an S3 bucket.
To me this seems less than optimal: if I wanted to have 100 or 1000 users upload at once, it seems really non-performant to write the files to disk first.
As a little background, I'm storing it on disk with the intention that if there is an issue pushing to S3 I can retry.
The below code might show what I'm doing better than the above:
public Video addVideo(@RequestParam("title") String title,
                      @RequestParam("Description") String description,
                      @RequestParam(value = "file", required = true) MultipartFile file) {
    this.amazonS3ClientService.uploadFileToS3Bucket(file, title, description);
}
Method for storing Video file:
String fileNameWithExtenstion = awsS3FileName + "." + FilenameUtils.getExtension(multipartFile.getOriginalFilename());
// creating the file in the server (temporarily)
File file = new File(tomcatTempDir + fileNameWithExtenstion);
FileOutputStream fos = new FileOutputStream(file);
fos.write(multipartFile.getBytes());
fos.close();
PutObjectRequest putObjectRequest = new PutObjectRequest(this.awsS3Bucket, awsS3BucketFolder + UnigueId + "/" + fileNameWithExtenstion, file);
if (enablePublicReadAccess) {
    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
}
// Upload a file as a new object with ContentType and title specified.
amazonS3.putObject(putObjectRequest);
// removing the file created in the server
file.delete();
So my question is: is there a better way in Tomcat to:
A) Take in a file via a controller
B) Push it to S3
There is no other way to do it with multipart. The problem with multipart is that, to properly segment the parts from the request, they sometimes need to be skipped or be repeatable. That is impossible in memory without making memory explode. Therefore, Commons FileUpload caches them on disk after a certain threshold is reached.
Multipart requests are the worst way to do this. I highly recommend using either PUT or POST with content type application/octet-stream. You can take the bare request input stream and pass it to HttpClient to stream to your backend server. I did this 5 years ago and it works for gigabytes. I posted the solution on the Apache HttpClient mailing list.
There is one possibility for how this could work, under specific conditions:
All parts are in the correct physical order you want to read them in.
Your write to the backend is fast enough to sustain the read from the front.
Consume the root part, then go over to the next physical one and process the request body lazily. JAX-WS RI (Metro) has very nice handling of multipart requests for XOP/MTOM; learn from that, because you won't be able to make it any better.
Perhaps you can try to stream the input stream from your MultipartFile directly to S3.
Consider the following uploadFileToS3Bucket method:
public PutObjectResult uploadFileToS3Bucket(InputStream input, long size, String title, String description) {
    // Indicate the length of the information to avoid the need to compute it by the AWS SDK
    // See: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html#PutObjectRequest-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata-
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(size); // rely on the Spring implementation; maybe you can also use input.available()
    // compute the object name as appropriate
    String key = "...";
    PutObjectRequest putObjectRequest = new PutObjectRequest(
        this.awsS3Bucket, key, input, objectMetadata
    );
    // The rest of your code
    if (enablePublicReadAccess) {
        putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
    }
    // Upload a file as a new object with ContentType and title specified.
    return amazonS3.putObject(putObjectRequest);
}
Of course, you need to provide the service with the input stream obtained from the client request associated with the MultipartFile object:
public Video addVideo(
        @RequestParam("title") String title,
        @RequestParam("Description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) {
    try (InputStream input = file.getInputStream()) {
        this.amazonS3ClientService.uploadFileToS3Bucket(input, file.getSize(), title, description);
    }
}
Probably you can also play with the getBytes method of MultipartFile and create a ByteArrayInputStream to perform the operation.
In addVideo:
byte[] bytes = file.getBytes();
In uploadFileToS3Bucket:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(bytes.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(
    this.awsS3Bucket, key, new ByteArrayInputStream(bytes), objectMetadata
);
I would prefer the first solution, but try to determine which option offers you the best performance.

POST a single large file in .net core

I have a .NET Core 2.1 API application that will download a file from a remote location based on the file name. Here is the code:
static public class FileDownloadAsync
{
    static public async Task DownloadFile(string filename)
    {
        // File name is 1GB.zip for testing
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        using (HttpClient client = new HttpClient())
        {
            string url = @"http://speedtest.tele2.net/" + filename;
            using (HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
            using (Stream readFrom = await response.Content.ReadAsStreamAsync())
            {
                string tempFile = $"D:\\Test\\{filename}";
                using (Stream writeTo = File.Open(tempFile, FileMode.Create))
                {
                    await readFrom.CopyToAsync(writeTo);
                }
            }
            stopwatch.Stop();
            Debug.Print(stopwatch.Elapsed.ToString());
        }
    }
}
This is working great; it will pull a 1 GB file down in about 50 seconds, well within the required download time. I have hard-coded a test file name and storage location in this code for testing; these values will ultimately come from a config file when moved into production. Here is the API endpoint that calls this function:
[HttpGet("{fileName}")]
public async Task<string> GetFile(string fileName)
{
    await FileDownloadAsync.DownloadFile(fileName);
    return "Done";
}
So getting the file from a remote location down to the local server is not a problem. I need some help/guidance on re-posting this file to another API. Once the file is downloaded, there is some work done on the file to prepare it for upload (the files are all MP4 files), and once that work is done, I need to post it to another API for more proprietary processing. Here is the API end point data I have:
POST: /batch/requests
Allocates resources to start a new batch transcription. Use this method to request [work] on the input audio data. Upon an accepted request, the response provides information about the associated request ID and processing status.
Headers:
Authorization: Authorization token
Accept: application/json
Content-Type: Indicates the audio format. The value must be one of:
audio/x-wav;codec=pcm;bit=16;rate=8000;channels=1
audio/x-wav;codec=pcm;bit=16;rate=16000;channels=1
audio/x-raw;codec=pcm;bit=16;rate=8000;channels=1
audio/x-raw;codec=pcm;bit=16;rate=16000;channels=1
video/mp4
Content-Length (optional): The size of the input voice file. Not required if a chunked transfer is used.
Query string parameters (required):
profileId: one of the supported profiles (see GET profiles)
customerId: the id of the customer. A string of minimum 1 and up to 250 alphanumeric, dot (.) and dash (-) characters.
So I will set the Content-Type to video/mp4 for processing. Note that the input size is not required if a chunked transfer is used.
Right now, I am more concerned with just posting (streaming) the file in a non-chunked format while we await more information on what they consider "chunking" a file.
So I am looking for help on streaming the file from disk to the endpoint. Everything I am running across for a .NET Core API is about creating the API that receives the file from a POST, like a Razor page or an Angular page; I already have that. I just need some help on "re-posting" to another API.
Thanks
Using HttpClient, you open a stream to the file, create a stream content, set the necessary headers, and post to the endpoint:
using (Stream file = File.Open(filepath, FileMode.Open))
{
    var content = new StreamContent(file);
    content.Headers.ContentType = new MediaTypeHeaderValue("video/mp4");
    client.DefaultRequestHeaders.Add("Authorization", "token here");
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    using (HttpResponseMessage response = await client.PostAsync(url, content))
    {
        //...
    }
}
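The url in the snippet above still has to carry the required query string parameters described earlier. A small hedged sketch of building it (the base address, profileId and customerId values are placeholders, not the real endpoint):
// Hypothetical values; the real base address and IDs come from the target API's documentation.
string profileId = "...";
string customerId = "...";
string url = "https://api.example.com/batch/requests"
    + "?profileId=" + Uri.EscapeDataString(profileId)
    + "&customerId=" + Uri.EscapeDataString(customerId);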

Can anyone tell me how to write a Java utility to download documents to your local PC from the Content Engine in FileNet?

Hello guys, I am trying to write a Java utility to download documents from the Content Engine in FileNet to a local PC. Can anyone help me out?
You should read about the FileNet P8 CE API; you can start here:
You have to know that the FileNet Content Engine has two types of interfaces that can be used to connect to it: RMI (EJB) and SOAP. A command-line app like the one you are planning to write can connect only by SOAP (I am not sure this is still true for the newest versions, but it is definitely much easier to set up a SOAP connection than an EJB one), so you have to read the part of the documentation that explains how to establish a connection to your Content Engine this way.
On the page linked above, you can see that first of all you have to collect the required JARs for the SOAP connection: please check the "Required for a Content Engine Java API CEWS transport client" section for the file names.
After you collect them, you will need the SOAP WSDL URL and a proper user and password; the user has to have the Read Properties and Read Content rights on the documents you would like to download. You also need to know the object store name and the identifier or the location of your documents.
Now we have to continue using this Setting Up a Thick Client Development Environment link (I opened it from the page above.)
Here you have to scroll down to the "CEWS transport protocol (non-application-server dependent)" section.
Here you can see, that you have to create a jaas.conf file with the following content:
FileNetP8WSI {
    com.filenet.api.util.WSILoginModule required;
};
This file must be passed to the JVM via the following argument when you run the class we will create:
java -cp %CREATE_PROPER_CLASSPATH% -Djava.security.auth.login.config=jaas.conf DownloadClient
Now, in the top-right corner of the page, you can see links that describe what to do in order to get a connection, like "Getting Connection", "Retrieving an EntireNetwork Object", etc. I used that snippet to create the class below for you.
public class DownloadClient {
    public static void main(String[] args) throws Exception {
        String uri = "http://filenetcehost:9080/wsi/FNCEWS40MTOM";
        String userId = "ceadmin";
        String password = "password";
        String osName = "Test";
        UserContext uc = UserContext.get();
        try {
            // Get the connection and default domain
            Connection conn = Factory.Connection.getConnection(uri);
            Domain domain = Factory.Domain.getInstance(conn, null);
            ObjectStore os = Factory.ObjectStore.fetchInstance(domain, osName, null);
            // the last value (jaas stanza name) must match the name of the login module in jaas.conf
            Subject subject = UserContext.createSubject(conn, userId, password, "FileNetP8WSI");
            // set the subject to the local thread via threadlocal
            uc.pushSubject(subject);
            // from now on, we are connected to FileNet CE and object store "Test"
            // https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.0/com.ibm.p8.ce.dev.ce.doc/document_procedures.htm
            Document doc = Factory.Document.getInstance(os, ClassNames.DOCUMENT, new Id("{F4DD983C-B845-4255-AC7A-257202B557EC}"));
            // because in FileNet a document can have more than one associated content element
            // (e.g. it stores single-page TIFFs and handles them as a multi-paged document), we have to
            // get the content elements and iterate over the list.
            ContentElementList docContentList = doc.get_ContentElements();
            Iterator iter = docContentList.iterator();
            while (iter.hasNext())
            {
                ContentTransfer ct = (ContentTransfer) iter.next();
                // Print element sequence number and content type of the element.
                // Get and print the content of the element.
                InputStream stream = ct.accessContentStream();
                // now you have an input stream to the document content; you can save it to a local file,
                // or do whatever you want with it, just do not forget to close the stream at the end.
                stream.close();
            }
        } finally {
            uc.popSubject();
        }
    }
}
This code just shows how you can implement such a thick client; I created it now using the documentation, so it is not production code. But after specifying the packages to import and perhaps handling the exceptions, it will probably work.
You have to specify the right URL, user, password and docId of course, and you have to implement the copy from the TransferInputStream to a FileOutputStream, e.g. by using commons.io or java NIO, etc.

Sending Images to the client from tomcat server

I am building a framework for an e-commerce site. I have used Jersey to create REST APIs. I need to send images to the clients as per the request.
How can I do so with Tomcat as my application server and Jersey for the REST API?
Since I am new to this, I have no clue how to send images to an Android client where they are shown as items.
Every resource is identified by a URI; the client will ask for a particular image or a bunch of images by querying the URL. So you just need to expose a service; the following service is an example that sends a single image to the client.
@GET
@Path("/images/{imageId}")
@Produces("image/png")
public Response downloadImage(@PathParam("imageId") String imageId) {
    MultiMediaDB imageDb = new MultiMediaDB();
    String filePath = imageDb.getImage(imageId);
    File file = new File(filePath);
    ResponseBuilder response = Response.ok((Object) file);
    response.header("Content-Disposition",
            "attachment; filename=\"fileName.png\"");
    return response.build();
}
MultiMediaDB is my custom class to get the file location from the DB; you can hardcode it for now for testing purposes, e.g. D:\server_image.png.
You need to set the Content-Disposition header to attachment so that the content is returned as a file attachment with the given file name rather than rendered inline. In Android you just need to read the InputStream from an HttpURLConnection object and decode it into a Bitmap, as shown below:
URL url = new URL(BaseUrl + "/images/" + imageId);
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
urlConnection.connect();
iStream = urlConnection.getInputStream();
bitmap = BitmapFactory.decodeStream(iStream);
Then you can set that bitmap on an ImageView or whatever you have as a container.
