I'm using CaptureSource to get a video stream. To save video I can use the out-of-the-box FileSink, which lets me save an MP4-encoded file, but in my application I want to apply some adjustments to the video stream (add some artifacts, text, a logo, etc.) and then save it to isolated storage.
I could define a class derived from VideoSink and override the OnSample method:
protected override void OnSample(long sampleTimeInHundredNanoseconds, long frameDurationInHundredNanoseconds, byte[] sampleData)
{
    // process sampleData
    // encode sampleData
    // save encoded sampleData
}
But I'm not sure how to encode raw video data on Windows Phone.
I'm looking for any video encoders for WP.
TIA for advice!
Related
I have a video file embedded in the project that should be opened in the native video player using Xamarin.Forms.
Note: having an in-app video player is limited here.
Any ideas for this?
Sorry for the late reply; you could refer to FormsNativeVideoPlayer.
When you load your video file from the Raw folder, you just need to modify the VideoPlayer_CustomRenderer code like this:
protected override void OnElementChanged (ElementChangedEventArgs<Xamarin.Forms.View> e)
{
    base.OnElementChanged (e);
    ...
    string uriPath = "android.resource://" + Forms.Context.PackageName + "/" + Resource.Raw.audio;
    var uri = Android.Net.Uri.Parse(uriPath);
    // Set the videoView with our uri; this could also be a local video on the device
    videoView.SetVideoURI (uri);
    ...
}
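The android.resource URI used above follows the pattern android.resource://&lt;package name&gt;/&lt;resource id&gt;. A minimal sketch of assembling that string (the package name and resource id below are placeholder assumptions, not values from the question):

```java
public class ResourceUriSketch {
    // Builds an android.resource:// URI string for a bundled raw resource.
    static String buildResourceUri(String packageName, int resourceId) {
        return "android.resource://" + packageName + "/" + resourceId;
    }

    public static void main(String[] args) {
        // Hypothetical package name and resource id for illustration
        System.out.println(buildResourceUri("com.example.app", 2131230720));
    }
}
```

In the renderer above, Forms.Context.PackageName and Resource.Raw.* supply these two pieces automatically.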
I'm using the Microsoft Bot Framework with Cognitive Services to generate images from a source image that the user uploads via the bot. I'm using C#.
The Cognitive Services API returns a byte[] or a Stream representing the treated image.
How can I send that image directly to my user? All the docs and samples seem to point to me having to host the image at a publicly addressable URL and send a link. I can do this, but I'd rather not.
Does anyone know how to simply return the image, kind of like the Caption Bot does?
You should be able to use something like this:
var message = activity.CreateReply("");
message.Type = "message";
message.Attachments = new List<Attachment>();
var webClient = new WebClient();
byte[] imageBytes = webClient.DownloadData("https://placeholdit.imgix.net/~text?txtsize=35&txt=image-data&w=120&h=120");
string url = "data:image/png;base64," + Convert.ToBase64String(imageBytes);
message.Attachments.Add(new Attachment { ContentUrl = url, ContentType = "image/png" });
await _client.Conversations.ReplyToActivityAsync(message);
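The essential step in this answer is encoding the image bytes as a base64 data URI, which the client can render without fetching anything. A runnable sketch of just that encoding step (shown in Java rather than C#; the sample bytes are a placeholder for the Cognitive Services output):

```java
import java.util.Base64;

public class DataUriSketch {
    // Wraps raw image bytes in a data URI that clients can render directly.
    static String toDataUri(byte[] imageBytes, String mimeType) {
        return "data:" + mimeType + ";base64," + Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        byte[] fakePng = {(byte) 0x89, 'P', 'N', 'G'}; // placeholder bytes, not a real image
        System.out.println(toDataUri(fakePng, "image/png"));
    }
}
```

Note that some channels limit message size, so very large images may still need to be hosted at a URL.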
The image source of an HTML image element can be a data URI that contains the image directly rather than a URL for downloading it. The following overloaded functions take any valid image and encode it as a JPEG data URI string that can be assigned directly to the src property of an HTML element to display the image. If you know the format of the returned image ahead of time, you may be able to save some processing by skipping the JPEG re-encode and simply returning the image encoded as base64 with the appropriate data URI prefix.
public string ImageToBase64(System.IO.Stream stream)
{
    // Create a bitmap from the stream
    using (System.Drawing.Bitmap bitmap = System.Drawing.Image.FromStream(stream) as System.Drawing.Bitmap)
    {
        // Save to a memory stream as JPEG to get a known format. PNG could also be used by
        // changing the bitmap save format and the data URI prefix below.
        byte[] outputBytes = null;
        using (System.IO.MemoryStream outputStream = new System.IO.MemoryStream())
        {
            bitmap.Save(outputStream, System.Drawing.Imaging.ImageFormat.Jpeg);
            outputBytes = outputStream.ToArray();
        }
        // Encode the image byte array and prepend the proper data URI prefix.
        // The result can be used directly as an HTML image source.
        string output = string.Format("data:image/jpeg;base64,{0}", Convert.ToBase64String(outputBytes));
        return output;
    }
}
public string ImageToBase64(byte[] bytes)
{
    // Wrap the bytes in a memory stream positioned at the start. (Writing the bytes into an
    // empty stream and passing it on without seeking back to 0 would make the read fail.)
    using (System.IO.MemoryStream inputStream = new System.IO.MemoryStream(bytes))
    {
        return ImageToBase64(inputStream);
    }
}
I am working on a feature for my Android app: I would like to read text from a picture and then save that text in a database. Is OCR the best way? Is there another way? Google suggests in its documentation that the NDK should only be used if strictly necessary, but what exactly are the downsides?
Any help would be great.
You can use the Google Vision library to convert an image to text; it gives good results.
Add the library below to your build.gradle:
compile 'com.google.android.gms:play-services-vision:10.0.0+'
TextRecognizer textRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
Frame imageFrame = new Frame.Builder()
        .setBitmap(bitmap) // your image bitmap
        .build();
StringBuilder imageText = new StringBuilder();
SparseArray<TextBlock> textBlocks = textRecognizer.detect(imageFrame);
for (int i = 0; i < textBlocks.size(); i++) {
    TextBlock textBlock = textBlocks.get(textBlocks.keyAt(i));
    imageText.append(textBlock.getValue()); // append, so earlier blocks aren't overwritten
}
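Since detect() can return several blocks, joining their values with line breaks usually reads better than plain concatenation. A runnable sketch of that joining step (plain strings stand in for the TextBlock values; this is not the Vision API itself):

```java
import java.util.Arrays;
import java.util.List;

public class TextJoinSketch {
    // Joins recognized text blocks with newlines, skipping empty values.
    static String joinBlocks(List<String> blockValues) {
        StringBuilder sb = new StringBuilder();
        for (String value : blockValues) {
            if (value == null || value.isEmpty()) continue;
            if (sb.length() > 0) sb.append('\n');
            sb.append(value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> blocks = Arrays.asList("Hello", "", "world");
        System.out.println(joinBlocks(blocks));
    }
}
```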
From this Simple example of OCRReader in Android tutorial you can read text from an image, and you can also scan for text using the camera, with very little code.
This library is developed using the Mobile Vision Text API.
To scan text from the camera:
OCRCapture.Builder(this)
        .setUseFlash(true)
        .setAutoFocus(true)
        .buildWithRequestCode(CAMERA_SCAN_TEXT);
To extract text from an image:
String text = OCRCapture.Builder(this).getTextFromUri(pickedImage);
// You can also use the getTextFromBitmap(Bitmap bitmap) or getTextFromImage(String imagePath) public APIs from the OCRLibrary library.
Text from an image can be extracted using the Firebase machine learning (ML) Kit. There are two versions of the text recognition API: the on-device API (free) and the on-cloud API.
To use the API, first create a Bitmap of the image, which should be upright. Then create a FirebaseVisionImage object, passing it the bitmap:
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
Then create a FirebaseVisionTextRecognizer object:
FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
        .getCloudTextRecognizer();
Then pass the FirebaseVisionImage object to the processImage() method, add listeners to the resulting task, and capture the extracted text in the success callback:
textRecognizer.processImage(image)
        .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
            @Override
            public void onSuccess(FirebaseVisionText firebaseVisionText) {
                // process success
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // process failure
            }
        });
For a complete example showing how to use the Firebase ML text recognizer, see https://www.zoftino.com/extracting-text-from-images-android
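The success/failure listener pattern above can be sketched in plain Java without the Firebase SDK. The interfaces and the recognizer below are simplified stand-ins for illustration, not the real API (which runs asynchronously via a Task):

```java
public class CallbackSketch {
    // Simplified stand-ins for the SDK's listener interfaces
    interface OnSuccessListener<T> { void onSuccess(T result); }
    interface OnFailureListener { void onFailure(Exception e); }

    // A fake recognizer that immediately reports a fixed result
    static void processImage(String fakeImage,
                             OnSuccessListener<String> onSuccess,
                             OnFailureListener onFailure) {
        try {
            // Real recognition would run asynchronously here
            onSuccess.onSuccess("recognized text from " + fakeImage);
        } catch (Exception e) {
            onFailure.onFailure(e);
        }
    }

    public static void main(String[] args) {
        processImage("photo.png",
                result -> System.out.println(result),
                error -> System.err.println("failed: " + error));
    }
}
```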
There is a different option: you can upload your image to a server, run OCR there, then fetch the result.
I am trying to process images uploaded to Azure using a WebJob. I have two containers, image and thumbs.
Currently I am reading from the image container, creating a thumbnail, and writing it to the thumbs container using the following code, which works great.
public static void GenerateThumbnail([QueueTrigger("addthumb")] ImageDTO blobInfo,
    [Blob("images/{Name}", FileAccess.Read)] Stream input, [Blob("thumbs/{Name}")] CloudBlockBlob outputBlob)
{
    using (Stream output = outputBlob.OpenWrite())
    {
        ConvertImageToThumbnail(input, output, blobInfo.Name);
        outputBlob.Properties.ContentType = GetMimeType(blobInfo.Name);
    }
}
Now I would also like to resize the main image from the image container (if it's too big), compress it, and replace the original with it.
Is there a way to read from and write to the same blob?
Yes, you can read from and write to the same blob. For example, you could change your input binding to bind to a CloudBlockBlob using FileAccess.ReadWrite:
public static void GenerateThumbnail(
    [QueueTrigger("addthumb")] ImageDTO blobInfo,
    [Blob("images/{Name}", FileAccess.ReadWrite)] CloudBlockBlob input,
    [Blob("thumbs/{Name}")] CloudBlockBlob output)
{
    // Process the image
}
You can then use the OpenRead/OpenWrite stream methods on that blob to read the image and process/modify it as needed.
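The read-modify-write cycle on a single blob can be sketched with plain in-memory streams (shown in Java; the byte array stands in for the blob, the stream constructors stand in for OpenRead/OpenWrite, and the uppercase transform stands in for the actual image resizing):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlobRewriteSketch {
    // Reads the "blob", transforms it, and returns the bytes that would replace the original.
    static byte[] readModifyWrite(byte[] blobContents) throws IOException {
        // Read phase: analogous to blob.OpenRead()
        InputStream in = new ByteArrayInputStream(blobContents);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            // Modify phase: a trivial transform stands in for image resizing/compression
            out.write(Character.toUpperCase(b));
        }
        // Write phase: analogous to blob.OpenWrite() overwriting the original
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] updated = readModifyWrite("original blob".getBytes());
        System.out.println(new String(updated));
    }
}
```

With a real blob, the safest order is to read the whole image first, then open the write stream, since you are overwriting the same storage location.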
I have an array of Images in JavaFX. I want to create a video clip (animation) from those images, including a sound file.
How can I achieve this?
NOTE: I want to get a video file (avi, mp4, ...) at the end of the process.
This is my array:
Image[] frames
I tried using the KeyFrame class, but without success:
ImageView destImageView = new ImageView();
Group group = new Group();
group.setTranslateX(300);
group.setTranslateY(450);
Image[] frames = m.getFrames();
KeyFrame[] kf = new KeyFrame[frames.length];
for (int i = 0; i < frames.length; i++) {
    kf[i] = new KeyFrame(new Duration(0), new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent event) {
            // destImageView.setImage();
            // group.getChildren().setAll(destImageView);
        }
    });
}
You can use
javax.imageio.ImageIO.write(javafx.embed.swing.SwingFXUtils.fromFXImage(frame, null), "png", new File(directory, fileName));
to save each image as a png file.
Make sure to give the frames sequentially numbered filenames, e.g. img0000.png, img0001.png, etc.
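A runnable sketch of that export loop, using blank BufferedImages in place of the JavaFX frames (real code would substitute SwingFXUtils.fromFXImage(frame, null) for each frame):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class FrameExportSketch {
    // Builds a zero-padded filename so the sequence sorts correctly
    static String frameName(int index) {
        return String.format("img%04d.png", index);
    }

    public static void main(String[] args) throws IOException {
        File directory = new File(System.getProperty("java.io.tmpdir"), "frames");
        directory.mkdirs();
        for (int i = 0; i < 3; i++) {
            // Placeholder frame; real code: SwingFXUtils.fromFXImage(frames[i], null)
            BufferedImage frame = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
            ImageIO.write(frame, "png", new File(directory, frameName(i)));
        }
    }
}
```

The zero padding matters because tools like ImageJ import image sequences in lexicographic filename order.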
Then use ImageJ/Fiji (https://imagej.net/Fiji/Downloads)
to import the image sequence and save it as an AVI file. Alternatively, since ImageJ is open-source and written in Java, you could import and use the ImageJ class
ij.plugin.filter.AVI_Writer
directly. You could then convert the result to MP4 or another format using, for example, VLC.