I am reading color frames from a Kinect V2 sensor using the Microsoft Kinect SDK v2. I copy the frame data into a byte array, which is later converted into an EmguCV Image. Below is a snippet from the code:
// A pixel buffer to hold image data from the incoming color frame
private byte[] pixels = null;
private KinectSensor kinectSensor = null;
private ColorFrameReader colorFrameReader = null;

public KinectForm()
{
    this.kinectSensor = KinectSensor.GetDefault();
    this.colorFrameReader = this.kinectSensor.ColorFrameSource.OpenReader();
    this.colorFrameReader.FrameArrived += this.Reader_ColorFrameArrived;

    // Create the colorFrameDescription from the ColorFrameSource using the Bgra format
    FrameDescription colorFrameDescription = this.kinectSensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);

    // Create a pixel buffer to hold the frame's image data as a byte array
    this.pixels = new byte[colorFrameDescription.Width * colorFrameDescription.Height * colorFrameDescription.BytesPerPixel];

    // Open the sensor
    this.kinectSensor.Open();

    InitializeComponent();
}
private void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
    {
        if (colorFrame != null)
        {
            FrameDescription colorFrameDescription = colorFrame.FrameDescription;

            if (colorFrame.RawColorImageFormat == ColorImageFormat.Bgra)
                colorFrame.CopyRawFrameDataToArray(pixels);
            else
                colorFrame.CopyConvertedFrameDataToArray(this.pixels, ColorImageFormat.Bgra);

            // Initialize an Emgu CV image, then assign the byte array of pixels to it
            Image<Bgr, byte> img = new Image<Bgr, byte>(colorFrameDescription.Width, colorFrameDescription.Height);
            img.Bytes = pixels;
            imgBox.Image = img; // Show the image in an Emgu.CV.UI.ImageBox
        }
    }
}
The converted image is corrupted when zoomed beyond 25%. Please see the screenshots below:
50% Zoom -
25% Zoom -
12.5% Zoom -
The <TColor> should be Bgra, as you can confirm from the "pixels" byte array.
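As a minimal sketch of the fix (the same handler as above, with only the color type changed): the buffer holds 4 bytes per pixel, so constructing the image as Image<Bgr, byte> (3 bytes per pixel) misaligns every row, which is why the picture shears apart at higher zoom levels.

// The frame data is 32-bit BGRA, so the EmguCV image type must match it.
Image<Bgra, byte> img = new Image<Bgra, byte>(colorFrameDescription.Width, colorFrameDescription.Height);
img.Bytes = pixels;
imgBox.Image = img; // No more corruption at higher zoom levels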
Edit:
Because the alpha channel of the Kinect's color frame is 0, the image won't be visible, so I added the code below to work around it. I hope there is a better (faster) way to solve this.
private void FixIntensity(byte[] p)
{
    int i = 0;
    while (i < p.Length)
    {
        p[i + 3] = 255; // Set the alpha byte of each BGRA pixel to fully opaque
        i += 4;
    }
}
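One possibly faster alternative, as a sketch (untested; this assumes the parallelization overhead pays off on a full 1920x1080 BGRA buffer, which is worth measuring):

private void FixIntensity(byte[] p)
{
    // Each pixel is 4 bytes (B, G, R, A); index 3 within a pixel is the alpha byte.
    // Parallel.For splits the pixel range across cores.
    System.Threading.Tasks.Parallel.For(0, p.Length / 4, i => p[i * 4 + 3] = 255);
}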
I'm trying to implement a UI where the user can edit and apply effects to an uploaded image, and I want to save the image with the BlendMode merged in. Is it possible to save the result of the blended image, or to apply it using the Canvas?
There are some packages that apply specific filters, but I want something more customizable for the end user.
I have already seen some examples of how to use a Canvas to draw images, but I can't figure out how to load an image and apply the blend described in the docs. Could anyone give an example?
UPDATED:
For anyone with the same question, below is the code showing how to save an image from a canvas to a file with a blendMode applied.
But I still don't get the result I expected. The quality of the generated image isn't the same as the original image, the blend doesn't seem to be the one I applied, and I can only save as a PNG file, not as JPG.
So, how can I load an image, apply a blend with a canvas, and save it as a JPG file without losing quality?
CODE:
const kCanvasSize = 200.0;

class CanvasImageToFile {
  CanvasImageToFile._();
  static final instance = CanvasImageToFile._();

  ByteData _readFromFile(File file) {
    Uint8List bytes = file.readAsBytesSync();
    return ByteData.view(bytes.buffer);
  }

  Future<File> _writeToFile(ByteData data) async {
    String dir = (await getTemporaryDirectory()).path;
    String filePath = '$dir/tempImage.jpg';
    final buffer = data.buffer;
    return new File(filePath).writeAsBytes(
        buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  }

  Future<ui.Image> _loadImageSource(File imageSource) async {
    ByteData data = _readFromFile(imageSource);
    ui.Codec codec = await ui.instantiateImageCodec(data.buffer.asUint8List());
    ui.FrameInfo fi = await codec.getNextFrame();
    return fi.image;
  }

  Future<File> generateImage(File imageSource) async {
    File imageResult;
    ui.Image image = await _loadImageSource(imageSource);
    if (image != null) {
      final recorder = ui.PictureRecorder();
      var rect =
          Rect.fromPoints(Offset(0.0, 0.0), Offset(kCanvasSize, kCanvasSize));
      final canvas = Canvas(recorder, rect);
      Size outputSize = rect.size;

      Paint paint = new Paint();
      // OVERLAY - the BlendMode uses the previously drawn content as a mask
      paint.blendMode = BlendMode.colorBurn;
      paint.color = Colors.red;
      // paint.colorFilter = ColorFilter.mode(Colors.blue, BlendMode.colorDodge);

      // Image
      Size inputSize = Size(image.width.toDouble(), image.height.toDouble());
      final FittedSizes fittedSizes =
          applyBoxFit(BoxFit.cover, inputSize, outputSize);
      final Size sourceSize = fittedSizes.source;
      final Rect sourceRect =
          Alignment.center.inscribe(sourceSize, Offset.zero & inputSize);

      canvas.saveLayer(rect, paint);
      canvas.drawImageRect(image, sourceRect, rect, paint);
      canvas.restore();

      final picture = recorder.endRecording();
      final img = await picture.toImage(200, 200);
      final byteData = await img.toByteData(format: ImageByteFormat.png);
      imageResult = await _writeToFile(byteData);
    }
    return imageResult;
  }
}
After some research and some adjustments, decoding the image from PNG to rawUnmodified in my previous code (using the Bitmap package), I could save the image in the original format (jpg) and achieve what I wanted. For anyone who has the same question, below is the code to load an image with a canvas, apply a blend, and write it to a file with the same quality:
Future<File> generateImage(
    File imageSource, Color color, BlendMode blendMode) async {
  File imageResult;
  ui.Image image = await _loadImageSource(imageSource);
  if (image != null) {
    final recorder = ui.PictureRecorder();
    var rect = Rect.fromPoints(Offset(0.0, 0.0),
        Offset(image.width.toDouble(), image.height.toDouble()));
    final canvas = Canvas(recorder, rect);
    Size outputSize = rect.size;

    Paint paint = new Paint();
    // Apply the blend through a ColorFilter instead of paint.blendMode/paint.color
    // paint.blendMode = blendMode;
    // paint.color = color;
    paint.colorFilter = ColorFilter.mode(color, blendMode);

    // Image
    Size inputSize = Size(image.width.toDouble(), image.height.toDouble());
    final FittedSizes fittedSizes =
        applyBoxFit(BoxFit.contain, inputSize, outputSize);
    final Size sourceSize = fittedSizes.source;
    final Rect sourceRect =
        Alignment.center.inscribe(sourceSize, Offset.zero & inputSize);
    canvas.drawImageRect(image, sourceRect, rect, paint);

    final picture = recorder.endRecording();
    final img = await picture.toImage(image.width, image.height);
    ByteData byteData =
        await img.toByteData(format: ui.ImageByteFormat.rawUnmodified);
    Bitmap bitmap = Bitmap.fromHeadless(
        image.width, image.height, byteData.buffer.asUint8List());
    Uint8List headedIntList = bitmap.buildHeaded();
    imageResult = await _writeToFile(headedIntList.buffer.asByteData());
  }
  return imageResult;
}
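For reference, a hypothetical call site for the method above (the file path, color, and blend mode are placeholders, not values from the original post):

// Apply a red colorBurn blend to source.jpg and get back the saved file.
final File result = await CanvasImageToFile.instance
    .generateImage(File('source.jpg'), Colors.red, BlendMode.colorBurn);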
Hi, I am trying to convert my Texture2D into an Image (I can't use a RawImage because the resolution doesn't match on phones), but the problem is that Image does not have a Texture element. How do I convert a UnityEngine.Texture2D into a Sprite for an Image?
// Image profile
protected Texture2D pickedImage;
public Texture2D myTexture2D;
public RawImage getRawImageProfile;
public RawImage getRawImageArrayProfile;
public Image getRawImageProfile2;
public Image getRawImageArrayProfile2;

public void PickImageFromGallery(int maxSize = 256)
{
    NativeGallery.GetImageFromGallery((path) =>
    {
        if (path != null)
        {
            byte[] imageBytes = File.ReadAllBytes(path);
            pickedImage = null;
            pickedImage = new Texture2D(2, 2);
            pickedImage.LoadImage(imageBytes);
            getRawImageProfile.texture = pickedImage;
            getRawImageArrayProfile.texture = pickedImage;
            getRawImageProfile2.sprite = pickedImage; // ERROR CONVERT SPRITE
            //getRawImageArrayProfile2.texture = pickedImage;
        }
    }, maxSize: maxSize);

    byte[] myBytes;
    myBytes = pickedImage.EncodeToPNG();
    enc = Convert.ToBase64String(myBytes);
}
Sprite.Create does exactly what you're looking for.
From the Unity docs on Sprite.Create:
Sprite.Create creates a new Sprite which can be used in game applications. A texture needs to be loaded and assigned to Create in order to control how the new Sprite will look.
In code:
public Texture2D myTexture2D; // The texture you want to convert to a sprite
Sprite mySprite; // The sprite you're gonna save to
Image myImage; // The image on which the sprite is gonna be displayed

public void FooBar()
{
    mySprite = Sprite.Create(myTexture2D, new Rect(0.0f, 0.0f, myTexture2D.width, myTexture2D.height), new Vector2(0.5f, 0.5f), 100.0f);
    myImage.sprite = mySprite; // apply the new sprite to the image
}
In the above example we take the image data from myTexture2D, and create a new Rect that is of the same size as the original texture2D, with its pivot point in the center, using 100 pixels per unit. We then apply the newly made sprite to the image.
I have a Xamarin Android project, and I would like to recognize a QR code from the camera and save the picture to storage at the same time. I used Android.Hardware.Camera.IPreviewCallback to get the image from the camera. Saving the image works as expected, but recognition of the QR code fails. Here is my code:
void Android.Hardware.Camera.IPreviewCallback.OnPreviewFrame(byte[] data, Android.Hardware.Camera camera)
{
    byte[] jpegData = ConvertYuvToJpeg(data);
    Bitmap bitmap = BytesToBitmap(jpegData);
    SaveBitmapImage(bitmap); // This works great

    var width = (int)_textureView.Width;
    var height = (int)_textureView.Height;

    // How to get LuminanceSource??
    //LuminanceSource source = new RGBLuminanceSource(rgbValues, bm.Width, bm.Height, RGBLuminanceSource.BitmapFormat.ARGB32);
    //LuminanceSource source = new RGBLuminanceSource(jpegData, width, height);
    LuminanceSource source = new PlanarYUVLuminanceSource(data, width, height,
        0, 0, width, height, false);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    QRCodeReader reader = new QRCodeReader();
    var result = reader.decode(binaryBitmap);
}
Call to
var result = reader.decode(binaryBitmap);
always returns null.
Edit:
It seems the problem is with the camera. It is not focusing on the QR code, the image is blurry, and the ZXing library is unable to decode it. How can I make the camera focus?
The problem is with camera focus: the focus mode must be set. Here is the code:
var parameters = _camera.GetParameters();
parameters.FocusMode = GetOptimalFocusMode(parameters);
_camera.SetParameters(parameters);

private String GetOptimalFocusMode(Android.Hardware.Camera.Parameters parameters)
{
    String result;
    IList<String> focusModes = parameters.SupportedFocusModes;

    if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeContinuousVideo))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeContinuousVideo;
    }
    else if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeAuto))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeAuto;
    }
    else
    {
        result = parameters.SupportedFocusModes.First();
    }

    return result;
}
I have a fairly simple Unity GUI with the following scheme:
Where Brekt and the others are buttons.
The GUI works just fine on PC and is in Screen Space - Overlay mode, so it is supposed to adapt automatically to fit every screen.
But on a tablet the whole GUI is smaller and shrunk toward the center of the screen, with huge margins around the elements (I can't attach a screenshot right now).
What is the way to fix that? Is it something in the Player settings or the Project settings?
Automatically scaling the UI requires using a combination of the anchors and pivot point of the RectTransform and the Canvas Scaler component. It is hard to understand without images or videos; it is very important that you thoroughly understand how to do this, and Unity provides a full video tutorial for it. You can watch it here.
Also, when using scrollbars, scroll views, and other similar UI controls, the ContentSizeFitter component is used to make sure they fit in the layout.
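As a rough sketch (the reference resolution and match value below are illustrative assumptions, not required settings), the same Canvas Scaler setup can also be done from code on the Canvas object:

using UnityEngine;
using UnityEngine.UI;

public class CanvasSetup : MonoBehaviour
{
    void Awake()
    {
        // Scale the UI with the screen size instead of keeping a constant pixel size,
        // so the layout fills a tablet the same way it fills a PC monitor.
        CanvasScaler scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920f, 1080f); // resolution the layout was designed for
        scaler.matchWidthOrHeight = 0.5f; // 0 = match width, 1 = match height
    }
}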
There is a problem with MovementRange: that value must be scaled too. I did it like this:
public int MovementRange = 100;
public AxisOption axesToUse = AxisOption.Both; // The options for the axes that the stick will use
public string horizontalAxisName = "Horizontal"; // The name given to the horizontal axis for the cross platform input
public string verticalAxisName = "Vertical"; // The name given to the vertical axis for the cross platform input

private int _MovementRange = 100;
Vector3 m_StartPos;
bool m_UseX; // Toggle for using the x axis
bool m_UseY; // Toggle for using the Y axis
CrossPlatformInputManager.VirtualAxis m_HorizontalVirtualAxis; // Reference to the joystick in the cross platform input
CrossPlatformInputManager.VirtualAxis m_VerticalVirtualAxis; // Reference to the joystick in the cross platform input

void OnEnable()
{
    CreateVirtualAxes();
}

void Start()
{
    m_StartPos = transform.position;

    // Scale the movement range by the canvas scale factor so it matches the on-screen size
    Canvas c = GetComponentInParent<Canvas>();
    _MovementRange = (int)(MovementRange * c.scaleFactor);
    Debug.Log("Range:" + _MovementRange);
}

void UpdateVirtualAxes(Vector3 value)
{
    var delta = m_StartPos - value;
    delta.y = -delta.y;
    delta /= _MovementRange;

    if (m_UseX)
    {
        m_HorizontalVirtualAxis.Update(-delta.x);
    }
    if (m_UseY)
    {
        m_VerticalVirtualAxis.Update(delta.y);
    }
}

void CreateVirtualAxes()
{
    // set axes to use
    m_UseX = (axesToUse == AxisOption.Both || axesToUse == AxisOption.OnlyHorizontal);
    m_UseY = (axesToUse == AxisOption.Both || axesToUse == AxisOption.OnlyVertical);

    // create new axes based on axes to use
    if (m_UseX)
    {
        m_HorizontalVirtualAxis = new CrossPlatformInputManager.VirtualAxis(horizontalAxisName);
        CrossPlatformInputManager.RegisterVirtualAxis(m_HorizontalVirtualAxis);
    }
    if (m_UseY)
    {
        m_VerticalVirtualAxis = new CrossPlatformInputManager.VirtualAxis(verticalAxisName);
        CrossPlatformInputManager.RegisterVirtualAxis(m_VerticalVirtualAxis);
    }
}

public void OnDrag(PointerEventData data)
{
    Vector3 newPos = Vector3.zero;

    if (m_UseX)
    {
        int delta = (int)(data.position.x - m_StartPos.x);
        delta = Mathf.Clamp(delta, -_MovementRange, _MovementRange);
        newPos.x = delta;
    }
    if (m_UseY)
    {
        int delta = (int)(data.position.y - m_StartPos.y);
        delta = Mathf.Clamp(delta, -_MovementRange, _MovementRange);
        newPos.y = delta;
    }

    transform.position = new Vector3(m_StartPos.x + newPos.x, m_StartPos.y + newPos.y, m_StartPos.z + newPos.z);
    UpdateVirtualAxes(transform.position);
}
Images are saved locally in the application. I want to save an image from a J2ME application to phone memory. Is there an encoder, or a way to convert the byte array? How do I save it? Please help me.
try {
    String url = System.getProperty("fileconn.dir.photos") + "model0_0.jpg";
    FileConnection fc = (FileConnection) Connector.open(url, Connector.READ_WRITE);
    if (!fc.exists()) {
        fc.create();
    }

    OutputStream os = fc.openOutputStream();
    int iw = galleryImage.getWidth();
    int ih = galleryImage.getHeight();
    rawInt = new int[iw * ih];
    galleryImage.getRGB(rawInt, 0, iw, 0, 0, iw, ih);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    for (int i = 0; i < rawInt.length; i++)
        baos.write(rawInt[i]);
    byte byteData[] = baos.toByteArray();
    baos.close();

    ByteArrayInputStream b_stream = new ByteArrayInputStream(byteData);
    int i = 0;
    /*while ((i = b_stream.read()) != -1) {
        os.write(i);
    }*/
    for (i = 0; i < content.length; i++) {
        os.write(b_stream.read());
    }
    //os.write(byteData);

    os.flush();
    os.close();
    System.out.println("\n\nImage Copied..\n");
    fc.close();
} catch (IOException e) {
    //System.out.println("image not read for gallery");
    e.printStackTrace();
} catch (java.lang.IllegalArgumentException iae) {
    iae.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
I tried this code. An unformatted file is stored in the default image folder, but its size is 0.0 KB. I think the image is not being read.
JPGEncoder is a really nice piece of software, and it works well on resource-constrained devices. However, it's based on Sun's JIMI library, which is now owned by Oracle. Oracle's license terms are somewhat permissive, but they forbid usage on embedded devices such as mobile phones. Depending on your situation, this might be a showstopper.
If you have RGBA data from an image, you need to encode it before saving it, so find a suitable encoder for your purpose. You can use the PNG format in any J2ME program. First, you need to extract the RGBA data from the image:
/**
 * Gets the channels of the image passed as parameter.
 * @param img Image
 * @return matrix of byte arrays representing the channels:
 *         [0] --> alpha channel
 *         [1] --> red channel
 *         [2] --> green channel
 *         [3] --> blue channel
 */
public byte[][] convertIntArrayToByteArrays(Image img) {
    int[] pixels = new int[img.getWidth() * img.getHeight()];
    img.getRGB(pixels, 0, img.getWidth(), 0, 0, img.getWidth(),
            img.getHeight());

    // separate channels
    byte[] red = new byte[pixels.length];
    byte[] green = new byte[pixels.length];
    byte[] blue = new byte[pixels.length];
    byte[] alpha = new byte[pixels.length];

    for (int i = 0; i < pixels.length; i++) {
        int argb = pixels[i];
        // binary operations to separate the channels
        // alpha is the leftmost byte of the int (0xAARRGGBB)
        alpha[i] = (byte) (argb >> 24);
        red[i] = (byte) (argb >> 16);
        green[i] = (byte) (argb >> 8);
        blue[i] = (byte) (argb);
    }

    return new byte[][]{alpha, red, green, blue};
}
Now download the PNG.java class from here. In this class we have:
toPNG(int, int, byte[], byte[], byte[], byte[])
The first two ints are the width and height of the image; the byte arrays are, in order: alpha, red, green, and blue. The width and height are straightforward: getWidth() and getHeight() from the Image object (as you have done), and the others come from convertIntArrayToByteArrays:
byte[][] rgba = convertIntArrayToByteArrays(galleryImage);
byte[] encodeImage = toPNG(galleryImage.getWidth(),galleryImage.getHeight(),rgba[0] ,rgba[1] ,rgba[2],rgba[3]);
Now you can save encodeImage to a file via FileConnection; a minimal sketch follows.
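This sketch reuses the FileConnection pattern from the question's own code (the path and file name are placeholders):

// Write the PNG-encoded bytes to a file via FileConnection (JSR-75).
String url = System.getProperty("fileconn.dir.photos") + "encoded.png";
FileConnection fc = (FileConnection) Connector.open(url, Connector.READ_WRITE);
if (!fc.exists()) {
    fc.create();
}
OutputStream os = fc.openOutputStream();
os.write(encodeImage); // the byte[] returned by toPNG(...)
os.flush();
os.close();
fc.close();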
If you'd like to save the image in JPEG format, download com.encoder.jpg from here, then:
import com.encoder.jpg.*;
//your input image
Image image = Image.createImage(128, 128);
JPGEncoder encoder = new JPGEncoder();
int quality = 65;
byte[] encodedImage = encoder.encode(image, quality);
//now save or send encoded jpeg image
Finally, you can use MediaProcessor from JSR-234:
//Create MediaProcessor for raw Image
MediaProcessor mediaProc = GlobalManager.createMediaProcessor("image/raw");
//Get control over the format
ImageFormatControl formatControl = (ImageFormatControl)
mediaProc.getControl("javax.microedition.amms.control.ImageFormatControl");
//Set necessary format
formatControl.setFormat("image/jpeg");
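Note that the snippet above only selects the output format; to run the conversion you still have to wire up the input and output. A hedged sketch based on the JSR-234 MediaProcessor interface (untested, so verify against your device's implementation):

// Feed the raw image in, point the output at a stream, and run to completion.
mediaProc.setInput(galleryImage);   // the javax.microedition.lcdui.Image to convert
mediaProc.setOutput(os);            // an OutputStream, e.g. from a FileConnection
mediaProc.start();
mediaProc.complete();               // blocks until processing has finished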
References:
stackoverflow
java-n-me