UrhoSharp Material.FromImage not working with some jpg files - xamarin

I'm using Xamarin.Forms with UrhoSharp in my project. I'm trying to set a material from an image on a sphere. Everything is OK in my Android project, but in the iOS project, when I set the material from some jpg files it doesn't work and all I get is a black screen.
Here is the jpg that works correctly:
And here is the other one that doesn't:
This is my code:
var scene = new Scene();
scene.CreateComponent<Octree>();

// Node (rotation and position)
var node = scene.CreateChild("room");
node.Position = new Vector3(0, 0, 0);
//node.Rotation = new Quaternion(10, 60, 10);
node.SetScale(1f);

// Model
var modelObject = node.CreateComponent<StaticModel>();
modelObject.Model = ResourceCache.GetModel("CustomModels/SmoothSphere.mdl");

var zoneNode = scene.CreateChild("Zone");
var zone = zoneNode.CreateComponent<Zone>();
zone.SetBoundingBox(new BoundingBox(-300.0f, 300.0f));
zone.AmbientColor = new Color(1f, 1f, 1f);

// Get the image from a byte[] ...
//var url = "http://www.wsj.com/public/resources/media/0524yosemite_1300R.jpg";
//var wc = new WebClient() { Encoding = Encoding.UTF8 };
//var mb = new MemoryBuffer(wc.DownloadData(new Uri(url)));
var mb = new MemoryBuffer(PanoramaBuffer.PanoramaByteArray);
var image = new Image(Context) { Name = "MyImage" };
image.Load(mb);
// ... or from a resource:
//var image = ResourceCache.GetImage("Textures/grave.jpg");

var isFlipped = image.FlipHorizontal();
if (!isFlipped)
{
    throw new Exception("Unsuccessful flip");
}

var m = Material.FromImage("1.jpg");
m.SetTechnique(0, CoreAssets.Techniques.DiffNormal, 0, 0);
m.CullMode = CullMode.Cw;
//m.SetUVTransform(Vector2.Zero, 0, 0);
modelObject.SetMaterial(m);

// Camera
var cameraNode = scene.CreateChild("camera");
_camera = cameraNode.CreateComponent<Camera>();
_camera.Fov = 75.8f;
_initialZoom = _camera.Zoom;

// Viewport
Renderer.SetViewport(0, new Viewport(scene, _camera, null));
I have already tried changing the compression level, the ICC profile, and so on, with no luck.

I asked the same question on forums.xamarin.com and someone answered it; I'll share the answer here:
In iOS, every texture needs to have a power-of-two resolution, like 256 x 256 or 1024 x 512. Check if that is the issue. Additionally, check that you're using the latest UrhoSharp version.
Also make sure that the image is set as a BundleResource in the iOS project.
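For instance, here is a minimal sketch of a power-of-two check you could run on the loaded image before calling Material.FromImage. The helper names (IsPowerOfTwo, NextPowerOfTwo) are my own, not part of the answer; the actual resize/pad would be done with whatever image tooling you already use:

// Hypothetical helpers - check whether a texture dimension is a power of two.
static bool IsPowerOfTwo(int n) => n > 0 && (n & (n - 1)) == 0;

static int NextPowerOfTwo(int n)
{
    var p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Usage: validate the loaded image before creating the material.
if (!IsPowerOfTwo(image.Width) || !IsPowerOfTwo(image.Height))
{
    // Resize or pad the source jpg (e.g. offline) to
    // NextPowerOfTwo(image.Width) x NextPowerOfTwo(image.Height).
    throw new Exception($"Non-power-of-two texture: {image.Width}x{image.Height}");
}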

Related

Updating Texture2D frequently causes process to crash (UpdateSubresource)

I am using SharpDX to render the browser (Chromium) output buffer in a DirectX process. The process is relatively simple: I intercept the CEF buffer (by overriding the OnPaint method) and write it to a Texture2D. The code is straightforward.
Texture creation:
public void BuildTextureWrap() {
    var oldTexture = texture;
    texture = new D3D11.Texture2D(DxHandler.Device, new D3D11.Texture2DDescription() {
        Width = overlay.Size.Width,
        Height = overlay.Size.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI.Format.B8G8R8A8_UNorm,
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        BindFlags = D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None,
    });

    var view = new D3D11.ShaderResourceView(
        DxHandler.Device,
        texture,
        new D3D11.ShaderResourceViewDescription {
            Format = texture.Description.Format,
            Dimension = D3D.ShaderResourceViewDimension.Texture2D,
            Texture2D = { MipLevels = texture.Description.MipLevels },
        }
    );

    textureWrap = new D3DTextureWrap(view, texture.Description.Width, texture.Description.Height);

    if (oldTexture != null) {
        obsoleteTextures.Add(oldTexture);
    }
}
That code runs at startup and whenever a resize happens.
Then, whenever CEF's OnPaint fires, I copy its buffer into the texture:
var destinationRegion = new D3D11.ResourceRegion {
    Top = Math.Min(r.dirtyRect.y, texDesc.Height),
    Bottom = Math.Min(r.dirtyRect.y + r.dirtyRect.height, texDesc.Height),
    Left = Math.Min(r.dirtyRect.x, texDesc.Width),
    Right = Math.Min(r.dirtyRect.x + r.dirtyRect.width, texDesc.Width),
    Front = 0,
    Back = 1,
};

// Draw to the target
var context = targetTexture.Device.ImmediateContext;
context.UpdateSubresource(targetTexture, 0, destinationRegion, sourceRegionPtr, rowPitch, depthPitch);
There is some more code, but this is the only relevant piece. The whole thing works until OnPaint happens frequently: if I force CEF to paint often, the whole host process dies, and the crash happens at UpdateSubresource.
So my question is: is there another, safer way to do this (update a texture frequently)?
The solution to this problem was relatively simple, yet not so obvious at the beginning: I moved the code responsible for updating the texture into the render loop and just kept the internal buffer pointer cached.
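A minimal sketch of that approach (the names _pendingBuffer, CachePaint and UpdateTextureIfDirty are illustrative, not from the original code): the OnPaint callback only records what to copy, and the render loop, which owns the immediate context, performs the actual UpdateSubresource once per frame.

// Fields caching the most recent CEF paint (names are assumptions).
private IntPtr _pendingBuffer = IntPtr.Zero;
private D3D11.ResourceRegion _pendingRegion;
private int _rowPitch, _depthPitch;

// Called from CEF's OnPaint: just remember the pointer and region.
private void CachePaint(IntPtr buffer, D3D11.ResourceRegion region, int rowPitch, int depthPitch)
{
    _pendingBuffer = buffer;
    _pendingRegion = region;
    _rowPitch = rowPitch;
    _depthPitch = depthPitch;
}

// Called once per frame from the render loop, on the render thread.
private void UpdateTextureIfDirty(D3D11.DeviceContext context, D3D11.Texture2D targetTexture)
{
    if (_pendingBuffer == IntPtr.Zero)
        return;

    context.UpdateSubresource(targetTexture, 0, _pendingRegion, _pendingBuffer, _rowPitch, _depthPitch);
    _pendingBuffer = IntPtr.Zero;
}

This keeps all GPU work on a single thread and collapses bursts of paints into at most one upload per rendered frame.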

How to recognize a QR code from an image using the ZXing library?

I have a Xamarin.Android project and I would like to recognize a QR code from the camera and save the picture to storage at the same time. I used Android.Hardware.Camera.IPreviewCallback to get the image from the camera. Saving the image works as expected, but recognition of the QR code fails. Here is my code:
void Android.Hardware.Camera.IPreviewCallback.OnPreviewFrame(byte[] data, Android.Hardware.Camera camera)
{
    byte[] jpegData = ConvertYuvToJpeg(data);
    Bitmap bitmap = BytesToBitmap(jpegData);
    SaveBitmapImage(bitmap); // This works great

    var width = (int)_textureView.Width;
    var height = (int)_textureView.Height;

    // How to get LuminanceSource??
    //LuminanceSource source = new RGBLuminanceSource(rgbValues, bm.Width, bm.Height, RGBLuminanceSource.BitmapFormat.ARGB32);
    //LuminanceSource source = new RGBLuminanceSource(jpegData, width, height);
    LuminanceSource source = new PlanarYUVLuminanceSource(data, width, height,
        0, 0, width, height, false);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    QRCodeReader reader = new QRCodeReader();
    var result = reader.decode(binaryBitmap);
}
The call to reader.decode(binaryBitmap) always returns null.
Edit:
It seems that the problem is with the camera. It is not focusing on the QR code, the image is blurry, and the ZXing library is unable to decode it. How can I make the camera focus?
The problem is with camera focus: a focus mode must be set. Here is the code:
var parameters = _camera.GetParameters();
parameters.FocusMode = GetOptimalFocusMode(parameters);
_camera.SetParameters(parameters);

private String GetOptimalFocusMode(Android.Hardware.Camera.Parameters parameters)
{
    String result;
    IList<String> focusModes = parameters.SupportedFocusModes;

    if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeContinuousVideo))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeContinuousVideo;
    }
    else if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeAuto))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeAuto;
    }
    else
    {
        result = parameters.SupportedFocusModes.First();
    }

    return result;
}
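If focus alone is not enough, one extra step worth trying (my suggestion, not part of the accepted answer) is passing ZXing's TRY_HARDER decode hint, which trades decoding speed for a more thorough scan of each frame:

// Assumption: asking ZXing to try harder on marginal frames may help.
var hints = new Dictionary<DecodeHintType, object>
{
    { DecodeHintType.TRY_HARDER, true }
};
QRCodeReader reader = new QRCodeReader();
var result = reader.decode(binaryBitmap, hints);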

Nokia Imaging SDK: customize BlendFilter

I have created this code:
Uri _blendImageUri = new Uri(@"Assets/1.png", UriKind.Relative);
var _blendImageProvider = new StreamImageSource(System.Windows.Application.GetResourceStream(_blendImageUri).Stream);
var bf = new BlendFilter(_blendImageProvider);
The filter works nicely, but I want to change the image size for the ForegroundSource property. How can I load the image at my own size?
If I understood you correctly, you are trying to blend the ForegroundSource with only a part of the original image? That is called local blending, and it is currently not supported by BlendFilter itself.
You can however use a ReframingFilter to reframe the ForegroundSource and then blend it. Your chain will look something like this:
using (var mainImage = new StreamImageSource(...))
using (var filterEffect = new FilterEffect(mainImage))
{
    using (var secondaryImage = new StreamImageSource(...))
    using (var secondaryFilterEffect = new FilterEffect(secondaryImage))
    using (var reframing = new ReframingFilter(new Rect(0, 0, 500, 500), 0)) // reframe your image, thus "setting" the location and size of the content when blending
    {
        secondaryFilterEffect.Filters = new[] { reframing };

        using (var blendFilter = new BlendFilter(secondaryFilterEffect))
        using (var renderer = new JpegRenderer(filterEffect))
        {
            filterEffect.Filters = new[] { blendFilter };
            await renderer.RenderAsync();
        }
    }
}
As you can see, you can use the reframing filter to position the content of your ForegroundSource so that it blends only locally. Note that when reframing you can set the borders outside of the image area (for example new Rect(-100, -100, 500, 500)), and the areas outside of the image will appear as transparent black areas - exactly what you need for BlendFilter.

EaselJS: Using updateCache() with AlphaMaskFilter When Dragging Mask

I'm using an imported PNG with an alpha gradient that I'm setting as a mask to reveal the bitmap it is assigned to. The mask object is draggable (kind of like a flashlight). I know I'm supposed to use an AlphaMaskFilter as one of the filters, and I know I'm supposed to use .updateCache() - I'm just not sure I'm using them correctly.
var stage;
var assetQueue;
var bg;
var bgMask;
var container;
var amf;

$(document).ready(function(){
    loadImages();
});

function loadImages()
{
    // Set up preload queue
    assetQueue = new createjs.LoadQueue();
    assetQueue.addEventListener("complete", preloadComplete);
    assetQueue.loadManifest([{id:"img_bg", src:"images/Nintendo-logo-red.jpg"}, {id:"img_bg_mask", src:"images/background_mask.png"}]);
}

function preloadComplete()
{
    assetQueue.removeEventListener("complete", preloadComplete);
    init();
}

function init()
{
    stage = new createjs.Stage("stage_canvas");
    setBackgrounds();
    sizeStage();
    $(document).mousemove(function(evt){
        trackMouse(evt);
    });
}

function trackMouse(evt)
{
    var mouseX = evt.pageX;
    var mouseY = evt.pageY;

    // Move the containing clip around
    container.x = mouseX - (bgMask.image.width / 2);
    container.y = mouseY - (bgMask.image.height / 2);

    // Offset the position of the masked image.
    bg.x = -container.x;
    bg.y = -container.y;

    container.updateCache();
    stage.update();
}

function setBackgrounds()
{
    bg = new createjs.Bitmap(assetQueue.getResult("img_bg"));
    bgMask = new createjs.Bitmap(assetQueue.getResult("img_bg_mask"));

    container = new createjs.Container();
    container.addChild(bg);

    amf = new createjs.AlphaMaskFilter(bgMask.image);
    container.filters = [amf];
    container.cache(0, 0, bg.image.width, bg.image.height);

    stage.addChild(container);
    stage.update();
}

function sizeStage()
{
    var windowW = 600;
    var windowH = 600;
    stage.canvas.width = windowW;
    stage.canvas.height = windowH;
    stage.update();
}
Solution found (for anyone interested). The key is to add the image you want to mask to a container. Move the container to any position you want, then offset the contained image within the container. The code has been updated to reflect this.

How to load an image in ActionScript to the stage without changing its size?

I made an application where the user can load an image and then edit it. The problem is that when I load an image for the first time, it is loaded at its original pixel size, but when I load another image, the pixel size differs from the original.
Here is the function that runs when an image is loaded to the stage:
function onFileLoadComplete(event:Event):void
{
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onDataLoadComplete);
    loader.loadBytes(loadFileRef.data);
    loadFileRef = null;
}

function onDataLoadComplete(event:Event):void
{
    var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;
    var matrix:Matrix = new Matrix();
    matrix.translate(-imageView_mc.x, -imageView_mc.y);
    matrix.scale(scaleX, scaleY);
    matrix.translate(imageView_mc.x, imageView_mc.y);

    imageView_mc.graphics.clear();
    imageView_mc.graphics.beginBitmapFill(bitmapData, matrix, false, true);
    imageView_mc.graphics.drawRect(0, 0, bitmapData.width, bitmapData.height);
    imageView_mc.graphics.endFill();

    trace("Image width: ", imageView_mc.width, ", Image height: ", imageView_mc.height);

    imageView_mc.width = stage.stageWidth;
    imageView_mc.height = stage.stageHeight;
    (imageView_mc.scaleX < imageView_mc.scaleY) ? imageView_mc.scaleY = imageView_mc.scaleX : imageView_mc.scaleX = imageView_mc.scaleY;
    (imageView_mc.scaleX > imageView_mc.scaleY) ? imageView_mc.scaleY = imageView_mc.scaleX : imageView_mc.scaleX = imageView_mc.scaleY;
    imageView_mc.x = 0.5 * (stage.stageWidth - imageView_mc.width);
    imageView_mc.y = 0.5 * (stage.stageHeight - imageView_mc.height);
}
So, is there something wrong with this code? It is meant to load an image to the stage and then scale it proportionally to the stage size.
Any feedback would be really appreciated.
Reset your scale when a new image is loaded, before applying the matrix transform:
scaleX = scaleY = 1;
