I am using SharpDX to render a browser (Chromium) output buffer in a DirectX process.
The process is relatively simple: I intercept the CEF buffer (by overriding the OnPaint method) and write it to a Texture2D.
The code is straightforward:
Texture creation:
public void BuildTextureWrap() {
    var oldTexture = texture;

    texture = new D3D11.Texture2D(DxHandler.Device, new D3D11.Texture2DDescription() {
        Width = overlay.Size.Width,
        Height = overlay.Size.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI.Format.B8G8R8A8_UNorm,
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        BindFlags = D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None,
    });

    var view = new D3D11.ShaderResourceView(
        DxHandler.Device,
        texture,
        new D3D11.ShaderResourceViewDescription {
            Format = texture.Description.Format,
            Dimension = D3D.ShaderResourceViewDimension.Texture2D,
            Texture2D = { MipLevels = texture.Description.MipLevels },
        }
    );

    textureWrap = new D3DTextureWrap(view, texture.Description.Width, texture.Description.Height);

    // Queue the old texture so it can be cleaned up later
    if (oldTexture != null) {
        obsoleteTextures.Add(oldTexture);
    }
}
That piece of code is executed at startup and whenever a resize happens.
Now, when CEF's OnDraw fires, I basically copy its buffer to the texture:
// Clamp the dirty rect to the texture bounds
var destinationRegion = new D3D11.ResourceRegion {
    Top = Math.Min(r.dirtyRect.y, texDesc.Height),
    Bottom = Math.Min(r.dirtyRect.y + r.dirtyRect.height, texDesc.Height),
    Left = Math.Min(r.dirtyRect.x, texDesc.Width),
    Right = Math.Min(r.dirtyRect.x + r.dirtyRect.width, texDesc.Width),
    Front = 0,
    Back = 1,
};

// Draw to the target
var context = targetTexture.Device.ImmediateContext;
context.UpdateSubresource(targetTexture, 0, destinationRegion, sourceRegionPtr, rowPitch, depthPitch);
There is some more code, but this is the only relevant piece. The whole thing works until OnDraw happens frequently: apparently, if I force CEF to paint frequently, the whole host process dies. The crash happens at UpdateSubresource.
So my question is: is there another, safer way to do this (update the texture frequently)?
The solution to this problem was relatively simple, yet not so obvious at the beginning.
I simply moved the code responsible for updating the texture inside the render loop, and just keep the internal buffer pointer cached.
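A minimal sketch of that approach, under stated assumptions: the field names, the DirtyRect type and the FlushPendingPaint hook below are hypothetical and depend on your CEF wrapper. OnPaint only records the buffer pointer and dirty region; the render loop then performs the actual UpdateSubresource on the thread that owns the immediate context.
// Sketch of the deferred-update idea; field names and the DirtyRect type
// are hypothetical, the real ones depend on your CEF wrapper.
private IntPtr cachedBuffer;                  // last buffer pointer handed over by CEF
private int cachedRowPitch;                   // bytes per row of that buffer (BGRA8 = width * 4)
private D3D11.ResourceRegion? pendingRegion;  // dirty region still waiting to be copied
private readonly object sync = new object();

// Runs on CEF's paint thread: only record what changed, touch no D3D objects.
private void OnCefPaint(IntPtr buffer, int width, int height, DirtyRect dirty)
{
    lock (sync)
    {
        cachedBuffer = buffer;
        cachedRowPitch = width * 4;
        pendingRegion = new D3D11.ResourceRegion
        {
            Left = dirty.x,
            Top = dirty.y,
            Right = dirty.x + dirty.width,
            Bottom = dirty.y + dirty.height,
            Front = 0,
            Back = 1,
        };
    }
}

// Runs once per frame inside the render loop, on the thread that owns the
// immediate context, so UpdateSubresource is never called concurrently.
private void FlushPendingPaint()
{
    lock (sync)
    {
        if (pendingRegion == null || cachedBuffer == IntPtr.Zero)
            return;

        var region = pendingRegion.Value;
        // Point at the first pixel of the dirty rect inside the source buffer.
        var src = cachedBuffer + region.Top * cachedRowPitch + region.Left * 4;
        texture.Device.ImmediateContext.UpdateSubresource(
            texture, 0, region, src, cachedRowPitch, 0);
        pendingRegion = null;
    }
}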
I'm working with a Photoshop document that has over 1,000 layers in it (1052, to be precise). The file itself is a collage and each layer is an individual image within that collage. So, when all layers are visible it's a complex collage of 1052 overlapping images. What I have been trying to do is create an animation of the collage assembly so that, for instance, layer 0 appears and then layer 1, layer 2, etc. and when each layer appears the previous layer also stays visible.
I have been able to make a frame animation, to which I've then added all the other layers as frames, but the animation I create just makes each new frame/layer visible on its own, making the previous layer(s) invisible. I've gone through the first several frames, manually keeping the previous layer/frame visible, but I'm hoping there's a way to do this more automatically.
Is there an approach or a setting I'm missing that will allow me to automagically create this animation, keeping each frame visible as the next frame also becomes visible?
Basically, the premise of what you want is to go through all the layers, switching the visibility on and off as you go. Or switch them all off before you start, and get the code to loop through all the layers. Also, as you are starting out, get rid of any groups; it'll just be easier. Make a copy of the PSD without them if you have to.
In Photoshop scripting, the topmost layer is 0, the next one below is 1, and so on. It's easier to count backwards over the loop:
for (var i = numOfLayers - 1; i >= 0; i--)
{
    // do stuff here
}
The visibility of each layer is controlled with:
theLayer.visible = true; // visible
thatLayer.visible = false; // not visible
So in short you'll have something like this:
// WITH ALL THE LAYERS VISIBILITY SWITCHED OFF TO START WITH!

// Switch off any dialog boxes
displayDialogs = DialogModes.NO; // OFF

// call the source document
var srcDoc = app.activeDocument;

// get the number of layers in the PSD
var numOfLayers = srcDoc.layers.length;

var myFolder = "D:\\temp\\";

// main loop
// ignore background
for (var i = numOfLayers - 2; i >= 0; i--)
{
    var docName = "image_" + i + ".jpg";

    // get a reference to the layers as you go up
    var thisLayer = srcDoc.layers[i];
    thisLayer.visible = true;

    // duplicate image into new document
    duplicate_it(docName);
    // active document is now the NEW one!

    // flatten it
    activeDocument.flatten();
    jpeg_it(myFolder + docName, 12);

    // close the image WITHOUT saving as we've just saved it
    app.activeDocument.close(SaveOptions.DONOTSAVECHANGES);

    // make the source document the active document
    app.activeDocument = srcDoc;

    // switch OFF the layer
    thisLayer.visible = false;
}
// function DUPLICATE IT (str)
// --------------------------------------------------------
function duplicate_it(str)
{
    // duplicate image into new document
    if (arguments.length == 0) str = "temp";
    var id428 = charIDToTypeID("Dplc");
    var desc92 = new ActionDescriptor();
    var id429 = charIDToTypeID("null");
    var ref27 = new ActionReference();
    var id430 = charIDToTypeID("Dcmn");
    var id431 = charIDToTypeID("Ordn");
    var id432 = charIDToTypeID("Frst");
    ref27.putEnumerated(id430, id431, id432);
    desc92.putReference(id429, ref27);
    var id433 = charIDToTypeID("Nm  "); // 4-character ID: 'Nm' plus two spaces
    desc92.putString(id433, str); // name
    executeAction(id428, desc92, DialogModes.NO);
}
// function JPEG IT (file path + file name, jpeg quality)
// ----------------------------------------------------------------
function jpeg_it(filePath, jpgQuality)
{
    if (!jpgQuality) jpgQuality = 12;

    // jpg file options
    var jpgFile = new File(filePath);
    jpgSaveOptions = new JPEGSaveOptions();
    jpgSaveOptions.formatOptions = FormatOptions.OPTIMIZEDBASELINE;
    jpgSaveOptions.embedColorProfile = true;
    jpgSaveOptions.matte = MatteType.NONE;
    jpgSaveOptions.quality = jpgQuality;
    activeDocument.saveAs(jpgFile, jpgSaveOptions, true, Extension.LOWERCASE);
}
I would like to use the MediaStream.captureStream() method, but it is either rendered useless by specification and bugs, or I am using it totally wrong.
I know that captureStream takes a maximal frame rate as its parameter, not a constant one, and does not even guarantee that. It is, however, possible to change the MediaStream's currentTime (currently in Chrome; in Firefox it has no effect, but in return there is requestFrame, which is not available in Chrome). The idea was that manual frame requests, or setting the placement of the frame in the MediaStream, should override this effect. It doesn't.
In Firefox it smoothly renders the video, frame by frame, but the resulting video is as long as the wall-clock time used for processing.
In Chrome there are some dubious black frames or reordered ones (currently I do not care about that until the FPS matches), and setting currentTime manually gives nothing; the result is the same as in Firefox.
I use modified code from the MediaStream Capture Canvas and Audio Simultaneously answer.
const FPS = 30;
var cStream, vid, recorder, chunks = [], go = true,
    Q = 61, rec = document.getElementById('rec'),
    canvas = document.getElementById('canvas'),
    ctx = canvas.getContext('2d');

ctx.strokeStyle = 'rgb(255, 0, 0)';

function clickHandler() {
  this.textContent = 'stop recording';
  // it has no effect no matter if it is empty or set to 30
  cStream = canvas.captureStream(FPS);
  recorder = new MediaRecorder(cStream);
  recorder.ondataavailable = saveChunks;
  recorder.onstop = exportStream;
  this.onclick = stopRecording;
  recorder.start();
  draw();
}

function exportStream(e) {
  if (chunks.length) {
    var blob = new Blob(chunks);
    var vidURL = URL.createObjectURL(blob);
    var vid2 = document.createElement('video');
    vid2.controls = true;
    vid2.src = vidURL;
    vid2.onended = function() {
      URL.revokeObjectURL(vidURL);
    };
    document.body.insertBefore(vid2, vid);
  } else {
    document.body.insertBefore(document.createTextNode('no data saved'), canvas);
  }
}

function saveChunks(e) {
  e.data.size && chunks.push(e.data);
}

function stopRecording() {
  go = false;
  this.parentNode.removeChild(this);
  recorder.stop();
}

var loadVideo = function() {
  vid = document.createElement('video');
  document.body.insertBefore(vid, canvas);

  vid.oncanplay = function() {
    rec.onclick = clickHandler;
    rec.disabled = false;
    canvas.width = vid.videoWidth;
    canvas.height = vid.videoHeight;
    vid.oncanplay = null;
    ctx.drawImage(vid, 0, 0);
  };

  vid.onseeked = function() {
    ctx.drawImage(vid, 0, 0);
    /*
      Here I want to include additional drawing per each frame,
      for sure taking more than 180ms
    */
    if (cStream && cStream.requestFrame) cStream.requestFrame();
    draw();
  };

  vid.crossOrigin = 'anonymous';
  vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
  vid.currentTime = 0;
};

function draw() {
  if (go && cStream) {
    ++Q;
    cStream.currentTime = Q / FPS;
    vid.currentTime = Q / FPS;
  }
}

loadVideo();
<button id="rec" disabled>record</button><br>
<canvas id="canvas" width="500" height="500"></canvas>
Is there a way to make it operational?
The goal is to load a video, process every frame (which is time-consuming in my case), and return the processed one.
Footnote: I do not want to use ffmpeg.js, an external server or other technologies. I can process it with classic ffmpeg without using JavaScript at all, but that is not the point of this question; it is more about MediaStream usability/maturity. The context is Firefox/Chrome here, but it may be node.js or nw.js as well. If this is possible at all, or merely awaiting bug fixes, the next question would be feeding audio into it, but I think that would be good as a separate question.
I have a Xamarin Android project and I would like to recognize a QR code from the camera and save the picture to storage at the same time. I used Android.Hardware.Camera.IPreviewCallback to get the image from the camera. Saving the image works as expected, but recognition of the QR code fails. Here is my code:
void Android.Hardware.Camera.IPreviewCallback.OnPreviewFrame(byte[] data, Android.Hardware.Camera camera)
{
    byte[] jpegData = ConvertYuvToJpeg(data);
    Bitmap bitmap = BytesToBitmap(jpegData);
    SaveBitmapImage(bitmap); // This works great

    var width = (int)_textureView.Width;
    var height = (int)_textureView.Height;

    // How to get LuminanceSource??
    //LuminanceSource source = new RGBLuminanceSource(rgbValues, bm.Width, bm.Height, RGBLuminanceSource.BitmapFormat.ARGB32);
    //LuminanceSource source = new RGBLuminanceSource(jpegData, width, height);
    LuminanceSource source = new PlanarYUVLuminanceSource(data, width, height,
        0, 0, width, height, false);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    QRCodeReader reader = new QRCodeReader();
    var result = reader.decode(binaryBitmap);
}
The call to
var result = reader.decode(binaryBitmap);
always returns null.
Edit:
It seems that the problem is with the camera. It is not focusing on the QR code, the image is blurry, and the ZXing library is unable to decode it. How can I make the camera focus?
The problem is with the camera focus. The focus mode must be set. Here is the code:
var parameters = _camera.GetParameters();
parameters.FocusMode = GetOptimalFocusMode(parameters);
_camera.SetParameters(parameters);

private String GetOptimalFocusMode(Android.Hardware.Camera.Parameters parameters)
{
    String result;
    IList<String> focusModes = parameters.SupportedFocusModes;

    if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeContinuousVideo))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeContinuousVideo;
    }
    else if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeAuto))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeAuto;
    }
    else
    {
        result = parameters.SupportedFocusModes.First();
    }

    return result;
}
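For completeness, a hedged sketch of how this might be wired up; the StartQrPreview method and the no-op callback below are hypothetical, not from the original answer. Note that FocusModeAuto only focuses when AutoFocus is called, while the continuous modes focus on their own.
// Hypothetical wiring, not from the original answer: apply the focus mode
// right after acquiring the camera and before starting the preview.
class NoOpAutoFocusCallback : Java.Lang.Object, Android.Hardware.Camera.IAutoFocusCallback
{
    public void OnAutoFocus(bool success, Android.Hardware.Camera camera)
    {
        // A real app could retry the focus here when success == false.
    }
}

void StartQrPreview()
{
    var parameters = _camera.GetParameters();
    parameters.FocusMode = GetOptimalFocusMode(parameters);
    _camera.SetParameters(parameters);
    _camera.StartPreview();

    // FocusModeAuto only focuses when asked; the continuous modes do not need this.
    if (parameters.FocusMode == Android.Hardware.Camera.Parameters.FocusModeAuto)
    {
        _camera.AutoFocus(new NoOpAutoFocusCallback());
    }
}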
I have created this code:
Uri _blendImageUri = new Uri(@"Assets/1.png", UriKind.Relative);
var _blendImageProvider = new StreamImageSource((System.Windows.Application.GetResourceStream(_blendImageUri).Stream));
var bf = new BlendFilter(_blendImageProvider);
The filter works nicely. But I want to change the image size for the ForegroundSource property. How can I load an image with my own size?
If I understood you correctly, you are trying to blend the ForegroundSource with only a part of the original image? That is called local blending, and it is currently not supported on the BlendFilter itself.
You can however use ReframingFilter to reframe the ForegroundSource and then blend it. Your chain will look something like this:
using (var mainImage = new StreamImageSource(...))
using (var filterEffect = new FilterEffect(mainImage))
{
    using (var secondaryImage = new StreamImageSource(...))
    using (var secondaryFilterEffect = new FilterEffect(secondaryImage))
    using (var reframing = new ReframingFilter(new Rect(0, 0, 500, 500), 0)) // reframe your image, thus "setting" the location and size of the content when blending
    {
        secondaryFilterEffect.Filters = new[] { reframing };

        using (var blendFilter = new BlendFilter(secondaryFilterEffect))
        using (var renderer = new JpegRenderer(filterEffect))
        {
            filterEffect.Filters = new[] { blendFilter };
            await renderer.RenderAsync();
        }
    }
}
As you can see, you can use the reframing filter to position the content of your ForegroundSource so that it will only blend locally. Note that when reframing you can set the borders outside of the image location (for example new Rect(-100, -100, 500, 500)) and the areas outside of the image will appear as black transparent areas, exactly what you need in BlendFilter.
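As an illustration (the values here are made up), the reframing line in the snippet above could become the following to shift the foreground content 100 pixels right and down in the blended result:
// Illustrative variation of the snippet above: the frame starts 100px above
// and to the left of the image origin, so the image content lands at (100, 100)
// within the 500x500 foreground; the uncovered areas are transparent black.
using (var reframing = new ReframingFilter(new Rect(-100, -100, 500, 500), 0))
{
    secondaryFilterEffect.Filters = new[] { reframing };
    // ...blend with BlendFilter exactly as in the snippet above
}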
I made an application where the user can load an image and then edit it. The problem is that when I load an image for the first time, it is loaded at its original pixel size, but when I load another image, the pixel size differs from the original.
Here is the function that runs when an image is loaded to the stage:
function onFileLoadComplete(event:Event):void
{
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onDataLoadComplete);
    loader.loadBytes(loadFileRef.data);
    loadFileRef = null;
}

function onDataLoadComplete(event:Event):void
{
    var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;

    var matrix:Matrix = new Matrix();
    matrix.translate(-imageView_mc.x, -imageView_mc.y);
    matrix.scale(scaleX, scaleY);
    matrix.translate(imageView_mc.x, imageView_mc.y);

    imageView_mc.graphics.clear();
    imageView_mc.graphics.beginBitmapFill(bitmapData, matrix, false, true);
    imageView_mc.graphics.drawRect(0, 0, bitmapData.width, bitmapData.height);
    imageView_mc.graphics.endFill();

    trace("Image width: ", imageView_mc.width, ", Image height: ", imageView_mc.height);

    imageView_mc.width = stage.stageWidth;
    imageView_mc.height = stage.stageHeight;
    (imageView_mc.scaleX < imageView_mc.scaleY) ? imageView_mc.scaleY = imageView_mc.scaleX : imageView_mc.scaleX = imageView_mc.scaleY;
    (imageView_mc.scaleX > imageView_mc.scaleY) ? imageView_mc.scaleY = imageView_mc.scaleX : imageView_mc.scaleX = imageView_mc.scaleY;

    imageView_mc.x = 0.5 * (stage.stageWidth - imageView_mc.width);
    imageView_mc.y = 0.5 * (stage.stageHeight - imageView_mc.height);
}
So, is there something wrong with this code? It is meant to load an image to the stage, then scale it proportionally to the stage size.
Any feedback would be really appreciated.
Reset your scale when a new image is loaded, before applying the matrix transform:
scaleX = scaleY = 1;