How to create video from still images in MATLAB?

I am new to MATLAB. I am trying to encrypt a video file in MATLAB, and I have encrypted the individual frames of the video. I am using MATLAB 7.10.0 (R2010a), which is why I used the "mmreader" function. But now I cannot work out how to reassemble all the encrypted frames into a new video. Here is my code:
vid = mmreader('videoSampl.avi');
numFrame = vid.NumberOfFrames;
for i = 1:2:3                      % only frames 1 and 3 for now
    frame = read(vid, i);
    gray = rgb2gray(frame);
    n = numel(gray);
    plaintext = reshape(gray, n, 1);
    % w, s_box, poly_mat, inv_s_box, inv_poly_mat come from the AES setup (not shown)
    cipherImg = cipher(plaintext, w, s_box, poly_mat, 1);
    re_plaintext = inv_cipher(cipherImg, plaintext, w, inv_s_box, inv_poly_mat, 1);
    img = reshape(cipherImg, 128, 128);
    imwrite(img, ['videoaes/encrypted/image' int2str(i), '.jpg']);
    imgP = reshape(re_plaintext, 128, 128);
    imwrite(imgP, ['videoaes/decrypted/Dimage' int2str(i), '.jpg']);
    im(i) = image(frame);          % was image(frames): "frames" is undefined
end
I have two folders, encrypted and decrypted, and I want to turn the images in these folders back into an AVI video.

Here is the code to build a video from your encrypted images:
ImagesFolder = uigetdir;
jpegFiles = dir(strcat(ImagesFolder, '\*.jpg'));   % match the extension you saved above (.jpg)
S = [jpegFiles(:).datenum];
[~, S] = sort(S);                                  % sort by file date so the frames stay in order
jpegFilesS = jpegFiles(S);
VideoFile = strcat(ImagesFolder, '\MyVideo');
writerObj = VideoWriter(VideoFile);
fps = 10;
writerObj.FrameRate = fps;
open(writerObj);
for t = 1:length(jpegFilesS)
    Frame = imread(strcat(ImagesFolder, '\', jpegFilesS(t).name));
    writeVideo(writerObj, Frame);   % writeVideo takes the image array directly (grayscale or RGB)
end
close(writerObj);
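If you would rather not go through intermediate image files at all, you can also push each processed frame straight into the writer inside your encryption loop. A minimal sketch of that pattern (the bitxor call below is only a stand-in for your cipher function, and note that VideoWriter first appeared in R2010b, so it is not available in R2010a):
vid = mmreader('videoSampl.avi');                % VideoReader on newer releases
writerObj = VideoWriter('encryptedVideo.avi');   % default profile is Motion JPEG AVI (lossy)
writerObj.FrameRate = 10;
open(writerObj);
for i = 1:vid.NumberOfFrames
    gray = rgb2gray(read(vid, i));
    encrypted = bitxor(gray, uint8(90));   % placeholder for your cipher(...) call
    writeVideo(writerObj, encrypted);      % grayscale frames can be written directly
end
close(writerObj);
Keep in mind that a lossy codec will not preserve the encrypted pixel values bit-for-bit, so if you need exact decryption later, an uncompressed profile (or the image-file route above) is the safer choice.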

Related

Convert from Android.Media.Image to Android.Graphics.Bitmap (possibly with Xamarin)

I have an Android.Media.Image frame acquired from the Camera2 API preview (in a Xamarin app) that I want to process.
The image format is YUV_420_888.
In a previous iteration I was able to get an Android.Graphics.Bitmap from the CameraPreview (but not anymore), and with that bitmap I was able to:
crop it
convert it to JPEG, save it or send it to server
convert it to a Mat to process with Emgu in real time
I searched for how to convert from Android.Media.Image to Android.Graphics.Bitmap, thinking it was going to be quite straightforward, but it's not.
I found mostly Java code that I tried to convert to C# Xamarin, but without success.
The first step is supposed to be converting to a YuvImage (I found two different versions, hence the comments):
private static byte[] YUV_420_888toNV21(Android.Media.Image image)
{
    byte[] nv21;
    ByteBuffer yBuffer = image.GetPlanes()[0].Buffer;
    ByteBuffer uBuffer = image.GetPlanes()[1].Buffer;
    ByteBuffer vBuffer = image.GetPlanes()[2].Buffer;
    int ySize = yBuffer.Remaining();
    int uSize = uBuffer.Remaining();
    int vSize = vBuffer.Remaining();
    nv21 = new byte[ySize + uSize + vSize];
    //yBuffer.Get(nv21, 0, ySize);
    //vBuffer.Get(nv21, ySize, vSize);
    //uBuffer.Get(nv21, ySize + vSize, uSize);
    yBuffer.Get(nv21, 0, ySize);
    uBuffer.Get(nv21, ySize, uSize);
    vBuffer.Get(nv21, ySize + uSize, vSize);
    return nv21;
}
And then the saving:
var bbitmap = YUV_420_888toNV21(image);
YuvImage yuv = new YuvImage(bbitmap, ImageFormatType.Nv21, image.Width, image.Height, null);
var ms = new MemoryStream();
yuv.CompressToJpeg(new Android.Graphics.Rect(0, 0, image.Width, image.Height), 100, ms);
File.WriteAllBytes("/sdcard/Download/testFrame.jpg", ms.ToArray());
But this is the result:
This was just a test to save and check the captured frame; even if it worked, at the end of it I still wouldn't have a Bitmap.
For real-time processing I don't want/need the JPG conversion because it would only slow everything down.
I'd like to convert the Image to a Bitmap, crop it and feed it to Emgu as a Mat.
Any suggestion?
Thanks

Convert Xamarin.Forms Image into Base64 format

I need to convert a Xamarin.Forms image into Base64 format. Can anyone help me with this?
This is how I've been trying to do it, but it doesn't work.
var inputStream = signatureImage.Source.GetValue(UriImageSource.UriProperty);
//Getting Stream as a Memorystream
var signatureMemoryStream = inputStream as MemoryStream;
if (signatureMemoryStream == null)
{
    signatureMemoryStream = new MemoryStream();
    inputStream.CopyTo(signatureMemoryStream);
}
//Adding memorystream into a byte array
var byteArray = signatureMemoryStream.ToArray();
//Converting byte array into Base64 string
base64String = Convert.ToBase64String(byteArray);
"signatureImage" is the image name.
Once you get your file path, you can use the following code, which worked for me.
var stream = file.GetStream();
var bytes = new byte [stream.Length];
await stream.ReadAsync(bytes, 0, (int)stream.Length);
string base64 = System.Convert.ToBase64String(bytes);
I found it here
Image is just a control in Xamarin.Forms for displaying an image.
It is not something you can get the image byte array out of.
You will be better off using the Media Plugin and saving the image to disk, then loading it via a memory stream and converting it, as in the sketch below.
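A rough sketch of that route (this assumes the Media Plugin, i.e. the CrossMedia API from the Xam.Plugin.Media package, is installed; the names here are illustrative):
// Sketch only: pick (or take) a photo with the Media Plugin, then convert the saved file.
var file = await Plugin.Media.CrossMedia.Current.PickPhotoAsync();
if (file != null)
{
    using (var stream = file.GetStream())
    using (var ms = new MemoryStream())
    {
        await stream.CopyToAsync(ms);                                // load the file into memory
        string base64String = Convert.ToBase64String(ms.ToArray()); // convert to Base64
    }
}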
You can also use FFImageLoading. It has two methods that may be useful here:
GetImageAsJpgAsync(int quality = 90, int desiredWidth = 0, int desiredHeight = 0)
GetImageAsPngAsync(int desiredWidth = 0, int desiredHeight = 0)
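For example, if the image is shown in an FFImageLoading CachedImage control rather than the stock Image control, something along these lines should work (untested sketch; "signatureImage" would have to be a CachedImage):
// Assumes signatureImage is an FFImageLoading.Forms.CachedImage
byte[] jpgBytes = await signatureImage.GetImageAsJpgAsync(90);
string base64String = Convert.ToBase64String(jpgBytes);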
The SO question - Convert Image into byte array in Xamarin.Forms shows how to do it in Platform specific code here.
The forum thread (Convert Image to byte[]) has a good discussion on why you can't get it from the control.
You can also do it like this:
var base64String = Convert.ToBase64String(File.ReadAllBytes(file.Path));

Creating FLV from images in MATLAB

I'm trying to make an FLV file from images via the following MATLAB code. The problem is converting the BMP images to FLV.
I'm fairly new to this part of MATLAB. Do you have any ideas?
clear
clc
vidobj = videoinput('winvideo',2);
preview(vidobj);
No_snapshot = 5;
interval = 1;
Format = 'bmp';
PathName = uigetdir;
tic;
count = 0;
date_temp = datevec(now);
date_string_vid = [num2str(date_temp(1)),'-',num2str(date_temp(2)),'-',num2str(date_temp(3)),'-',...
num2str(date_temp(4)),'-',num2str(date_temp(5)),'-',num2str(date_temp(6))];
while 1
    if fix(toc/interval) > count
        count = fix(toc/interval);
        date_string = num2str(count);
        imwrite(getsnapshot(vidobj), [PathName, '\', date_string, '.' Format], Format);
    end
    if count >= No_snapshot
        break;
    end
end
closepreview;
delete(vidobj);
% =========================================================================
PathName = 'G:\capture_video\movie1\';
obj=VideoWriter(date_string_vid,'Grayscale AVI');
open(obj)
for m = 1:No_snapshot
    m1 = imread([PathName, num2str(m), '.bmp']);
    % m1 = double(m1(:,:,1));
    F = im2frame(m1);
    writeVideo(obj, F);   % add the frame to the file; addframe belongs to the old avifile API, not VideoWriter
end
close(obj)

ffmpeg libx264 AVCodecContext settings

I am using a recent Windows (Jan 2011) FFmpeg build and trying to record video in H264. It records fine in MPEG4 using the following settings:
c->codec_id = CODEC_ID_MPEG4;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->width = VIDEO_WIDTH;
c->height = VIDEO_HEIGHT;
c->bit_rate = c->width * c->height * 4;
c->time_base.den = FRAME_RATE;
c->time_base.num = 1;
c->gop_size = 12;
c->pix_fmt = PIX_FMT_YUV420P;
Simply changing the codec ID to H264 causes avcodec_open() to fail (-1). I found a list of possible settings in How to encode h.264 with libavcodec/x264?. I have tried these; without setting pix_fmt, avcodec_open() still fails, but if I additionally set c->pix_fmt = PIX_FMT_YUV420P; then I get a divide-by-zero exception.
I then came across a few posts on here saying I should set nothing (with the exception of codec_id, codec_type, width, height, and perhaps bit_rate and pix_fmt), as the library now chooses the best settings itself. I have tried various combinations, but avcodec_open() still fails.
Does anyone have some advice on what to do or some settings that are current?
Thanks.
Here is one set of H264 settings that gives the issue I describe:
static AVStream* AddVideoStream(AVFormatContext *pOutputFmtCtx,
                                int frameWidth, int frameHeight, int fps)
{
    AVCodecContext* ctx;
    AVStream* stream;
    stream = av_new_stream(pOutputFmtCtx, 0);
    if (!stream)
    {
        return NULL;
    }
    ctx = stream->codec;
    ctx->codec_id = pOutputFmtCtx->oformat->video_codec; //CODEC_ID_H264
    ctx->codec_type = AVMEDIA_TYPE_VIDEO;
    ctx->width = frameWidth;   //704
    ctx->height = frameHeight; //576
    ctx->bit_rate = frameWidth * frameHeight * 4;
    ctx->coder_type = 1;                   // coder = 1
    ctx->flags |= CODEC_FLAG_LOOP_FILTER;  // flags=+loop
    ctx->me_cmp |= 1;                      // cmp=+chroma, where CHROMA = 1
    ctx->partitions |= X264_PART_I8X8 + X264_PART_I4X4 + X264_PART_P8X8 + X264_PART_B8X8; // partitions=+parti8x8+parti4x4+partp8x8+partb8x8
    ctx->me_method = ME_HEX;               // me_method=hex
    ctx->me_subpel_quality = 7;            // subq=7
    ctx->me_range = 16;                    // me_range=16
    ctx->gop_size = 250;                   // g=250
    ctx->keyint_min = 25;                  // keyint_min=25
    ctx->scenechange_threshold = 40;       // sc_threshold=40
    ctx->i_quant_factor = 0.71;            // i_qfactor=0.71
    ctx->b_frame_strategy = 1;             // b_strategy=1
    ctx->qcompress = 0.6;                  // qcomp=0.6
    ctx->qmin = 10;                        // qmin=10
    ctx->qmax = 51;                        // qmax=51
    ctx->max_qdiff = 4;                    // qdiff=4
    ctx->max_b_frames = 3;                 // bf=3
    ctx->refs = 3;                         // refs=3
    ctx->directpred = 1;                   // directpred=1
    ctx->trellis = 1;                      // trellis=1
    ctx->flags2 |= CODEC_FLAG2_BPYRAMID + CODEC_FLAG2_MIXED_REFS + CODEC_FLAG2_WPRED + CODEC_FLAG2_8X8DCT + CODEC_FLAG2_FASTPSKIP; // flags2=+bpyramid+mixed_refs+wpred+dct8x8+fastpskip
    ctx->weighted_p_pred = 2;              // wpredp=2
    // libx264-main.ffpreset preset
    ctx->flags2 |= CODEC_FLAG2_8X8DCT;
    ctx->flags2 ^= CODEC_FLAG2_8X8DCT;     // flags2=-dct8x8
    // if I set this, I get a divide-by-zero error on avcodec_open()
    // if I don't set it, avcodec_open() returns -1
    //ctx->pix_fmt = PIX_FMT_YUV420P;
    return stream;
}
In my experience you should give FFMPEG as little information as possible when initialising your codec. This may seem counter-intuitive, but it means that FFMPEG will use its default settings, which are more likely to work than your own guesses. See what I would include below:
AVStream *stream;
m_video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
stream = avformat_new_stream(_outputCodec, m_video_codec);
ctx = stream->codec;
ctx->codec_id = m_fmt->video_codec;
ctx->bit_rate = m_AVIMOV_BPS; //Bits Per Second
ctx->width = m_AVIMOV_WIDTH; //Note Resolution must be a multiple of 2!!
ctx->height = m_AVIMOV_HEIGHT; //Note Resolution must be a multiple of 2!!
ctx->time_base.den = m_AVIMOV_FPS; //Frames per second
ctx->time_base.num = 1;
ctx->gop_size = m_AVIMOV_GOB; // Intra frames per x P frames
ctx->pix_fmt = AV_PIX_FMT_YUV420P;//Do not change this, H264 needs YUV format not RGB
As in previous answers, here is a working example of the FFMPEG library encoding RGB frames to an H264 video:
http://www.imc-store.com.au/Articles.asp?ID=276
An extra thought on your code, though:
Have you called the registration functions, like below?
avcodec_register_all();
av_register_all();
If you don't call these two functions near the start of your code, your subsequent calls to FFMPEG will fail and you'll most likely seg-fault.
Have a look at the linked example, I tested it on VC++2010 and it works perfectly.

Images saved with D3DXSaveSurfaceToFile will open in Paint, not Photoshop

I'm using D3DXSaveSurfaceToFile to save windowed Direct3D 9 surfaces to PNG, BMP and JPG files. There are no errors returned from the D3DXSaveSurfaceToFile call and all files open fine in Windows Photo Viewer and Paint. But they will not open in a higher end image editing program such as Paint Shop Pro or Photoshop. The error messages from these programs basically say that the file is corrupted. If I open the files in Paint and then save them in the same file format with a different file name, then they'll open fine in the other programs.
This leads me to believe that D3DXSaveSurfaceToFile is writing out non-standard versions of these file formats. Is there some way I can get this function to write out files that can be opened in programs like Photoshop without the intermediate step of resaving the files in Paint? Or is there another function I should be using that does a better job of saving a Direct3D surfaces to an image?
Take a look at the file in an image metadata viewer. What does it tell you?
Unfortunately D3DXSaveSurfaceToFile() isn't the most stable (it's also exceptionally slow). Personally I do something like the code below. It works even on anti-aliased displays by doing an offscreen render to take the screenshot and then getting it into a buffer. It also supports only the most common pixel formats. Sorry for any errors in it; I pulled it out of an app I used to work on.
You can then, in your code and probably in another thread, convert said 'bitmap' to anything you like using a variety of different code.
void HandleScreenshot(IDirect3DDevice9* device)
{
    DWORD tcHandleScreenshot = GetTickCount();
    LPDIRECT3DSURFACE9 pd3dsBack = NULL;
    LPDIRECT3DSURFACE9 pd3dsTemp = NULL;
    // Grab the back buffer into a surface
    if ( SUCCEEDED ( device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pd3dsBack) ))
    {
        D3DSURFACE_DESC desc;
        pd3dsBack->GetDesc(&desc);
        LPDIRECT3DSURFACE9 pd3dsCopy = NULL;
        if (desc.MultiSampleType != D3DMULTISAMPLE_NONE)
        {
            if (SUCCEEDED(device->CreateRenderTarget(desc.Width, desc.Height, desc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &pd3dsCopy, NULL)))
            {
                if (SUCCEEDED(device->StretchRect(pd3dsBack, NULL, pd3dsCopy, NULL, D3DTEXF_NONE)))
                {
                    pd3dsBack->Release();
                    pd3dsBack = pd3dsCopy;
                }
                else
                {
                    pd3dsCopy->Release();
                }
            }
        }
        if (SUCCEEDED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &pd3dsTemp, NULL)))
        {
            DWORD tmpTimeGRTD = GetTickCount();
            if (SUCCEEDED(device->GetRenderTargetData(pd3dsBack, pd3dsTemp)))
            {
                D3DLOCKED_RECT lockedSrcRect;
                if (SUCCEEDED(pd3dsTemp->LockRect(&lockedSrcRect, NULL, D3DLOCK_READONLY | D3DLOCK_NOSYSLOCK | D3DLOCK_NO_DIRTY_UPDATE)))
                {
                    int nSize = desc.Width * desc.Height * 3;
                    BYTE* pixels = new BYTE[nSize + 1];
                    int iSrcPitch = lockedSrcRect.Pitch;
                    BYTE* pSrcRow = (BYTE*)lockedSrcRect.pBits;
                    LPBYTE lpDest = pixels;
                    LPDWORD lpSrc;
                    switch (desc.Format)
                    {
                    case D3DFMT_A8R8G8B8:
                    case D3DFMT_X8R8G8B8:
                        for (int y = desc.Height - 1; y >= 0; y--)
                        {
                            lpSrc = reinterpret_cast<LPDWORD>(lockedSrcRect.pBits) + y * desc.Width;
                            for (unsigned int x = 0; x < desc.Width; x++)
                            {
                                *reinterpret_cast<LPDWORD>(lpDest) = *lpSrc;
                                lpSrc++;     // increment source pointer by 1 DWORD
                                lpDest += 3; // increment destination pointer by 3 bytes
                            }
                        }
                        break;
                    default:
                        ZeroMemory(pixels, nSize);
                    }
                    pd3dsTemp->UnlockRect();
                    BITMAPINFOHEADER header;
                    header.biWidth = desc.Width;
                    header.biHeight = desc.Height;
                    header.biSizeImage = nSize;
                    header.biSize = sizeof(BITMAPINFOHEADER);
                    header.biPlanes = 1;
                    header.biBitCount = 3 * 8; // RGB
                    header.biCompression = 0;
                    header.biXPelsPerMeter = 0;
                    header.biYPelsPerMeter = 0;
                    header.biClrUsed = 0;
                    header.biClrImportant = 0;
                    BITMAPFILEHEADER bfh = {0};
                    bfh.bfType = 0x4d42;
                    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
                    bfh.bfSize = bfh.bfOffBits + nSize;
                    unsigned int rough_size = sizeof(BITMAPINFOHEADER) + sizeof(BITMAPFILEHEADER) + nSize;
                    unsigned char* bmpFile = new unsigned char[rough_size]; // holds the complete in-memory BMP
                    unsigned char* p = bmpFile;
                    memcpy(p, &bfh, sizeof(BITMAPFILEHEADER));
                    p += sizeof(BITMAPFILEHEADER);
                    memcpy(p, &header, sizeof(BITMAPINFOHEADER));
                    p += sizeof(BITMAPINFOHEADER);
                    memcpy(p, pixels, nSize);
                    delete [] pixels;
                    /**********************************************/
                    // bmpFile now holds a full BMP file, write it out here
                    // (and delete [] bmpFile when you are done with it)
                }
            }
            pd3dsTemp->Release();
        }
        pd3dsBack->Release();
    }
}
Turns out it was a combination of a bug in my code and Paint being more forgiving than Photoshop when reading files. The bug caused the files to be saved with the wrong extension (e.g. Image.bmp was actually saved using D3DXIFF_JPG). When opening a file that contained a JPG image but had a BMP extension, Photoshop just rejected the file. I guess Paint worked because it ignores the file extension and just decodes the file contents.
Looking at the file in an image metadata viewer helped me see the problem.
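One simple way to avoid this class of bug is to derive the D3DXIMAGE_FILEFORMAT from the file extension in one place, so the file name and the saved format can never disagree. A rough sketch (the helper names here are mine, and error handling is omitted):
#include <d3dx9tex.h>
#include <string>

// Map the file extension to the matching D3DX image format.
static D3DXIMAGE_FILEFORMAT FormatFromExtension(const std::wstring& path)
{
    size_t dot = path.find_last_of(L'.');
    std::wstring ext = (dot == std::wstring::npos) ? L"" : path.substr(dot);
    if (ext == L".bmp") return D3DXIFF_BMP;
    if (ext == L".jpg") return D3DXIFF_JPG;
    if (ext == L".png") return D3DXIFF_PNG;
    return D3DXIFF_BMP; // fall back to BMP for anything unrecognised
}

static HRESULT SaveSurface(const std::wstring& path, IDirect3DSurface9* surface)
{
    return D3DXSaveSurfaceToFileW(path.c_str(), FormatFromExtension(path),
                                  surface, NULL, NULL);
}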
