I receive a JPEG image from a server in a char * buffer. I want to show this picture in a picture box before saving it. All I know is that a picture box can show images from a file, an HBITMAP, or a Stream. I don't want to use the file option, and I don't know how to use the other two.
I've searched and tried a few things; here's my code.
I don't know why it doesn't show any picture.
delegate void setImagedelegate(Stream ^ image);
void threadDecodeAndShow()
{
while (1)
{
if (f)
{
//the package that is receiving has some custom headers,
// I first find about the size of the JPEG and
//put a pointer at the beginning of the JPEG part.
BYTE *pImgSur = NULL;
DWORD imageInfoLength = *(DWORD*)m_pImgDataBufPtr[nIndexCurBuf];
DWORD customInfoLenForUser = *(DWORD*)(m_pImgDataBufPtr[nIndexCurBuf] + 4 + imageInfoLength);
DWORD jpegLength = *(DWORD*)(m_pImgDataBufPtr[nIndexCurBuf] + 4 + imageInfoLength + 4 + customInfoLenForUser);
pImgSur = (BYTE *)(m_pImgDataBufPtr[nIndexCurBuf] + 12 + customInfoLenForUser + imageInfoLength);
auto store = gcnew array<Byte>(jpegLength);
System::Runtime::InteropServices::Marshal::Copy(IntPtr(pImgSur), store, 0, jpegLength);
auto stream = gcnew System::IO::MemoryStream(store);
this->setImage(stream);
f = 0;
}
}
}
void setImage(Stream ^ image)
{
if (this->pictureBox1->InvokeRequired)
{
setImagedelegate^ d =
gcnew setImagedelegate(this, &MainPage::setImage);
this->Invoke(d, gcnew array<Object^> { image });
}
else
{
this->pictureBox1->Image = Image::FromStream(image);
this->pictureBox1->Show();
}
}
You can turn a char* buffer into a stream with a memory stream. There are two ways to do it, depending on how long the buffer remains valid. The Image class requires the backing store of the stream to remain readable for the life of the image. So if you are 100% sure that you can rely on the buffer surviving long enough, you can do it like this:
using namespace System;
using namespace System::Drawing;
Image^ BytesToImage(char* buffer, size_t len) {
auto stream = gcnew System::IO::UnmanagedMemoryStream((unsigned char*)buffer, len);
return Image::FromStream(stream);
}
If you don't have that guarantee, or you can't be sure, then you have to copy the buffer content:
Image^ BytesToImageBuffered(char* buffer, size_t len) {
auto store = gcnew array<Byte>(len);
System::Runtime::InteropServices::Marshal::Copy(IntPtr(buffer), store, 0, len);
auto stream = gcnew System::IO::MemoryStream(store);
return Image::FromStream(stream);
}
The garbage collector takes care of destroying the stream and array objects; that happens after you dispose the Image object, so there's no need to help.
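A minimal usage sketch of the buffered variant in the asker's setting (hedged: it assumes this runs on the UI thread and that pictureBox1 exists on the form; disposing the previous image releases its GDI+ handle promptly):
void ShowJpeg(char* buffer, size_t len) {
    Image^ old = this->pictureBox1->Image;
    this->pictureBox1->Image = BytesToImageBuffered(buffer, len);
    // In C++/CLI, delete on a managed handle calls Dispose().
    if (old != nullptr) delete old;
}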
This question already has an answer here: FFMPEG: ‘PIX_FMT_BGR24’ was not declared in this scope.
I installed FFmpeg from source following https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu and wrote a test program to save PPM files from a video, but the code cannot resolve PIX_FMT_RGB24. I wrote the code as below:
int main() {
// Initializing these to NULL prevents segfaults!
AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtxOrig = NULL;
AVCodecContext *pCodecCtx = NULL;
AVCodec *pCodec = NULL;
AVFrame *pFrame = NULL;
AVFrame *pFrameRGB = NULL;
AVPacket packet;
int frameFinished;
int numBytes;
uint8_t *buffer = NULL;
struct SwsContext *sws_ctx = NULL;
const char* url = "/home/liulijuan/bin/test.mp4";
// [1] Register all formats and codecs
av_register_all();
// [2] Open video file
if(avformat_open_input(&pFormatCtx, url, NULL, NULL)!=0)
return -1; // Couldn't open file
// [3] Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL)<0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error
av_dump_format(pFormatCtx, 0, url, 0);
// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1)
return -1; // Didn't find a video stream
// Get a pointer to the codec context for the video stream
pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
if(pCodec==NULL) {
fprintf(stderr, "Unsupported codec!\n");
return -1; // Codec not found
}
// Copy context
pCodecCtx = avcodec_alloc_context3(pCodec);
if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
fprintf(stderr, "Couldn't copy codec context");
return -1; // Error copying codec context
}
// Open codec
if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
return -1; // Could not open codec
// Allocate video frame
pFrame=av_frame_alloc();
// Allocate an AVFrame structure
pFrameRGB=av_frame_alloc();
if(pFrameRGB==NULL)
return -1;
// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
pCodecCtx->width, pCodecCtx->height);
// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width,
pCodecCtx->height,
pCodecCtx->pix_fmt,
pCodecCtx->width,
pCodecCtx->height,
PIX_FMT_RGB24,
SWS_BILINEAR,
NULL,
NULL,
NULL
);
// [4] Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
// Did we get a video frame?
if(frameFinished) {
// Convert the image from its native format to RGB
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height,
pFrameRGB->data, pFrameRGB->linesize);
// Save the frame to disk
if(++i<=5)
SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
i);
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
// Free the RGB image
av_free(buffer);
av_frame_free(&pFrameRGB);
// Free the YUV frame
av_frame_free(&pFrame);
// Close the codecs
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);
// Close the video file
avformat_close_input(&pFormatCtx);
return 0;
}
So I replaced PIX_FMT_RGB24 with AV_PIX_FMT_RGB24, but I cannot open the saved PPM file. The save code is below:
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
FILE *pFile;
char szFilename[32];
int y;
printf("start save frame ...\n");
// Open file
sprintf(szFilename, "/home/liulijuan/frame%d.ppm", iFrame);
pFile=fopen(szFilename, "wb");
if(pFile==NULL)
return;
printf("start write header ...\n");
// Write header
fprintf(pFile, "/P6\n%d %d\n255\n", width, height);
// Write pixel data
for(y=0; y<height; y++)
fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);
// Close file
fclose(pFile);
printf("close file ...\n");
}
So, what's wrong with this code?
As of commit 78071a14, pixel formats have been prefixed with AV_, and the PIX_FMT_* defines were moved to libavutil/old_pix_fmts.h (which was included by the original pixfmt.h). That file was then removed in the next major version.
The fix is to add the AV_ prefix to any PIX_FMT_* identifiers which haven't been updated yet. Note that the saved PPM file fails to open for a separate reason: the PPM magic number must be P6, so the header line in SaveFrame should read fprintf(pFile, "P6\n%d %d\n255\n", width, height); the leading slash in "/P6" makes the file invalid.
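Applied to the code in the question, the renamed calls look like this (a sketch; the enum's type is now AVPixelFormat):
numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                              pCodecCtx->height);
avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                         pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_RGB24,
                         SWS_BILINEAR, NULL, NULL, NULL);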
I'm trying to find a way to let the system tell me whenever there's a new entry in the USN Change Journal to track modifications made to files and directories on an NTFS volume (Server 2008/2012).
This way I don't have to constantly poll the journal and can just let my thread sleep until I get notified when there's a new change-event.
However, is there even such an interrupt?
The FSCTL_QUERY_USN_JOURNAL function doesn't specifically mention interrupts (events, notifications), nor have I been able to find another way to achieve this with less intensive poll-and-compare techniques.
I'm not a hard-core programmer so there may be simpler ways to tie these functions to interrupts that I'm not aware of.
Could I perhaps find out where the USN Change Journal is stored and watch that file with another process that can generate an interrupt on change?
https://msdn.microsoft.com/en-us/library/aa365729(v=vs.85).aspx
The code posted here blocks the executing thread until a new USN record is created in the journal. When new records arrive, the thread wakes up and you can process the changes and/or notify listeners via a callback that the filesystem has changed (in this example it just prints a message to the console). Then the thread blocks again. This example uses one thread per volume, so a separate NTFSChangesWatcher instance is needed for each volume.
You didn't specify which tools or language you are using, so I will describe it the way I did it. To run this code, create a Visual Studio C++ Win32 Console Application.
Create an NTFSChangesWatcher class. Paste this code into the NTFSChangesWatcher.h file (replacing the auto-generated one):
#pragma once
#include <windows.h>
#include <memory>
class NTFSChangesWatcher
{
public:
NTFSChangesWatcher(char drive_letter);
~NTFSChangesWatcher() = default;
// Method which runs an infinite loop and waits for a new update sequence number in the journal.
// The thread is blocked until a new USN record is created in the journal.
void WatchChanges();
private:
HANDLE OpenVolume(char drive_letter);
bool CreateJournal(HANDLE volume);
bool LoadJournal(HANDLE volume, USN_JOURNAL_DATA* journal_data);
bool WaitForNextUsn(PREAD_USN_JOURNAL_DATA read_journal_data) const;
std::unique_ptr<READ_USN_JOURNAL_DATA> GetWaitForNextUsnQuery(USN start_usn);
bool ReadJournalRecords(PREAD_USN_JOURNAL_DATA journal_query, LPVOID buffer,
DWORD& byte_count) const;
std::unique_ptr<READ_USN_JOURNAL_DATA> GetReadJournalQuery(USN low_usn);
char drive_letter_;
HANDLE volume_;
std::unique_ptr<USN_JOURNAL_DATA> journal_;
DWORDLONG journal_id_;
USN last_usn_;
// Flags which indicate which types of changes you want to listen for.
static const int FILE_CHANGE_BITMASK;
static const int kBufferSize;
};
and this code into the NTFSChangesWatcher.cpp file:
#include "NTFSChangesWatcher.h"
#include <iostream>
using namespace std;
const int NTFSChangesWatcher::kBufferSize = 1024 * 1024 / 2;
const int NTFSChangesWatcher::FILE_CHANGE_BITMASK =
USN_REASON_RENAME_NEW_NAME | USN_REASON_SECURITY_CHANGE | USN_REASON_BASIC_INFO_CHANGE | USN_REASON_DATA_OVERWRITE |
USN_REASON_DATA_TRUNCATION | USN_REASON_DATA_EXTEND | USN_REASON_CLOSE;
NTFSChangesWatcher::NTFSChangesWatcher(char drive_letter) :
drive_letter_(drive_letter)
{
volume_ = OpenVolume(drive_letter_);
journal_ = make_unique<USN_JOURNAL_DATA>();
bool res = LoadJournal(volume_, journal_.get());
if (!res) {
cout << "Failed to load journal" << endl;
return;
}
journal_id_ = journal_->UsnJournalID;
last_usn_ = journal_->NextUsn;
}
HANDLE NTFSChangesWatcher::OpenVolume(char drive_letter) {
wchar_t pattern[10] = L"\\\\?\\a:";
pattern[4] = static_cast<wchar_t>(drive_letter);
HANDLE volume = nullptr;
volume = CreateFile(
pattern, // lpFileName
// also could be | FILE_READ_DATA | FILE_READ_ATTRIBUTES | SYNCHRONIZE
GENERIC_READ | GENERIC_WRITE | SYNCHRONIZE, // dwDesiredAccess
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, // share mode
NULL, // default security attributes
OPEN_EXISTING, // disposition
// It is always set, no matter whether you explicitly specify it or not. This means that access
// must be aligned with the sector size, so we can only read a number of bytes that is a multiple of the sector size.
FILE_FLAG_NO_BUFFERING, // file attributes
NULL // do not copy file attributes
);
if (volume == INVALID_HANDLE_VALUE) {
// An error occurred!
cout << "Failed to open volume" << endl;
return nullptr;
}
return volume;
}
bool NTFSChangesWatcher::CreateJournal(HANDLE volume) {
DWORD byte_count;
CREATE_USN_JOURNAL_DATA create_journal_data;
bool ok = DeviceIoControl(volume, // handle to volume
FSCTL_CREATE_USN_JOURNAL, // dwIoControlCode
&create_journal_data, // input buffer
sizeof(create_journal_data), // size of input buffer
NULL, // lpOutBuffer
0, // nOutBufferSize
&byte_count, // number of bytes returned
NULL) != 0; // OVERLAPPED structure
if (!ok) {
// An error occurred!
}
return ok;
}
bool NTFSChangesWatcher::LoadJournal(HANDLE volume, USN_JOURNAL_DATA* journal_data) {
DWORD byte_count;
// Try to open journal.
if (!DeviceIoControl(volume, FSCTL_QUERY_USN_JOURNAL, NULL, 0, journal_data, sizeof(*journal_data), &byte_count,
NULL)) {
// If failed (for example, in case journaling is disabled), create journal and retry.
if (CreateJournal(volume)) {
return LoadJournal(volume, journal_data);
}
return false;
}
return true;
}
void NTFSChangesWatcher::WatchChanges() {
auto u_buffer = make_unique<char[]>(kBufferSize);
auto read_journal_query = GetWaitForNextUsnQuery(last_usn_);
while (true) {
// This function does not return until a new USN record is created.
WaitForNextUsn(read_journal_query.get());
cout << "New entry created in the journal!" << endl;
auto journal_query = GetReadJournalQuery(read_journal_query->StartUsn);
DWORD byte_count;
if (!ReadJournalRecords(journal_query.get(), u_buffer.get(), byte_count)) {
// An error occurred.
cout << "Failed to read journal records" << endl;
}
last_usn_ = *(USN*)u_buffer.get();
read_journal_query->StartUsn = last_usn_;
// If you need to, here you can:
// Read and parse the journal records from the buffer.
// Notify NTFSChangeObservers about the journal changes.
}
}
bool NTFSChangesWatcher::WaitForNextUsn(PREAD_USN_JOURNAL_DATA read_journal_data) const {
DWORD bytes_read;
bool ok = true;
// This function does not return until a new USN record is created.
ok = DeviceIoControl(volume_, FSCTL_READ_USN_JOURNAL, read_journal_data, sizeof(*read_journal_data),
&read_journal_data->StartUsn, sizeof(read_journal_data->StartUsn), &bytes_read,
nullptr) != 0;
return ok;
}
unique_ptr<READ_USN_JOURNAL_DATA> NTFSChangesWatcher::GetWaitForNextUsnQuery(USN start_usn) {
auto query = make_unique<READ_USN_JOURNAL_DATA>();
query->StartUsn = start_usn;
query->ReasonMask = 0xFFFFFFFF; // All bits.
query->ReturnOnlyOnClose = FALSE; // All entries.
query->Timeout = 0; // No timeout.
query->BytesToWaitFor = 1; // Wait for this.
query->UsnJournalID = journal_id_; // The journal.
query->MinMajorVersion = 2;
query->MaxMajorVersion = 2;
return query;
}
bool NTFSChangesWatcher::ReadJournalRecords(PREAD_USN_JOURNAL_DATA journal_query, LPVOID buffer,
DWORD& byte_count) const {
return DeviceIoControl(volume_, FSCTL_READ_USN_JOURNAL, journal_query, sizeof(*journal_query), buffer, kBufferSize,
&byte_count, nullptr) != 0;
}
unique_ptr<READ_USN_JOURNAL_DATA> NTFSChangesWatcher::GetReadJournalQuery(USN low_usn) {
auto query = make_unique<READ_USN_JOURNAL_DATA>();
query->StartUsn = low_usn;
query->ReasonMask = 0xFFFFFFFF; // All bits.
query->ReturnOnlyOnClose = FALSE;
query->Timeout = 0; // No timeout.
query->BytesToWaitFor = 0;
query->UsnJournalID = journal_id_;
query->MinMajorVersion = 2;
query->MaxMajorVersion = 2;
return query;
}
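If you also want to inspect the individual changes, here is a hedged sketch of parsing the records that ReadJournalRecords leaves in u_buffer; it would go inside the while loop in WatchChanges, after ReadJournalRecords succeeds. The first sizeof(USN) bytes of the buffer hold the next USN; the rest are variable-length USN_RECORD structures:
// Walk the USN_RECORD entries that follow the leading USN value.
PUSN_RECORD record = (PUSN_RECORD)(u_buffer.get() + sizeof(USN));
while ((char*)record < u_buffer.get() + byte_count) {
    wprintf(L"USN %lld, file: %.*s\n", record->Usn,
            (int)(record->FileNameLength / sizeof(WCHAR)),
            (PWCHAR)((PBYTE)record + record->FileNameOffset));
    record = (PUSN_RECORD)((PBYTE)record + record->RecordLength);
}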
Now you can use it (for example in the main function for testing):
#include "NTFSChangesWatcher.h"
int _tmain(int argc, _TCHAR* argv[])
{
auto watcher = new NTFSChangesWatcher('z');
watcher->WatchChanges();
return 0;
}
On every change in the filesystem, the console output should show the message printed in WatchChanges: "New entry created in the journal!"
This code was slightly reworked to remove unrelated details and is a part of the Indexer++ project. So for more details, you can refer to the original code.
You can use the journal, but in this case I'd use an easier method: registering a directory notification by calling the FindFirstChangeNotification or ReadDirectoryChangesW functions; see https://msdn.microsoft.com/en-us/library/aa364417.aspx
If you'd prefer to use the journal, this is, I think, the best introductory article, with many examples. It was written for W2K, but the concepts are still valid: https://www.microsoft.com/msj/0999/journal/journal.aspx
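A minimal sketch of the notification approach (the watched path is illustrative); the thread sleeps in WaitForSingleObject until the system signals a change, so there is no polling:
#include <windows.h>
#include <stdio.h>

int main() {
    HANDLE h = FindFirstChangeNotificationW(L"C:\\watched-folder", TRUE,
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE);
    if (h == INVALID_HANDLE_VALUE) return 1;
    for (;;) {
        if (WaitForSingleObject(h, INFINITE) != WAIT_OBJECT_0) break;
        wprintf(L"Change detected\n");
        if (!FindNextChangeNotification(h)) break; // re-arm for the next event
    }
    FindCloseChangeNotification(h);
    return 0;
}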
I have been trying to put together code to save the image from a fingerprint sensor. I have already searched the forums, and this is my current code. It saves a file with the correct size, but when I open the image it is not a fingerprint; it looks like a corrupted image.
My code is given below; any help will be appreciated. I am new to Windows development.
bool SaveBMP(BYTE* Buffer, int width, int height, long paddedsize, LPCTSTR bmpfile)
{
BITMAPFILEHEADER bmfh;
BITMAPINFOHEADER info;
memset(&bmfh, 0, sizeof(BITMAPFILEHEADER));
memset(&info, 0, sizeof(BITMAPINFOHEADER));
//Next we fill the file header with data:
bmfh.bfType = 0x4d42; // 0x4d42 = 'BM'
bmfh.bfReserved1 = 0;
bmfh.bfReserved2 = 0;
bmfh.bfSize = sizeof(BITMAPFILEHEADER) +
sizeof(BITMAPINFOHEADER) + paddedsize;
bmfh.bfOffBits = 0x36;
//and the info header:
info.biSize = sizeof(BITMAPINFOHEADER);
info.biWidth = width;
info.biHeight = height;
info.biPlanes = 1;
info.biBitCount = 8;
info.biCompression = BI_RGB;
info.biSizeImage = 0;
info.biXPelsPerMeter = 0x0ec4;
info.biYPelsPerMeter = 0x0ec4;
info.biClrUsed = 0;
info.biClrImportant = 0;
HANDLE file = CreateFile(bmpfile, GENERIC_WRITE, FILE_SHARE_READ,
NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
//Now we write the file header and info header:
unsigned long bwritten;
if (WriteFile(file, &bmfh, sizeof(BITMAPFILEHEADER),
&bwritten, NULL) == false)
{
CloseHandle(file);
return false;
}
if (WriteFile(file, &info, sizeof(BITMAPINFOHEADER),
&bwritten, NULL) == false)
{
CloseHandle(file);
return false;
}
//and finally the image data:
if (WriteFile(file, Buffer, paddedsize, &bwritten, NULL) == false)
{
CloseHandle(file);
return false;
}
//Now we can close our function with
CloseHandle(file);
return true;
}
HRESULT CaptureSample()
{
HRESULT hr = S_OK;
WINBIO_SESSION_HANDLE sessionHandle = NULL;
WINBIO_UNIT_ID unitId = 0;
WINBIO_REJECT_DETAIL rejectDetail = 0;
PWINBIO_BIR sample = NULL;
SIZE_T sampleSize = 0;
// Connect to the system pool.
hr = WinBioOpenSession(
WINBIO_TYPE_FINGERPRINT, // Service provider
WINBIO_POOL_SYSTEM, // Pool type
WINBIO_FLAG_RAW, // Access: Capture raw data
NULL, // Array of biometric unit IDs
0, // Count of biometric unit IDs
WINBIO_DB_DEFAULT, // Default database
&sessionHandle // [out] Session handle
);
// Capture a biometric sample.
wprintf_s(L"\n Calling WinBioCaptureSample - Swipe sensor...\n");
hr = WinBioCaptureSample(
sessionHandle,
WINBIO_NO_PURPOSE_AVAILABLE,
WINBIO_DATA_FLAG_RAW,
&unitId,
&sample,
&sampleSize,
&rejectDetail
);
wprintf_s(L"\n Swipe processed - Unit ID: %d\n", unitId);
wprintf_s(L"\n Captured %d bytes.\n", sampleSize);
PWINBIO_BIR_HEADER BirHeader = (PWINBIO_BIR_HEADER)(((PBYTE)sample) + sample->HeaderBlock.Offset);
PWINBIO_BDB_ANSI_381_HEADER AnsiBdbHeader = (PWINBIO_BDB_ANSI_381_HEADER)(((PBYTE)sample) + sample->StandardDataBlock.Offset);
PWINBIO_BDB_ANSI_381_RECORD AnsiBdbRecord = (PWINBIO_BDB_ANSI_381_RECORD)(((PBYTE)AnsiBdbHeader) + sizeof(WINBIO_BDB_ANSI_381_HEADER));
PBYTE firstPixel = (PBYTE)((PBYTE)AnsiBdbRecord) + sizeof(WINBIO_BDB_ANSI_381_RECORD);
SaveBMP(firstPixel, AnsiBdbRecord->HorizontalLineLength, AnsiBdbRecord->VerticalLineLength, AnsiBdbRecord->BlockLength, "D://test.bmp");
wprintf_s(L"\n Press any key to exit.");
_getch();
    return hr;
}
IInspectable is correct; the corruption looks like it's coming from your implicit use of color tables:
info.biBitCount = 8;
info.biCompression = BI_RGB;
If your data is actually just 24-bit RGB, you can do info.biBitCount = 24; to render a valid bitmap. If it's lower (or higher) than that, then you'll need to do some conversion work. You can check AnsiBdbHeader->PixelDepth to confirm that it's the 8 bits per pixel that you expect.
It also looks like your passing AnsiBdbRecord->BlockLength to SaveBMP isn't quite right. The docs for this field say:
WINBIO_BDB_ANSI_381_RECORD structure
BlockLength
Contains the number of bytes in this structure plus the number of bytes of sample image data.
So you'll want to make sure to subtract sizeof(WINBIO_BDB_ANSI_381_RECORD) before passing it as your bitmap buffer size.
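In other words, the call would become something like this (sketch):
SaveBMP(firstPixel,
        AnsiBdbRecord->HorizontalLineLength,
        AnsiBdbRecord->VerticalLineLength,
        AnsiBdbRecord->BlockLength - sizeof(WINBIO_BDB_ANSI_381_RECORD),
        "D://test.bmp");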
Side note, make sure you free the memory involved after the capture.
WinBioFree(sample);
WinBioCloseSession(sessionHandle);
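Returning to the color-table point: if the sample really is 8 bits per pixel, an alternative to converting is to keep biBitCount = 8 and write an explicit grayscale color table between the headers and the pixel data. A sketch, assuming one byte per pixel:
RGBQUAD palette[256];
for (int i = 0; i < 256; ++i) {
    // Map palette index i to gray level i.
    palette[i].rgbBlue = palette[i].rgbGreen = palette[i].rgbRed = (BYTE)i;
    palette[i].rgbReserved = 0;
}
// The pixel data now starts after both headers plus the palette:
bmfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + sizeof(palette);
bmfh.bfSize = bmfh.bfOffBits + paddedsize;
// After writing bmfh and info, write the palette, then the pixel rows:
WriteFile(file, palette, sizeof(palette), &bwritten, NULL);
Keep in mind that BMP scanlines are stored bottom-up unless biHeight is negative, and each row must be padded to a multiple of 4 bytes; a raw sensor buffer that violates either can also look skewed or flipped.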
I am toying around with a libwebsockets tutorial trying to make it such that, after it receives a message from a connection over a given protocol, it sends a response to all active connections implementing that protocol. I have used the function libwebsocket_callback_all_protocol but it is not doing what I think it should do from its name (I'm not quite sure what it does from the documentation).
The goal is to have two webpages open and, when info is sent from one, have the result relayed to both. Below is my code; you'll see that libwebsocket_callback_all_protocol is called in main (which currently does nothing, I think):
#include <stdio.h>
#include <stdlib.h>
#include <libwebsockets.h>
#include <string.h>
static int callback_http(struct libwebsocket_context * this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason, void *user,
void *in, size_t len)
{
return 0;
}
static int callback_dumb_increment(struct libwebsocket_context * this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason,
void *user, void *in, size_t len)
{
switch (reason) {
case LWS_CALLBACK_ESTABLISHED: // just log message that someone is connecting
printf("connection established\n");
break;
case LWS_CALLBACK_RECEIVE: { // the funny part
// create a buffer to hold our response
// it has to have some pre and post padding. You don't need to care
// what comes there, libwebsockets will do everything for you. For more info see
// http://git.warmcat.com/cgi-bin/cgit/libwebsockets/tree/lib/libwebsockets.h#n597
unsigned char *buf = (unsigned char*) malloc(LWS_SEND_BUFFER_PRE_PADDING + len +
LWS_SEND_BUFFER_POST_PADDING);
int i;
// pointer to `void *in` holds the incoming request
// we're just going to put it in reverse order and put it in `buf` with
// correct offset. `len` holds length of the request.
for (i=0; i < len; i++) {
buf[LWS_SEND_BUFFER_PRE_PADDING + (len - 1) - i ] = ((char *) in)[i];
}
// log what we received and what we're going to send as a response.
// that disco syntax `%.*s` is used to print just a part of our buffer
// http://stackoverflow.com/questions/5189071/print-part-of-char-array
printf("received data: %s, replying: %.*s\n", (char *) in, (int) len,
buf + LWS_SEND_BUFFER_PRE_PADDING);
// send response
// just notice that we have to tell where exactly our response starts. That's
// why there's `buf[LWS_SEND_BUFFER_PRE_PADDING]` and how long it is.
// we know that our response has the same length as request because
// it's the same message in reverse order.
libwebsocket_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING], len, LWS_WRITE_TEXT);
// release memory back into the wild
free(buf);
break;
}
default:
break;
}
return 0;
}
static struct libwebsocket_protocols protocols[] = {
/* first protocol must always be HTTP handler */
{
"http-only", // name
callback_http, // callback
0, // per_session_data_size
0
},
{
"dumb-increment-protocol", // protocol name - very important!
callback_dumb_increment, // callback
0, // we don't use any per session data
0
},
{
NULL, NULL, 0, 0 /* End of list */
}
};
int main(void) {
// server url will be http://localhost:9000
int port = 9000;
const char *interface = NULL;
struct libwebsocket_context *context;
// we're not using ssl
const char *cert_path = NULL;
const char *key_path = NULL;
// no special options
int opts = 0;
// create libwebsocket context representing this server
struct lws_context_creation_info info;
memset(&info, 0, sizeof info);
info.port = port;
info.iface = interface;
info.protocols = protocols;
info.extensions = libwebsocket_get_internal_extensions();
info.ssl_cert_filepath = cert_path;
info.ssl_private_key_filepath = key_path;
info.gid = -1;
info.uid = -1;
info.options = opts;
info.user = NULL;
info.ka_time = 0;
info.ka_probes = 0;
info.ka_interval = 0;
/*context = libwebsocket_create_context(port, interface, protocols,
libwebsocket_get_internal_extensions,
cert_path, key_path, -1, -1, opts);
*/
context = libwebsocket_create_context(&info);
if (context == NULL) {
fprintf(stderr, "libwebsocket init failed\n");
return -1;
}
libwebsocket_callback_all_protocol(&protocols[1], LWS_CALLBACK_RECEIVE);
printf("starting server...\n");
// infinite loop, to end this server send SIGTERM. (CTRL+C)
while (1) {
libwebsocket_service(context, 50);
// libwebsocket_service will process all waiting events with their
// callback functions and then wait 50 ms.
// (this is a single threaded webserver and this will keep our server
// from generating load while there are not requests to process)
}
libwebsocket_context_destroy(context);
return 0;
}
I had the same problem: libwebsocket_write on LWS_CALLBACK_ESTABLISHED generated some random segfaults. Via the mailing list, libwebsockets developer Andy Green instructed me that the correct way is to use libwebsocket_callback_on_writable_all_protocol; the file test-server/test-server.c in the library source code shows a sample of its use.
libwebsocket_callback_on_writable_all_protocol(libwebsockets_get_protocol(wsi))
It worked very well to notify all instances, but it only calls the write method on all connected instances; it does not define the data to send. You need to manage the data yourself. The sample source file test-server.c shows a sample ring buffer for doing it.
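A hedged sketch of that pattern, using the old API names from the question; shared_buf and shared_len are illustrative shared state (they are not part of libwebsockets), and the cases go in callback_dumb_increment:
case LWS_CALLBACK_RECEIVE:
    /* stash the message in shared_buf/shared_len, then mark every
       connection on this protocol as wanting a write: */
    libwebsocket_callback_on_writable_all_protocol(
        libwebsockets_get_protocol(wsi));
    break;
case LWS_CALLBACK_SERVER_WRITEABLE:
    /* invoked once per connection when that socket is safe to write to */
    libwebsocket_write(wsi, &shared_buf[LWS_SEND_BUFFER_PRE_PADDING],
                       shared_len, LWS_WRITE_TEXT);
    break;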
http://ml.libwebsockets.org/pipermail/libwebsockets/2015-January/001580.html
Hope it helps.
From what I can quickly gather from the documentation, in order to send a message to all clients, what you should do is store somewhere (in a vector, a hashmap, an array, whatever) the struct libwebsocket *wsi pointers that you have access to when your clients connect.
Then when you receive a message and want to broadcast it, simply call libwebsocket_write on all those wsi instances.
That's what I'd do, anyway.
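A sketch of that bookkeeping with a fixed-size array (clients, num_clients, and MAX_CLIENTS are illustrative; per the answer above, it is safer to do the actual libwebsocket_write from LWS_CALLBACK_SERVER_WRITEABLE than directly in LWS_CALLBACK_RECEIVE):
#define MAX_CLIENTS 32
static struct libwebsocket *clients[MAX_CLIENTS];
static int num_clients = 0;

/* inside callback_dumb_increment's switch: */
case LWS_CALLBACK_ESTABLISHED:
    if (num_clients < MAX_CLIENTS)
        clients[num_clients++] = wsi;
    break;
case LWS_CALLBACK_CLOSED:
    for (int i = 0; i < num_clients; i++)
        if (clients[i] == wsi) {
            clients[i] = clients[--num_clients];
            break;
        }
    break;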
I have a program which is supposed to demux an input MPEG-TS, transcode the MPEG-2 into H.264, and then mux the audio alongside the transcoded video. When I open the resulting muxed file with VLC I get neither audio nor video. Here is the relevant code.
My main worker loop is as follows:
void
*writer_thread(void *thread_ctx) {
struct transcoder_ctx_t *ctx = (struct transcoder_ctx_t *) thread_ctx;
AVStream *video_stream = NULL, *audio_stream = NULL;
AVFormatContext *output_context = init_output_context(ctx, &video_stream, &audio_stream);
struct mux_state_t mux_state = {0};
//from omxtx
mux_state.pts_offset = av_rescale_q(ctx->input_context->start_time, AV_TIME_BASE_Q, output_context->streams[ctx->video_stream_index]->time_base);
//write stream header if any
avformat_write_header(output_context, NULL);
//do not start doing anything until we get an encoded packet
pthread_mutex_lock(&ctx->pipeline.video_encode.is_running_mutex);
while (!ctx->pipeline.video_encode.is_running) {
pthread_cond_wait(&ctx->pipeline.video_encode.is_running_cv, &ctx->pipeline.video_encode.is_running_mutex);
}
while (!ctx->pipeline.video_encode.eos || !ctx->processed_audio_queue->queue_finished) {
//FIXME a memory barrier is required here so that we don't race
//on above variables
//fill a buffer with video data
OERR(OMX_FillThisBuffer(ctx->pipeline.video_encode.h, omx_get_next_output_buffer(&ctx->pipeline.video_encode)));
write_audio_frame(output_context, audio_stream, ctx); //write full audio frame
//FIXME no guarantee that we have a full frame per packet?
write_video_frame(output_context, video_stream, ctx, &mux_state); //write full video frame
//encoded_video_queue is being filled by the previous command
}
av_write_trailer(output_context);
//free all the resources
avcodec_close(video_stream->codec);
avcodec_close(audio_stream->codec);
/* Free the streams. */
for (int i = 0; i < output_context->nb_streams; i++) {
av_freep(&output_context->streams[i]->codec);
av_freep(&output_context->streams[i]);
}
if (!(output_context->oformat->flags & AVFMT_NOFILE)) {
/* Close the output file. */
avio_close(output_context->pb);
}
/* free the stream */
av_free(output_context);
free(mux_state.pps);
free(mux_state.sps);
}
The code for initialising libav output context is this:
static
AVFormatContext *
init_output_context(const struct transcoder_ctx_t *ctx, AVStream **video_stream, AVStream **audio_stream) {
AVFormatContext *oc;
AVOutputFormat *fmt;
AVStream *input_stream, *output_stream;
AVCodec *c;
AVCodecContext *cc;
int audio_copied = 0; //copy just 1 stream
fmt = av_guess_format("mpegts", NULL, NULL);
if (!fmt) {
fprintf(stderr, "[DEBUG] Error guessing format, dying\n");
exit(199);
}
oc = avformat_alloc_context();
if (!oc) {
fprintf(stderr, "[DEBUG] Error allocating context, dying\n");
exit(200);
}
oc->oformat = fmt;
snprintf(oc->filename, sizeof(oc->filename), "%s", ctx->output_filename);
oc->debug = 1;
oc->start_time_realtime = ctx->input_context->start_time;
oc->start_time = ctx->input_context->start_time;
oc->duration = 0;
oc->bit_rate = 0;
for (int i = 0; i < ctx->input_context->nb_streams; i++) {
input_stream = ctx->input_context->streams[i];
output_stream = NULL;
if (input_stream->index == ctx->video_stream_index) {
//copy stuff from input video index
c = avcodec_find_encoder(CODEC_ID_H264);
output_stream = avformat_new_stream(oc, c);
*video_stream = output_stream;
cc = output_stream->codec;
cc->width = input_stream->codec->width;
cc->height = input_stream->codec->height;
cc->codec_id = CODEC_ID_H264;
cc->codec_type = AVMEDIA_TYPE_VIDEO;
cc->bit_rate = ENCODED_BITRATE;
cc->time_base = input_stream->codec->time_base;
output_stream->avg_frame_rate = input_stream->avg_frame_rate;
output_stream->r_frame_rate = input_stream->r_frame_rate;
output_stream->start_time = AV_NOPTS_VALUE;
} else if ((input_stream->codec->codec_type == AVMEDIA_TYPE_AUDIO) && !audio_copied) {
/* i care only about audio */
c = avcodec_find_encoder(input_stream->codec->codec_id);
output_stream = avformat_new_stream(oc, c);
*audio_stream = output_stream;
avcodec_copy_context(output_stream->codec, input_stream->codec);
/* Apparently fixes a crash on .mkvs with attachments: */
av_dict_copy(&output_stream->metadata, input_stream->metadata, 0);
/* Reset the codec tag so as not to cause problems with output format */
output_stream->codec->codec_tag = 0;
audio_copied = 1;
}
}
for (int i = 0; i < oc->nb_streams; i++) {
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
oc->streams[i]->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
if (oc->streams[i]->codec->sample_rate == 0)
oc->streams[i]->codec->sample_rate = 48000; /* ish */
}
if (!(fmt->flags & AVFMT_NOFILE)) {
fprintf(stderr, "[DEBUG] AVFMT_NOFILE set, allocating output container\n");
if (avio_open(&oc->pb, ctx->output_filename, AVIO_FLAG_WRITE) < 0) {
fprintf(stderr, "[DEBUG] error creating the output context\n");
exit(1);
}
}
return oc;
}
Finally this is the code for writing audio:
static
void
write_audio_frame(AVFormatContext *oc, AVStream *st, struct transcoder_ctx_t *ctx) {
AVPacket pkt = {0}; // data and size must be 0;
struct packet_t *source_audio;
av_init_packet(&pkt);
if (!(source_audio = packet_queue_get_next_item_asynch(ctx->processed_audio_queue))) {
return;
}
pkt.stream_index = st->index;
pkt.size = source_audio->data_length;
pkt.data = source_audio->data;
pkt.pts = source_audio->PTS;
pkt.dts = source_audio->DTS;
pkt.duration = source_audio->duration;
pkt.destruct = avpacket_destruct;
/* Write the compressed frame to the media file. */
if (av_interleaved_write_frame(oc, &pkt) != 0) {
fprintf(stderr, "[DEBUG] Error while writing audio frame\n");
}
packet_queue_free_packet(source_audio, 0);
}
A resulting mpeg4 file can be obtained from here: http://87.120.131.41/dl/mpeg4.h264
I have omitted the write_video_frame code since it is a lot more complicated and I might be doing something wrong there, as I'm doing timebase conversion etc. For audio, however, I'm doing a 1:1 copy. Each packet_t packet contains data from av_read_frame from the input MPEG-TS container. In the worst case I'd expect my audio to work and not my video; however, I cannot get either of them to work. The documentation seems rather vague on making things like this work; I've tried both the libav and ffmpeg IRC channels to no avail. Any information regarding how I can debug the issue will be greatly appreciated.
When different containers yield different results in libav it is almost always a timebase issue. All containers have a time_base that they like, and some will accept custom values... sometimes.
You must rescale each packet's timestamps to the output container's time_base before writing it. Generally, tinkering with the mux state struct isn't something you want to do, and I think what you did there doesn't do what you think it does. Try printing out all of the time bases to find out what they are.
For each frame you must recalculate at least the PTS. If you do it before you call the encoder, the encoder will produce the proper DTS. Do the same for the audio, but generally set the DTS to AV_NOPTS_VALUE; sometimes you can get away with setting the audio PTS to that as well. To rescale easily, use the av_rescale_q(...) family of functions.
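For instance, for the 1:1 audio copy in write_audio_frame, the rescale before muxing would look something like this (a sketch; input_stream stands for the source audio stream, which is not currently in scope in that function, and it assumes source_audio timestamps are in the input stream's time_base):
pkt.pts = av_rescale_q(source_audio->PTS, input_stream->time_base, st->time_base);
pkt.dts = av_rescale_q(source_audio->DTS, input_stream->time_base, st->time_base);
pkt.duration = av_rescale_q(source_audio->duration, input_stream->time_base, st->time_base);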
Be careful about assuming that you have MPEG-2 data in an MPEG-TS container; that is not always true.
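A quick sanity check for that, in the style of the question's init_output_context (a sketch, using the old CODEC_ID_* names the code already uses):
if (input_stream->codec->codec_id != CODEC_ID_MPEG2VIDEO)
    fprintf(stderr, "[DEBUG] video stream is not MPEG-2 (codec id %d)\n",
            input_stream->codec->codec_id);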