Please see the code below:
double* data = new double[100];
boost::shared_ptr<Eigen::VectorXd> rfstdevs = boost::make_shared<Eigen::VectorXd>(
Eigen::Map<Eigen::RowVectorXd>(data, 1, 100));
My understanding is that Eigen would take the buffer directly and use it, so should I manually free the data buffer, or will the newly created VectorXd do it for me?
Thank you...
This performs a deep copy of the data, so you need to free data yourself. If you don't want a deep copy, then use the Map object directly:
Map<RowVectorXd> rfstdevs(data,1,100);
You will still need to delete[] data yourself; Map won't do it, as it does not know where the memory came from.
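If you want the buffer's ownership to be explicit rather than relying on a bare delete[], here is a small sketch (assuming C++11) that keeps the no-copy Map and lets a smart pointer free the buffer; the only requirement is that the buffer outlive the Map:

#include <memory>
#include <Eigen/Dense>

void example()
{
    // unique_ptr owns the raw buffer; the Map is only a non-owning view of it.
    std::unique_ptr<double[]> data(new double[100]);
    Eigen::Map<Eigen::RowVectorXd> rfstdevs(data.get(), 1, 100);  // no copy is made
    rfstdevs.setZero();  // writes go straight into the buffer

    // delete[] runs automatically when data goes out of scope,
    // so the Map must not be used after that point.
}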
RocksDBStore<K,V> stores keys and values as byte[] on disk. It converts to/from K and V typed objects using the Serdes provided when constructing the RocksDBStore<K,V> object.
Given this, please help me understand the purpose of the following code in RocksDbKeyValueBytesStoreSupplier:
return new RocksDBStore<>(name,
Serdes.Bytes(),
Serdes.ByteArray());
Providing Serdes.Bytes() and Serdes.ByteArray() looks redundant.
RocksDbKeyValueBytesStoreSupplier was introduced in KAFKA-5650 (Kafka Streams 1.0.0) as part of KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage engines.
In KIP-182, there is the following passage:
The new Interface BytesStoreSupplier supersedes the existing StateStoreSupplier (which will remain untouched). This so we can provide a convenient way for users creating custom state stores to wrap them with caching/logging etc if they chose. In order to do this we need to force the inner most store, i.e, the custom store, to be a store of type <Bytes, byte[]>.
Please help me understand why we need to force custom stores to be of type <Bytes, byte[]>?
Another place (KAFKA-5749) where I found a similar sentence:
In order to support bytes store we need to create a MeteredSessionStore and ChangeloggingSessionStore. We then need to refactor the current SessionStore implementations to use this. All inner stores should by of type < Bytes, byte[] >
Why?
Your observation is correct -- the PR implementing KIP-182 missed removing the Serdes from RocksDBStore that are no longer required. This was already fixed in the 1.1 release.
I want to collect latency information for each struct bio that passes through the block layer. I have a module that overrides make_request_fn. I want to find out how long that bio takes to get from there to the request queue, from there to the driver, and so on.
I tried to attach a custom struct to the bio I receive at make_request_fn, but since I did not create those bios, I can't use the bi_private field. Is there any way to work around this?
One option I have is to make a bio wrapper structure and copy the bio structs into it before passing them to the lower functions, so that I can use container_of to record times.
I have read about tools like blktrace and btt but I need that information inside my module. Is there any way to achieve this?
Thank you.
The solution I used seems to be a common workaround; I found something similar in the source of the drbd block driver. The bi_private field can only be used by the function that allocated the bio, so I used bio_clone in the following way:
struct bio *bio_copy = bio_clone(bio_source, GFP_NOIO);
struct something *instance = kmalloc(sizeof(struct something), GFP_KERNEL);
instance->bio_original = bio_source;
//update timestamps for latency inside this struct instance
bio_copy->bi_private = instance;
bio_copy->bi_end_io = my_end_io_function;
bio_copy->bi_bdev = bio_source->bi_bdev;
...
...
//hand the clone down to the lower-level make_request function
make_request_fn(queue, bio_copy);
You'll have to write a bi_end_io function for the clone. Remember to call bio_endio on the original bio inside this function. You might also need to copy the clone's bi_error field into bio_source->bi_error before calling bio_endio(bio_source); a rough sketch follows.
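A minimal sketch of such a completion handler, assuming a kernel from the bi_error era (roughly 4.3 to 4.12); the bi_end_io/bio_endio signatures and field names differ on other kernel versions, and struct something is the placeholder name from the snippet above:

#include <linux/bio.h>
#include <linux/slab.h>

struct something {
        struct bio *bio_original;
        /* timestamps for latency measurements go here */
};

static void my_end_io_function(struct bio *bio_copy)
{
        struct something *instance = bio_copy->bi_private;
        struct bio *bio_source = instance->bio_original;

        /* record the completion timestamp in instance here */

        /* propagate the clone's result to the original bio and complete it */
        bio_source->bi_error = bio_copy->bi_error;
        bio_endio(bio_source);

        bio_put(bio_copy);      /* drop the clone created by bio_clone() */
        kfree(instance);
}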
Hope this helps someone.
My understanding of this was that perhaps CGPDFContext is to be used for editing PDF document data and CGPDFDocument is used for storing it, since the documentation doesn't list any ways to alter the content of a CGPDFDocument.
I'm also not quite sure what CGDataConsumer/Provider does. From reading the documentation I got the impression that the consumer/provider abstracts the relationship between the CG object and the CFData it writes to, so I don't have to do that myself. So I figured the following code would create a two-page blank PDF document:
//Don't know exactly how large a PDF is so I gave it 1 MB for now
self->pdfData = CFDataCreateMutable(kCFAllocatorDefault, 1024);
self->consumerRef = CGDataConsumerCreateWithCFData(self->pdfData);
self.pdfRef = CGPDFContextCreate(self->consumerRef, NULL, NULL);
CGPDFContextBeginPage(self.pdfRef, NULL); //Creates a blank page?
CGPDFContextEndPage(self.pdfRef);
CGPDFContextBeginPage(self.pdfRef, NULL); //Creates a second blank page?
CGPDFContextEndPage(self.pdfRef);
//Copies the data from pdfRef's consumer into docRef's provider?
self.docRef = CGPDFDocumentCreateWithProvider(
CGDataProviderCreateWithCFData(
CFDataCreateCopy(kCFAllocatorDefault, self->pdfData)
));
It didn't work though, and NSLogging the first two pages of docRef returns NULL. I'm rather new at this, the C-Layer stuff in particular. Can someone explain to me the relationship between CGPDFContext, CGPDFDocument, CGDataConsumer & CGDataProvider and how I'd use them to create a blank PDF?
Your basic understanding is correct as far as I can see:
A CGPDFContext is a drawing context that "translates" everything that is drawn onto it to PDF instructions (typically for storage in a PDF file).
A CGPDFDocument is used to open an existing PDF file and get information from it.
When you want to create your own PDF file, you have two ways to do it as described here: https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CGPDFContext/Reference/reference.html
Use "CGPDFContextCreate" which you pass a data consumer. The data consumer gets the data and can do with it as it pleases (you could create a data consumer that passes the PDF onto the clipboard for example).
Use "CGPDFContextCreateWithURL" which you pass a URL. In that case your data will be written to a PDF file at that URL.
If you want to use these functions, have a look at this page https://developer.apple.com/library/mac/documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_pdf/dq_pdf.html#//apple_ref/doc/uid/TP30001066-CH214-TPXREF101 which explains in detail how to create PDF files both with a data consumer and without one (writing directly to a PDF file).
To figure out what is happening, I would start by writing a simple PDF file to disk before writing one into memory via a data consumer and then immediately reading it back through a data provider. Without trying your code, however, let me point out that you never call "CGPDFContextClose", which is described in the documentation as closing the PDF document and flushing all information to the output. You could actually have a situation where data is cached and not yet written to your data consumer, simply because you haven't forced it out; see the sketch below.
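For illustration, a minimal sketch of the data-consumer route, essentially your code plus the missing CGPDFContextClose and the cleanup calls (CreateTwoPageBlankPDF is just a made-up helper name):

#include <CoreGraphics/CoreGraphics.h>

static CGPDFDocumentRef CreateTwoPageBlankPDF(void)
{
    // Build the PDF in memory through a data consumer.
    CFMutableDataRef pdfData = CFDataCreateMutable(kCFAllocatorDefault, 0); // 0 = grow as needed
    CGDataConsumerRef consumer = CGDataConsumerCreateWithCFData(pdfData);
    CGContextRef pdfContext = CGPDFContextCreate(consumer, NULL, NULL);     // NULL = default media box

    CGPDFContextBeginPage(pdfContext, NULL);   // first blank page
    CGPDFContextEndPage(pdfContext);
    CGPDFContextBeginPage(pdfContext, NULL);   // second blank page
    CGPDFContextEndPage(pdfContext);

    CGPDFContextClose(pdfContext);             // flush everything to the data consumer
    CGContextRelease(pdfContext);
    CGDataConsumerRelease(consumer);

    // The CFData now contains a complete PDF; read it back as a document.
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(pdfData);
    CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(provider);
    CGDataProviderRelease(provider);
    CFRelease(pdfData);

    return document;   // CGPDFDocumentGetNumberOfPages(document) should report 2
}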
I am generating images on the fly from DICOM files using:
public ActionResult GenerateImage()
{
FileContentResult data;
.....
objImage = im.Bitmap(outputSize, PixelFormat.Format24bppRgb, m);
using (var memStream = new MemoryStream())
{
objImage.Save(memStream, ImageFormat.Png);
data = this.File(memStream.GetBuffer(), "image/png");
}
return data;
}
Can I store the image as a session variable so I can modify it using Point3D?
I tried to use:
Bitmap data = (Bitmap)Session["newimage"];
Got these two errors:
Cannot implicitly convert type 'System.Drawing.Bitmap' to 'System.Web.Mvc.FileContentResult'
A local variable named 'data' is already defined in this scope
I would appreciate your suggestions, thanks in advance.
Can I store the image as a session variable so I can modify it using Point3D?
I suggest not doing that. If you have not read Nathanael's post on image resizing pitfalls, then I suggest you do so now. It may be talking about resizing, but it also gives hints on working with images in general. On point #3 it says:
Serving a file from disk by loading it into memory. Think about how much RAM your server has, how large a single image is, how long it has to stay in memory before users finish downloading it, and how many users you have requesting images.
In your particular case you can replace "before users finish downloading it" with "before Point3D finishes processing the image". So, what I suggest is that you keep a handle to that file -- say there's an Id that uniquely identifies a file per user -- and use that Id to retrieve the file when it's time to process it with Point3D, load it into a MemoryStream (assuming Point3D can work with a memory stream), process it, and then dispose of it. That way you are only holding on to the image for the duration of the Point3D processing.
Cannot implicitly convert type 'System.Drawing.Bitmap' to 'System.Web.Mvc.FileContentResult'
A local variable named 'data' is already defined in this scope
That is most probably because you have defined data as such:
FileContentResult data;
and then you are doing a:
Bitmap data = (Bitmap)Session["newimage"];
That is, the same variable name is declared with two different types within the same scope; give the Bitmap variable a different name.
Some functions, like ExtAudioFileOpenURL, only accept a URL as the path to a file. This is fine, but what if your file lives inside a container or a memory buffer -- is it still possible to create a URL that points to it?
e.g.
char * w = read_sample_bytes(...);
CFURLRef url = CFURLCreateForBuffer(..., w, ...);
ExtAudioFileOpenURL(url, &extAudioFile);
etc..
or will I have to extract the data to a temporary file and create a URL to that?
Presumably, you would first create an AudioFileID with AudioFileInitializeWithCallbacks, then wrap the result using ExtAudioFileWrapAudioFileID for the ExtAudioFile APIs you will need. No CF/NS URL is required to create or read files in memory using this approach; a rough sketch follows.
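For illustration, a rough sketch of this approach, assuming the in-memory bytes form a complete audio file (e.g. WAV or CAF) that only needs to be read; since the data already exists it uses AudioFileOpenWithCallbacks rather than AudioFileInitializeWithCallbacks, and MemoryBuffer plus the callback names are made up:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical wrapper describing the in-memory audio file.
typedef struct {
    const char *bytes;
    SInt64      length;
} MemoryBuffer;

// Read callback: copy the requested range out of the buffer.
static OSStatus MemoryRead(void *inClientData, SInt64 inPosition,
                           UInt32 requestCount, void *buffer, UInt32 *actualCount)
{
    MemoryBuffer *mem = (MemoryBuffer *)inClientData;
    if (inPosition >= mem->length) {
        *actualCount = 0;                    // past the end: nothing to read
        return noErr;
    }
    SInt64 available = mem->length - inPosition;
    *actualCount = (UInt32)(requestCount < available ? requestCount : available);
    memcpy(buffer, mem->bytes + inPosition, *actualCount);
    return noErr;
}

// Size callback: report the total size of the in-memory file.
static SInt64 MemoryGetSize(void *inClientData)
{
    return ((MemoryBuffer *)inClientData)->length;
}

// Usage (read-only, so the write and set-size callbacks can be NULL):
//   MemoryBuffer mem = { w, numberOfSampleBytes };
//   AudioFileID fileID = NULL;
//   OSStatus err = AudioFileOpenWithCallbacks(&mem, MemoryRead, NULL,
//                                             MemoryGetSize, NULL,
//                                             0 /* no file-type hint */, &fileID);
//   ExtAudioFileRef extFile = NULL;
//   if (err == noErr)
//       err = ExtAudioFileWrapAudioFileID(fileID, false, &extFile);
//   // ... use extFile with the ExtAudioFile APIs, then
//   // ExtAudioFileDispose(extFile); AudioFileClose(fileID);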
You can't create a URL to a region of memory.
For your specific purpose, you'll have to either do what Justin suggested or use Audio Queue Services.