What are the specifications required from D3D12Resource to tensorize it?

I am trying to tensorize a D3D12Resource using the ITensorStaticsNative.CreateFromD3D12Resource method; however, with the current D3D12Resource that I have, I am running into an invalid-input exception. So I am wondering: what are the requirements for the D3D12Resource? Can it have D3D12_RESOURCE_DIMENSION_TEXTURE2D as its dimension? Do I need to drop the alpha channel?
The examples that I found assume that the image gets loaded to the CPU first, where the alpha channel gets dropped and the data is cast to float, and then a D3D12Resource gets created with the D3D12_RESOURCE_DIMENSION_BUFFER dimension.
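For reference, this is roughly the buffer-based pattern those examples follow, as far as I can tell (a minimal sketch in C++/WinRT; the helper names, the pixel unpacking, and the GPU upload of the float data are my own assumptions and are elided):

#include <d3d12.h>
#include <winrt/Windows.AI.MachineLearning.h>
#include <windows.ai.machinelearning.native.h>   // ITensorStaticsNative

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

// Create a default-heap buffer big enough for a 1 x 3 x H x W float32 tensor
// (alpha already dropped on the CPU); uploading the float data into it is not shown.
com_ptr<ID3D12Resource> MakeTensorBuffer(ID3D12Device* device, UINT64 byteSize)
{
    D3D12_HEAP_PROPERTIES heap{};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc{};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;    // the examples use a buffer, not TEXTURE2D
    desc.Width = byteSize;                               // H * W * 3 * sizeof(float)
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;

    com_ptr<ID3D12Resource> buffer;
    check_hresult(device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(buffer.put())));
    return buffer;
}

// Wrap the buffer as a TensorFloat via ITensorStaticsNative.
TensorFloat TensorizeBuffer(ID3D12Resource* buffer, int64_t height, int64_t width)
{
    int64_t shape[4] = { 1, 3, height, width };          // NCHW, alpha channel dropped
    com_ptr<ITensorStaticsNative> factory =
        get_activation_factory<TensorFloat, ITensorStaticsNative>();
    com_ptr<::IUnknown> unknown;
    check_hresult(factory->CreateFromD3D12Resource(buffer, shape, 4, unknown.put()));
    TensorFloat tensor{ nullptr };
    unknown.as(tensor);
    return tensor;
}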
So does the CreateFromD3D12Resource method only expect D3D12Resources that have D3D12_RESOURCE_DIMENSION_BUFFER? Or can I provide a D3D12Resource with D3D12_RESOURCE_DIMENSION_TEXTURE2D to be tensorized, and what would that look like?
Thank you.

Related

How can I get the *original* data behind an NSImage?

I have an instance of NSImage that's been handed to me by an API whose implementation I don't control.
I would like to obtain the original data (NSData) from which that NSImage was created, without the data being converted to another representation/format (or otherwise "molested"). If the image was created from a file, I want the exact, byte-for-byte contents of the file, including all metadata, etc. If the image was created from some arbitrary NSData instance I want an exact, byte-for-byte-equivalent copy of that NSData.
To be pedantic (since this is the troublesome case I've come across), if the NSImage was created from an animated GIF, I need to get back an NSData that actually contains the original animated GIF, unmolested.
EDIT: I realize that this may not be strictly possible for all NSImages all the time; how about for the subset of images that were definitely created from files and/or data?
I have yet to figure out a way to do this. Anyone have any ideas?
I agree with Ken, and having a subset of conditions (I know it's a GIF read from a file) doesn't change anything. By the time you have an NSImage, a lot of things have already happened to the data. Cocoa doesn't like to hold a bunch of data in memory that it doesn't directly need. If you had an original CGImage (not one generated out of the NSImage), you might get really lucky and find the data you wanted in CGDataProviderCopyData, but even if it happened to work, there are no promises about it.
But thinking through how you might, if you happened to get incredibly lucky, try to make it work:
1. Get the list of representations with -imageRepresentations.
2. Find the one that matches the original (hopefully there's just the one).
3. Get a CGImage from it with -CGImageForProposedRect:context:hints:. You probably want a rect that matches the size of the image, and I'd probably pass a hint of no interpolation.
4. Get the data provider with CGImageGetDataProvider.
5. Copy its data with CGDataProviderCopyData. (But I doubt this will be the actual original data, including metadata, byte for byte.)
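For the last two steps, a rough C-level sketch (it assumes you already have the CGImageRef from step 3; the function name is mine):

#include <CoreGraphics/CoreGraphics.h>

// Assumes cgImage was obtained via -CGImageForProposedRect:context:hints: in step 3.
// Note this gives you the provider's decoded data, not necessarily the original file bytes.
CFDataRef CopyBackingData(CGImageRef cgImage)
{
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);  // step 4: borrowed reference, do not release
    return CGDataProviderCopyData(provider);                       // step 5: caller must CFRelease the result
}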
There are callbacks that will get you a direct byte-pointer into the internal data of a CGDataProvider (like CGDataProviderGetBytePointerCallback), but I don't know of any way to request the list of callbacks from an existing CGDataProvider. That's typically something Quartz accesses, and that we just pass in during creation.
I strongly suspect this is impossible.
This is not possible.
For one thing, not all images are backed by data. Some may be procedural. For example, an image created using +imageWithSize:flipped:drawingHandler: takes a block which draws the image.
But, in any case, even CGImage converts the data on import, and that's about as low-level as the Mac frameworks get.

omnet simulation of token bucket

I am developing a simulation model in OMNeT++. Basically my work is to develop something related to LTE, but first I need to develop a simple model which takes packets from a source, stores them in a queue for some time, and delivers them to a sink.
I have developed this model and it's working fine for me.
Now I need to place a token bucket meter between the queue and the sink, to handle bursts and send packets rejected by the meter back to the queue, something like the second attached image. I have taken this TokenBucketMeter from the SimuLTE package for OMNeT++.
When I simulate this, it shows an error like:
cannot cast (queueing::Job *)net.tokenBucketMeter.job-1 to type 'cPacket *'
I am not getting where exactly the problem is; maybe the source I am using is creating Jobs, and the token bucket meter accepts only packets. If so, what type of source should I use?
Will you please clarify this? I will be very thankful.
I am using OMNeT++ in a project at the moment too. Learning to use OMNeT++ having only touched some C99 before can be a bit frustrating.
From checking the demo projects you are using as a base for your project, it looks like Job and cPacket do not share any useful types other than cObject, so I would not try to cast like this.
Have a look at how PassiveQueue.cc in the queueinglib project handles Jobs. Everything is passed around as a cMessage and cast using the built-in check_and_cast:
// msg comes in from the handleMessage(cMessage *msg) method signature
Job *job = check_and_cast<Job *>(msg);
cPacket, which you want to use, is a child of cMessage in the inheritance hierarchy shown at this link:
http://www.omnetpp.org/doc/omnetpp/api/index.html
I am not using cPackets myself, but it seems likely, given how protocols work, that you would be able to translate a message into one or more packets.
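If the Job-based source turns out to be the problem, a minimal sketch of a source module that emits cPacket objects instead (the module name, gate name, and parameter are assumptions, and it assumes a recent OMNeT++ where everything lives in the omnetpp namespace) might look like:

#include <omnetpp.h>
using namespace omnetpp;

// Hypothetical source that emits cPacket objects, so a downstream module doing
// check_and_cast<cPacket *>(msg) will accept them.
class PacketSource : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        scheduleAt(simTime() + par("interArrivalTime").doubleValue(), new cMessage("sendTimer"));
    }
    virtual void handleMessage(cMessage *msg) override {
        cPacket *pkt = new cPacket("packet");
        pkt->setByteLength(512);                 // arbitrary example size
        send(pkt, "out");                        // assumes an output gate named "out" in the NED file
        scheduleAt(simTime() + par("interArrivalTime").doubleValue(), msg);   // reuse the self-message
    }
};

Define_Module(PacketSource);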

Decoding a picture from a gps tracker

I'm developing a server for a GPS tracker that can send pictures taken by a camera connected to it inside a vehicle.
The problem is that I follow every step in the manual and I still can't decode the bytes sent by the tracker into a picture.
I receive the picture in packages, each one delimited by a header and a "tail". When I receive the bytes I convert them into hexadecimal as the manual specifies; then I have to remove the headers and "tails", and apparently, after joining the remaining data and saving it as a .jpeg, the image should appear, but it doesn't.
The company's name is "Toplovo", from China. Has anyone else solved something similar?
Are the line feeds part of your actual data? Because if so I doubt that's supposed to happen.
Otherwise, make sure you're writing the file in binary mode; in some languages this matters. You didn't really specify, but make sure you're not in text mode. Also make sure you're not using any data types unsuited for hexadecimal values (again, we don't even know what language you're using, so it's kind of hard to give specific suggestions).
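For instance, in C++ the reassembly step might look roughly like this (a sketch only; the framing, the header/tail stripping, and the function names are assumptions based on your description):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Convert a hex string (two characters per byte) back into raw bytes.
std::vector<uint8_t> hexToBytes(const std::string &hex)
{
    std::vector<uint8_t> bytes;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
        bytes.push_back(static_cast<uint8_t>(std::stoi(hex.substr(i, 2), nullptr, 16)));
    return bytes;
}

// Append one package's payload (header and "tail" already removed) to the output file,
// opened in binary mode so nothing gets translated on the way to disk.
void appendJpegChunk(const std::string &payloadHex, const std::string &path)
{
    std::vector<uint8_t> bytes = hexToBytes(payloadHex);
    std::ofstream out(path, std::ios::binary | std::ios::app);
    out.write(reinterpret_cast<const char *>(bytes.data()), static_cast<std::streamsize>(bytes.size()));
}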

write only stream

I'm using the joliver/EventStore library and trying to find a way to get a stream without reading any events from it.
The reason is that I just want to write some events into that store for a specific stream, without loading all 10k messages from it.
The way you're expected to use the store is that you always do a GetById first. Even if you new up an Aggregate and Save it, you'll see in the CommonDomain EventStoreRepository that it will first correlate it with the existing data.
The key reason why a read is needed first is that the infrastructure needs to work out how many events have gone before to compute the new commit sequence number.
Regarding the example threshold you cite as the reason for wanting to optimize this away: if you're really going to have that level of events, you'll already be into snapshotting territory, as you'll need an appropriately efficient way of doing things other than blind writes too.
Even if you're not intending to lean on snapshotting, half the benefit of using EventStore is that the facility is built in for when you need it.

Core Data / NSTextView breaks only after save

We have an NSTextView and some data saved about its contents in a Core Data managed object context. Everything works great while the managed object context stays in memory. However, when we save it, we get very weird fetch request behavior.
For example, we run a fetch request that asks for all elements with a textLocation less than or equal to 15. The first object in the array we get back has a textLocation of 16.
I know I can't get a definitive answer here, as the code is fairly complex. But does anyone know what this issue smells of?
My thought is that we are somehow not getting the proper MOC synced with the NSTextView after saving. What could change that breaks this?
Thanks.
For example, we run a fetch request that asks for all elements with a textLocation less than or equal to 15. The first object in the array we get back has a textLocation of 16.
Really, the only way to get that is to (in reverse order of likelihood):
1. Mess up the definition of the attribute such that you think you are saving one type of numerical info but you are actually saving another.
2. You've mangled the predicate so that it actually looks for values of 16 or greater. (You can test predicates against an array of dictionaries whose keys have the same names as your Core Data entities.)
3. It's an error in the conversion between a number and a string for purposes of displaying in the UI or logging.
I would start with (3) myself because it seems the most common, and until you confirm you don't have a display problem, you can't diagnose the other problems.
I finally managed to work out what was going on. I was setting textLocation using the setPrimitiveValue... just because I didn't want notifications to fire off. Turns out that's a really bad idea, because Core Data didn't know the value had changed. It still thought the value was 15 instead of 16.
Let this be a lesson: never bypass KVO unless you're INSIDE the managed object and you know what you're doing!
