Transformers for physical isolation

My current project doesn't need electrical isolation so much as it needs mechanical isolation. (A continuously rotating part needs to be powered without twisting wires.) One way I thought to accomplish this was something similar to how a brushed motor works, but I don't think I can manufacture anything like that with the tools available to me, which got me thinking about transformers.
As I understand it, the strongest part of the magnetic field in an inductor is inside the coil, so I reasoned that a transformer with the primary coil wrapped around the secondary coil would provide good performance. I was hoping to put something together such that the secondary (inside) coil could rotate freely. Unfortunately, winding my own is not yielding promising results. What's worse, I cannot find anything on the market, nor any documentation of transformers designed this way, which leads me to believe that I've overlooked something. Can anyone shed some light on this?

Related

Looking for a reasonable way to have several shapes as usable buttons in processing

I recently (5 weeks ago) started my first year of a school-based apprenticeship to become an IT assistant.
We're learning programming and are starting with very basic Processing exercises, while the ultimate plan is to move on to C#.
Now, I understand that Processing might not be the best language for my little project, but I'd still like to work it out somehow.
What I want to build is a "Stargate Dial Computer". If you know the TV show, you'll know what I'm talking about.
I wanted to make it visually appealing, so I used one of the available shape tools to draw a DHD (a term from the show) for the dialing process - see picture: https://i.imgur.com/r7jBjRG.png
This small shape setup is already over 500 lines of code, which seems unwise in itself. Beyond that, the plan is for every single one of these trapezoids to be a pushable button - but to achieve that manually I'd have to check the mouse position against each shape's coordinates to use them as buttons.
What I'm asking for now is any input on how to work with these shapes in a logical way that makes my idea possible at all.
Something like checking the shape's color under the mouse instead of checking the mouse against each shape's coordinates some 40 times, and getting the "active" shape's size from some kind of function. Or a way to walk through every shape one by one in a loop, checking each beginShape/endShape pair - if that wouldn't be a performance nightmare.
Keep in mind that I'm a beginner. I do know the basics, of other languages too, and I can apply some programming logic here and there - but since I'm not yet sure what Processing can and can't do, I'm looking for an answer to whether this is even reasonable or possible.
Any help and ideas would be much appreciated!
Thanks!
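A quick sketch of the color-detection idea mentioned in the question, in Processing (Java mode): draw every trapezoid a second time into an off-screen PGraphics buffer, giving each button a unique flat color, then read the pixel under the mouse to find the active shape. The geometry in drawButtonShape() and the button count are placeholders, so swap in the real DHD coordinates.

PGraphics picker;            // hidden buffer: one flat color per button
int numButtons = 3;          // placeholder; the real DHD has many more keys

void setup() {
  size(600, 400);
  picker = createGraphics(width, height);
  picker.noSmooth();         // avoid anti-aliased edge pixels producing in-between colors
}

void draw() {
  // Visible pass: draw the buttons however you like.
  background(30);
  fill(150);
  stroke(255);
  for (int i = 0; i < numButtons; i++) {
    drawButtonShape(g, i);
  }

  // Picking pass: same geometry, but each button is filled with color (index + 1) in the red channel.
  picker.beginDraw();
  picker.background(0);      // red value 0 means "no button"
  picker.noStroke();
  for (int i = 0; i < numButtons; i++) {
    picker.fill(color(i + 1, 0, 0));
    drawButtonShape(picker, i);
  }
  picker.endDraw();
}

void mousePressed() {
  int id = (int) red(picker.get(mouseX, mouseY)) - 1;   // -1 means the click missed every button
  if (id >= 0) {
    println("button " + id + " pressed");
  }
}

// Placeholder geometry: replace with the real trapezoid coordinates.
void drawButtonShape(PGraphics pg, int i) {
  pg.beginShape();
  pg.vertex(60 + i * 160, 300);
  pg.vertex(180 + i * 160, 300);
  pg.vertex(160 + i * 160, 360);
  pg.vertex(80 + i * 160, 360);
  pg.endShape(CLOSE);
}

This keeps the hit-testing independent of how complicated the trapezoids are: however many vertices a button has, finding the active one is a single pixel lookup.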

Leap Motion point cloud

How can we access the point cloud in the Leap Motion API? One feature that led me to purchase it was the point cloud demo from their promo video, but I can't seem to locate documentation regarding it and user replies on the forums seem mixed. Am I just missing something?
I'm looking to use the Leap Motion as a sort of cheap 3D scanner.
That demo was clearly a mock-up that simulated a 3-D model of the human hand, not actual point cloud data. You can tell because points were displayed that could not possibly have been read by the sensor, due to obstruction.
orion78fr points to one forum post on this, but the transcript of an interview with the founders provides more information straight from the source:
Can you please allow access to cloud points in SDK?
David: So I think sometimes people have a misperception as to really
how things work in our hardware. It’s very different from other things
like the Kinect, and in normal device operation we have very different
priorities than most other technologies. Our priority is precision,
small movements, very low latency, very low CPU usage - so in order to
do that we will often be making sacrifices that make what the device
is doing completely not applicable to what I think you’re getting at,
which is 3D scanning.
What we’re working on are sort of alternative device modes that will
let you use it for those sorts of purposes, but that’s not what it was
originally built for. You know, it’s our goal to let it be able to do
those things, and the hardware can do many things. But our
priority right now is of course human computer interaction, which we
think is really the missing component in technology, and that’s our
core passion.
Michael: We really believe in trying to squeeze every ounce of
optimization and performance out of the devices for the purpose they
were built. So in this case the Leap today is intended to be a great
human computer interface. And we have made thousands of little
optimizations along the way to make it better, that might sacrifice
things in the process that might be useful for things like 3D scanning
objects. But those are intentional decisions, but they don’t mean that
we think 3D scanning isn’t exciting and isn’t a good use case. There
will be other things we build as a company in the future, and other
devices that might be able to do both or maybe there will be two
different devices. One that is fully optimized for 3D scanning, and
one that continues to be optimized and as great as it can be at
tracking fingers and hands.
If we haven’t done a good job communicating that the device isn’t
about 3D scanning or isn’t going to be able to 3D scan, that’s
unfortunate and it’s a mistake on our part - but that’s something that
we’ve had to sacrifice. The good news is that those sacrifices have
made the main device really exceptional at tracking hands and fingers.
I have developed with the Leap Motion Controller as well as several other 3-D scanning systems, and from what I've seen I seriously doubt we're ever going to get point cloud data out of the currently shipping hardware. If we do, the fidelity will be far below what we see for gross finger and hand tracking from that device.
There are some low-cost alternatives for 3-D scanning that have started to emerge. SoftKinetic has their DepthSense 325 camera for $250 (which is effectively the same as the Creative Gesture Camera that is only $150 right now). The DS 325 is a time-of-flight IR camera that gives you a 320x240 point cloud map of the 3-D space in front of it. In my tests, it worked well with opaque materials, but anything with a little gloss or shininess gave it trouble.
The PrimeSense Carmine 1.09 ($200) uses structured light to get point cloud data in front of it, as an advancement of the technology they supplied for the original Kinect. It has a lower effective spatial resolution than the SoftKinetic cameras, but it seems to provide less depth noise and to work on a wider variety of materials.
The DUO was also a promising project, but unfortunately its Kickstarter campaign failed. It was using stereoscopic imaging from an IR source to return a point cloud from a couple of PS3 Eye cameras. They may restart that project at some point in the future.
While the Leap may not do what you want, it looks like more and more devices are coming out in the consumer price range to enable 3-D scanning.
See this link
It says that yes, the Leap Motion can theoretically produce a point cloud - it was temporarily part of the visualizer during the beta - and no, you can't access it using the Leap Motion APIs right now.
It may appear in the future, but it's not a priority for the Leap Motion team.
As of Leap Motion SDK 2.x you can at least access the raw stereo camera images. In my experience that is a convenient substitute for many of the tasks people wanted the point cloud for, which is why I mention it here. The SDK still does not expose the point cloud that the driver generates internally to extract the pointer metadata, but you can now generate your own point cloud from the images, so I think this is strongly related to the question.
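For reference, a minimal sketch of grabbing those stereo images with the SDK 2.x Java binding. I'm assuming the image policy flag and the Image accessors (width(), height(), data()) behave as the 2.x documentation describes, so treat this as illustrative rather than a tested recipe:

import com.leapmotion.leap.Controller;
import com.leapmotion.leap.Image;
import com.leapmotion.leap.ImageList;

public class LeapStereoGrab {
    public static void main(String[] args) throws InterruptedException {
        Controller controller = new Controller();
        // Raw camera images have to be requested explicitly (and allowed in the control panel).
        controller.setPolicy(Controller.PolicyFlag.POLICY_IMAGES);
        Thread.sleep(1000);  // crude wait for the device to connect

        ImageList images = controller.frame().images();
        if (images.count() < 2) {
            System.out.println("No stereo images available yet.");
            return;
        }

        Image left = images.get(0);
        Image right = images.get(1);
        byte[] leftPixels = left.data();  // 8-bit IR brightness values, row-major
        System.out.println("left:  " + left.width() + "x" + left.height()
                + " (" + leftPixels.length + " bytes)");
        System.out.println("right: " + right.width() + "x" + right.height());
        // From here you would undistort both images using the calibration data the SDK
        // exposes and run your own stereo matching to triangulate a point cloud.
    }
}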
Currently there is no access to the point cloud in the public API. But I don't think this video is a mock-up, so it should be possible:
http://www.youtube.com/watch?v=MYgsAMKLu7s#t=40s
Road to VR recently reviewed the Nimble Sense Kickstarter, which uses a point cloud.
It’s the same technology that the Kinect 2 uses, and it’s supposed to have some advantages over the Leap Motion.
Because it’s a depth-sensing camera, you can point it top-down like the Touch+, although their product won’t ship until next year.

How do I go about creating my own API/Framework/Libraries

Some libraries are very obvious to me. I mean, I can create math, string, etc. libraries on my own. What I don't understand is how people create APIs like OpenGL, OpenCV, DirectX, and MFC; I don't understand how to write them the way I write libraries to compute math and string functions. Are there any resources on the net that would teach me how to go about doing this?
Those kinds of libraries are driven by two things:
1) the domain they're working in (notably the GPU architecture and its capabilities and limitations), and
2) the model of those capabilities as viewed by the API designer.
Simply put, someone with an (ideally) reasonable understanding of the problem domain said "I think if you want to work with a GPU, I'd like the GPU to look like this", and came up with a model to present to the API consumer. Then the framework was written to map the model the API designer contrived onto the actual workings of the underlying mechanism (in this case a GPU).
Consider something like an Object Relational Mapper tool. Here, they're trying to present an OO view mated to an underlying relational representation.
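As a toy illustration (all class and table names here are hypothetical), the consumer of such an API sees only the object model on top, while the implementation quietly translates it into the relational mechanism underneath, in this case via plain JDBC:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// The model the API designer chose to present: plain objects, no SQL in sight.
class User {
    final long id;
    final String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

interface UserRepository {
    User findById(long id) throws SQLException;
}

// The framework's half of the bargain: translate that object view into the
// underlying relational mechanism.
class JdbcUserRepository implements UserRepository {
    private final Connection connection;
    JdbcUserRepository(Connection connection) { this.connection = connection; }

    @Override
    public User findById(long id) throws SQLException {
        try (PreparedStatement stmt =
                 connection.prepareStatement("SELECT id, name FROM users WHERE id = ?")) {
            stmt.setLong(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? new User(rs.getLong("id"), rs.getString("name")) : null;
            }
        }
    }
}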
The designers likely took some idea of what they wanted, tried it out to see how realistic it was to do, and then started filling in the gaps and polishing the edges.
The way for you to start is to simply pick a domain you have knowledge of but whose workings you don't like, think "gosh, it would be better like this", and then start solving THAT problem. If things go well, you'll have some momentum and the process will likely get a bit more organic from there. But, ideally, not too organic.
The hard part is putting your new API to work and USING it as someone who doesn't know the API, or necessarily the domain, would use it. Using it also gives you the opportunity to encounter the "gosh, it would be better like this" phase again.
Rinse and repeat until you're happy with the outcome.
Few paper designs survive contact with the actual development of the system. Some do better than others, but it's hard to know how an API is going to feel as a developer until you start using it.
So try a few APIs on your own, study the ones you like, study the ones you hate and think about how to make them better, and work on some proof-of-concept implementations to see how it goes.

Data-oriented design in practice?

There has been one more question on what data-oriented design is, and there's one article which is often referred to (I've read it five or six times already). I understand the general concept, especially when dealing with, for example, 3D models, where you'd like to keep all vertices together and not pollute your faces with normals, etc.
However, I have a hard time visualizing how data-oriented design might work for anything but the most trivial cases (3D models, particles, BSP trees, and so on). Are there any good examples out there that really embrace data-oriented design and show how it might work in practice? I can plow through large code bases if needed.
What I'm especially interested in is the mantra "where there's one there are many", which I can't really connect with the rest. Yes, there is always more than one enemy, yet you still need to update each enemy individually, because they're not all moving the same way, are they? The same goes for the 'balls' example in the accepted answer to the question above (I actually asked this in a comment to that answer, but haven't gotten a reply yet). Is it simply that rendering only needs the positions, and not the velocities, whereas the game simulation needs both but not the materials? Or am I missing something? Perhaps I've already understood it and it's a far more straightforward concept than I thought.
Any pointers would be much appreciated!
So, what is DOD all about? Obviously, it's about performance, but it's not just that. It's also about well-designed code that is readable, easy to understand and even reusable.
Now, object-oriented design is all about designing code and data to fit into encapsulated virtual "objects". Each object is a separate entity with variables for the properties that object might have and methods to take action on itself or on other objects in the world. The advantage of OO design is that it's easy to mentally model your code as objects, because the whole (real) world around us seems to work the same way: objects with properties that can interact with each other.
Now the problem is that the CPU in your computer works in a completely different way. It works best when you let it do the same things again and again. Why is that? Because of a little thing called cache. Accessing RAM on a modern computer can take 100 or 200 CPU cycles (and the CPU has to wait all that time!), which is way too long. So there's a small portion of memory on the CPU that can be accessed really quickly: cache memory. The problem is that it's only a few MB at most, so every time you need data that isn't in cache, you still have to take the long trip to RAM. And it's not just data; the same goes for code. Trying to execute a function that's not in the instruction cache will cause a stall while the code is loaded from RAM.
Back to OO programming. Objects are big, but most functions need only a small portion of that data, so we waste cache by loading unnecessary data. Methods call other methods, which call other methods, thrashing your instruction cache. Still, we often do a lot of the same stuff over and over again. Let's take a bullet from a game as an example. In a naive implementation, each bullet could be a separate object. There might be a bullet manager class. It calls the first bullet's update function, which updates the 3D position using the direction/velocity. This causes a lot of other data from the object to be loaded into the cache. Next, we call the world manager class to check for a collision with other objects. This loads lots of other stuff into the cache, and maybe it even causes code from the original bullet manager class to be dropped from the instruction cache. Now we return to the bullet update; there was no collision, so we return to the bullet manager, which may need to load some code again. Next up, bullet #2's update. This loads lots of data into the cache, calls the world manager... etc. So in this hypothetical situation, we've got two stalls for loading code and, let's say, two stalls for loading data. That's at least 400 cycles wasted for one bullet, and we haven't even taken bullets that hit something into account. Now, a CPU runs at 3+ GHz, so we're not going to notice one bullet, but what if we've got 100 bullets? Or even more?
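To make that concrete, a stripped-down sketch of the naive per-object layout described above might look something like this (the class and method names are just illustrative):

// Naive OO layout: every bullet is its own heap object carrying everything it might ever need.
interface World {
    boolean checkCollision(Bullet b);
    void onBulletHit(Bullet b);
}

class Bullet {
    float x, y, z;
    float vx, vy, vz;
    // ...plus material, owner, damage, sound handle, and whatever else rides along

    void update(float dt, World world) {
        x += vx * dt; y += vy * dt; z += vz * dt;
        if (world.checkCollision(this)) {   // jumps off into completely different code and data
            world.onBulletHit(this);
        }
    }
}

class BulletManager {
    final java.util.List<Bullet> bullets = new java.util.ArrayList<>();

    void update(float dt, World world) {
        for (Bullet b : bullets) {
            b.update(dt, world);            // objects scattered across the heap: cache miss after cache miss
        }
    }
}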
So this is the "where there's one there's many" story. Yes, there are some cases where you've only got one object: your manager classes, file access, etc. But more often there are a lot of similar cases. Naive, or even not-so-naive, object-oriented design will lead to lots of problems. So enter data-oriented design. The key to DOD is to model your code around your data, not the other way around as with OO design. This starts at the first stages of design: you do not first design your OO code and then optimize it. You start by listing and examining your data and thinking about how you want to modify it (I'll get to a practical example in a moment). Once you know how your code is going to modify the data, you can lay it out in a way that makes it as efficient as possible to process. Now you may think this can only lead to a horrible soup of code and data everywhere, but that is only the case if you design it badly (bad design is just as easy with OO programming). If you design it well, code and data can be neatly organized around specific functionality, leading to very readable and even very reusable code.
So back to our bullets. Instead of creating a class for each bullet, we only keep the bullet manager. Each bullet has a position and a velocity. Each bullet's position needs to be updated. Each bullet needs a collision check, and all bullets that have hit something need to take some action accordingly. Just by looking at this description I can design the whole system in a much better way. Let's put the positions of all bullets in one array/vector, and the velocities of all bullets in another. Now let's iterate along those two arrays, updating each position value with its corresponding velocity. Now all data loaded into the data cache is data we're going to use. We can even add a smart prefetch hint to pre-load some array data in advance, so it's already in cache when we get to it. Next, the collision check. I'm not going into detail here, but you can imagine how checking all bullets one after another helps. Also note that if there's a collision, we don't call a new function right away; we just keep a vector of all bullets that collided, and when collision checking is done we process all of those one after another. See how we just went from lots of memory accesses to almost none by laying our data out differently? Did you also notice how our code and data, even though no longer designed in an OO way, are still easy to understand and easy to reuse?
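And a sketch of the data-oriented layout just described, with positions and velocities in parallel arrays and collisions collected for a separate pass (the capacity and the collision test are placeholders):

// Data-oriented layout: one system, parallel arrays, tight loops over contiguous memory.
class BulletSystem {
    int count;                         // number of live bullets
    final float[] posX, posY, posZ;    // structure-of-arrays: all positions together...
    final float[] velX, velY, velZ;    // ...and all velocities together
    final int[] hits;                  // indices of bullets that collided this frame
    int hitCount;

    BulletSystem(int capacity) {
        posX = new float[capacity]; posY = new float[capacity]; posZ = new float[capacity];
        velX = new float[capacity]; velY = new float[capacity]; velZ = new float[capacity];
        hits = new int[capacity];
    }

    void update(float dt) {
        // Pass 1: integrate every position; the CPU streams straight through the arrays.
        for (int i = 0; i < count; i++) {
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
            posZ[i] += velZ[i] * dt;
        }
        // Pass 2: collision test, only recording which bullets hit something.
        hitCount = 0;
        for (int i = 0; i < count; i++) {
            if (collides(i)) {
                hits[hitCount++] = i;
            }
        }
        // Pass 3: handle all hits together, after the hot loops are done.
        for (int h = 0; h < hitCount; h++) {
            handleHit(hits[h]);
        }
    }

    boolean collides(int i) { return false; }          // placeholder collision test
    void handleHit(int i) { /* spawn effect, retire bullet, etc. */ }
}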
So to get back to "where there's one there's many": when designing OO code you think about one object, the prototype/class. A bullet has a velocity, a bullet has a position, a bullet will move each frame by its velocity, a bullet can hit something, etc. When you think about it that way, you end up with a class with velocity, position, and an update function that moves the bullet and checks for collisions. However, when you have multiple objects, you need to think about all of them at once. Bullets have positions and velocities. Some bullets may have collided. Do you see how we're no longer thinking about an individual object? We're thinking about all of them, and we design the code quite differently as a result.
I hope this helps answer the second part of your question. It's not about whether you need to update each enemy, it's about the most efficient way to update them. And while designing only your enemies using DOD may not gain much performance, designing the entire game around these principles (only where applicable!) may lead to a lot of performance gains!
So, on to the first part of the question: other examples of DOD. I'm sorry, but I don't have much there. There is one really good example, though, that I came across some time ago: a series on the data-oriented design of a behavior tree by Bjoern Knafla: http://bjoernknafla.com/data-oriented-behavior-tree-overview You probably want to start at the first one in the series of four; the links are in the article itself.
I hope this still helps, despite the old question. Or maybe some other SO user will come across this question and find this answer useful.
I read the question you linked to and the article.
I've read one book on the subject of data driven design.
I'm pretty much in the same boat as you.
The way I understand Noel's article is that you design your game in the typical object oriented way. You have classes and methods that work on the classes.
After you've done your design, you ask yourself the following question:
How can I arrange all of the data I've designed in one huge blob?
Think of it in terms of writing your entire design as one functional method, with lots of subordinate methods. It reminds me of the massive 500,000-line COBOL programs of my youth.
Now, you probably won't write the entire game as one huge functional method. Really, in the article Noel is talking about the rendering portion of a game. Think of it as a game engine (the one huge functional method) and the code that drives the game engine (the OOP code).
What I'm especially interested in is the mantra "where there's one there are many", which I can't really seem to connect with the rest here. Yes, there are always more than one enemy, yet, you still need to update each enemy individually, cause they're not moving the same way now are they?
You're thinking in terms of objects. Try thinking in terms of functionality.
Each enemy update is an iteration of a loop.
What's important is that the enemy data is structured to be in one memory location, rather than spread across enemy object instantiations.

How does a marker-based augmented reality algorithm (like ARToolkit's) work?

For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobile. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works, and beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which, as far as I've seen, NyARToolkit doesn't support.
So basically, I need to replace ARToolkit with a custom marker detector (and, in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know any sources on how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/.
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open-source implementations - I think OpenCV has the Harris corner detector in the function GoodFeaturesToTrack.
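For example, with the OpenCV Java bindings (assuming OpenCV 3.x/4.x with the native library installed, and a hypothetical marker.png input), Harris-style corner detection through goodFeaturesToTrack looks roughly like this:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class CornerDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // assumes the OpenCV native library is on the path

        Mat image = Imgcodecs.imread("marker.png", Imgcodecs.IMREAD_GRAYSCALE);
        MatOfPoint corners = new MatOfPoint();

        // maxCorners, qualityLevel, minDistance; the last three arguments switch on the Harris detector.
        Imgproc.goodFeaturesToTrack(image, corners, 100, 0.01, 10,
                new Mat(), 3, true, 0.04);

        for (Point p : corners.toArray()) {
            System.out.println("corner at " + p.x + ", " + p.y);
        }
    }
}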
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector

Resources