Importing animation keytracks into Maya - C++11

I have a human bipedal animation file format that I would like to programmatically read into Maya using the C++ API.
The animation file format is similar to that of the Open Asset Importer's per-node animation structure.
For every joint, there is a series of up to 60 3D vector keys (to describe the translation of the joint) and 60 quaternion keys (to describe the rotation of the joint). Every joint is guaranteed to have the same number of keys (or no keys at all).
The length (time in seconds) of the animation can be specified or changed (so that you can set the 60 keys to happen over 2 seconds for a 30 FPS animation, for example).
The translations and rotations of the joints propagate down the skeleton tree every frame, producing the animation.
Here's a sample. Additional remarks about the data structure are added by the logging facility. I have truncated the keys for brevity.
Bone Bip01
Parent null
60 Position Keys
0 0.000000 4.903561 99.240829 -0.000000
1 0.033333 4.541568 99.346550 -2.809127
2 0.066667 4.182590 99.490318 -5.616183
... (truncated)
57 1.366667 5.049816 99.042770 -116.122604
58 1.400000 4.902135 99.241692 -118.754120
59 1.400000 4.902135 99.241692 -118.754120
60 Rotation Keys
0 0.000000 -0.045869 0.777062 0.063631 0.624470
1 0.033333 -0.043855 0.775018 0.061495 0.627400
2 0.066667 -0.038545 0.769311 0.055818 0.635212
... (truncated)
57 1.366667 -0.048372 0.777612 0.065493 0.623402
58 1.400000 -0.045869 0.777062 0.063631 0.624470
59 1.400000 -0.045869 0.777062 0.063631 0.624470
Bone Bip01_Spine
Parent Bip01
60 Position Keys
...
60 Rotation Keys
...
In C++, the data structure I currently have corresponds to this:
std::unordered_map<string, std::vector<Vector3>> TranslationKeyTrack is used to map a set of translation vectors to the corresponding bone.
std::unordered_map<string, std::vector<Quaternion>> RotationKeyTrack is used to map a set of rotation quaternions to the corresponding bone.
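Roughly, the declarations look like this (a sketch; Vector3 and Quaternion here are stand-ins for the simple structs in my own math code):
#include <string>
#include <unordered_map>
#include <vector>

struct Vector3    { float x, y, z; };
struct Quaternion { float x, y, z, w; };

// One entry per bone name; each vector holds that bone's keys in frame order.
std::unordered_map<std::string, std::vector<Vector3>>    TranslationKeyTrack;
std::unordered_map<std::string, std::vector<Quaternion>> RotationKeyTrack;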
Additional notes: There are some bones that do not move relative to their parent bones; these bones have no keys at all (but still have an entry with 0 keys).
There are also some bones that have only rotation, or only position keys.
The skeleton data is stored in a separate file that I can already read into Maya using MFnIkJoint.
The bones specified in the animation file map 1:1 to the bones in that skeleton data.
Now I would like to import this animation data into Maya. However, I do not understand Maya's way of accepting animation data through its C++ API.
In particular, the MFnAnimCurve function set's addKeyframe or addKey accepts only a single floating-point value tied to a time key, while I have a list of vectors and quaternions. MFnAnimCurve also accepts 'tangents'; after reading the documentation, I am still unsure how to convert the data I have into these tangents.
My question is: How do I convert the data I have into something Maya understands?
I understand better with examples, so some sample code will be helpful.

So after a few days of trial-and-error and examining the few fragments of code around the Internet, I have managed to come up with something that works.
Given the above-mentioned TranslationKeyTrack and RotationKeyTrack,
Iterate through the skeleton. For each joint,
Set the initial positions and orientations of the skeleton. This is needed because there are some joints that do not move relative to their parents; if the initial positions and orientations are not set, the entire skeleton may move erratically.
Set the AnimCurve keys.
The iteration looks like this:
MStatus status;
MItDag dagIter(MItDag::kDepthFirst, MFn::kJoint, &status);
for (; !dagIter.isDone(); dagIter.next()) {
    MDagPath dagPath;
    status = dagIter.getPath(dagPath);
    MFnIkJoint joint(dagPath);
    string name_key = joint.name().asChar();
    // Set initial position, and the translation AnimCurve keys.
    if (TranslationKeyTrack.find(name_key) != TranslationKeyTrack.end()) {
        auto pos = TranslationKeyTrack[name_key][0];
        joint.setTranslation(MVector(pos.x, pos.y, pos.z), MSpace::kTransform);
        setPositionAnimKeys(dagPath.node(), TranslationKeyTrack[name_key]);
    }
    // Set initial orientation, and the rotation AnimCurve keys.
    if (RotationKeyTrack.find(name_key) != RotationKeyTrack.end()) {
        auto rot = RotationKeyTrack[name_key][0];
        joint.setOrientation(rot.x, rot.y, rot.z, rot.w);
        setRotationAnimKeys(dagPath.node(), RotationKeyTrack[name_key]);
    }
}
For brevity, I will show only setRotationAnimKeys; the idea for setPositionAnimKeys is the same, except that it uses kAnimCurveTL for the translation curves (a sketch of it follows the rotation version below).
void MayaImporter::setRotationAnimKeys(MObject joint, const vector<Quaternion>& rotationTrack) {
    if (rotationTrack.size() < 2) return; // Skip empty or single-key tracks.
    MFnAnimCurve rotX, rotY, rotZ;
    setAnimCurve(joint, "rotateX", rotX, MFnAnimCurve::kAnimCurveTA);
    setAnimCurve(joint, "rotateY", rotY, MFnAnimCurve::kAnimCurveTA);
    setAnimCurve(joint, "rotateZ", rotZ, MFnAnimCurve::kAnimCurveTA);
    MFnIkJoint j(joint);
    string name = j.name().asChar();
    for (size_t i = 0; i < rotationTrack.size(); i++) {
        auto rot = rotationTrack[i];
        MQuaternion rotation(rot.x, rot.y, rot.z, rot.w);
        // Depending on your input, you may have to do additional processing
        // to get the correct Euler rotation here.
        auto euler = rotation.asEulerRotation();
        // FPS is defined elsewhere; with MTime::kSeconds it is effectively the
        // seconds-per-frame interval (e.g. 1/30 for the 0.0333 s key spacing above).
        MTime time(FPS * i, MTime::kSeconds);
        rotX.addKeyframe(time, euler.x);
        rotY.addKeyframe(time, euler.y);
        rotZ.addKeyframe(time, euler.z);
    }
}
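For completeness, a rough sketch of setPositionAnimKeys along the same lines (kAnimCurveTL curves on translateX/Y/Z, values taken straight from the vector track; not the exact code, but the structure is the same):
void MayaImporter::setPositionAnimKeys(MObject joint, const vector<Vector3>& translationTrack) {
    if (translationTrack.size() < 2) return; // Skip empty or single-key tracks.
    MFnAnimCurve transX, transY, transZ;
    setAnimCurve(joint, "translateX", transX, MFnAnimCurve::kAnimCurveTL);
    setAnimCurve(joint, "translateY", transY, MFnAnimCurve::kAnimCurveTL);
    setAnimCurve(joint, "translateZ", transZ, MFnAnimCurve::kAnimCurveTL);
    for (size_t i = 0; i < translationTrack.size(); i++) {
        auto pos = translationTrack[i];
        MTime time(FPS * i, MTime::kSeconds); // Same timing convention as the rotation keys.
        transX.addKeyframe(time, pos.x);
        transY.addKeyframe(time, pos.y);
        transZ.addKeyframe(time, pos.z);
    }
}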
Finally, the bit of code I used for setAnimCurve. It essentially attaches the AnimCurve to the joint. This bit of code is adapted from a mocap file importer here. Hooray open source!
void MayaImporter::setAnimCurve(const MObject& joint, const MString attr, MFnAnimCurve& curve, MFnAnimCurve::AnimCurveType type) {
    MStatus status;
    MPlug plug = MFnDependencyNode(joint).findPlug(attr, false, &status);
    if (!plug.isKeyable())
        plug.setKeyable(true);
    if (plug.isLocked())
        plug.setLocked(false);
    if (!plug.isConnected()) {
        curve.create(joint, plug, type, nullptr, &status);
        if (status != MStatus::kSuccess)
            cout << "Creating anim curve at joint failed!" << endl;
    } else {
        MFnAnimCurve animCurve(plug, &status);
        if (status == MStatus::kNotImplemented)
            cout << "Joint " << animCurve.name() << " has more than one anim curve." << endl;
        else if (status != MStatus::kSuccess)
            cout << "No anim curves found at joint " << animCurve.name() << endl;
        curve.setObject(animCurve.object(&status));
    }
}
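One optional extra that is not part of the importer above: after creating the curves, you may want Maya's playback range to span the imported clip. A small sketch using MAnimControl, assuming the same FPS timing convention and a known key count:
#include <maya/MAnimControl.h>

// Make the timeline and playback range cover keys 0 .. keyCount-1.
void setPlaybackRange(unsigned int keyCount) {
    if (keyCount == 0) return;
    MTime start(0.0, MTime::kSeconds);
    MTime end(FPS * (keyCount - 1), MTime::kSeconds); // Time of the last key.
    MAnimControl::setAnimationStartEndTime(start, end);
    MAnimControl::setMinTime(start);
    MAnimControl::setMaxTime(end);
}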

Related

How can I randomize a video to play after another by pressing a key on Processing?

I'm quite new to Processing.
I'm trying to make Processing randomly play a video after I clear the screen with a mouse click, so I created an array that contains 3 videos and play one at a time.
Holding the spacebar plays a video and releasing it stops the video. A mouse click clears the screen to an image. The question is: how can it randomize to another video if I press the spacebar again after clearing the screen?
I've been searching all over the internet but couldn't find any solution for my code, or whether my logic is wrong. Please help me.
Here's my code.
int value = 0;
PImage photo;
import processing.video.*;
int n = 3; //number of videos
float vidN = random(0, n+1);
int x = int (vidN);
Movie[] video = new Movie[3];
//int rand = 0;
int index = 0;

void setup() {
  size(800, 500);
  frameRate(30);
  video = new Movie[3];
  video[0] = new Movie (this, "01.mp4");
  video[1] = new Movie (this, "02.mp4");
  video[2] = new Movie (this, "03.mp4");
  photo = loadImage("1.jpg");
}

void draw() {
}

void movieEvent(Movie video) {
  video.read();
}

void keyPressed() {
  if (key == ' ') {
    image(video[x], 0, 0);
    video[x].play();
  }
}

void mouseClicked() {
  if (value == 0) {
    video[x].jump(0);
    video[x].stop();
    background(0);
    image(photo, 0, 0);
  }
}
You have this bit of logic in your code which picks a random integer:
float vidN = random(0, n+1);
int x = int (vidN);
In theory, if you want to randomise to another video when the spacebar is pressed again, you can re-use this bit of logic:
void keyPressed() {
  if (key == ' ') {
    x = int(random(n));
    image(video[x], 0, 0);
    video[x].play();
  }
}
(Above I've combined the two lines declaring vidN and x into a single line, and used random(n) rather than random(0, n+1): random(0, n+1) can return values just below n+1, so int() could produce the index n, which is out of range for your 3-element array. Otherwise the logic is the same. If doing two operations on one line (picking a random float, then rounding down to an integer) is harder to follow, feel free to expand it back into two lines: readability is more important.)
As side notes, these bits of logic look a bit off:
The if (value == 0) condition will always be true since value never changes, making both value and the condition redundant. (Perhaps you plan to use it for something else later? If so, you could save separate sketches, but start with the simplest version and exclude anything you don't need; in general, remove any bit of code you don't need. It will be easier to read, follow and change.)
Currently your logic says that whenever you click, the current video resets to the start and stops playing. Once you add the logic to randomise the video when you hit the spacebar, the most recent frame of the just-randomised video will display (image(video[x], 0, 0);), then that video will play. Unless you click to stop the current video, previously started videos (via play()) will keep playing in the background (e.g. if they have audio you'll hear them even though you only see one static frame from the last time the spacebar was pressed).
Maybe this is the behaviour you want? You've explained a localised section of what you want to achieve, but not what the program as a whole should do; that would help others provide suggestions regarding the logic.
In general, try to break the problem down to simple steps that you can test in isolation. Once you've found a solid solution for each part, you can add each part into a main sketch one at a time, testing each time you add something. (This way if something goes wrong it's easy to isolate/fix).
Kevin Workman's How To Program is a great article on this.
As a mental exercise it will help to read through the code line by line and imagine what it might do. Then run it and see if the code behaves as you predicted/intended. Slowly but surely this will get better and better. Have fun learning!

Qt Creator label value

I currently face the following problem:
I have 64 labels. Label_1 all the way up to Label_64.
I also have an int i.
"i" also goes from 1-64
I want that, when i == 1 Label_1 shall display an image. If i == 2, Label_2 shall display that image and so on.
Currently I'd do that with:
if(i == 1)
{
    QPixmap pix("...");
    ui->label_1->setPixmap(pix);
}
if(i == 2)
{
    QPixmap pix("...");
    ui->label_2->setPixmap(pix);
}
if(i == 3)
{
    QPixmap pix("...");
    ui->label_3->setPixmap(pix);
}
...
Is there some easier way to do that? Something like:
QPixmap pix("...");
ui->label_i->setPixmap(pix);
where the chosen label is directly defined by i?
You can store a list of QLabels.
QList<QLabel*> labels;
labels.at(i)->setPixmap(pix);
The disadvantage of this method is that you have to manually assign ui->label_i to labels.at(i), once, for every i from 1 to 64:
labels.insert(0, NULL); // empty space to keep numbering the same.
labels.insert(1, ui->label_1);
labels.insert(2, ui->label_2);
...
labels.insert(64, ui->label_64);
Depending on your specific case, you may use a more tricky solution. For example, if all the labels are stored in a QVBoxLayout at positions 1 to 64, you can access label i as follows:
QVBoxLayout *layout = ...;
QLabel *label = qobject_cast<QLabel*>(layout->itemAt(i)->widget());
if (label) // should be true if the assumption is correct
    label->setPixmap(pix);
You can also use method two to initialise the list from method one.
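For example, a rough sketch of that combination, re-using the layout pointer from the snippet above and assuming the labels really do sit at layout positions 1 to 64:
QList<QLabel*> labels;
labels.append(nullptr); // index 0 unused so labels.at(i) matches label_i
for (int i = 1; i <= 64; ++i) {
    labels.append(qobject_cast<QLabel*>(layout->itemAt(i)->widget()));
}

// Later, for any i in 1..64:
QPixmap pix("...");
labels.at(i)->setPixmap(pix);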
See the Qt documentation for more information.

MT4 expert trade panel - "OBJ_RECTANGLE_LABEL"

MetaTrader4 Expert Advisor for Trade Panel.
How can I link some OBJ_RECTANGLE_LABEL objects so that they move together with another single object?
Link 'em indirectly
There is no direct support for linking a few GUI-objects to move with another one.
This does not mean it is not possible to make it work like this.
In one Augmented Trader UI-tool, I needed both all the GUI-components and some computed values to behave under a common logic (keeping all the lines, rectangles, text labels and heat-map colours under a common UI-control logic). All the live-interactive-GUI orchestration was locked onto a few permitted user-machine interactions: the user could move a set of UI-control objects, some of which were freely modifiable, whereas others were restricted (with the use of the augmented-reality controllers) to move just vertically or just horizontally, or were locked to start as tangents from the edges of the Bollinger Bands at the place where the user moved the vertical line of the UI-control object, etc.
The Live-interactive-GUI solution is simple:
Besides the [ Expert Advisor ], create and run another process, a [ Script ], that is responsible for the GUI-object automation. Within this script, use some read-only values from objects, let's say a blue vertical line, as a SENSOR_x1, an input to the GUI-composition.
If someone or something moves this blue vertical line, your event-watching loop inside the script will detect a new value for SENSOR_x1 and re-process the whole UI-layout scheme by applying the just observed / detected motion, SENSOR_x1_delta = SENSOR_x1 - SENSOR_x1_previous;. This way the motion-detector loop in the [ Script ] keeps chasing all the SENSOR_* actual values and promoting the detected SENSOR_*_delta-s onto all objects that are used in the GUI-layout composition.
Finally, it is worth staging the screen updates with a few enforced WindowRedraw(); calls throughout the re-processing of the augmented reality in the live-interactive GUI.
Code from a PoC-demonstrator
One may notice that the code uses a pre-New-MQL4.56789 syntax, with some variable-naming conventions that were permitted then but have since ceased to be permitted. The event-monitor function is self-contained and optimised for maximum speed / minimum latency in handling all three corners of the MVC framework (the Model is Live-GUI project-specific, the View is Live-GUI augmentation-specific, and the Controller is flexible, composed as a sort of finite-state machine from principal building blocks and implemented via "object.method" calls in the switch(){}). The loop sampling rate works well down to a few tens of milliseconds, so the Live-GUI is robust and floats smoothly on the trader's desk.
This is not the best way, but it schematically shows what to do.
string mainObjectNAME,
       dependantObjectNAME; // dependant - your obj label

void OnChartEvent( const int     id,
                   const long   &lparam,
                   const double &dparam,
                   const string &sparam
                   ){
    if (  id == CHARTEVENT_OBJECT_DRAG
       || id == CHARTEVENT_OBJECT_ENDEDIT
       ){
        if ( StringCompare( sparam, mainObjectNAME ) == 0 ){
            datetime time1  = (datetime) ObjectGetInteger( 0, mainObjectNAME,      OBJPROP_TIME1 );
            double   price1 =            ObjectGetDouble(  0, dependantObjectNAME, OBJPROP_PRICE1 );
            if ( !ObjectMove( 0, dependantObjectNAME, 0, time1, price1 ) )
                Print( __LINE__,
                       "failed to move object ",
                       dependantObjectNAME
                       );
        }
        ChartRedraw();
    }
}
If you modify the mainObject by any of the recognised means (by dragging it or passing other parameters), the dependent object (OBJ_RECTANGLE_LABEL in your case) is then moved with the ObjectMove() or ObjectSet() functions.

Arduino keypad matrix example? (Teensyduino)

I'm a beginner using Arduino with a Teensy 3.2 board and programming it as a USB keyboard.
I have two 4-button membrane switches. Their button contacts are on pins 1-8, and the 9th pin holds a soldered-together wire of both membrane switches' "ground" line, or whatever its true name is; the line that completes the circuit.
Basically, when you press the buttons they are supposed to simply type "a, b, c..." respectively. I've been told I need to use a matrix for this.
I'm looking for an example of how to code a keyboard matrix that effectively supports a one-row / 9-column line (or vice versa?). I've been unable to find that solution online.
All I have so far is this code which, when the button on the second pin is pressed, sends tons of "AAAAAAAAAAAAAAAA" keystrokes.
void setup() {
  // make pin 2 an input and turn on the
  // pullup resistor so it goes high unless
  // connected to ground:
  pinMode(2, INPUT_PULLUP);
  Keyboard.begin();
}

void loop() {
  //if the button is pressed
  if(digitalRead(2)==LOW){
    //Send an ASCII 'A',
    Keyboard.write(65);
  }
}
Would anyone be able to help?
First of all, a 1-row keypad is NOT a matrix. Or better, technically it can be considered a matrix but... A matrix keypad is something like this:
You see? In order to scan this you have to
Pull Row1 to ground, while leaving rows 2-4 floating
Read the values of Col1-4. These are the values of switches 1-4
Pull Row2 to ground, while leaving rows 1 and 3-4 floating
Read the values of Col1-4. These are the values of switches 5-8
And so on, for all the rows. (A sketch of such a scan loop is shown below.)
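To make the scan concrete, here is a rough sketch for a hypothetical 2x4 matrix; the pin numbers and key map are placeholders to adapt to your actual wiring, and it still needs the edge detection / debouncing discussed below:
const byte ROWS = 2;
const byte COLS = 4;
const byte rowPins[ROWS] = {9, 10};       // row lines, driven low one at a time
const byte colPins[COLS] = {1, 2, 3, 4};  // column lines, read with pullups
const char keymap[ROWS][COLS] = {
  {'a', 'b', 'c', 'd'},
  {'e', 'f', 'g', 'h'}
};

void setup() {
  for (byte r = 0; r < ROWS; r++) {
    pinMode(rowPins[r], INPUT);           // leave rows floating while not scanned
  }
  for (byte c = 0; c < COLS; c++) {
    pinMode(colPins[c], INPUT_PULLUP);    // columns read HIGH unless pulled low
  }
  Keyboard.begin();
}

void loop() {
  for (byte r = 0; r < ROWS; r++) {
    pinMode(rowPins[r], OUTPUT);
    digitalWrite(rowPins[r], LOW);        // select this row
    for (byte c = 0; c < COLS; c++) {
      if (digitalRead(colPins[c]) == LOW) {
        Keyboard.write(keymap[r][c]);     // key at (r, c) is currently pressed
      }
    }
    pinMode(rowPins[r], INPUT);           // release the row again
    delay(1);
  }
}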
As for the other problem, you are printing an 'A' for as long as the button is held low. What you want to achieve is to print 'A' only on the falling edge of the pin (ideally once per press), so:
char currValue = digitalRead(2);
if ((currValue == LOW) && (oldValue == HIGH))
{
  //Send an ASCII 'A',
  Keyboard.write(65);
}
oldValue = currValue;
Of course, you need to declare oldValue outside the loop function and initialize it to HIGH where it is declared.
With this code you won't receive tons of 'A's, but you will still see something like 5-10 'A's every time you press the button. Why? Because of the bouncing of the button. That's what debouncing techniques are for!
I suggest you look at the Bounce2 class to get an easy-to-use debouncer for your button. If you prefer some code, I wrote this small snippet for another question:
#define CHECK_EVERY_MS 20
#define MIN_STABLE_VALS 5

unsigned long previousMillis;
char stableVals;
char buttonPressed;   // logical state: non-zero while the (active-low) button is pressed
...
void loop() {
  if ((millis() - previousMillis) > CHECK_EVERY_MS)
  {
    previousMillis += CHECK_EVERY_MS;
    // With the INPUT_PULLUP wiring above, LOW means "pressed".
    if ((digitalRead(2) == LOW) != buttonPressed)
    {
      stableVals++;
      if (stableVals >= MIN_STABLE_VALS)
      {
        buttonPressed = !buttonPressed;
        stableVals = 0;
        if (buttonPressed)
        {
          //Send an ASCII 'A',
          Keyboard.write(65);
        }
      }
    }
    else
      stableVals = 0;
  }
}
In this case there is no need to check for the previous value, since the function already has a point reached only when the state changes.
If you have to use this for more buttons, however, you will have to duplicate the whole code (and also use more stableVals variables). That's why I suggested you use the Bounce2 class (it does something like this but, since it is all wrapped inside a class, you won't need to bother with the variables).

Bad relocalization after motion tracking loss

My team and I want to implement area learning for relocalization purposes in our projects.
I added this functionality and it seems to work well. But when a drift disaster happens (motion tracking is lost) and the main camera is instantaneously projected to "the other side of the universe", the program doesn't succeed in relocalizing it: the camera ends up 2 meters below, or 3 meters to the side of, where it should be.
Is it an area description error (because it does not have enough points of interest)?
Or have I still not understood how to use area learning?
Thanks a lot.
P.S.:
I use the Unity SDK.
public void Update()
{
    TangoPoseData pose = new TangoPoseData ();
    TangoCoordinateFramePair pair;
    if (poseLocalized)
    {
        pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
        pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    }
    else
    {
        pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
        pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    }
    double timestamp = VideoOverlayProvider.RenderLatestFrame(TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR);
    PoseProvider.GetPoseAtTime (pose, timestamp, pair);
    m_status = pose.status_code;
    if (pose.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID)
    {
        // it does not differ with the pair base frame
        Matrix4x4 ssTd = UpdateTransform(pose);
        m_uwTuc = m_uwTss * ssTd * m_dTuc;
    }
}

public void OnTangoPoseAvailable(TangoPoseData pose)
{
    if (pose == null)
    {
        return;
    }
    // Relocalization signal
    if (pose.framePair.baseFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION &&
        pose.framePair.targetFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE)
    {
        poseLocalized = true;
    }
    // If pose status is not valid, nothing is valid
    if (!(pose.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID))
    {
        poseLocalized = false;
        // Do I forget something here ?
    }
}
I've regularly observed that the localization and re-localization of Area Learning can produce x,y pose coordinates that are off by a few meters.
Coordinates can be more accurate if I take more care to record an area well before moving on to a new area.
Upon re-localization, coordinate accuracy improves if the tablet is able to observe the area with slow, consistent movements before traveling to a new area.
After learning a new area, I always return to a well-known area for better accuracy, as described under drift correction.
I have two Tango tablets running a Java app that autonomously navigates an iRobot in my home. I've set up a grid test site using 1-meter tape marks to make the observations.
