Hi, I'm doing some animations in Maya 2019 and I'm having some issues once they are exported to Unreal Engine 4.20.
For exporting I use the File > Game Exporter menu. (Any other export workflow I tried so far resulted in a freezing import in Unreal).
Everything goes well using the Game Exporter. I do get some errors when importing in UE4, but it all looks fine, except for one thing:
My character's face is all messed up. After some digging, I discovered it comes down to some of the face's morph targets having insane values:
If I manually set all those insane values back to 0, everything looks fine. What could cause this?
If I check the mesh asset, the morph targets are limited from -1 to 1 and they look as they should when I move the slider.
The errors I have when importing my animation are:
Imported bone transform is different from original. Please check Output Log to see detail of error.
Mesh [Geometry have no name] in the fbx file is not reference by any hierarchy node.
Thanks for any help.
Sometimes the hard way is actually the simplest way.
I couldn't get anything working using the Game Exporter or plugins, so I wrote myself a script that bakes my animation, including my corrective morphs, and exported the result as FBX 2018. It works.
So here's how I put my script together:
First I selected my export selection set (it contains my geometry and deforming joints), then clicked Key > Bake Simulation (you have to be in the Animation menu set for the Key menu to be displayed). I then opened the Script Editor and copied the command line starting with "bakeResults" (it's easy to find if you have selected plenty of things just before: it's a massively long line at the end of the Script Editor).
That gives me the base of my script, but it doesn't include the baking of my blend shapes.
So, for each object with blend shapes to bake:
select the object in the outliner
double click its blend shape input in the channel box
copy its name
paste it at the end of the script, just before the };
make sure to respect the syntax: each list element must be wrapped in double quotes and separated from the next one by a comma and a space. No comma or space after the last element before the };
It looks something like this:
bakeResults -simulation true -t "1:60" -sampleBy 1 -oversamplingRate 1 -disableImplicitControl true -preserveOutsideKeys true -sparseAnimCurveBake false -removeBakedAttributeFromLayer false -removeBakedAnimFromLayer false -bakeOnOverrideLayer false -minimizeRotation true -controlPoints false -shape true {"element_to_bake", "another_thing_to_bake"};
But with many more elements in the list.
Notice the -t "1:60" in the line: it's the frame range you want to bake. You will need to adapt those numbers to your needs.
Once you have this line, I recommend you save it in a file somewhere.
To use it, SAVE YOUR PROJECT BEFORE BAKING, then paste the script in the MEL command line at the bottom left corner of the UI, adapt the time range if needed, then press Enter to execute the baking script.
When the baking is finished, you can export your FBX and then reload your saved project file.
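If you prefer Python over MEL, below is a minimal sketch of the same bake using maya.cmds. The set name, frame range and blendShape node names are placeholders for illustration, not the ones from my scene; adapt them before running it.

# Minimal Python (maya.cmds) sketch of the bake described above, run from the Script Editor.
# "export_set", the 1-60 frame range and the blendShape node names are placeholders.
import maya.cmds as cmds

# Select the export selection set (geometry + deforming joints), as in the MEL version.
cmds.select('export_set', replace=True)
targets = cmds.ls(selection=True)

# Add the blendShape nodes whose weights must be baked as well (hypothetical names).
targets += ['face_blendShapes', 'body_correctives']

cmds.bakeResults(
    targets,
    simulation=True,
    time=(1, 60),              # frame range; adapt to your needs
    sampleBy=1,
    disableImplicitControl=True,
    preserveOutsideKeys=True,
    minimizeRotation=True,
    shape=True,
)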
When I am trying to align two point clouds in MeshLab, there is an error saying "No successful arc among candidate Alignment arcs". And even though the point clouds after alignment can be seen in the window, I cannot save them successfully. However I try, the PLY file I save always contains only the last layer, rather than all the layers. Does the saving failure have something to do with the alignment error? How can I overcome this? I work on Ubuntu 18.04.
MeshLab will only export one layer when you export to any output file format. If you have several layers that should be exported to the same file, you need to merge the layers with the filter Flatten Visible Layers.
To run this filter, you can right-click on the layer list, or find it in the menu Filters > Mesh Layer > Flatten Visible Layers.
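If you would rather script this than use the GUI, here is a rough sketch with the pymeshlab Python library. The file names are placeholders, and the exact filter id changes between pymeshlab versions ('flatten_visible_layers' in older releases, 'generate_by_merging_visible_meshes' in newer ones), so treat it as an assumption to verify against your installed version.

# Rough pymeshlab sketch: load several layers, flatten them, save a single file.
# File names are placeholders; the flatten filter id depends on the pymeshlab version.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('cloud_a.ply')   # each load_new_mesh creates a new layer
ms.load_new_mesh('cloud_b.ply')

try:
    ms.apply_filter('generate_by_merging_visible_meshes')  # newer pymeshlab releases
except Exception:
    ms.apply_filter('flatten_visible_layers')              # older pymeshlab releases

ms.save_current_mesh('merged.ply')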
I have an idle animation with arms holding a weapon, which I have tweaked so it works with a weapon I created. I also have a running animation for the same arms which I need to tweak as well to work with my weapon. Is it possible to use the first keyframe from the idle animation to offset the running animation's keys to match it?
OK, so if anybody needs this in the future: I ended up using 3ds Max's Merge Animation, and it worked perfectly.
I've edited my previous answer to clearly state how this is done.
First, you want to export your idle animation. This can be done using the ATOM exporter in Maya or a third party plugin like PAIE or Studio Library. From here you can choose to only export the first frame or all frames. Make sure to select all the relevant controllers.
Secondly, you open up your running animation and select the same controllers as before, and then create a new animation layer with that selection. Animation layers are found in Channel Box / Layer Editor as a tab called Anim.
After setting up the layer, you can import your idle animation onto the layer. Again, this can be done using ATOM or a third party plug-in. Perhaps lock the BaseAnimation layer to prevent accidentally changing it. If you only wish to merge certain parts, like the arms, make sure to export and import only those controllers, and add only that animation to the animation layer.
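If you prefer to set the layer up through scripting rather than the Channel Box, a minimal maya.cmds sketch could look like this; the controller names and the layer name are hypothetical examples.

# Minimal maya.cmds sketch: create an animation layer from a selection of controls.
# The controller and layer names below are hypothetical examples.
import maya.cmds as cmds

# Select the same controllers you exported from the idle pose.
cmds.select(['L_arm_ctrl', 'R_arm_ctrl', 'L_hand_ctrl', 'R_hand_ctrl'], replace=True)

# Create a new animation layer containing the selection.
offset_layer = cmds.animLayer('idleOffsetLayer', addSelectedObjects=True)

# Optionally lock the BaseAnimation layer so it can't be changed by accident.
cmds.animLayer('BaseAnimation', edit=True, lock=True)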
I'm trying to integrate an animated 3D character in a web browser.
I use MakeHuman 1.02 to create a character, which I import into Blender 2.74 in .mhx format.
I retarget it to a BVH using the MakeWalk plugin for Blender; that's for the motion.
When I try to export the character in .json format (three.js), the following error appears:
MakeHuman is not a valid mesh object.
A mesh object is an object whose properties or vertices we can modify, isn't it?
I tried other formats like .dae (Collada), but it seems that the browser doesn't find the skeleton and the textures of the character (even if they are in the same directory), which are necessary for the character's motion.
How can I get the character as a mesh object? Or does somebody know another process that works?
Like Erica pointed out, you need to have a mesh selected to export it. The problem with this is it doesn't seem to work if you have multiple meshes. Only one will export. This is a problem when using MakeHuman because their clothes are separate meshes.
One way to fix this is to select all meshes and combine them into one (I believe that's CTRL + J). However, you'd have to somehow merge all your texture files into one big one and I have no idea how to do that.
What I do is to export the entire scene. Then it doesn't matter what is selected. All meshes get exported. You can load it using either the ColladaLoader, which I would recommend since you're retargeting to a BVH (worked great for me), or the new ObjectLoader.
If you have your own Scene object on the page that you want to use, you can still load the scene created by the exporter, traverse it to get the items you care about, and add those items to your scene that will display on the page.
I'm currently looking for a way to create a 'configurator' for an upholsterer, similar to http://digitaldraping.com/configurator/furniture-sofa/?Cushions_Plain-Cream.png,Sofa_Stripe-Orange.png - you select your fabrics and they are 'drawn' on the sofa automatically.
Unfortunately, all the sites I've looked at seem to use pre-rendered transparent PNGs that are overlaid over each other to build up the full picture. The problem here is that we've figured out that we'd require over 120,000 different images to cover all models, fabrics etc!!
I've looked at a few 3d texture tools such as http://www.arahne.si/products/arah-drape.html, hoping that one of them would have a CLI option where you give it a pre-created wireframe, and a fabric to overlay, and it generates the required image on the fly, but so far everything seems to require real-time use of the GUI to use it.
So, is there a CLI tool that would do what I'm after, or can anyone suggest a way to manipulate the GUI automatically? (from a tech point of view, I'm comfortable with C, Bash, Python or PHP as a solution!)
Thanks!
ArahDrape 2.2 can now work from a command line without any GUI interface. You can also call ArahDrape as a C library. In this way, it can be used in a web server to create texture mapped images on the fly. The command line options are explained below.
ArahDrape 2.2j command line version, ©2015 Arahne
usage:
adCommand -o /tmp/outputImage.png -tN /home/user/texture.png [-hidemodel] [-divide 2] [-filterPNG] [-compressPNG 2] [-m /home/user/model.png] -owner name -activation 174b3cfb49e9 /home/user/project.drape
Input and output images can have png, .tif or .jpg extensions
-o output_image_file
-tN texture_image_file [N goes from 0 to 199]
-hidemodel will render all areas not in region as white
-divide N [N goes from 2 to 5] divide resulting image pixel size
-filterPNG if you do not filter it, rendering is faster
-compressPNG N [N goes from 0 to 9] lower number saves faster, but bigger files
-m model_image_file use this if you want to replace model image from the project; must have same pixel size
-owner owner_name pass the given owner name
-activation activation_code pass the given activation code
last parameter should be ArahDrape project file
All files should be entered with full path.
If you need spaces in filenames, use quotes "" around the filename.
If you provide only Owner name, without activation code, program returns registration code.
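For example, a web backend could call the command line version through a small wrapper like the sketch below. This is only an illustration based on the usage text above; all paths are placeholders, and the owner name and activation code are the example values from the usage line.

# Hypothetical Python wrapper around the adCommand CLI described above.
# All paths are placeholders; owner/activation are the example values from the usage text.
import subprocess

def drape(project, texture, output, owner, activation):
    cmd = [
        'adCommand',
        '-o', output,
        '-t0', texture,            # replace the texture in slot 0
        '-compressPNG', '2',
        '-owner', owner,
        '-activation', activation,
        project,                   # the ArahDrape project file comes last
    ]
    subprocess.run(cmd, check=True)

drape('/srv/drape/sofa.drape', '/srv/fabrics/stripe_orange.png',
      '/tmp/sofa_stripe_orange.png', 'name', '174b3cfb49e9')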
ArahDrape supports batch export.
Open your ArahDrape project, click on the texture you wish to replace, put all your textures in a directory, and select Textures > Browse textures from the menu. As you click a texture to load it, the program will save the draped picture. If you have thousands of images, use the keyboard shortcut = and the program will automatically do them all.
Alpha channel transparency is supported in loading model images or textures, and saving the draped images, as long as you use PNG or TIFF.
Please check this video to see how ArahDrape works in batch mode.
We (http://digitaldraping.com/) can do just what you are asking. We have two options: creating images, or rendering a meshed image on the fly. Just get in touch if you still need this solution.
I am creating D3.js animations, like this: Demo
Let's say I want to present my work here (or on a blog). About the best I can do is post a picture:
On the other hand, if I, let's say, use the Python library Matplotlib for data visualization, I could produce an animated GIF file, and post it here:
I would like to programmatically obtain similar animated GIF files from my D3.js animations. How can I do this?
NOTE: I started working on getting events from d3.transition(), but so far I didn't have any luck.
The solution uses a tool called LICEcap, a screen capture utility for Windows and Mac. The steps are as follows:
Download LICEcap and install it. Now, if you start this program, it will have a rather unusual shape, just a thin frame, and everything inside the frame will be transparent:
Go to the window with your D3.js animation and prepare everything so that you could start animation at some point. Let's say we want to record this example from d3js.org:
Now start LICEcap and position it over the area you want to have in your animated GIF file:
Make sure that you enter at least 20 FPS in the bottom left edit box, otherwise the recording will be low quality. Press Record. A dialog will appear first, where you choose whether you want your GIF file to loop infinitely, play just once, or repeat any number of times. Another interesting option is to add some visual cues for mouse clicks. Also choose a filename, and press Save.
Now you do whatever you have to do to trigger the animations. I pressed the Grouped and Stacked buttons several times. After I decided it was enough, I pressed Stop. The resulting file is:
That's it!