I'm using ArcGIS Runtime 100.0 with Qt (Linux) and I'm looking for helper classes that provide conversions between DMS/DD, and possibly something like toScreenCoords.
Do these exist? Thanks.
Use the CoordinateFormatter class - https://developers.arcgis.com/qt/latest/cpp/api-reference/esri-arcgisruntime-coordinateformatter.html
For example, here is how you could take in a lat/long string and convert it to a few formats:
// Convert Lat Long Coordinates as String to Point
Point pt = CoordinateFormatter::fromLatitudeLongitude(inputString,
SpatialReference(4326));
// Convert Point to various String formats
qDebug() << CoordinateFormatter::toLatitudeLongitude(pt, LatitudeLongitudeFormat::DecimalDegrees, 5);
qDebug() << CoordinateFormatter::toLatitudeLongitude(pt, LatitudeLongitudeFormat::DegreesDecimalMinutes, 5);
qDebug() << CoordinateFormatter::toLatitudeLongitude(pt, LatitudeLongitudeFormat::DegreesMinutesSeconds, 5);
Here is a sample that showcases how to use it https://github.com/Esri/arcgis-runtime-samples-qt/tree/master/ArcGISRuntimeSDKQt_CppSamples/Geometry/FormatCoordinates
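CoordinateFormatter does the string handling for you, but the underlying DD <-> DMS arithmetic is worth knowing. Here is a minimal pure-math sketch (not part of the Runtime API, just the conversion itself):

```cpp
#include <cmath>
#include <cstdlib>

// Split decimal degrees into degrees/minutes/seconds.
// The sign is carried on the degrees component.
void ddToDms(double dd, int& deg, int& min, double& sec)
{
    double a = std::fabs(dd);
    deg = static_cast<int>(a);
    double frac = (a - deg) * 60.0;
    min = static_cast<int>(frac);
    sec = (frac - min) * 60.0;
    if (dd < 0.0) deg = -deg;
}

// Recombine DMS into decimal degrees.
double dmsToDd(int deg, int min, double sec)
{
    double a = std::abs(deg) + min / 60.0 + sec / 3600.0;
    return deg < 0 ? -a : a;
}
```

(Note this sign convention cannot represent values between -1 and 0 degrees; CoordinateFormatter's string round-trip does not have that problem.)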
I'm trying to create an NV12 resource as source for a video encoder in DX12. While I intend to eventually populate a resource from GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.
I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved while the AV_PIX_FMT_YUV420P doesn't.
My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture - do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out subresources for me based on the format?
I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.
Just create the resource as normal, and then when you query the layout description, it will be planar.
D3D12_RESOURCE_DESC desc = {};
desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
desc.Format = DXGI_FORMAT_NV12;
desc.MipLevels = 1;
desc.DepthOrArraySize = 1;
desc.Width = 1024;
desc.Height = 720;
desc.SampleDesc.Count = 1;
const CD3DX12_HEAP_PROPERTIES defaultHeapProperties(D3D12_HEAP_TYPE_DEFAULT);
ComPtr<ID3D12Resource> res;
HRESULT hr = device->CreateCommittedResource(
&defaultHeapProperties,
D3D12_HEAP_FLAG_NONE,
&desc,
D3D12_RESOURCE_STATE_COMMON,
nullptr,
IID_PPV_ARGS(res.GetAddressOf()));
if (FAILED(hr))
{
// error
}
D3D12_FEATURE_DATA_FORMAT_INFO formatInfo = { DXGI_FORMAT_NV12, 0 };
if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_INFO, &formatInfo, sizeof(formatInfo))))
{
formatInfo = {};
}
D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint[2] = {};
UINT numRows;
UINT64 rowBytes, totalBytes;
device->GetCopyableFootprints(&desc, 0, 2, 0, footprint, &numRows, &rowBytes, &totalBytes);
The formatInfo.PlaneCount is 2, which is why you have to ask for two subresources.
footprint[0].Format is DXGI_FORMAT_R8_TYPELESS with 1024x720 size. The footprint[0].Offset is likely 0.
footprint[1].Format is DXGI_FORMAT_R8G8_TYPELESS with 512x360 size. The footprint[1].Offset is something other than 0.
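Those offsets and pitches fall out of the upload-layout rules: row pitch is rounded up to D3D12_TEXTURE_DATA_PITCH_ALIGNMENT (256 bytes) and each plane's placement offset to D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT (512 bytes). As a rough sketch of the arithmetic for the 1024x720 example (pure math - the real numbers should always come from GetCopyableFootprints):

```cpp
#include <cstdint>

// Round v up to the next multiple of a (a must be a power of two).
constexpr uint64_t alignUp(uint64_t v, uint64_t a) { return (v + a - 1) & ~(a - 1); }

struct PlaneLayout { uint64_t offset; uint64_t rowPitch; uint64_t rows; };

// Approximate the two NV12 plane footprints GetCopyableFootprints returns
// for an upload buffer, assuming the standard 256/512 byte alignments.
void nv12Footprints(uint32_t width, uint32_t height, PlaneLayout& y, PlaneLayout& uv)
{
    y.offset   = 0;
    y.rowPitch = alignUp(width, 256);              // 1 byte per Y texel
    y.rows     = height;
    uv.offset   = alignUp(y.offset + y.rowPitch * y.rows, 512);
    uv.rowPitch = alignUp((width / 2) * 2, 256);   // width/2 UV texels, 2 bytes each
    uv.rows     = height / 2;
}
```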
In Direct3D 12 Video the layouts are very simple to understand. In Direct3D 11 Video, it was all implicitly defined so it was a bit of a mess. That said, DDS files were defined as non-planar data, so you may want to examine how these are handled in DirectXTex.
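To feed the asker's AV_PIX_FMT_YUV420P frame into that layout, the Y plane is copied row by row into the first footprint and the separate U and V planes are interleaved into the second. A sketch of the CPU-side repack - plain C++, nothing here calls ffmpeg or D3D12; the destination pitches would come from GetCopyableFootprints and the source strides from AVFrame::linesize:

```cpp
#include <cstdint>
#include <cstring>

// Repack planar YUV420P (separate Y, U, V planes) into NV12
// (full-size Y plane followed by interleaved UV), honouring row pitches.
void yuv420pToNv12(const uint8_t* srcY, int strideY,
                   const uint8_t* srcU, int strideU,
                   const uint8_t* srcV, int strideV,
                   uint8_t* dstY, int dstPitchY,
                   uint8_t* dstUV, int dstPitchUV,
                   int width, int height)
{
    for (int row = 0; row < height; ++row)
        std::memcpy(dstY + row * dstPitchY, srcY + row * strideY, width);

    // Chroma is subsampled 2x2, so half the rows and half the columns.
    for (int row = 0; row < height / 2; ++row) {
        uint8_t* out = dstUV + row * dstPitchUV;
        const uint8_t* u = srcU + row * strideU;
        const uint8_t* v = srcV + row * strideV;
        for (int col = 0; col < width / 2; ++col) {
            out[2 * col]     = u[col];  // U comes first in NV12
            out[2 * col + 1] = v[col];
        }
    }
}
```

After the repack you would Map the upload buffer, write each plane at its footprint offset, and issue one CopyTextureRegion per plane into the NV12 texture.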
How do I convert an Entry's text to a float or another numeric type so I can add the digits together?
I tried converting the Entry text to a float with float.Parse (and converting a float back to a string with .ToString()), but float.Parse is throwing an exception.
//input string
entry1 = n1.Text;
//convert
float floatn1 = float.Parse(entry1);
//show entered
//n1Label.Text = entry1;
entry2 = n2.Text;
float floatn2 = float.Parse(entry2);
float sum = floatn1 + floatn2;
string s = sum.ToString();
nsumLabel.Text = s;
A System.FormatException is thrown.
The exception is being thrown because the entry1 text is not in the correct format for a float.
Use float.TryParse to check:
//input string
entry1 = n1.Text;
if(!float.TryParse(entry1, out float floatn1)) {
// incorrect format
// tell the user to input a decimal number in a correct format
return;
}
// correct format, continue
float.TryParse returns false if the value in entry1 cannot be converted into a float (usually because of an incorrect format). If the format is correct, the float value is stored in the out parameter and you can use it.
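For comparison outside .NET, the same guard pattern - report failure instead of throwing - can be sketched in C++ with strtof (a hypothetical helper, not Xamarin code; note strtof honours the C locale's decimal separator, much as float.Parse honours the current culture):

```cpp
#include <cstdlib>
#include <string>

// Try to parse a float; returns false instead of throwing on bad input,
// mirroring float.TryParse.
bool tryParseFloat(const std::string& text, float& out)
{
    if (text.empty()) return false;
    char* end = nullptr;
    float value = std::strtof(text.c_str(), &end);
    if (end != text.c_str() + text.size()) return false; // trailing junk
    out = value;
    return true;
}
```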
I have a human bipedal animation file format that I would like to programmatically read into Maya using the C++ API.
The animation file format is similar to that of the Open Asset Importer's per-node animation structure.
For every joint, there is a series of up to 60 3D vector keys (to describe the translation of the joint) and 60 quaternion keys (to describe the rotation of the joint). Every joint is guaranteed to have the same number of keys (or no keys at all).
The length (time in seconds) of the animation can be specified or changed (so that you can set the 60 keys to happen over 2 seconds for a 30 FPS animation, for example).
The translations and rotations of the joints propagate down the skeleton tree every frame, producing the animation.
Here's a sample. Additional remarks about the data structure are added by the logging facility. I have truncated the keys for brevity.
Bone Bip01
Parent null
60 Position Keys
0 0.000000 4.903561 99.240829 -0.000000
1 0.033333 4.541568 99.346550 -2.809127
2 0.066667 4.182590 99.490318 -5.616183
... (truncated)
57 1.366667 5.049816 99.042770 -116.122604
58 1.400000 4.902135 99.241692 -118.754120
59 1.400000 4.902135 99.241692 -118.754120
60 Rotation Keys
0 0.000000 -0.045869 0.777062 0.063631 0.624470
1 0.033333 -0.043855 0.775018 0.061495 0.627400
2 0.066667 -0.038545 0.769311 0.055818 0.635212
... (truncated)
57 1.366667 -0.048372 0.777612 0.065493 0.623402
58 1.400000 -0.045869 0.777062 0.063631 0.624470
59 1.400000 -0.045869 0.777062 0.063631 0.624470
Bone Bip01_Spine
Parent Bip01
60 Position Keys
...
60 Rotation Keys
...
In C++, the data structure I currently have corresponds to this:
std::unordered_map<string, std::vector<Vector3>> TranslationKeyTrack is used to map a set of translation vectors to the corresponding bone.
std::unordered_map<string, std::vector<Quaternion>> RotationKeyTrack is used to map a set of rotation quaternions to the corresponding bone.
Additional notes: Some bones do not move relative to their parent bone; these bones have no keys at all (but still have an entry with 0 keys).
There are also some bones that have only rotation keys, or only position keys.
The skeleton data is stored in a separate file that I can already read into Maya using MFnIkJoint.
The bones specified in the animation file map 1:1 to the bones in that skeleton data.
Now I would like to import this animation data into Maya. However, I do not understand Maya's way of accepting animation data through its C++ API.
In particular, the MFnAnimCurve function set addKeyFrame or addKey accepts only a single floating point value tied to a time key, while I have a list of vectors and quaternions. MFnAnimCurve also accepts 'tangents'; after reading the documentation, I am still unsure of how to convert the data I have into these tangents.
My question is: How do I convert the data I have into something Maya understands?
I understand better with examples, so some sample code will be helpful.
So after a few days of trial-and-error and examining the few fragments of code around the Internet, I have managed to come up with something that works.
Given the abovementioned TranslationKeyTrack and RotationKeyTrack,
Iterate through the skeleton. For each joint,
Set the initial positions and orientations of the skeleton. This is needed because some joints do not move relative to their parents; if the initial positions and orientations are not set, the entire skeleton may move erratically.
Set the AnimCurve keys.
The iteration looks like this:
MStatus status;
MItDag dagIter(MItDag::kDepthFirst, MFn::kJoint, &status);
for (; !dagIter.isDone(); dagIter.next()) {
MDagPath dagPath;
status = dagIter.getPath(dagPath);
MFnIkJoint joint(dagPath);
string name_key = joint.name().asChar();
// Set initial position, and the translation AnimCurve keys.
if (TranslationKeyTrack.find(name_key) != TranslationKeyTrack.end()) {
auto pos = TranslationKeyTrack[name_key][0];
joint.setTranslation(MVector(pos.x, pos.y, pos.z), MSpace::kTransform);
setPositionAnimKeys(dagPath.node(), TranslationKeyTrack[name_key]);
}
// Set initial orientation, and the rotation AnimCurve keys.
if (RotationKeyTrack.find(name_key) != RotationKeyTrack.end()) {
auto rot = RotationKeyTrack[name_key][0];
joint.setOrientation(rot.x, rot.y, rot.z, rot.w);
setRotationAnimKeys(dagPath.node(), RotationKeyTrack[name_key]);
}
}
For brevity, I will omit setPositionAnimKeys and show only setRotationAnimKeys; the idea is the same for both. Note that I used kAnimCurveTL for the translation tracks.
void MayaImporter::setRotationAnimKeys(MObject joint, const vector<Quaternion>& rotationTrack) {
if (rotationTrack.size() < 2) return; // Skip empty or single-key tracks.
MFnAnimCurve rotX, rotY, rotZ;
setAnimCurve(joint, "rotateX", rotX, MFnAnimCurve::kAnimCurveTA);
setAnimCurve(joint, "rotateY", rotY, MFnAnimCurve::kAnimCurveTA);
setAnimCurve(joint, "rotateZ", rotZ, MFnAnimCurve::kAnimCurveTA);
MFnIkJoint j(joint);
string name = j.name().asChar();
for (int i = 0; i < rotationTrack.size(); i++) {
auto rot = rotationTrack[i];
MQuaternion rotation(rot.x, rot.y, rot.z, rot.w);
// Depending on your input, you may have to do additional processing
// to get the correct Euler rotation here.
auto euler = rotation.asEulerRotation();
MTime time(FPS*i, MTime::kSeconds); // FPS here is the frame period in seconds (e.g. 1.0/30), defined elsewhere.
rotX.addKeyframe(time, euler.x);
rotY.addKeyframe(time, euler.y);
rotZ.addKeyframe(time, euler.z);
}
}
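For reference, the conversion asEulerRotation() performs for Maya's default xyz rotation order (X applied first) looks roughly like the following pure-math sketch - this is not the Maya API, and if your joints use a different rotation order the formulas change:

```cpp
#include <cmath>

struct Euler { double x, y, z; };  // radians, xyz rotation order (X applied first)

// Convert a unit quaternion (x, y, z, w) to Euler angles.
Euler quatToEulerXYZ(double x, double y, double z, double w)
{
    Euler e;
    // rotation about X
    e.x = std::atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y));
    // rotation about Y, clamped to avoid NaN at the gimbal poles
    double s = 2.0 * (w * y - z * x);
    if (s > 1.0) s = 1.0;
    if (s < -1.0) s = -1.0;
    e.y = std::asin(s);
    // rotation about Z
    e.z = std::atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
    return e;
}
```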
Finally, the bit of code I used for setAnimCurve. It essentially attaches the AnimCurve to the joint. This bit of code is adapted from a mocap file importer here. Hooray open source!
void MayaImporter::setAnimCurve(const MObject& joint, const MString attr, MFnAnimCurve& curve, MFnAnimCurve::AnimCurveType type) {
MStatus status;
MPlug plug = MFnDependencyNode(joint).findPlug(attr, false, &status);
if (!plug.isKeyable())
plug.setKeyable(true);
if (plug.isLocked())
plug.setLocked(false);
if (!plug.isConnected()) {
curve.create(joint, plug, type, nullptr, &status);
if (status != MStatus::kSuccess)
cout << "Creating anim curve at joint failed!" << endl;
} else {
MFnAnimCurve animCurve(plug, &status);
if (status == MStatus::kNotImplemented)
cout << "Joint " << animCurve.name() << " has more than one anim curve." << endl;
else if (status != MStatus::kSuccess)
cout << "No anim curves found at joint " << animCurve.name() << endl;
curve.setObject(animCurve.object(&status));
}
}
My problem is that I don't have much experience with AutoCAD, so I don't know how to export a project as a good-quality image (PNG?) to insert into a LaTeX document.
Could you give me a hint?
Thank you
AutoCAD's Publish to Web printers are pretty bad. What I would do is print using the DWG to PDF printer or similar (there are a few in AutoCAD's default printer list), then convert that PDF to raster images using a second piece of software like Photoshop, GIMP, etc. There are even small tools that convert PDFs to JPGs, like TTRPDFToJPG3. If you have a specific idea of what kind of output you're looking for, please feel free to elaborate further. Cheers!
If you're looking for a programmatic way to capture the screen, here it is:
using acApp = Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.Runtime;
using System.Drawing.Imaging;
using System.Drawing;
namespace ScreenshotTest
{
public class Commands
{
[CommandMethod("CSS")]
static public void CaptureScreenShot()
{
ScreenShotToFile(
acApp.Application.MainWindow,
"c:\\main-window.png",
0, 0, 0, 0
);
ScreenShotToFile(
acApp.Application.DocumentManager.MdiActiveDocument.Window,
"c:\\doc-window.png",
30, 26, 10, 10
);
}
private static void ScreenShotToFile(
Autodesk.AutoCAD.Windows.Window wd,
string filename,
int top, int bottom, int left, int right
)
{
Point pt = wd.Location;
Size sz = wd.Size;
pt.X += left;
pt.Y += top;
sz.Height -= top + bottom;
sz.Width -= left + right;
// Set the bitmap object to the size of the screen
Bitmap bmp =
new Bitmap(
sz.Width,
sz.Height,
PixelFormat.Format32bppArgb
);
using (bmp)
{
// Create a graphics object from the bitmap
using (Graphics gfx = Graphics.FromImage(bmp))
{
// Take a screenshot of our window
gfx.CopyFromScreen(
pt.X, pt.Y, 0,0, sz,
CopyPixelOperation.SourceCopy
);
// Save the screenshot to the specified location
bmp.Save(filename, ImageFormat.Png);
}
}
}
}
}
Source: Taking screenshots of AutoCAD’s main and drawing windows using .NET
Thanks to everyone. I am saving the files as PDF and then using GIMP to convert them to PNG.
How can I convert single channel IplImage (grayscale), depth=8, into a Bitmap?
The following code runs, but displays the image in 256 colors, not grayscale (the colors are very different from the original).
btmap = gcnew Bitmap(
cvImg->width ,
cvImg->height ,
cvImg->widthStep ,
System::Drawing::Imaging::PixelFormat::Format8bppIndexed,
(System::IntPtr)cvImg->imageData)
;
I believe my problem lies in the PixelFormat. I've tried scaling the image to 16-bit and setting the pixel format to Format16bppGrayScale, but this crashes the form when loading the image.
The destination is a PictureBox in a C# form. Thanks.
You need to create a ColorPalette instance, fill it with a grayscale palette, and assign it to the btmap->Palette property.
Edit: Actually, constructing a ColorPalette directly is a bit tricky; it is easier to take the palette from btmap->Palette, modify its color entries, and assign it back (the Palette property returns a copy, so the assignment back is required). Set the entries to RGB(0,0,0), RGB(1,1,1) ... RGB(255,255,255). Something like this:
ColorPalette^ palette = btmap->Palette;
array<Color>^ entries = palette->Entries;
for ( int i = 0; i < 256; ++i )
{
entries[i] = Color::FromArgb(i, i, i);
}
btmap->Palette = palette; // Palette returns a copy, so assign it back
int intStride = (AfterHist.width * AfterHist.nChannels + 3) & -4;
Bitmap BMP = new Bitmap(AfterHist.width,
AfterHist.height, intStride,
PixelFormat.Format24bppRgb, AfterHist.imageData);
This is the correct way to create a Bitmap from an IplImage.
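As a side note on that stride expression: (width * nChannels + 3) & -4 rounds the raw row length up to the next multiple of 4, because Bitmap rows must start on 4-byte boundaries. The same arithmetic in isolation (a sketch, independent of OpenCV or GDI+):

```cpp
// Round a raw row length in bytes up to the next multiple of 4 -
// the stride (row pitch) System.Drawing.Bitmap expects.
int bitmapStride(int width, int channels)
{
    return (width * channels + 3) & ~3;  // ~3 == -4 in two's complement
}
```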