Projecting negative coordinates inside display area - omnet++

I am testing the RandomWaypointMobility with a constrained area of minX=-3000m, maxX=3000m, minY=-3000m and maxY=3000m. The display string sets bgp=6000,6000. The result is that nodes in the negative part of the coordinate system are rendered outside the display/canvas area.
Are there any parameters I can use to tell OMNeT++/INET that the origin of the coordinate system is at the center of the display/canvas? I have tried
*.visualizer.sceneVisualizer.sceneMaxX = 3000m
*.visualizer.sceneVisualizer.sceneMinX = -3000m
*.visualizer.sceneVisualizer.sceneMaxY = 3000m
*.visualizer.sceneVisualizer.sceneMinY = -3000m
*.visualizer.sceneVisualizer.sceneMaxZ = 3000m
*.visualizer.sceneVisualizer.sceneMinZ = -3000m
but it does not work as I hoped.
I realize that for RandomWaypointMobility I can just use a constrained area with positive coordinates only, which would keep objects within the canvas. However, my next task is to pull in mobility traces that include negative coordinates. Do I need to manually shift all coordinates so they become positive and stay within the canvas/display, or is there a smarter way of doing things?
Any hints appreciated!
Thanks,
Dragos

What you set is in fact bgb=6000,6000, which sets the size of the module. There were indeed plans to add a bgp tag directly to OMNeT++ that would introduce an offset, but in the end it was not implemented. The reason is that once you go down that rabbit hole, you also want to implement scaling, then rotation, and so on. So the default display-string based visualization was kept as simple as possible, and all of that transformation work was left to the model code.
Instead, SceneCanvasVisualizer in INET has viewScale and viewTranslation parameters that can be used for exactly this purpose.
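For the 6000m x 6000m area above, something along these lines in omnetpp.ini should shift the origin to the middle of the canvas. This is only a sketch; the exact value syntax for viewTranslation and viewScale may differ between INET versions, so check SceneCanvasVisualizer.ned in your release:
*.visualizer.sceneVisualizer.viewTranslation = "3000 3000"
# optionally shrink the whole scene if it no longer fits the window:
# *.visualizer.sceneVisualizer.viewScale = 0.5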

Related

D3 Domain Values Go Over Range

I am working with D3 to try to create a simple bar chart. My x-axis uses scaleTime and my y-axis uses scaleLinear. In the pictures below, you can see that the values I've put in for the domain (date values) go past the range. Shouldn't the ticks be confined to the line? I've been struggling with this for a while and haven't been able to find anything on the internet.
[Screenshots: the rendered graph and the inspect-element view]
EDIT
After applying .clamp, this is the result: [screenshot of the new graph]
And here is the main part of my code I'm looking at (some of the values are arbitrary): [screenshot of the code]
I think this is because clamping is disabled by default on time scales:
Constructs a new time scale with the specified domain and range, the default interpolator and clamping disabled.
It's hard to suggest a fix without seeing your code, but try something like this:
d3.scaleTime()
    .domain(domain)   // your [startDate, endDate] array
    .range(range)     // your [rangeMin, rangeMax] array
    .clamp(true)
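With clamping enabled, calling the scale with a date outside the domain returns the nearest end of the range instead of overshooting it. A small sketch (the dates and width are placeholders):
var x = d3.scaleTime()
    .domain([new Date(2020, 0, 1), new Date(2020, 11, 31)])
    .range([0, width])
    .clamp(true);
x(new Date(2021, 5, 1)); // returns width rather than a value past the end of the axis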

How to implement rc command on virtual joystick?

I need to implement a Tello command, rc a b c d, on a virtual joystick. From different forums I learned that a virtual joystick has to drive the Tello drone through rc commands, but I don't know how to implement that. Their SDK documentation describes the command as
a: left/right (-100~100), b: forward/backward (-100~100), c: up/down (-100~100), d: yaw (-100~100)
What do these negative values mean? How can I use the rc command to move the drone?
This is the virtual joystick code which I am using:
JoystickView joystick = (JoystickView) findViewById(R.id.joystickView);
joystick.setOnMoveListener(new JoystickView.OnMoveListener() {
    @Override
    public void onMove(int angle, int strength) {
        // code goes here
    }
});
The values -100~100 are normally the velocities for their respective axes. Depending on the coordinate system and the control modes set for the axes, the aircraft moves along the axis corresponding to the value. Based on the code you provided, I assume the strength value represents how far the stick is pushed (as a percentage) and the angle value gives the direction in which the stick is pushed.
For the virtual sticks you need to set the control modes and the coordinate system through the flight controller:
flightController.setRollPitchControlMode(RollPitchControlMode.VELOCITY);
flightController.setYawControlMode(YawControlMode.ANGULAR_VELOCITY);
flightController.setVerticalControlMode(VerticalControlMode.VELOCITY);
flightController.setRollPitchCoordinateSystem(FlightCoordinateSystem.BODY);
The modes chosen above ensure that the virtual controller behaves the same as the default physical remote controller.
Additionally, you need to activate the virtual sticks with the setVirtualStickModeEnabled method before you can use them.
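A minimal sketch of that step, assuming flightController has already been obtained from the connected aircraft and that TAG/Log are your usual Android logging setup:
flightController.setVirtualStickModeEnabled(true, djiError -> {
    if (djiError != null) {
        Log.e(TAG, "Enabling virtual sticks failed: " + djiError.getDescription());
    }
});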
Now, for continuous control over the aircraft, you need to send the virtual stick data at a rate of at least 5 Hz:
SendVirtualStickDataTask task = new SendVirtualStickDataTask();
this.timer = new Timer();
this.timer.schedule(task, 0, 200);
In this example, SendVirtualStickDataTask extends TimerTask and simply sends the current pitch, roll, yaw, and vertical throttle values to the drone inside its run() method, using the sendVirtualStickFlightControlData method from the DJI SDK.
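A rough sketch of such a task; flightController and the pitch/roll/yaw/throttle fields are assumed to be members of your activity that are updated from the joystick callback:
private class SendVirtualStickDataTask extends TimerTask {
    @Override
    public void run() {
        if (flightController != null) {
            flightController.sendVirtualStickFlightControlData(
                    new FlightControlData(pitch, roll, yaw, throttle),
                    djiError -> { /* optionally log the error */ });
        }
    }
}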
Finally, the current pitch, roll, yaw, and vertical throttle values are set inside the onMove() method you posted in your question. For example, you can use sin and cos to split the strength value into its x and y components, something like this:
pitch = (float) (Math.cos(Math.toRadians(angle)) * strength);
roll = (float) (Math.sin(Math.toRadians(angle)) * strength);
Note that Math.cos and Math.sin expect radians, hence the Math.toRadians conversion, and that the result is cast to float. Depending on how your joystick library reports the angle, you may still need to offset or remap it.
The second joystick can be used to control the vertical throttle and the yaw. You will need a bit of fine-tuning and testing.
For the control modes/coordinate system, the following DJI SDK Documentation is helpful (scroll down to "Virtual Sticks"):
https://developer.dji.com/mobile-sdk/documentation/introduction/component-guide-flightController.html
DJI also has a basic code example for virtual stick usage:
https://developer.dji.com/mobile-sdk/documentation/android-tutorials/SimulatorDemo.html
I highly recommend using the DJI Flight Assistant 2 Software to test your code before you attempt to fly in the real world.

ID2D1RenderTarget::GetSize returning physical pixels instead of DIPs

I'm currently getting started with Win32 and Direct2D and reached the chapter on DPI and DIP. At the very bottom it says ID2D1RenderTarget::GetSize returns size as DIP and ID2D1RenderTarget::GetPixelSize as physical pixels. Their individual documentation confirms that.
However I cannot observe that ID2D1RenderTarget::GetSize actually returns DIP.
I tested it by
setting the scale of one of my two otherwise identical displays to 175%,
adding <dpiAwareness xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">PerMonitorV2</dpiAwareness> to my application manifest,
obtaining
D2D1_SIZE_U sizeP = pRenderTarget->GetPixelSize();
D2D1_SIZE_F size = pRenderTarget->GetSize();
in method MainWindow::CalculateLayout from this example (and printing the values),
and moving the window from one screen to the other, and arbitrarily resizing it.
I can see the window border changing size when moving from one display to the other. However, the values in sizeP and size (besides one being int and the other float) are always identical and correspond to the physical pixel size of the ID2D1HwndRenderTarget.
Since I do not expect the documentation to be flawed, I wonder what I am missing to actually get the DIP of the window of the ID2D1HwndRenderTarget pRenderTarget.
The size is relative only to the DPI of the render target, which is set using ID2D1RenderTarget::SetDpi. That value is not automatically connected to the DPI provided by the display system, which can be queried using ID2D1Factory::GetDesktopDpi or GetDpiForMonitor.
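A minimal sketch of wiring this up in a PerMonitorV2 process on Windows 10 or later, assuming m_hwnd is the window handle from the linked example; you would re-run it whenever the window receives WM_DPICHANGED:
// Query the DPI of the monitor the window is currently on and hand it to the
// render target, so that GetSize() reports DIPs rather than physical pixels.
float dpi = static_cast<float>(GetDpiForWindow(m_hwnd));
pRenderTarget->SetDpi(dpi, dpi);
// From now on GetSize() equals GetPixelSize() scaled by 96/dpi.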

How to project (or paste) a panorama onto a model?

Before asking, I searched in many places and found some similar ideas, but none with a solution for my case. My question can also be described as: how do I recalculate a model's UVs to fit a panorama that was designed for a six-face skybox?
Recently I came upon a unique way to get a fluent 3D roaming experience on Matterport's official gallery: https://matterport.com/gallery/
I just want to know how they did that. Their product is very fluent when switching between panorama pictures.
After roaming around many times, I found the secret. I realized that the carrier they project the panorama onto is not a box or a sphere, but the object they show first! The evidence is that when you switch viewpoints, objects such as chairs and tables cast a kind of ghost image of themselves (one chair appears twice: one image standing up and the other lying on the floor).
With the objects in the panorama pasted onto their corresponding geometry, and with the depth information that provides, the transition between viewpoints becomes much more fluent. (As for why they do not show the scanned objects directly, I think it is because of hardware limits: the many irregular faces coming from the scanning equipment cannot be used directly.)
I want to use this idea in my project. I have a group of six panorama images that map onto a BoxGeometry perfectly, and I want to paste them onto the model instead, but I am stuck projecting the full 360 degrees. I have figured out how to project one direction, but I cannot project the remaining five.
var _p = houseObject.geometry.attributes;
for (var i = 0; i < _p.position.count; i++) {
    // transform the vertex to world space, then project it into camera1's normalized device coordinates
    var projected = new THREE.Vector3(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2])
        .applyMatrix4(houseObject.matrixWorld)
        .project(camera1);
    // keep only the vertices that fall inside this camera's view frustum
    if (projected.x < 1 && projected.x > -1 && projected.y < 1 && projected.y > -1) {
        VerticesArray1.push(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2]);
        uvArray1.push(projected.x * 0.5 + 0.5, projected.y * 0.5 + 0.5);
    }
}
Yes, I succeeded in calculating one direction, BUT I cannot deal with triangle faces that span two or more view frustums, such as a face at the edge of the box.
How should I deal with this problem? Or was I heading in the wrong direction from the start, and if so, which direction should I take?
After asking many people, I found that I need to use a ShaderMaterial in three.js together with a cube texture and samplerCube/textureCube. With that I can look up exactly the pixel color I need!
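A minimal sketch of that approach, assuming cubeTexture is a THREE.CubeTexture built from the six panorama faces (for example via THREE.CubeTextureLoader) and panoCenter is the world position the panorama was captured from:
var material = new THREE.ShaderMaterial({
    uniforms: {
        panorama: { value: cubeTexture },
        panoCenter: { value: new THREE.Vector3(0, 0, 0) }
    },
    vertexShader: [
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube panorama;',
        'uniform vec3 panoCenter;',
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    // sample the cube map along the direction from the capture point to the fragment',
        '    vec3 dir = normalize(vWorldPosition - panoCenter);',
        '    gl_FragColor = textureCube(panorama, dir);',
        '}'
    ].join('\n')
});
Because the lookup is by direction rather than by projecting through a single camera, triangles that straddle two faces of the box are handled automatically.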

Automatic selection of control points in Matlab

Is there a way to select the control points automatically in Matlab instead of manually selecting them by cpselect? Thank you very much.
I just recently worked on a project where I had to do the same thing -- eventually I found that you can select control points automatically, but only if you use automatic selection to find the control points for both the unregistered image and the orthophoto. (The control points used to define image transforms are stored in matrices, so if you can get your automated system to output a set of point coordinates in matrix form, you can pass them straight to cp2tform and bypass cpselect entirely.) On the other hand, cpselect stores corresponding pairs of image points in some kind of special data structure, so I was never able to just pass it a set of control points for one image while leaving the other image blank.
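A minimal sketch of that approach, assuming the Computer Vision Toolbox feature functions are available and that unregistered and ortho are your two images (the remaining names are just for illustration):
movingGray = rgb2gray(unregistered);
fixedGray  = rgb2gray(ortho);
ptsMoving = detectSURFFeatures(movingGray);
ptsFixed  = detectSURFFeatures(fixedGray);
[featMoving, validMoving] = extractFeatures(movingGray, ptsMoving);
[featFixed,  validFixed]  = extractFeatures(fixedGray,  ptsFixed);
pairs = matchFeatures(featMoving, featFixed);
% Nx2 matrices of [x y] coordinates -- exactly the form cp2tform expects,
% so cpselect is bypassed entirely.
movingPts = double(validMoving(pairs(:, 1)).Location);
fixedPts  = double(validFixed(pairs(:, 2)).Location);
tform = cp2tform(movingPts, fixedPts, 'similarity');
In newer MATLAB releases, fitgeotrans accepts the same point matrices and replaces cp2tform.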
I don't have the Matlab Image Processing Toolbox, but I see from the documentation that cpselect can be called with an argument specifying the initial set of control points. Can you reduce your task to automating the creation of that initial set?
