How to rotate the scene (QGraphicsScene) but not the view (QGraphicsView) in a pyqtgraph PlotWidget

I'm using a pyqtgraph PlotWidget to draw something, and that works well. The problem appears when I want to rotate the "view".
Here is the first pic, degree 0:
[1]: https://i.stack.imgur.com/gQnGd.png
Then after rotating with a transform (code below), at 40 degrees for example:
[2]: https://i.stack.imgur.com/qX12z.png
As marked in pic 2, after the rotation the "out of view" area should still be filled with the grid, and the items that were outside the visible area in pic 1 should become visible too.
Code:
# plotwidget is initialized in a ui file with a QFrame parent,
# which is also the parent of the buttons and sliders.
center = QRectF(0, 0, self.plotwidget.size().width(),
                self.plotwidget.size().height()).center()
transform = self.plotwidget.transform()
transform.translate(center.x(), center.y())
transform.rotate(angle)
transform.translate(-center.x(), -center.y())
self.plotwidget.setTransform(transform)
I checked the APIs of QGraphicsScene and QGraphicsView; only QGraphicsView has a rotate method, which is effectively the same as applying a rotation with a transform.
So I think rotating the QGraphicsView (or PlotWidget) rotates the view widget itself, and the QGraphicsScene inside it along with it. But how can I rotate the scene only?
Thanks in advance for your help.
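The translate → rotate → translate-back pattern in the snippet above is the standard "rotate about a point" composition. A minimal, framework-free sketch of that math (plain Python, hypothetical helper name, not pyqtgraph API) shows why the pivot point stays fixed while everything else rotates around it:

```python
import math

def rotate_about(point, center, angle_deg):
    """Rotate `point` around `center` by `angle_deg` degrees,
    mirroring the translate -> rotate -> translate-back transform."""
    angle = math.radians(angle_deg)
    # Translate so the pivot sits at the origin.
    x = point[0] - center[0]
    y = point[1] - center[1]
    # Apply the plain rotation about the origin.
    rx = x * math.cos(angle) - y * math.sin(angle)
    ry = x * math.sin(angle) + y * math.cos(angle)
    # Translate back to the pivot.
    return (rx + center[0], ry + center[1])
```

Qt's QTransform composes the same three steps, so the point you pass as the pivot maps to itself; that is why the widget appears to spin in place while its contents move.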

Related

d3 trigger mouse click on specific coordinates

I have an issue with triggering mouse events on an SVG.
I am using the library d3.js to handle some graphics tasks. Specifically, when I manually click on a position on the SVG, d3 draws a red or blue circle on it (depending on the path I am clicking on) and then returns the d3.mouse coordinates of the click.
Now, suppose I have a set of coordinates and want to programmatically trigger a click on the corresponding point on the SVG, so that it draws a red or blue circle automatically. How can I achieve that?
I have read many solutions here, but none allows clicking on specific coordinates (while I can easily click on a specific path, for example).
My ideal function would be:
var svgd3 = d3.select('svg#id_svg');
function d3click(x, y, svgd3) {
    // performs the click at [x, y]; x and y are relative
    // coordinates that depend only on the svg viewBox
    ...
}
Any idea?
Thanks a lot!

How to position a button using X and Y coordinates in xamarin forms?

I want to position a button at a certain pixel distance from the left and top edges. I have the exact position in pixels of where to place the button. But how can I do so?
Try AbsoluteLayout:
https://developer.xamarin.com/guides/xamarin-forms/user-interface/layouts/absolute-layout/
Another option is to create a view renderer that implements this functionality on the native side.

orthographic view on object with combined camera on three.js

I am trying to use the combined camera (found under "Examples").
The problem is that when I use it in orthographic mode, I still see the arrows and the box helper as in the perspective view.
For example, when I zoom in with the mouse scroll, the plane stays the same size (as it should in an orthographic view), but the arrows and the small box between them get smaller or bigger.
When I debugged the render function, I saw that the camera was still in orthographic mode while it rendered the arrows.
Any idea how I can make all the objects render orthographically while still using the combined camera?
Edit:
I am not sure which part of the code I should post, so I added a picture to describe my problem.
You can see that I am using an orthographic camera, and when I try to zoom in, the axis arrows get bigger or smaller.
The difference in the plane when zooming
Found a possible answer which worked for me:
In TransformControls.js, change the update function to:
scale = (camera.inOrthographicMode == true) ? 100 : worldPosition.distanceTo(camPosition) / 6 * scope.size;

React Native: Continue animation with last velocity of touch gesture

In my React Native application I have some tiles (wrapped in a View for the example) which are half of the full width wide. They act as buttons and slide to the opposite side to open a menu. When I perform a swipe gesture and release the finger before the slide reaches its final position, I want the slide to animate to its final 'opened' position. The animation should start with the last velocity of the touch gesture for a smooth impression.
I implemented different variations but did not find a good solution (you can find my test component in my GitHub repository). My View has a PanResponder to manage the gesture and get the velocity. I started using the Animated library, but the provided methods do not solve my problem. The only method that accepts an initial velocity for the animation is decay, but I can't pass a parameter telling the animation where to stop. With a timing animation I can set a final value but cannot pass an initial velocity (so the animation starts with a velocity of 0, which looks very jumpy). I tried to combine these two methods, but that does not work properly.
On iOS I could use a horizontal ScrollView with pagingEnabled, which shows the desired effect - but then I do not have the feature on Android.
Any ideas how I can solve this problem and show a smooth animation, starting with an initial velocity and ending on a given position, after the touch gestures end?
Thanks for your help!
EDIT: I added the link to my latest test component.
You can get a close approximation of the release velocity by setting the duration of the timing animation:
const duration = Math.abs((this.props.MAXDISTANCE - Math.abs(distanceMoved)) / velocity);
MAXDISTANCE is your final position,
distanceMoved is the current position (gestureState.dx), and
velocity is the current velocity (gestureState.vx).
You can use Animated.decay or Animated.spring to achieve this effect.
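The duration formula above is plain arithmetic. A small Python sketch (hypothetical function name; it assumes distance and velocity use consistent units, e.g. px and px/ms as gestureState reports them) makes the idea explicit: the remaining distance divided by the release speed gives a duration at which the timing animation starts at roughly the gesture's velocity.

```python
def timing_duration(max_distance, distance_moved, velocity):
    """Approximate a timing-animation duration so the animation
    continues at roughly the gesture's release velocity.

    max_distance   -- the slide's final position (e.g. px)
    distance_moved -- current gesture offset, e.g. gestureState.dx
    velocity       -- release velocity, e.g. gestureState.vx
    """
    # remaining distance / speed = time to cover it at that speed
    return abs((max_distance - abs(distance_moved)) / velocity)

# e.g. 150 px left to travel at 0.5 px/ms -> 300 ms
print(timing_duration(200, 50, 0.5))
```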

xna wp7 how to let the user crop his photo

Hi,
how can I let the user crop his picture before choosing one in the PhotoChooserTask?
(MSDN says that PixelHeight/PixelWidth don't work yet.)
Thanks
You would have to do the pixel operations yourself after the image has been selected. You can render the image to the screen, then look for two touch points to determine the x/y and height/width, then create a new texture with those dimensions and render to it as a RenderTarget. You can then save this new texture or do whatever else you need with it.
I'm not sure whether Windows Phone 7 supports render targets, but if it does, you can just have the user drag a rectangle, create a RenderTarget of that size, and then draw to it using the rectangle as the source rectangle of the image you want to crop.
