I want to automate this game.
https://fernandoruizrico.com/examples/phaser/first-game/index.html
In this, how do I make the object reach a specific position on the screen?
I can programmatically drag the joystick, but I want to know the logic/algorithm to implement for the joystick.
E.g., if I want the object to go toward the top right of the screen, I will drag the joystick from the center at an angle of 45 degrees. Similarly, I want to know what logic should be implemented to make the object move to a specific position on the screen.
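For what it's worth, the 45-degree example generalizes: the stick direction is just the angle of the vector from the object's current position to the target, recomputed every frame. Here is a minimal TypeScript sketch under that assumption; the deadzone and stick-radius values, and the idea that you can set the stick offset directly, are mine, not from the game.

```typescript
interface Point { x: number; y: number }

const DEADZONE = 8;      // px: treat the object as "arrived" inside this radius
const STICK_RADIUS = 50; // px: maximum drag distance of the virtual stick

// Offset (relative to the stick's center) that steers `object` toward `target`.
// Returns {x: 0, y: 0} to mean "release the stick". Recompute this every frame.
function joystickOffsetToward(object: Point, target: Point): Point {
  const dx = target.x - object.x;
  const dy = target.y - object.y; // note: screen y grows downward
  if (Math.hypot(dx, dy) < DEADZONE) return { x: 0, y: 0 }; // close enough
  const angle = Math.atan2(dy, dx); // generalizes the fixed 45-degree case
  return {
    x: STICK_RADIUS * Math.cos(angle),
    y: STICK_RADIUS * Math.sin(angle),
  };
}
```

Because screen y grows downward, "top right" corresponds to a negative dy here; atan2 handles all quadrants, so you never have to special-case directions.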
So I know how to capture a photo (either by using camera or image_picker), I know how to crop those images to a certain square/rectangle dimension, and I know how to add a rotate button with fixed-angle functionality (e.g. rotate 90 or 180 degrees).
But I would like to know how to rotate the captured image based on the user's dragging movement.
So if, for example, there is an ID card slanted at 20 degrees, I want to be able to drag it and make it straight.
Or is there any method to auto-detect the frame of the ID card and rotate it automatically?
I have tried the edge_detection and flutter_realtime_detection libraries, but edge_detection didn't detect the edges properly, especially when there are other objects in the background behind the card, and flutter_realtime_detection falsely detected my ID card as a book and framed it with far too much distance from the actual edges.
And I don't think it uses the Transform widget either.
Can you please help me figure out how to do this? Thanks.
Edit: I've found how to rotate using a finger gesture in https://github.com/flutter/flutter/pull/17345
What I still need is the part that auto-detects the ID card frame and auto-rotates it.
Thanks a lot.
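In case it helps others: independent of any particular plugin, drag-to-rotate is just accumulating the angle the pointer sweeps around the image's center. A rough TypeScript sketch of that math (names and structure are illustrative; in Flutter you would feed the resulting angle into something like Transform.rotate):

```typescript
interface Point { x: number; y: number }

// Angle of the pointer as seen from the image's center.
const angleFrom = (center: Point, p: Point): number =>
  Math.atan2(p.y - center.y, p.x - center.x);

let imageRotation = 0; // radians; apply this to the image widget
let lastAngle = 0;

function onDragStart(center: Point, pointer: Point): void {
  lastAngle = angleFrom(center, pointer);
}

function onDragUpdate(center: Point, pointer: Point): void {
  const current = angleFrom(center, pointer);
  let delta = current - lastAngle;
  // Normalize so crossing the atan2 boundary (+/- pi) doesn't cause a jump.
  if (delta > Math.PI) delta -= 2 * Math.PI;
  if (delta < -Math.PI) delta += 2 * Math.PI;
  imageRotation += delta; // add the angle swept since the last event
  lastAngle = current;
}
```

Straightening the 20-degree card is then just the user dragging until imageRotation cancels the slant; an auto-rotate feature would instead set imageRotation from a detected edge angle.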
I am trying to do some CAD operations like zoom, rotate, and pan. I want to provide a button for each operation and perform the corresponding operation on click.
The default controls already allow the user to perform those operations with the mouse. Button controls wouldn't be as smooth or freeform as the default controls, since a click has no direction or duration. For instance, a rotate button would only be able to rotate a certain number of radians at a time in one direction, while the default controls allow for any amount of rotation.
But if you really do want buttons, I'd suggest implementing them in the following ways:
Zoom: Two buttons to move the camera's position a certain amount forward or backward.
Rotate: Buttons to rotate the object n radians.
Pan: Buttons to shift the camera's position left or right.
You may notice that none of these solutions use TrackballControls. TrackballControls is a set of mouse controls; it's not meant to be mapped to button commands. You can achieve the same result much more simply by assigning functions to the buttons that change the object's or camera's rotation or position. I think you'll find the following reference useful: https://threejs.org/docs/#api/en/core/Object3D. Look at the rotate and translate methods.
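A hedged sketch of what those button callbacks might look like with three.js, using the Object3D rotate/translate methods mentioned above (the step sizes, element ids, and object names are placeholders, not from your scene):

```typescript
import * as THREE from 'three';

// Assumed to exist elsewhere: a scene, a render loop, and the HTML buttons.
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 1000);
const target = new THREE.Object3D(); // the CAD model being viewed

const ZOOM_STEP = 0.5;            // world units per click
const ROTATE_STEP = Math.PI / 36; // 5 degrees per click
const PAN_STEP = 0.5;             // world units per click

// Zoom: move the camera along its own viewing axis (cameras look down -Z).
function zoomIn(): void  { camera.translateZ(-ZOOM_STEP); }
function zoomOut(): void { camera.translateZ(ZOOM_STEP); }

// Rotate: spin the object n radians around its local Y axis.
function rotateLeft(): void  { target.rotateY(ROTATE_STEP); }
function rotateRight(): void { target.rotateY(-ROTATE_STEP); }

// Pan: shift the camera sideways in its own local space.
function panLeft(): void  { camera.translateX(-PAN_STEP); }
function panRight(): void { camera.translateX(PAN_STEP); }

// Wiring, assuming buttons with these ids exist in the page.
document.getElementById('zoom-in')?.addEventListener('click', zoomIn);
document.getElementById('rotate-left')?.addEventListener('click', rotateLeft);
document.getElementById('pan-left')?.addEventListener('click', panLeft);
```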
Hi, I would like to implement a rotating image view where the user can rotate the image view by touching only the corners.
How could I implement that? Please help.
I would like to implement a rotating image view where the user can rotate the image view by touching only the corners.
There are two parts to this task, and I'm not sure which one you're asking about:
How to do something in response to a dragging gesture.
How to rotate a view.
Whether you're talking about iOS or macOS, there are at least two options for responding to a dragging operation. One is to track the touch or mouse event yourself. Touches and mouse interactions both have a beginning, when the finger touches the screen or the mouse button is depressed; a middle, when the location of the finger or cursor may change; and an end, when the finger leaves the screen or the mouse button is released. Both operating systems have events for these things that are sent through their respective event handling systems.
An easier method is to use a gesture recognizer: they're usually easier to get right, and the existing gesture recognizers encourage you to implement the expected behavior by making it the easiest option. For example, UIKit and AppKit each have a rotation gesture recognizer that recognizes two touches moving in opposite directions as a rotation gesture, since that's a common way to rotate objects. But you can also implement your own gesture recognizer that notices when a touch happens a) for longer than some minimum time, b) within some minimum distance of a corner, and c) with movement in a direction that would cause rotation. So, if you want to handle dragging in order to rotate something, look into gesture recognizers.
UIView and NSView both provide ways to rotate a view in its superview's coordinate system. NSView has a frameRotation property that you can set, and UIView has a transform property to which you can apply Core Graphics functions like CGAffineTransformRotate().
In summary, what you should do is to create a gesture recognizer subclass that recognizes the rotation gesture that you want and rotates the view that it's attached to. Then instantiate that gesture recognizer and apply it to the view that you want to rotate.
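To make the custom-recognizer idea concrete, here is a platform-agnostic sketch of its logic in TypeScript: begin tracking only if the touch starts near a corner, then accumulate the angle swept around the view's center. All thresholds and names are illustrative; on iOS you would put this logic in a UIGestureRecognizer subclass and apply the accumulated angle via the view's transform.

```typescript
interface Point { x: number; y: number }
interface ViewFrame { center: Point; corners: Point[] }

const CORNER_RADIUS = 44; // pt: how close to a corner the touch must start

let tracking = false;
let lastAngle = 0;
let viewRotation = 0; // radians; apply via transform / frameRotation

const angleAround = (c: Point, p: Point): number =>
  Math.atan2(p.y - c.y, p.x - c.x);

function touchBegan(view: ViewFrame, touch: Point): void {
  // Only recognize the gesture if the touch lands near one of the corners.
  tracking = view.corners.some(
    (corner) => Math.hypot(touch.x - corner.x, touch.y - corner.y) <= CORNER_RADIUS
  );
  if (tracking) lastAngle = angleAround(view.center, touch);
}

function touchMoved(view: ViewFrame, touch: Point): void {
  if (!tracking) return;
  const current = angleAround(view.center, touch);
  let delta = current - lastAngle;
  if (delta > Math.PI) delta -= 2 * Math.PI;  // handle atan2 wrap-around
  if (delta < -Math.PI) delta += 2 * Math.PI;
  viewRotation += delta;
  lastAngle = current;
}

function touchEnded(): void {
  tracking = false;
}
```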
I have a cupboard with 9 boxes. On one of them I have an animation which opens/closes the box. It only changes the X coordinate of the box, but I can't apply this animation to the other boxes, because the animation would move them to the coordinates of the first box.
In debug mode, the Keep Original Position XZ parameter is disabled. I can't understand what is wrong.
Should I create 9 similar animations for the 9 boxes?
I know that it is possible to animate things using relative positions on the UI when using anchors, but there does not seem to be any clean solution for 3D objects... This post offers what seems to be the "best" solution for now (it uses an empty parent transform to move the animated object correctly...).
You should be able to apply the animation to any object. I would recommend making a prefab of the "box" with the animation attached, then using that prefab for each box. Honestly, I don't have much experience with animations of 3D objects, but even my 2D animations are in a 3D space, and each object animates properly with the same animations individually, regardless of its location.
I am using Unity3D, and I have a function called inside OnGUI to lay out the various GUI components of my application. Ordinarily, the labels and buttons are all inside a certain Rect that I supply, which is centered on the screen.
No problem there... however, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just baked into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons drawn in the function should still respond normally.
I was rather hoping I could just specify some values in GUI.matrix to map the rectangle to a trapezoid, but my initial exploration seems to show that the GUI elements don't use homogeneous coordinates, and everything still shows up as rectangular.
Is there any way to do this with Unity, ideally without requiring access to pro-only features?
As of now, the Unity3D GUI system isn't very flexible. The new GUI system is one of the features still not released in Unity 4 (we are all waiting for it).
From my point of view it has several problems, particularly:
You are forced to lay out components following the flow of the code, instead of having a more declarative (or at least more structured) way to do it.
It's quite inefficient (at least one draw call per button).
It isn't flexible at all. Adding, removing, and enabling/disabling buttons can quickly become painful operations as the number of buttons increases.
"however, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just baked into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons drawn in the function should still respond normally."
This is quite hard, if not impossible, to achieve using Unity's GUI classes.
I see 2 possibilities:
Don't use the GUI classes for it. If your GUI is simple enough, you can implement your own (even 3D) buttons using, for example:
A mesh (a plane or a trapezoid mesh) with a texture for the button background
TextMesh for drawing 3D text
Raycasting to check if a button has been pressed
Use a library that implements a more advanced GUI system, such as NGUI
When I ran into the same problem, I just used normal 3D GameObject cubes with textures and handled OnMouseDown (PC/Mac) or raycasting (Android/iOS) on them. I guess that's how everyone does it.
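For reference, the math behind such a raycast hit test is small. A platform-agnostic TypeScript sketch (in Unity you would normally just attach a collider and let Physics.Raycast do this for you; everything below is illustrative):

```typescript
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// A button modeled as a rectangle: one corner plus two perpendicular edge
// vectors, with a precomputed plane normal.
interface ButtonQuad { origin: Vec3; edgeU: Vec3; edgeV: Vec3; normal: Vec3 }

// True if the ray (from the camera through the click point) hits the quad.
function hitsButton(rayOrigin: Vec3, rayDir: Vec3, quad: ButtonQuad): boolean {
  const denom = dot(rayDir, quad.normal);
  if (Math.abs(denom) < 1e-6) return false;  // ray parallel to the button plane
  const t = dot(sub(quad.origin, rayOrigin), quad.normal) / denom;
  if (t < 0) return false;                   // button is behind the ray origin
  // Express the hit point in the rectangle's edge coordinates.
  const hit = sub(add(rayOrigin, scale(rayDir, t)), quad.origin);
  const u = dot(hit, quad.edgeU) / dot(quad.edgeU, quad.edgeU);
  const v = dot(hit, quad.edgeV) / dot(quad.edgeV, quad.edgeV);
  return u >= 0 && u <= 1 && v >= 0 && v <= 1; // inside the rectangle?
}
```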