I am having difficulty finding out how to get the tap coordinates from the tapEvent object that is passed to my custom handler (I couldn't find its specification anywhere). There is also a singleTap event, which passes custom variables "x" and "y" (the coordinates, I guess), but I can't trigger that one in the Emulator.
The point is that I am working on an application where I have a big element and I need to know exactly where the user tapped (either as a global screen coordinate or as a coordinate relative to my element).
Here is some example code:
//inside of assistant's setup method:
Mojo.Event.listen(this.controller.get('elem'), Mojo.Event.tap, this.listenSingleTap.bindAsEventListener(this));
//custom handler:
SomeAssistant.prototype.listenSingleTap = function(singleTapEvent){
    this.someOtherMethod(singleTapEvent.x, singleTapEvent.y); // This is wrong and doesn't work - how am I supposed to get the tap coordinates?
}
Thank you very much for any suggestions.
The x and y coordinates for the tap event are in the "down" property of the event.
Ex.
MyAssistant.prototype.setup = function() {
    Mojo.Event.listen(this.controller.get('elem'), Mojo.Event.tap, this.handleTap.bind(this));
}

MyAssistant.prototype.handleTap = function(event) {
    Mojo.Log.info("tap down at x: " + event.down.x + " y: " + event.down.y);
}
I'm trying to create some sort of "item displayer" in a game to showcase items or act as an icon in the inventory (it will include information like item tier, item name, etc.).
To achieve this, I wanted to create an ItemDisplay class extending FlxSpriteGroup and put the frame, background and info for the item inside it as sprites, so that I would be able to work with them all as if they were a single sprite.
So I did just that, but the group isn't showing up when the ItemDisplay object is created and supposedly added to the FlxState.
After some troubleshooting, I discovered that the object exists, but isOnScreen() returns false, and I don't know why.
Here's the code I'm using to create the ItemDisplay object:
var itd:ItemDisplay = new ItemDisplay(FlxG.width / 2, FlxG.height / 2, test_sword);
add(itd);
...and here's the ItemDisplay class in all its glory:
class ItemDisplay extends FlxSpriteGroup
{
    override public function new(posX:Float, posY:Float, itemToShow:Item)
    {
        super();
        x = posX;
        y = posY;

        // create sprites
        var bckgr:FlxSprite = new FlxSprite(x, y);
        var itPng:FlxSprite = new FlxSprite(x, y);
        var itFrm:FlxSprite = new FlxSprite(x, y);

        // load sprite graphics (the problem's not here, I checked)
        bckgr.loadGraphic("assets/images/ui/item_framing/ifbg_" + itemToShow.tier + "Tier.png");
        itPng.loadGraphic(itemToShow.pngPath);
        itFrm.loadGraphic("assets/images/ui/item_framing/item_frame.png");

        // add all sprites to the group
        this.add(bckgr);
        this.add(itPng);
        this.add(itFrm);
    }
}
(I'm running the code on macOS, not HTML5.)
If you have any idea why the ItemDisplay is not showing up, please explain it to me, as I'm not that good of a programmer and I might have missed something.
Thank you ^-^
Never mind, as I thought, it was my own silly error: when creating the sprites, I set their positions to x and y to make them match the group's position.
I just found out that the sprites treat the group's x and y as (0, 0) and calculate their own positions relative to that.
So, by setting the sprites' x/y to the same values as the group's, I was essentially doubling the values and putting the sprites outside the screen.
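In case it helps anyone else, here is a minimal sketch of the corrected constructor based on that finding (the child sprites are created at 0, 0 instead of at the group's x/y, since their coordinates are relative to the group):

override public function new(posX:Float, posY:Float, itemToShow:Item)
{
    super();
    x = posX;
    y = posY;

    // (0, 0) is relative to the group, so the children land exactly at the
    // group's own position on screen.
    var bckgr:FlxSprite = new FlxSprite(0, 0);
    var itPng:FlxSprite = new FlxSprite(0, 0);
    var itFrm:FlxSprite = new FlxSprite(0, 0);

    bckgr.loadGraphic("assets/images/ui/item_framing/ifbg_" + itemToShow.tier + "Tier.png");
    itPng.loadGraphic(itemToShow.pngPath);
    itFrm.loadGraphic("assets/images/ui/item_framing/item_frame.png");

    this.add(bckgr);
    this.add(itPng);
    this.add(itFrm);
}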
lmao sorry for bad english
I'm currently trying to convert a slider into a rotary knob and having a tough time of it all. The knob works in design, but I'm struggling to set the correct value within the knob and, as a result, change the value within the app in real time.
I'm using AVAudio to set up an engine for people to record with that has effects like Reverb and Delay.
The reverb value is set as follows within the audio class:
@Published var reverbValue: Float = 0.0
and later on referenced in a function to change its value:
func changeReverbValue() {
    setReverb.wetDryMix = reverbValue
}
When I use a regular slider as follows, the change works:
Slider(value: $recordingsettings.reverbValue, in: Float(0.0)...recordingsettings.reverbMaxValue, onEditingChanged: { _ in
    self.recordingsettings.changeReverbValue()
}).accentColor(Color.white)
As mentioned, the knob works fine in its design:
ZStack {
    Knobs(color: .orange)
        .rotationEffect(
            .degrees(max(0, initialCircleState()))
        )
        .gesture(DragGesture(minimumDistance: 0)
            .onEnded({ _ in
                startDragValue = -1.0
            })
            .onChanged { dragValue in
                let touchDifferential = touchDifference(dragValue)
                setInitialDragVal()
                let computedTouch = computeTouch(touchDifferential)
                print(computedTouch)
                baseValue = getBaseVal(computedTouch)
                let normalizeVal = baseValue / touchAmt
                value = Float(normalizeVal * rngOffset(range: bounds) + bounds.lowerBound)
                print("value is: \(value)")
            }
        )
    GrayCircle(bounds: bounds)
    OrangeCircle(baseValue: $value, bounds: bounds)
}
.rotationEffect(bounds.lowerBound < 0 ? .degrees(90) : .degrees(107))
I've had some success connecting the knob to the reverb value, to the point where the slider also moves when the rotary knob does; however, the changeReverbValue function doesn't work.
The success comes from setting the value within the knob view as follows:
@Binding var value: AUValue
And then referencing the knob in the same view struct as the slider:
Knob(value: $recordingsettings.reverbValue, bounds: 0...CGFloat(recordingsettings.reverbMaxValue))
    .onTapGesture {
        self.recordingsettings.changeReverbValue()
    }
The tap gesture was a way in which I thought it might call changeReverbValue when the knob was turned, but to no avail.
The binding value passed into the knob also has other challenges. For some reason, when I play back audio without headphones and then turn the knob, the audio starts to stutter. This doesn't happen with headphones, and I find that pretty weird.
Does anyone know how I could reference the reverb value within the rotary knob and have the changeReverbValue function called at the same time?
I just want to replace the slider with something that looks better. Otherwise I'm going to have to leave this for a bit and just implement the sliders throughout the app instead.
If I don't set the value of the knob as @Binding in the rotary knob view, the track doesn't stutter on playback, but then I don't know if it's possible to change the reverb value without a @Binding var.
I struggled to parse a precise singular problem statement from the narrative, so this is perhaps just an off-base commentary and not a solution. I walked away thinking your problem is: a custom UI component is "jumpy/stuttery" during interaction and produces similarly punctate effects on app state.
If that's fair, I worked around the same issue in my first SwiftUI app. The cause could be one of two things:
Not using the right async queue by accident.
Forcing a @State or @Published property to update for all global state changes. That means you are pushing stale state from earlier back into an interaction, possibly with a circular feedback loop.
The solution is pretty simple. Request and consume model updates with both a value and a source tag. Throttle and filter out the self-tag to keep local state responsive to only one just-in-time data stream.
I used that pattern in that first app (free, Inclusivity for Mac) to coordinate an HSV color wheel and color-channel custom slider components. The wheel, sliders, and other interactions feed/read a shared Combine pipeline (CurrentValueSubject<SourcedColorVector, Never>.eraseToAnyPublisher()).
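To make the idea concrete, here is a minimal, hypothetical sketch of that pattern; none of these names come from the app above. Every update carries a source tag, and each control throttles the shared stream and ignores the updates it produced itself.

import Combine
import Foundation

// Illustrative types only.
enum ChangeSource { case knob, slider, model }

struct SourcedValue {
    let value: Float
    let source: ChangeSource
}

final class KnobViewModel: ObservableObject {
    // What the knob view renders.
    @Published var knobValue: Float = 0.0

    // Shared pipeline that every control writes to and reads from.
    private let updates = CurrentValueSubject<SourcedValue, Never>(SourcedValue(value: 0, source: .model))
    private var cancellables = Set<AnyCancellable>()

    init() {
        updates
            .filter { $0.source != .knob } // ignore our own writes
            .throttle(for: .milliseconds(50),
                      scheduler: DispatchQueue.main,
                      latest: true) // don't flood the UI or the audio engine
            .sink { [weak self] update in
                self?.knobValue = update.value
            }
            .store(in: &cancellables)
    }

    // Called from the knob's drag gesture instead of writing the model directly.
    func knobDidChange(to newValue: Float) {
        updates.send(SourcedValue(value: newValue, source: .knob))
    }
}

The audio side would subscribe to the same subject, filter out its own tag, and call its equivalent of changeReverbValue from the sink.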
Some sample gestures, which simply punt the gating work to a view model:
The HSVWheel drag-around or click gesture
private func touchUpInWheel() -> ExclusiveGesture<_ChangedGesture<DragGesture>, _EndedGesture<DragGesture>> {
    ExclusiveGesture(
        DragGesture(minimumDistance: 10, coordinateSpace: .named(wheel))
            .onChanged { change in
                let adjusted = CGPoint(x: change.translation.width - targetDiameter + change.startLocation.x / 2,
                                       y: change.translation.height - targetDiameter + change.startLocation.y / 2)
                vm.setHueSat(drag: adjusted)
            },
        DragGesture(minimumDistance: 0, coordinateSpace: .named(wheel))
            .onEnded { end in
                let click = CGPoint(x: end.location.x - vm.radius,
                                    y: end.location.y - vm.radius)
                vm.setHueSat(click: click)
            }
    )
}
A typical slider gesture (this is the vertical value slider)
private func tapAndDrag() -> _EndedGesture<_ChangedGesture<DragGesture>> {
    DragGesture(minimumDistance: 0,
                coordinateSpace: .named(valuePickerSpace))
        .onChanged { value in
            let location = value.location.y - .valueSliderGestureOffset
            vm.setValueKnobLocation(raw: location)
        }
        .onEnded { end in
            let location = end.location.y - .valueSliderGestureOffset
            vm.setValueKnobLocation(raw: location)
        }
}
I'm new to p5.
My aim is to display the ASCII value of the key I type and also leave a trail of vertical lines whose distance from the left is 200 + the ASCII value of the key.
This can be done using createGraphics() (adding an additional canvas layer on top, with the same dimensions as the original, and drawing on that layer).
But the code doesn't seem to work, and it isn't displaying any errors in the console either.
const c5 = function(p) {
  let pg;

  p.setup = function() {
    p.createCanvas(600, 400);
    pg = p.createGraphics(600, 400);
  };

  p.draw = function() {
    p.background(200);
    p.textAlign(p.CENTER, p.TOP);
    p.textSize(20);
    p.text('ASCII Value : ' + p.keyCode, 300, 100);
    pg.line(200 + p.keyCode, 200, 200 + p.keyCode, 300); // shift right by 200
  };
};
The first issue is that you have to tell the engine that the thing you name p is actually a p5 instance. You can construct a p5 object using new p5(...) as follows:
const c5 = new p5(function(p) {
  p.setup = function() {
    ...
  };
  p.draw = function() {
    ...
  };
});
You then correctly fill up your pg graphic object with vertical lines. However, you do not "draw" it on your original canvas. You can do so using the p5.js image() function (see also the example shown in the createGraphics() documentation).
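As a rough sketch (this just reuses your own draw code and appends the image() call), the end of draw() could look like this:

p.draw = function() {
  p.background(200);
  p.textAlign(p.CENTER, p.TOP);
  p.textSize(20);
  p.text('ASCII Value : ' + p.keyCode, 300, 100);
  pg.line(200 + p.keyCode, 200, 200 + p.keyCode, 300); // shift right by 200
  p.image(pg, 0, 0); // stamp the off-screen buffer onto the main canvas
};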
I've made a working example in the p5.js editor here.
Your code is very close. You are creating the graphics object and drawing to it, but you also need to display it as an image on your canvas. In your code snippet you are also missing the call that creates the new p5.js object, but that may just be a copy-paste error.
Here is a working snippet of your code with the call to draw the image. I also moved the key detection logic to keyPressed so the logic only runs when a key is pressed.
Also notice that running the logic inside of keyPressed allows the sketch to handle keys such as f5 by returning false and preventing default behavior. In a real application we would need to be very careful about overriding default behavior. Here we assume that the user wants to know the key code of the f5 key and will be ok with the page not reloading. In a real application that might not be the case.
const c5 = function(p) {
  let pg;

  p.setup = function() {
    p.createCanvas(600, 400);
    pg = p.createGraphics(600, 400);
  };

  p.draw = function() {
  };

  p.keyPressed = function() {
    p.background(200);
    p.textAlign(p.CENTER, p.TOP);
    p.textSize(20);
    p.text('ASCII Value : ' + p.key + " " + p.keyCode, 300, 100);
    pg.line(200 + p.keyCode, 200, 200 + p.keyCode, 300); // shift right by 200
    p.image(pg, 0, 0);
    return false; // prevent default
  };
};

var myp5 = new p5(c5);
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.8.0/p5.min.js"></script>
I'm trying to understand AngularJS, which has a different approach from what I have seen before, with its dirty-checking model updates.
Basically, I have a directive ng-draggable which makes elements... draggable. I have a model linked with each of these elements, with attributes x and y (the position of the element linked with the model), and I want the directive to update the model.
To do so, I tried to use the $apply function in my directive and set the model values x and y. Then I also use the $observe function to update the view: here is my jsfiddle. (Note that I use a factory to catch the mouse events.)
var mousemove = function(event) {
  var y = event.screenY - startY;
  var x = event.screenX - startX;
  scope.$apply(function() {
    attr.$set('ngModel', {'x': x, 'y': y});
  });
};
It seems to work fine. However, when I check the model value with the save button (it prints the model to the console), the x and y positions are not updated.
So my question is: how do I make it work? And more generally, what is happening here? Any reading suggestions on the MVC pattern in AngularJS would be welcome (I found the Developer Guide a bit harsh for a beginner).
Thank you for your help.
The problem with your approach is that you are changing the entire ngModel reference instead of changing just one of its properties (x and/or y). When you do that, you lose the connection to the initial ngModel (the objects inside the players array) and consequently the connection to the 'real' model (the play object).
Try doing it like this:
scope.$apply(function() {
  model.$modelValue.x = event.screenX - startX;
  model.$modelValue.y = event.screenY - startY;
});
jsFiddle: http://fiddle.jshell.net/y7nVJ/
I have some problems with a basic drag-and-drop scrolling algorithm. Here is my algorithm:
When the mouse is pressed down, I set a boolean dragging = true and store the current mouse x and y position in a stored_position variable.
When the mouse goes up, I set dragging = false.
On each frame I check whether dragging == true, and if it is, I calculate dx = current_mouse.x - stored_position.x and dy = current_mouse.y - stored_position.y. Then I store the current mouse position as the new stored_position and scroll my view (it is a 2D camera object) by this dx/dy, as Camera.x -= dx, Camera.y -= dy (I need the inversion because of how the camera works).
The problem with this algorithm is that when I drag, the camera starts to blink and move around/shake. I think it is because when I move my mouse from left to right it traces dx like this:
71
-67
69
-68
69
-68
8
-5
So I think it is the mouse twitching (I mean the mouse sometimes jumps back, like when you try to draw a straight line). Any ideas for changing the algorithm? Maybe I missed something.
Here is an example of this problem: https://dl.dropbox.com/u/78904724/as_host/buld_build_other.rar (you need to run index.html, choose the level, and try to drag the screen).
Updated
Here is the full source for the example (the picture is random, I swear): https://dl.dropbox.com/u/78904724/as_host/scroll_test.rar
And this is the code I used (in the example I use native Flash events instead of the AxGL checks so as not to confuse anyone; I have both versions and they cause the same problem):
// variables with comments
private var dragging:Boolean = false; // dragging flag
private var current_mouse:Array; // stored mouse position array, [0] - x, [1] - y
private var d:Array; // dx/dy array, [0] - x, [1] - y
[Embed(source = "test.jpg")] public static const _sprite:Class; // sprite graphics
private var view_sprite:AxSprite; // some image on the stage to drag

// this is the class constructor code
view_sprite = new AxSprite(0, 0, _sprite);
add(view_sprite);
current_mouse = new Array();
d = new Array();

Ax.stage2D.addEventListener(MouseEvent.MOUSE_DOWN, function(e:MouseEvent):void {
    current_mouse[0] = Ax.mouse.x;
    current_mouse[1] = Ax.mouse.y;
    dragging = true;
});

Ax.stage2D.addEventListener(MouseEvent.MOUSE_UP, function(e:MouseEvent):void {
    dragging = false;
});

Ax.stage2D.addEventListener(MouseEvent.MOUSE_MOVE, function(e:MouseEvent):void {
    if (dragging) {
        d[0] = Ax.mouse.x - current_mouse[0];
        d[1] = Ax.mouse.y - current_mouse[1];
        Ax.camera.x -= d[0];
        Ax.camera.y -= d[1];
        current_mouse[0] = Ax.mouse.x;
        current_mouse[1] = Ax.mouse.y;
    }
});
I am totally confused, but the problem was these two lines (thanks to the AxGL author for helping me):
current_mouse[0] = Ax.mouse.x;
current_mouse[1] = Ax.mouse.y;
When I remove them, the dragging works perfectly. But... I swear I tried this before and nothing happened; the camera just moved faster without them, and now... it works!
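For completeness, the working MOUSE_MOVE handler is just the original one with those two lines removed (my guess is that Ax.mouse is already reported in a space that follows the camera, so re-storing it on every move double-counted the camera shift):

Ax.stage2D.addEventListener(MouseEvent.MOUSE_MOVE, function(e:MouseEvent):void {
    if (dragging) {
        // current_mouse is set once in MOUSE_DOWN and is not re-stored here
        d[0] = Ax.mouse.x - current_mouse[0];
        d[1] = Ax.mouse.y - current_mouse[1];
        Ax.camera.x -= d[0];
        Ax.camera.y -= d[1];
    }
});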
Thank you all for trying to help me. If someone has similar problems, here is the full working source: https://dl.dropbox.com/u/78904724/as_host/scroll_test_worked.rar
And this is the thread on axgl.org: http://axgl.org/forums/viewtopic.php?f=12&p=394