Playing a sound at certain points on a UIScrollView

I'm trying to figure out how to play a sound when a UIScrollView point lines up with a certain image, or more precisely, to play a sound at a certain point. I have a scroll view that can only be scrolled sideways (left and right, not up and down). I have spent some time researching this but found nothing. I did find this: http://cl.ly/65FM When I applied it, the sound played on every fractional change of the content offset while dragging the UIScrollView, which sounded terrible, so it didn't work. Does anyone know how I can do this? My goal is to play a short sound (less than a second) whenever the scroll view is moved X pixels to either side. How can this be done? Code samples would be greatly appreciated...

I figured it out. I managed to get it working like this:
static NSInteger previousPage = 0;
CGFloat pageWidth = 64;
// Convert the horizontal offset into a fractional page index.
float fractionalPage = scrollView.contentOffset.x / pageWidth;
NSInteger page = lround(fractionalPage);
// Only play the sound when we cross into a new page, not on every pixel.
if (previousPage != page) {
    [scrollViewSound play];
    previousPage = page;
}
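For reference, here is the same page-change detection in Swift, wrapped in the delegate callback where this logic normally lives. This is a sketch of my own, not from the thread; pageSound is a hypothetical AVAudioPlayer loaded with the short sound.

import UIKit
import AVFoundation

class TickingScrollDelegate: NSObject, UIScrollViewDelegate {
    let pageWidth: CGFloat = 64
    var previousPage = 0
    var pageSound: AVAudioPlayer?   // hypothetical; load a short sound elsewhere

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        // Round the offset to the nearest 64-point "page" and play only
        // when the page index changes, not on every fractional change.
        let page = Int((scrollView.contentOffset.x / pageWidth).rounded())
        if page != previousPage {
            pageSound?.play()
            previousPage = page
        }
    }
}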
Now my goal is to figure out how to perform an action when the UIScrollView lines up with a certain image.

After a mouse click the image disappears and shows up in a new random position. How can I check the new position? (Processing)

I need help with my code in Processing. It is actually short and easy code, but I am a beginner in programming, so everything seems difficult to me... :(
My goal is...
To click on an image.
After the mouse click, the image should disappear and show up in a new random position.
Then I should be able to click on the image in its new random position, and it should do the same again: disappear and show up in another random position, and so on.
I have written some code (see below), but it does not work properly. I would really appreciate it if someone could help me find out what is wrong with my code. Thank you very much in advance! :)
Here is my code:
PImage pic;

// Declare variables for the picture
float pic_x;
float pic_y;
float pic_r = 100;
float pic_x_new = random(0, 400);
float pic_y_new = random(0, 400);
boolean mouseOverPic;

void setup() {
  size(500, 500);
  background(0, 100, 0);
  // loading the picture
  pic = loadImage("pic.png");
  image(pic, pic_x, pic_y, pic_r, pic_r);
}

void draw() {
  mouseOverPic = mouseX <= pic.width
    && mouseX >= pic_x
    && mouseY <= pic.height
    && mouseY >= pic_y;
  if (mousePressed && mouseOverPic) {
    background(100);
    image(pic, pic_x_new, pic_y_new, pic_r, pic_r);
  }
}
Can you please try to be more specific than saying your code does not work properly? Have you tried debugging your code to narrow your problem down? Which line of code behaves differently from what you expected?
The code you have doesn't make a ton of sense, because you only ever draw the image while it's being clicked. That doesn't sound like what you want to do. And your collision detection code is not correct: you test the mouse against pic.width and pic.height (the image's own dimensions) instead of the right and bottom edges of the rectangle you actually drew. Try running through your code with some example values to see exactly what it's doing. I've written a tutorial on collision detection in Processing available here.
To fix this, you really need to break your problem down into smaller pieces and take those pieces on one at a time. For example:
Can you create a simple example program that just shows a hard-coded rectangle that changes color if the mouse is inside it?
Can you then make it so the rectangle displays in a random location every time the program is run?
Then can you make it so the rectangle changes location when you click it?
If you get stuck on a specific step, please narrow your problem down and post a MCVE in a new question. Good luck.
I believe you test against pic_x but use pic_x_new to draw the image (same with y). You should use the same variable to place and test the image.
Another approach would be to write a function that tests the mouse against the image, passing the current position as parameters.
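Tying both answers together, the core pattern is: keep a single pair of position variables, hit-test against that same pair, and only randomize it on a successful click. Below is a minimal, self-contained sketch of that pattern, written in Swift to keep one language across this page; the Processing version is a direct translation, with contains used from draw() and moveIfClicked called from mousePressed(). All names here are my own.

struct MovingPicture {
    // ONE pair of coordinates, used for both drawing and hit-testing.
    var x = 0.0, y = 0.0
    let size = 100.0

    // True when (mx, my) falls inside the picture's bounding box. Note the
    // right/bottom edges are x + size and y + size, not the image's
    // intrinsic width/height.
    func contains(_ mx: Double, _ my: Double) -> Bool {
        mx >= x && mx <= x + size && my >= y && my <= y + size
    }

    // Call this from the click handler; the draw loop keeps reading x/y.
    mutating func moveIfClicked(at mx: Double, _ my: Double) {
        if contains(mx, my) {
            x = .random(in: 0...400)
            y = .random(in: 0...400)
        }
    }
}

var picture = MovingPicture()
picture.moveIfClicked(at: 50, 50)   // a simulated click inside the box
print(picture.x, picture.y)         // now somewhere random in 0...400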

Fullscreen inside firefox-sdk panel

Here goes my first question. I've embedded a YouTube video (HTML5) in a panel created using the Panel API from the Firefox SDK. The problem is that the video won't go fullscreen: it tries to, but snaps back to its normal size within the panel. I've also tried the method described here with a random div, but the same thing happens. So, is this a limitation of the API, or is there any way I could get it to work? Thanks.
I've just started experimenting with a floating YouTube player plugin using the Firefox SDK and ran into the same issue. I did find a somewhat sloppy workaround that you might find suitable.
This method resizes the panel to fill the screen. However, even with the border property set to 0, the panel will still show a bit of a border.
In the main document, index.js:
var self = require('sdk/self');
var data = require("sdk/self").data;
let { getActiveView } = require("sdk/view/core");

let myPanel = require("sdk/panel").Panel({
    contentURL: "./index.htm",
    contentScriptFile: ["./youtube.js", "./index.js"]
});

// Grab the underlying XUL element so the panel can be styled and moved.
var player = getActiveView(myPanel);
player.setAttribute("noautohide", true);
player.setAttribute("border", 0);
player.setAttribute('style', 'background-color:rgba(0,0,0,0);');

myPanel.show();
init();
myPanel.port.on('fullscreen', fullScreen);

function init() {
    var size = 30,
        width = size * 16,   // keep a 16:9 aspect ratio
        height = size * 9;
    myPanel.resize(width, height);
}

function fullScreen(width, height) {
    // Need to move the panel to the top left of the monitor, or else the
    // resize only takes up a portion of the screen. My left position is
    // 3840 because I'm doing this on my right screen and the left one is
    // 4K, so I need the offset.
    player.moveTo(3840, 0);
    myPanel.resize(width, height);
}
And in my content script file:
var container = document.getElementById('container'),
    overlay = document.getElementById('overlay');

overlay.addEventListener('click', fullscreen);

function fullscreen() {
    // Ask the add-on side to resize the panel to the available screen size.
    var x = window.screen.availWidth,
        y = window.screen.availHeight;
    self.port.emit('fullscreen', x, y);
}
Some things I've noticed while experimenting: when playing YouTube videos in the panel, the video has a tendency to lag, although the audio plays fine. The lag gets more apparent when moving the mouse over elements on other pages, and over the panel itself; the faster the movement, the more apparent it becomes. I've found a workaround for mousing over the panel by placing a div that stretches over the video. The problem with this is that the default YouTube controls don't react to mouseover, which you could get around by using the YouTube API and creating custom controls. Also, multiple-monitor support would be hard to get right when positioning the video.
Edit:
Here's another way that does the same thing, but this one appears to handle multiple monitors: it positions the panel at the top left of whichever screen the Firefox window is currently on.
In the index.js file:
function fullScreen(width, height) {
    // Hiding and re-showing with an explicit position pins the panel to the
    // top left of the current screen before resizing.
    myPanel.hide();
    myPanel.show({position: {left: 0, top: 0}});
    myPanel.resize(width, height);
}

NSScrollView Zooming of subviews

Apologies for the noob question - coming from an iOS background, I'm struggling a little with OS X.
The good news: I have an NSScrollView with a large NSView as its documentView. I have been adjusting the bounds of the contentView to effectively zoom in on the documentView, and all works well with respect to anything I do in drawRect (of the documentView).
The not-so-good news: I have now added another NSView as a child of the large documentView and expected it to simply zoom, just like it would in iOS land - but it doesn't. If anyone can help fill in the rather large gap in my understanding of all this, I'd be extremely grateful.
Thanks.
[UPDATE] Fixed it myself - the 'problem' was that auto layout (layout constraints) was enabled. Once I disabled it and set the autosizing appropriately, everything was OK. I guess I should learn about layout constraints...
I know this is very old, but I just implemented scroll-wheel zooming using the following, after spending days trying to figure it out with various solutions posted by others, all of which had fundamental issues. As background, I am using CALayers in an NSView subclass, with a large PDF building layout in the background and 100+ draggable CALayer objects overlaid on top of that.
The zooming is instant and smooth, and everything scales perfectly with none of the pixellation I was expecting from something called 'magnification'. I wasted many days on this.
override func scrollWheel(with event: NSEvent) {
    // Only zoom while the Option key is held; otherwise scroll as usual.
    guard event.modifierFlags.contains(.option) else {
        super.scrollWheel(with: event)
        return
    }
    let dy = event.deltaY
    if dy != 0.0 {
        let magnification = self.scrollView.magnification + dy / 30
        // Zoom around the pointer position rather than the view origin.
        let point = self.scrollView.contentView.convert(event.locationInWindow, from: nil)
        self.scrollView.setMagnification(magnification, centeredAt: point)
    }
}
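One detail worth adding (my own suggestion, not part of the original answer): NSScrollView can clamp the magnification range for you, so the delta-based math above cannot zoom off to extremes. A minimal setup sketch, using the same self.scrollView as the handler above:

// Hypothetical one-time setup, e.g. in viewDidLoad.
// setMagnification(_:centeredAt:) keeps the value within these bounds,
// so the scroll-wheel handler needs no extra clamping.
self.scrollView.allowsMagnification = true
self.scrollView.minMagnification = 0.25   // zoom out to 25%
self.scrollView.maxMagnification = 8.0    // zoom in to 800%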
LOL, I had exactly the same problem. I lost about two days messing around with auto layout. After I read your update, I went in and just added another NSBox to the view, and it gets drawn correctly and zooms as well.
Though, does it work for NSImageViews as subviews as well?

AdWhirl - moving view to centre screen on rotation

This is probably something quite basic. I am trying to implement AdWhirl in my app, which I have done successfully as far as the technical part goes. When I load my app, the ad loads and then slides down from the top to sit at the bottom of the screen. However, when I rotate the device, the ad keeps its "precise" (absolute) position and moves off screen. When the ad reloads (it refreshes every 15 seconds), it moves up to the bottom of the landscape screen. Again, when rotating back from landscape, the ad aligns itself in the middle of the page vertically (covering content) until a new ad loads. I have attached a number of photos in a series showing what happens, all in order and taken at least 10 seconds apart (showing a test ad of "Hello").
My code from the implementation file is included at the end of this post - sorry for not using the code format; I just didn't want to put spaces in front of the whole block, and I think it's all relatively relevant. It's also available on Pastebin: http://pastebin.com/mzavbj2L
Sam
Sorry - it wouldn't let me upload images. Please send me a PM for images.
I recommend handling the rotation in the willRotateToInterfaceOrientation:duration: method or the didRotateFromInterfaceOrientation: method. There you can determine the new orientation and the new size of your view, and then change the frame of your AdWhirl view to the new location.
After looking a bit closer, however, it looks like you might need to declare *adView as a variable in your .h file so you can access it from the rotation methods.
Once you do that, you can set the new frame just as you did in viewDidLoad:
CGSize adSize = [adView actualAdSize];
CGRect newFrame = adView.frame;
newFrame.size = adSize;
// Center horizontally and pin to the bottom edge of the view.
newFrame.origin.x = (self.view.bounds.size.width - adSize.width) / 2;
newFrame.origin.y = (self.view.bounds.size.height - adSize.height);
adView.frame = newFrame;
// Assumes a matching [UIView beginAnimations:context:] call earlier,
// as in the original viewDidLoad code.
[UIView commitAnimations];
Ideally, you would move this code into its own method so you can call it from wherever you want in your view controller code (e.g. viewDidLoad and the rotation callback(s)); a sketch of that extraction follows.
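For illustration, the extracted helper might look like the following. This is a sketch of my own in Swift, with a plain UIView standing in for the AdWhirlView (AdWhirl itself is an Objective-C library); the frame math mirrors the snippet above.

import UIKit

extension UIViewController {
    // Re-centers a banner view horizontally and pins it to the bottom edge.
    // Call from viewDidLoad and from the rotation callbacks.
    func repositionAd(_ adView: UIView, size: CGSize) {
        var frame = adView.frame
        frame.size = size
        frame.origin.x = (view.bounds.width - size.width) / 2
        frame.origin.y = view.bounds.height - size.height
        adView.frame = frame
    }
}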
Thanks for your help - it was close to a solution (I only just got it working tonight; I had mostly forgotten about doing it!). After I made your changes to my .h, I was trying to call [adWhirlView adWhirlDidReceiveAd:(AdWhirlView *)adView]. This kept returning errors, even though it was defined in the AdWhirlView class. As a fix, I added -(void)adWhirlDidReceiveAd:(AdWhirlView *)adView myself and then called [self adWhirlDidReceiveAd:adView] each time the frame rotated.
Thanks again - so glad it's finally working.
Sam

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone; it's Retina vs. non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image) / scale, CGImageGetHeight(image) / scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0] / 255.0;
    return (alpha < 0.01);
}
@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
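For completeness, here is roughly what the call site looks like once the scale parameter is in play. This is my own sketch in Swift, assuming the Objective-C category above is exposed through a bridging header (where it would surface as containsNonTransparentPoint(_:scale:)), with UIScreen's scale standing in for the Retina factor:

import UIKit

// Hypothetical Swift version of the tap handler shown earlier.
func handleSingleTap(_ sender: UIGestureRecognizer, layers: [CALayer]) {
    guard let view = sender.view else { return }
    var tapPoint = sender.location(in: view)
    // Flip y so 0,0 is at lower left, as the category expects.
    tapPoint.y = view.bounds.height - tapPoint.y
    let scale = UIScreen.main.scale   // 2.0 on Retina, 1.0 otherwise
    for layer in layers where layer.containsNonTransparentPoint(tapPoint, scale: scale) {
        print("\(layer.name ?? "unnamed") tapped at (\(tapPoint.x), \(tapPoint.y))")
        return
    }
}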
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your view count to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but I won't mark it as correct. Once I try out and verify the other technique, I will share it here!
