How to render a specific audio unit in an AUGraph - cocoa

I set up an AUGraph which contains several audio units.
I know that when I call AUGraphStart() the graph starts rendering all of the audio units, but I would like to be able to start the rendering of one specific AUFilePlayer.
Is it possible?
UPDATE
I saw in the comments that my question lacked details.
My AUGraph currently has the following setup:
AUFilePlayer -> AUMixer -> AUOutput
I set up the AUFilePlayer with a specific AudioFileID.
When I start the graph, I would like it to just "start" without producing any sound.
Later, I would like to call something like AUFilePlayer.Play() to start making sound.
I don't know if it's possible...
UPDATE: workaround found
I think I found a workaround for this. See my answer for details.

I found the following workaround:
When I set up my AUFilePlayer, I set the mFramesToPlay field of the ScheduledAudioFileRegion struct like this:
ScheduledAudioFileRegion audio_file_region;
/* ... init values ... */
audio_file_region.mFramesToPlay = 0;
instead of
audio_file_region.mFramesToPlay = number_of_frames_for_the_audio_file;
That way, when I start the graph, the AUFilePlayer doesn't play any sound.
Then I have a Play() method where I set the mFramesToPlay field to the real number of frames for the file.
I don't know if it's a good way to do this, but it does the job.
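For reference, here is a minimal sketch of what such a Play() step could look like, written in Swift with hypothetical names; it assumes the AUFilePlayer's AudioUnit and the ScheduledAudioFileRegion from setup are still around, re-schedules the region with the real frame count, and resets the schedule start time stamp so the player becomes audible:

import AudioToolbox

// Hypothetical Play() helper: re-schedule the region kept from setup with the
// real frame count, then reset the start time so the AUFilePlayer becomes audible.
func play(filePlayerUnit: AudioUnit,
          region: inout ScheduledAudioFileRegion,
          numberOfFrames: UInt32) -> OSStatus {
    region.mFramesToPlay = numberOfFrames

    var status = AudioUnitSetProperty(filePlayerUnit,
                                      kAudioUnitProperty_ScheduledFileRegion,
                                      kAudioUnitScope_Global, 0,
                                      &region,
                                      UInt32(MemoryLayout<ScheduledAudioFileRegion>.size))
    if status != noErr { return status }

    // A sample time of -1 means "start on the next render cycle".
    var startTime = AudioTimeStamp()
    startTime.mFlags = .sampleTimeValid
    startTime.mSampleTime = -1
    return AudioUnitSetProperty(filePlayerUnit,
                                kAudioUnitProperty_ScheduleStartTimeStamp,
                                kAudioUnitScope_Global, 0,
                                &startTime,
                                UInt32(MemoryLayout<AudioTimeStamp>.size))
}

Depending on how the file region was primed during setup, you may also need to re-set kAudioUnitProperty_ScheduledFilePrime before resetting the start time stamp.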

Can Groups be used to emulate the "class" or "struct" data structures from other languages

Is there a data structure within LiveCode that can be used as a "holder" for associated data, letting me handle it collectively? I come from a Java / JavaScript / C background, so I am looking for a class or struct sort of data structure.
I've found examples of Groups, which seem to have some of this functionality, but it feels a bit like I'm bending the language to meet my needs.
As a specific example, suppose I had an image field on my screen that would randomly display an image and, when pressed, play an associated sound clip. I'd expect to create a list of "structures" that contained the path to the image and the path to the associated sound clip, and use that data to populate the image field and to decide what sound clip to play.
Would a Group be the correct structure to use in this case? Or am I approaching this in a way that isn't really fitting with the way LiveCode works?
It takes a little getting used to, but the xTalk world is much simpler and more open than any ordinary procedural language. So much of what you once had to manage is no longer required.
So when splash21 said that you could store all your image and sound references in a custom property, he was really saying that the LiveCode environment contains intrinsic, high level functionality that makes these sorts of things instantly accessible, and the only thing required of you is to call for them, and they simply work.
The only way to appreciate this is to make a few simple programs, to really see what is possible. Make your application. Everything you mentioned can be accomplished with perhaps a dozen lines of code in a single handler. I recommend that you join the LiveCode use list and forums. The community is vibrant and eager to help, frequently with full-blown solutions to specific problems, but more importantly, as guides and mentors to new users.
Craig Newman
Arrays in LiveCode are actually associative arrays (like hash maps). A key is associated with a value, and the value may itself be an array.
Chapter 5.5.7 of the User's Guide says:
Array elements may contain nested or sub-elements, making them multi-dimensional. This type of array is ideal for processing hierarchical data structures such as trees or XML. To access a sub-element, simply declare it using an additional set of square brackets.
put "ABC" into myVariable["myKeyName"]["aChildElement"]
see also
How to store pictures in a stack?
Dave - I'm hoping to get a struct-like container implemented in the near future. Meanwhile you can, as splash21 mentioned, use custom properties (or better yet, custom property sets) to do what you want. This will give you a pseudo-struct for each object, and you can store the file and sound specifications in those properties. If you use that in conjunction with a behavior object, you'll end up very close to a real inheritable class.

Error calling similar() in Sikuli

l = find("Start_menu.png").similar(0.5).anyColor()
click(l)
The above is an excerpt from my code. "Start_menu.png" refers to an image of the Windows Start Menu. I got the following error when I executed this:
File "C:\Users\VPRAVE~1.TSI\AppData\Local\Temp\sikuli-tmp8636618870597770744.py", line 1, in
l = find("1368426219510.png").similar(0.5).anyColor().anySize()
AttributeError: 'org.sikuli.script.Match' object has no attribute 'similar'
Could someone help me out with this? And could someone tell me how to use anyColor() and anySize()?
find attempts to find something when it's called. So what your code says, in prose, is "find something that looks like 'Start_menu', then make the thing you found 0.5 similar, then make that any color."
This doesn't work: find() returns a Match, and you can't set the similarity threshold on a Match after the fact. Instead, set it on the Pattern before searching, as the Sikuli docs show:
l = find(Pattern("Start_menu.png").similar(0.5))
Here's the same code arranged vertically:
pattern = Pattern("Start_menu.png")
pattern = pattern.similar(0.5)  # similar() returns a new Pattern, so keep the result
l = find(pattern)
The other problem is your reference to the anyColor() function, which doesn't exist. I see the code you're trying to run is from "Sikuli: Using GUI Screenshots for Search and Automation" (linked from the Sikuli docs), but this function (and the syntax used in that paper) don't exist in any extant version of Sikuli. You can see an open feature request for it on the Sikuli launchpad page.
This doesn't help you now, though. I don't know of another visual automation package that can do anyColor. If you wanted to use that feature for something, I suggest asking a new question where you describe the problem you're trying to solve, and someone may be able to suggest a work-around for that specific case.

Xcode, is it possible to type in co-ordinates and then get location?

I was wondering if this type of task is possible and, if so, how it could be done, or whether there is a tutorial on this. I believe a task like this involves Core Location, but I am not sure.
Reverse Geocoding made simple
That's a great tutorial for reverse geocoding. Adapt what they show by switching the input from a place to a coordinate and you will have what you want.
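For illustration, here is a minimal sketch in Swift (the function and parameter names are hypothetical) that takes typed-in coordinates and looks them up with CLGeocoder from the Core Location framework:

import CoreLocation

// Turn user-typed coordinate strings into a readable place name.
// latitudeString / longitudeString are assumed to come from two text fields.
func lookUpLocation(latitudeString: String, longitudeString: String) {
    guard let latitude = Double(latitudeString),
          let longitude = Double(longitudeString) else { return }

    let location = CLLocation(latitude: latitude, longitude: longitude)
    let geocoder = CLGeocoder()
    geocoder.reverseGeocodeLocation(location) { placemarks, error in
        guard let placemark = placemarks?.first, error == nil else {
            print("Reverse geocoding failed: \(String(describing: error))")
            return
        }
        // Print a readable summary of the first placemark found.
        let parts = [placemark.name, placemark.locality, placemark.country]
        print(parts.compactMap { $0 }.joined(separator: ", "))
    }
}

The completion handler runs asynchronously, so update your UI from inside it rather than right after the call.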

Saving files in Xcode and making a graph

I am new to programming. I have been learning for a few weeks and am now making my first app. Probably not for the public, just for me, at least for now. So here it goes: I want the user to be able to enter information (for example, weight or something like that) into a text field and then save it, so I can later build a graph (for example, of weight loss over time). The graph itself should not be that much of a problem, since there are many tutorials on that. I am more interested in how to enter information and then save it so it can be accessed later. Any help? What should I read?
Thanks!
Working with UITextFields in Objective-C is pretty straightforward: you can grab the NSString from such an object using the 'text' property. Use plists for local storage, or JSON.
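As a rough illustration of that answer (in Swift, with a hypothetical weightField outlet and file name), grabbing the text and appending it to a plist in the Documents directory could look like this:

import UIKit

// Read the text field, append an entry, and save the whole list as a plist.
func saveWeightEntry(from weightField: UITextField) {
    guard let text = weightField.text, let weight = Double(text) else { return }

    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("weights.plist")

    // Load any previously saved entries, then append the new one with a date.
    var entries = (NSArray(contentsOf: url) as? [[String: Any]]) ?? []
    entries.append(["weight": weight, "date": Date()])

    // Property lists can hold strings, numbers, dates, data, arrays, and dictionaries.
    if !(entries as NSArray).write(to: url, atomically: true) {
        print("Could not write weights.plist")
    }
}

Each saved entry keeps the date alongside the weight, which is exactly what you would need later to plot the values over time.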
Look at Core Data tutorials, especially those related to Apple's documentation on "Core Data and Cocoa Bindings".
Core Data shows how to set up objects and save them to a file or a simple database. Cocoa Bindings are how you make input screens pass data to your object models.
You should be able to build a program that enters weights, saves them, and shows them in a table without writing any code.

How to find out what an Image is about

Is there a way to understand what an image is about? I mean, if I scan a picture, how can I tell that the picture is about a specific object? I am thinking that if I have some shape in mind, say the shape or pattern of a specific object, and it matches the object I am searching for, then it must be what I am looking for. Anyway, I am thinking of an algorithm to scan a picture database and figure out which pictures I am actually looking for. Is there a known way to accomplish such an operation?
If I am reading your question correctly...
This is a very daunting task even for full-fledged corporations like Google, though they are attempting to create something along these lines.
Take a look at Google Goggles for Android if you'd like to see how this sort of system behaves. You'll also notice that it requires very specific circumstances to be even slightly reliable, but the base technology is there.
