Can't control audio volume on device using AudioToolbox code in Swift (Xcode)

I created an app and used the following code to produce sound.
func playPianoButtonSound(button: UIButton) {
    var soundID: SystemSoundID = 0
    let soundURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(), pianoButtonsSoundNames[button], "wav", nil)
    AudioServicesCreateSystemSoundID(soundURL, &soundID)
    AudioServicesPlaySystemSound(soundID)
}
Sometimes it works and responds to the hardware volume buttons, and sometimes it does not. Has anyone encountered this problem? If so, I'd like to hear about your experience and how you solved the issue.
Thank you.
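If it helps for comparison, here is a minimal sketch (the class name and the idea of a bundled "sound.wav" are hypothetical, not from the question) of playing the same kind of bundled .wav through AVAudioPlayer instead of System Sound Services; AVAudioPlayer output goes through the regular media/playback volume and the active audio session rather than the system-sound path:
import AVFoundation

final class ButtonSoundPlayer {
    private var player: AVAudioPlayer?

    // Hypothetical helper: plays a bundled .wav through AVAudioPlayer,
    // keeping a reference so playback isn't deallocated mid-sound.
    func play(resourceNamed name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "wav") else {
            print("Missing sound resource: \(name)")
            return
        }
        do {
            player = try AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay()
            player?.play()
        } catch {
            print("Could not create AVAudioPlayer: \(error)")
        }
    }
}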

Related

Can't get CoreSpotlight to work properly on the Mac... sample code anyone?

Can someone point me to some sample code for using CoreSpotlight on the Mac? I can't find a single sample code example (for the Mac; there are plenty of iOS ones out there). I have already successfully implemented CoreSpotlight in our iOS app.
Using code similar to what I used for iOS, I'm getting it to index the content, but the result is very minimal: it gives me the generic app icon for the content (rather than the custom thumbnail image I specify) and has no description, etc. (in the Spotlight pane, the entire right side is blank).
Also, it's unclear to me what code I need to add to the AppDelegate to handle a click on the Spotlight result. I used `application(_:continue:restorationHandler:)` on iOS, but this doesn't seem to get called on the Mac.
Here's the code I'm using to index the content:
attributeSet.title = content.name
attributeSet.contentDescription = content.description
if let image = image {
    let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)!
    let bitmapRep = NSBitmapImageRep(cgImage: cgImage)
    attributeSet.thumbnailData = bitmapRep.representation(using: NSBitmapImageRep.FileType.jpeg, properties: [:])!
}
let item = CSSearchableItem(uniqueIdentifier: content.uuid, domainIdentifier: "com.my.app.bundle.id", attributeSet: attributeSet)
searchableIndex?.indexSearchableItems([item]) { error in
    if let error = error {
        NSLog("CoreSpotlight Indexing error: \(error.localizedDescription)")
    }
}
And this is an example of what I get in Spotlight: just the generic app icon with an empty detail pane.
A bit of sample code would clear this right up, methinks.
Thanks in advance.
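For what it's worth, here is a minimal sketch of the macOS side of the continuation handler the question mentions. It assumes an app delegate class named AppDelegate that conforms to NSApplicationDelegate, and the exact type of the restorationHandler parameter varies by SDK/Swift version (older SDKs use [Any]); if the signature doesn't match the protocol exactly, AppKit won't call the method at all:
import Cocoa
import CoreSpotlight

extension AppDelegate {
    // AppDelegate is assumed to exist and conform to NSApplicationDelegate.
    func application(_ application: NSApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([NSUserActivityRestoring]) -> Void) -> Bool {
        // Only handle activities that came from a Core Spotlight result.
        guard userActivity.activityType == CSSearchableItemActionType,
              let identifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String else {
            return false
        }
        // `identifier` is the uniqueIdentifier that was passed to CSSearchableItem;
        // navigating to the matching content is app-specific and not shown here.
        print("Spotlight result selected: \(identifier)")
        return true
    }
}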

MultiRoute Audio Input in iOS

We've been working with AudioUnits in Core Audio. It is simultaneously a very powerful audio framework and one of the worst documented, which makes it both a joy and a frustration to work with.
We want to accomplish something we know iPads have been able to do since iOS 6.0: multiple audio inputs.
So far, from the 2012 developer talk, it appears you have to set the audio session category to MultiRoute. We've done this. If I plug in a sound card from a keyboard, I can see that there are two inputs. Great. We're then told that we need to set a channel map on a Remote I/O unit.
To what? Well... here's where it gets vague. We need to set all the channels we don't want to -1, and the channels we want to 0 and 1 (for stereo input, or for mono?).
We attempt this and... nothing. Sound still comes through on a 'last in wins' principle: the microphone if everything is unplugged, the sound card if that's what's plugged in. But we can't switch between them.
This setup code always runs before the other function listed below:
func setupAudioSession() {
    self.audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryMultiRoute, with: [.mixWithOthers])
        try audioSession.setActive(true)
        audioSessionWasSetup = true
    } catch let error {
        // TODO: Implement something here
        print(error)
        audioSessionWasSetup = false
    }
}
We then have a Remote I/O unit with an associated AUGraph set up. This has been tested and works beautifully. But we need to be able to set where it pulls sound from.
I've attempted to do that with the function below, but it doesn't have any effect; nothing happens at all.
Am I missing something?
private func setChannelMap(onAudioUnit audioUnit: AudioUnit?, toChannel channelIndex: Int = 0) {
    guard let audioUnit = audioUnit else {
        return
    }
    let numberOfInputChannels: UInt32 = 4 // Two stereo inputs? - I'm just guessing here
    let mapSize = numberOfInputChannels * UInt32(MemoryLayout<Int32>.size)
    // Start with every channel disabled (-1), then enable the stereo pair we want.
    var channelMap = [Int32](repeating: -1, count: Int(numberOfInputChannels))
    channelMap[2 * channelIndex] = 0
    channelMap[2 * channelIndex + 1] = 1
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_ChannelMap,
                                      kAudioUnitScope_Input,
                                      0,
                                      &channelMap,
                                      mapSize)
    self.checkError(status, "Failed to set Channel Map on input unit")
}
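Since the session is already MultiRoute, it may also help to inspect what the current route actually exposes rather than hard-coding four input channels. A minimal sketch (assuming the session set up above is active; this only logs the route, it does not change the channel map):
import AVFoundation

func logAvailableInputChannels() {
    let session = AVAudioSession.sharedInstance()
    // Each input port (built-in mic, USB sound card, ...) lists its own channels.
    for port in session.currentRoute.inputs {
        print("Input port: \(port.portName) [\(port.portType)]")
        for channel in port.channels ?? [] {
            print("  channel \(channel.channelNumber): \(channel.channelName)")
        }
    }
    print("Total input channels: \(session.inputNumberOfChannels)")
}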
There isn't any documentation on this at all as far as I've been able to find, nor any code examples.
I hope you can help us.

Swift iOS 9: Building MacinTalk voice for asset: (null)

I am using Xcode 7 and Swift 2.0.
I have text-to-speech working in the Simulator but not on a real iPhone 6 Plus device running iOS 9. I have properly imported AVFoundation and its framework.
I tried...
@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)
    //let voice = AVSpeechSynthesisVoice(language: "en-US")
    //mySpeechUtterance.voice = voice
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices {
        if "en-US" == voice.language {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break
        }
    }
    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}
I get the following error :
Building MacinTalk voice for asset: (null)
Is there anything I need to change in the settings on my iPhone 6 Plus (iOS 9), or do I have to download something?
I have found a suggestion here Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test
saying that:
" since iOS9, possibly a log event turned on during development that they forgot to turn off"
Just want to add to this (and by extension, the linked discussion in the original post):
I have two devices: an iPad 2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:
void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>"));
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}
On the iPad Air, everything works beautifully:
Building MacinTalk voice for asset:
file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/
But on the iPad 2, I hear nothing and get the following:
Building MacinTalk voice for asset: (null)
Out of curiosity, I fired up the iPad 2 simulator and ran my app there. I got yet another console message:
AXSpeechAssetDownloader|error| ASAssetQuery error fetching results
(for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError
Code=21 "Unable to copy asset information"
UserInfo={NSDescription=Unable to copy asset information}
However, I heard speech! And I realized I was wearing headphones. Sure enough, when I plugged earbuds into the iPad 2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe it will help others with this problem):
https://forums.developer.apple.com/thread/18444
In summary: TTS "works" but is not necessarily audible without headphones/earbuds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.
Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad 2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without earbuds. So if you are unable to hear text-to-speech, try earbuds. If that works, check whether your device is set to mute!
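As an aside (this is my own assumption, not something stated in the answer above): if speech should stay audible even when the ring/silent switch is set to mute, one option is to move the audio session into the playback category before speaking, along these lines:
import AVFoundation

// Sketch only: the playback category keeps audio audible with the
// ring/silent switch set to mute, unlike the default ambient category.
func prepareAudioSessionForSpeech() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Could not configure audio session: \(error)")
    }
}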
Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary and continueSpeaking. This works for me.
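For reference, a minimal Swift sketch of that suggestion (modern syntax, untested here; note that continueSpeaking() only resumes an utterance that was paused, so after stopSpeaking(at:) a new utterance has to be spoken):
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    // Cancel anything currently being spoken instead of pausing it.
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}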
I finally found that there was a bug in iOS 9. Soon after the Xcode 7.2 and iOS 9.2 updates were released, I tested the same code above, and text-to-speech started working.

Cocoa: take screenshot of desktop wallpaper (without icons and windows)

Is it possible to capture the Mac OS X desktop without desktop items and any windows that may be open (i.e. just the wallpaper)?
I've experimented with CGWindowListCreateImage, CGWindowListCreateImageFromArray, and CGDisplayCreateImage, but no luck.
Essentially I'm trying to capture the desktop wallpaper without using [NSWorkspace desktopImageURLForScreen:] (it's a sandboxed app without access to the file system).
You'll need to be careful to test that this is still correct, but the desktop window sits below the Finder (it's drawn by the Dock). Passing the kCGWindowListOptionOnScreenBelowWindow CGWindowListOption to CGWindowListCreateImage should get you what you want (unless something else is drawing below that level).
Otherwise, you'll need to use CGWindowListCreate and iterate through the response, excluding anything that isn't drawn by the Dock at the window level kCGMinimumWindowLevel + 19.
It gets kind of tricky when there are multiple screens, but hopefully this information is enough for you to do what you need.
I know this is a super old question, and Tony Arnold's answer is right; it's what I used to build my own "grab the desktop" code.
I have some example code that shows how to do all of this (it's a wonderful thing, walking through parts of Cocoa that are barely documented...).
I've put that sample code up in a Bitbucket repository, specifically the code sample that takes the picture. (There's more interesting Cocoa code in my learning Cocoa repository, which that sample code is from.)
Swift version:
extension NSImage {
    static func desktopPicture() -> NSImage {
        let windows = CGWindowListCopyWindowInfo(
            CGWindowListOption.OptionOnScreenOnly,
            CGWindowID(0))! as NSArray
        var index = 0
        for windowInfo in windows {
            let window = windowInfo as! NSDictionary
            // we need windows owned by Dock
            let owner = window["kCGWindowOwnerName"] as! String
            if owner != "Dock" {
                continue
            }
            // we need windows named like "Desktop Picture %"
            let name = window["kCGWindowName"] as! String
            if !name.hasPrefix("Desktop Picture") {
                continue
            }
            // we need the one which belongs to the current screen
            let bounds = window["kCGWindowBounds"] as! NSDictionary
            let x = bounds["X"] as! CGFloat
            if x == NSScreen.mainScreen()!.frame.origin.x {
                index = window["kCGWindowNumber"] as! Int
                break
            }
        }
        let cgImage = CGWindowListCreateImage(
            CGRectZero,
            CGWindowListOption(arrayLiteral: CGWindowListOption.OptionIncludingWindow),
            CGWindowID(index),
            CGWindowImageOption.Default)!
        let image = NSImage(CGImage: cgImage, size: NSScreen.mainScreen()!.frame.size)
        return image
    }
}
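Usage would then be something like the following (assuming the extension above compiles in a Swift 2 era target and the Dock's "Desktop Picture" window is on screen):
// Grab the wallpaper of the main screen as an NSImage.
let wallpaper = NSImage.desktopPicture()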

How can I tell if Voice Over is turned on in System Preferences?

Is there a way, ideally backwards compatible to Mac OS X 10.3, to tell whether VoiceOver is activated in System Preferences?
This appears to be stored in a preferences file for Universal Access. The app identifier is "com.apple.universalaccess" and the key containing the flag for whether VoiceOver is on or off is "voiceOverOnOffKey". You should be able to retrieve this using the CFPreferences API, with something like:
CFBooleanRef flag = CFPreferencesCopyAppValue(CFSTR("voiceOverOnOffKey"), CFSTR("com.apple.universalaccess"));
If anyone has the same question, it may be good to know that the VoiceOver status is now accessible via a convenient interface:
NSWorkspace.shared.isVoiceOverEnabled
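A small sketch combining this with the CFPreferences approach above might look like the following (hedged: the availability check assumes the property was introduced around macOS 10.13, and the preference key is the one described earlier):
import AppKit

var voiceOverIsRunning: Bool {
    if #available(macOS 10.13, *) {
        return NSWorkspace.shared.isVoiceOverEnabled
    }
    // Fallback for older systems, using the Universal Access preference key.
    let flag = CFPreferencesCopyAppValue("voiceOverOnOffKey" as CFString,
                                         "com.apple.universalaccess" as CFString)
    return (flag as? Bool) ?? false
}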
Based on Pete's excellent answer, I've created this Swift 4.2 solution, which I find much easier to read. I also think it's handier to use a computed property in this case instead of a function.
var hasVoiceOverActivated: Bool {
    let key = "voiceOverOnOffKey" as CFString
    let id = "com.apple.universalaccess" as CFString
    if let voiceOverActivated = CFPreferencesCopyAppValue(key, id) as? Bool {
        return voiceOverActivated
    }
    return false
}
VoiceOver and accessibility in general are very important topics, and it is sad that the lack of Apple documentation, especially for macOS, makes it so hard for developers to implement them properly.
A solution in Swift 4 is as follows:
func NSIsVoiceOverRunning() -> Bool {
    if let flag = CFPreferencesCopyAppValue("voiceOverOnOffKey" as CFString, "com.apple.universalaccess" as CFString) {
        if let voiceOverOn = flag as? Bool {
            return voiceOverOn
        }
    }
    return false
}
Furthermore, to make a text announcement with VoiceOver on macOS, do the following:
let message = "Hello, World!"
NSAccessibilityPostNotificationWithUserInfo(NSApp.mainWindow!,
    NSAccessibilityNotificationName.announcementRequested,
    [NSAccessibilityNotificationUserInfoKey.announcement: message,
     NSAccessibilityNotificationUserInfoKey.priority: NSAccessibilityPriorityLevel.high.rawValue])
