Swift iOS 9: "Building MacinTalk voice for asset: (null)" - Swift 2

I am using Xcode 7 and Swift 2.0.
Text-to-speech works in the Simulator but not on a real iPhone 6 Plus running iOS 9. I have properly imported the AVFoundation framework.
I tried...
@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)
    // let voice = AVSpeechSynthesisVoice(language: "en-US")
    // mySpeechUtterance.voice = voice
    // Pick the first installed voice whose language is US English.
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices {
        if "en-US" == voice.language {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break
        }
    }
    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}
I get the following error:
Building MacinTalk voice for asset: (null)
Is there anything I need to change in the settings of my iPhone 6 Plus (iOS 9), or do I have to download something?
I have found a suggestion here: Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test
saying that "since iOS 9, [this is] possibly a log event turned on during development that they forgot to turn off".

Just want to add to this (and by extension, the linked discussion in the original post):
I have two devices: an iPad2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:
void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>"));
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}
On the iPad Air, everything works beautifully:
Building MacinTalk voice for asset:
file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/
But on the iPad2, I hear nothing and get the following:
Building MacinTalk voice for asset: (null)
Out of curiosity, I fired up the iPad2 simulator and ran my app there. I got yet another console message:
AXSpeechAssetDownloader|error| ASAssetQuery error fetching results
(for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError
Code=21 "Unable to copy asset information"
UserInfo={NSDescription=Unable to copy asset information}
However, I heard speech! Then I realized I was wearing headphones. Sure enough, when I plugged ear buds into the iPad2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe it will help others with this problem):
https://forums.developer.apple.com/thread/18444
In summary: TTS "works" but is not necessarily audible without headphones/ear buds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.
Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without ear buds. So if you are unable to hear text-to-speech, try ear buds. If that works, check whether your device is set to mute!

Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary and continueSpeaking. This works for me.
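A minimal sketch of that workaround, keeping the Swift 2 names from the question (remainingText is a hypothetical property you would track yourself, e.g. via the synthesizer's delegate; continueSpeaking() only resumes speech that was paused, so after a full stop you speak a fresh utterance instead):

func haltSpeech() {
    // Stop at the next word boundary instead of pausing.
    if mySpeechSynthesizer.speaking {
        mySpeechSynthesizer.stopSpeakingAtBoundary(.Word)
    }
}

func resumeSpeech() {
    // Speak a new utterance containing whatever text was left unspoken.
    mySpeechSynthesizer.speakUtterance(AVSpeechUtterance(string: remainingText))
}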

Finally found that there was a bug in iOS 9. Soon after the Xcode 7.2 and iOS 9.2 updates were released,
I tested the same code above, and text-to-speech started working.

Related

Mac Catalyst - control window resize

I have an app for iPad/iPhone, and I am now adding Mac support via Mac Catalyst. On the Mac, I want to control window resizing in order to allow only certain sizes and aspect ratios. It goes beyond a simple minimum height and width, or aspect ratio. I want to allow the user to resize the window freely, but when the window gets too tall and narrow, I want to seamlessly increase the width as well, to keep some minimum aspect ratio.
I believe that in AppKit this can be done through NSWindowDelegate.windowWillResize() (get the user-chosen size, compute the required size, and return it). However, I am getting the error "NSWindowDelegate is unavailable in Mac Catalyst". Is it possible to achieve the result I want by Catalyst means?
Answering my own question. It is NOT possible to create your own NSWindowDelegate with windowWillResize() implemented in Catalyst. However, it IS possible to create a new target only for the Mac, and use it as a plugin from the Catalyst target.
First I load the Mac-only plugin (using Bundle.load()) and instantiate its principalClass. Then I get the NSWindow from the UIWindow, which is easy through the Dynamic library. Then I pass the NSWindow to a method of the plugin, which can then set its own NSWindowDelegate, because it does not run in Catalyst.
Sample code:
guard let bundle = Bundle(url: bundleURL) else { return }
let succ = bundle.load()
if (succ) {
    let macUtilsClass = bundle.principalClass! as! MacUtilsProtocol.Type
    self.macUtils = macUtilsClass.init()
    var dnsw: NSObject? = nil
    if (ProcessInfo.processInfo.isOperatingSystemAtLeast(
        OperatingSystemVersion(majorVersion: 11, minorVersion: 0, patchVersion: 0))) {
        // On macOS 11 and later the NSWindow hangs off the host window's attachedWindow.
        dnsw = Dynamic.NSApplication.sharedApplication.delegate.hostWindowForUIWindow(AppDelegate.ref!.window).attachedWindow
    }
    else {
        dnsw = Dynamic.NSApplication.sharedApplication.delegate.hostWindowForUIWindow(AppDelegate.ref!.window)
    }
    self.macUtils.SetupMainWindow(win: dnsw!)
}
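For reference, a minimal sketch of what the plugin side might look like. The MacUtils class, its MacUtilsProtocol conformance, and the 4:3 minimum aspect are illustrative assumptions; windowWillResize(_:to:) is the actual NSWindowDelegate method:

import AppKit

class MacUtils: NSObject, NSWindowDelegate {
    // Assumed minimum width:height ratio; adjust to taste.
    private let minAspect: CGFloat = 4.0 / 3.0

    func SetupMainWindow(win: NSObject) {
        // The NSWindow arrives as an NSObject because the Catalyst side
        // cannot name AppKit types directly.
        (win as? NSWindow)?.delegate = self
    }

    func windowWillResize(_ sender: NSWindow, to frameSize: NSSize) -> NSSize {
        var size = frameSize
        // When the window gets too tall and narrow, widen it so the
        // minimum aspect ratio is preserved.
        if size.width < size.height * minAspect {
            size.width = size.height * minAspect
        }
        return size
    }
}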

Can't get CoreSpotlight to work properly on the Mac... sample code anyone?

Can someone point me to some sample code for using CoreSpotlight on the Mac? I can't find a single sample code example for the Mac (there are plenty of iOS ones out there). I have already successfully implemented CS in our iOS app.
Using code similar to what I used on iOS, I'm getting it to index the content, but the result is very minimal: it gives me the generic app icon for the content (rather than the custom thumbnail image I specify) and has no description, etc. (on the Spotlight pane, the entire right side is blank).
Also, it's unclear to me what code I need to add to the AppDelegate to handle the click on the Spotlight result. On iOS I used application(_:continue:restorationHandler:), but the NSApplication equivalent doesn't seem to get called on the Mac.
Here's the code I'm using to index the content:
attributeSet.title = content.name
attributeSet.contentDescription = content.description
if let image = image {
    // Convert the NSImage to JPEG data for the thumbnail.
    let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)!
    let bitmapRep = NSBitmapImageRep(cgImage: cgImage)
    attributeSet.thumbnailData = bitmapRep.representation(using: NSBitmapImageRep.FileType.jpeg, properties: [:])!
}
let item = CSSearchableItem(uniqueIdentifier: content.uuid, domainIdentifier: "com.my.app.bundle.id", attributeSet: attributeSet)
searchableIndex?.indexSearchableItems([item]) { error in
    if let error = error {
        NSLog("CoreSpotlight Indexing error: \(error.localizedDescription)")
    }
}
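(For context, attributeSet and searchableIndex above are created elsewhere; a minimal sketch of that setup, assuming kUTTypeItem as a generic content type, might be:)

import CoreSpotlight
import CoreServices

let attributeSet = CSSearchableItemAttributeSet(itemContentType: kUTTypeItem as String)
let searchableIndex = CSSearchableIndex.default()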
And this is an example of what I get in Spotlight:
A bit of sample code would clear this right up, methinks.
Thanks in advance.
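In case it helps, a minimal sketch of the Mac-side click handler. CSSearchableItemActionType and CSSearchableItemActivityIdentifier are the real CoreSpotlight constants, while openContent(withUUID:) is a hypothetical navigation helper; whether this fires may depend on how the app adopts user activities, so treat it as a starting point rather than a confirmed fix:

import Cocoa
import CoreSpotlight

// In the NSApplicationDelegate:
func application(_ application: NSApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([NSUserActivityRestoring]) -> Void) -> Bool {
    guard userActivity.activityType == CSSearchableItemActionType,
          let uuid = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String else {
        return false
    }
    openContent(withUUID: uuid)  // hypothetical helper that shows the item
    return true
}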

Can't control audio volume on device using AudioToolbox code in Swift

I created an app and used the following code to produce sound.
func playPianoButtonSound(button: UIButton) {
    var soundID: SystemSoundID = 0
    // Look up the .wav resource associated with this button and play it
    // as a system sound.
    let soundURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(), pianoButtonsSoundNames[button], "wav", nil)
    AudioServicesCreateSystemSoundID(soundURL, &soundID)
    AudioServicesPlaySystemSound(soundID)
}
Sometimes it works and responds to the external volume control, and sometimes it does not. Has anyone encountered this problem? If so, I'd like to hear about your experience and how you solved the issue.
Thank you.
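If it helps: as far as I know, sounds played with AudioServicesPlaySystemSound go through the ringer channel, so the ringer volume and the silent switch govern them, which could explain the intermittent behavior. A minimal sketch of an AVAudioPlayer alternative, which plays on the media channel and follows the hardware volume buttons (this assumes pianoButtonsSoundNames maps buttons to resource names, as above):

import AVFoundation

var player: AVAudioPlayer?  // keep a strong reference so playback isn't cut short

func playPianoButtonSound(button: UIButton) {
    guard let name = pianoButtonsSoundNames[button],
          let url = Bundle.main.url(forResource: name, withExtension: "wav") else {
        return
    }
    player = try? AVAudioPlayer(contentsOf: url)
    player?.play()
}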

Is the AudioFilePlayer audio unit sandbox compatible?

I've run into a problem using the AudioFilePlayer audio unit with app sandboxing enabled on OS X 10.8. I have an AUGraph with only two nodes, consisting of an AudioFilePlayer unit connected to a DefaultOutput unit. The goal (right now) is to simply play a single audio file. If sandboxing is not enabled, everything works fine. If I enable sandboxing, AUGraphOpen() returns error -3000 (invalidComponentID). If I remove the file player node from the AUGraph, the error goes away, which at least implies that the audio file player is causing the problem.
Here's the code I use to set the file player node up:
OSStatus AddFileToGraph(AUGraph graph, NSURL *fileURL, AudioFileInfo *outFileInfo, AUNode *outFilePlayerNode)
{
    OSStatus error = noErr;
    if ((error = AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &outFileInfo->inputFile))) {
        NSLog(@"Could not open audio file at %@ (%ld)", fileURL, (long)error);
        return error;
    }
    // Get the audio data format from the file
    UInt32 propSize = sizeof(outFileInfo->inputFormat);
    if ((error = AudioFileGetProperty(outFileInfo->inputFile, kAudioFilePropertyDataFormat, &propSize, &outFileInfo->inputFormat))) {
        NSLog(@"Couldn't get format of input file %@", fileURL);
        return error;
    }
    // Add AUAudioFilePlayer node
    AudioComponentDescription fileplayercd = {0};
    fileplayercd.componentType = kAudioUnitType_Generator;
    fileplayercd.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    fileplayercd.componentManufacturer = kAudioUnitManufacturer_Apple;
    fileplayercd.componentFlags = kAudioComponentFlag_SandboxSafe;
    if ((error = AUGraphAddNode(graph, &fileplayercd, outFilePlayerNode))) {
        NSLog(@"AUAudioFilePlayer node not found (%ld)", (long)error);
        return error;
    }
    return error;
}
Note that fileURL in the AudioFileOpenURL() call is a URL obtained from security-scoped bookmark data, and is the URL of a file that has been dragged into the application by the user.
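As an aside, a security-scoped URL only grants access between the start/stop calls; a minimal Swift sketch of resolving and opening such a bookmark (bookmarkData is assumed to hold the saved bookmark from the drag operation):

import Foundation

var isStale = false
if let url = try? URL(resolvingBookmarkData: bookmarkData,
                      options: .withSecurityScope,
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale),
   url.startAccessingSecurityScopedResource() {
    defer { url.stopAccessingSecurityScopedResource() }
    // Safe to pass this URL to AudioFileOpenURL() here.
}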
If I set the com.apple.security.temporary-exception.audio-unit-host sandboxing entitlement, when AUGraphOpen() is called, the user is prompted to lower security settings, and assuming they accept, playback again works fine (the sandbox is disabled).
So, this points to the AudioFilePlayer unit not being sandbox-safe/compatible. Is this true? It's difficult to believe that Apple wouldn't have fixed such an important part of the CoreAudio API to be sandbox compatible. Also note that I specify the kAudioComponentFlag_SandboxSafe flag in the description passed to AUGraphAddNode, and that call does not fail. Also, I can only find one reference to AudioFilePlayer not being sandbox-safe online, in the form of this post to the CoreAudio mailing list, and it didn't receive any replies. Perhaps I'm making some other subtle mistake that happens to cause a problem with sandboxing enabled, but not when it's off (I'm new to Core Audio)?

mouse double click is not working quite right

I am using the following code to record the screen. When recording, if I use the mouse to double-click an item, for example double-clicking a .ppt file to open it in PowerPoint, it is not very responsive. I have tried the screen-recording function of Windows Media Encoder 9 and it is much better. Any ideas what is wrong?
My environment: Windows Vista + Windows Media Encoder 9 + VSTS 2008 + C#. I wrote the following code in the initialization of a Windows Forms application, so I suspect something is wrong with my Windows Forms application?
My code,
IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");
IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");
// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;
// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}
encoder.Start();
thanks in advance,
George
I faced the same problem, but the problem doesn't reside in your code or mine. When I tried to capture the screen from the Windows Media Encoder application itself, I faced the same problem in about 50% of the sessions. It's evident that it's a bug in Windows Media Encoder itself.
George
Here are a couple of options (from http://www.windowsmoviemakers.net/Forums/ShowPost.aspx?PostID=1982):
- Enable the MouseKeys accessibility option, and type + to double-click
- Run the encoder and target application on different machines, and capture a remote desktop session
