LoadVars() and method POST for Swiffy - google-swiffy

I don't know how this piece of code could be reworked so that Google Swiffy will accept it; as far as I have read, this is not supported. It is a Flash AS1 game.
on (release) {
    sendscore = new LoadVars();
    sendscore.gscore = _root.Score;
    sendscore.gname = "gamename";
    sendscore.send("index.php?act=Arcade&do=newscore", "_self", "POST");
}

As far as I know, Swiffy only supports AS 2.0 and above
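If LoadVars itself is what Swiffy rejects, one possible direction — assuming Swiffy supports getURL, which I have not verified — is an AS1-style rewrite that avoids LoadVars entirely:

on (release) {
    // Hypothetical rewrite: getURL posts the variables of the timeline
    // it is called from, so set the score fields there first.
    gscore = _root.Score;
    gname = "gamename";
    getURL("index.php?act=Arcade&do=newscore", "_self", "POST");
}

Note that getURL sends every variable on that timeline, not just these two, so the receiving script should ignore any extras.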

Related

How to get the battery level from Kontakt.io beacons using AltBeacon API

I need to get the battery level from the Kontakt.io beacons. I have set the layout as below, but the DataFields are empty when I read the beacons in RangingBeaconsInRegion.
I was expecting I could read the battery level from the last bit as described in the Kontakt.io documentation.
This is my current code:
private BeaconManager InitializeBeaconManager()
{
    BeaconManager bm = BeaconManager.GetInstanceForApplication(Xamarin.Forms.Forms.Context);
    var iBeaconParser = new BeaconParser();
    // iBeacon frame plus one extra data field (d:25-25) for the battery byte
    iBeaconParser.SetBeaconLayout("m:2-3=0215,i:4-19,i:20-21,i:22-23,p:24-24,d:25-25");
    bm.BeaconParsers.Add(iBeaconParser);
    _rangeNotifier.DidRangeBeaconsInRegionComplete += RangingBeaconsInRegion;
    bm.Bind((IBeaconConsumer)Xamarin.Forms.Forms.Context);
    return bm;
}

void RangingBeaconsInRegion(object sender, RangeEventArgs e)
{
    if (e.Beacons.Count > 0)
    {
        var beacon = e.Beacons.FirstOrDefault();
        var data = beacon.DataFields.FirstOrDefault();
        // here DataFields is empty!
    }
}
I am using Xamarin Forms, and this is the code for the Android version.
Is this possible, or do I need to use the Kontakt.io API?
UPDATE
I have removed all parsers before applying the new layout and I am now able to read the DataFields. However, I am getting a value of 8, and I have no idea what this value means.
I am not positive about the syntax on Xamarin, but try removing all existing beacon parsers before adding your custom one. I suspect the built-in iBeacon parser is still active and is matching first.
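A minimal sketch of that, assuming the Xamarin binding exposes BeaconParsers as a standard collection with a Clear() method:

private BeaconManager InitializeBeaconManager()
{
    BeaconManager bm = BeaconManager.GetInstanceForApplication(Xamarin.Forms.Forms.Context);
    // Remove the built-in parsers (e.g. the default AltBeacon parser)
    // so they cannot match the advertisement before the custom layout does.
    bm.BeaconParsers.Clear();
    var iBeaconParser = new BeaconParser();
    iBeaconParser.SetBeaconLayout("m:2-3=0215,i:4-19,i:20-21,i:22-23,p:24-24,d:25-25");
    bm.BeaconParsers.Add(iBeaconParser);
    _rangeNotifier.DidRangeBeaconsInRegionComplete += RangingBeaconsInRegion;
    bm.Bind((IBeaconConsumer)Xamarin.Forms.Forms.Context);
    return bm;
}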
The battery level in Kontakt.io beacons is part of their "scan response packet", not part of the iBeacon structure; you'll have to use CoreBluetooth to read Characteristic 1, Service 5.
A quick breakdown of how this works is also described here, and the recently launched Xamarin component uses the same CoreBluetooth approach.

SystemMediaTransportControls - setting properties not working

I'm trying to use the SystemMediaTransportControls in a background audio app. I am using the MediaPlayer class to play the audio. Setting the music properties and thumbnail all seems to work fine, but setting the control buttons (e.g. the "next" button) is not working at all. My use case is somewhat unique in that I can't get a complete playlist at once; the next track is only available through an internal method call.
Here is what I am doing:
This part is working fine; the volume control shows all the audio information and the thumbnail correctly:
var playbackItem = new MediaPlaybackItem(source);
var displayProperties = playbackItem.GetDisplayProperties();
displayProperties.Type = Windows.Media.MediaPlaybackType.Music;
displayProperties.Thumbnail = RandomAccessStreamReference.CreateFromUri(new Uri(_currentTrack.AlbumArtUrl));
displayProperties.MusicProperties.AlbumArtist = displayProperties.MusicProperties.Artist = _currentTrack.Artist;
displayProperties.MusicProperties.Title = _currentTrack.SongTitle;
displayProperties.MusicProperties.AlbumTitle = _currentTrack.Album;
playbackItem.CanSkip = true;
playbackItem.ApplyDisplayProperties(displayProperties);
_player.Source = playbackItem;
This part is not working: the "Next" button is still disabled and the "Record" button is not showing.
var smtc = _player.SystemMediaTransportControls;
smtc.ButtonPressed += OnSMTCButtonPressed;
smtc.IsEnabled = true;
smtc.IsNextEnabled = true;
smtc.IsRecordEnabled = true;
I've been trying to look for answers online but was unable to find anything useful. Any answer is appreciated.
In UWP, apart from the SMTC itself, there is also the CommandManager; to work with your SMTC manually you will have to disable it. Just put the line:
mediaPlayer.CommandManager.IsEnabled = false;
once you initialize the player, and it should work. You will find more information on MSDN:
If you are using MediaPlayer to play media, you can get an instance of the SystemMediaTransportControls class by accessing the MediaPlayer.SystemMediaTransportControls property. If you are going to manually control the SMTC, you should disable the automatic integration provided by MediaPlayer by setting the CommandManager.IsEnabled property to false.
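Putting that together with the code from the question, the order matters: disable the command manager right after creating the player, before enabling individual buttons. A sketch:

_player = new MediaPlayer();

// Disable the automatic SMTC integration before configuring buttons manually,
// otherwise the command manager keeps overriding the button state.
_player.CommandManager.IsEnabled = false;

var smtc = _player.SystemMediaTransportControls;
smtc.ButtonPressed += OnSMTCButtonPressed;
smtc.IsEnabled = true;
smtc.IsNextEnabled = true;
smtc.IsRecordEnabled = true;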

Swift iOS 9 - Building MacinTalk voice for asset: (null)

I am using Xcode 7 and Swift 2.0.
Text-to-speech works in the Simulator but not on a real iPhone 6 Plus device running iOS 9. I have properly imported the AVFoundation framework.
I tried...
@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)
    //let voice = AVSpeechSynthesisVoice(language: "en-US")
    //mySpeechUtterance.voice = voice
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices {
        if voice.language == "en-US" {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break
        }
    }
    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}
I get the following error :
Building MacinTalk voice for asset: (null)
Is there any setting I need to change on my iPhone 6 Plus with iOS 9, or do I have to download something?
I have found a suggestion here: Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test, which says:
"since iOS9, possibly a log event turned on during development that they forgot to turn off"
Just want to add to this (and by extension, the linked discussion in the original post):
I have two devices: an iPad2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:
void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>"));
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}
On the iPad Air, everything works beautifully:
Building MacinTalk voice for asset:
file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/
But on the iPad2, I hear nothing and get the following:
Building MacinTalk voice for asset: (null)
Out of curiosity, I fired up the iPad2 simulator and ran my app there. I got yet another console message:
AXSpeechAssetDownloader|error| ASAssetQuery error fetching results
(for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError
Code=21 "Unable to copy asset information"
UserInfo={NSDescription=Unable to copy asset information}
However, I heard speech! And I realized I was wearing headphones. Sure enough, when I plugged ear buds into the iPad2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe will help others with this problem):
https://forums.developer.apple.com/thread/18444
In summary: TTS "works" but is not necessarily audible without headphones/ear buds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.
Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without ear buds. So if you are unable to hear text-to-speech, try ear-buds. If that works, check whether your device is set to mute!
Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary and continueSpeaking. This works for me.
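A sketch of that swap in the Swift 2 syntax the question uses (mySpeechSynthesizer and mySpeechUtterance are the names from the question):

// Workaround: stop at an immediate boundary instead of pausing,
// then queue the utterance again rather than resuming a paused one.
if mySpeechSynthesizer.speaking {
    mySpeechSynthesizer.stopSpeakingAtBoundary(AVSpeechBoundary.Immediate)
}
mySpeechSynthesizer.speakUtterance(mySpeechUtterance)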
I finally found that there was a bug in iOS 9. Soon after Xcode 7.2 and the iOS 9.2 update were released, I tested the same code above, and text-to-speech started working.

jsPlumb library getConnections function is not returning values

Hi all, I am working with the jsPlumb library for making connections. I am stuck at one point and need help from the experts.
Here is my scenario.
I have many connections. What I want is that when I click on a connection, a certain label appears on it to show that it is selected. When I then click on some other connection, the previously clicked connection is deselected and the new connection becomes selected.
What I have done so far:
jsPlumbInst.bind('click', function(c) {
    c.showOverlay('selected');
    var previously_active = jsPlumbInst.getConnections({scope: "active"}); // this function is not returning me values
    if (previously_active.length != 0) {
        /* So it never gets into this statement */
        previously_active[0].hideOverlay('selected');
        previously_active[0].scope = "jsPlumb_DefaultScope";
    }
    c.scope = "active";
});
Here the problem is that even though my connection scope is set to "active",
jsPlumbInst.getConnections({scope: "active"})
is not returning anything.
So can anyone kindly tell me whether I am doing this right?
Or is there another way to achieve this?
var sourcecon = jsPlumb.getConnections({source: e});
for (i = 0; i < sourcecon.length; i++)
{
    var target = getName(sourcecon[i].targetId);
    var source = getName(sourcecon[i].sourceId);
    removefrommatrix(source, target, sourcecon[i].sourceId, sourcecon[i].targetId);
}
This is the code snippet I am using, and it works fine. Your code looks fine except for one difference: you have used jsPlumbInst rather than jsPlumb. I guess that could be the problem; for me it is like a static class in Java, though I am not sure about that. Try it and see if it helps. It seems I am almost a year late in replying. All the best :-)
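If the scope query keeps misbehaving, an alternative that avoids scopes altogether is to keep a reference to the currently selected connection in a variable (a sketch; 'selected' is the overlay id from the question):

var activeConnection = null;

jsPlumbInst.bind('click', function(c) {
    // Deselect the previously clicked connection, if there is one
    if (activeConnection && activeConnection !== c) {
        activeConnection.hideOverlay('selected');
    }
    c.showOverlay('selected');
    activeConnection = c;
});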

How to upload a file with BackgroundTransfer for Windows Phone

I have checked the documentation, but there is no sample showing how this is done. This is the code for download:
void Download()
{
    btr = new BackgroundTransferRequest(remoteVideoUri, localDownloadUri);
    btr.TransferPreferences = TransferPreferences.AllowCellularAndBattery;
    // Subscribe before adding the request so no early events are missed.
    btr.TransferProgressChanged += new EventHandler<BackgroundTransferEventArgs>(btr_TransferProgressChanged);
    btr.TransferStatusChanged += new EventHandler<BackgroundTransferEventArgs>(btr_TransferStatusChanged);
    BackgroundTransferService.Add(btr);
}
For upload, can it be done with this BackgroundTransfer? If so, how?
I haven't had a chance to look at the background transfer stuff in Mango yet, but there is a lab that covers it in the Mango Offline Training Kit: http://msdn.microsoft.com/en-us/WP7MangoTrainingCourse and I do know that it can do uploads as well as downloads.
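I haven't tried it end-to-end, but based on the BackgroundTransferRequest API the upload direction should look roughly like this — the server URI and file name below are placeholders, and the file has to live under shared/transfers in isolated storage:

void Upload()
{
    // Placeholder endpoint; the file must already exist under
    // shared/transfers in isolated storage for the service to read it.
    var serverUri = new Uri("http://example.com/upload", UriKind.Absolute);
    var localFileUri = new Uri("shared/transfers/myvideo.mp4", UriKind.Relative);

    var request = new BackgroundTransferRequest(serverUri)
    {
        Method = "POST",                // uploads are sent as HTTP POST
        UploadLocation = localFileUri,
        TransferPreferences = TransferPreferences.AllowCellularAndBattery
    };

    // Subscribe before adding the request so no early events are missed.
    request.TransferProgressChanged += btr_TransferProgressChanged;
    request.TransferStatusChanged += btr_TransferStatusChanged;
    BackgroundTransferService.Add(request);
}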
