I'm showing a local HTML file in a WebView, which is the center node of a Glisten View. When I press the Android back button, the app closes instead of going back to the previous view. Switching to the previous view via an appBar button works fine.
I tried attaching an event filter to both the WebView and the scene, but it never gets triggered.
javafxports version: 8.60.6
UPDATE:
The issue occurs only when the WebView is focused.
public class ImportHelpPresenter extends BasePresenter {

    @FXML
    private WebView web;

    @Override
    protected void initialize() {
        super.initialize();
        web.setContextMenuEnabled(false);
        loadHelpPage();
    }

    private void loadHelpPage() {
        String htmlContent = null;
        try {
            htmlContent = readContent("importhelp.html");
        } catch (IOException e) {
            e.printStackTrace();
        }
        web.getEngine().loadContent(htmlContent);
    }

    String readContent(String fileName) throws IOException {
        InputStream is = getClass().getResourceAsStream(fileName);
        BufferedReader br = new BufferedReader(new InputStreamReader(is));
        try {
            StringBuilder sb = new StringBuilder();
            String line = br.readLine();
            while (line != null) {
                sb.append(line);
                line = br.readLine();
            }
            return sb.toString();
        } finally {
            br.close();
        }
    }

    // On javafxports, the Android back button arrives as an ESCAPE key event
    private final EventHandler<? super KeyEvent> backButtonFilter = evt -> {
        System.out.println(evt);
        if (KeyCode.ESCAPE.equals(evt.getCode())) {
            evt.consume();
            showPreviousView();
        }
    };

    @Override
    protected void onShown() {
        super.onShown();
        web.getScene().addEventFilter(KeyEvent.ANY, backButtonFilter);
        web.addEventFilter(KeyEvent.ANY, backButtonFilter);
    }

    @Override
    protected void onHidden() {
        web.getScene().removeEventFilter(KeyEvent.ANY, backButtonFilter);
        web.removeEventFilter(KeyEvent.ANY, backButtonFilter);
    }
}
logcat:
04-19 04:49:54.760: V/InternalWebView(11536): WebView added to ViewGroup [x: 0, y: 84 , w: 480 h: 678]
04-19 04:49:54.760: V/InternalWebView(11536): Loading content: <html>
//content omitted
04-19 04:49:54.930: D/webcoreglue(11536): netstack: Memory Cache feature is ON
04-19 04:49:55.070: V/chromium(11536): external/chromium/net/host_resolver_helper/host_resolver_helper.cc:66: [0419/044955:INFO:host_resolver_helper.cc(66)] DNSPreResolver::Init got hostprovider:0x51d36010
04-19 04:49:55.070: V/chromium(11536): external/chromium/net/base/host_resolver_impl.cc:1510: [0419/044955:INFO:host_resolver_impl.cc(1510)] HostResolverImpl::SetPreresolver preresolver:0x51d25d10
04-19 04:49:55.070: D/HostStatisticManager(11536): netstack: DNS Host Prioritization is: ON, Version: 5.0.1
04-19 04:49:55.080: D/(11536): external/chromium/net/socket/tcp_fin_aggregation_factory.cc: libtcpfinaggr.so successfully loaded
04-19 04:49:55.080: D/(11536): external/chromium/net/socket/tcp_fin_aggregation_factory.cc,: TCP Fin Aggregation initializing method was found in libtcpfinaggr.so
04-19 04:49:55.080: D/TCPFinAggregation(11536): netstack: TCPFinAggregation is 1, Version 5.0.1
04-19 04:49:55.080: D/TCPFinAggregation(11536): system property net.tcp.fin.aggregation.wait was set, value: 20
04-19 04:49:55.080: D/TCPFinAggregation(11536): system property net.tcp.fin.aggregation.close was set, value: 300
04-19 04:49:55.080: D/TCPFinAggregation(11536): netstack: CloseUnusedSockets is ON, (TCPFinAggregation), Version 5.0.1
04-19 04:49:55.080: D/TCPFinAggregation(11536): Failed to get network status! received ret: -2
04-19 04:49:55.080: D/Socket_Pool(11536): netstack: CloseUnusedSockets is ON
04-19 04:49:55.080: D/Socket_Pool(11536): netstack: system net.statistics value: 0
04-19 04:49:55.080: D/Socket_Pool(11536): netstack: CloseUnusedSockets is ON
04-19 04:49:55.080: D/Socket_Pool(11536): netstack: system net.statistics value: 0
04-19 04:49:55.090: D/(11536): external/chromium/net/http/http_getzip_factory.cc: libgetzip.so successfully loaded
04-19 04:49:55.090: D/(11536): external/chromium/net/http/http_getzip_factory.cc,: GETzip initializing method was found in libgetzip.so
04-19 04:49:55.090: D/netstack(11536): netstack: Request Priority is ON
04-19 04:49:55.090: D/(11536): netstack: Getzip is: ON, Version: 5.0.1
04-19 04:49:55.180: D/WebKit(11536): ERROR:
04-19 04:49:55.180: D/WebKit(11536): alias gb18030 maps to GB18030 already, but someone is trying to make it map to GBK
04-19 04:49:55.180: D/WebKit(11536): external/webkit/Source/WebCore/platform/text/TextEncodingRegistry.cpp(152) : void WebCore::checkExistingName(char const*, char const*)
04-19 04:50:00.120: E/dalvikvm(11536): GC_CONCURRENT freed 1207K, 24% free 17251K/22599K, paused 5ms+7ms, total 350ms
04-19 04:50:02.810: D/AudioManager(11536): currentDeviceType = 1
04-19 04:50:02.870: V/FXActivity(11536): onPause
04-19 04:50:03.690: W/IInputConnectionWrapper(11536): showStatusIcon on inactive InputConnection
04-19 04:50:03.770: V/FXEntity(11536): Called Surface destroyed
04-19 04:50:03.780: V/FXActivity native(11536): [JVDBG] SURFACE created native android window at 0x0, surface = 0x0
04-19 04:50:03.790: I/GLASS(11536): Native code is notified that surface has changed (repaintall)!
04-19 04:50:04.210: W/ManagedEGLContext(11536): doTerminate failed: EGL count is 2 but managed count is 1
04-19 04:50:04.210: V/FXActivity(11536): onStop
04-19 04:50:04.210: V/FXActivity(11536): onDestroy
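Since the problem only shows up while the WebView has focus (and the logcat shows onPause/onStop/onDestroy firing, i.e. the back press reaches the Android activity instead of arriving as a key event), one workaround worth trying is to keep focus off the WebView entirely. This is an untested sketch using only standard JavaFX APIs (Worker comes from javafx.concurrent), to be called from initialize(); whether it helps depends on how the native Android WebView claims focus:
// Untested workaround sketch: keep focus away from the WebView so the back
// button keeps arriving as an ESCAPE key event on the scene.
web.setFocusTraversable(false);
web.getEngine().getLoadWorker().stateProperty().addListener((obs, oldState, newState) -> {
    if (newState == Worker.State.SUCCEEDED) {
        // hand focus to the WebView's parent once the page has loaded
        web.getParent().requestFocus();
    }
});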
Related
I am developing a trading app with a GUI and charting modules using JavaFX that has to run on a server close to the trading centers. While the app runs very smoothly on my home PC, it is unresponsive on every dedicated server or VPS I have tried so far.
The trading app is too big to share the code here, so I made a small test app that quantifies the difference. It shows the frame rate (thanks to What is the preferred way of getting the frame rate of a JavaFX application?), and clicking the button simulates some GUI work.
The dedicated server specs are:
AMD Ryzen 5 3600X 6-Core Processor 3.80 GHz
16.0 GB RAM
64-bit Operating System, x64-based processor
Windows Server 2016 Standard
My home PC specs are:
AMD Ryzen 7 5700U with Radeon Graphics 1.80 GHz
16.0 GB RAM
64-bit Operating System, x64-based processor
Windows 10 Home
Running the test on my home PC:
Frame rate after starting: 60 per second.
Frame rate when simulating GUI work: 60 per second.
Running the test on the dedicated server:
Frame rate after starting: 52 per second.
Frame rate when simulating GUI work: 40 per second.
With five instances of the app started simultaneously, the frame rate drops toward 30 frames per second on the server, while on the PC it stays at 60.
Running the app with -Dprism.verbose=true shows that the server uses the Microsoft Basic Render Driver, which might explain the difference in performance.
What can I do to improve the performance of JavaFX on the server or, if possible, to make it run as smoothly as on my home PC?
Edit: I figured out that running the app with -Dprism.order=sw as a VM option improves the performance on the server significantly. The test app now runs there at the same FPS as on the local PC.
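For reference, the same setting can also be applied programmatically, as long as it happens before the JavaFX toolkit starts; a minimal sketch, assuming it replaces the main method of the test app below:
public static void main(String[] args) {
    // These must be set before the JavaFX toolkit initializes, i.e. before launch().
    System.setProperty("prism.order", "sw");     // force the software rendering pipeline
    System.setProperty("prism.verbose", "true"); // optional: log which pipeline is chosen
    launch(args);
}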
Here is the code of the test app:
import javafx.animation.AnimationTimer;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;

public class SimpleFrameRateMeter extends Application {

    private final long[] frameTimes = new long[100];
    private int frameTimeIndex = 0;
    private boolean arrayFilled = false;

    private BorderPane borderPane;
    private Button button;
    private Label label = new Label();

    @Override
    public void start(Stage primaryStage) {
        borderPane = new BorderPane();
        runFrameRateMeter();
        // add a button to start some GUI action
        button = new Button("Start");
        button.setOnAction(e -> {
            if (button.getText().equals("Start")) {
                button.setText("Running...");
                doSomething();
            }
        });
        borderPane.setLeft(button);
        primaryStage.setScene(new Scene(borderPane, 250, 150));
        primaryStage.show();
    }

    // some GUI work
    private void doSomething() {
        new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                final int cnt = i;
                Platform.runLater(() -> {
                    //Label label = new Label(String.valueOf(cnt));
                    label.setText(String.valueOf(cnt));
                    borderPane.setCenter(label);
                });
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            Platform.runLater(() -> button.setText("Start"));
        }).start();
    }

    // Measure frame rates, from https://stackoverflow.com/questions/28287398/what-is-the-preferred-way-of-getting-the-frame-rate-of-a-javafx-application
    private void runFrameRateMeter() {
        Label label = new Label();
        borderPane.setTop(label);
        AnimationTimer frameRateMeter = new AnimationTimer() {
            @Override
            public void handle(long now) {
                long oldFrameTime = frameTimes[frameTimeIndex];
                frameTimes[frameTimeIndex] = now;
                frameTimeIndex = (frameTimeIndex + 1) % frameTimes.length;
                if (frameTimeIndex == 0) {
                    arrayFilled = true;
                }
                if (arrayFilled) {
                    // average frame duration over the last 100 frames
                    long elapsedNanos = now - oldFrameTime;
                    long elapsedNanosPerFrame = elapsedNanos / frameTimes.length;
                    double frameRate = 1_000_000_000.0 / elapsedNanosPerFrame;
                    label.setText(String.format("Current frame rate: %.3f", frameRate));
                }
            }
        };
        frameRateMeter.start();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
I have a solution (it's actually a demo of how WinAppDriver should work) but I can't for the life of me get it to work. It uses WinAppDriver with the Selenium and Appium web drivers (as mentioned 5 minutes into this video). I have the solution shown below, and when I run my AlarmAdd test I get the error "the target machine actively refused it 127.0.0.1:4723".
The full error message is at the bottom of this post.
My question is: what do I need to do so that the application we're testing, "Alarms & Clock", is actually reachable through 127.0.0.1:4723? Is there anything I have to do to make it available on that URL/port? Also, how do I verify that "app" and "Microsoft.WindowsAlarms_8wekyb3d8bbwe!App" are correct in the setup?
// Class with my test "AlarmAdd"
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium.Appium.Windows;
using System.Threading;
using System;

namespace AlarmClockTest
{
    [TestClass]
    public class ScenarioAlarm : AutoTest_SynTQ.UnitTestSession
    {
        private const string NewAlarmName = "Sample Test Alarm";

        [TestMethod]
        public void AlarmAdd()
        {
            // Navigate to New Alarm page
            session.FindElementByAccessibilityId("AddAlarmButton").Click();

            // Set alarm name
            session.FindElementByAccessibilityId("AlarmNameTextBox").Clear();
            session.FindElementByAccessibilityId("AlarmNameTextBox").SendKeys(NewAlarmName);

            // Set alarm hour
            WindowsElement hourSelector = session.FindElementByAccessibilityId("HourLoopingSelector");
            hourSelector.FindElementByName("3").Click();
            Assert.AreEqual("3", hourSelector.Text);

            // Set alarm minute
            WindowsElement minuteSelector = session.FindElementByAccessibilityId("MinuteLoopingSelector");
            minuteSelector.FindElementByName("55").Click();
            Assert.AreEqual("55", minuteSelector.Text);

            // Save the newly configured alarm
            session.FindElementByAccessibilityId("AlarmSaveButton").Click();
            Thread.Sleep(TimeSpan.FromSeconds(3));

            // Verify that a new alarm entry is created with the given hour, minute, and name
            WindowsElement alarmEntry = session.FindElementByXPath($"//ListItem[starts-with(@Name, \"{NewAlarmName}\")]");
            Assert.IsNotNull(alarmEntry);
            Assert.IsTrue(alarmEntry.Text.Contains("3"));
            Assert.IsTrue(alarmEntry.Text.Contains("55"));
            Assert.IsTrue(alarmEntry.Text.Contains(NewAlarmName));

            // Verify that the alarm is active and deactivate it
            WindowsElement alarmEntryToggleSwitch = alarmEntry.FindElementByAccessibilityId("AlarmToggleSwitch") as WindowsElement;
            Assert.IsTrue(alarmEntryToggleSwitch.Selected);
            alarmEntryToggleSwitch.Click();
            Assert.IsFalse(alarmEntryToggleSwitch.Selected);
        }

        [ClassInitialize]
        public static void ClassInitialize(TestContext context)
        {
            Setup(context);
        }

        [ClassCleanup]
        public static void ClassCleanup()
        {
            // Try to delete any alarm entry that may have been created
            while (true)
            {
                try
                {
                    var alarmEntry = session.FindElementByXPath($"//ListItem[starts-with(@Name, \"{NewAlarmName}\")]");
                    session.Mouse.ContextClick(alarmEntry.Coordinates);
                    session.FindElementByName("Delete").Click();
                }
                catch
                {
                    break;
                }
            }
            TearDown();
        }

        [TestInitialize]
        public override void TestInit()
        {
            // Invoke base class test initialization to ensure that the app is in the main page
            base.TestInit();
            // Navigate to Alarm tab
            session.FindElementByAccessibilityId("AlarmPivotItem").Click();
        }
    }
}
// Inherited class below
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium.Appium.Windows;
using OpenQA.Selenium.Remote;
using System;
using System.Threading;

namespace AutoTest_SynTQ
{
    [TestClass]
    public class UnitTestSession
    {
        private const string WindowsApplicationDriverUrl = "http://127.0.0.1:4723";
        private const string AlarmClockAppId = "Microsoft.WindowsAlarms_8wekyb3d8bbwe!App";

        protected static WindowsDriver<WindowsElement> session;
        protected static RemoteTouchScreen touchScreen;

        public static void Setup(TestContext context)
        {
            // Launch Alarms & Clock application if it is not yet launched
            if (session == null || touchScreen == null)
            {
                TearDown();

                // Create a new session to bring up the Alarms & Clock application
                DesiredCapabilities appCapabilities = new DesiredCapabilities();
                appCapabilities.SetCapability("app", AlarmClockAppId);
                session = new WindowsDriver<WindowsElement>(new Uri(WindowsApplicationDriverUrl), appCapabilities);
                Assert.IsNotNull(session);
                Assert.IsNotNull(session.SessionId);

                // Set implicit timeout to 1.5 seconds to make element search retry every 500 ms, at most three times
                session.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(1.5));

                // Initialize touch screen object
                touchScreen = new RemoteTouchScreen(session);
                Assert.IsNotNull(touchScreen);
            }
        }

        public static void TearDown()
        {
            // Clean up RemoteTouchScreen object if initialized
            touchScreen = null;

            // Close the application and delete the session
            if (session != null)
            {
                session.Quit();
                session = null;
            }
        }

        [TestInitialize]
        public virtual void TestInit()
        {
            WindowsElement alarmTabElement = null;

            // Attempt to go back to the main page in case the Alarms & Clock app started in the EditAlarm view
            try
            {
                alarmTabElement = session.FindElementByAccessibilityId("AlarmPivotItem");
            }
            catch
            {
                // Click back button if the application is in a nested page such as New Alarm or New Timer
                session.FindElementByAccessibilityId("Back").Click();
                Thread.Sleep(TimeSpan.FromSeconds(1));
                alarmTabElement = session.FindElementByAccessibilityId("AlarmPivotItem");
            }

            // Verify that the app is in the main view showing alarmTabElement
            Assert.IsNotNull(alarmTabElement);
            Assert.IsTrue(alarmTabElement.Displayed);
        }
    }
}
Test Name: AlarmAdd
Test FullName: AlarmClockTest.ScenarioAlarm.AlarmAdd
Test Source: C:\Users\ECombe.OPTIDOORS\Documents\SynTQCodedUITesting\AutoTest_SynTQ\SCN_Alarm.cs : line 30
Test Outcome: Failed
Test Duration: 0:00:00
Result StackTrace:
at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse)
at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters)
at OpenQA.Selenium.Remote.RemoteWebDriver.StartSession(ICapabilities desiredCapabilities)
at OpenQA.Selenium.Remote.RemoteWebDriver..ctor(ICommandExecutor commandExecutor, ICapabilities desiredCapabilities)
at OpenQA.Selenium.Appium.AppiumDriver`1..ctor(Uri remoteAddress, ICapabilities desiredCapabilities)
at OpenQA.Selenium.Appium.Windows.WindowsDriver`1..ctor(Uri remoteAddress, DesiredCapabilities desiredCapabilities)
at AutoTest_SynTQ.UnitTestSession.Setup(TestContext context) in C:\Users\ECombe.OPTIDOORS\Documents\SynTQCodedUITesting\AutoTest_SynTQ\UnitTestSession.cs:line 28
at AlarmClockTest.ScenarioAlarm.ClassInitialize(TestContext context) in C:\Users\ECombe.OPTIDOORS\Documents\SynTQCodedUITesting\AutoTest_SynTQ\SCN_Alarm.cs:line 71
Result Message:
Class Initialization method AlarmClockTest.ScenarioAlarm.ClassInitialize threw exception. OpenQA.Selenium.WebDriverException: OpenQA.Selenium.WebDriverException: Unexpected error. System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:4723
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
--- End of inner exception stack trace ---
at OpenQA.Selenium.Appium.Service.AppiumCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters).
The answer in my case was to use Developer mode. The actual problem was winappdriver.exe closing immediately. More details here.
I have created, with JavaFX, a desktop game that works fine (20,000 lines of Java). As it is a game, the real-time constraint is important (response time to the player's actions).
The final aim is to run this application on Android. I have almost finished "transferring" the Java code from PC to Android, even though I encountered some real-time trouble along the way. I think almost all of it is solved now.
For instance, I have minimized the CPU consumption of the Shape and Rectangle.intersect(node1, node2) calls that are used for detecting impacts between two moving objects. That alone divided the processing time by three. Great!
For testing the Android version, I use Eclipse Neon2, JavaFX, JavaFXports + Gluon, and my phone (Archos Diamond S).
But on Android phones I had a real-time problem related to the sounds that are generated with MediaPlayer and NativeAudioService.
I had already followed this advice, which suggests the synchronous mode:
javafxports how to call android native Media Player
First question:
Is there an asynchronous mode for this MediaPlayer class? I thought that would solve the latency problem.
In practice, I have tried the asynchronous solution without success: the real-time problem due to audio generation with MediaPlayer remains. Generating one sound costs 50 ms to 80 ms, whereas the main cyclic processing runs every 110 ms, so each audio generation can interfere with the execution of the main processing.
In each periodic task (period: 110 ms) I can play several sounds like that, and in one trace there were up to six sound activations that together took about 300 ms (against the 110 ms of the main cyclic task).
QUESTION:
How can I improve the performance of the NativeAudio class, especially the play() method, whose calls to setDataSource(...), prepare() and start() create the real-time problem?
THE SOLUTION
The main processing must be a synchronized method, to be sure that the complete processing will run without any audio interruption.
In addition, each complete sound-generation processing runs on a dedicated thread with Thread.MIN_PRIORITY priority.
Now the main processing runs every 110 ms and, once it begins, it cannot be disturbed by any audio generation. The display is very smooth (no more jerky movement).
There is just one minor problem: once an audio setDataSource(), start() or prepare() call has begun, it seems that the next main processing run has to wait for the end of that call before it can begin (to be confirmed).
I hope this solution can help other people. It is applicable to any audio generation with MediaPlayer.
JAVA code of the solution
The main processing is defined like that:
public static synchronized void mainProcessing() {
    // This method handles the impacts, explosions, sounds, movements, etc.,
    // in other words almost the entire game, inside a critical section.
}
/****************************************************/
In the NativeAudio class that implements "NativeAudioService":
@Override
public void play() {
    if (bSon) { // bSon: flag indicating whether sound is enabled
        Task<Void> taskSound = new Task<Void>() {
            @Override
            protected Void call() throws Exception {
                generateSound();
                return null;
            }
        };
        Thread threadSound = new Thread(taskSound);
        threadSound.setPriority(Thread.MIN_PRIORITY);
        threadSound.start();
    }
}
/****************************************************/
private void generateSound() {
    currentPosition = 0;
    nbTask++;
    noTask = nbTask;
    try {
        if (mediaPlayer != null) {
            stop();
        }
        mediaPlayer = new MediaPlayer();
        AssetFileDescriptor afd = FXActivity.getInstance().getAssets().openFd(audioFileName);
        mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
        mediaPlayer.setAudioStreamType(AudioManager.STREAM_RING);
        float floatLevel = (float) audioLevel;
        mediaPlayer.setVolume(floatLevel, floatLevel);
        mediaPlayer.setOnCompletionListener(new OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mediaPlayer) {
                if (nbCyclesAudio >= 1) {
                    mediaPlayer.start();
                    nbCyclesAudio--;
                } else {
                    mediaPlayer.stop();
                    mediaPlayer.release(); // free the resource - useful for the phone codec
                    mediaPlayer = null;
                }
            }
        });
        mediaPlayer.prepare();
        mediaPlayer.start();
        nbCyclesAudio--;
    } catch (IOException e) {
        // ignored
    }
}
I've changed the implementation you mentioned a little, given that you have a bunch of short audio files to play and want a very short delay when playing them on demand. Basically I create the AssetFileDescriptor for all the files once, and I also use the same single MediaPlayer instance all the time.
The design follows the pattern of the Charm Down library, so you need to keep the package names below.
EDIT
After the OP's feedback, I've changed the implementation to have one MediaPlayer for each audio file, so you can play any of them at any time.
Source Packages/Java:
package: com.gluonhq.charm.down.plugins
AudioService interface
public interface AudioService {
    void addAudioName(String audioName);
    void play(String audioName, double volume);
    void stop(String audioName);
    void pause(String audioName);
    void resume(String audioName);
    void release();
}
AudioServiceFactory class
public class AudioServiceFactory extends DefaultServiceFactory<AudioService> {
    public AudioServiceFactory() {
        super(AudioService.class);
    }
}
Android/Java Packages
package: com.gluonhq.charm.down.plugins.android
AndroidAudioService class
public class AndroidAudioService implements AudioService {

    private final Map<String, MediaPlayer> playList;
    private final Map<String, Integer> positionList;

    public AndroidAudioService() {
        playList = new HashMap<>();
        positionList = new HashMap<>();
    }

    @Override
    public void addAudioName(String audioName) {
        MediaPlayer mediaPlayer = new MediaPlayer();
        mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
        mediaPlayer.setOnCompletionListener(m -> pause(audioName)); // don't call stop, allows reuse
        try {
            mediaPlayer.setDataSource(FXActivity.getInstance().getAssets().openFd(audioName));
            mediaPlayer.setOnPreparedListener(mp -> {
                System.out.println("Adding audio resource " + audioName);
                playList.put(audioName, mp);
                positionList.put(audioName, 0);
            });
            mediaPlayer.prepareAsync();
        } catch (IOException ex) {
            System.out.println("Error retrieving audio resource " + audioName + " " + ex);
        }
    }

    @Override
    public void play(String audioName, double volume) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            if (positionList.get(audioName) > 0) {
                positionList.put(audioName, 0);
                mp.pause();
                mp.seekTo(0);
            }
            mp.start();
        }
    }

    @Override
    public void stop(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.stop();
        }
    }

    @Override
    public void pause(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.pause();
            positionList.put(audioName, mp.getCurrentPosition());
        }
    }

    @Override
    public void resume(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.start();
            mp.seekTo(positionList.get(audioName));
        }
    }

    @Override
    public void release() {
        for (MediaPlayer mp : playList.values()) {
            if (mp != null) {
                mp.stop();
                mp.release();
            }
        }
    }
}
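One detail to be aware of: the play(audioName, volume) implementation above never applies its volume argument. Android's MediaPlayer.setVolume(float, float) expects per-channel values in the 0.0 to 1.0 range, so if you need the parameter, a small addition along these lines should work (the clamping is my assumption, since the sample below passes 5 as the volume):
@Override
public void play(String audioName, double volume) {
    MediaPlayer mp = playList.get(audioName);
    if (mp != null) {
        // MediaPlayer.setVolume expects 0.0f..1.0f per channel, so clamp the input
        float level = (float) Math.max(0.0, Math.min(1.0, volume));
        mp.setVolume(level, level);
        if (positionList.get(audioName) > 0) {
            positionList.put(audioName, 0);
            mp.pause();
            mp.seekTo(0);
        }
        mp.start();
    }
}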
Sample
I've added five short audio files (from here), and added five buttons to my main view:
@Override
public void start(Stage primaryStage) throws Exception {
    Button play1 = new Button("p1");
    Button play2 = new Button("p2");
    Button play3 = new Button("p3");
    Button play4 = new Button("p4");
    Button play5 = new Button("p5");
    HBox hBox = new HBox(10, play1, play2, play3, play4, play5);
    hBox.setAlignment(Pos.CENTER);

    Services.get(AudioService.class).ifPresent(audio -> {
        audio.addAudioName("beep28.mp3");
        audio.addAudioName("beep36.mp3");
        audio.addAudioName("beep37.mp3");
        audio.addAudioName("beep39.mp3");
        audio.addAudioName("beep50.mp3");

        play1.setOnAction(e -> audio.play("beep28.mp3", 5));
        play2.setOnAction(e -> audio.play("beep36.mp3", 5));
        play3.setOnAction(e -> audio.play("beep37.mp3", 5));
        play4.setOnAction(e -> audio.play("beep39.mp3", 5));
        play5.setOnAction(e -> audio.play("beep50.mp3", 5));
    });

    Scene scene = new Scene(new StackPane(hBox), Screen.getPrimary().getVisualBounds().getWidth(),
            Screen.getPrimary().getVisualBounds().getHeight());
    primaryStage.setScene(scene);
    primaryStage.show();
}
@Override
public void stop() throws Exception {
    Services.get(AudioService.class).ifPresent(AudioService::release);
}
The prepare step takes place when the app is launched and the service is instantiated, so when any of the audio files is played later on, there won't be any delay.
I haven't checked whether there could be memory issues when adding several media players with big audio files, as that wasn't the initial scenario. Maybe a caching strategy would help in that case (see CacheService in Gluon Charm Down).
I am using PushStreamContent to keep a persistent connection to each client. Pushing short heartbeat messages to each client stream every 20 seconds works great with 100 clients, but with about 200 clients the heartbeats first arrive a few seconds late and then stop showing up at all.
My controller code is:
// Based loosely on https://aspnetwebstack.codeplex.com/discussions/359056
// and http://blogs.msdn.com/b/henrikn/archive/2012/04/23/using-cookies-with-asp-net-web-api.aspx
public class LiveController : ApiController
{
    public HttpResponseMessage Get(HttpRequestMessage request)
    {
        if (_timer == null)
        {
            // 20 second timer
            _timer = new Timer(TimerCallback, this, 20000, 20000);
        }

        // Get '?clientid=xxx'
        HttpResponseMessage response = request.CreateResponse();
        var kvp = request.GetQueryNameValuePairs().Where(q => q.Key.ToLower() == "clientid").FirstOrDefault();
        string clientId = kvp.Value;

        HttpContext.Current.Response.ClientDisconnectedToken.Register(
            delegate(object obj)
            {
                // Client has cleanly disconnected
                var disconnectedClientId = (string)obj;
                CloseStreamFor(disconnectedClientId);
            }
            , clientId);

        response.Content = new PushStreamContent(
            delegate(Stream stream, HttpContent content, TransportContext context)
            {
                SaveStreamFor(clientId, stream);
            }
            , "text/event-stream");

        return response;
    }

    private static void CloseStreamFor(string clientId)
    {
        Stream oldStream;
        _streams.TryRemove(clientId, out oldStream);
        if (oldStream != null)
            oldStream.Close();
    }

    private static void SaveStreamFor(string clientId, Stream stream)
    {
        _streams.TryAdd(clientId, stream);
    }

    private static void TimerCallback(object obj)
    {
        DateTime start = DateTime.Now;

        // Disable timer
        _timer.Change(Timeout.Infinite, Timeout.Infinite);

        // Every 20 seconds, send a heartbeat to each client
        var recipients = _streams.ToArray();
        foreach (var kvp in recipients)
        {
            string clientId = kvp.Key;
            var stream = kvp.Value;
            try
            {
                // ***
                // Adding this Trace statement and running in debugger caused
                // heartbeats to be reliably flushed!
                // ***
                Trace.WriteLine(string.Format("** {0}: Timercallback: {1}", DateTime.Now.ToString("G"), clientId));
                WriteHeartBeat(stream);
            }
            catch (Exception ex)
            {
                CloseStreamFor(clientId);
            }
        }

        // Trace... (this trace statement had no effect)
        _timer.Change(20000, 20000); // re-enable timer
    }

    private static void WriteHeartBeat(Stream stream)
    {
        WriteStream(stream, "event:heartbeat\ndata:-\n\n");
    }

    private static void WriteStream(Stream stream, string data)
    {
        byte[] arr = Encoding.ASCII.GetBytes(data);
        stream.Write(arr, 0, arr.Length);
        stream.Flush();
    }

    private static readonly ConcurrentDictionary<string, Stream> _streams = new ConcurrentDictionary<string, Stream>();
    private static Timer _timer;
}
Could there be some ASP.NET or IIS setting that affects this? I am running on Windows Server 2008 R2.
UPDATE:
Heartbeats are reliably sent if 1) the Trace.WriteLine statement is added, and 2) the Visual Studio 2013 debugger is attached, debugging, and capturing the Trace.WriteLine output.
Both of these are necessary: if the Trace.WriteLine is removed, running under the debugger has no effect, and if the Trace.WriteLine is there but the program is not running under the debugger (with SysInternals' DbgView showing the trace messages instead), the heartbeats are unreliable.
UPDATE 2:
Two support incidents with Microsoft later, here are the conclusions:
1) The delays with 200 clients were resolved by using a business-class Internet connection instead of a home connection.
2) Whether the debugger is attached or not really doesn't make any difference.
3) The following two additions to web.config are required to ensure heartbeats are sent in a timely manner, and that heartbeats which fail because a client disconnected "uncleanly" (e.g. by unplugging the computer rather than closing the program normally, which cleanly issues a TCP RST) trigger a timely ClientDisconnected callback as well:
<httpRuntime executionTimeout="5" />
<serverRuntime appConcurrentRequestLimit="50000" uploadReadAheadSize="1" frequentHitThreshold="2147483647" />
I'm working on a WP7/8 application with barcode scanning, and I have a problem with disposing of the camera. The camera takes a long time to initialize, and when I press the back button while it is still initializing, I get a fatal error:
A first chance exception of type 'System.ObjectDisposedException' occurred in Microsoft.Devices.Camera.ni.dll. WinRT information: Fatal error. Disposing capture device.
Could anybody help me figure out how to avoid this error?
My code:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    InitializeAndGo();
    base.OnNavigatedTo(e);
}

protected override void OnNavigatingFrom(System.Windows.Navigation.NavigatingCancelEventArgs e)
{
    disposeCamera();
}

private void PhotoCameraOnInitialized(object sender, CameraOperationCompletedEventArgs cameraOperationCompletedEventArgs)
{
    _width = Convert.ToInt32(_photoCamera.PreviewResolution.Width);
    _height = Convert.ToInt32(_photoCamera.PreviewResolution.Height);
    _luminance = new PhotoCameraLuminanceSource(_width, _height);
    if (_photoCamera.IsFlashModeSupported(FlashMode.Auto))
    {
        _photoCamera.FlashMode = FlashMode.Off;
    }
    cameraInitialized = true;
    Dispatcher.BeginInvoke(() =>
    {
        FlashCheckbox.IsEnabled = true;
        if (_photoCamera.IsFlashModeSupported(FlashMode.Auto))
        {
            _photoCamera.FlashMode = FlashMode.Off;
        }
    });
    _photoCamera.Focus();
}

private void InitializeAndGo()
{
    stopScan = false;
    _photoCamera = new PhotoCamera();
    _photoCamera.Initialized += PhotoCameraOnInitialized;
    _photoCamera.AutoFocusCompleted += PhotoCameraOnAutoFocusCompleted;
    viewfinderBrush.SetSource(_photoCamera);
    _previewTransform.Rotation = _photoCamera.Orientation;
    _results = new ObservableCollection<Result>();
    _barcodeReader = new BarcodeReader();
    _barcodeReader.TryHarder = true;
    _barcodeReader.AutoRotate = true;
    _service = new MyMoviesDataService(ErrorDataService);
}

private void disposeCamera()
{
    try
    {
        cameraInitialized = false;
        StopScan();
        _photoCamera.Initialized -= PhotoCameraOnInitialized;
        _photoCamera.AutoFocusCompleted -= PhotoCameraOnAutoFocusCompleted;
        _photoCamera.Dispose();
        _photoCamera = null;
    }
    catch (Exception ex)
    {
        App.ShowErrorToast(ex.Message);
    }
}
Don't use the camera until it has been successfully initialized (you can check this in the camera's Initialized event).
Also, wrap any usages of the camera in a
try
{
    // camera code here
}
catch (ObjectDisposedException)
{
    // re-initialize the camera?
}
to handle situations like suspension, which will dispose of the camera automatically.
As for the
An exception of type 'System.ObjectDisposedException' occurred in Microsoft.Devices.Camera.ni.dll and wasn't handled before a managed/native boundary. WinRT information: Fatal error. Disposing capture device.
This is something Microsoft needs to fix; I mean, how are you supposed to handle a native code exception if it isn't allowed to propagate to managed code?
Where is the exception coming from (which code line / block)?
For starters, I would put a try...catch around InitializeAndGo() in the OnNavigatedTo event handler, and around the whole PhotoCameraOnInitialized event handler as well.
Cheers,