I'm developing a cross-platform mobile app, and I need to read the device's proximity sensor, which provides information about the distance of a nearby physical object.
Has anyone implemented this or written a plugin for this purpose in NativeScript?
I found a partial answer to my question on how to read the proximity sensor on Android using NativeScript. I will update my answer once I have written the code for iOS as well.
To get access to a sensor on Android, we first have to import the 'application' and 'platform' modules that NativeScript provides:
import * as application from "tns-core-modules/application";
import * as platform from "tns-core-modules/platform";
declare var android: any;
Then, acquire Android's SensorManager and the proximity sensor, create a sensor event listener, and register it to listen for changes in the proximity sensor.
To register the proximity sensor:
registerProximityListener() {
    // Get the Android activity and the SensorManager
    const activity: android.app.Activity = application.android.startActivity || application.android.foregroundActivity;
    this.SensorManager = activity.getSystemService(android.content.Context.SENSOR_SERVICE) as android.hardware.SensorManager;
    // Create the listener and set up what happens on change
    this.proximitySensorListener = new android.hardware.SensorEventListener({
        onAccuracyChanged: (sensor, accuracy) => {
            console.log('Sensor ' + sensor + ' accuracy has changed to ' + accuracy);
        },
        onSensorChanged: (event) => {
            console.log('Sensor value changed to: ' + event.values[0]);
        }
    });
    // Get the proximity sensor
    this.proximitySensor = this.SensorManager.getDefaultSensor(
        android.hardware.Sensor.TYPE_PROXIMITY
    );
    // Register the listener with the sensor
    const success = this.SensorManager.registerListener(
        this.proximitySensorListener,
        this.proximitySensor,
        android.hardware.SensorManager.SENSOR_DELAY_NORMAL
    );
    console.log('Registering listener succeeded: ' + success);
}
To unregister the event listener, use:
unRegisterProximityListener() {
    console.log('Prox listener: ' + this.proximitySensorListener);
    const res = this.SensorManager.unregisterListener(this.proximitySensorListener);
    this.proximitySensorListener = undefined;
    console.log('unRegistering listener: ' + res);
}
Of course, we can change android.hardware.Sensor.TYPE_PROXIMITY to any other sensor that the Android OS provides; see the Android Sensor Overview for more about sensors. I didn't check this with other sensors, so the implementation might be a bit different, but I believe the concept is still the same.
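As a hypothetical sketch of that point, only the argument to getDefaultSensor() needs to change in the registration code above. The numeric values below mirror the documented Android Sensor type constants (listed here only for reference, since `declare var android: any` hides them from the TypeScript compiler):

```typescript
// Android sensor type constants (values per the android.hardware.Sensor docs)
const ANDROID_SENSOR_TYPES = {
    TYPE_ACCELEROMETER: 1, // android.hardware.Sensor.TYPE_ACCELEROMETER
    TYPE_LIGHT: 5,         // android.hardware.Sensor.TYPE_LIGHT
    TYPE_PROXIMITY: 8,     // android.hardware.Sensor.TYPE_PROXIMITY
};

// e.g. to listen to the light sensor instead (untested, same pattern as above):
// this.lightSensor = this.SensorManager.getDefaultSensor(android.hardware.Sensor.TYPE_LIGHT);
// const ok = this.SensorManager.registerListener(
//     this.proximitySensorListener, this.lightSensor,
//     android.hardware.SensorManager.SENSOR_DELAY_NORMAL);
```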
This solution is based on Brad Martin's solution found here.
To make this answer complete, please post your solution for iOS if you have it.
I am attempting to get the basic tutorial for the AWS Chime SDK to work in our application and the meetingSession.audioVideo.listVideoInputDevices() always returns nothing/null.
I am running this on the latest Chrome; my operating system is a Windows 10 WorkSpace instance. I have headphones plugged in, but that shouldn't make a difference.
My expected result is to return at least one device for the video. Here is the output from the Logger.
2020-08-26T15:29:19.127Z [INFO] MyLogger - attempting to trigger media device labels since they are hidden
chime-bundle.js:1 2020-08-26T15:29:19.133Z [INFO] MyLogger - unable to get media device labels
chime-bundle.js:1 2020-08-26T15:29:19.134Z [INFO] MyLogger - API/DefaultDeviceController/listVideoInputDevices null -> []
chime-bundle.js:1 Uncaught (in promise) TypeError: Cannot read property 'deviceId' of undefined
*Note: the video and audio elements are not hidden.
I have tried the code snippets from various demos, which are all just copies of AWS's walkthrough, so pretty much zero information. I have researched how audio devices work in HTML5, and looking through the files provided in the sdk-js I am even more confused. Can someone point me in the right direction?
Here is the basic code; you can get it, and a description, from the link above.
var fetchResult = await window.fetch(
    window.encodeURI("<our endpoint for backend (running c# instead of node)>"),
    {
        method: 'POST'
    }
);
let result = await fetchResult.json();
console.log("Result from Chime API:", result);
const logger = new ConsoleLogger('MyLogger', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);
const meetingResponse = result.JoinInfo.Meeting;
const attendeeResponse = result.JoinInfo.Attendee;
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
// In the usage examples below, you will use this meetingSession object.
const meetingSession = new DefaultMeetingSession(
configuration,
logger,
deviceController
);
console.log("MEETING SESSION", meetingSession);
//SETUP AUDIO
const audioElement = document.getElementById('notary-audio');
meetingSession.audioVideo.bindAudioElement(audioElement);
const videoElement = document.getElementById('notary-video');
// Make sure you have chosen your camera. In this use case, you will choose the first device.
const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();
// The camera LED light will turn on indicating that it is now capturing.
// See the "Device" section for details.
await meetingSession.audioVideo.chooseVideoInputDevice(videoInputDevices[0].deviceId);
let localTileId = null;
const observer = {
    audioVideoDidStart: () => {
        console.log('Started');
    },
    audioVideoDidStop: sessionStatus => {
        // See the "Stopping a session" section for details.
        console.log('Stopped with a session status code: ', sessionStatus.statusCode());
    },
    audioVideoDidStartConnecting: reconnecting => {
        if (reconnecting) {
            // e.g. the WiFi connection is dropped.
            console.log('Attempting to reconnect');
        }
    },
    // videoTileDidUpdate is called whenever a new tile is created or tileState changes.
    videoTileDidUpdate: tileState => {
        // Ignore a tile without attendee ID and other attendees' tiles.
        if (!tileState.boundAttendeeId || !tileState.localTile) {
            return;
        }
        // videoTileDidUpdate is also invoked when you call startLocalVideoTile or tileState changes.
        console.log(`If you called stopLocalVideoTile, ${tileState.active} is false.`);
        meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
        localTileId = tileState.tileId;
    },
    videoTileWasRemoved: tileId => {
        if (localTileId === tileId) {
            console.log(`You called removeLocalVideoTile. videoElement can be bound to another tile.`);
            localTileId = null;
        }
    }
};
meetingSession.audioVideo.addObserver(observer);
meetingSession.audioVideo.start();
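The "unable to get media device labels" log line suggests the browser has not granted media permissions yet, in which case listVideoInputDevices() can come back empty and `videoInputDevices[0].deviceId` throws exactly the TypeError shown. A hedged sketch (the helper name is mine, not part of the Chime SDK): request camera/microphone permission first, then guard against an empty list instead of indexing blindly.

```typescript
// Hypothetical helper: return the first deviceId, or null if no devices exist.
function firstDeviceId(devices: Array<{ deviceId: string }>): string | null {
    return devices.length > 0 ? devices[0].deviceId : null;
}

// Sketch, assuming a browser context (untestable outside one):
// await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
// const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();
// const deviceId = firstDeviceId(videoInputDevices);
// if (deviceId !== null) {
//     await meetingSession.audioVideo.chooseVideoInputDevice(deviceId);
// } else {
//     console.log('No video input devices found - check browser permissions.');
// }
```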
I am reading this documentation/article from Microsoft on how to distribute mobile apps with App Center. The problem is I really don't understand how to implement this. I have an app on App Center (Android), and I want to implement a mandatory update so that I can eliminate the bugs of the previous version. I tried to distribute the app with mandatory update enabled, and it is not working. How can I fix this?
https://learn.microsoft.com/en-us/appcenter/distribution/
Here is what I did: I added this code to my App.xaml.cs (Xamarin Forms project):
protected override void OnStart()
{
    AppCenter.Start("android={Secret Code};", typeof(Analytics), typeof(Crashes), typeof(Distribute));
    Analytics.SetEnabledAsync(true);
    Distribute.SetEnabledAsync(true);
    Distribute.ReleaseAvailable = OnReleaseAvailable;
}
bool OnReleaseAvailable(ReleaseDetails releaseDetails)
{
    string versionName = releaseDetails.ShortVersion;
    string versionCodeOrBuildNumber = releaseDetails.Version;
    string releaseNotes = releaseDetails.ReleaseNotes;
    Uri releaseNotesUrl = releaseDetails.ReleaseNotesUrl;
    var title = "Version " + versionName + " available!";
    Task answer;
    if (releaseDetails.MandatoryUpdate)
    {
        answer = Current.MainPage.DisplayAlert(title, releaseNotes, "Download and Install");
    }
    else
    {
        answer = Current.MainPage.DisplayAlert(title, releaseNotes, "Download and Install", "Ask Later");
    }
    answer.ContinueWith((task) =>
    {
        if (releaseDetails.MandatoryUpdate || (task as Task<bool>).Result)
        {
            Distribute.NotifyUpdateAction(UpdateAction.Update);
        }
        else
        {
            Distribute.NotifyUpdateAction(UpdateAction.Postpone);
        }
    });
    return true;
}
And here is what I added in my MainActivity.cs (Android project):
AppCenter.Start("{Secret Code}", typeof(Analytics), typeof(Crashes), typeof(Distribute));
Looking at this App Center documentation for Xamarin Forms:
You can customize the default update dialog's appearance by implementing the ReleaseAvailable callback. You need to register the callback before calling AppCenter.Start.
It looks like you need to swap your current ordering to get in-app updates working.
There could be a lot of different reasons why in-app updates are not working. As you can see in the notes here and here:
Did your testers download the app from the default browser?
Are cookies enabled for the browser in their settings?
Another important point you'll read in the links is that the feature is only available for listed distribution-group users; it is not for all your members. You could use a simple version checker for your purpose instead, or you could use a plugin.
I see the UID transmitted in the Locate app from Radius Networks, but I am not able to see it in the Google Beacon Tools app. I want to register the Eddystone-UID with Google when I use a transmitter built with the Android Beacon Library. The code is the following. What am I not considering?
uuid = "0x56987753868952999aaa";
major = "0x577886654591";
beacon = new Beacon.Builder()
        .setId1(uuid)
        .setId2(major)
        .setManufacturer(0x0118)
        .setTxPower(-56)
        .build();
beaconParser = new BeaconParser()
        .setBeaconLayout("s:0-1=feaa,m:2-2=00,p:3-3:-41,i:4-13,i:14-19");
beaconTransmitter = new BeaconTransmitter(getApplicationContext(), beaconParser);
beaconTransmitter.setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_LATENCY);
beaconTransmitter.startAdvertising(beacon, new AdvertiseCallback() {
    @Override
    public void onStartSuccess(AdvertiseSettings settingsInEffect) {
        Log.d(TAG, "EMISION CORRECTA: ");
        super.onStartSuccess(settingsInEffect);
    }
});
I have been developing a map app using the Google Maps Android API. I used the Google Maps Android API Utility Library to add a GeoJSON layer (with polygon geometry).
String gj = loadJSONfromAssets();
GeoJsonLayer layer = new GeoJsonLayer(mMap, gj);
And I also added a WMS layer as a TileOverlay. I want map objects to be selectable; for example, users can click on map objects (the GeoJSON layer) and get their attributes. About this case, I only found that objects like Point, Polyline, and Polygon can have click events. My question is: how can I set this event for all objects in a layer (a GeoJSON layer)?
I found that the example provided at https://github.com/googlemaps/android-maps-utils/blob/master/demo/src/com/google/maps/android/utils/demo/GeoJsonDemoActivity.java has a feature click listener:
// Demonstrate receiving features via GeoJsonLayer clicks.
layer.setOnFeatureClickListener(new GeoJsonLayer.GeoJsonOnFeatureClickListener() {
    @Override
    public void onFeatureClick(GeoJsonFeature feature) {
        Toast.makeText(GeoJsonDemoActivity.this,
                "Feature clicked: " + feature.getProperty("title"),
                Toast.LENGTH_SHORT).show();
    }
});
Any updates on this topic? I have the same problem.
for (i in 0 until body.lands.size) {
    val geo = body.lands[i]
    val geos = geo.get("geometry")
    val properties = geo.get("properties")
    Log.i("Properties", properties.toString())
    val geometryJson: JSONObject = JSONObject(geos.toString())
    val geoJsonData: JSONObject = geometryJson
    val layer = GeoJsonLayer(mMap, geoJsonData)
    val style: GeoJsonPolygonStyle = layer.defaultPolygonStyle
    style.fillColor = resources.getColor(R.color.darkGray)
    style.strokeColor = resources.getColor(R.color.darkerGray)
    style.strokeWidth = 2f
    layer.addLayerToMap()
    layer.setOnFeatureClickListener {
        Log.i("Properties", properties.toString())
    }
}
Thanks for taking the time to read this.
Concept
Connect a Wiimote to Windows (or Mac) using the default HID driver and service (because Bluetooth L2CAP is not yet supported in chrome.bluetoothSocket). I want to send and receive data from the device: buttons, gyroscope, IR camera, etc.
My setup (relevant parts)
Macbook pro (installed Yosemite)
Chrome Canary (Version 41.0.2246.0 canary (64-bit)), running in verbose logging mode
Windows 7 through bootcamp
Wiimote (Nintendo RVL-CNT-01)
Code to send data to device
var _wiimote = this;
var bytes = new Uint8Array(21);
var reportId = 0x11;
bytes[0] = 0x10;
chrome.hid.send(_wiimote.connectionId, reportId, bytes.buffer, function() {
    if (chrome.runtime.lastError) {
        console.log(chrome.runtime.lastError.message);
        console.log(chrome.runtime.lastError);
    }
    console.log("Wiimote send data");
    chrome.hid.receive(_wiimote.connectionId, function(reportId, data) {
        if (chrome.runtime.lastError) {
            console.log(chrome.runtime.lastError.message);
            console.log(chrome.runtime.lastError);
            return;
        }
        console.log("done receiving.");
        console.log("received: " + data.byteLength);
    });
});
Expected behavior?
Light up the first LED and stop the blinking that starts when the device is connected, but not paired.
Actual behavior?
The console shows "Wiimote send data", but the LEDs do not react to the sent report.
So this code doesn't do much, but according to the documentation on wiibrew (see resources below), it should tell the Wiimote to light up the first LED. This goes for any output report I'm sending to the device. The API does react when I use wrong reportIds or different byte lengths: then the send fails.
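One thing worth checking against the wiibrew documentation: output report 0x11 (player LEDs) carries a single payload byte, with LED1 to LED4 in bits 4 to 7, so a 1-byte buffer may behave differently from the 21-byte one above. A hedged sketch (the helper name is mine, not part of any API):

```typescript
// Build the 1-byte payload for Wiimote output report 0x11 (player LEDs).
// LED1..LED4 occupy bits 4..7 of the byte, per the wiibrew documentation.
function ledPayload(led1: boolean, led2: boolean, led3: boolean, led4: boolean): Uint8Array {
    let b = 0;
    if (led1) { b |= 0x10; }
    if (led2) { b |= 0x20; }
    if (led3) { b |= 0x40; }
    if (led4) { b |= 0x80; }
    return new Uint8Array([b]);
}

// e.g. light only the first LED (untested against real hardware):
// chrome.hid.send(_wiimote.connectionId, 0x11, ledPayload(true, false, false, false).buffer, callback);
```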
Next up would be the receiving part, if I were to send anything (really any data) to reportId 0x15 on the wiimote, it should send me an information report. I've tried polling for messages like this:
Code to receive data from the device
var _wiimote = this;
var pollForInput = function() {
    chrome.hid.receive(_wiimote.connectionId, function(reportId, data) {
        console.log("receiving something", reportId);
        if (_wiimote.pollReceive) {
            setTimeout(pollForInput, 0);
        }
    });
};
Expected behavior?
After sending 0x00 to the reportId 0x15 I should receive an information report from the wiimote device.
Actual behavior?
I've never received anything in the console output indicating communication from the device.
Resources used
http://wiibrew.org/wiki/Wiimote
https://developer.chrome.com/apps/hid