I have developed a Watchapp with Pebble.js that fetches a remote file, containing an integer, and emits that many "short" Vibe events.
The trouble is: Vibe events do not happen if one is currently in process. I have resorted to something like this to try to spread them out (where BUMP_COUNT_INT == number of Vibes to emit):
for (var i = 0; i < BUMP_COUNT_INT; i++) {
  setTimeout(function() {
    Vibe.vibrate('short');
  }, 900 * i);
}
However, even this 900 ms spacing between vibes isn't consistent. There is sometimes more or less space between them, and they sometimes merge (causing fewer vibes than expected).
It appears that the C SDK is capable of custom sequences.
I was hoping someone had come across a cleaner workaround, or a more stable way to pull this off using Pebble.js ... ?
Should I just accept that I'll have to spread the Vibes out even further, if I want to continue with Pebble.js?
What would you do?
Custom patterns are not available in Pebble.js, but you could easily add a new 'type' of vibe on the JavaScript side and implement it as a custom pattern on the C side of Pebble.js.
The steps would be:
Clone the Pebble.js project on GitHub and get a local copy. You will need to download and install the Pebble SDK to compile it locally on your computer (this will not work on CloudPebble).
Declare a new type of vibe command in src/js/ui/simply-pebble.js (the Pebble.js JavaScript library):
var vibeTypes = [
  'short',
  'long',
  'double',
  'custom'
];
var VibeType = makeArrayType(vibeTypes);
Create a new type of Vibe in src/simply/simply_msg.c
enum VibeType {
  VibeShort = 0,
  VibeLong = 1,
  VibeDouble = 2,
  VibeCustom = 3,
};
And then extend the Vibe command handler to support this new type of vibe in src/simply/simply_msg.c
static void handle_vibe_packet(Simply *simply, Packet *data) {
  VibePacket *packet = (VibePacket*) data;
  switch (packet->type) {
    case VibeShort: vibes_short_pulse(); break;
    case VibeLong: vibes_long_pulse(); break;
    case VibeDouble: vibes_double_pulse(); break;
    case VibeCustom: {
      static const uint32_t segments[] = { 200, 100, 400 };
      VibePattern pat = {
        .durations = segments,
        .num_segments = ARRAY_LENGTH(segments),
      };
      vibes_enqueue_custom_pattern(pat);
      break;
    }
  }
}
An even better solution would be to suggest a patch so that any custom pattern could be designed on the JavaScript side and sent to the watch.
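As a purely illustrative sketch of that idea (none of this exists in Pebble.js today), the JavaScript side could accept an array of alternating on/off durations in milliseconds and send it to the watch, where the C handler would copy it into a VibePattern. The array-accepting Vibe.vibrate call below is hypothetical:
// Hypothetical JS-side usage -- Pebble.js does not support this today.
// Build a pattern of BUMP_COUNT_INT buzzes separated by fixed pauses,
// expressed as alternating vibrate/pause durations in milliseconds.
var BUMP_COUNT_INT = 3;
var durations = [];
for (var i = 0; i < BUMP_COUNT_INT; i++) {
  durations.push(200);              // buzz for 200 ms
  if (i < BUMP_COUNT_INT - 1) {
    durations.push(400);            // pause for 400 ms between buzzes
  }
}
// One message to the watch instead of several setTimeout-spaced
// Vibe.vibrate('short') calls that can drift or merge.
Vibe.vibrate(durations);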
I understand that Cypress is designed for e2e testing and is not a generic browser automation tool. However, I'm wondering if it's possible to use Cypress to log into a website and crawl through it pulling all the hrefs, thus building a list of the pages you'd like to test.
For a large website, this seems almost necessary for certain kinds of e2e tests, but I'm stuck trying to implement it. Here's what I have:
describe('Link crawler', () => {
  const linksQueue = ['www.example.com/'];
  const seen = {};

  before(() => {
    cy.login(email, password);
  });

  // Build a queue, typical BFS algorithm.
  // For all links in the queue, pull out the anchor tags and add new hrefs to the queue.
  // Mark links as seen so you don't infinitely loop.
  while (linksQueue.length) {
    let currentLink = linksQueue.pop();
    it(`${currentLink} should have links.`, () => {
      cy.visit(`${currentLink}`);
      cy.window().then(win => {
        let anchorTags = win.document.getElementsByTagName('a');
        for (let idx = 0; idx < anchorTags.length; ++idx) {
          let newLink = anchorTags[idx].href;
          if (!(newLink in seen)) {
            linksQueue.unshift(newLink);
          }
          seen[newLink] = true;
        }
      });
    });
  }
});
The problem with the above is that Cypress only processes what's in the queue to begin with, so this will run and extract links, but only on 'www.example.com/'.
How can I use Cypress to work over a queue of links that continues to grow? Is there something else I can use besides cy.window?
I've made this work using Puppeteer, but it would be great to use a single library and Cypress is my team's tool of choice for e2e.
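For what it's worth, a rough, untested sketch of one possible direction (not from the question above) is to keep the whole crawl inside a single it() block and recurse through chained commands, so links discovered at runtime can still be visited. This assumes login is handled in before() and that every discovered link points at the same origin:
describe('Link crawler', () => {
  const seen = {};

  // Recursively visit pages from a queue inside one test, so hrefs
  // discovered at runtime are still processed.
  function crawl(queue) {
    if (!queue.length) {
      return;
    }
    const currentLink = queue.shift();
    cy.visit(currentLink);
    cy.window()
      .then(win => {
        const anchors = win.document.getElementsByTagName('a');
        for (let idx = 0; idx < anchors.length; ++idx) {
          const newLink = anchors[idx].href;
          if (!(newLink in seen)) {
            seen[newLink] = true;
            queue.push(newLink);
          }
        }
      })
      .then(() => crawl(queue)); // keep going with whatever was just added
  }

  before(() => {
    cy.login(email, password);
  });

  it('crawls every reachable page', () => {
    crawl(['https://www.example.com/']);
  });
});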
This might not be a single question so much as the list of doubts that comes up when learning NativeScript from scratch.
I have 1000 or more records stored in a data table. Now I want to display them in a list view, but I don't want to read all the data at once, because each record also has an image stored in another directory that I need to read as well. For 20 to 30 records the performance is quite good, but for 1000 records it takes more than 15 minutes to read the data along with the associated images, since I'm storing some high-quality images.
Therefore I decided to read only 20 records with their respective images and display them in the list. Then, when the user reaches the 15th item in the list, I want to read 10 more records from the server.
When I searched for this I came across "RadListView Load on Demand".
Then I looked at the code below.
public addMoreItemsFromSource(chunkSize: number) {
  let newItems = this._sourceDataItems.splice(0, chunkSize);
  this.dataItems.push(newItems);
}

public onLoadMoreItemsRequested(args: LoadOnDemandListViewEventData) {
  const that = new WeakRef(this);
  const listView: RadListView = args.object;
  if (this._sourceDataItems.length > 0) {
    setTimeout(function () {
      that.get().addMoreItemsFromSource(2);
      listView.notifyLoadOnDemandFinished();
    }, 1500);
    args.returnValue = true;
  } else {
    args.returnValue = false;
    listView.notifyLoadOnDemandFinished(true);
  }
}
In NativeScript, if I want to bind to an XML element, I must use observables in the view model or exports.com_name in the associated JS file. But in this example the methods start with public. How do I use this in JavaScript?
What is new WeakRef(this)?
Why is it needed?
How do I detect that the user has scrolled to the 15th item, since that is where I want to load more data?
After getting the data, how do I update the array and show it in the list view?
Finally, I just want to know how to use load on demand.
I tried to create a Playground sample of what I have tried, but it gives an error: it cannot find the radlistview module.
Remember that I'm a beginner, so kindly keep this in mind when answering. Thank you,
and please edit the question if you feel it is not up to standard.
You can check the updated answer here:
https://play.nativescript.org/?template=play-js&id=1Xireo
TypeScript to JavaScript
You may use any TypeScript compiler to convert the source code to JavaScript. There are even online compilers, like the official TypeScript Playground, for instance.
In my opinion, it's hard to expect ES5 examples any more. ES6-ES9 introduced a lot of new features that make JavaScript development much easier, and TypeScript takes JavaScript to the next level, from interpreted to compiled.
To answer your question: in ES5 you would use the prototype chain to define methods on your class.
YourClass.prototype.addMoreItemsFromSource = function (chunkSize) {
  var newItems = this._sourceDataItems.splice(0, chunkSize);
  this.dataItems.push(newItems);
};

YourClass.prototype.onLoadMoreItemsRequested = function (args) {
  var that = new WeakRef(this);
  var listView = args.object;
  if (this._sourceDataItems.length > 0) {
    setTimeout(function () {
      that.get().addMoreItemsFromSource(2);
      listView.notifyLoadOnDemandFinished();
    }, 1500);
    args.returnValue = true;
  } else {
    args.returnValue = false;
    listView.notifyLoadOnDemandFinished(true);
  }
};
If you are using the fromObject syntax for your Observable, then these functions can be passed in as properties of that object:
addMoreItemsFromSource: function (chunkSize) {
  // ...
},
WeakRef: It helps manage your memory efficiently by keeping only a loose reference to the target; read more in the docs.
How to load more:
If you set loadOnDemandMode to Auto, then the loadMoreDataRequested event will be triggered whenever the user reaches the end of scrolling.
loadOnDemandBufferSize decides how many items before the end of the scroll the event should be triggered.
Read more in the docs.
How to update the array:
That's exactly what is showcased in the addMoreItemsFromSource function. Use .push(item) on the ObservableArray that is linked to your list view, as sketched below.
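A minimal JavaScript sketch of that idea (the property names and the fetchNextRecords helper below are illustrative, not from the answer above), assuming a view model built with fromObject whose items ObservableArray is bound to the RadListView:
var fromObject = require("tns-core-modules/data/observable").fromObject;
var ObservableArray = require("tns-core-modules/data/observable-array").ObservableArray;

var viewModel = fromObject({
  // The ObservableArray the RadListView is bound to; pushing into it
  // updates the UI automatically.
  items: new ObservableArray([]),

  onLoadMoreItemsRequested: function (args) {
    var listView = args.object;
    // fetchNextRecords is a placeholder for however you actually load
    // the next chunk of records (and images) from your server.
    fetchNextRecords(10).then(function (records) {
      records.forEach(function (record) {
        viewModel.items.push(record);
      });
      listView.notifyLoadOnDemandFinished();
    });
    args.returnValue = true;
  }
});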
When writing custom functions to be used in spreadsheet cells, the default behavior for a sheet is to recalculate on edits, i.e. adding columns or rows will cause a custom function to update.
This is a problem if the custom function calls a paid API and uses credits: the user will consume API credits automatically.
I couldn't figure out a way to prevent this, so I decided to use the UserCache to cache the results for an arbitrary 25 minutes and serve them back to the user should they happen to repeat the same function call. It's definitely not bulletproof, but it's better than nothing, I suppose. Apparently the cache can hold 10 MB, but is this the right approach? Could I be doing something smarter?
var _ROOT = {
  cache: CacheService.getUserCache(),
  cacheDefaultTime: 1500,

  // Step 1 -- Construct a unique name for function call storage using the
  // function name and arguments passed to the function
  // example: function getPaidApi(1,2,3) becomes "getPaidApi123"
  stringifyFunctionArguments: function (functionName, argumentsPassed) {
    var argstring = '';
    for (var i = 0; i < argumentsPassed.length; i++) {
      argstring += argumentsPassed[i];
    }
    return functionName + argstring;
  },

  // Step 2 -- when a user calls a function that uses a paid api, we want to
  // cache the results for 25 minutes
  addToCache: function (encoded, returnedValues) {
    var values = {
      returnValues: returnedValues
    };
    Logger.log(encoded);
    this.cache.put(encoded, JSON.stringify(values), this.cacheDefaultTime);
  },

  // Step 3 -- if the user repeats the exact same function call with the same
  // arguments, we give them the cached result
  // this way, we don't consume API credits as easily.
  checkCache: function (encoded) {
    var cached = this.cache.get(encoded);
    try {
      cached = JSON.parse(cached);
      return cached.returnValues;
    } catch (e) {
      return false;
    }
  }
};
Google Sheets already caches the values of custom functions, and will only run them again when either a) the inputs to the function have changed or b) the spreadsheet is being opened after being closed for a long time. I'm not able to replicate the recalculation you mentioned when adding and removing columns. Here's a simple example function I used to test that:
function rng() {
  return Math.random();
}
Your approach of using an additional cache for expensive queries looks fine in general. I'd recommend using the DocumentCache instead of the UserCache, since all users of the document can and should see the same cell values.
I'd also recommend a more robust encoding of function signatures, since your current implementation isn't able to distinguish between the arguments [1, 2] and [12]. You could stringify the inputs and then base64-encode the result for compactness:
function encode(functionName, argumentsPassed) {
  var data = [functionName].concat(argumentsPassed);
  var json = JSON.stringify(data);
  return Utilities.base64Encode(json);
}
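To show how this could be wired into a custom function, here is a minimal sketch; getPaidApiValue and callPaidApi are hypothetical names standing in for the real credit-consuming call:
/**
 * Hypothetical custom function that only hits the paid API on a cache miss.
 */
function getPaidApiValue(a, b, c) {
  var cache = CacheService.getDocumentCache();
  var key = encode('getPaidApiValue', [a, b, c]);

  var cached = cache.get(key);
  if (cached !== null) {
    return JSON.parse(cached);
  }

  // callPaidApi is a placeholder for the real paid request.
  var result = callPaidApi(a, b, c);
  cache.put(key, JSON.stringify(result), 1500); // cache for 25 minutes
  return result;
}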
I've stumbled upon the problem that some browsers and devices render the MeshStandardMaterial reflection poorly.
Consider the example below:
and this example below:
Both comparisons are running simultaneously on the same computer, same graphics card, identical attributes, but different browsers. As you can see, the reflections on the right are almost unidentifiable.
Additionally, I'm getting some triangulation issues at sharp angles that make it seem as if the reflection is being calculated in the vertex shader:
I understand that different browsers have different WebGL capabilities, as the results on http://webglreport.com/ illustrate:
Does anybody know what WebGL extension or feature the IE/Edge browsers are missing that I can look for? I want to put a sniffer that uses a different material if it doesn't meet the necessary requirements. Or if anybody has a full solution, that would be even better. I've already tried playing with the EnvMap's minFilter attribute, but the reflections are still calculated differently.
I don't know which extensions are needed, but you can easily test. Before you init THREE.js, put in some code like this:
const extensionsToDisable = [
  "OES_texture_float",
  "OES_texture_float_linear",
];
WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);
WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
Then just selectively disable extensions until Firefox/Chrome look the same as IE/Edge.
The first thing I'd test is disabling every extension that's in Chrome/Firefox that's not in IE/Edge just to verify that turning them all off reproduces the IE/Edge behavior.
If it does reproduce the issue then I'd do a binary search (turn on half the disabled extensions), and repeat until I found the required ones.
const extensionsToDisable = [
  "EXT_blend_minmax",
  "EXT_disjoint_timer_query",
  "EXT_shader_texture_lod",
  "EXT_sRGB",
  "OES_vertex_array_object",
  "WEBGL_compressed_texture_s3tc_srgb",
  "WEBGL_debug_shaders",
  "WEBKIT_WEBGL_depth_texture",
  "WEBGL_draw_buffers",
  "WEBGL_lose_context",
  "WEBKIT_WEBGL_lose_context",
];
WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);
WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
const gl = document.createElement("canvas").getContext("webgl");
console.log(gl.getSupportedExtensions().join('\n'));
console.log("WEBGL_draw_buffers:", gl.getExtension("WEBGL_draw_buffers"));
Collaboration Mode:
What is the best way to propagate changes from Client #1's canvas to client #2's canvas? Here's how I capture and send events to Socket.io.
$scope.canvas.on('object:modified', function(e) {
  Socket.whiteboardMessage({
    eventId: 'object:modified',
    event: e.target.toJSON()
  });
});
On the receiver side, this code works splendidly for adding new objects to the screen, but I could not find documentation on how to select and update an existing object in the canvas.
fabric.util.enlivenObjects([e.event], function(objects) {
  objects.forEach(function(o) {
    $scope.canvas.add(o);
  });
});
I did see that Objects have individual setters and one bulk setter, but I could not figure out how to select an existing object based on the event data.
Ideally, the flow would be:
Receive event with targeted object data.
Select the existing object in the canvas.
Perform bulk update.
Refresh canvas.
Hopefully someone with Fabric.JS experience can help me figure this out. Thanks!
UPDATED ANSWER - Thanks AJM!
AJM was correct in suggesting a unique ID for every newly created element. I was also able to create a new ID for all newly created drawing paths as well. Here's how it worked:
var t = new fabric.IText('Edit me...', {
  left: $scope.width / 2 - 100,
  top: $scope.height / 2 - 50
});
t.set('id', randomHash());
$scope.canvas.add(t);
I also captured newly created paths and added an id:
$scope.canvas.on('path:created', function(e) {
  if (e.target.id === undefined) {
    e.target.set('id', randomHash());
  }
});
However, I encountered an issue where my ID was visible in console log, but it was not present after executing object.toJSON(). This is because Fabric has its own serialization method which trims down the data to a standardized list of properties. To include additional properties, I had to serialize the data for transport like so:
$scope.canvas.on('object:modified', function(e) {
  Socket.whiteboardMessage({
    object: e.target.toJSON(['id']) // includes "id" in output.
  });
});
Now each object has a unique ID with which to perform updates. On the receiver's side of my code, I added AJM's object-lookup function. I placed this code in the "startup" section of my application so it would only run once (after Fabric.js is loaded, of course!)
fabric.Canvas.prototype.getObjectById = function (id) {
  var objs = this.getObjects();
  for (var i = 0, len = objs.length; i < len; i++) {
    if (objs[i].id == id) {
      return objs[i];
    }
  }
  return 0;
};
Now, whenever a new socket.io message is received with whiteboard data, I am able to find it in the canvas via this line:
var obj = $scope.canvas.getObjectById(e.object.id);
Inserting and removing are easy, but for updating, this final piece of code did the trick:
obj.set(e.object); // Updates properties
$scope.canvas.renderAll(); // Redraws canvas
$scope.canvas.calcOffset(); // Updates offsets
All of this required me to handle the following events. Paths are treated as objects once they're created.
$scope.canvas.on('object:added',function(e) { });
$scope.canvas.on('object:modified',function(e) { });
$scope.canvas.on('object:moving',function(e) { });
$scope.canvas.on('object:removed',function(e) { });
$scope.canvas.on('path:created',function(e) { });
I did something similar involving a single shared canvas between multiple users and ran into this exact issue.
To solve this problem, I added unique IDs (using a javascript UUID generator) to each object added to the canvas (in my case, there could be many users working on a canvas at a time, thus I needed to avoid collisions; in your case, something simpler could work).
Fabric objects' set method will let you add an arbitrary property, like an id: o.set('id', yourid). Before you add() a new Fabric object to your canvas (and send that across the wire), tack on an ID property. Now, you'll have a unique key by which you can pick out individual objects.
From there, you'd need a method to retrieve an object by ID. Here's what I used:
fabric.Canvas.prototype.getObjectById = function (id) {
  var objs = this.getObjects();
  for (var i = 0, len = objs.length; i < len; i++) {
    if (objs[i].id == id) {
      return objs[i];
    }
  }
  return null;
};
When you receive data from your socket, grab that object from the canvas by ID and mutate it using the appropriate set methods or copying properties wholesale (or, if getObjectById returns null, create it).
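A minimal sketch of that receive-side handling, assuming the incoming socket message has the shape sent in the updated answer above ({ object: ... } serialized with its id) and using an illustrative handler name, onWhiteboardMessage:
function onWhiteboardMessage(msg) {
  var existing = $scope.canvas.getObjectById(msg.object.id);

  if (existing) {
    // Object already on this client's canvas: copy its properties wholesale.
    existing.set(msg.object);
  } else {
    // Unknown id: create the object from the serialized data.
    fabric.util.enlivenObjects([msg.object], function (objects) {
      objects.forEach(function (o) {
        $scope.canvas.add(o);
      });
    });
  }

  $scope.canvas.renderAll();   // Redraws canvas
  $scope.canvas.calcOffset();  // Updates offsets
}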