What is the best tool / practice to enable browser history for Flash (or AJAX) websites?
I guess the established practice is to set and read a hash fragment appended to the URL, like
http://example.com/#id=1
I am aware of the Flex History Manager, but was wondering if there are any good alternatives to consider. Would also be interested in a general AJAX solution or best practice.
SWFAddress has been widely used and tested. It makes it almost trivial (given you plan ahead) to handle deeplinking in Flash. It provides a JS and AS library that work together and make the whole process pretty foolproof. You'd want to look at something like RSH for AJAX.
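For illustration, here's a minimal sketch of the JS side. I'm assuming the SWFAddress script is already on the page; setValue/getValue are part of its API, and onChange is the hook the JS library calls when the address changes (the '/gallery/photo-5/' path is just a placeholder):

// Push a new "deep link" onto the browser history
SWFAddress.setValue('/gallery/photo-5/');

// React when the user hits back/forward or arrives via a deep link
SWFAddress.onChange = function () {
    var path = SWFAddress.getValue(); // e.g. '/gallery/photo-5/'
    // route your app (Flash or AJAX) to the matching view here
};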
I've used SWFAddress for some small stuff.
For AJAX, something like Really Simple History is great.
This will seem a bit roundabout, but I'm currently using the dojo framework for that. There's a dojo.back module that was very useful when my UI was mostly JS/HTML. Now that I've gone to Flex for more power, fluid animations, and browser stability, the only thing I've needed to keep using has been the back URL.
FlexBuilder seemed to have its own browser history in default projects.
Also, the Flex 3 Cookbook has a recipe for using mx.managers.HistoryManager to create your own custom history management. I have plans to give this a try someday to remove our dependence on the dojo.back, but haven't had time yet.
I've rolled my own solutions that were ultra-simple like this:
(function ($) {
    var oldHash, newHash;

    function checkHash() {
        // Grab the hash
        newHash = document.location.hash;
        // Check to see if it changed
        if (oldHash != newHash) {
            // Trigger a custom event if it changed,
            // passing the old and new values as
            // metadata on the event.
            $(document).trigger('hash.changed', {
                old: oldHash,
                new: newHash
            });
            // Update the oldHash for the next check
            oldHash = newHash;
        }
    }

    // Poll the hash every 10 milliseconds.
    // You might need to alter this time based
    // on performance.
    window.setInterval(checkHash, 10);
})(jQuery);
Then you just need to have event handlers for the 'hash.changed' event to respond accordingly based on what the new value is. The approach works well in super simple cases.
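For example, a handler might look like this (with jQuery 1.7+; the object passed to trigger() arrives as the second argument):

$(document).on('hash.changed', function (event, data) {
    // data.old and data.new hold the previous and current hash values
    console.log('hash changed from', data.old, 'to', data.new);
});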
I would like to be able to detect when the object has fully loaded in the engine:
const box = new Entity()
box.getComponent(Transform).position.set(3, 1, 3)
const model = new GLTFShape()
box.addComponent(model)
engine.addEntity(box)
I mean something like this:
model.OnLoaded(() => {/* the model has loaded into the cache */})
or
await engine.addEntity(box)
or
engine.addEntity(box, () => {/* loading is complete */})
I can't find a way to do this. Any suggestions, other than waiting without knowing what happens?
Unfortunately, as it stands now, there is no concrete way to know when an asset has loaded. This feature (onLoading, onLoadComplete, etc.) may be included in a future update; I believe it's on the roadmap. In the meantime, you may want to add a delay of a few seconds, with setInterval or otherwise, and then call whatever code you want.
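For example, here is a rough sketch of that delay workaround using a system's update(dt) hook; the 3-second figure is a guess, not a guarantee that the model has actually loaded by then:

// Hypothetical helper: fires a callback once after a fixed delay
class DelaySystem {
  constructor(seconds, onDone) {
    this.remaining = seconds
    this.onDone = onDone
    this.fired = false
  }
  update(dt) {
    if (this.fired) return
    this.remaining -= dt
    if (this.remaining <= 0) {
      this.fired = true
      this.onDone()
    }
  }
}

engine.addSystem(new DelaySystem(3, () => {
  // assume (hope) the GLTF has finished loading by now
}))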
There are workarounds, but they are crazy and I wouldn't recommend them, for your own sanity's sake.
I'm using react-redux for a project I'm working on. I noticed that when I grab an object from my store and edit it, the object in state changes without me dispatching the change (but doesn't trigger a re-render on the components attached to that reducer's object). How can I stop state from changing without a dispatch?
For example if I do:
export function changeNeonGreenColourValue(colour) {
    return (dispatch) => {
        var neonColours = store.getState().colours.neon;
        neonColours.green = colour;
        dispatch(push('./home'));
    };
}
And then in the layoutComponent I log:
console.log(this.props.state.colours.neon.green)
The output is still whatever I passed into changeNeonGreenColourValue() as "colour" but the page doesn't re-render to show that change. I know to get the page to re-render, all I have to do is dispatch the appropriate reducer case but I don't want the state object being altered at all unless I have an appropriate dispatch.
Apparently the 'standard' deep-copying technique for solving this is to JSON-stringify and re-parse the object:

const copiedObj = JSON.parse(JSON.stringify(sourceObj));

Unfortunately, if you use this on large objects that need copying frequently, you're going to run into performance issues in your app, as I did. If anyone has any suggestions for this, I welcome them.
edit: both jQuery and Lodash have their own implementations of deep cloning that are supposed to perform better:
https://lodash.com/docs/#cloneDeep
I personally used Lodash to resolve my issue and it worked fine with little to no performance impact. I highly recommend it over JSON.stringify.
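For instance, a minimal sketch of the Lodash approach applied to the action above (assuming lodash is installed):

import cloneDeep from 'lodash/cloneDeep';

export function changeNeonGreenColourValue(colour) {
    return (dispatch) => {
        // clone first, so the mutation touches the copy, not the store
        var neonColours = cloneDeep(store.getState().colours.neon);
        neonColours.green = colour;
        dispatch(push('./home'));
    };
}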
I am creating my first project that uses ui-router.
My project has about 10 views, each with their own controller and state. I am trying to modularise/encapsulate/decouple as best as possible but I am having trouble working out where to put the onExit and onEnter state callbacks.
The first option is to put it in app.js which is currently defining all of my states, however I feel that this would not be a good place as it could cause this file to blow up and become hard to read as more states are introduced and the logic gets more complex.
The second option I looked into was to put it into a controller (I have one for each state); however, from my research this doesn't seem to be best practice.
The third option is to create a service that is resolved; however, with this option I would end up with either one giant service full of state-change functions for every state (not decoupled), or an additional service per state containing that state's change functionality, which I worry would increase project complexity.
What is the standard way to achieve this?
Are there any other options that I am missing?
Our strategy for this has been to disregard the onEnter and onExit on the state object, because as you are discovering, they feel like they are in the wrong place in terms of separation of concerns (app.js).
For onEnter: we handle setup in an activate() function in each controller, which we manually execute inside the controller. This happens to also match the callback that will get executed in Angular 2.0, which was not an accident ;).
function activate() {
    // your setup code here
}

// execute it; this line can be removed in Angular 2.0
activate();
For onExit: We rarely need an exit callback, but when we do, we listen for the $scope $destroy event.
$scope.$on("$destroy", function () {
    if (timer) {
        $timeout.cancel(timer);
    }
});
Mark Dalgleish wrote a nice little article about how to use promises in AngularJS views.
Some people asked questions about this in the comments, but Mark hasn't answered them (yet). Since I'm asking myself the same questions, I will ask on Stack Overflow instead to get an answer:
If you use promises in views, how do you handle a "loading"/"waiting" indication, given that they are async? Does a promise have something like a "resolved" or "withinRequest" property?
How do I handle errors? Normally they would arise in the second callback, but if I use a promise directly in the view I don't handle this case. Is there another way?
Thank you.
EDIT: as of Angular v1.2, the automatic resolution of promises in views is no longer activated by default.
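If I remember correctly, the old behavior can still be re-enabled in v1.2 (though it is deprecated) with a config block along these lines:

myModule.config(function ($parseProvider) {
    $parseProvider.unwrapPromises(true);
});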
The automatic resolution of promises in a view looks like a handy tool at first, but it has a number of limitations that need to be understood and evaluated carefully. The biggest issue with this approach is that it is AngularJS that adds the callbacks to a promise, and we've got little control over it.
Answering your questions:
1) As indicated, it is ultimately AngularJS that adds the success/error callbacks, so we don't have much control here. What you could do is wrap the original promise in a custom one that tracks resolution, but that kind of defeats the whole purpose of saving a few keystrokes. And no, there is nothing like a 'resolved' property. In short, there is no universal mechanism for tracking progress that would work for all promises. If your promises are $http-based, you might use interceptors or the pendingRequests property to track requests in progress.
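To illustrate the wrapping idea, a rough sketch (trackResolution is a made-up helper, not part of AngularJS):

function trackResolution(promise) {
    var tracker = { resolved: false, value: undefined };
    promise.then(function (val) {
        tracker.resolved = true;
        tracker.value = val;
    });
    return tracker;
}

// tracker.resolved can now drive a loading indicator in the view
$scope.messages = trackResolution(HelloWorld.getMessages());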
2) You can't. Once again, it is AngularJS that adds a handler inside the $parse service, and it looks like this: promise.then(function(val) { promise.$$v = val; }); (see code here). You can see that only a success callback is added, so all failures are going to be silently ignored.
Those are not the only limitations of the automatic promise resolution in the view. The other problem is that promises returned by a function won't be resolved correctly. For example, if you were to rewrite the example like so:
myModule.controller('HelloCtrl', function ($scope, HelloWorld) {
    $scope.messages = function () {
        return HelloWorld.getMessages();
    };
});
and try to use the following markup:
<li ng-repeat="message in messages()"></li>
things would not work as expected, which might come as a surprise.
In short: while the automatic resolution of promises might seem like a handy shortcut, it has a number of limitations and non-obvious behaviors. Evaluate those carefully and decide whether saving a few keystrokes is worth it.
I have a main page that uses AJAX to load subpages, and one of these subpages contains a Google map, so it loads the Google Maps API using a <script> tag:
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=SET_TO_TRUE_OR_FALSE"></script>
I noticed that this loads a bunch of css and js files into both my main page and subpage. When I click on a different link in my main page, I want to be able to unload all of these files and remove any js objects that were created, i.e., clean up everything and return to the original state. Is there any way to do this?
The answer to your question is actually a bit more complicated than you might think. A good question and set of answers that deal with many of the related details are at: What is the Proper Way to Destroy a Map Instance?.
I'm not sure from your question, but it sounds like you may have created a page that loads the Google Maps API more than one time (or could, depending on user choices) and you should avoid that entirely. Google admits there are memory leak bugs associated with reloading the map and strongly recommends against multiple map reloads. Google essentially does not support multiple map load use cases.
Check out some of the information available at the question link included above; it contains some good discussion and information.
EDIT in Response to #Engineer's Comment:
Check out the Google Maps API Office Hours May 9 2012 Video where Chris Broadfoot and Luke Mahe from Google discuss: that they don't support use cases that involve reloading the map, that the API is intended to be loaded only once, and their acknowledgement that there is a memory leak bug. Set the playback to ~12:50 to view the section about destroying the map, problems with reloading the map, and suggestions they offer to avoid problems. Primarily, if you must hide and then show a map, they recommend reusing a single map instance.
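In that spirit, here's a bare-bones sketch of the reuse pattern they suggest (the 'map' container id and the map options are just placeholders):

var map; // the single, reused map instance

function showMap() {
    var el = document.getElementById('map');
    el.style.display = 'block';
    if (!map) {
        map = new google.maps.Map(el, {
            center: new google.maps.LatLng(0, 0),
            zoom: 2
        });
    } else {
        // nudge the map to repaint after its container was hidden
        google.maps.event.trigger(map, 'resize');
    }
}

function hideMap() {
    document.getElementById('map').style.display = 'none';
}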
Old question, but here is my solution:
// First, get rid of the google.maps object (to avoid memory leaks).
// Then remove the google-maps-related script tags we can identify.
// If the Maps API gets loaded more than once on the same page, the
// console shows: "Warning: you have included the Google Maps API
// multiple times on this page. This may cause unexpected errors."
// Removing the script tags also avoids that warning.
if (window.google !== undefined && google.maps !== undefined) {
    delete google.maps;
    $('script').each(function () {
        if (this.src.indexOf('googleapis.com/maps') >= 0
                || this.src.indexOf('maps.gstatic.com') >= 0
                || this.src.indexOf('earthbuilder.googleapis.com') >= 0) {
            // console.log('removed', this.src);
            $(this).remove();
        }
    });
}
Update: Note that this is not a foolproof solution. There might be copied/cloned/referenced objects left behind. A better way would be to sandbox the map in an iframe and remove the iframe from the DOM.
Not the way you're thinking. The easiest way to accomplish this would be to use an iframe to load the "heavy" parts of your application. Then, when you get rid of the iframe, you get rid of the CSS and JS associated with the map.
In version 2, the Google Maps API had a GUnload() call, but the version 3 API doesn't have one.
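A bare-bones sketch of the iframe idea (map.html is a hypothetical page that loads the Maps API and builds the map):

// load the map, and everything it pulls in, inside an iframe
var frame = document.createElement('iframe');
frame.src = 'map.html';
document.body.appendChild(frame);

// later: removing the iframe discards the API's JS and CSS with it
document.body.removeChild(frame);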
The script-removal approach above, with vanilla JavaScript:
if (window.google?.maps) {
    delete google.maps
    document.querySelectorAll("script").forEach((script) => {
        if (
            script.src.includes("googleapis.com/maps") ||
            script.src.includes("maps.gstatic.com") ||
            script.src.includes("earthbuilder.googleapis.com")
        ) {
            script.remove()
        }
    })
}