If my computer is the web server for multiple live websites, is there any harm in typing ipconfig /flushdns at the command prompt?
I keep running into this problem: I embed a Flash (SWF) file in an .html page, but whenever I update the SWF, the page keeps using the old SWF even after I clear my cache and everything else.
Is there any way to make the .html page always load the latest/updated SWF file?
Typing ipconfig /flushdns won't fix your problem; that flushes the DNS cache, while your problem is that your SWF file is being cached by the browser. There are a few ways to stop this. The easiest is probably to append a random query string to the URL of the SWF file in the EMBED/OBJECT tags:
<script type="text/javascript">
<!--
document.write('<object etc ... ');
document.write('<param name="movie" value="filename.swf?r=' + Math.round(Math.random() * 99999) + '">');
document.write('... the other param tags here ...');
document.write('<embed src="filename.swf?r=' + Math.round(Math.random() * 99999) + '" etc .... </embed>');
document.write('</object>');
//-->
</script>
But be aware that this means your SWF will be downloaded afresh every time, and never from the browser cache. If you don't want this, consider adding a version number in the querystring rather than a random number, and increment this when you want clients to download a new SWF file.
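The version-number approach just means bumping a constant whenever you deploy a new SWF; a minimal sketch (the variable and function names are made up for illustration):

```javascript
// Bump SWF_VERSION on each deploy; browsers re-fetch only when it changes,
// and otherwise serve the SWF from cache as normal.
var SWF_VERSION = 3;

function swfUrl(filename) {
  return filename + '?v=' + SWF_VERSION;
}

// e.g. document.write('<embed src="' + swfUrl('filename.swf') + '" ...></embed>');
```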
I have a little PHP snippet that appends the Unix timestamp of when the file was uploaded to the server to the end of the SWF's URL. That way I just upload the new one and everything sorts itself out automagically.
So it looks like the best way to do this is with the exportAreaDefinitionFile() call. I thought I would just export the file to the local file system and then manually send it where I need it. However, when I make the exportAreaDefinitionFile() call, I don't get a file in the local file system, and my onActivityResult() handler for the export receives RESULT_CANCELED every time. Does anyone know why this would occur? Everything I've seen online says it should work.
When I look at logcat I get these messages after the exportAreaDefinitionFile() Call:
I/tango_client_api: void TangoService_disconnect(): Disconnecting from Tango...
I/tango_client_api: void TangoService_disconnect(): Successfully disconnected from Tango.
Is this normal?
I figured it out!!
It turns out that the problem was I was using this:
String mapsFolder = getFilesDir() + File.separator + "ADFs";
When I should have been using this:
String mapsFolder = getFilesDir().getAbsolutePath() + File.separator + "ADFs";
This being my first-ever Android app, I have no idea why these two calls are functionally different. I still haven't wrapped my head around the Android file system.
We are using Backbone routing, mod_rewrite, and RequireJS. The app is in a folder, not at the web root, so relative folder references are required for image, CSS, and JS files (if we could use absolute paths, the files would load).
When accessing a route with a trailing slash, none of the JS and CSS files load correctly unless an appropriate base tag is set in the header, like so:
<base href="//localhost/myapp/" />
This solution works. The problem is that we need to make the base tag a variable so we can have dev and production versions of the code, but loading a JS file that holds the variable won't itself work without a base tag in the first place.
Just to be sure, I applied the standard Backbone fixes. Handling the optional trailing slash (/):
routes: {
'faq(/)':'jumpToText',
'register(/)':'jumpToForm',
},
And setting the root in history
Backbone.history.start({pushState: true, root: "//localhost/myapp/"});
The problem appears to be an unresolvable mod_rewrite issue. So the final thought is to dynamically set the base tag.
We ultimately used JavaScript to parse the value out of location.href. Wrap this code in a script tag in the head:
document.write('<base href="//' + document.location.host + '/' + location.href.split('/')[3] + '/" />');
And we did the same in routes.js (parsing the app folder out of the URI):
Backbone.history.start({pushState: true, root: "/"+location.href.split('/')[3]});
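To see what that split is doing, here is the piece it extracts from a sample URL (the URL is just an example):

```javascript
// 'http://localhost/myapp/faq'.split('/') yields
// ['http:', '', 'localhost', 'myapp', 'faq'], so index 3 is the app folder.
var href = 'http://localhost/myapp/faq';
var appFolder = href.split('/')[3];
// appFolder is 'myapp', so the history root becomes '/myapp'
```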
A working solution that I use, which accounts for protocol/host/port, is the following:
var base = document.createElement('base');
base.href = window.location.protocol + '//' + window.location.hostname + (window.location.port ? ':' + window.location.port : '');
document.getElementsByTagName('head')[0].appendChild(base);
This currently works fine in all major browsers, including IE11 (which does not support window.location.origin).
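The origin-building step can be checked in isolation with a plain object standing in for window.location (the helper name is mine, not part of any API):

```javascript
// Same logic as the snippet above, factored out so it can take any
// location-like object with protocol, hostname, and port properties.
function buildOrigin(loc) {
  return loc.protocol + '//' + loc.hostname + (loc.port ? ':' + loc.port : '');
}

// buildOrigin({protocol: 'http:', hostname: 'localhost', port: '8080'})
//   -> 'http://localhost:8080'
// buildOrigin({protocol: 'https:', hostname: 'example.com', port: ''})
//   -> 'https://example.com'
```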
I have used this to make an npm package that also supports adding a suffix to the end of this base href, if anyone is interested:
https://www.npmjs.com/package/dynamic-base
https://github.com/codymikol/dynamic-base
I'm trying to figure out what the purpose of the file /var/resource_config.json is in Magento. It appears to perhaps be a caching of a configuration, but can't see where in the source code it is being created and/or updated.
I'm in the process of setting up local/dev/staging/prod environments for an EE1.12 build and want to figure out if I can safely exclude it from my repo or whether I need to script some updates to it for deploys.
Maybe the flash image uploader in admin creates it?
Any ideas or directions to look?
This is a configuration cache file for the "alternative media store" system. This is a system where requests for media files are routed through get.php, which allows you to store media in the database instead of the file system. (That may be a gross oversimplification, as I've never used the feature myself.)
You can safely (and should) exclude this file from deployments/source control, as it's a cache file and will be auto-generated as needed. See the following code block in the root-level get.php for more information.
if (!$mediaDirectory) {
    $config = Mage_Core_Model_File_Storage::getScriptConfig();
    $mediaDirectory = str_replace($bp . $ds, '', $config['media_directory']);
    $allowedResources = array_merge($allowedResources, $config['allowed_resources']);
    $relativeFilename = str_replace($mediaDirectory . '/', '', $pathInfo);

    $fp = fopen($configCacheFile, 'w');
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        ftruncate($fp, 0);
        fwrite($fp, json_encode($config));
    }
    flock($fp, LOCK_UN);
    fclose($fp);

    checkResource($relativeFilename, $allowedResources);
}
Speaking in general terms, Magento's var folder serves the same purpose as the *nix var folder
Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files. Sometimes a separate partition
and should be isolated to particular systems (i.e. not a part of deployments)
I have a financial system where I create PDF forms for tax forms, receipts, etc.
I have a printing page where I open the document for the client in an iframe,
which dynamically sets the src to the client's PDF:
curUser = usrSrv.getUserFromCookie(cookie);
string formSrc = "UserForms/" + curUser.Id + ".pdf";
ifPdf.Attributes.Add("src", formSrc);
In my code-behind I've inserted the no-cache properties as such:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
but still, in several cases (when the user goes back from the print page, for example) the PDF file is served from the cache and the system loses its purpose.
I've figured out there might be a way with Server.MapPath(), but when I use it the location seems fine and the file exists, yet the browser never finds the actual file or simply doesn't show it.
If you add a querystring parameter to the end of the frame's URL you will get the result you need, as long as the parameter is generated fresh and unique every time. A common way of doing that is to add something like a timestamp:
url += "?ts=" + DateTime.Now.Ticks;
or:
url += "?ts=" + new Date().getTime();
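In client-side JavaScript, the same cache-buster applied to the iframe might look like this (the function name and PDF path are made up for illustration):

```javascript
// Append a fresh timestamp so the browser treats each load as a new URL,
// handling URLs that may already carry a query string.
function bustCache(url) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'ts=' + Date.now();
}

// e.g. document.getElementById('ifPdf').src = bustCache('UserForms/42.pdf');
```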
I am using Javascript to load XML and transform it with XSLT. Everything works until I try to use <xsl:include>. Is it possible to include files this way?
Here is the relevant XSLT code.
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
...
<xsl:include href="another_xslt.xsl"/>
</xsl:stylesheet>
And here is the JavaScript code:
var xmlRequest = new XMLHttpRequest();
xmlRequest.open("GET", "data.xml", false);
xmlRequest.send(null);
var xsltRequest = new XMLHttpRequest();
xsltRequest.open("GET", "https://hostname.com/directory/xslt/transform.xsl", false);
xsltRequest.send(null);
var xsltProcessor = new XSLTProcessor();
xsltProcessor.importStylesheet(xsltRequest.responseXML);
return xsltProcessor.transformToDocument(xmlRequest.responseXML);
I am using Firefox 3.5 and the script fails on the xsltProcessor.importStylesheet line without any error messages.
Update:
Tamper Data shows a request for another_xslt.xsl but in the parent directory so it is getting a 404.
All the XSLT files are in https://hostname.edu/directory/xslt/, but Firefox is requesting http://hostname.edu/directory/another_xslt.xsl instead. Why would it request the file from the parent directory, and without the https? The HTML file is in /directory/admin/edit.
Using the full URL fixes the problem, but I need it to work on the server, so I would prefer to use the relative path like it is now.
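One workaround (untested against Firefox 3.5; the helper name is mine) is to compute the absolute include URL yourself from the stylesheet's own URL, rather than relying on the browser's base-URI handling, and use that when rewriting or fetching the included file. The resolution step is just relative-URL math for the simple same-directory case:

```javascript
// Resolve a same-directory relative href (as found in <xsl:include>)
// against the absolute URL the stylesheet was fetched from.
// Note: this handles only plain hrefs with no '../' segments.
function resolveInclude(stylesheetUrl, href) {
  // Keep the stylesheet's directory, drop its file name, append the href.
  var dir = stylesheetUrl.slice(0, stylesheetUrl.lastIndexOf('/') + 1);
  return dir + href;
}

// resolveInclude('https://hostname.com/directory/xslt/transform.xsl',
//                'another_xslt.xsl')
//   -> 'https://hostname.com/directory/xslt/another_xslt.xsl'
```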
It is possible to use xsl:include with importStylesheet(), so that shouldn't be the problem.
I don't really have the answer, but here are some pointers to verify:
Do you have an infinite inclusion recursion where file x includes y which in turn includes x?
Make sure the xsl:include is not followed by an xsl:import.
If you use something like Tamper Data, do you see Firefox requesting 'another_xslt.xsl'?
Long-shot guess: I have never seen an xsl:include that wasn't placed directly under xsl:stylesheet, before any xsl:template declarations; perhaps moving this xsl:include to the top might help (keeping in mind that xsl:import should come first).