To display a dynamically loaded image in my webapp I'm using a BufferedDynamicImageResource. (It just loads the image from a backend server based on a database id.)
The URL of the image resource ends up as:
http://localhost:8080/wicket/page?17-IResourceListener-logotype
where 17 is the sequence number, which increases for each such image I generate.
The problem is that the URL is reused from execution to execution (the sequence number is reset to 0), so when I restart the server the browser does not fetch the newly generated images but instead uses the cached versions that were generated during the last run of the webapp.
My Question: What is the best way to avoid this behavior? (If I could, for instance, add the database id of the loaded image to the URL, everything would work fine.)
The most common way to solve this would be to mount the resource as seen here. Using this approach, you could use the id as a parameter or add an (ignored) random parameter to prevent caching completely.
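Purely as an illustration of the second option, here is the cache-busting idea on the client side; the element id is hypothetical, and in Wicket itself you would attach the id or random value to the resource URL on the server side rather than in the browser:

// Cache-busting: an extra, ignored random parameter makes every URL unique,
// so the browser never serves a stale cached copy of the image.
var img = document.getElementById('logo');  // hypothetical element
img.src = '/wicket/page?17-IResourceListener-logotype&rnd=' + Math.random();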
My current task is to serve an image from my host (currently S3), but the catch is that nothing about this image should be persistent. That means I cannot persist its URL: for example, S3 always includes the same key name in the URL, even when it is presigned, so I can't use that data directly. The solution would be to create an image server that downloads the image from S3 and sends it back to the client, where the URL for this image is always dynamic and random (with a JWT). The problem is that the base64 the client receives is still persistent; it never changes. One tradeoff I could accept is randomly modifying a few characters inside the base64 string, which might mess up a pixel or two, and that's okay with me as long as it isn't noticeable, but this technique seems a bit slow because of the bandwidth cost. Is there any way I can make the image non-persistent and random every time the client receives it?
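For what it's worth, a minimal sketch of the proxy idea described above, assuming Node/Express and the AWS SDK v3; the bucket name, key layout, region, and the in-memory single-use token store (standing in for the JWT mentioned above) are all placeholder assumptions:

const express = require('express');
const crypto = require('crypto');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const app = express();
const s3 = new S3Client({ region: 'us-east-1' });     // placeholder region
const tokens = new Map();                             // single-use token -> S3 key

// Hand the client a random, one-off URL that reveals nothing about S3.
app.get('/image-url/:id', function (req, res) {
  const token = crypto.randomBytes(16).toString('hex');
  tokens.set(token, 'images/' + req.params.id + '.jpg');   // hypothetical key layout
  res.json({ url: '/image/' + token });
});

// Stream the bytes through the server; the URL differs on every request
// and the response is marked as non-cacheable.
app.get('/image/:token', async function (req, res) {
  const key = tokens.get(req.params.token);
  if (!key) return res.sendStatus(404);
  tokens.delete(req.params.token);                    // token is valid exactly once
  const obj = await s3.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: key }));
  res.set('Cache-Control', 'no-store');
  obj.Body.pipe(res);
});

app.listen(3000);

With this approach the image bytes stay intact, so there is no need to corrupt characters in the base64 payload; only the URL the client sees changes.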
Situation:
I am scanning a directory using NtQueryDirectoryFile(..., FileBothDirectoryInformation, ...). In addition to the data returned by this call, I need security data (typically returned by GetKernelObjectSecurity) and the list of alternate streams (NtQueryInformationFile(..., FileStreamInformation)).
Problem:
To retrieve the security and alternate-stream info I need to open (and close) each file. In my tests this slows the operation down by a factor of 3. Adding the GetKernelObjectSecurity and NtQueryInformationFile calls slows it down by a further factor of 4 (making it 12x overall).
Question:
Is there a better/faster way to get this information (by either opening files faster or avoiding the file open altogether)?
Ideas:
If the target file system is local, I could access it directly and (knowing the NTFS/FAT/etc. on-disk details) extract the info from the raw data. But that isn't going to work for remote file systems.
A custom SMB client seems to be the answer. Skipping the Windows/NT API layer opens all doors.
I have a very similar scenario to the one described in
how to add dynamic kml to google earth?
Note: My KML file is fetched every single second. The KML file size is ~1 MB.
When fetching the KML updates, the URL is changed as suggested in the aforementioned thread:
var url = 'test.kml?rnd='+Math.random();
This works perfectly. On the other hand, it causes the geplugin.exe process to consume more and more memory, which leads to a crash of the plugin.
Has anyone run into the same issue? Is there a way to force GE Plugin to purge the cache?
Is there a way to force GE Plugin to purge the cache?
AFAIK there isn't any way to clear the cache from JavaScript or the API.
My KML file is fetched every single second. The KML file size is ~1 MB.
Fetching a circa 1 MB KML file every second smells. How are you calling fetchKml every second and adding the data to the plugin?
Without actually seeing your code it is impossible to say what is happening, but this sounds like the root of the problem.
On the other hand, it causes the geplugin.exe process to consume more and more memory, which leads to a crash of the plugin.
It sounds as if you are creating some objects inside a tight, never-ending loop. Running out of memory would be expected in that case.
You should probably be using NetworkLinks to load the KML data rather than fetchKml, but again, without seeing your code it is impossible to say.
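As an illustration, a minimal NetworkLink setup might look like this (the KML URL and refresh interval are placeholders); the plugin then handles re-fetching on its own instead of your JavaScript building up new objects every second:

// Let the plugin refresh the KML on a timer instead of calling fetchKml in a loop.
var networkLink = ge.createNetworkLink('');
var link = ge.createLink('');
link.setHref('http://example.com/test.kml');   // placeholder URL
link.setRefreshMode(ge.REFRESH_ON_INTERVAL);   // re-fetch periodically
link.setRefreshInterval(1);                    // seconds between fetches
networkLink.setLink(link);
networkLink.setFlyToView(false);               // don't move the camera on each refresh
ge.getFeatures().appendChild(networkLink);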
I have created a Processing sketch (.pde file) that draws a time series (coffee production vs. time) and takes its data from an Excel file (.tsv table). Can anyone tell me how to include this in my webpage?
I have tried with Processing.js but it does not show anything in the browser.
Without additional information I can only guess, but you probably have your .tsv file in a "data" directory and aren't explicitly loading it from "./data/myfile.tsv", relying instead on Processing to auto-resolve the path. If you intend to use your sketch online, always include "data/" in your file locations, because browsers resolve locations relative to "where the page is right now".
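For completeness, a minimal way to wire the sketch into a page with Processing.js; the file names here are placeholders, and inside the sketch you would keep the explicit "data/" prefix (e.g. loadStrings("data/coffee.tsv")):

// Assuming a <canvas id="sketch"> element and processing.js already included on the page,
// this fetches the .pde source and binds it to the canvas. Relative paths inside the
// sketch (like "data/coffee.tsv") then resolve against the page's own URL.
Processing.loadSketchFromSources(document.getElementById('sketch'), ['coffee.pde']);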
I imagine a very common scenario is one where an entire dynamic page could be cached in such a way that the whole framework/CMS stack is bypassed, except that some small piece of information changes depending on whether somebody is logged in or not. For example, the menu might change from "login" to "Welcome Somebody!". So, obviously, the page as a whole can't simply be cached.
One solution I was thinking of would be to load this information via AJAX after the page has already loaded (sketched below).
Does anybody have any advice here?
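A minimal sketch of that AJAX idea, with a made-up endpoint and element id:

// After the cached page loads, fetch only the user-specific fragment.
document.addEventListener('DOMContentLoaded', function () {
  fetch('/user-status', { credentials: 'same-origin' })     // hypothetical endpoint
    .then(function (res) { return res.json(); })
    .then(function (user) {
      var menu = document.getElementById('login-menu');     // hypothetical element
      menu.textContent = user.loggedIn ? 'Welcome ' + user.name + '!' : 'login';
    });
});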
Write the page stream to the file system. Name the file with the entire URL including the query string. If the page contains session data, include a session id in the file name. Keep a list of cached pages with their names somewhere so that you can look up whether something is in the cache without having to go to the file system.
This is essentially what FatWire Content Server does.
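A rough sketch of that approach, assuming Node/Express; the cache directory, the SHA-1 hash used as a filesystem-safe stand-in for the full URL plus session id, and the in-memory index are all assumptions:

const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

const cacheDir = '/tmp/page-cache';                   // placeholder location
fs.mkdirSync(cacheDir, { recursive: true });
const cached = new Set();                             // index of cached pages, checked before touching the file system

function cacheKey(req) {
  // Entire URL including the query string, plus the session id when the page holds session data.
  const raw = req.originalUrl + '|' + (req.sessionID || '');
  return crypto.createHash('sha1').update(raw).digest('hex') + '.html';
}

function pageCache(req, res, next) {
  const key = cacheKey(req);
  const file = path.join(cacheDir, key);
  if (cached.has(key)) {
    return res.sendFile(file);                        // serve from disk, bypassing the framework stack
  }
  const send = res.send.bind(res);
  res.send = function (body) {                        // capture the rendered page on its way out
    fs.writeFile(file, body, function () { cached.add(key); });
    return send(body);
  };
  next();
}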
Since this appears to be language-agnostic, you could create a temp file with the raw output of the page, and then, when the same page is requested again, dump the contents of the temp file directly into the HTTP response for the current request.