Download dynamically created Flash

Is there a way to download dynamically created Flash (SWF) files? I'd like to fetch them and convert them to PNG (this can be done, for example, with swfrender).
For example, when I point my web browser to http://flashserver.company.com/statsgraph.jsp?data=2012-04 (URL completely made up), a JSP script on that server creates a statistics graph. But this is done on the fly, so there's no SWF file that I could download (for example with wget).
I could take a screenshot of what my browser displays, but I'd prefer to do this automatically with a server-side (shell or PHP) script, because it's about a few dozen graphs to be downloaded, once per month. BTW, what I intend to do is fully legal. :-)

Rendering SWFs server-side is unlikely to work and overly complicated. The SWF probably holds no data itself anyway; it most likely just loads the data and displays it.
You need to use Firebug or something similar to dig up the SWF's data source and save that data set instead of the SWF. Then you can create your own image graph from that data set, using JpGraph or something similar.
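As a rough sketch of that approach (assuming Firebug reveals a plain HTTP data endpoint behind the SWF; the URL and output filename below are made up), a small script run once a month could save the data set. Node.js is used here, but the same idea works in shell or PHP:
// Hypothetical data endpoint discovered with Firebug; adjust to whatever the SWF really loads.
var http = require('http');
var fs = require('fs');
var url = 'http://flashserver.company.com/statsdata.jsp?data=2012-04'; // made up
http.get(url, function (res) {
  var out = fs.createWriteStream('stats-2012-04.dat');
  res.pipe(out); // stream the response straight to disk
  out.on('finish', function () {
    console.log('Data set saved; feed it to a graphing library such as JpGraph.');
  });
});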

Related

Display STL files on the web

I'm working on a web application, using Laravel, and I have to display STL files within a slider. So far I have been using PNG pictures to build the rest and deal with the 3D part later. But later has come, eventually.
For now, the user uploads a PNG/JPEG file and it goes straight to my 'public/storage/images/' folder. Then it is displayed within the slider, on another view.
I now need to replace those basic pictures with STL files, and I am not sure how to do that.
I wanted to use Three.js, but my manager would like something easier (no parameters to fiddle with), like a basic viewer, since there's no need to interact with the object yet.
Having to deal with 3D is new to me, so I am a little bit confused about how to approach it.
Any help greatly appreciated.
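For what it's worth, the Three.js route mentioned above needs very little code for a non-interactive viewer. This is only a rough sketch (it assumes three.js and its STLLoader example script are already included on the page; the container id and model path are made up):
// Minimal non-interactive STL viewer sketch; not a drop-in solution.
var width = 400, height = 300;                                     // example slide size
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
camera.position.z = 100;
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
document.getElementById('slide').appendChild(renderer.domElement); // made-up container id
var loader = new THREE.STLLoader();
loader.load('/storage/models/part.stl', function (geometry) {      // made-up model path
  var mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
  scene.add(mesh);
  (function animate() {
    requestAnimationFrame(animate);
    mesh.rotation.y += 0.01; // slow spin so the part is readable without controls
    renderer.render(scene, camera);
  })();
});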

Generate custom Finder thumbnails for some file types

I'd like to be able to generate my own thumbnails for some image files with custom extensions (say, a .canon file that is really a TIFF), so that Finder would use them.
I don't want to change the file contents (nor am I interested in the embedded tiff thumbnail).
Creating the thumbnail from the file's contents would be easy; the tricky part is the integration. Does anyone know if it's possible?
The custom extensions won't be associated with any other app.
I've done a lot of iOS development but know very little about OS X components.
If it's not possible to use Finder at all, is it at least possible to store the thumbnails in resource forks and have them used by, say, a custom filesystem browser?
File thumbnails, as well as full-size previews (which are displayed when you tap the space bar), can be generated dynamically by Quick Look plugins for any file type that they're registered for. The thumbnails do not need to be stored in the file, although you can certainly use pregenerated thumbnails if they're already in there.
For more information on Quick Look, please refer to Apple's Quick Look Programming Guide.

A way to prevent 3rd-party elements from being loaded in Safari?

Basically, I'm looking for RequestPolicy for Safari. GlimmerBlocker, Privoxy, BFilter, etc. work well, but none of them supports a "block 3rd-party elements" feature.
I use GlimmerBlocker, and to imitate (barely) that feature, I mainly use this filter code on script-flooded websites.
replace(/<(script|noscript|iframe)([\s\S]*?)<\/(script|noscript|iframe)>/img, "")
However, I'm tired of repeatedly creating filters for each website, and whitelisting would be just as tedious.
If anybody had an idea to solve this, that would be so great. Thanks.
I made this proof-of-concept Safari extension to block external resources (images, objects, and scripts, but NOT link elements, such as stylesheet links) until they are allowed. It has a bare minimum of features, but if you are interested, I might develop it further.
I say "external" and not "third-party" because I don't know how to tell reliably whether a resource is third-party or not. This extension just blocks all resources that come from a different host than the web page. As a result, it blocks too many resources by default.
You can right-click a blocked image and use a context menu command to whitelist the image host. If a blocked image doesn't have a specified width and height, it will be invisible, so you won't be able to right-click it. (To remedy this, I will need to add code to make the empty image visible as a box.)
The whitelist command does not show up for blocked plugin objects (such as Flash objects) or scripts. I will have to add code to deal with that.
You can also whitelist the current site itself, meaning that all external resources will be allowed on that site. Again, this is done with a context menu command.
As yet, there is no way to remove items from either whitelist. This can be added.
Download the extension from here.
You can extract the source files from the extension package using this command:
xar -xf PartyPooper.safariextz
You are welcome to do whatever you like with the source.
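For anyone curious about the mechanism, the heart of an extension like this is an injected script that uses Safari's beforeload event. A bare sketch of the idea only (the real extension also has to persist the whitelists and wire up the context-menu commands):
// Block images, objects, and scripts served from a host other than the page's own.
var hostWhitelist = [];                           // hosts the user has allowed
document.addEventListener('beforeload', function (event) {
  var tag = event.target.nodeName.toLowerCase();
  if (tag !== 'img' && tag !== 'script' && tag !== 'object' && tag !== 'embed') {
    return;                                       // leave link elements (stylesheets) alone
  }
  var host = (event.url || '').split('/')[2];
  if (host && host !== location.host && hostWhitelist.indexOf(host) === -1) {
    event.preventDefault();                       // cancel the external load
  }
}, true);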

Web Page Rendering Capture

I'll start by describing the problem itself, although rather than a problem it's really a request for a better solution. I have an ASP.NET page which has a bunch of images, each with a link underneath it. Each image is in fact the latest rendering of the page that the link underneath it points to.
I scheduled a .bat script which runs every hour to fetch the images through IECapt, a web-page rendering capture utility. One thing that annoys me about this utility is that it takes a lot of time for the 20 images I have, and for a few of them, because of the Flash content, it fails to capture the actual screenshot of the website.
Now I'd like to know whether this rendering can be done with traditional programming; I'm not interested in using any utilities, I'm interested in trying this myself. The solution doesn't necessarily need to be C#-based; I'm ready to try any other language, because it gives me a chance to learn.
Thank you.
You should probably look at moz-headless-screenshot; you should be able to embed the functionality you need.
http://blog.mozilla.com/ted/2010/07/29/moz-headless-screenshot/
He also provided a sample embedding client application called moz-headless-screenshot. This is a simple command-line tool that takes a URL, image size, and output filename, and generates a PNG screenshot of the webpage.
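If you go this route, automating the hourly batch is then just a loop over your links that shells out to the tool. A rough Node.js sketch (the page URLs are placeholders and the argument order is an assumption, so check the tool's own usage text):
// Hypothetical batch driver around a command-line capture tool.
var execFile = require('child_process').execFile;
var pages = [
  'http://example.com/page1',                     // placeholders for the ~20 links
  'http://example.com/page2'
];
pages.forEach(function (url, i) {
  execFile('moz-headless-screenshot', [url, '1024x768', 'capture-' + i + '.png'],
    function (err) {
      if (err) console.error('Capture failed for ' + url + ': ' + err.message);
    });
});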
You should look into Browsershots:
http://browsershots.org/
They do what you want to do for lots of different browsers. It is even open source.
There's no simple-simple solution for what you're asking to do. This is because rendering HTML, CSS, and Flash is actually a very sophisticated process.
If you're up for quite a bit of coding, you can use the Gecko engine (which powers Firefox) or another open-source web-browser core (e.g., Dillo) to render the page onto a custom canvas, then save that canvas to a file. Unless you implement support for browser plug-ins, you won't get Flash this way, though; you could try using Gnash or the like. Good luck with that.
I don't know of an open-source project that already does this. It would be neat, though :-). If you write something, please push it to the world; it would be really cool to have a "get a screencap of this URL" tool.
One way is to use IRobotSoft web scraper. You can design a robot to go to the URL every hour, and capture the whole web page as an image via a function CapturePage(imagefile).
I am not sure if it will be better than IECapt though.
We have used the ACA WebThumb ActiveX Control (http://www.acasystems.com/en/web-thumb-activex/) quite successfully to capture parts or the whole of a web page on the web server and then write them to a file, just by passing in the URL. It performs fast enough for our needs.
I am not familiar with IECapt, but this might be something you want to have a look at.

Best way to make a newsletter slideshow kiosk for the office?

So, I've been tasked with making a kiosk for the office for showing statistics about our SCRUM progress, build server status, rentability and so forth. It should ideally run a slideshow with a bunch of different pages, some of them showing text, some showing graphs and so on.
What is the best approach for this? I first thought of PowerPoint, but it should be able to take the images from a web server so I can automate the graph-generation procedure. I would also like to take text from an external source when showing "Who broke the build" or some page like that.
I have no doubt that ready-made systems exist, but I don't really know where to look for them.
Is this easy or hard to do in PowerPoint? Or is there a ubiquitous app that everybody but me knows about?
I would recommend creating it as a series of web pages, which use JavaScript or the meta refresh tag to cycle through the different pages. Simply full-screen the browser on a spare machine, and connect it to a projector/monitor/big TV.
This has lots of benefits:
it's trivial to display images from an external server (an <img> tag)
it will cost nothing to setup (it can run on basically any functioning machine), and runs in a browser
it is quick to do (you do not have to worry about cross-browser compatibility or different screen resolutions, as you know the exact machine you are developing for)
it's expandable: while what you describe is probably possible within PowerPoint, if you do it as a web page you can use JavaScript (or a JS framework like jQuery), and if you serve the pages via a web server you can also use any server-side scripting language.
Basically, you would have a series of files, say slide001.htm, slide002.htm and slide003.htm. slide001 would redirect to slide002 after 30 seconds, slide002 to slide003, and slide003 back to slide001.
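If you go with the JavaScript option instead of a meta refresh tag, each slide only needs a few lines to jump to the next one (using the same hypothetical file names as above):
// Redirect to the next slide after 30 seconds; include this on every slide.
var slides = ['slide001.htm', 'slide002.htm', 'slide003.htm'];
var current = slides.indexOf(location.pathname.split('/').pop());
var next = slides[(current + 1) % slides.length];   // wraps back to slide001
setTimeout(function () { location.href = next; }, 30000);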
The specific things you mention: graph generation and "Who broke the build" text:
Not sure which CI tool you use, but many of them generate graphs anyway, so all that would be required is having one "slide" with something like <img src="http://hudson.abc/job/proj042/buildTimeGraph">
For the who-broke-the-build text, the easiest approach would be to run the slides as .php files served through a web server, for example using XAMPP.
Then you would have a function that scrapes your CI server for whoever broke the last build, and in one of the slides, you would have <?PHP echo(who_broke_build()); ?>
(Obviously if you know some other language/system better, use that!)
The final benefit I can think of is that, if you serve the files through a web server, you can let people display it locally, say, as their browser's home page.
Thanks. I found jqS5 which did most of what you mentioned.
It requires one document where every h2 becomes a new slide.
I can then use the meta refresh to reload to the next page every 10 seconds. When I reach the end of the slides, I pull data from an RSS feed that aggregates information from all the different systems (sketched below).
http://staticfree.info/projects/jqs5/
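For reference, pulling the item titles out of such an aggregated feed in the browser can be as short as this sketch (the feed URL is made up, and the feed has to be served from the same origin or with CORS enabled):
// Fetch the aggregated RSS feed and collect the item titles for the final slide.
fetch('http://ci.example.com/aggregate.rss')        // made-up feed URL
  .then(function (res) { return res.text(); })
  .then(function (xml) {
    var doc = new DOMParser().parseFromString(xml, 'text/xml');
    var titles = [].map.call(doc.querySelectorAll('item > title'), function (t) {
      return t.textContent;
    });
    console.log(titles);                            // replace with real rendering code
  });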

Resources