I have created a Processing sketch (.pde file) that draws a time series (coffee production vs. time), reading its data from a .tsv table exported from Excel. Can anyone tell me how to include this in my webpage?
I have tried processing.js, but it does not show anything in the browser.
Without more information to go on: you probably have your .tsv file in a "data" directory, but aren't explicitly loading it from "./data/myfile.tsv", relying instead on Processing to auto-resolve the path. If you intend to run your sketch online, always include "data/" in your file locations, because browsers resolve locations relative to where the page itself is.
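For example, with the path made explicit (a minimal sketch; the file name myfile.tsv and the column meanings are placeholders for your actual data):
// Explicit "data/" prefix, so the browser resolves the file
// relative to the page, the same way Processing does locally.
String[] rows = loadStrings("data/myfile.tsv");
for (String row : rows) {
  String[] cols = split(row, '\t');
  // cols[0] = year, cols[1] = production (placeholder column meanings)
}
Also double-check that the page actually references the sketch, e.g. a canvas whose data-processing-sources attribute points at your .pde file, and that the .tsv is uploaded alongside it.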
I'm creating my own custom binary file format.
I use the RIFF standard for encoding the data, and it seems to work pretty well.
But there are some additional requirements:
Files can be large, up to 500 MB.
Data must be saved into the file in real time, at intervals, whenever the data in the application changes.
The application may run in the browser.
The problem I face is that every save currently has to serialize everything from memory and rewrite the whole binary file.
That's not a problem while the data is small, but as it grows, the real-time saving feature doesn't scale.
So the main requirements for this binary format are:
Being able to read the file partially (because the file is huge).
Being able to write changed data into the file partially, without rewriting the whole file.
A streaming protocol like .m3u8 is not an option; we can't split the file into chunks and reference them through separate URLs.
Any guidance on how to design a binary file format that scales in this scenario?
There was an answer here from another user that has since been deleted.
It seemed great to me.
If you claim your answer back, I'll delete this one.
They said:
If we design the file format to support appending, then we are able to add whatever data we want without needing to rewrite the whole file.
This idea gave me a very good starting point.
I can append change after change at the end of the file,
then mark the superseded chunks of data in the middle of the file as obsolete.
I can reuse these obsolete slots later if I want to.
The downside is that I need to clean up the obsolete slots whenever I get a chance to rewrite the whole file; a sketch of this append-and-tombstone scheme follows.
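To make the scheme concrete, here is a minimal sketch in Java using RandomAccessFile (the chunk layout, the LIVE/OBSOLETE flag, and all names are my own assumptions for illustration, not part of RIFF):
import java.io.IOException;
import java.io.RandomAccessFile;

public class ChunkStore {
    static final byte LIVE = 0, OBSOLETE = 1;

    // Append a new chunk at the end of the file: [flag:1][length:4][payload].
    // Returns the chunk's offset so the caller can index it.
    static long append(RandomAccessFile f, byte[] payload) throws IOException {
        long offset = f.length();
        f.seek(offset);
        f.writeByte(LIVE);
        f.writeInt(payload.length);
        f.write(payload);
        return offset;
    }

    // Replace a chunk: tombstone the old copy in place, then append the new one.
    // Only a single byte is rewritten in the middle of the file.
    static long replace(RandomAccessFile f, long oldOffset, byte[] payload) throws IOException {
        f.seek(oldOffset);
        f.writeByte(OBSOLETE);
        return append(f, payload);
    }

    // Partial read: seek straight to one chunk instead of loading the whole file.
    static byte[] read(RandomAccessFile f, long offset) throws IOException {
        f.seek(offset);
        if (f.readByte() == OBSOLETE) return null; // caller must follow the index to the newer copy
        byte[] payload = new byte[f.readInt()];
        f.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile("store.bin", "rw")) {
            long at = append(f, "v1".getBytes());
            at = replace(f, at, "v2".getBytes());
            System.out.println(new String(read(f, at))); // prints v2
        }
    }
}
A periodic compaction pass that copies only LIVE chunks into a fresh file then reclaims the space held by the tombstones. In the browser, the same idea should map onto the Origin Private File System, whose access handles also allow positioned reads and writes.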
I am using wkhtmltopdf on my Ubuntu server to generate PDFs from HTML templates.
wkhtmltopdf is started from a PHP script via shell_exec.
My problem is that I want to create up to 200 PDFs at (almost) the same time, and the runtimes of the sequential wkhtmltopdf calls add up: one file takes 0.6 seconds, 15 files take 9 seconds.
My idea was to start wkhtmltopdf in a screen session to cut the overall runtime, but I can't make that work from PHP. It may not make much sense anyway, because I also want to merge all the PDFs into one after creation, so I would have to check that every session has terminated, wouldn't I?
Do you have any ideas how I can reduce the runtime for this number of PDFs, or can you advise me how to do this correctly and cleanly with screen?
My script looks like the following:
loop up to 200 times {
- get data for html-template from database
- fill template-string and write .html-file
- create pdf out of html-template via shell_exec("wkhtmltopdf....")
- delete template-file
}
merge all generated pdfs into one and send it via mail
Thank you in advance, and sorry for my bad English.
Best wishes
Just create a single large HTML file and convert it in one pass instead of merging multiple PDFs afterwards.
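The reason this works: wkhtmltopdf honors CSS page breaks, so each filled template can start on its own page and the merge step disappears. A minimal sketch of the idea (in Java for illustration only; your PHP loop would build the same string and make a single shell_exec call; the record contents and file names here are made up):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class OnePassPdf {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Stand-in for the rows fetched from the database.
        List<String> records = List.of("first record", "second record");
        StringBuilder html = new StringBuilder("<html><body>");
        for (String record : records) {
            // page-break-after forces each filled template onto its own PDF page.
            html.append("<div style=\"page-break-after: always\">").append(record).append("</div>");
        }
        html.append("</body></html>");
        Path input = Path.of("combined.html");
        Files.writeString(input, html.toString());
        // One wkhtmltopdf invocation instead of 200; no merge step needed.
        new ProcessBuilder("wkhtmltopdf", input.toString(), "combined.pdf")
                .inheritIO().start().waitFor();
    }
}
This avoids paying wkhtmltopdf's startup cost 200 times, which is likely where much of the stacking runtime comes from.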
We have a PDF document processing system implemented in AppleScript (we call the scripts from the shell using osascript). In some of the scripts, we call Acrobat Preflight droplets from the AppleScript.
This usually works without problems. However, in some cases where the processed document is big and/or complex, the droplet returns control to the script before the report is written and the document is moved to the "success" or "failure" folder. The consequence is that the process continues, but without the moved file, and eventually fails.
The workaround so far has been to add a delay after those droplet calls. This helps, but it wastes time on small documents, and there will always be a document big and complex enough to take longer than the delay.
We also found that the time needed to finish writing the report and move the document depends on the speed of the system (as was to be expected…).
The better workaround would be to calculate the delay from the document size, its number of pages, and a machine-dependent parameter. Document size and number of pages are no big deal; they can be retrieved in the AppleScript.
The problem is the machine-dependent parameter, which can be determined experimentally. But how do I make that parameter available to all the scripts that need it?
Incorporating it into the scripts is not an option: we have a number of systems installed, and doing that would end in a maintenance nightmare. Passing it as an argument in the initial system call is not possible either, because the calls are many, and that too would lead to a maintenance nightmare.
So, is there a way to set up a place where that machine parameter can be stored and easily read from any AppleScript, no matter how the script itself is called?
Thanks a lot for your advice.
You might find the Property List Suite in System Events useful. It’s a standard means of storing and then retrieving such information. Property List files themselves are simply XML files, so you can even create them outside of AppleScript and then read them within your scripts.
There’s a description with examples at https://apple.stackexchange.com/questions/58007/how-do-i-pass-variables-values-between-subsequent-applescript-runs-persistent
A simple suggestion, if you only have one parameter to keep track of, would be to have a text file in a known location on each machine. The only content of the text file would be the machine parameter. I like to use the Application Support folder for this kind of thing.
Assuming your machine parameter is CPU speed, you could save a text file at /Library/Application Support/Preflight Scripts/machinecpu.txt with the contents:
2.4
Then, in AppleScript, you would just read the text file:
set machineParam to read file "Macintosh HD:Library:Application Support:Preflight Scripts:machinecpu.txt"
I am reading content from an XML file over the internet.
The file contains about 10,000 XML elements, and every element is loaded into a list (one picture and headline per element).
This slows the app down extremely.
Is there a way to speed this up?
Maybe with something like a select command?
Are there any examples or tutorials out there?
You are out of luck for an easy, straightforward answer.
If you control the server the XML file is coming from, you should change it to support pagination of the results instead of sending the complete document.
If you don't control the server, you could set up one to proxy the results and do the pagination for the application on the server side.
The last option is to process the file in chunks. That would mean processing substrings of the text: take a substring of the first x characters, parse it, and do something with the results; if you need more, process the next x characters. This can get messy fast, because XML doesn't really parse nicely in this manner; a streaming (SAX-style) parser, sketched below, achieves the same effect without the substring juggling. And in any case, downloading a document with 10k elements and loading it into memory is probably going to be taxing/slow/expensive (if downloading over a 3G connection) for mobile devices.
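To illustrate the streaming alternative: a SAX parser calls you back one element at a time as it reads the input, so the whole 10,000-element document never sits in memory at once. A minimal sketch in Java (the element name "item", the URL, and the page size of 50 are assumptions about your feed, not known facts):
import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class StreamingFeed {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new URL("http://example.com/feed.xml").openStream()) {
            SAXParserFactory.newInstance().newSAXParser().parse(in, new DefaultHandler() {
                int count = 0;

                @Override
                public void startElement(String uri, String local, String qName, Attributes attrs) {
                    // Only hand the first "page" of items to the UI;
                    // fetch the next page when the user scrolls near the end.
                    if ("item".equals(qName) && ++count <= 50) {
                        // append headline and picture URL to the list model here
                    }
                }
            });
        }
    }
}
Loading the pictures lazily (only for rows actually on screen) usually removes most of the remaining slowness.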
I am using JMeter and have 2 questions (I have read the FAQ, Wiki, etc.):
I use the Graph Results listener. It seems to have a fixed time span, e.g. 2 hours (just a guess; this is not indicated anywhere, AFAIK), after which it wraps around and starts drawing on the same canvas from the left again. Hence, after a long weekend run it only shows the results of the last 2 hours. Can I configure that span or other properties (beyond the checkboxes I see on the Graph Results listener itself)?
Can I save the results of a run and open them later? I know I can save the test plan or parts of it, but I am unclear whether I can separately save just the test results data and later open it to perform comparisons etc. Furthermore, can I open the data with listeners that weren't part of the original test? (I think of the test as accumulating data; later on I want to view and interpret that data using different "viewers".)
Thanks,
-- Shaul
Don't know about 1. Regarding 2: listeners typically have a configuration field for "Write All Data to a File", which lets you specify the file name. You can use the Simple Data Writer to store results efficiently for later analysis.
You can load results from a previous test into a visualizer by choosing "Write All Data to a File" and browsing for the file you wish to load. Somewhat counterintuitively, selecting a file for writing also loads that file into the visualizer and displays the results. Just make sure you don't run the test again while that file is selected; otherwise you will lose your saved test data. :-)
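The same save-then-reload workflow also covers long unattended runs from the command line; the flags below are standard JMeter options, though the file names here are made up:
jmeter -n -t weekend-plan.jmx -l results.jtl
Afterwards, open the GUI, add whatever listener you like, and point its "Write All Data to a File" field at results.jtl to replay the stored samples into that listener.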
Well, I later found a JMeter group discussing the issue raised in my first question, and B.Ramann gave me an excellent suggestion to use a better graph, found here, instead.
-- Shaul