Origin of "embarrassingly parallel" phrase [closed] - parallel-processing

For the purposes of history on Wikipedia, is anyone familiar with the origin of the phrase "embarrassingly parallel"? I've always thought it may have been coined by a random Google employee who first worked on MapReduce. Does anyone have any concrete info on the origin?

The first use I could find in an advanced Google Books search was in an IEEE Computer Society digest published in 1978. The context, and the fact that the author put "embarrassingly" in quotes, suggests to me that the phrase was not coined there but had been used before.

It's decades old, but I first heard it used in reference to graphics rendering. Imagine you're rendering an animated movie: each frame is 2000x1000 pixels, there are 24 frames per second, 60 seconds in a minute, and 100 minutes in the movie. That's almost 300 billion pixels that can all be computed in parallel. That's so parallel that it's embarrassing to compute it serially.
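As a minimal sketch of what "embarrassingly parallel" means in that rendering example (not from any of the original posts; render_frame is just a stand-in for a real renderer): each frame depends only on its own index, so a plain process pool can fan the frames out with no coordination between workers.

```python
from multiprocessing import Pool

FRAME_PIXELS = 2000 * 1000          # pixels per frame, from the example above
TOTAL_FRAMES = 24 * 60 * 100        # 24 fps * 60 s/min * 100 min = 144,000 frames
# 144,000 frames * 2,000,000 pixels = 288 billion independent pixel computations

def render_frame(index):
    """Stand-in for a real renderer: each frame is computed from its own
    index alone, with no communication between frames."""
    return (index * 2654435761) % FRAME_PIXELS   # dummy per-frame work

if __name__ == "__main__":
    with Pool() as pool:
        # Frames are independent, so they can be handed to any number of
        # workers in any order -- that is the "embarrassing" part.
        results = pool.map(render_frame, range(1000))   # a small slice for the demo
    print(len(results), "frames rendered")
```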

Try this search: http://www.google.co.nz/search?rlz=1C1GGLS_enNZ364NZ365&sourceid=chrome&ie=UTF-8&q=etymology+embarrassingly+parallel

Related

What counts as a transaction on the free API license? [closed]

I'm working on a book for Manning and want to use Alchemy News API as part of one of the examples. I have a free license which says it allows for 1,000 transactions per day. Does that mean 1,000 queries or something else? I hit the limit today way earlier than I expected to, at significantly less than 1,000 queries.
Each type of call uses a different number of transactions. Text analysis uses just 1 transaction, but image analysis uses more. I believe it used about 5 transactions per image, but it's been a while since I've used the image recognition.
The number of transactions used is given in the response from AlchemyAPI. There are also more details in the documentation.
Query Cost + Result Cost = Total Cost
This is something I learnt today, so you'll run out of transactions quickly with loads of data. Keep your queries to a minimum.
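Purely as an illustration of that formula (this is not AlchemyAPI's actual response schema; the "transactions" key below is hypothetical, so check the real response and documentation for the exact field), you could keep a running tally of your daily budget:

```python
DAILY_LIMIT = 1000  # transactions per day on the free license

def charge(budget_left, response_json):
    """Subtract the cost reported for one call from the remaining daily budget.
    The "transactions" key is hypothetical -- use whatever field the real
    AlchemyAPI response documents for total cost (query cost + result cost)."""
    return budget_left - int(response_json.get("transactions", 1))

budget = DAILY_LIMIT
budget = charge(budget, {"transactions": 1})   # e.g. one text-analysis call
budget = charge(budget, {"transactions": 5})   # e.g. one image-analysis call
print("Transactions left today:", budget)      # 994
```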

What is the algorithm behind Photoshop's quick selection tool? [closed]

I work on image processing. I just saw the quick selection tool in Photoshop and was quite impressed that this tool is capable of segmenting images along real edges most of the time.
I could imagine two or more ways of doing what that tool does:
Starting with an edge detector (say Canny) with adapted parameters, I would just take the connected region (maybe after some dilation, then compensating with an "open" operation on the segment).
Running the watershed algorithm with additional boundary constraints, virtually something like surface tension.
But maybe I'm wrong.
I plan to implement a similar segmentation algorithm, so I'm interested in a description of the idea (like my two guesses). Can you point me in the right direction?
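To make the first guess more concrete, here is a rough sketch of what I have in mind (assuming OpenCV is acceptable; Photoshop's actual algorithm is surely more sophisticated, likely growing a region from the brush stroke using color statistics):

```python
import cv2
import numpy as np

def quick_select_guess(image_bgr, seed_xy):
    """Rough approximation of guess #1: Canny edges act as barriers, then
    the region connected to the clicked seed point is flood-filled.
    seed_xy is an (x, y) point inside the object, not on an edge."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # tune thresholds per image
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))   # thicken edges to close small gaps

    # Flood fill from the seed, but never across an edge pixel:
    # non-zero mask pixels block the fill.
    mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    mask[1:-1, 1:-1] = edges // 255
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(gray, mask, seed_xy, 0, loDiff=255, upDiff=255, flags=flags)

    selection = (mask[1:-1, 1:-1] == 255).astype(np.uint8) * 255
    # "Open" to remove thin leaks, compensating roughly as described above.
    selection = cv2.morphologyEx(selection, cv2.MORPH_OPEN,
                                 np.ones((5, 5), np.uint8))
    return selection > 0
```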

After Effects online/Cloud rendering [closed]

Does any company offer an online rendering service for After Effects?
I have a file with 144 HD clips, all masked, and my poor little MacBook can't handle it (288 hours render time).
There are no effects, only masks, as it is, in effect, a pre-render.
If anyone has suggestions for how I can speed up this render, it would be super appreciated.
Here's a catalog of render farm services with a dedicated section on After Effects online rendering, http://rentrender.com/after-effects-render-farms/, and cloud rendering, http://rentrender.com/cloud-render-farms-list/. Look through it, choose what you want, and render your clips there.
None that I know of. You can set up After Effects to render over a local network, though of course you'll need multiple computers with After Effects installed. See here: http://help.adobe.com/en_US/AfterEffects/9.0/WS3878526689cb91655866c1103a4f2dff7-79a2a.html
There is a company that will be distributing extra CPU cycles but they are not operational yet. http://cpusage.com

How can I calculate the trending nature of a link? [closed]

The above image represents an article's page views over time. I'm looking for a decent, not too complex, physics- or statistics-based calculation that would be able to tell me (based on the history of the page views) how the page views are currently trending over the past n days (represented by the blue box).
So basically: over the past 5 days, is this link trending unusually higher than it usually does, and if so, by what degree/magnitude?
Ideally the accepted answer would identify a class of algorithms that applies to this problem, as well as give an example of applying it to the data from the chart above.
thanks!
One approach could be to perform a least squares fit of the points within the blue box. The trend could then be measured by the difference between the points and the values of the least squares fit.
It sounds like you want to compare a short term (5-day) moving average to a longer-term moving average (e.g., something like 90 days).
As a refinement, you might want to do a least-squares linear regression over the longer term, and then compare the shorter term average to the projection you get from that.
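A minimal sketch of that comparison (plain NumPy, with made-up daily view counts; the spike and the window sizes are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
views = rng.poisson(200, 365).astype(float)   # made-up daily page views for one year
views[-5:] *= 3                               # pretend the last 5 days spiked

short_avg = views[-5:].mean()                 # 5-day average (the blue box)
baseline = views[-90:].mean()                 # 90-day baseline average
print("trending factor: %.2fx the 90-day average" % (short_avg / baseline))

# Refinement: least-squares line over the last 90 days, then compare the
# 5-day average to what that line projects for the most recent day.
days = np.arange(90)
slope, intercept = np.polyfit(days, views[-90:], 1)
projected = slope * days[-1] + intercept
print("vs. least-squares projection: %.2fx" % (short_avg / projected))
```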

CSI style zoom in and enhance now possible? [closed]

I have always thought the way they zoom in and enhance on TV and in movies was strictly impossible, essentially because you cannot create more information than there is to begin with.
There were ways to get better looking or clearer images like with re-sampling, but never to the extent seen on film.
Now, it seems that is not true.
I was reading this article, and it seems they have a way to do that now?
Or, is this just a better version of what was already possible? You still need to have a fairly clear image to start with? Otherwise, what are the limits of this technique?
There is something called super-resolution. Some companies claim to use fractal theory to enhance images when they are upscaled. But what you see in most movies is just fiction.
Image enhancement always involves pixel interpolation (i.e., prediction) in one way or another. Interpolation can be good, bad, or anything in between, but it will never outperform a real pixel recorded by an imaging device at a greater resolution.
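As a small illustration of that point (my own sketch using OpenCV; the test image and scale factor are arbitrary): bicubic upscaling predicts every new pixel from the existing ones, so the result looks smoother but contains no detail that wasn't in the original.

```python
import cv2
import numpy as np

# A small test image with a hard diagonal edge, so the effect is easy to see.
small = (np.fromfunction(lambda y, x: x > y, (100, 100)) * 255).astype(np.uint8)

# 4x upscale: every new pixel is predicted (interpolated) from the existing
# ones, so the edge gets smoother but no genuinely new detail appears.
big = cv2.resize(small, (400, 400), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("upscaled.png", big)
```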
