How to cite Processing software

I'm trying to find a way to cite the Processing software (http://www.processing.org/), but I'm not sure how to do it. Any help?

I am assuming you mean referencing it in a scientific publication. You can either simply use the URL, formatted according to whatever citation style you are using, or, probably better, cite one of the publications by Casey Reas and Ben Fry. There are plenty. Depending on the topic of your publication you might, for instance, cite:
Reas, C. and Fry, B. (2006). Processing: programming for the media arts. AI & Society, 20(4), pp. 526–538. Springer.
By the way, some libraries have publications too. The map library Unfolding, which I believe you are using, asks you to cite:
Nagel, T., Klerkx, J., Vande Moere, A. and Duval, E. (2013). Unfolding: A Library for Interactive Maps. Human Factors in Computing and Informatics, Lecture Notes in Computer Science, vol. 7946, pp. 497–513. Springer.
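If you happen to be writing in LaTeX, a minimal BibTeX entry for the Reas & Fry article might look like the following (the citation key is my own invention; double-check the fields against the publisher's record):

    @article{reas2006processing,
      author    = {Reas, Casey and Fry, Ben},
      title     = {Processing: programming for the media arts},
      journal   = {AI \& Society},
      year      = {2006},
      volume    = {20},
      number    = {4},
      pages     = {526--538},
      publisher = {Springer}
    }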


Spontaneous emergence of replicators in Artificial Life

One of the cornerstones of The Selfish Gene (Dawkins) is the spontaneous emergence of replicators, i.e. molecules capable of replicating themselves.
Has this been modeled in silico in open-ended evolutionary / artificial life simulations?
Systems like Avida or Tierra explicitly specify the replication mechanism; other genetic algorithm / genetic programming systems explicitly search for replication mechanisms (e.g. to simplify the von Neumann universal constructor).
Links to simulations where replicators emerge from a primordial digital soup are welcome.
The software system Amoeba, by Andrew Pargellis, has studying the origins of life as an explicit goal; he first saw interesting patterns develop, which eventually turned into self-replicators.
More recently (well, 2011), Evan Dorn used Avida to study a range of questions to do with the origin of life, with a focus on astrobiology. He wanted to examine how chemical distributions changed as abiotic environments shifted to biotic, so that astronomers would know what to look for.
For a good starting point, take a look at:
https://link.springer.com/article/10.1007/s00239-011-9429-4
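To make the distinction in the question concrete, here is a deliberately crude Python sketch of a digital soup (every parameter is invented for illustration). Note that it hard-codes the test for what counts as a replicator, which is exactly the shortcut the question wants to avoid, so it only shows why replicators, once they appear, take over a soup; open-ended emergence of the replication mechanism itself is the hard part that systems like Amoeba aim at.

    import random

    # Random 8-letter "programs" float in a soup; any program containing the
    # motif "CC" is treated as able to copy itself (with mutation) over a
    # random slot. Alphabet size, rates and run length are all arbitrary.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    SOUP_SIZE = 200
    MUTATION_RATE = 0.02

    def random_program():
        return "".join(random.choice(ALPHABET) for _ in range(8))

    def mutate(program):
        return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                       else c for c in program)

    soup = [random_program() for _ in range(SOUP_SIZE)]
    print("replicators at start:", sum("CC" in p for p in soup))
    for _ in range(5000):
        i = random.randrange(SOUP_SIZE)
        soup[i] = mutate(soup[i])            # background "cosmic ray" mutation
        program = random.choice(soup)
        if "CC" in program:                  # carries the copy motif: replicate
            soup[random.randrange(SOUP_SIZE)] = mutate(program)
    print("replicators at end:", sum("CC" in p for p in soup))

Runs vary (some begin with no replicators at all), but once a single "CC" program appears, copying plus selection pressure drives it to dominate the population.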

Human Computer Interaction vs Interaction Design

According to Wikipedia, Human Computer Interaction involves the study, planning, and design of the interaction between people (users) and computers.
Interaction Design is the practice of:
understanding users’ needs and goals
designing tools for users to achieve those goals
envisioning all states and transitions of the system
considering limitations of the user’s environment and technology
So what is the difference between studying for a Master's in Human Computer Interaction and a Master's in Interaction Design? I think interaction design has a broader scope and includes human-computer interaction as well. Which one is more practical?
Human Computer Interaction (HCI) is a subset of Interaction Design. You could be forgiven for thinking that interaction design is a rebranding of HCI.
Interaction design can be placed on a continuum which begins with the earliest tools, passes through the industrial revolution and stretches out into Weiser's utopian predictions. In the early 1900s Frederick Taylor employed the technologies of the day (photography, moving pictures and statistical analysis) to improve work practices. Engineering psychology was born, and the terms 'human factors' and 'ergonomics' entered the common lexicon. An explosion of information followed; Grudin (2012:5) notes that "technologies and practices for compressing, distributing, and organizing information bloomed… [and] were important inventions that influenced the management of information and organizations in the early 20th century".
The earliest computers were incredibly expensive and were only accessed by specialists. Grudin (2012:7) reports that: "ENIAC, arguably the first general-purpose computer, was…eight to ten feet high, occupied about 1800 square feet, and consumed as much energy as a small town." While some notable researchers, such as Grace Hopper, were concerned with the area of 'programmer-computer interaction' (a phrase coined by Hopper), the cost of these massive machines and their relative scarcity would be the single biggest stumbling block to the evolution of usability and theories thereof.
Ivan Sutherland's PhD thesis, 'Sketchpad: A man-machine graphical communication system', was a groundbreaking rethink of the interface between operators and machines. Blackwell & Rodden write in their introduction (2003:4) that while Sutherland's demo could only run on one modified TX-2 in the laboratory, it was: "one of the first graphical user interfaces. It exploited the light-pen, predecessor of the mouse, allowing the user to point at and interact with objects displayed on the screen."
Sutherland's ideas had a major impact on the work of Xerox's Star designers: they used his idea of 'icons', a GUI (Graphical User Interface) and pointer control (in their case, a mouse). Johnson et al. (1989:11) report that the team:
"assumed that the target users were interested in getting their work done and not at all interested in computers. Therefore, an important design goal was to make the 'computer' as invisible to users as possible…Another important assumption was that Star's users would be casual, occasional users rather than people who spent most of their time at the machine. This assumption led to the goal of having Star be easy to learn and remember."
The Star was not a commercial success, but its innovations ushered in a new era of 'personal computing'. This led to a boom in research and the emergence of Human Computer Interaction (HCI); Grudin (2012:19) reports: "As personal computing spread, experimental methods were applied to study other contexts involving discretionary use. Studies of programming gradually disappeared from HCI conferences."
Alan Cooper, an early Interaction Design practitioner, in an interview with Patton (2008:16), reports:
“I began experimenting with this whole new idea that it’s not about computer operators running a batch process, but about people sitting in front of the software and interacting directly.…it was really the microcomputers that drove that into my head.”
The evolution of Interaction design, notes Cooper, was in part driven by the need to specialise, he tells Patton (2008:17):
“I found myself in kind of a bind. I was going to have to either become part of a larger organization or let go of the implementation part of what I did.”
Industry practitioners realised that this interaction between humans and computers needed a methodology. Alan Cooper (2008:17) relates:
“it would be much more valuable and interesting if I could figure out some objective methodology that I was going through. That would give me some leverage, and it would be good for the world, good for the industry.”
Bill Verplank, who worked on the Xerox Star, together with Bill Moggridge, coined the phrase 'interaction design' (we should probably be thankful that Verplank convinced Moggridge not to use the term 'Soft-face' (2007:14)). The newly named discipline addressed a pressing concern for industry; Cooper et al. (2012:8) describe how:
“the user experience of digital products has become front page news…institutions such as Harvard Business School and Stanford have recognised the need to train the next generation of MBAs and technologists to incorporate design thinking into their business and development plans…Consumers are sending a clear message that what they want is good technology: technology that has been designed to provide a compelling and effective user experience.”
My key concern as a student of ID is that Interaction Design is such a large area. Rogers et al. (2013:9) list a dizzying array of areas:
“user interface design, software design, user-centered design, product design, web design, experience design, and interactive system design. Interaction design is increasingly being accepted as the umbrella term, covering all of these aspects.”
References
Papers
Patton, Jeff (2008) 'A Conversation with Alan Cooper: The Origin of Interaction Design'. IEEE Software, 25(6), pp. 15–17.
Johnson, J., Roberts, T.L., Verplank, W., Smith, D.C., Irby, C.H., Beard, M. and Mackey, K. (1989) 'The Xerox Star: a retrospective'. Computer, 22(9), pp. 11–26.
Grudin, J. (2012) 'Introduction: A moving target: the evolution of human-computer interaction'. To appear in Jacko, J. (Ed.), Human-Computer Interaction Handbook: Fundamentals, evolving technologies, and emerging applications, 3rd ed., Taylor and Francis.
Weiser, M. (1991) 'The computer for the 21st Century'. Scientific American, 265(3), pp. 94–104.
Books
Cooper, A., Reimann, R. and Cronin, D. (2012) About Face 3: The Essentials of Interaction Design. John Wiley & Sons.
Rogers, Yvonne (2012) HCI Theory: Classical, Modern, and Contemporary. Morgan & Claypool Publishers.
Moggridge, Bill (2007) Designing Interactions. The MIT Press.
Web
Sutherland, I.E. (1963/2003) 'Sketchpad: A Man-Machine Graphical Communication System'. PhD thesis, Massachusetts Institute of Technology. Online version and editors' introduction by A.F. Blackwell & K. Rodden, Technical Report 574, Cambridge University Computer Laboratory. http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf

Any reference resource for programming language UI experience?

Programming languages can make their users feel terrible or comfortable, just as GUI design does. When a language has bad syntax features, users endure it with twitching fingers and eyes, and such issues have already wasted a lot of time and other resources in wars between a language's fans and its opponents (e.g. "goto considered harmful", "Node.js is cancer"...).
I wonder why UI design at least became a research target and gained some stable standards, like measures based on the distance between the user's mouse and the target component, while languages didn't. I know some of the issues relate to semantics, not only syntax, but I seriously feel these arguments should be formalized by sufficiently strong standards.
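(For context: the "stable standard" gestured at above is presumably Fitts's law, which predicts the time MT to acquire a target of width W at distance D. In its common Shannon formulation,

    MT = a + b \log_2\left(1 + \frac{D}{W}\right)

where a and b are empirically fitted constants. Nothing comparably compact and predictive is in wide use for programming language syntax, which is arguably the gap the question identifies.)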
It seems there is a course at Cambridge entitled "Usability of Programming Languages" that addresses this exact issue.
From the 2015-16 course page:
A programming language is essentially a means of communicating between humans and computers. Traditional computer science research has studied the machine end of the communications link at great length, but there is a shortage of knowledge and research methods for understanding the human end of the link. This course provides practical research skills necessary to make advances in this essential field.
The same page lists the following recommended reading:
Online proceedings of the Psychology of Programming Interest Group
Cambridge guidance for human participants in technology research
Cairns, P. and Cox, A.L. (2008) Research Methods for Human-Computer Interaction. Cambridge University Press.
Hoc, J.M., Green, T.R.G., Samurcay, R. and Gilmore, D.J. (Eds.) (1990) Psychology of Programming. Academic Press.
Carroll, J.M. (Ed) (2003). HCI Models, Theories and Frameworks: Toward a multidisciplinary science. Morgan Kaufmann.
The 2015 lecture notes seem like a good place to start: http://www.cl.cam.ac.uk/teaching/1415/P201/p201-lecturenotes-2015.pdf

Learning AI by practice (Perceptrons, Neural networks and Bayesian AI)

I'm about to take a course in AI and I want to practice beforehand. I'm using a book to learn the theory, but resources and concrete examples in any language to help with the practice would be amazing. Can anyone recommend good sites or books with plenty of examples and tutorials? Thanks!
Edit: My course will deal with Perceptrons, Neural networks and Bayesian AI.
It really depends on what area you want to specialize in. There is a good starter resource here. I learned about neural nets from their example. Can you elaborate on what kind of AI it should be?
Ah, and I forgot: this link is a very nice forum where you can look at problems other people have and learn from them.
Cheers.
My advice would be to learn it by trying to implement the various types of learners yourself. See if you can find yourself a dataset related to some interest you have (sports, games, health, etc.) and then try and create a learner to do some kind of classification (predicting a winner in a sports game, learning how to classify backgammon positions, detecting cancer based on patient data, etc.) using different methods. Start with Decision Trees if that's part of your future course work since they're relatively simple, then move on to neural networks.
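Since the course covers perceptrons, here is a minimal sketch of the classic perceptron learning rule in plain Python; the toy AND dataset, learning rate and epoch count are all made up for the example:

    def train_perceptron(data, epochs=20, lr=0.1):
        # data: list of (features, label) pairs with labels in {-1, +1}
        w = [0.0] * len(data[0][0])
        b = 0.0
        for _ in range(epochs):
            for x, y in data:
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if (1 if activation >= 0 else -1) != y:
                    # misclassified: nudge weights and bias toward this example
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # Toy linearly separable data: the logical AND of two binary inputs.
    data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
    w, b = train_perceptron(data)
    print(w, b)  # a separating hyperplane for AND

The same skeleton extends to neural networks once you swap the hard threshold for a differentiable activation and train with backpropagation.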
Here is a set of sources, each of which I recommend highly: for the quality of the explanation, the quality of the code, and the 'completeness' of the algorithm demo.
Least-Squares Regression (Python)
k-means clustering (Python)
Multi-Layer Perceptron (Python)
Hopfield Network (Python)
Decision Tree (ID3 & C4.5)
In addition, the excellent textbook Elements of Statistical Learning by Hastie et al. is free to download. The authors have an R package that accompanies the textbook and contains all of the code. The book includes detailed discussion of most (if not all) of the major classes of ML algorithms, with specific examples that refer to working code and 'real-world' data.
Personally, I would recommend this M. Tim Jones book on AI. It covers many topics, and almost every type of AI discussion is followed by C example code. A very pragmatic book on AI indeed!
Russell & Norvig have a good survey of the broad field.

Utilizing algorithms in academic papers

I'm unclear about the legal status of using an algorithm from a published academic paper. Is there an implicit patent over that material? What about open source applications? Would it be OK to implement that algorithm in an open source app, under one of the free software licenses?
Let's say I have access to paper A which describes algorithm B. How can I determine if I can use algorithm B in my commercial closed-source app C or open source app D? Is the answer always "no"? Is there an expiration date?
There's no such thing as an "implicit patent".
Unfortunately, I'd think you need to evaluate the IP restrictions for each paper.
One of the more famous situations in which the algorithm described in an academic paper was ultimately encumbered by a patent was the RSA asymmetric encryption algorithm. A paper, "On digital signatures and public-key cryptosystems", was published in an ACM journal in 1977 describing the algorithm; a patent was awarded in 1983 (US Patent 4,405,829).
I have not read the paper, so I don't know if the application for a patent was mentioned. I do know that the algorithm was rather widely implemented, and when the patent was awarded, MIT/RSADSI started enforcing it. The patent became a rather big issue with PGP, for example. MIT/RSADSI ultimately permitted free use of the patent for non-commercial use, I believe. A similar situation occurred with LZW compression.
AFAIK, having something published in a publicly accessible way (e.g. an academic paper) actually eliminates the possibility of a patent in Europe. In the US, there is a one-year grace period from first publication in which to obtain a patent. Students and professors usually figure out the potential and apply to the university's technology transfer office, which figures out what to do with it. This, for example, was the case with Captchas. If the paper is by a commercial company (e.g. IBM), it is much more likely to be patented or to have additional protections, because research employees are also evaluated on their number of patent applications.
The problem is that there is often no way to know, because the patent lawyers will usually write a general name that has no connection to the original idea. So contacting the author, just in case, may be prudent. The author may also have an existing implementation available, often open source.
This is what happened with GIF: the compression algorithm was published openly, but without mentioning that it had been patented.
Even if the authors of the paper claim not to have patented it, there is no guarantee that a similar algorithm hasn't already been patented by someone else, and a court might decide that their patent covers this work.
Remember as well that inventing it yourself doesn't protect you from patents - you can invent an algorithm that you have never seen in print and later discover it's covered by some patent.
As Michael said, there's no such thing as an implicit patent.
Generally, when an algorithm or any other research is published, the authors want to put it out into the world for other people to use. If you have access to a published paper which describes some algorithm, it's probably going to be okay for you to use it in any program you might create. However, some algorithms may be patented by their inventors before being published, and in that case if you want to use the algorithm you would have to come to some arrangement with the patent holder, regardless of whether your application is open-source or not. Basically to be safe, you need to check the specific restrictions on that algorithm. (The type of people who publish papers about algorithms are often more likely to let you use their algorithms if your application is open source than if it's proprietary.)
