Ruby regex for pulling out PsycINFO references

I need a regex to separate references from a mountain of PsycINFO lit searches that look like this:
http://rubular.com/r/bKMoDpAJvY
(I can't post the text - something about this edit control bungs it up horribly)
I just want matches covering all the text between the numbered headings, but it is doing my head in. An explanation would also be fabulous so I can learn.

Does teststring.split(/^\d+\./) work for you?
With String#split you get an array out of your string; the string is split at each match of the regex, in this case a number at the beginning of a line, followed by a dot, optionally some spaces, and a newline.
My test code:
teststring = DATA.read
teststring.split(/^\d+\.\s*$/).each{|m|
  puts "==========="
  puts m
}
__END__
1.
Reframing the rocky road: From causal analysis to mindreading as the drama of disposition inference. [References].
Ames, Daniel R.
Psychological Inquiry. Vol.20(1), Jan 2009, pp. 19-23.
AN: Peer Reviewed Journal: 2009-04633-002.
Comments on an article by Glenn D. Reeder (see record 2009-04633-001). My misgivings with Reeder's account are relatively minor. For one, I am not sure that the "multiple inference model" label quite captures the essential part of Reeder's argument. Although it suggests the plurality of judgments that perceivers often make, it does not seem to reflect Reeder's central point that, for intentional behaviors, perceivers typically make motive inferences and these guide trait inferences. Another stumbling point for me was the identification of five categories that accounted for "the majority of studies" on dispositional inference (attitude attribution, moral attribution, ability attribution, the silent interview paradigm, and the quiz-role paradigm). These are noteworthy paradigms, to be sure, but they hardly seem to exhaust the research on dispositional inference, which I take as a perceiver's ascription of an enduring trait to a target. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
Jan 2009
Year of Publication
2009
E-Mail Address
Ames, Daniel R.: da358#columbia.edu
Other Publishers
Lawrence Erlbaum; US
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc6&AN=2009-04633-002
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:10.1080%2F10478400902744253&issn=1047-840X&isbn=&volume=20&issue=1&spage=19&pages=19-23&date=2009&title=Psychological+Inquiry&atitle=Reframing+the+rocky+road%3A+From+causal+analysis+to+mindreading+as+the+drama+of+disposition+inference.&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2009-04633-002%3C%2FAN%3E%3CDT%3EComment%2FReply%3C%2FDT%3E
2.
Everyday Solutions to the Problem of Other Minds: Which Tools Are Used When? [References].
Ames, Daniel R.
Malle, Bertram F [Ed]; Hodges, Sara D [Ed]. (2005). Other minds: How humans bridge the divide between self and others. (pp. 158-173). xiii, 354 pp. New York, NY, US: Guilford Press; US.
AN: Book: 2005-09375-010.
(from the chapter) Intuiting what the people around us think, want, and feel is essential to much of social life. Some scholars have gone so far as to declare the "problem of other minds"--whether a person can know if anyone else has thoughts and, if so, what they are--intractable. And yet countless times a day, we solve such problems with ease, if not perfectly then at least to our own satisfaction. What strategies underlie these everyday solutions? And how are these tools employed? This chapter offers 4 contingencies about when various inferential tools might be used. First, that affect qualifies behavior in the near term: perceived remorseful affect can lead to ascriptions of good intent to harm-doers in the short run, but repeated harm drives long-run ascriptions of bad intent. Second, that perceived similarity governs projection and stereotyping: perceptions of general similarity to a target typically draw a mindreader toward projection and away from stereotyping; perceived dissimilarity does the opposite. Third, that cumulative behavioral evidence supersedes extratarget strategies: projection and stereotyping will drive mindreading when behavioral evidence is ambiguous, but as apparent evidence accumulates, inductive judgments will dominate. Fourth, that negative social intention information weighs heavily in mindreading: within a mindreading strategy, cues signaling negative social intentions may dominate neutral or positive cues; between mindreading strategies, those strategies that signal negative social intentions may dominate. These contingencies have varying degrees of empirical support and would benefit from additional research and thinking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
2005
Year of Publication
2005
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc5&AN=2005-09375-010
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:&issn=&isbn=1-59385-187-1&volume=&issue=&spage=158&pages=158-173&date=2005&title=Other+minds%3A+How+humans+bridge+the+divide+between+self+and+others.&atitle=Everyday+Solutions+to+the+Problem+of+Other+Minds%3A+Which+Tools+Are+Used+When%3F&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2005-09375-010%3C%2FAN%3E%3CDT%3EChapter%3C%2FDT%3E
results in:
===========
===========
Reframing the rocky road: From causal analysis to mindreading as the drama of disposition inference. [References].
Ames, Daniel R.
Psychological Inquiry. Vol.20(1), Jan 2009, pp. 19-23.
AN: Peer Reviewed Journal: 2009-04633-002.
Comments on an article by Glenn D. Reeder (see record 2009-04633-001). My misgivings with Reeder's account are relatively minor. For one, I am not sure that the "multiple inference model" label quite captures the essential part of Reeder's argument. Although it suggests the plurality of judgments that perceivers often make, it does not seem to reflect Reeder's central point that, for intentional behaviors, perceivers typically make motive inferences and these guide trait inferences. Another stumbling point for me was the identification of five categories that accounted for "the majority of studies" on dispositional inference (attitude attribution, moral attribution, ability attribution, the silent interview paradigm, and the quiz-role paradigm). These are noteworthy paradigms, to be sure, but they hardly seem to exhaust the research on dispositional inference, which I take as a perceiver's ascription of an enduring trait to a target. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
Jan 2009
Year of Publication
2009
E-Mail Address
Ames, Daniel R.: da358#columbia.edu
Other Publishers
Lawrence Erlbaum; US
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc6&AN=2009-04633-002
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:10.1080%2F10478400902744253&issn=1047-840X&isbn=&volume=20&issue=1&spage=19&pages=19-23&date=2009&title=Psychological+Inquiry&atitle=Reframing+the+rocky+road%3A+From+causal+analysis+to+mindreading+as+the+drama+of+disposition+inference.&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2009-04633-002%3C%2FAN%3E%3CDT%3EComment%2FReply%3C%2FDT%3E
===========
Everyday Solutions to the Problem of Other Minds: Which Tools Are Used When? [References].
Ames, Daniel R.
Malle, Bertram F [Ed]; Hodges, Sara D [Ed]. (2005). Other minds: How humans bridge the divide between self and others. (pp. 158-173). xiii, 354 pp. New York, NY, US: Guilford Press; US.
AN: Book: 2005-09375-010.
(from the chapter) Intuiting what the people around us think, want, and feel is essential to much of social life. Some scholars have gone so far as to declare the "problem of other minds"--whether a person can know if anyone else has thoughts and, if so, what they are--intractable. And yet countless times a day, we solve such problems with ease, if not perfectly then at least to our own satisfaction. What strategies underlie these everyday solutions? And how are these tools employed? This chapter offers 4 contingencies about when various inferential tools might be used. First, that affect qualifies behavior in the near term: perceived remorseful affect can lead to ascriptions of good intent to harm-doers in the short run, but repeated harm drives long-run ascriptions of bad intent. Second, that perceived similarity governs projection and stereotyping: perceptions of general similarity to a target typically draw a mindreader toward projection and away from stereotyping; perceived dissimilarity does the opposite. Third, that cumulative behavioral evidence supersedes extratarget strategies: projection and stereotyping will drive mindreading when behavioral evidence is ambiguous, but as apparent evidence accumulates, inductive judgments will dominate. Fourth, that negative social intention information weighs heavily in mindreading: within a mindreading strategy, cues signaling negative social intentions may dominate neutral or positive cues; between mindreading strategies, those strategies that signal negative social intentions may dominate. These contingencies have varying degrees of empirical support and would benefit from additional research and thinking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
2005
Year of Publication
2005
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc5&AN=2005-09375-010
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:&issn=&isbn=1-59385-187-1&volume=&issue=&spage=158&pages=158-173&date=2005&title=Other+minds%3A+How+humans+bridge+the+divide+between+self+and+others.&atitle=Everyday+Solutions+to+the+Problem+of+Other+Minds%3A+Which+Tools+Are+Used+When%3F&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2005-09375-010%3C%2FAN%3E%3CDT%3EChapter%3C%2FDT%3E
The first element is an empty string and is superfluous; you can simply discard it.
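For example, a minimal sketch (reusing the teststring from above; entries is just a local name I'm introducing) that filters out blank elements before printing:
entries = teststring.split(/^\d+\.\s*$/)
# The element before the first "1." heading is an empty string,
# so drop anything that is blank after stripping whitespace.
entries.reject { |entry| entry.strip.empty? }.each do |entry|
  puts "==========="
  puts entry
end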
I found another solution with String#scan:
(teststring + "99.\n").scan(/^\d+\.\s*\n(.*?)(?=^\d+\.\s*\n)/m).each{|m|
puts "==========="
puts m
}
Explanation:
^\d+\.\s*\n Look for a number followed by a dot at the start of a line, with nothing but optional whitespace up to the end of the line
(.*?) take everything, but non-greedy (use the shortest possible match)
(?=^\d+\.\s*\n) check for the next entry heading, but don't consume it
m multiline mode, so . also matches newlines
(teststring + "99.\n") this solution would lose the last entry, so we append a dummy 'end tag'

Related

Spontaneous emergence of replicators in Artificial Life

One of the cornerstones of The Selfish Gene (Dawkins) is the spontaneous emergence of replicators, i.e. molecules capable of replicating themselves.
Has this been modeled in silico in open-ended evolutionary / artificial life simulations?
Systems like Avida or Tierra explicitly specify the replication mechanisms; other genetic algorithm/genetic programming systems explicitly search for the replication mechanisms (e.g. to simplify the von Neumann universal constructor).
Links to simulations where replicators emerge from a primordial digital soup are welcome.
The software system Amoeba by Andrew Pargellis has studying the origins of life as an explicit goal; he first saw interesting patterns developing, which eventually turned into self-replicators.
More recently (well, 2011), Evan Dorn used Avida to study a range of questions to do with the origin of life, with a focus on astrobiology. They wanted to examine how chemical distributions changed as abiotic environments shifted to biotic, so that astronomers would know what to look for.
For a good starting point, take a look at:
https://link.springer.com/article/10.1007/s00239-011-9429-4

Human Computer Interaction vs Interaction Design

According to Wikipedia, Human Computer Interaction involves the study, planning, and design of the interaction between people (users) and computers.
Interaction Design is the practice of:
understanding users’ needs and goals
designing tools for users to achieve those goals
envisioning all states and transitions of the system
considering limitations of the user’s environment and technology
So what is the difference between studying a Master in Human Computer Interaction vs a Master in Interaction Design? I think interaction design has a broader scope and includes Human Computer Interaction as well. Which one is more practical?
Human Computer Interaction (HCI) is a subset of Interaction Design. You could be forgiven for thinking that interaction design is a rebranding of HCI.
Interaction design can be placed on a continuum which begins with the earliest tools, passes through the industrial revolution and stretches out into Weiser’s utopian predictions. In the early 1900s Frederick Taylor employed current technologies, photography, moving pictures and statistical analysis, to improve work practices. Engineering psychology was born, and the terms ‘human factors’ and ‘ergonomics’ entered the common lexicon. There followed an explosion of information; Grudin (2012:5) notes that “technologies and practices for compressing, distributing, and organizing information bloomed…were important inventions that influenced the management of information and organizations in the early 20th century”.
The earliest computers were incredibly expensive and were only accessed by specialists. Grudin (2012:7) reports that: “ENIAC, arguably the first general-purpose computer, was…eight to ten feet high, occupied about 1800 square feet, and consumed as much energy as a small town.” While some notable researchers such as Grace Hopper were concerned with the area of ‘programmer-computer interaction’ (a phrase coined by Hopper), the cost of these massive machines and their relative scarcity would be the single biggest stumbling block to the evolution of usability and theories thereof.
Ivan Sutherland’s PhD thesis “Sketchpad: A man-machine graphical communication system” was a groundbreaking rethink of the interface between operators and machines. Blackwell & Rodden write in the introduction (2003:4) that while Sutherland’s demo could only run on one modified TX-2 in a laboratory, it was: “one of the first graphical user interfaces. It exploited the light-pen, predecessor of the mouse, allowing the user to point at and interact with objects displayed on the screen.”
Sutherland’s ideas had a major impact on the work of the Xerox Star’s designers: they used his idea of ‘icons’, a ‘GUI’ (Graphical User Interface), and pointer control (in their case a mouse). Johnson et al (1989:11) report that the team:
“assumed that the target users were interested in getting their work done and not at all interested in computers. Therefore, an important design goal was to make the ‘computer’ as invisible to users as possible…Another important assumption was that Star’s users would be casual, occasional users rather than people who spent most of their time at the machine. This assumption led to the goal of having Star be easy to learn and remember.”
The Star was not a commercial success, but its innovations ushered in a new era of ‘personal computing’, which led to a boom in research and the emergence of Human Computer Interaction (HCI). Grudin (2012:19) reports: “As personal computing spread, experimental methods were applied to study other contexts involving discretionary use. Studies of programming gradually disappeared from HCI conferences.”
Alan Cooper, an early interaction design practitioner, reports in an interview with Patton (2008:16):
“I began experimenting with this whole new idea that it’s not about computer operators running a batch process, but about people sitting in front of the software and interacting directly.…it was really the microcomputers that drove that into my head.”
The evolution of interaction design, notes Cooper, was in part driven by the need to specialise; he tells Patton (2008:17):
“I found myself in kind of a bind. I was going to have to either become part of a larger organization or let go of the implementation part of what I did.”
Industry practitioners realised that this interaction between humans and computers needed a methodology. Alan Cooper (2008:17) relates:
“it would be much more valuable and interesting if I could figure out some objective methodology that I was going through. That would give me some leverage, and it would be good for the world, good for the industry.”
Bill Verplank, who worked on the Xerox Star, along with Bill Moggridge, first coined the phrase ‘interaction design’ (we should probably be thankful that Verplank convinced Moggridge not to use the term (2007:14) ‘Soft-face’). Interaction design, once named, addressed a pressing concern for industry. Cooper et al (2012:8) describe how:
“the user experience of digital products has become front page news…institutions such as Harvard Business School and Stanford have recognised the need to train the next generation of MBAs and technologists to incorporate design thinking into their business and development plans…Consumers are sending a clear message that what they want is good technology: technology that has been designed to provide a compelling and effective user experience.”
My key concern as a student of ID is that Interaction Design is such a large area. Rogers et al (2013:9) list a dizzying array of areas:
“user interface design, software design, user-centered design, product design, web design, experience design, and interactive system design. Interaction design is increasingly being accepted as the umbrella term, covering all of these aspects.”
References
Papers
Patton, Jeff (2008), ‘A Conversation with Alan Cooper: The Origin of Interaction Design’
Software, IEEE Volume: 25 , Issue: 6, Page(s): 15 - 17
Johnson, J. ; Roberts, T.L. ; Verplank, W. ; Smith, D.C. ; Irby, C.H. ; Beard, M. ; Mackey, K. (1989) ‘The Xerox Star: a retrospective’
Computer Volume: 22 , Issue: 9, Page(s): 11 - 26
Grudin, J. (2012) ‘Introduction: A moving target-The evolution of human-computer interaction.’
To appear in Jacko, J., Ed., Human-Computer Interaction Handbook: Fundamentals, evolving technologies, and emerging applications, 3rd ed., Taylor and Francis.
Weiser M (1991) ‘The computer for the 21st Century’.
Scientific American 265(3):94–104, 1991
Books
Cooper A, Reimann R, Cronin D ‘About Face 3: The Essentials of Interaction Design’
John Wiley & Sons, 12 June 2012
Rogers, Yvonne ‘HCI Theory: Classical, Modern, and Contemporary’
Morgan & Claypool Publishers, Pennsylvania State University Press 1 June 2012
Moggridge, Bill (2007): Designing Interactions. The MIT Press 2007
Web
Sutherland, I.E. (1963/2003). ‘Sketchpad, A Man-Machine Graphical Communication System. PhD Thesis at Massachusetts Institute of Technology’,
online version and editors’ introduction by A. F. Blackwell & K. Rodden. Technical Report 574. Cambridge University Computer Laboratory [http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf]

What does "LifeCycle expectations" mean in SQALE?

I was going through this wiki article on SQALE (Software Quality Assessment based on Lifecycle Expectations). The software quality assurance part of it is clear, but I am unable to understand the "based on Lifecycle Expectations" part of the model. Can someone please explain the "Lifecycle Expectations" part in a clear, easy-to-understand way?
Ever since I first encountered SQALE, I've suspected that the creators were working backward from the acronym, and "lifecycle expectations" was the best that they could come up with that wasn't already taken. (There are other similarly acronymed guidelines or methodologies in the software quality space, such as Squale and SQuaRE).
All but the most rudimentary models of software quality incorporate the notion of quality over the full lifecycle of a software product (from the "gleam in someone's eye" phase all the way through eventual decommissioning), not just its state at the time of initial shipping.
Such models also acknowledge that the desirable investment in any given aspect of software quality (e.g. maintainability) is not equal for all software products. For example, one would tend to make very different maintainability investment decisions for a boxed-style software product one hopes to sell (and upgrade) for years vs a short-term marketing promo site that will be decommissioned one month after it goes live.
So... all that the "lifecycle expectations" bit probably implies is something along the lines of covering software quality aspects that affect multiple parts of the lifecycle of a product, as well as allowing adjustment/calibration to different expectations for the various quality characteristics.
If you're interested in how the SQALE authors might have meant this (assuming they weren't only trying to shoehorn the name into the acronym), there are a few hints in the SQALE manual that can be downloaded from http://www.sqale.org/download. They seem to feel that they've done something novel and useful in projecting the ISO/IEC 9126 quality characteristics over a product lifecycle in a chronological order, and perhaps this is what the name is meant to reflect.
BTW, if you're interested in software quality and want to understand this sort of quality modeling in more depth, I'd recommend taking a look at SQuaRE (incl. the parts of ISO/IEC 9126 for which there are no SQuaRE replacements yet). It's not necessarily the most exciting reading, but I find that it provides excellent background for evaluating the utility of various quality methodologies and tools.

Any reference resource for programming language UI experience?

Programming languages can make their users feel terrible or smooth, just as GUI design does. When a language comes with bad syntax features, users endure it with twitching fingers and eyes. Such issues have already wasted a lot of time and other resources through wars between a language's fans and its opponents (e.g. "goto considered harmful", "Node.js is cancer" ...).
I wonder why UI design has become a research target and gained some stable standards, such as those relating the distance between the user's mouse and the target component, while programming languages haven't. I know some of the issues relate to semantics, not only syntax, but I seriously feel these arguments should be formalized by some sufficiently strong standards.
It seems there is a course at Cambridge entitled "Usability of Programming Languages" that addresses this exact issue.
From the 2015-16 course page:
A programming language is essentially a means of communicating between humans and computers. Traditional computer science research has studied the machine end of the communications link at great length, but there is a shortage of knowledge and research methods for understanding the human end of the link. This course provides practical research skills necessary to make advances in this essential field.
The same page lists the following recommended reading:
Online proceedings of the Psychology of Programming Interest Group
Cambridge guidance for human participants in technology research
Cairns, P. and Cox, A.L. (2008) Research Methods for Human-Computer Interaction. Cambridge University Press.
Hoc, J.M, Green , T.R.G, Samurcay, R and Gilmore, D.J (Eds.) (1990) Psychology of Programming. Academic Press.
Carroll, J.M. (Ed) (2003). HCI Models, Theories and Frameworks: Toward a multidisciplinary science. Morgan Kaufmann.
The 2015 lecture notes seem like a good place to start: http://www.cl.cam.ac.uk/teaching/1415/P201/p201-lecturenotes-2015.pdf

Utilizing algorithms in academic papers

I'm unclear about the legal status of using an algorithm from a published academic paper. Is there an implicit patent over that material? How about open source applications? Would it be OK to implement that algorithm in an open source app, under one of the free software licenses?
Let's say I have access to paper A which describes algorithm B. How can I determine if I can use algorithm B in my commercial closed-source app C or open source app D? Is the answer always "no"? Is there an expiration date?
There's no such thing as an "implicit patent".
Unfortunately, I'd think you need to evaluate the IP restrictions for each paper.
One of the more famous situations where the algorithm described in an academic paper was ultimately encumbered by a patent was the RSA asymmetric encryption algorithm. A paper, "On digital signatures and public-key cryptosystems", was published in an ACM journal in 1977 describing the algorithm, and a patent was awarded in 1983 (US Patent 4,405,829).
I have not read the paper, so I don't know if the application for a patent was mentioned. I do know that the algorithm was rather widely implemented, and when the patent was awarded MIT/RSADSI started enforcing it. The patent became a rather big issue with PGP, for example. MIT/RSADSI ultimately permitted free use of the patent for non-commercial use, I believe. A similar situation occurred with LZW compression.
AFAIK, having something published in a publicly accessible way (e.g., an academic paper) actually eliminates the possibility of a patent in Europe. In the US, there is a one-year grace period from first publication to obtain a patent. Students and professors usually figure out the potential and apply to the university's technology transfer office, which figures out what to do with it. This, for example, was the case with CAPTCHAs. If the paper is by a commercial company (e.g., IBM), it is much more likely to get patented or to have additional protections, because research employees are also evaluated based on the number of patent applications.
The problem is that there is often no way to know, because the patent lawyers will usually write a general title that has no obvious connection to the original idea. So contacting the author, just in case, may be prudent. The author may also have an existing implementation available, often open source.
This is what happened with GIF: the compression algorithm was published openly, but without mentioning that it had been patented.
Even if the authors of the paper claim not to have patented it, there is no guarantee that another patent hasn't already been granted which a court might decide covers that work.
Remember as well that inventing it yourself doesn't protect you from patents - you can invent an algorithm that you have never seen in print and later discover it's covered by some patent.
As Michael said, there's no such thing as an implicit patent.
Generally, when an algorithm or any other research is published, the authors want to put it out into the world for other people to use. If you have access to a published paper which describes some algorithm, it's probably going to be okay for you to use it in any program you might create. However, some algorithms may be patented by their inventors before being published, and in that case if you want to use the algorithm you would have to come to some arrangement with the patent holder, regardless of whether your application is open-source or not. Basically to be safe, you need to check the specific restrictions on that algorithm. (The type of people who publish papers about algorithms are often more likely to let you use their algorithms if your application is open source than if it's proprietary.)
