Differences Between "R" and "HR" in SIL Norms (safety-critical)

SIL norms define different rules that must be applied to code (e.g. limits on cyclomatic complexity, etc.).
These rules are rated "NA - Not Applicable", "R - Recommended" or "HR - Highly Recommended".
I understand this rating is up to the people who define the software.
How constrained am I to follow the "R" rules compared to the "HR" rules? Is the first merely recommended and the second mandatory?

It is not clearly defined in the standards, so it is subject to interpretation by the certification authority. There are also variations in interpretation between standards (e.g. IEC 61508 vs EN 50128).
Most auditors would consider HR mandatory in the absence of a documented justification.
R is generally considered optional, but you generally need to select some of those options (as opposed to "optional" in the sense of "can be ignored").

IEC 61508-3:2010, Annex A gives the following definitions:
HR: "the technicque or measure is highly recommended for this safety integrity level. If this technicque or measure is not used then the rationale behind not using it should be detailed with reference to Annex C during the safety planning and agreed with the assessor."
R: "the technique or measure is recommended for this safety integrity level as a lower recommendation to a HR recommendation." (sic!)
--: "the technique or measure has no recommendation for or against being used."
NR: "the technique or measure is positively not recommended for this safety integrity level. If this technique or measure is used then the rationale behind using it should be detailed with reference to Annex C during the safety planning and agreed with the assessor."
The assessor is the representative of the certification authority, so it is really about convincing the certifier.
As far as I know, in theory there is no absolutely mandatory or forbidden rating. But it is so easy to overload oneself with the extra measures needed to convince the certifier of some untypical, unrecommended strategy that "HR" is quite close to "mandatory" and "NR" is quite close to "forbidden".
"R" is not as close to "mandatory": often it is enough to show the certifier a concept that takes the detail tables into account, and to take responsibility (!) that this concept implements a reasonable substitute for the "more typical" measure which has not been applied.

Related

Is there a higher order Prolog that wouldn't need a type system?

I suspect that λProlog needs a type system to make its higher-order unification sound; otherwise, through self-application, some Russell-style anomalies can appear.
Are there alternative higher-order Prologs that don't need .sig files? Maybe ones with a much simpler type system that doesn't need as many declarations but still has some form of higher-order unification? Can this dilemma be solved?
Is there a higher order Prolog that wouldn't need a type system?
These are type-free:
HiLog
HiOrd
From the HiOrd paper:
The framework proposed gives rise to many questions the authors hope to address in future research. In particular, a rigorous treatment must be developed for comparison with other higher-order formal systems (Hilog, Lambda-Prolog). For example, it is reasonably straightforward to conservatively translate the Higher-order Horn fragment of λProlog into Hiord by erasing types, as the resolution rules are essentially the same (assuming a type-safe higher-order unification procedure).
Ciao (includes HiOrd)
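To give a flavour of the type-free approach, here is the canonical example from the HiLog literature: a generic transitive closure, written once for any binary relation, with a variable in predicate position (something plain Prolog syntax does not allow). A sketch, not tied to any particular implementation:

% HiLog-style syntax: R is a variable ranging over binary relations.
closure(R)(X, Y) :- R(X, Y).
closure(R)(X, Y) :- R(X, Z), closure(R)(Z, Y).

% Given facts such as parent(a,b) and parent(b,c), a query like
% ?- closure(parent)(a, Who).
% enumerates a's descendants, with no type declarations anywhere.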

Forward and Backward Chaining

I am attempting to understand the best uses of backward and forward chaining in AI programming for a program I am writing. Would anyone be able to explain the most ideal uses of backward and forward chaining? Also, could you provide an example?
I have done some research on the current understanding of "forward chaining" and "backward chaining". This brings up a lot of material. Here is a summary.
First a diagram, partially based on:
The Sad State Concerning the Relationships between Logic, Rules and Logic Programming (Robert Kowalski)
LHS stands for "left-hand-side", RHS stands for "right-hand-side" of a rule throughout.
Let us separate "Rule-Based Systems" (i.e. systems which do local computation based on rules), into three groups as follows:
Production Rule Systems, which include the old-school Expert System Shells, which are not built on logical principles, i.e. "without a guiding model".
Logic Rule Systems, i.e. system based on a logical formalism (generally a fragment of first-order logic, classical or intuitionistic). This includes Prolog.
Rewrite Rule Systems, systems which rewrite some working memory based on, LHS => RHS rewrite rules.
There may be others. Features of one group can be found in another group. Systems of one group may be partially or wholly implemented by systems of another group. Overlap is not only possible but certain.
(The diagram itself is not reproduced here. In it, green marks Forward Chaining, orange marks Backward Chaining, and yellow marks Prolog.)
RuleML (an organization) tries to XML-ize the various rule systems that exist. Their classification of rules appears in The RuleML Perspective on Reaction Rule Standards by Adrian Paschke (the classification diagram is not reproduced here).
So they differentiate between "deliberative rules" and "reactive rules", which fits.
First box: "Production Rule Systems"
The General Idea of the "Production Rule System" (PRS)
There are "LHS->RHS" rules & meta-rules, with the latter controlling application of the first. Rules can be "logical" (similar to Prolog Horn Clauses), but they need not be!
The PRS has a "working memory", which is changed destructively whenever a rule is applied: elements or facts can be removed, added or replaced in the working memory.
PRS have "operational semantics" only (they are defined by what they do).
PRS have no "declarative semantics", which means there is no proper way to reason about the ruleset itself: What it computes, what its fixpoint is (if there is one), what its invariants are, whether it terminates etc.
More features:
Ad-hoc handling of uncertainty using locally computable functions (i.e. not probability computations), as in MYCIN, with fuzzy rules, Dempster-Shafer theory, etc.
Strong negation may be expressed in an ad-hoc fashion.
Generally, backtracking on impasse is not performed; one has to implement it explicitly.
PRS can connect to other systems rather directly: Call a neural network, call an optimizer or SAT Solver, call a sensor, call Prolog etc.
Special support for explanations & debugging may or may not exist.
Example Implementations
Ancient:
Old-school "expert systems shells", often written in LISP.
Planner of 1971, a language with rudimentary (?) forward and backward chaining. The implementations of that language were never complete.
The original OPSx series, in particular OPS5, on which R1/XCON - a VAX system configurator with 2500 rules - was running. This was actually a forward-chaining implementation.
Recent:
CLIPS (written in C): http://www.clipsrules.net/
Jess (written in Java): https://jess.sandia.gov/
Drools (written in "Enterprise" Java): https://www.drools.org/
Drools supports "backward chaining" (though I'm not sure exactly how), but I'm not sure whether any of the others do, and if they do, what it looks like.
"Forward chaining" in PRS
Forward chaining is the original approach to the PRS "cycle", also called the "recognize-act" cycle or the "data-driven" cycle, which indicates what it is for. Event-Condition-Action architecture is another commonly used description.
The inner workings are straightforward:
The rule LHSs are matched against the working memory (which happens at every working-memory update, thanks to the RETE algorithm).
One of the matching rules is selected according to some criterion (e.g. priority) and its RHS is executed. This continues until no LHS matches anymore.
This cycle can be seen as a higher-level approach to imperative state-based languages.
Robert Kowalski notes that the "forward chaining" rules are actually an amalgamation of two distinct uses:
Forward-chained logic rules
These rules apply Modus Ponens repeatedly to the working memory and add deduced facts.
Example:
"IF X is a man, THEN X is mortal"
Uses:
Deliberation, refinement of representations.
Exploration of state spaces.
Planning, if you want more control or space is at a premium (R1/XCON was a forward-chaining system, which I find astonishing; this was apparently due to the desire to keep resource usage within bounds).
In Making forward chaining relevant (1998), Fahiem Bacchus writes:
Forward chaining planners have two particularly useful properties. First, they maintain complete information about the intermediate states generated by a potential plan. This information can be utilized to provide highly effective search control, both domain independent heuristic control and even more effective domain dependent control ... The second advantage of forward chaining planners is they can support rich planning languages. The TLPlan system for example, supports the full ADL language, including functions and numeric calculations. Numbers and functions are essential for modeling many features of real planning domains, particularly resources and resource consumption.
How much of the above really applies is debatable. You can always write your backward-chaining planner to retain more information, or to be open to configuration by a module that selects the search strategy.
Forward-chaining "reactive rules" aka "stimulus-response rules"
Example:
"IF you are hungry THEN eat something"
The stimulus is "hunger" (which can be read off a sensor). The response is to "eat something" (which may mean controlling an effector). There is an unstated goal, which is to be "less hungry"; it is attained by eating, but there is no deliberative phase where that goal is made explicit.
Uses:
Immediate, non-deliberative agent control: LHS can be sensor input, RHS can be effector output.
"Backward chaining" in PRS
Backward chaining, also called "goal-directed search", applies "goal-reduction rules" and runs the "hypothesis-driven cycle", which indicates what it is for.
Examples:
BDI Agents
MYCIN
Use this when:
Your problem looks like a "goal" that may be broken up into "subgoals" which can be solved individually. Depending on the problem, this may not be possible: the subgoals may have too many interdependencies or too little structure.
You need to "pull in more data" on demand. For example, you ask the user Y/N questions until you have classified an object properly, or, equivalently, until a diagnosis has been obtained.
When you need to plan, search, or build a proof of a goal.
One can also encode backward-chaining rules as forward-chaining rules, as a programming exercise. However, one should choose the representation and the computational approach best adapted to one's problem; that's why backward chaining exists, after all.
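To make goal-reduction concrete, here is a minimal sketch in Prolog (the classic backward chainer, discussed in the next box), using the man/mortal rule from above. The goal mortal(X) is reduced to the subgoal man(X), which is matched against stored facts:

% Facts.
man(socrates).
man(plato).

% Goal-reduction rule: to prove mortal(X), prove man(X).
mortal(X) :- man(X).

% ?- mortal(socrates).   succeeds
% ?- mortal(Who).        enumerates Who = socrates ; Who = plato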
Second box: "Logic Rule Systems" (LRS)
These are systems based on some underlying logic. The system's behaviour can (at least generally) be studied independently from its implementation.
See this overview: Stanford Encyclopedia of Philosophy: Automated Reasoning.
I make a distinction between systems for "Modeling Problems in Logic" and systems for "Programming in Logic". The two are merged in textbooks on Prolog. Simple "Problems in Logic" can be directly modeled in Prolog (i.e. using Logic Programming) because the language is "good enough" and there is no mismatch. However, at some point you need dedicated systems for your task, and these may be quite different from Prolog. See Isabelle or Coq for examples.
Restricting ourselves to the Prolog family of systems for "Logic Programming":
"Forward chaining" in LRS
Forward-chaining is not supported by a Prolog system as such.
Forward-chained logic rules
If you want forward-chained logic rules, you can write your own interpreter "on top of Prolog". This is possible because Prolog is a general-purpose programming language.
Here is a very silly example of forward chaining of logic rules. It would certainly be preferable to define a domain-specific language and appropriate data structures instead:
% Add a fact to the knowledge base; fail if it is already there.
add_but_fail_if_exists(Fact,KB,[Fact|KB]) :- \+ member(Fact,KB).

% Apply "forall x: man(x) -> mortal(x)" if it yields a new fact, then loop.
fwd_chain(KB,KBFinal,"forall x: man(x) -> mortal(x)") :-
   member(man(X),KB),
   add_but_fail_if_exists(mortal(X),KB,KB2),
   !,
   fwd_chain(KB2,KBFinal,_).

% Same idea for the second rule.
fwd_chain(KB,KBFinal,"forall x: man(x),woman(y),(married(x,y);married(y,x)) -> needles(y,x)") :-
   member(man(X),KB),
   member(woman(Y),KB),
   (member(married(X,Y),KB);member(married(Y,X),KB)),
   add_but_fail_if_exists(needles(Y,X),KB,KB2),
   !,
   fwd_chain(KB2,KBFinal,_).

% No rule fired: the knowledge base has reached its fixpoint.
fwd_chain(KB,KB,"nothing to deduce anymore").

rt(KBin,KBout) :- fwd_chain(KBin,KBout,_).
Try it:
?- rt([man(socrates),man(plato),woman(xanthippe),married(socrates,xanthippe)],KB).
KB = [needles(xanthippe, socrates), mortal(plato),
mortal(socrates), man(socrates), man(plato),
woman(xanthippe), married(socrates, xanthippe)].
Extensions that add efficient forward chaining to Prolog have been studied, but they all seem to have been abandoned. I found:
1989: Adding Forward Chaining and Truth Maintenance to Prolog (PDF) (Tom_Finin, Rich Fritzson, Dave Matuszek)
There is an active implementation of this on GitHub: Pfc -- forward chaining in Prolog, and an SWI-Prolog pack, see also this discussion.
1997: Efficient Support for Reactive Rules in Prolog (PDF) (Mauro Gaspari) ... the author talks about "reactive rules" but apparently means "forward-chained deliberative rules".
1998: On Active Deductive Database: The Statelog Approach (Georg Lausen, Bertram Ludäscher, Wolfgang May).
Kowalski writes:
"Zaniolo (LDL++?) and Statelog use a situation calculus-like representation with frame axioms, and reduce Production Rules and Event-Condition-Action rules to Logic Programs. Both suffer from the frame problem."
Forward-chained reactive rules
Prolog is not really made for "reactive rules". There have been some attempts:
LUPS: A language for updating logic programs (1999) (Moniz Pereira, Halina Przymusinska, Teodor C. Przymusinski)
The "Logic-Based Production System" (LPS) is recent and rather interesting:
Integrating Logic Programming and Production Systems in Abductive Logic Programming Agents (Robert Kowalski, Fariba Sadri)
Presentation at RR2009: Integrating Logic Programming and Production Systems in Abductive Logic Programming Agents
LPS website
It defines a new language where Observations lead to Forward-Chaining, and Backward-Chaining leads to Acts. The two "silos" are linked by Integrity Constraints from Abductive Logic Programming.
So you can replace a reactive rule with a rule that has a logic interpretation. (The original answer illustrated this with two images, not reproduced here.)
Third Box: "Rewrite Rule Systems" (forward-chaining)
See also: Rewriting.
Here I will just mention CHR. It is a forward-chaining system which successively rewrites elements in a working memory according to rules that match working-memory elements, check a guard condition, and remove/add working-memory elements if the guard succeeds.
CHR can be understood as an application of a fragment of linear logic (see "A Unified Analytical Foundation for Constraint Handling Rules" by Hariolf Betz).
A CHR implementation exists for SWI Prolog. It provides backtracking capability for CHR rules and a CHR goal can be called like any other Prolog goal.
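As a small taste, here is the classic gcd example from the CHR literature (a sketch; it runs under SWI-Prolog's CHR library). Two gcd/1 constraints in the store are repeatedly rewritten, Euclid-style, until only the greatest common divisor remains:

:- use_module(library(chr)).
:- chr_constraint gcd/1.

% Remove the trivial constraint.
gcd(0) <=> true.
% Keep the smaller value, replace the larger one by the difference.
gcd(N) \ gcd(M) <=> N > 0, M >= N | Z is M - N, gcd(Z).

% ?- gcd(12), gcd(8).
% leaves the single constraint gcd(4) in the store.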
Usage of CHR:
General model of computation (i.e. like Turing machines, etc.)
Bottom-up parsing.
Type checking.
Constraint propagation in constraint logic programming.
Anything that you would rather forward-chain (process bottom-up) than backward-chain (process top-down).
I find it useful to start with your process and goals.
If your process can be easily expressed as trying to satisfy a goal by satisfying sub-goals, then you should consider a backward-chaining system such as Prolog. These systems work by processing rules for the various ways in which a goal can be satisfied and the constraints on applying those ways. Rule processing searches the network of goals, backtracking to try alternatives when one way of satisfying a goal fails.
If your process starts with a set of known information and applies rules to add information, then you should consider a forward-chaining system such as OPS5, CLIPS or JESS. These languages apply matching to the left-hand sides of rules and invoke the right-hand sides of the rules for which matching succeeds. The working memory is better thought of as "what is known" than "true facts". Working memory can contain information known to be true, information known to be false, goals, sub-goals, and even domain rules. How this information is used is determined by the rules, not the language. To these languages there is no difference between rules that create values (deduce facts), rules that create goals, rules that create new domain knowledge, or rules that change state. It is all in how you write your rules, organize your data, and add base clauses to represent this knowledge.
It is fairly easy to implement either method using the other. If you have a body of knowledge and want to make deductions, but this needs to be directed by some goals, go ahead and use a forward-chaining language with rules to keep track of goals. In a backward-chaining language you can have goals to deduce knowledge.
I would suggest that you consider writing rules to handle the processing of domain knowledge and not to encode your domain knowledge directly in the rules processed by the inference engine. Instead, the working memory or base clauses can contain your domain knowledge and the language rules can apply them. By representing the domain knowledge in working memory you can also write rules to check the domain knowledge (validate data, check for overlapping ranges, check for missing values, etc.), add a logic system on top of the rules (to calculate probabilities, confidence values, or truth values) or handle missing values by prompting for user input.

What influences the maintainability result in SonarQube?

I'm confronted with a huge "spaghetti code" base with a known lack of documentation, lack of test coverage, high complexity, lack of design rules, etc. I had the code analysed by a default sonar-scan and, surprisingly to me, maintainability has a really great score, with a technical debt of 1.1%! Reality shows that almost every change introduces new bugs.
I'm quite perplexed, and wonder if some particularities of the implementation could explain this score. For example, we have quite a lot of interfaces (my feeling: 4-5 interfaces per implementation), and we use reflection and the service locator pattern.
Are there other indicators I could use that might be more relevant for improving quality?
The maintainability rating is based on the ratio of the estimated time to fix all the issues of type Code Smell in your code base to the estimated time to write the code in its current state.
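For example, with purely hypothetical numbers: if fixing all reported code smells is estimated at 8 hours and writing the code base in its current state at 720 hours, the technical debt ratio is 8 / 720 ≈ 1.1%, which still maps to the best maintainability rating under the default thresholds. Note that this ratio says nothing about bugs, vulnerabilities, or coverage.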
You should also look at the Bugs and Vulnerabilities in the code base.
Regarding your specific points (and assuming we're talking about Java):
known lack of documentation - there is a rule in the default profile that looks for Javadoc. You might read its description and parameter values to see what it does and does not find.
lack of test coverage - there is currently a "hole" in this detection; if there is no coverage for a class, then the class is not taken into account when computing lines that could/should be covered, and therefore when calculating coverage percentages. It should be fixed "soon". The first steps will appear on the platform side in 6.2, but will need accompanying changes in the language plugins to take effect.
high complexity - there are rules for this. If they are not finding what you think they should, then take a look at their (adjustable) thresholds.
lack of design rules - the only rule that might address this (Architectural Constraint) is:
deprecated
slated for removal
not on by default
dropped from the latest versions of the plugin
use of reflection - there aren't currently rules available to detect this

Is static analysis really formal verification?

I have been reading about formal verification, and the basic point is that it requires a formal specification and model to work with. However, many sources classify static analysis as a formal verification technique; some mention abstract interpretation and its use in compilers.
So I am confused - how can these be formal verification if there is no formal description of the model?
EDIT: A source I found reads:
Static analysis: the abstract semantics is computed automatically from the program text according to predefined abstractions (that can sometimes be tailored automatically/manually by the user)
So does that mean it works just on the source code, with no need for a formal specification? That would be what static analysers do.
Also, is static analysis possible without formal verification? E.g. does SonarQube really perform formal methods?
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
How can these be formal verification if there is no formal description of the model?
A static analyser will generate the control/data flow of a piece of code, upon which formal methods can then be applied to verify conformance to the system's/unit's expected design model.
Note that modelling/formal specification is NOT a part of static analysis.
However, combined, both of these tools are useful in formal verification.
For example, if a system is modeled as a Finite State Machine (FSM) with
a pre-defined number of states, defined by a combination of specific values of certain member data, and
a pre-defined set of transitions between various states, defined by the list of member functions,
then the results of static analysis will help in the formal verification of the fact that the control NEVER flows along a path that is NOT present in the above FSM model.
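A minimal sketch of that check in Prolog, with hypothetical facts (a real tool would extract found/2 from the code automatically):

% Transitions allowed by the FSM model.
transition(idle,    running).
transition(running, idle).
transition(running, error).

% Transitions the static analyser found in the code.
found(idle,    running).
found(running, halted).     % not in the model

% A violation is a transition present in the code but absent from the model.
violation(From, To) :- found(From, To), \+ transition(From, To).

% ?- violation(From, To).
% From = running, To = halted.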
Also, if a model can be defined simply in terms of type definitions, data flow, and control flow/call graph (i.e. code metrics that a static analyser can verify), then static analysis itself is sufficient to formally verify that the code conforms to such a model.
NOTE 1: One class of static analysers is used to enforce things like coding guidelines and naming conventions, i.e. aspects of code that cannot affect the program's behavior. (The yellow region of the Venn diagram in the original answer, not reproduced here.)
NOTE 2: Formal verification proper requires additional steps, like 100% dynamic code coverage and the elimination of unused and dead code; these cannot be detected/enforced using a static analyser. (The red region of the original diagram.)
Static analysis is highly effective in verifying that a system/unit is implemented using a subset of the language specification to meet the goals laid out in the system/unit design.
For example, if it is a design goal to prevent the stack memory from exceeding a particular limit, then one could apply a limit on the depth of recursion (or forbid recursive function calls altogether). Static analysis is used to identify such violations of design goals.
In the absence of any warnings from the static analyser, the system/unit code stands formally verified against such design goals of its respective model.
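Continuing that example, here is a minimal Prolog sketch of checking a "no recursion" design goal over a call graph (the calls/2 facts are hypothetical; a real analyser would extract them from the source):

:- table reachable/2.    % tabling (SWI-Prolog) avoids looping on cyclic graphs

% Hypothetical call graph extracted from the code.
calls(main, init).
calls(main, run).
calls(run,  step).
calls(step, run).        % violation: run -> step -> run

reachable(F, G) :- calls(F, G).
reachable(F, G) :- calls(F, H), reachable(H, G).

% A function violates the design goal if it can reach itself.
recursive(F) :- reachable(F, F).

% ?- recursive(F).
% F = run ; F = step.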
E.g. the MISRA-C standard for automotive software defines a subset of C for use in automotive systems. MISRA-C:2012 contains:
143 rules, each of which is checkable using static program analysis;
16 "directives", which are more open to interpretation or relate to process.
Static analysis just means "read the source code and possibly complain". (Contrast to "dynamic analysis", meaning, "run the program and possibly complain about some execution behavior").
There are lots of different types of possible static-analysis complaints.
One possible complaint might be,
Your source code does not provably satisfy a formal specification
This complaint would be based on formal verification if the static analyzer had a formal specification which it interpreted "formally", a formal interpretation of the source code, and a trusted theorem prover that could not find an appropriate theorem.
All the other kinds of complaints you might get from a static analyzer are pretty much heuristic opinions, that is, they are based on some informal interpretation of the code (or specification if it indeed even exists).
The "heavy duty" static analyzers such as Coverity etc. have pretty good program models, but they don't tell you that your code meets a specification (they don't even look to see if you have one). At best they only tell you that your code does something undefined according to the language ("dereference a null pointer") and even that complaint isn't always right.
So-called "style checkers" such as MISRA are also static analyzers, but their complaints are essentially "You used a construct that some committee decided was bad form". That's not actually a bug, it is pure opinion.
You can certainly classify static analysis as a kind of formal verification.
how can these be formal verification if there is no formal description of the model?
For static analysis tools, the model is implicit (or in some tools, partly implicit). For example, "a well-formed C++ program will not leak memory, and will not access memory that hasn't been initialized". These sorts of rules can be derived from the language specification, or from the coding standards of a particular project.

Ruby regex for pulling out PsycINFO references

I need a regex to separate references from a mountain of PsycINFO lit searches that look like this:
http://rubular.com/r/bKMoDpAJvY
(I can't post the text - something about this edit control bungs it up horribly)
I just want matches consisting of all the text between the numbering, but it is doing my head in. Also, an explanation would be fabulous, so I can learn.
Does teststring.split(/^\d+\./) work for you?
With String#split you get an array out of your string; the string is split at the regex, in this case a number at the beginning of a line, followed by a dot (and, in the test code below, optional whitespace up to the end of the line).
My test code:
teststring = DATA.read
teststring.split(/^\d+\.\s*$/).each{|m|
  puts "==========="
  puts m
}
__END__
1.
Reframing the rocky road: From causal analysis to mindreading as the drama of disposition inference. [References].
Ames, Daniel R.
Psychological Inquiry. Vol.20(1), Jan 2009, pp. 19-23.
AN: Peer Reviewed Journal: 2009-04633-002.
Comments on an article by Glenn D. Reeder (see record 2009-04633-001). My misgivings with Reeder's account are relatively minor. For one, I am not sure that the "multiple inference model" label quite captures the essential part of Reeder's argument. Although it suggests the plurality of judgments that perceivers often make, it does not seem to reflect Reeder's central point that, for intentional behaviors, perceivers typically make motive inferences and these guide trait inferences. Another stumbling point for me was the identification of five categories that accounted for "the majority of studies" on dispositional inference (attitude attribution, moral attribution, ability attribution, the silent interview paradigm, and the quiz-role paradigm). These are noteworthy paradigms, to be sure, but they hardly seem to exhaust the research on dispositional inference, which I take as a perceiver's ascription of an enduring trait to a target. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
Jan 2009
Year of Publication
2009
E-Mail Address
Ames, Daniel R.: da358#columbia.edu
Other Publishers
Lawrence Erlbaum; US
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc6&AN=2009-04633-002
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:10.1080%2F10478400902744253&issn=1047-840X&isbn=&volume=20&issue=1&spage=19&pages=19-23&date=2009&title=Psychological+Inquiry&atitle=Reframing+the+rocky+road%3A+From+causal+analysis+to+mindreading+as+the+drama+of+disposition+inference.&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2009-04633-002%3C%2FAN%3E%3CDT%3EComment%2FReply%3C%2FDT%3E
2.
Everyday Solutions to the Problem of Other Minds: Which Tools Are Used When? [References].
Ames, Daniel R.
Malle, Bertram F [Ed]; Hodges, Sara D [Ed]. (2005). Other minds: How humans bridge the divide between self and others. (pp. 158-173). xiii, 354 pp. New York, NY, US: Guilford Press; US.
AN: Book: 2005-09375-010.
(from the chapter) Intuiting what the people around us think, want, and feel is essential to much of social life. Some scholars have gone so far as to declare the "problem of other minds"--whether a person can know if anyone else has thoughts and, if so, what they are--intractable. And yet countless times a day, we solve such problems with ease, if not perfectly then at least to our own satisfaction. What strategies underlie these everyday solutions? And how are these tools employed? This chapter offers 4 contingencies about when various inferential tools might be used. First, that affect qualifies behavior in the near term: perceived remorseful affect can lead to ascriptions of good intent to harm-doers in the short run, but repeated harm drives long-run ascriptions of bad intent. Second, that perceived similarity governs projection and stereotyping: perceptions of general similarity to a target typically draw a mindreader toward projection and away from stereotyping; perceived dissimilarity does the opposite. Third, that cumulative behavioral evidence supersedes extratarget strategies: projection and stereotyping will drive mindreading when behavioral evidence is ambiguous, but as apparent evidence accumulates, inductive judgments will dominate. Fourth, that negative social intention information weighs heavily in mindreading: within a mindreading strategy, cues signaling negative social intentions may dominate neutral or positive cues; between mindreading strategies, those strategies that signal negative social intentions may dominate. These contingencies have varying degrees of empirical support and would benefit from additional research and thinking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
2005
Year of Publication
2005
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc5&AN=2005-09375-010
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:&issn=&isbn=1-59385-187-1&volume=&issue=&spage=158&pages=158-173&date=2005&title=Other+minds%3A+How+humans+bridge+the+divide+between+self+and+others.&atitle=Everyday+Solutions+to+the+Problem+of+Other+Minds%3A+Which+Tools+Are+Used+When%3F&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2005-09375-010%3C%2FAN%3E%3CDT%3EChapter%3C%2FDT%3E
results in:
===========
===========
Reframing the rocky road: From causal analysis to mindreading as the drama of disposition inference. [References].
Ames, Daniel R.
Psychological Inquiry. Vol.20(1), Jan 2009, pp. 19-23.
AN: Peer Reviewed Journal: 2009-04633-002.
Comments on an article by Glenn D. Reeder (see record 2009-04633-001). My misgivings with Reeder's account are relatively minor. For one, I am not sure that the "multiple inference model" label quite captures the essential part of Reeder's argument. Although it suggests the plurality of judgments that perceivers often make, it does not seem to reflect Reeder's central point that, for intentional behaviors, perceivers typically make motive inferences and these guide trait inferences. Another stumbling point for me was the identification of five categories that accounted for "the majority of studies" on dispositional inference (attitude attribution, moral attribution, ability attribution, the silent interview paradigm, and the quiz-role paradigm). These are noteworthy paradigms, to be sure, but they hardly seem to exhaust the research on dispositional inference, which I take as a perceiver's ascription of an enduring trait to a target. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
Jan 2009
Year of Publication
2009
E-Mail Address
Ames, Daniel R.: da358#columbia.edu
Other Publishers
Lawrence Erlbaum; US
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc6&AN=2009-04633-002
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:10.1080%2F10478400902744253&issn=1047-840X&isbn=&volume=20&issue=1&spage=19&pages=19-23&date=2009&title=Psychological+Inquiry&atitle=Reframing+the+rocky+road%3A+From+causal+analysis+to+mindreading+as+the+drama+of+disposition+inference.&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2009-04633-002%3C%2FAN%3E%3CDT%3EComment%2FReply%3C%2FDT%3E
===========
Everyday Solutions to the Problem of Other Minds: Which Tools Are Used When? [References].
Ames, Daniel R.
Malle, Bertram F [Ed]; Hodges, Sara D [Ed]. (2005). Other minds: How humans bridge the divide between self and others. (pp. 158-173). xiii, 354 pp. New York, NY, US: Guilford Press; US.
AN: Book: 2005-09375-010.
(from the chapter) Intuiting what the people around us think, want, and feel is essential to much of social life. Some scholars have gone so far as to declare the "problem of other minds"--whether a person can know if anyone else has thoughts and, if so, what they are--intractable. And yet countless times a day, we solve such problems with ease, if not perfectly then at least to our own satisfaction. What strategies underlie these everyday solutions? And how are these tools employed? This chapter offers 4 contingencies about when various inferential tools might be used. First, that affect qualifies behavior in the near term: perceived remorseful affect can lead to ascriptions of good intent to harm-doers in the short run, but repeated harm drives long-run ascriptions of bad intent. Second, that perceived similarity governs projection and stereotyping: perceptions of general similarity to a target typically draw a mindreader toward projection and away from stereotyping; perceived dissimilarity does the opposite. Third, that cumulative behavioral evidence supersedes extratarget strategies: projection and stereotyping will drive mindreading when behavioral evidence is ambiguous, but as apparent evidence accumulates, inductive judgments will dominate. Fourth, that negative social intention information weighs heavily in mindreading: within a mindreading strategy, cues signaling negative social intentions may dominate neutral or positive cues; between mindreading strategies, those strategies that signal negative social intentions may dominate. These contingencies have varying degrees of empirical support and would benefit from additional research and thinking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Publication Date
2005
Year of Publication
2005
Link to the Ovid Full Text or citation:
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=psyc5&AN=2005-09375-010
Link to the External Link Resolver:
http://diglib1.bham.ac.uk:3210/sfxlcl3?sid=OVID:psycdb&id=pmid:&id=doi:&issn=&isbn=1-59385-187-1&volume=&issue=&spage=158&pages=158-173&date=2005&title=Other+minds%3A+How+humans+bridge+the+divide+between+self+and+others.&atitle=Everyday+Solutions+to+the+Problem+of+Other+Minds%3A+Which+Tools+Are+Used+When%3F&aulast=Ames&pid=%3Cauthor%3EAmes%2C+Daniel+R%3C%2Fauthor%3E%3CAN%3E2005-09375-010%3C%2FAN%3E%3CDT%3EChapter%3C%2FDT%3E
The first empty string is an artifact of the split (the text before the first number); you may delete it.
I found another solution with String#scan:
(teststring + "99.\n").scan(/^\d+\.\s*\n(.*?)(?=^\d+\.\s*\n)/m).each{|m|
puts "==========="
puts m
}
Explanation:
^\d+\.\s*\n - look for a number followed by a dot at the start of a line, ignoring trailing whitespace up to the end of the line
(.*?) - take everything, but not greedily (use the shortest hit)
(?=^\d+\.\s*\n) - look ahead for the next entry, but don't consume it
m - multiline mode, so . also matches newlines
(teststring + "99.\n") - this solution would lose the last entry, so we append an 'end tag'
