In Richard Price's novel "Lush Life," an NYPD detective explains how he determines whether someone is lying about his level of participation in a crime:
"When we're interviewing somebody who claims to be a witness but we think was maybe a little more…involved than that? It's called an I test. You sit them down and take their statement, written, dictated, whatever, and when you're finished, you count up and divide the pronouns. If a girl gets shot and the boyfriend's story consists of sixteen Is and mys, but only three hers and shes?--he just flunked."
Price doesn’t get more specific about this system - it’s a novel, after all, not an investigations manual - but it actually has a basis in fact. The technique he describes in this passage is a crude form of the discipline known as forensic linguistics.
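The pronoun-counting "I test" in the passage above can be sketched in a few lines of code. This is only an illustration of the counting the detective describes; the word lists, the threshold-free ratio, and the example statement are assumptions for demonstration, not a real investigative protocol.

```python
import re

# Illustrative sketch of the "I test": count first-person pronouns
# versus third-person pronouns in a statement. The word lists below
# are assumptions, not an established forensic standard.
FIRST_PERSON = {"i", "me", "my", "mine"}
THIRD_PERSON = {"he", "him", "his", "she", "her", "hers"}

def pronoun_ratio(statement: str) -> float:
    """Return the count of first-person pronouns divided by third-person."""
    words = re.findall(r"[a-z']+", statement.lower())
    first = sum(w in FIRST_PERSON for w in words)
    third = sum(w in THIRD_PERSON for w in words)
    return first / third if third else float("inf")

statement = "I told her I was home. My phone was off. I never saw her leave."
print(round(pronoun_ratio(statement), 2))  # prints 2.0
```

A lopsided ratio like the novel's "sixteen Is and mys, but only three hers and shes" (about 5.3) would, by this crude heuristic, flag the speaker's account as self-centered rather than victim-centered.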
The Birth of Forensic Linguistics
The term "forensic linguistics" was coined in 1968 by the linguistics professor Jan Svartvik to describe his analysis of statements that were purportedly dictated by suspects to police in the United Kingdom. The analysis revealed that the suspects were not actually giving uninterrupted narrative accounts, which was how the judicial rules in place at that time required their statements to be taken. In fact, suspects were being interrupted, and dialog between the police and the suspect occurred. As a result, the statements used in court contained a great deal of information that came from the officers doing the questioning, rather than just from the suspect.
The officers were not deliberately trying to contaminate suspect statements. Rather, there was a disconnect between the judicial rules, which assumed that a person could recount events in a logical manner without prompting, and the realities of how police interviews actually work. To obtain a statement that presented events in a logical progression, the interviewers had to ask questions, and they then incorporated the answers into the suspect's statement.
This practice was revealed as problematic when forensic linguistic analysis of suspect statements was conducted in several high-profile UK cases that had ended in capital punishment. Based on the analysis, several executed defendants were granted posthumous pardons.
The statements used to convict these suspects contained, among other tell-tale signs, wording in what is known as “police register.” In linguistics, a “register” is a variety of language, with its own vocabulary and phrasing, that is particular to a certain field or setting. Police register differs from everyday speech by incorporating words such as “perpetrator,” radio codes, and other field-specific ways of speaking. For example, a police officer might state in a report, “I then exited the vehicle,” whereas the same officer talking to a friend would more naturally say, “Then I got out of the car.” When police register appears in statements supposedly written by a victim, witness, or suspect, it raises the question of how much of the statement truly represents the writer’s own account and how much was shaped by the interviewing officer.
Forensic linguistics has been accepted by the courts of the UK, the U.S., and many other countries as a valid science. Practitioners of the discipline tend to be grounded in sociolinguistics, although several graduate programs now offer degrees specifically in forensic linguistics.
What Forensic Linguistics Can Do
At its core, forensic linguistics is a method for determining authorship, and it has applications in a variety of areas, including determining whether plagiarism has occurred, as in the case of Dan Brown’s book "The Da Vinci Code." By applying forensic linguistic techniques, a good case was made that Brown plagiarized two novels by Lewis Perdue: "The Da Vinci Legacy" and "Daughter of God."
Other areas include trademark disputes, such as whether using the phrase “Mc-something,” with or without accompanying golden arches, to indicate that an item is generic infringes on the rights of the McDonald’s hamburger chain.
In the judicial system, the field of forensic linguistics can be applied to a broad variety of cases. Some of the most frequently used applications include the following:
- Voice identification, which can help determine who made an utterance on a 911 call, for instance, or exactly what a person said on a given recording.
- Dialectology, which can be used to identify a person’s spoken dialect, usually as a way to identify a particular speaker on a recording.
- Language analysis, which identifies a speaker’s native language and dialect, and which is often used in immigration cases.
- Discourse analysis, which looks at conversation and other interaction and can be used to determine which person introduced a topic or idea, such as in the case of a criminal conspiracy.
- Linguistic proficiency, to help determine whether a person understood his or her rights.
- Author identification, to include or exclude a particular individual as the possible writer of a text.
Interestingly, while forensic linguistics has been applied to overturn wrongful convictions - and to develop better interviewing methods so that resulting convictions will be upheld - the field does not lend itself to determining whether a person is telling the truth or lying in a written or spoken text. Some companies claim to be able to teach investigators how to separate truth tellers from liars, but their techniques have not stood up to academic and scientific scrutiny.
That does not mean, however, that forensic linguistic analysis has nothing to say about whether a text is genuine. The inclusion or exclusion of a person as the author of a written or spoken text, for example, can be made by comparing samples of the person’s known writing to the questioned text.
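One common way to sketch that comparison step is stylometric profiling: represent each text by the relative frequency of common function words, then measure the similarity between profiles. The word list, sample texts, and similarity measure below are illustrative assumptions - a toy demonstration of the idea, not a real forensic method.

```python
import math
from collections import Counter

# Toy authorship comparison: profile each text by the relative frequency
# of common function words, then compare profiles with cosine similarity.
# The function-word list and sample texts are assumptions for illustration;
# real forensic stylometry uses far richer features and careful validation.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "was", "i", "it"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

known = "i walked to the store and i noticed that the door was shut"
questioned = "i drove to the bank and i saw that the line was long"
unrelated = "stars burn hydrogen of enormous mass in a slow collapse"

print(round(cosine(profile(known), profile(questioned)), 2))  # prints 1.0
print(round(cosine(profile(known), profile(unrelated)), 2))   # prints 0.0
```

In practice, forensic linguists weigh many features - spelling, punctuation, syntax, dialect terms - and a single similarity score over ten function words would never suffice on its own to include or exclude an author.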
Sometimes the language used in a text can be telling, too. In a famous kidnapping case, a ransom note instructed that money be left “in the green trash [c]an on the devil strip at the corner 18th and Carlson.” Forensic linguist Roger Shuy, a professor who was called in as a consultant on the case, correctly identified the note writer as an educated man from Akron, Ohio. His linguistic clue? The strip of grass between the street and the sidewalk, known elsewhere as a “tree belt” or “sidewalk buffer,” is called a “devil’s strip” only in Akron.
Suicide Notes: Real or Faked?
One area in which forensic linguistics has proved particularly helpful is in the determination of valid versus faked suicide notes. Conventional wisdom dictates that the vaguer a note is, such as “Goodbye, cruel world,” the more likely it is to be a fake designed to cover up a murder or other crime, whereas genuine suicide notes contain more detail. This method for determining real suicide notes may be crude, but it does have some basis in fact.
Forensic linguistic analysis of suicide notes has also revealed more about the content of genuine notes. A 2007 study published in the journal "Crisis" analyzed suicide notes left both by persons who completed the act (completers) and by those who attempted suicide but did not complete it (attempters). The linguistic elements of a note can indicate whether a person was truly trying to end his or her life or was making a cry for help; attempters, for example, employed more metaphysical references. As with other genres of writing, however, the gender of the writer could not be determined from the wording of the notes.
Applied forensic linguistics has proved valuable in many types of investigations, and the field continues to expand and to gain acceptance in courtrooms around the world. Its methods can help establish whether a statement is authentic, and its scientific rigor can inform investigator training, producing solid cases whose outcomes can withstand appeal.