For May at CJ Access, check out Editorials and Opinions, where I examine whether the National Firearms Act has outlived its usefulness. Current firearm designs, the criminality (or lack thereof) associated with NFA weapons, and a faulty registration system used in prosecutions all suggest it's time to amend the National Firearms Act.
And on the Original Research front, I'm starting a new research project that uses unused data from my dissertation in an exploratory study of beat officer patrol patterns. During ride-alongs with 59 officers, I tracked the patrol car's movement throughout each patrol for periods of approximately six hours. I have turn-by-turn directions as well as the in-beat locations of calls for service and self-initiated stops. I plan to analyze this geographic and time data on patrols and stops to examine questions such as:
Do some officers cover more area than others working the same beat and shift and is there a similarity in areas that officers think they should patrol?
How do the patrol patterns of each beat differ by shift?
Do some shifts on the beats engage in broader beat coverage and do some beats get broader coverage than others?
What is the level of patrol, based on the number of passes through areas of the beat and through areas surrounding calls for service and self-initiated stops, and do officers focus their patrol closer to areas where they receive calls for service?
Data cleanup and operationalization are first on the list, and I'll provide updates as the research progresses.
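As a rough illustration of the kind of coverage analysis I have in mind, a patrol trace can be binned into grid cells and the distinct cells visited per officer counted. This is only a sketch: the record format, cell size, and unit IDs below are hypothetical, and a real analysis would project coordinates into a planar system before gridding.

```python
from collections import defaultdict

# Hypothetical record format: each GPS ping is (officer_id, lat, lon).
# CELL = 0.005 degrees is roughly 500 m of latitude; purely illustrative.
CELL = 0.005

def cell_of(lat, lon):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def coverage(pings):
    """Count the distinct grid cells each officer passed through."""
    cells = defaultdict(set)
    for officer, lat, lon in pings:
        cells[officer].add(cell_of(lat, lon))
    return {officer: len(c) for officer, c in cells.items()}

pings = [
    ("unit_12", 41.500, -81.600),
    ("unit_12", 41.505, -81.600),   # a new cell to the north
    ("unit_12", 41.505, -81.601),   # rounds into the same cell as above
    ("unit_40", 41.500, -81.600),
]
print(coverage(pings))  # {'unit_12': 2, 'unit_40': 1}
```

Comparing these per-officer counts within the same beat and shift is one simple way to ask whether some officers cover more area than others.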
This month at CJ Access we're looking at issues of race, police shootings, and police performance, so be sure to check out:
Research Briefs: exploring the connection between race, minority-dense neighborhoods, and fatal shootings by the police; using better benchmarks to generate more accurate data on racial disparities in fatal officer-involved shootings; constructing and utilizing a typology of police shooting errors; and using detailed police officer performance metrics to analyze officers' performance in police-citizen encounters.
For Discussion: Racial profiling is on its face viewed as discriminatory, but does using race or ethnicity to focus an investigation or inquiry ever have a place? What are officers' views? From an investigative standpoint, it may be a tool to be used with discretion, as I explore with an excerpt from my dissertation.
Original Research: An academic research article from 2013 in which I utilized NCVS data from 12 cities to examine differences between races in their satisfaction with the police and whether utilizing components of Community Oriented Policing affected that level of satisfaction.
Also this month, a new and improved PDF reader has been installed on the site, allowing convenient full-screen reading and the ability to download the PDFs found in Original Research.
Race, Place, and Police-Caused Homicide in
U.S. Municipalities
Holmes, Painter II & Smith, Justice
Quarterly, 2019
The authors note that approaches to studying police-caused homicide (PCH) have typically focused on two theories. The first is the Minority Threat hypothesis, which borrows from Conflict Theory the idea that the amount of crime control is directly proportional to the size of the population that threatens the interests of the powerful. Framed as Minority Threat, the theory holds that the level of PCH is in direct relation to the relative size of the Black population: large Black populations are associated with serious criminality and urban violence and are seen as a threat, and the increased crime control directed at that population results in more PCH. In contrast to this linear model, the Power Threat hypothesis suggests a curve, in which crime control increases until the minority population attains enough positions of power that its influence reduces the level of crime control directed at minority populations.

The alternative theoretical perspective is the Community Violence hypothesis, which postulates that violent offending produces more police-caused homicides of suspects. Disadvantaged urban Black populations have relatively high rates of violence, so Black over-representation in PCH reflects the very real threats officers face in dealing with greater levels of violence in these communities. Officers use deadly force when it is necessary in the face of danger, and the level of violence in these communities increases the likelihood that officers will be put in those situations.
The authors suggest another theoretical approach. The Place
hypothesis maintains that the residential segregation of minority populations
into areas of concentrated socioeconomic disadvantage increases the likelihood
of police officers employing violence against minority citizens. Police
patrolling in these disadvantaged places may see minority citizens as
particularly threatening, though this is a more subjective threat based on
place, rather than the objective threat involved in the Community Violence
hypothesis. In this theory the level of threat by minorities is based on the segregation of the population into what
are viewed as dangerous areas, and because minorities are associated with
violent crime, they may be automatically viewed as a threat by being segregated
in these places. Research testing Place hypotheses about PCH has produced mixed findings, and the authors suggest there may be a non-linear relationship between racial segregation into disadvantaged areas and PCH.
The authors also considered that the relationship between Hispanics and PCH may need additional exploration. While percentage Hispanic has not typically been found to be a factor in the incidence of PCH, the authors suggest that group-specific models (each minority group compared to Whites) may reveal disparities not evident in analyses of total incidence, and they examine segregation between White and Hispanic populations as well.
It should be clarified that when the authors use structural theories like Minority Threat and Place, it is to examine whether these community structures are related to PCH. These theories, however, operate under the unproven assumption that any relationship between community structure and PCH exists because of biases police officers hold against minorities. In attempting to make that connection, they do not actually examine whether those biases exist, nor do they take into account situational factors, such as suspect demeanor and behavior, the race of the officers in these encounters, and community attitudes toward police, which may either drive the statistical relationship or negate the relationship between structural conditions and PCH.
Using data from 230 cities with populations over 100,000 that filed Supplemental Homicide Reports with the UCR between 2008 and 2013, the authors' outcome variable was the incidence of a felon killed by a police officer over the study period (range 0-96, mean 5.71, SD 12.92). They noted the small sample size but recognized that other databases include small cities and may have incomplete data, limited methodological documentation, and a lack of verification procedures. Other variables included city population, population density, and geographical region as controls; percent Black and percent Hispanic to represent the Minority Threat hypothesis; and the average violent crime rate, the arrest rate per 1,000, and the total number of police officers killed in the line of duty during the study period to represent the Community Violence hypothesis. To test the Place hypothesis, they used two variables, Black and Hispanic dissimilarity, taken from the 2010 Discover America in a New Century website, which indicate the degree of separation from Whites across all neighborhoods of a city.
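Dissimilarity indices of this kind are conventionally computed as half the sum, across a city's neighborhoods, of the absolute difference between each group's share of its citywide total. As a sketch (this is not the authors' code, and the neighborhood counts are made up):

```python
def dissimilarity(neighborhoods):
    """Index of dissimilarity between a minority group and Whites.
    `neighborhoods` is a list of (minority_count, white_count) pairs.
    0 = identical spatial distributions; 1 = complete segregation."""
    m_total = sum(m for m, w in neighborhoods)
    w_total = sum(w for m, w in neighborhoods)
    return 0.5 * sum(abs(m / m_total - w / w_total) for m, w in neighborhoods)

# Two neighborhoods, completely segregated -> 1.0
print(dissimilarity([(100, 0), (0, 100)]))   # 1.0
# Identical proportions everywhere -> 0.0
print(dissimilarity([(50, 50), (50, 50)]))   # 0.0
```

A value near 1 for Black-White dissimilarity, for example, means nearly all Black residents would have to move neighborhoods to match the White distribution.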
Using negative binomial regression because the outcome was a count variable, they first examined total incidence, finding that a larger city population was significantly related to a greater number of PCH, while the Northeast and Midwest regions were negatively associated with PCH. In total incidence, the authors did not find support for the Minority Threat hypothesis: Black percentage was significantly negatively associated with PCH (but ceased to be significant in the group-specific analysis), and there was no significant association between Hispanic percentage and PCH. The Place analysis found partial support, showing a large significant effect for Black separation but a negligible effect for Hispanic separation. In examining the Power Threat hypothesis, there was a curvilinear relationship, with the most segregated cities having a higher incidence of PCH than less segregated cities. In support of the Community Violence hypothesis, the violent crime rate had a large, statistically significant positive relationship with PCH (while the overall index crime rate and the property crime rate did not), as did higher arrest rates. Police officers killed in the line of duty also had a small but significant positive relationship with PCH. In addition, the researchers examined but failed to find a relationship between PCH and the ratio of Black and Hispanic officers to the Black and Hispanic citizen population; the percentage of female officers, however, was significantly positively associated with PCH.
In the group-specific analysis of Black PCH, four predictors had statistically significant, positive relationships with PCH of Blacks: Black-White segregation, the violent crime rate, police officers killed, and percent female officers. They also saw a similar non-linear effect for Black-White separation, with more PCH in areas of greater separation. For Hispanics, percentage Hispanic, Hispanic-White separation, and the Southwest region all had statistically significant positive effects on PCH. In accordance with the Power Threat theory, however, the relationship between Hispanic population and the incidence of PCH was positive only until Hispanics reached about 60% of the population, at which point it reversed, with PCH decreasing as the Hispanic population increased. They found no non-linear relationship between Hispanic separation and PCH.
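A curvilinear effect like this is typically modeled by entering both the percentage and its square in the regression; the effect then reverses at the vertex of the fitted parabola, x = -b1 / (2 * b2). A sketch with hypothetical coefficients (not the paper's estimates) chosen so the turning point lands near the reported 60%:

```python
def turning_point(b1, b2):
    """For a fitted curvilinear effect y = b1*x + b2*x**2 with b2 < 0,
    the relationship reverses direction at x = -b1 / (2 * b2)."""
    return -b1 / (2 * b2)

# Hypothetical coefficients for illustration only: the effect of percent
# Hispanic rises, peaks at 60%, then declines.
print(turning_point(120, -1))  # 60.0
```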
They discuss how they found support for both the Community Violence and Place hypotheses, and some support for all three hypotheses in the group-specific analyses, noting that their study highlights the importance of using both structural and event-based data and both variable- and group-specific analyses. They also note that future research could examine officer race in relation to PCH, as well as more detailed city- and neighborhood-level analysis of PCH.
Holmes, M. D., Painter, M. A., & Smith, B. W.
(2019). Race, place, and police-caused homicide in US municipalities. Justice
Quarterly, 36(5), 751-786.
Holmes, Painter, and Smith used variables like population and arrest rate to examine the disparity in minority PCH, but Tregle, Nix, and Alpert remind us that disparity does not equal bias, and they caution against using imperfect variables like these in examining officer-involved shootings (OIS).
Disparity Does Not Mean Bias: Making Sense
of Observed Racial Disparities in Fatal Officer-Involved Shootings with
Multiple Benchmarks
Tregle, Nix & Alpert, Journal of Crime
and Justice, 2019
Following well-publicized officer-involved shooting incidents starting in 2014, officer-involved shootings (OIS) began to be viewed not as isolated incidents but as a national problem involving bias on the part of the police in their interactions with minorities. However, recent agency-level studies show that Blacks are not more likely to be shot by the police than Whites. Unfortunately, the government has failed to adequately compile data on OIS that would allow this issue to be examined on a larger scale. In 2015, however, the Washington Post started compiling data on fatal OIS, indicating that officers shoot and kill just under 1,000 people a year, of whom 25% are Black and 48% are White. While UCR data showed that Blacks made up approximately 37% of violent crime arrests, the Washington Post data revealed that in 2015 more than 80% of fatal OIS involved a suspect with a weapon (with the UCR showing Blacks accounting for 40-44% of weapon possession arrests).
However, the authors note this data cannot show whether Blacks are more likely to be shot by the police than Whites. Simply because Blacks are over-represented in fatal shootings relative to their share of the general population does not mean there is bias toward Blacks by the police. The authors explain that using population as a benchmark in this way is flawed because, as in medical disease models, not all members of a population face the same risk of disease, and likewise not all members of a population face the same risk of coming into contact with the police. For example, examining racial disparity in traffic stops based on racial population shares is inappropriate without determining what portion of each population is actually driving and thus at risk of being stopped. Another issue is that within that driving population, some groups (young people, low-income citizens) might be more likely to be pulled over because of their driving behavior or vehicle condition.
The authors examined population data, police-citizen interaction data (from the Bureau of Justice Statistics' Police-Public Contact Survey (PPCS), a supplement to the National Crime Victimization Survey carried out triennially), and UCR arrest data from 2015-2017 to report the odds of Black citizens being shot relative to White citizens. They note that many studies examining OIS have shown Blacks were less likely to be shot or killed by the police than Whites, while some studies demonstrated the opposite; comparing these studies is difficult because they use different benchmarks. To examine whether there were any racial disparities in OIS, the authors utilized seven benchmarks: population; police-citizen interactions (police-initiated contacts, traffic stops, and street stops); and arrests (total arrests, violent crime arrests, and weapon offense arrests).
Analyzing the odds ratios of Blacks and Whites shot against the benchmarks, the authors first note that fatal OIS are a rare occurrence. For example, although police fatally shot 259 Black citizens in 2015, they did not use lethal force in 140,543 arrests of Black citizens for violent crimes. Similarly, while police fatally shot 497 White citizens in 2015, they did not fatally shoot suspects during 63,967 arrests of White citizens for weapons offenses. They also note that population is a flawed benchmark: while it indicates that Blacks are over 3.5 times more likely to be shot by the police than Whites, the majority of either population is not exposed to the risk of being fatally shot by the police. Other benchmarks provide mixed and varying results. For clarification, note that odds ratios over 1 indicate Blacks were more likely than Whites to be shot, odds ratios less than 1 indicate Blacks were less likely than Whites to be shot, and the horizontal line represents the confidence interval (the high likelihood that the true value lies within that range). (See Table 1.)
Table 1. Black Citizen Odds Ratios of Fatal Officer
Involved Shootings Benchmarks
The authors note that the popular perception that Blacks are disproportionately shot by the police is based on the flawed benchmark of population, which does not consider the races' different exposure rates to the police. They suggest that arrest rates are a more appropriate measure, since arrests represent the subset of the population that had interactions with the police that could turn deadly, working under three assumptions: (1) OIS occur in response to perceived imminently dangerous citizen behaviors; (2) criminal behavior is a reasonable proxy for imminently dangerous behavior; and (3) arrests are a reasonable proxy for criminal behavior. Based on total arrests, Blacks were 1.23 to 1.37 times more likely to be fatally shot than Whites over that three-year period, but when examining arrests that pose a greater threat to officers, such as arrests for weapons offenses or violent crimes, Blacks were slightly less likely to be fatally shot than Whites.

However, the authors also note that UCR data is not a complete accounting of all police departments, with small departments underrepresented, and that arrests are only a subset of the police-citizen interactions that could escalate into lethal force incidents, which also include traffic stops, domestics, and calls involving the mentally ill or suspicious persons. The authors state that a better benchmark might be police-citizen interactions, though the National Crime Victimization Survey also has limitations regarding who is sampled, and, in regard to the risk of being shot, a vast number of police-citizen encounters do not require any level of force, let alone lethal force.
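The benchmarking logic reduces to a rate ratio: each group's count of fatal shootings divided by its benchmark count, with the Black rate then divided by the White rate. A minimal sketch, using made-up counts rather than the paper's figures:

```python
def benchmark_ratio(black_shot, black_benchmark, white_shot, white_benchmark):
    """Ratio of fatal-shooting rates per benchmark unit, Black vs. White.
    > 1: Blacks more likely to be shot relative to the benchmark;
    < 1: less likely."""
    return (black_shot / black_benchmark) / (white_shot / white_benchmark)

# Hypothetical counts for illustration: 250 fatal shootings against a
# benchmark of 2 million for one group, 500 against 8 million for the other.
print(benchmark_ratio(250, 2_000_000, 500, 8_000_000))  # 2.0
```

The same shooting counts produce very different ratios depending on which benchmark (population, stops, arrests) supplies the denominators, which is exactly the authors' point.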
An even better benchmark would be scenarios where officers drew their weapons but did not shoot; comparing shoot versus no-shoot decisions would exclude interactions in which it is improbable that citizens would be shot. However, this benchmark may be more appropriate at a city or agency level, as reporting standards for drawing a firearm vary widely and it may be difficult to compile national data. The authors also note that the Washington Post database does not include non-fatal OIS. Data from larger cities show that non-fatal OIS range from 20-45% of all OIS, and fatality may depend on other factors like the immediacy of medical care. They also note that individual circumstances are not accounted for, including a suspect's level of resistance and threatening behavior, which prompt the use of force and its level, and which may explain some of the racial disparity. Another noteworthy limitation of the study is the inability to benchmark fatal shootings of citizens who posed no imminent threat (i.e., unarmed and not aggressing).
In that case, the research question would be whether Black citizens who pose no imminent threat are more likely to be fatally shot by police than White citizens who pose no imminent threat, given each group's exposure to police contact; answering it would require benchmarks indicating how often officers interact with unarmed and non-aggressing citizens of each racial group. The authors conclude that the federal government should be compiling data on all OIS to better understand and analyze the conditions under which they occur, and that while databases like the Washington Post's can provide valuable information, the benchmarks used to analyze OIS carry assumptions and limitations that must be acknowledged.
Tregle, B., Nix, J., & Alpert, G. P. (2019).
Disparity does not mean bias: Making sense of observed racial disparities in
fatal officer-involved shootings with multiple benchmarks. Journal of Crime and Justice, 42(1), 18-31.
While it is apparent that appropriate benchmarks must be used to examine any racial disparities in officer-involved shootings, we also know that not all OIS are appropriate and that police do make errors in the application of force. Taylor examined OIS and constructed a typology of police shooting errors, with suggestions on how those errors may be addressed.
Beyond False Positives: A Typology of
Police Shooting Errors
Taylor, Criminology and Public Policy,
2019
Taylor quotes Daniel Kahneman: "There are distinctive patterns in the errors people make. Systematic errors are known as biases, and they recur predictably in particular circumstances. ... The availability of diagnostic labels for [these] biases makes [them] easier to anticipate, recognize, and understand." Taylor explains that behavior tends to be systematically connected to the features of people's tools, tasks, previous experiences, training, and environments, and that research on human error has consistently demonstrated that the situations, behaviors, and decision processes that produce an error tend to produce repeated errors across time and people. This study of errors can be applied to criminal justice research and, more specifically, to police use of deadly force, where a typology of police shooting errors can be constructed.
An error should be defined as a sequence of thoughts or behaviors that, absent any chance outside influence, does not lead to its intended outcome. An officer intentionally shooting an unarmed man is not an error; it may be a violation, but it is not an error, because the intent matched the outcome. Systematic errors occur when people rely on pattern recognition, developed from repeated exposure to similar patterns and experiences, and on automaticity, the development of implicit cognitive shortcuts that speed up decision making with a high degree of reliability but can also lead to errors.
In the context of police shootings, errors are typically viewed as either a false positive, where a person the officer presumes to be dangerous is in fact not dangerous but is shot, or a false negative, where a police officer or citizen is killed because an officer fails to shoot a dangerous individual. However, Taylor believes this simple typology can be expanded to cover a wider variety of scenarios, including misses of the intended target and hits on unintended targets such as citizens and other officers.
Table 1. A New Typology of Police Shooting Errors

                                       TARGET HIT
                              Intended                 Unintended
FIREARM        Intended       Misdiagnosis Errors      Misses
DISCHARGED     Unintended     Misapplication Errors    Unintentional Discharge
Taylor describes misdiagnosis errors, similar to false-positive errors, as those in which the officer intended to fire and hit the intended target, but the outcome was unintended: a non-dangerous person was shot. These are sometimes referred to as cell-phone shootings, mistake-of-fact shootings, or perception-only shootings. Statistics from Los Angeles and Philadelphia show that between 2013 and 2017, 14% and 10% of police shootings, respectively, involved this type of error. Taylor suggests that, while more research is needed, these errors may stem from pattern recognition. The classic and current police literature notes that through experience officers become attuned to cues of danger and impropriety, and these frequently experienced cues prompt the recognition of, and priming for, a dangerous situation. This leads to decision-making shortcuts that prompt officers to go on alert, draw their gun, and fire. However, these shortcuts can lead to errors when the officer has been primed for a dangerous scenario (such as by a dispatch call about a man with a gun) and attends to the wrong information, or ignores or misinterprets the right information.
Misapplication errors involve an unintended firing of the firearm but a hit on the intended target. These are referred to in the literature as weapon confusion or Taser confusion shootings, where the officer intended to Taser a person but instead accidentally drew his firearm and shot. This type of error is well documented in the medical and aviation fields, where a switch to a new tool (like a Taser) or procedure, combined with preoccupation or distraction, causes the misapplication and the unintended outcome. In these cases, training merely to proficiency may be insufficient, as newly learned skills tend to be the first to disappear under pressure, replaced by those practiced for a longer period. Taylor notes the typical difference in training time with firearms compared to Tasers, and while it requires more research, this may be a factor in this error.
Misses are errors in which the officer intends to fire his firearm but does not hit the intended target, either missing completely or hitting an unintended target. Much of the research on police shooting accuracy indicates a low hit rate, typically less than 50%, which despite changes in training methods has not improved over the past 50 years. Between 2013 and 2017, Philadelphia officer hit rates averaged 18%, while over the same period LA officer hit rates averaged 27%, varying between 18% and 42%. This means the error is a much more common outcome than the correct one; Taylor notes there is no comparable type of error in other fields and suggests much more research be conducted to determine and address the causes of this type of error.
Unintended discharges are errors that occur when an officer did not intend to fire his weapon and had no intention of hitting a target, but the round in fact struck a target. They are typically referred to as accidental or negligent discharges. Between 2013 and 2017, 17% of reported LA shooting incidents involved this type of error, while between 2006 and 2016 the NYPD reported that 19% of its shooting incidents were unintended discharges. Research indicates that unconscious touching of the trigger may be common, and when combined with some exertion, a co-muscle activation response can exert enough pressure to discharge the weapon. A high number of accidental discharges occurred during routine weapons handling (storing, cleaning, loading, unloading). Automaticity, where officers have done a task so many times it becomes automatic, allows them to shift attentional focus, and with that loss of focus on the task at hand, an unintended discharge can occur.
Taylor concludes that simply lumping all police shooting errors into one large sample and looking for causal correlations is misguided, as the causal mechanics vary between the types of errors; but neither is it appropriate to look at each case as an isolated incident, as causal connections to similar shooting incidents might then be missed. Utilizing this typology will more accurately discriminate between the different types of shooting errors and improve research on police shootings, and, based on the type of error, appropriate means can be employed to reduce those errors through policy, training, or practice.
Taylor, P. L. (2019). Beyond false positives: A
typology of police shooting errors. Criminology & Public Policy, 18(4),
807-822.
Eliminating errors in the use of lethal force is just one way of improving police performance, which can foster and build police legitimacy with the public. James, James, Davis, and Dotson suggest that rather than looking at outcomes to study police-citizen contacts, a more in-depth analysis of police performance, one that examines officer behavior while accounting for influencing factors, can not only enhance our understanding of officer decision making and behavior but also improve police performance in contacts with citizens.
Using Interval-Level Metrics to
Investigate Situational-, Suspect-, and Officer-Level Predictors of Police
Performance During Encounters with the Public
James, James, Davis and Dotson, Police
Quarterly, 2019
The authors look at factors that may influence how police officers behave during encounters with the public, noting that previous research has examined whether suspect race influences officer-involved shootings, whether officers use greater force depending on suspect demeanor, or whether neighborhoods predict police-citizen outcomes. However, this research typically focuses on the outcome of the encounter, not the performance of the officer during it. For example, an officer may exhibit fairness and do everything right but still generate a citizen complaint, while another officer may do everything wrong and get away with it if the citizen does not bother to file a complaint. The authors examined 667 incident reports covering a wide variety of encounters from a large urban department (1,500 sworn officers) to assess situational-, suspect-, and officer-level predictors of how officers perform in their interactions with the public. Utilizing a recently established and rigorously developed police encounter performance metric, the authors used interval-level metrics to score officer performance across three types of encounters: Use of Force, Tactical Social Interaction (officer performance in routine citizen encounters), and Crisis Intervention (officer performance in crisis encounters or encounters with people with mental illness).
Within these three metrics are a wide variety of performance measurements. For example, under Use of Force there are 48 performance variables within the categories of Preplan (expecting to be involved in a deadly force situation, waiting for backup), Observe/Assess (correctly identifying threats, identifying pre-assault indicators, being aware of what is going on in the periphery, selecting reasonable force options), Officer Behavior (paying attention to details, drawing the weapon, using communication skills to defuse, using an appropriate level of assertiveness), Tactics (having necessary equipment, prioritizing citizen safety, prioritizing other officers' safety, using cover, effectively engaging multiple opponents), and Adapt (correctly responding to a threat, recognizing the need to transition to a different force option, using or compensating for environmental conditions). Tactical Social Interaction and Crisis Intervention also utilized extensive performance variables under similar categories, including Resources, Interaction, and Closing the Encounter.
Each of these variables carried a score indicating that behavior's impact on performance. The incident reports were then analyzed and coded according to whether the officer took the action or could have taken the action but did not. Not all performance metrics were suitable for every encounter; unsuitable ones were excluded from the scoring and analysis. An officer's performance score is expressed as a percentage: the proportion of all behaviors that were possible in the encounter, as measured by the metrics, that the officer actually performed.

In addition, the authors coded situational (nighttime, children present, cultural or language barrier, more than one civilian present), suspect (age, sex, race, non-compliant, armed, hostile, homeless, emotionally disturbed, substance-impaired, self-harming behavior), and officer (sex) level variables and analyzed their effect on officer performance.
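Setting aside the behavior-specific weights for simplicity, the scoring reduces to the share of applicable behaviors the officer actually performed. A sketch with hypothetical behavior labels (the real metric weights each behavior rather than counting them equally):

```python
def performance_score(taken, applicable):
    """Score an encounter as the percentage of applicable scored behaviors
    the officer performed. Behaviors not applicable to the encounter type
    are simply absent from `applicable`, so they do not affect the score."""
    return 100 * len(taken & applicable) / len(applicable)

# Hypothetical behavior labels for a single encounter:
applicable = {"waited_for_backup", "identified_threats", "used_cover",
              "explained_actions"}
taken = {"waited_for_backup", "identified_threats", "used_cover"}
print(performance_score(taken, applicable))  # 75.0
```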
Overall, across all incidents the average performance score was 80.5%. Officers scored highest in crisis encounters (83.6%), aggravated assaults (83.4%), and domestic violence incidents (82.4%), but scored lower in traffic collisions (74.8%), harassment calls (76.9%), and investigations of suspicious circumstances (76.7%). See Table 1 below for average officer performance scores with error bars at a 95% confidence interval.
Table 1. Citizen Interaction Specific Police Officer
Performance Scores
To investigate this average 20% performance deficit, the authors examined specific categories and found officers scored highly in Observe/Assess (96%) and Closing (93.6%) but were less proficient with Preplanning (80.5%), Adapting Tactics (83.8%), and use of Tactics (84.4%). They also note officers performed far better in crisis encounters (94.5%) than in routine (non-crisis) police-citizen interactions (76.9%).
When the authors examined situational factors, they found similar performance irrespective of night or day, the presence of children, or the presence of cultural barriers, with slightly better performance in the presence of language barriers (84.2%) than without (81.8%) and statistically significantly better performance with more than one civilian present (81.5%) than with only one (78.6%).

In analyzing suspect factors, performance was very similar across teens, young adults, and older adults, with slightly higher scores with men than with women (84.7% vs. 82.1%). Officers also performed slightly better (scores in the mid-80s) with substance-impaired citizens, the homeless, self-harming individuals, hostile citizens, and armed suspects than with their opposite counterparts. Officers had significantly better performance scores when dealing with emotionally disturbed individuals (84.8%), non-compliant citizens (86.3%), and Blacks (85.8%) compared to Whites (83.2%) or Hispanics (83.8%). While officer gender was the only officer-related factor that could be analyzed from the incident reports, there was no statistical difference in performance scores by gender.
The authors suggest that the results indicate that
officers perform better in crisis or “high stakes” scenarios, as evidenced
by their higher performance in crisis incidents like domestics and aggravated
assault. This may occur as officers are calling upon tasks that they excel at
like vigilant situational assessment, the use of tactics, and adapting those
tactics, with officers scoring high in Observe/assess. The large difference between crisis and routine encounters suggests that although officers performed very well on items like clearly explaining actions, showing empathy, and demonstrating concern for the citizen, they perhaps felt a greater need to demonstrate these behaviors in crisis situations than in routine encounters. The finding that officers performed better with Blacks than non-Blacks may be difficult to interpret. The largest differences between Blacks and non-Blacks were in the Observe/assess category, 99% compared to 95%.
It could be suggested that officers have a heightened awareness because of
implicit bias, unconsciously associating Blacks with weapons or danger, in line
with the Minority Threat hypothesis. Alternatively, officers may be paying more attention in encounters with Blacks out of a desire to perform well in these encounters and avoid being labeled as biased, with the authors noting that the department had received implicit bias training in the past year. Officers’ better performance with emotionally disturbed and non-compliant individuals would logically reflect the use of humanizing and de-escalation techniques in those situations, but across the full range of performance behaviors the indication is that officers try harder in situations they perceive as more challenging.
The study’s implications suggest that performance metrics are a better way to assess officer behavior than simply analyzing outcomes, such as whether force was used or whether citizen complaints were filed, as outcomes alone may provide a distorted picture of actual officer performance. The authors also urge the use of body-worn cameras to aid in the assessment of officer performance. They recognize that outcomes speak to fair enforcement and to building the public trust that enhances police legitimacy, but rather than serving as the sole measure of police encounters, outcomes can be analyzed alongside performance to determine how probabilistic outcomes like use of force or arrest are, and how much they are dictated by good or bad officer performance. Beyond assessing the effectiveness of training such as Crisis Intervention Training, performance metrics can support training officers to incorporate de-escalation techniques into a broader range of scenarios where escalation is likely, including routine citizen encounters, where techniques like empathizing, reducing the police-citizen power differential, and being respectful may foster the perception of police legitimacy as well as reduce the 20% officer performance deficit.
James, L., James, S., Davis, R., & Dotson, E.
(2019). Using Interval-Level Metrics to Investigate Situational-, Suspect-, and
Officer-Level Predictors of Police Performance During Encounters With the
Public. Police Quarterly, 22(4), 452-480.
Under the Microscope: Legal Challenges to Fingerprints
and DNA as Methods of Forensic Identification
Wise, International Review of Law, Computers, &
Technology, 2004
Wise discusses the advent
of both fingerprint and DNA technology, and addresses the legal challenges
they’ve faced as well as how the determination of legal validity will affect
emerging biometric identification methods. He notes that Galton, a 19th-century scientist, proclaimed that fingerprints were unique to each individual and permanent, and developed a system to identify the unique characteristics of a fingerprint (called Galton points). Sir Edward Henry, a contemporary of Galton, took an interest in fingerprints and, in consultation with Galton, developed the Henry Classification System to catalog fingerprint data, thus ushering in the modern era of fingerprint science. The classification system is still used today and has enjoyed worldwide acceptance and use.
Challenges to the
admissibility of latent prints are based on established standards of evidence
admissibility. The 1923 case, Frye vs the United States, originally set a standard for expert testimony (in that case, concerning the admissibility of lie detector results): experts should only testify if their testimony is based on “general acceptance” in the scientific community. This standard was widely utilized
until the 1993 case, Daubert vs Merrell Dow Pharmaceuticals. This case set out new, comprehensive criteria, which included whether the scientific theory has been tested, whether it has been subject to peer review and publication, whether it has a known error rate, whether it has widespread acceptance, and whether there are operating standards. It was from the establishment of the
Daubert standard that questions about the admissibility of latent prints arose.
In 1999, United States vs Byron Mitchell became the first case to challenge the admissibility of fingerprint evidence, and while the judge dismissed the challenge, the issue came up again in 2002 in United States vs Carlos Ivan Llera Plaza. The judge ruled that while the analyst could testify to some components of the analysis, such as the methodology and the number of matching points between the latent print evidence and the defendant’s print, he would not be allowed to testify as to whether the evidence matched the print of the defendant. The judge ruled that latent print analysis did not meet the Daubert standard because of a lack of a known error rate and differing standards on how many Galton points signify a match. However, the judge later reversed his ruling after reviewing US and UK data, concluding that there was a sufficiently established standard of analysis in the scientific community to satisfy the Daubert standard. Since the Mitchell case there have been 40 challenges to the admissibility of latent print evidence, but in all of those cases, fingerprint evidence was allowed.
In 1984, Dr. Alec Jeffreys, a researcher studying gene structures, determined that DNA sequences are unique to individuals, and two years later he was helping law enforcement utilize DNA to identify a serial killer in the UK, an effort that both exonerated an innocent man and matched a suspect’s DNA to the evidence. Jeffreys’ RFLP method has since been refined into an STR technique in which only a small amount of DNA is required for analysis, and mitochondrial DNA (mtDNA) testing has been developed using a different methodology that works especially well on degraded DNA evidence and in cases requiring the identification of family members.
The first challenge to RFLP DNA evidence occurred in the late 1980s in the case of United States vs Bonds.
The District Court ruled that the DNA evidence was admissible based on the “general acceptance” standard of Frye; the US Court of Appeals then ruled in 1993 that the DNA evidence was admissible under the Daubert standard, despite the laboratory not conducting external blind proficiency tests or referencing a known error rate. The court determined that if the scientific community was accepting of the technology, it must then be accepting of the error rate as well. RFLP DNA
analysis was also challenged in the New York case of People vs Castro. In this case, the court used a three-prong Frye test to determine whether the theory, and its techniques and experiments, could produce reliable results that were generally accepted by the scientific community, and whether the laboratory performed the accepted scientific techniques in analyzing the sample in the particular case. The court ruled the first two prongs were met, in that the science was sound, but that the laboratory failed to meet the accepted scientific testing standards.
STR DNA analysis was also challenged in many courts. For example, in State vs Traylor, although Traylor argued that the validity of commercial DNA tests is unknown because of the proprietary reagents used in the analysis, the Minnesota Supreme Court ruled in 2003 that the DNA Advisory Board and the guidelines it established with the FBI met the admissibility criteria and validated the science. While challenges to mtDNA are relatively new to the courts, recent challenges have established that mtDNA analysis constitutes “scientific knowledge based on reliable methods and principles”.
Newer DNA analysis technology,
like Low Copy Number (LCN), while accepted, may face challenges as well. LCN
DNA techniques can utilize very small samples of DNA and produce 17 billion
copies to allow for analysis. However, the concern is that as the original sample gets smaller in this process, any contamination in the sample will have a larger effect on the results of the analysis. Other concerns are the transfer of LCN DNA through casual contact and the lack of scientific evidence as to how long this casual-contact DNA can persist. While “shedder indexes” are being investigated to determine the rate at which an individual sheds potential DNA material (e.g., skin cells, hair, sweat), researchers have reported that transferred DNA can be detected on an object for nearly three months in some cases, and in one case for two years. Even more importantly, with such a small LCN DNA sample, there typically is no material left over after testing for an outside party to run its own analysis and produce independent data. The FBI, with the exception of limited application in human remains identification, remains skeptical of LCN DNA. Its official position is that any profiles obtained from LCN DNA should not be entered into the Combined DNA Index System (CODIS) database of offenders and suspects. The FBI has also cautioned against a rush to re-examine old cases in the hope that LCN DNA would offer better analysis or change a verdict, mainly because of the risk of evidence contamination from repeated handling.
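The 17 billion figure is consistent with ideal PCR amplification, in which each thermal cycle roughly doubles every template strand present; 34 ideal doubling cycles of a single template already exceeds 17 billion copies. A quick illustrative calculation (real reactions only approximate perfect doubling):

```python
def pcr_copies(start_copies, cycles):
    """Ideal PCR yield: each cycle doubles every template present."""
    return start_copies * 2 ** cycles

# 2**34 = 17,179,869,184, i.e. roughly 17 billion copies from one template
yield_34 = pcr_copies(1, 34)
```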
While fingerprints, and now
DNA, are classic biometric measures, new measures of identification are being
developed like ear prints, facial and voice recognition, iris, retina, and vein
patterns, and hand geometry. The newer technologies will likely face challenges
as they are introduced into the courtroom. Part of what drove the acceptance of
latent print admissibility is that the scientific standard had been developed
over 100 years of use. Newer technologies will need to demonstrate that they
can meet the Frye or Daubert standards, and this also puts a burden on the judiciary,
highlighting the situation created by Daubert, which requires judges,
who are often not trained in the sciences, to act as gatekeepers of evidence
admissibility. Even if these new technologies become “generally
accepted”, criminal defense attorneys can still criticize the application
of the method by the individual laboratory, and if the laboratory demonstrates
they meet the scientific standards, the training or performance of the
individual analyst can still be called into question.
Wise, J. (2004). Under the
microscope: Legal challenges to fingerprints and DNA as methods of forensic
identification. International Review of Law, Computers & Technology,
18(3), 425-434.
Recognizing the challenges faced in determining admissibility
is important for defense attorneys as well as prosecutors, criminalists, and detectives.
Being able to present solid, scientifically based identifying evidence is crucial
for prosecutions, as is producing accurate evidence data from other sources at
the crime scene. Knock and Davison developed a methodology that they believe will produce more detailed, accurate information on the source of blood stain evidence.
Predicting
the Position of the Source of Blood Stains for Angled Impacts
Knock
& Davison, Journal of Forensic Sciences, 2007
The authors note that in the field, determining the source of blood spatter evidence typically involves the “stringing” method. As the authors explain, “This technique uses the fact that the width to length ratio of a blood stain is approximately related to its impact angle. Using the calculated angle of impact, a straight line is drawn back from the stain along the line of the impact angle. Where the lines from several stains intersect is assumed to be the source of the stain.” They also note, though, that this determination does not take into account the effect of gravity on the flight path of blood droplets. Knock and Davison experimented by dropping blood droplets of varying sizes from different heights and angles onto a hard surface. From this
data, they produced one equation relating stain size to drop size and velocity
for all impact angles, and a second equation, relating the number of spines
(blood fingers extending from the center of the drop caused by impact) to drop
size, velocity, and surface slope for all impact angles. The authors
demonstrated that by combining these two equations, impact velocity can be accurately calculated, and thus the true position of the blood stain’s source determined.
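The width-to-length relation the authors refer to is, in conventional bloodstain pattern analysis, approximately sin(impact angle) = width / length. A minimal sketch of the classic stringing-method angle estimate follows; it ignores gravity, which is exactly the limitation Knock and Davison’s two-equation method addresses, and their correction is not reproduced here:

```python
import math

def impact_angle_deg(width_mm, length_mm):
    """Classic stringing-method estimate: sin(alpha) = width / length."""
    ratio = width_mm / length_mm
    if not 0 < ratio <= 1:
        raise ValueError("width must be positive and no greater than length")
    return math.degrees(math.asin(ratio))

# A circular stain (width == length) implies a perpendicular, 90-degree impact
angle = impact_angle_deg(4.0, 4.0)
```

A more elongated stain (say 2 mm wide by 4 mm long) yields a shallower 30-degree impact angle, which is then strung back to estimate the source position.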
Knock, C., & Davison, M. (2007). Predicting the
position of the source of blood stains for angled impacts. Journal of Forensic Sciences, 52(5), 1044-1049.
Revising and refining scientific methodology can
improve the forensic investigative ability of detectives and criminalists but
it’s also necessary to re-evaluate perceptions we hold about certain criminal activity and its perpetrators. Ferguson et al. remind us that conventional wisdom is not always correct and that the analysis of data is necessary for the proper assessment of contributing and causal behavior in determining who might be at risk for perpetrating a school shooting.
Psychological Profiles of School Shooters:
Positive Directions and One Big Wrong Turn
Ferguson, Coulson, & Barnett, Journal of Police Crisis Negotiations, 2011
The authors contend that the stereotype presented by the media and the typologies produced by both the American Psychiatric Association and the FBI are simply inaccurate, or too broad and vague, to be of use. While conceding that firsthand data from shooters, who often die in the incident, are hard to come by, they argue a more evidence-based typology can be developed.
School shooters are typically portrayed as loners, involved in the Goth subculture or other out-groups, who enjoy violent video games, were bullied, and had disruptive or negligent home lives. When school shootings increased in the ’90s, the public, academics, politicians, and activists demanded answers, despite the incidents’ relative rarity and the overall decline in youth violence.
In 1999, the FBI provided a threat assessment profile
for school shooters cautioning against its use other than in assessing the
credibility of a threat already made by an individual. Some criteria seem reasonable, like “injustice collector, dehumanizes others, and lacks empathy,” while others are vague in definition, like an “unreasonable interest in sensational violence,” or overly broad, like “a failed love relationship, a sense of superiority, exaggerated need for attention, externalizes blame, closed social group, a fascination with violence filled entertainment.”
As of 2010 the APA maintained a warning signs list for
serious youth violence including obvious signs like “enjoying hurting animals, detailed
plans to commit acts of violence, and announcing threats or plans for hurting
others,” but, like the FBI’s threat assessment, others are vague in definition, like “frequent physical fighting,” while still others could apply to great numbers of mentally well juveniles, like “feeling rejected or alone, poor school performance, and access to or a fascination with weapons, especially firearms.” The list also includes ideas that have been discredited by research, like “violence is a learned behavior,” and links violent media, like video games, with violent behavior.
These attempts at recognizing school shooters will result in over-identification and misidentification, and while empirical evidence on the characteristics of school shooters is scant, a 2002 report from the Secret Service and Dept. of Education does provide a more data-derived (albeit descriptive) picture of school shooters. The report analyzed 37 attacks involving 41 perpetrators from 1974 to 2000 and utilized school and court records, mental health and legal documents, as well as interviews with the ten surviving perpetrators. The report made clear that, given the wide differences among perpetrators, no profile for school shooters existed, though there were some features that tended to be more widespread. The report also demonstrated that the stereotypical school shooter image is inaccurate. The SS/DOE report found that interest in video games was relatively low: only 15% expressed “some interest in violent video games” and just 59% expressed “some interest in violent media in other forms,” with the authors noting these figures are lower than those found for non-shooter males in other studies. However, 37% were exposed to media violence through their own poems, essays, and journals.
Social isolation was not found to be common among school shooters. Most had friends: 41% belonged to mainstream social groups (27% were part of fringe groups but also had friends). In categories that were not mutually exclusive, only 12% had no friends and 34% were described as loners. School and family background also did not figure prominently in school shooters’ behaviors. However, mental health issues were a factor: 98% of perpetrators experienced some kind of major loss right before the incident, 78% had a history of suicide attempts or ideation, 71% perceived themselves as wronged, bullied, or persecuted by others, and 61% had a documented history of significant depression. Yet very few of the perpetrators had received any mental health care in the past, suggesting a failure of our mental health system that has contributed to these incidents. Two prominent warning signs noted in the SS/DOE report were that 81% of perpetrators warned an uninvolved person prior to the attack and that 93% had already engaged in behaviors that alarmed peers, teachers, parents, or mental health professionals.
These factors have been identified in research on
adult perpetrators of mass homicide as well as figuring into youth violence in
general. Ferguson noted that his forthcoming study revealed that violent media was not a factor in youth violence; however, current levels of depressive symptoms, coupled with antisocial personality traits, were highly predictive of youth violence. The authors suggest that reducing school shootings should focus on preventative measures, though reform in addressing mental health needs is long overdue and funding for adequate services for at-risk youth and adults will be slow in coming. They also note that because perpetrators are likely to signal their violent intent before acting, prevention can take the form of peers acting on what they hear or see and informing law enforcement or school officials.
Ferguson, C. J., Coulson, M., & Barnett, J.
(2011). Psychological profiles of school shooters: Positive directions and one
big wrong turn. Journal of Police Crisis Negotiations, 11(2),
141-158.
Officers are often called upon to confront resistant
and/or violent individuals and by necessity have a choice of non-lethal options
they can employ in these encounters. One technique that has caused controversy
over its application and the potential for harm to suspects is the chokehold,
both historically and as recently as last year. Below, Dr. Koiwai examines
factors that may be contributing to deaths in the application of choke holds.
Deaths Allegedly Caused by the Use of “Choke
Holds” (Shime-Waza)
Koiwai, Journal of Forensic Sciences, 1987
Author Koiwai, MD, stated that the chokehold technique used by police officers is the same chokehold (Shime-Waza) used in Judo. Yet
while officers have been involved in the deaths of suspects following
application of the hold, Koiwai notes that since the sport of Judo was
established in 1882, there have been no fatalities associated with the use of
the hold in Judo.
Koiwai briefly discusses the control techniques used by the police which are similar to Judo chokeholds, including the carotid control hold (fig. 1), the locked carotid control hold (fig. 2), and the bar arm control hold, where the left hand is placed on the back of the subject’s head, forcing it down. All of these holds are finished by taking the subject down to a seated position and applying the hold until the subject becomes unconscious or ceases resisting.
Koiwai examined the autopsy reports of 13 cases between 1975 and 1985 in which a law enforcement chokehold was applied. The
decedents were males, Black and White, between the ages of 19 and 58, though
the majority were under 40, and their weight ranged from 120-220 pounds, though
in only one case did the decedent weigh over 170 pounds. While in all but one
case the decedents were violently resisting arrest, which necessitated the use
of the chokehold, the case narratives indicate that in six of the cases, the decedents
were very violent and combative. In three of the 13 cases, acute intoxication
from alcohol or drugs was involved, two other cases involved decedents suffering
from psychosis, as well as the findings that in three other cases, pre-existing
heart conditions contributed to the death. In almost all the cases, medical attention was provided in a timely manner, though in all the cases asphyxiation was a primary factor, including, in some cases, aspiration of vomit and brain death from oxygen deprivation.
In all 13 cases, the author noted evidence of injuries
to the structures of the neck ranging from bruises, lacerations, hemorrhages,
and vascular compression, as well as fractures of the cartilage of the neck in
five cases, and of an intervertebral disc in one case. Submucosal or mucosal injuries were noted in the larynx in five cases. All these findings indicate that tremendous force was exerted on the necks of the decedents. Koiwai noted that only a relatively small amount of pressure is necessary to close off the carotid arteries, that unconsciousness should occur in only 10-20 seconds, and that consciousness should return in about the same length of time. Koiwai stated that the force applied to collapse the airway, as occurred in these cases, is six times greater than that necessary to effectively apply a chokehold, and it resulted in the injuries seen in the autopsy reports. Properly applied, the chokehold puts pressure on the superior carotid triangle, closing off the carotid artery but leaving the vertebral artery unobstructed (fig. 5). Completely obstructing the blood flow to the brain, or causing asphyxia by closure of the trachea, can lead to irreversible damage or death.
Koiwai suggests that police
department training manuals should emphasize that control holds should be used
only when necessary to stop a suspect’s resistance and not necessarily to cause
unconsciousness. If police officers are to use chokeholds to subdue violent suspects as a last resort, they should be properly trained and supervised by trained, certified judo instructors to reduce misuse or abuse of the technique, which, when applied improperly, results in fatalities. These fatalities could be reduced if (1) chokeholds are taught by trained and certified instructors; (2) officers become familiar with the anatomical structures of the neck and where the pressure is to be applied (the carotid triangle); (3) officers understand the physiology of choking, in that only a small amount of pressure is needed to cause unconsciousness; (4) officers are taught to recognize the state of unconsciousness and release the pressure immediately; (5) officers learn proper resuscitation methods for prolonged unconsciousness, prevent the aspiration of vomit, and do not place the restrained suspect face down; (6) officers keep the subject under constant observation; and (7) police training manuals are revised to emphasize the above procedures and principles, all of which will help prevent deaths from chokeholds.
Koiwai, E. K. (1987).
Deaths allegedly caused by the use of “choke holds” (shime-waza). Journal of Forensic Sciences, 32(2), 419-432.
Police officers face
challenges in recognizing when to apply force and the level of force itself. Proper
training in different techniques makes for better, more professional officers,
as well as decreased injuries and fatalities for suspects. However, having a
means to assess officer performance when they face potentially violent
encounters is crucial to understanding their behavior and decision making in
those encounters. Those observations can then be used to improve officer
performance and public safety as well. Vickers and Lewinski examine the
differences between elite and rookie police officers in their preparation for
use of a firearm in a violent confrontation.
Performing under pressure: Gaze control, decision
making and shooting performance of elite and rookie police officers
Vickers & Lewinski, Human Movement Science, 2012
The authors note that most current firearms training programs teach officers to focus their gaze on two locations: first on the sights of their gun, and second on the target, before pulling the trigger. This gaze strategy works very well in training, with rookies
achieving high accuracy scores in initial firearms training, but once on the
street and faced with a violent firearms encounter they shoot poorly, averaging
between 10 and 60% accuracy. The high pressure states that shooters face tend
to cause more visual fixations of a shorter duration and reduced ability to
detect peripheral information.
Studies of elite shooting athletes found that they tended to fixate on the target and keep that gaze as they aligned their firearm sights with it, rather than switching gaze from sights to target; this allowed a longer final fixation on the target, leading to greater accuracy and reduced pressure, anxiety, and psychological stress. The authors tested 11 elite
Emergency Response Team members and 13 rookie officers nearing the end of their
training period, with gaze tracking software, putting them in a role-playing scenario
where they are informed a threat may appear in the area they are monitoring. An
upset male enters the location and becomes increasingly agitated with an individual playing the role of a receptionist. In the final two seconds of the encounter, the male quickly pivots toward the officer, who is seven yards behind him, and draws either a gun or a cellphone. Officers were assessed on
their gaze duration, gaze location, amount of gaze shifting, speed, accuracy
and locations of shots fired, time involved in the unholstering, draw, aim, and
fire phases, and the rate that they inhibited firing in the cellphone scenario.
Following data analysis, statistically
significant differences were revealed.
                                                          Elite Officers    Rookie Officers
Hit on assailant                                              74.5%             53.9%
Decision making (fired on cellphone)                          12.3%             61.5%
Fired before assailant                                        92.5%             42.2%
High performance (meeting all above criteria)                 75.0%             52.9%
Fixated on more locations where gun could be concealed        50.3%             30.6%
Fixated on locations where gun couldn't be concealed          42.1%             51.1%
Fixated on areas off the assailant                             7.6%             18.1%
Unholstered weapon after assailant entered                 1.77 sec. avg.    6.28 sec. avg.
Statistically significant
differences were also found in the final phases of the scenario between the
onset times of the different phases based on officer status.
Onset    Elite        Rookie
Draw     4.63 sec.    6.04 sec.
Hold     4.81 sec.    6.26 sec.
Aim      5.83 sec.    6.36 sec.
Fire     6.87 sec.    6.93 sec.
There were also statistically
significant differences between elite and rookie officers in their visual
fixations during the final two seconds of the scenario.
Fixations                                                Elite              Rookie
Increased visual fixation on assailant weapon            from 18% to 71%    from 18% to 34%
Decreased fixation on non-weapon locations               from 78% to 7%     from 62% to 16%
Increased fixation on officer's own weapon to            20%                39%
Fixations off assailant                                  4%                 13%
Final fixation on officer weapon, not assailant          32%                84%
Final fixation time on assailant before firing           0.32 sec.          0.27 sec.
Final fixation time on officer weapon before firing      0.12 sec.          0.24 sec.
In reacting to the threat, rookie officers performed the final-phase actions in the last second of the scenario, whereas elite officers performed them over the last 2.5 seconds, starting the process earlier and taking more time to focus on the assailant and less time focusing on their own weapon. The elite officers’ earlier draw was also
preceded by more time spent focusing on assailant weapon locations than
rookies. Elite officers maintained more visual focus, drew sooner than rookies
in anticipation of the threat, and thus gave themselves more time in the final
aim and fire phases for increased fixation focus, which accounted for their better
hit and discrimination rates. The authors stated that their results suggest
that firearms training should change from a process that inadvertently teaches rookies
to fixate on the sights of their own weapon first and the target second, to a
type of training that establishes the line of gaze on the target from the
outset, followed by alignment of the sights of the weapon to the line of gaze.
This change in gaze control would lead to a longer final visual fixation
duration on the target prior to pulling the trigger and should contribute to
better decision making and performance. If these changes in firearms training were implemented, the gaze control of novice officers should resemble that of elite athletes and elite officers from the first day of training, which should decrease errors in decision making and improve shooting accuracy.
Vickers, J. N., &
Lewinski, W. (2012). Performing under pressure: Gaze control, decision making
and shooting performance of elite and rookie police officers. Human Movement Science, 31(1), 101-117.