Introductory
disclaimer: This
blog post is intended to be about the selective interpretation of statistics. Many of the figures under discussion are about
reported rates of violence against women, and any criticisms or suggestions
regarding research in this field are solely in reference to research methods. Nothing in this commentary is in any way
doubting the very real experiences of women facing violence and abuse, nor
placing responsibility for the correct reporting of abuse on the women
experiencing it. Violence against women
and girls (VAWG) is an extremely serious issue, which is exactly why it
deserves the most robust research methods in order to bring it to light.
Back
in February 2014, I wrote a post
in which I noted the seemingly high correlation between “national happiness”
ratings for certain countries and per-capita consumption of antidepressants in
those countries. Now I’ve found what I
think is an even better example of the limitations of ranking countries based
on some simplified metric. I’ve asked my
friend Clare Elcombe Webber, a commissioner for VAWG services, to help me here. So from this point on, we’re writing in the
plural...
A
few months ago, this
tweet from Joe Hancock (@jahoseph)
appeared in Nick’s feed. It shows, for
28 EU countries, the percentage of women who report having been a victim of
(sexual or other) violence since the age of 15. Guess which country tops this list? Yep, Denmark. Followed by Finland, Sweden, and the
Netherlands. Remember them? The countries that are up there in the top 5
or 10 of almost every happiness survey ever performed? Down near the bottom: miserable old Portugal,
ranked #22 out of 23 in happiness in the post linked to above. (The various lists of countries don’t match
exactly between this blog post and the one linked to above because there are
different membership criteria, with some reports coming from the OECD, EU, or
UN. Portugal was kept off the bottom of
the happiness list in the post about antidepressants by South Korea.)
This
warranted some more investigating, along the lines of Nick’s previous
exploration of the link between happiness and antidepressants. The original survey data page is here; click on “EU map” and use
the dropdown list to choose the numbers you want. Joe’s tweet is based on the first drop-down
option, “Physical and/or sexual violence by a partner or a non-partner since
the age of 15”. While performing the
tests that we describe later in this post, we also tried the next option, “Physical
and/or sexual violence by a partner [i.e., not a non-partner] since the age of
15”, but this didn’t greatly change the results. In what follows, unless otherwise stated, we
have used the numbers for VAWG perpetrated by both partners and non-partners.
First,
Nick took his existing dataset with 23 countries for which the OECD supplied
the antidepressant consumption numbers, and stripped it down to those 17 which
are also EU members. Then, he ran the
same Spearman correlations as before, looking for the correlations between UN
World Happiness Index ranking and (a) antidepressant consumption (Nick did
this last time, but the numbers will be slightly different with this new subset
of 17 countries), and (b) violence reported by women. Here are the results, which at first sight
are rather disturbing:
- Antidepressant consumption correlated (Spearman’s rho) .572 (p = .016) with national happiness.
- Violence against women correlated (Spearman’s rho) .831 (p < .0001) with national happiness.
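For readers who want to try this kind of rank correlation themselves, here is a minimal sketch in pure Python (no external libraries). The country values are made-up placeholders, not the actual OECD/FRA/UN figures, which are in the dataset linked at the end of the post.

```python
# Spearman's rho is just Pearson's r computed on ranks rather than raw values.
# All numbers below are illustrative placeholders, NOT the real data.

def ranks(xs):
    """Return 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    result = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            result[order[k]] = avg_rank
        i = j + 1
    return result

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Hypothetical per-country values: happiness score (out of 10) and % of
# women reporting violence since age 15.
happiness = [7.5, 7.4, 7.2, 6.9, 6.5, 6.1, 5.9, 5.4]
violence_pct = [52, 47, 46, 45, 33, 27, 31, 19]

rho = spearman(happiness, violence_pct)    # strongly positive for this toy data
```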
Let’s
repeat that: among the 17 EU countries for which the OECD supplies
antidepressant-consumption figures, the degree of
violence since age 15 reported by women is very strongly correlated with
national happiness survey outcomes. When
two measures turn out to be correlated at .831, you generally start looking for
reasons why you aren’t in fact measuring the same thing twice without knowing it.
Trying
to look for some way of mitigating these figures, Nick tried another approach,
this time with parametric statistics. He
took the percentage of women reporting being the victims of violence in all 28
EU countries, and compared it with the points score (out of 10) from the UN
Happiness Survey. Here is the least
pessimistic result obtained from the various combinations:
- Across all 28 EU countries, violence against women correlated (Pearson’s r) .497 (p = .007) with national happiness.
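The parametric version works directly on the raw scores rather than on ranks. A quick sketch, again with invented numbers rather than the real FRA/UN figures:

```python
# Pearson's r on raw (interval-scale) values, stdlib only.
# The figures below are invented for illustration, not the real data.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

happiness_score = [7.5, 7.2, 6.8, 6.4, 5.9, 5.1]   # hypothetical scores out of 10
violence_pct = [52, 40, 35, 30, 24, 19]            # hypothetical % reporting

r = pearson(happiness_score, violence_pct)         # positive for this toy data
```

Unlike Spearman, Pearson uses the actual distances between values, so it assumes the happiness points and the percentages are meaningful as interval data, not just as orderings.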
This
is still not very good news. If you’re
hoping to show that two phenomena in the social sciences are correlated, and
you find a correlation of .497, you’re generally pretty pleased.
Of
course, correlation is not the same as causation. Probably nobody would suggest that higher
levels of violence against women makes for a happier society, or that higher
levels of general societal happiness cause people to become more violent
towards women.
So
what is going on here? Maybe the methods are seriously flawed. We might have difficulty imagining why Austrian women would report rates of
interpersonal violence barely half those experienced by Luxembourgers, or that
Scandinavians are assaulting women at over twice the rate of Poles, or that the
domestic violence problem in the UK is 70% worse than in next-door Ireland.
But
perhaps there are some other factors that might help to explain these numbers. Remember, these are answers being given to an
interviewer from the EU Fundamental Rights Agency (FRA); they are not extracted
from, say, police databases of complaints filed. Thus, while we can perhaps assume that the
reports ought not to be affected too much by the perceived level of danger or
social shame involved in revealing one’s situation to the authorities (it’s
easy to imagine that people in countries with high levels of equality and
openness—Denmark, say—might feel more able to file charges about violence than
in some other countries that are perceived as being more “macho”), the degree
to which these data reflect reality will depend to a large extent on people’s degree
of willingness to admit being a victim to a stranger. While one would hope that the FRA had thought
about that and done the maximum in terms of study and questionnaire design,
training of interviewers, etc., to allow women to be frank about their
experiences, this isn’t something we were able to find definitively in their reported
methodology (available here).
There
are huge issues, which have dogged this type of research for many decades, when
it comes to asking women to disclose their experiences of abuse. The conventional wisdom amongst researchers
and service providers is that victims of abuse are extremely unlikely to reveal
their experiences to anyone, and short of the FRA interviewers spending months
building rapport with each respondent (which, obviously, they did not do) there
is little to be done to mitigate this. Here
are just some possible reasons why experiences of abuse might not have been
disclosed to researchers, and how this could impact on the results:
- The sampling method involved visiting randomly selected addresses. A common tactic used by abusive partners is to
isolate their victim, primarily as a way of stopping any disclosure or attempt
to seek support; so it is not unlikely that women currently in abusive
relationships were “not allowed” to take part in the research at all. (If we wish to make great leaps of logic here,
we could theorise that this could lead to a higher apparent incidence of VAWG in
countries with better support services, as women in those countries were more
likely to have been able to leave an abusive situation, and therefore were more
able to take part in the research. But
we don’t have data for that…)
- Many
women do not identify their experiences as violent or abusive, even when most
external observers would say that they plainly are. This may be a defence mechanism, allowing them
to avoid having to face up to the truth about their partner, the fragility of
their personal safety, or the frightening nature of the world. Admitting that they are the victims of
violence or abuse would also imply that they may have to act to change their
situation. Therefore, respondents could
simply be lying; and, even if a measure of social desirability might be able to
detect this (possibly a tall order for such a serious subject), it’s unlikely
that the interviewer would administer such a measure. Alternatively, the degree to which women deny
that their experiences are violent or abusive might have a substantial cultural
component; perhaps women in more “traditional” countries are more likely to
justify some behaviours towards them as “normal”.
- It
is not clear, from the methodological background of the report, how issues of
confidentiality were explained to respondents. We can reasonably conjecture that if a
respondent disclosed that they were currently at serious risk from someone,
the interviewer would have been ethically obliged to do something
additional with this information. Many
abusers make threats of violence or serious reprisals should their victim make
a disclosure (something borne out by the fact that the majority of serious
injuries or murders of women by men they know occur at or shortly after the
point of separation or disclosure of the abuse to a third party), and this
would significantly impact whether or not a woman would answer these questions
truthfully. In addition, perceived fear
of the authorities may discourage a woman from disclosing; in many countries,
the police and social workers often do not have a glowing reputation for
providing support, and women may feel that involving them would exacerbate
their problems, rather than help to resolve them.
- Finally,
victims who have disclosed their abuse often talk of their feelings of guilt, or
of a belief that they are to blame for the abuse. This
shame could be an additional barrier to giving a truthful answer.
We
can make some—admittedly sweeping—inferences from the fact that the data do not
tell us what we would intuitively expect. We could speculate that those countries we
might expect to be more socially “advanced” in terms of attitudes to violence
against women could have higher rates of disclosures of abuse in this research because
women in those countries feel more able to recognise and name their
experiences, or feel more confidence in the authorities being supportive, or
have greater trust in the confidentiality of the survey; and therefore are more
prepared to report having been the victims of violence. A further conjecture could be that in these
countries, women are socially “trained” that these experiences are neither
normal nor acceptable, and that victims of violence are entitled to be heard,
without being stigmatised. (However, a
skeptic might respond that, while these assumptions enable us to put a positive
spin on this slightly unusual dataset, they are still only assumptions for
which we have little evidence, and do little to address the initial observation,
namely that the countries in the EU deemed to be happiest also reported the
highest levels of violence against women.) We could add all sorts of social variables into the mix here: availability
of relationship education, social stigma towards single mothers, the perception
of the state as supportive (or not), and so on. Violence against women and girls is a melting
pot of individual, social, and cultural factors, and to date researchers have
not been able to neatly set out what it is which makes some men decide to be
abusive towards women, nor what makes some communities turn a blind eye to such
abuse or even place the blame on the women being abused. Respondents potentially have many more reasons
to conceal their experiences of violence and abuse than they might in other
research areas, and there is no straightforward way of controlling for these. (Psychologists have devised various ways of
controlling for social desirability biases, but it is not clear to us that
these take sufficient account of cross-cultural factors; see Saunders, 1991.)
However,
let’s assume for a moment that it might be valid to take the numbers in the
report not as directly reflecting the underlying problem, but as the product
of the actual prevalence and a “willingness
to acknowledge” factor. Beyond a certain
point, this would mean that a country where the problem is actually smaller
could show higher numbers in the survey. For example, let’s say that the true rate of
violence against women in Denmark is 60%, and that 87% of Danish women are
prepared to discuss their experiences of violence openly; multiply those
together, and there’s the 52% reported rate from the EU survey. Meanwhile, perhaps the true rate in Poland is
76% (note: we have no evidence for this; we are choosing Poland here only
because it is the country at the bottom end of the FRA’s list), but only 25% of
Polish women are prepared to discuss it; again, multiply those numbers together
and you get the reported rate of 19%. In
fact, this line of reasoning is commonly used by people working on the front
line of VAWG support. For example, in
one London borough, reports to the police of domestic abuse in 2014 were over 40%
higher than in 2013, and this is considered to be a good thing; it’s assumed that
the majority of domestic abuse goes unreported, and thus additional reports are
just that: additional reports, rather than additional instances. But without more data from other sources and
approaches, we just don’t (and can’t) know.
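The worked example above can be written out as a two-line model. To be clear, the prevalence and willingness figures here are the post’s own hypothetical numbers, not measured quantities:

```python
# Toy model: reported rate = true prevalence x willingness to disclose.
# Both input figures for each country are hypothetical, as stated in the text.

def reported_rate(true_prevalence, willingness):
    return true_prevalence * willingness

denmark = reported_rate(0.60, 0.87)   # 0.522, i.e. roughly the 52% FRA figure
poland = reported_rate(0.76, 0.25)    # 0.19, i.e. the 19% FRA figure

# Under this model Poland's *true* prevalence (76%) exceeds Denmark's (60%),
# yet its *reported* rate comes out far lower.
```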
Here’s
the kicker, though: if you choose to take the line that these figures “can’t
possibly be right”, and that in fact they may even show the opposite of the
real problem, that raises the question of why it’s OK to look for an
alternative explanation for the figures on violence (or other social issues,
such as, perhaps, antidepressant usage), but not for those on other phenomena,
such as (self-reported) happiness. What
gives data on happiness the kind of objective quality that legitimises all the column
inches, TV airtime of happiness gurus, and government policy initiatives to try
and boost their country’s rank from 18 to 10 in the UN World Happiness Index,
if you’re simultaneously prepared to look very hard for reasons to explain
away numbers that appear to show that your favourite “happy” country is a
hotbed of violence against women?
And,
even more importantly: whatever your position, do you have evidence for it?
You can find the dataset for this post here.
(Yes, the filename does give away how
long we have been working on this post!) It also includes all the data you need to re-examine the post about
antidepressants from February 2014.
References
Saunders, D. (1991). Procedures for adjusting self-reports of violence for social desirability bias. Journal of Interpersonal Violence, 6(3), 336–344. https://doi.org/10.1177/088626091006003006 (Full text available here.)