Most Britons really do feel the EU is bullying/behaving unfairly in the Brexit negotiations – A survey experiment to measure the effect of question-wording on Brexit-related public opinion

An interesting exchange emerged on social media last month among those interested in, and passionate about, public opinion. The debate concerned methodology, and more specifically question-wording, so naturally it caught my attention.

A consensus quickly became apparent among those who comment on public opinion. In their view, the question-wording of the poll must have been designed to suit the editorial line of the commissioning client. Although it’s easy to see how the way a question is phrased might affect the outcome, hard evidence that it did so in this case was in short supply. Anecdote and intuition settled the matter for those who felt the need to comment. However, in my sometimes-bitter experience, basing your position on intuition alone is a fool’s game, especially without robust, independent evidence.

So, we should probably ask ourselves: what does the evidence say about the actual impact question wording could have on a seemingly typical Brexit-related opinion poll?

Rather than guess, I thought I’d better find out.

A little about the method…

To assess the magnitude of any question-wording effect, I set up a randomised controlled trial (RCT) on a nationally representative BMG online poll. Three questions were tested: the originally worded question versus two alternative question designs. A representative sample of 2,587 GB residents was polled, with one of the three question designs displayed at random to each respondent. Roughly 850 responses were collected for each design, and response options were randomly ‘flipped’ for all three question types to ensure that any option-order effect was eliminated. The results for all three designs were weighted to the same nationally representative targets to eliminate the possibility of demographic effects.
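For readers who want to see the mechanics, below is a minimal sketch of this kind of split-sample assignment in Python. It is an illustration under stated assumptions rather than BMG’s production fieldwork code: the wordings are quoted from this post, while the function and variable names are hypothetical.

```python
import random

# The three question wordings under test (quoted from this post).
DESIGNS = {
    "original": "do you agree the EU is trying to bully the UK in the Brexit negotiations",
    "alternative_1": ("to what extent do you agree, or disagree, with the following statement: "
                      "'the EU is trying to bully the UK in the Brexit negotiations'"),
    "alternative_2": ("to what extent do you agree or disagree with the following statement: "
                      "'The EU is acting fairly in its negotiations with the UK in relation to Brexit'"),
}

SCALE = ["Strongly agree", "Somewhat agree", "Neither agree nor disagree",
         "Somewhat disagree", "Strongly disagree"]

def assign_question(respondent_id: int) -> dict:
    """Randomly allocate one of the three designs to a respondent, and
    randomly reverse the response scale so that any option-order effect
    cancels out across the sample ('Don't know' always stays last)."""
    design = random.choice(list(DESIGNS))
    options = list(SCALE)
    if random.random() < 0.5:
        options.reverse()
    return {"respondent": respondent_id,
            "design": design,
            "question": DESIGNS[design],
            "options": options + ["Don't know"]}

# Allocating a sample of 2,587 respondents gives roughly 850-870 per design.
allocation = [assign_question(i) for i in range(2587)]
```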

 The question designs…

The original polling question was released in the Telegraph and asked a representative sample of adults living in Britain the following:

“do you agree the EU is trying to bully the UK in the Brexit negotiations”

It’s fair to say that, in its current form, this isn’t the way I would have chosen to word the question. But that’s not to say I think it wouldn’t tell you anything at all. Balance is important; that’s what we’re taught in Social Research 101. So researchers reading this post will probably feel immediately that this is an imbalanced, or at least mildly directional, design. Surely it would be less biased to ask the question in a way that feels more balanced? Perhaps the following:

“to what extent do you agree, or disagree, with the following statement: “the EU is trying to bully the UK in the Brexit negotiations””

For most people, this is job done. Asking whether a respondent agrees or disagrees gets us a minimum level of balance and it doesn’t feel too leading. So that will probably do.

But perhaps, by implicitly casting the EU as a probable antagonist within the question wording itself, we may be biasing the outcome by leading respondents who are uninformed, or who have less interest in the subject generally.

So, even though they have every opportunity to express a neutral view, this alternative question design may be planting a seed in respondents’ minds that the EU has been accused of, or associated with, bullying behaviour.

Consequently, if a respondent feels unsure about the subject of the EU negotiations, they may feel compelled to agree with the statement. Or perhaps, as some have suggested, people simply tend to ‘agree’ with statements put before them, a tendency survey researchers call acquiescence bias.

There may, then, be an inbuilt bias in the way the question is worded, one that goes beyond reminding people that they can agree or disagree. Perhaps we should change the wording to something that doesn’t imply that the EU is acting unfairly, removing the word “bullying” from the statement entirely and instead portraying the subject implicitly in a positive way. Perhaps something like:

“to what extent do you agree or disagree with the following statement: “The EU is acting fairly in its negotiations with the UK in relation to Brexit””

There will, of course, be those who argue that the wording of the second alternative design is also subjective and biased, presuming as it does that the EU is a fair actor, but at least we can all agree that the implicit assumptions of the two designs are very different.

Personally, my view is that statements like these are all implicitly biased to some degree, and that striving for scientific levels of neutrality misunderstands what it means to conduct survey research. But I’ll save the psychology of survey design for another post.

Looking back at these three questions, they do appear to be substantially different from one another. You can see for yourself below:

Original design – “do you agree the EU is trying to bully the UK in the Brexit negotiations”

Alternative design 1 – “to what extent do you agree, or disagree, with the following statement: “the EU is trying to bully the UK in the Brexit negotiations””

Alternative design 2 – “to what extent do you agree or disagree with the following statement: “The EU is acting fairly in its negotiations with the UK in relation to Brexit””

If you had to guess, I’m fairly sure most researchers would say that these different versions would lead to drastically different outcomes, depending on which one was asked. And although some will disagree, for many, Alternative design 2 would traditionally be seen as the more balanced and less ‘subjective’ version: a question whose results can be considered closest to the ‘truth’, if you like. The others may simply be too leading, too biased, or too poorly framed to give you any meaningful information.

And herein lies the problem: on this occasion, the wording really doesn’t make that kind of difference, at least not in a way that suggests the interpretation of the results is wrong.

Results for different question-wording designs…

The results displayed in the chart above show that there are no significant differences between the original design and Alternative design 1. Around six in ten (60%) of those shown the original design either strongly agree (27%) or somewhat agree (33%), and a similar proportion (60%) strongly agree (26%) or somewhat agree (35%) when shown Alternative design 1. There is also relatively little difference in the proportion of people who say that they don’t know: some 16% for the original design, and 14% for Alternative design 1. None of these minor differences is statistically significant.

Clearly, we would expect the results for Alternative design 2 to vary considerably, as the direction of the question implicitly associates the EU with fairness rather than bullying behaviour. And indeed, the results show that a clear majority disagree with the statement rather than agreeing.

Over half (53%) said that they either strongly disagree (24%) or somewhat disagree (29%) with the statement, and just over a quarter (28%) said that they either strongly agree (8%) or somewhat agree (21%).

Interestingly, for Alternative design 2, around one in five (19%) said that they didn’t know, which is significantly higher than the proportion for the original design or Alternative design 1.

To present the results in a more comparable way, however, the data for the second alternative design need to be flipped. The chart below shows the results for Alternative design 2 reversed directionally, so that disagreement with the ‘fairness’ statement lines up with agreement with the ‘bullying’ statement. While some may argue that the results are still not exactly comparable, for the purposes of this post we have done just that, simply as a way of examining the effect question-wording has on the interpretation of results.
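Mechanically, the flip is just a recode of the response scale onto the opposite direction. Here is a minimal sketch (the labels and function name are mine, not BMG’s actual processing code):

```python
# Recode responses to the 'fairness' statement onto the 'bullying' direction:
# disagreeing that the EU is acting fairly is treated as equivalent to
# agreeing that it is behaving unfairly, and vice versa.
FLIP = {
    "Strongly agree": "Strongly disagree",
    "Somewhat agree": "Somewhat disagree",
    "Neither agree nor disagree": "Neither agree nor disagree",
    "Somewhat disagree": "Somewhat agree",
    "Strongly disagree": "Strongly agree",
    "Don't know": "Don't know",
}

def flip_design_2(responses):
    """Reverse the direction of Alternative design 2 responses so that all
    three designs can be charted on a common 'EU behaving unfairly' axis."""
    return [FLIP[r] for r in responses]
```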

Results with Alternative design 2 ‘flipped’ for comparability…

Overall, it is fair to say that the results of this experiment suggest that question wording has a relatively small effect on public opinion results. Though the differences between Alternative design 2 and the other two designs are statistically significant, the impact certainly doesn’t change the overall story of the results.

There is virtually no difference in the results between a ‘balanced’ and an ‘imbalanced’ introductory text design; in fact, the original design and Alternative design 1 show almost identical results for each response code.

However, when comparing the results for Alternative design 2 with the other two designs, there are certainly small differences, and these differences have two main themes.

First, there is a drop of between 6 and 7 points in the proportion of people who responded in a way that implies the EU is behaving unfairly or bullying. This difference is statistically significant and implies that there may be a small question-design effect, though clearly not one large enough to change the general interpretation of the results.

Second, there is an increase of between 3 and 5 points in the proportion of people who say that they don’t know. Again, this difference is statistically significant and suggests that the question may be slightly harder to answer for a small proportion of respondents.
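For those curious about the significance tests referred to above, both the null result between the first two designs and the 6 to 7-point gap for Alternative design 2 can be approximated with a simple two-proportion z-test. The sketch below is unweighted and uses back-of-envelope cell sizes of roughly 850, so it is only indicative; the actual tests account for the survey weights.

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Z statistic for the difference between two independent proportions,
    using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Original vs Alternative design 1: 60% agree in both cells.
z1 = two_proportion_z(0.60, 850, 0.60, 850)
print(abs(z1) > 1.96)   # False: no significant difference at the 95% level

# Original vs Alternative design 2 (flipped): 60% vs 53%.
z2 = two_proportion_z(0.60, 850, 0.53, 850)
print(abs(z2) > 1.96)   # True: a small but statistically significant gap
```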

The difference is small, so what could explain it?

Before I conducted the experiment, I thought one possible explanation could be distance from the subject: those who felt they knew less about Brexit, or about politics in general, could be more easily swayed by the question wording, while those who felt they knew more might be less swayed.

Interestingly, my instinct was wrong: there was a slight differential effect, but not the one I expected. As the results below show, those who said that they were “Not Very Interested” or “Not At All Interested” in politics saw a significant change across the designs in the proportion saying that they did not know, whereas those who said that they were “Fairly Interested” or “Very Interested” saw a significant change in the size of the majority in favour of the notion that the EU is a bully/acting unfairly.
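The sub-group comparison itself is just a cross-tabulation of design against self-reported political interest. Here is a sketch of the shape of that analysis, with hypothetical records standing in for the real dataset:

```python
# Hypothetical respondent records: (design, interest, response).
# In the real dataset there is one record per respondent.
records = [
    ("original", "Not Very Interested", "Don't know"),
    ("alternative_2", "Not At All Interested", "Don't know"),
    ("alternative_2", "Very Interested", "Strongly disagree"),
    # ... one tuple per respondent
]

def dont_know_share(design, interest_band):
    """Share of 'Don't know' responses for one design within an interest band."""
    cell = [resp for d, i, resp in records if d == design and i in interest_band]
    return cell.count("Don't know") / len(cell) if cell else float("nan")

low_interest = {"Not Very Interested", "Not At All Interested"}
for design in ("original", "alternative_1", "alternative_2"):
    print(design, dont_know_share(design, low_interest))
```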


How the results should be interpreted…

These results should be interpreted in two main ways.

First, it appears that there is some truth in the original headline result, and we can be confident that a majority of Britons do indeed feel that the EU is behaving in a way that could be considered ‘bullying’ or ‘unfair’ when it comes to the Brexit negotiations. The finding is consistent no matter how you ask the question and across most sub-groups.

Second, it should be said that the results do not suggest that question wording is an unimportant factor for opinion polling more generally. In a future post I will demonstrate just how much of an effect question wording can have, but for now readers will simply have to take my word that question wording will not always have the same magnitude of effect. Some subjects will be more influenced by wording than others.

Although question design is a very important factor when conducting public opinion research, as this experiment has shown, questions designed to measure well-rehearsed, highly emotive and topical issues, such as the relationship between the EU and the UK during the Brexit negotiations, may not sway the responses of those polled as much as we might think.

This post is the first of several to come from the polling team at BMG on survey design and methodology. So, if you’re interested in polling methodology, please watch this space for further posts over April and May.

Methodology, fieldwork dates, and a full breakdown of these results can be found here.

For a more detailed breakdown of results from this poll, or any other results from our polling series, please get in touch by email or phone.

polling@bmgresearch.co.uk

@BMGResearch

0121 333 6006

Dr. Michael Turner – Research Director & Head of Polling
