Methodology

How we do surveys and why they are representative


Our Promise –

We do not share your personal information with anyone. We never sell or share your email address or name. We do share your responses to our questions, but never matched to your email address or name. And we never share any of the demographic information that is unique to you.

Why we do the surveys –

We do this as a service to the community; we do not make money doing it. We do it because it lets people participate in community discussions. It gives people who work, or who have to be home, and who can’t go to meetings a voice that often isn’t heard otherwise. It is another way for boards, councils, and commissioners to find out what the community thinks. We also provide a way for these various entities to communicate facts to the public. Elected and appointed officials are often frustrated by the limits on how they can communicate with the public, and we try to help that process of communication.


How we conduct our surveys –

Madrona Voices conducts its surveys online. Those who respond do so voluntarily. It doesn’t matter which method is used to conduct a survey; all responses are voluntary. Calling a random group of people on the phone still only gathers voluntary responses: those who are willing to answer the phone, who are home, and who want to finish the interview. Surveys sent by snail mail only get back the responses people choose to return. Surveys conducted in person only reach people who answer the door and talk, or who respond to the person on the corner with the clipboard.

What makes a survey representative of the community is its ability to gather responses from people who match the spectrum of those in the community. If people who support an idea are more likely to respond than people who oppose it, or vice versa, then the survey is not representative, even if the group asked was chosen at random. Randomizing who is asked is an attempt to avoid oversampling one group or another, but it is no guarantee of success.

The best way to make sure that a survey is representative is for the survey to be neutral and unbiased and to have a reputation for fairness. If all parties (both those who support and those who oppose something) are willing to participate, then the survey is likely to be representative.

The only way to verify that a survey is representative is to compare its results to something like an election. If the results of the survey closely match the results of a vote, then there is a good probability that the methods and techniques used by those doing the survey are sound and that they capture the community’s true opinions.

The surveys conducted by Madrona Voices regarding the public hospital district prior to the April 2018 vote predicted that the vote would be 76% (+/- 5%) in support of the district. The actual final vote result was 76%. That meant that those who voluntarily answered our surveys were an excellent match for those who voted.
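For context on the ±5% figure: the textbook 95% margin of error for a sample proportion depends only on the proportion and the number of responses. The sketch below is illustrative only; the sample size of 280 is an assumption (the actual response count is not stated here), and margin_of_error is a name we introduce for illustration.

    # 95% margin of error for a sample proportion. The n used here is an
    # assumed, illustrative sample size, not our actual response count.
    import math

    def margin_of_error(p, n, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(0.76, 280):.1%}")  # about 5.0% with 280 responses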


Who we distribute the surveys to –

We have worked very hard to accumulate email addresses of those who live on-island or who own property here. We have gathered email addresses from a variety of sources. We encourage anyone who finds our survey to participate. Several hundred people have asked us to send them invitations to the surveys. We have over 3,000 island email addresses. We have over 1,000 different people who have participated in the surveys we have run so far.

There are about 1,700 full-time households and about 5,400 full-time residents on Orcas Island. In addition, there are people who make this their part-time home. About 6% of the population is new to Orcas Island each year. We estimate about 28% of the island residents are new in the last five years and 44% in the last ten years. This means that any email list will rapidly go stale unless it is vigorously maintained, and we are diligent about doing so.
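As a rough check on those turnover figures, a simple compounding model comes out close to our estimates. The sketch assumes the roughly 6% annual turnover applies independently each year, which is only an approximation:

    # Rough compounding check on the turnover figures above. Assumes ~6% of
    # residents are new each year, independently of prior years.
    turnover = 0.06
    for years in (5, 10):
        new_fraction = 1 - (1 - turnover) ** years
        print(f"{years} years: {new_fraction:.0%} new")  # ~27% and ~46%

That lands near the 28% and 44% estimates above.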

How do we make sure that we don’t oversample a demographic? – 

We do something different from most researchers. First, we will describe how it is usually done, then what we do.

Most scientists doing opinion research put a quota on the number of responses. For example, a survey company might decide that it wants 400 responses: half from males and half from females. Once 200 males have responded, any additional males are blocked from taking the survey. Being blocked is annoying to the person who is prohibited from participating. Alternatively, the researcher accepts responses that they have no intention of including in the results: they use the first 200 male responses received, even though 300 answered, and simply delete the 100 they don’t need. Both methods adhere to the idea that a strict match to the demographics of the community will ensure that the sample is representative.
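For illustration, here is a minimal sketch of that quota approach in Python (this is not our method). The field name "sex" and the arrival order are hypothetical; the quota sizes come from the example above.

    # A sketch of the quota method described above (not what we do).
    # Once a group's quota fills, later responses from that group are dropped.
    QUOTAS = {"male": 200, "female": 200}

    def apply_quota(responses):
        kept = []
        counts = {group: 0 for group in QUOTAS}
        for r in responses:  # each response is a dict like {"sex": "male"}
            group = r.get("sex")
            if group in counts and counts[group] < QUOTAS[group]:
                kept.append(r)
                counts[group] += 1
        return kept

    # 300 males followed by 200 females arrive; only 400 responses are kept,
    # and the last 100 male responses are simply discarded.
    arrivals = [{"sex": "male"}] * 300 + [{"sex": "female"}] * 200
    print(len(apply_quota(arrivals)))  # 400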

What we do instead is accept and use all the responses we get, even if they are out of proportion to the community’s demographics. We clearly note in the report that we have more or fewer respondents from demographic ‘x’ and that a reader should factor that into their understanding of the results. If the sample diverges widely from the community demographic we are trying to match, then we do something called ‘bootstrapping’ to study the variation. We still keep everyone’s answers; bootstrapping is just part of the study of the results.

For example, let’s say we needed 200 males to match the 200 females in our sample, but we received 300 instead. That is a big difference and a poor match for the community ratio. Bootstrapping here means that we randomly select 200 of the 300 males we have and look at their responses. We do that again with a different random 200, and we repeat this a thousand times (we could do it 10,000 times; modern computers are wonderfully powerful). We then compare the results from all 300 to the results of the resampled groups of 200. If they are similar, then the results are just as accurate whether or not the extra 100 are included in the sample. If the 300 are not similar to the 1,000 resampled groups of 200, then we simply report that.
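To make that concrete, here is a minimal sketch of the resampling step just described. The 180/120 split of the 300 hypothetical male responses and the helper name resample_check are ours, for illustration; the procedure (draw many random groups of 200 without replacement and compare them to the full 300) is the one described above.

    # Sketch of the resampling step described above. Responses are coded
    # 1 = supports, 0 = opposes; the 180/120 split is hypothetical.
    import random
    import statistics

    def resample_check(responses, target_n=200, trials=1000, seed=1):
        rng = random.Random(seed)
        full_rate = sum(responses) / len(responses)
        # Draw many random groups of target_n (without replacement) and
        # record the support rate seen in each draw.
        rates = [sum(rng.sample(responses, target_n)) / target_n
                 for _ in range(trials)]
        return full_rate, statistics.mean(rates), statistics.stdev(rates)

    males = [1] * 180 + [0] * 120  # all 300 male responses
    full, mean_200, sd_200 = resample_check(males)
    print(f"all 300: {full:.3f}; groups of 200: {mean_200:.3f} +/- {sd_200:.3f}")

If the rate from all 300 sits comfortably inside the spread of the resampled groups, keeping the extra 100 responses did not distort the result; if not, we report the divergence.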

Trying to make sure that the demographics of the people who take a survey match the demographics of a community is just one way to try to limit bias. But it assumes that most people in a particular demographic share the same opinion, and that is generally a bad assumption. Each of us comes to our conclusions via a complex series of experiences. On most issues, there is a wide divergence of opinion, even among people who are in the same demographic.

There are a lot of hard scientists on the island: people who know statistics and who know all of the steps one should follow to get a representative sample. They have honed their skills studying fish, atoms, rocks, hearts, and a wide variety of other things, and many try to apply the same rules of sampling and statistics to people’s opinions. It is good to know all of the hard-science methods, but when studying people’s opinions, some additional tools are helpful.

We don’t think it is necessary, or even wise, to prohibit people from participating in our surveys, even if they cause our sample to stray from the demographics of the community. Yes, we are aware of what we “should” have for numbers. Yes, we know how to apply traditional methods. But human opinions are more complex. We think we can accept the many different variations while remaining representative of the community at large. We can use bootstrapping to identify variations from the norm. We can be transparent in our reporting and trust people to draw their own conclusions from the data we have. The problems in society seem to start when people want to tell others how to think or what to conclude.

What really matters in our surveys is not matching the demographics spot-on, but making sure that we include a representative sample of the full spectrum of opinions held by those on the island. The best way to do that is by being accurate, fair, and inclusive, and by providing a full set of answer options on each question.

Critics – Some believe that it isn’t possible to create an unbiased question or an unbiased set of possible answers. Some don’t think it is possible to gather representative data from people who voluntarily express an opinion. We disagree; we think it is possible, and you can see from our work on Madrona Voices how we think it can and should be done.

Yes, surveys can be used as a tool to push a narrative or an agenda. Most of us quickly recognize surveys that are being used this way. Dismissing those who commit such abuses is appropriate.

However, dismissing the good surveys just because someone else abuses the tool means we lose a very effective and efficient way to communicate as a community. It means decisions will likely be made based on what a small group thinks instead of on what a larger group thinks. We are thankful for those in the community who encourage and support the proper use of surveys rather than throwing out all surveys because some people misuse them.

The people who are harshest and most critical of our work tend to be the activists who often go to meetings and say that they speak for the people. They seem to fear that our survey results might not match their interpretation of what the public wants. It is better, they say, not to do a survey and instead to let them divine the intent of the public. The problem is that if there are no properly done surveys, then theirs is the only voice in the room. By tearing down good surveys, they are effectively attempting to silence the public at large. We do not want to see that happen, and that is why Madrona Voices exists.