
Communication Studies Project: Audience Research

Don't be misled by the fact that only ten marks are available for the section on "Purpose" into thinking that audience research is unimportant. It is vital to the success of your project.

You should explain why audience research is necessary and what, in broad terms, every communicator needs to know about her audience - Berlo's SMCR Model provides a sound basis for this.

Your audience research must be very thorough and you must demonstrate a broad knowledge of the techniques available to you, even if you don't necessarily use them yourself. In particular, you should be aware of the difference between qualitative and quantitative research. It is worthwhile investigating the research carried out by, say, NRS and BARB as examples of the quantitative approach, and the kinds of reception research carried out under the heading of New Audience Research as examples of the qualitative approach. You might also like to take a look at the Survey Question Bank's 'Data collection zone - SQB methods fact sheets', which has some excellent guidance on survey design.

All research methods have their advantages and disadvantages and you should show awareness of those. You may also find it useful to consider the shortcomings of even highly professional research. Normally we would expect you to attempt to combine a variety of methods, always allowing for the time and money constraints.

Bear in mind that you will need to show evidence of having conducted the research. Where printed questionnaires are concerned, that is quite easy but where interviews or observations are concerned you will need to supply sample tape recordings, or letters arranging and confirming the observations or interviews, or a statement from someone in authority that you actually carried out the observation/interview.

This section on audience research is arranged under the following headings:

Reliability and validity

Note: under the section on the project commentary I have suggested that some students might wish to consider the implications of post-modernism for the research they will have carried out throughout the project. In my experience, most students do not become fully confident with the ideas and concepts of post-modernism until towards the end of the course, so it would seem appropriate to leave it until then. However, there's no harm in looking at that section now if you wish to do so.

In sociological surveys, it is generally considered essential to establish that the surveys conducted produce results which may be considered both reliable and valid. I have not seen much attention paid to these two concepts in the marking scheme for Communication Studies. You should in principle certainly pay attention to them, but they are not necessarily easy to get to grips with. I would suggest you check with your communication lecturer how important she considers it that you investigate these two linked concepts. If you do not already have a hand-out or notes on them, I have provided a brief overview below. However, if you wish to investigate the concepts more fully, you'll find very thorough descriptions in Bill Trochim's Research Methods Knowledge Base. There's also a rather simpler and shorter overview of reliability and validity at Colorado University.


Reliability

In their discussion of scaling methods used for attitude measurement, Moser and Kalton (1971) provide the following definition of reliability:

A scale or test is reliable to the extent that repeat measurements made by it under constant conditions will give the same result (assuming no change in the basic characteristics - e.g. attitude - being measured).

p. 353

A parallel with the method of the natural sciences is evident here - a laboratory experiment from which researchers claim to draw new knowledge will be subjected to attempts in other laboratories around the world to repeat the original experiment under very similar circumstances and to observe the claimed results. Thus, for example, the claims made a few years ago for cold fusion have been largely discounted since no other researchers, despite exhaustive efforts, have been able to replicate the results claimed for the original experiment.

It's certainly not immediately evident, though, how this natural science approach can be readily transferred to the social sciences. The natural science model would seem to imply that you should survey your audience sample and then survey them again; if you come up with the same results each time, then the survey may be said to be reliable. The problem here, of course, is that we are not dealing with bacteria in a Petri dish, but with thinking, emotional, rational, irrational people. If you repeat your survey immediately after first conducting it, it may appear to be more reliable than it is because the respondents will have remembered what they said first time around. If you repeat the survey a couple of days later, it may turn out that the original survey had caused them to reflect more deeply on the kinds of questions you asked, and the survey itself then proves to be the cause of its own unreliability.

If you conducted your survey about attitudes to forcible régime change in Iraq before the first attacks and repeated the survey after the blanket media coverage of the ensuing 'insurgency', then the intervening events may well have caused the respondents to change their views. So, by repeating your test early, you run the risk of encountering the 'memory effect'; by repeating it too late, you run the risk of intervening events having changed respondents' views.

One way to attempt to overcome this problem is to use the parallel forms (or alternate forms) method. Using this method, a large set of questions is established which deal with the same set of constructs several times over. The questions are then randomly separated into two sets and each set is administered in turn to the same sample of people. The two sets can be administered at the same time, thereby avoiding the potential pitfalls mentioned above. The major problem is that, if it is to be genuinely reliable, this method requires that a large number of questions be generated, so it is probably not something you could reasonably be expected to undertake.
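If you're comfortable with a little programming, the mechanics of the parallel-forms idea can be sketched in a few lines. This is purely an illustration with invented question names and scores, not something the project requires: the question pool is split randomly into two forms, and respondents' totals on the two forms are correlated; a correlation near 1 suggests the two forms are measuring the same thing.

```python
import random

def split_into_parallel_forms(questions, seed=0):
    """Randomly divide a pool of questions into two 'parallel' sets."""
    pool = list(questions)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:half * 2]

def correlation(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented data: each respondent's total score on form A and on form B.
form_a_scores = [12, 15, 9, 20, 17]
form_b_scores = [11, 16, 10, 19, 18]
r = correlation(form_a_scores, form_b_scores)
# A correlation near 1 suggests the two forms measure the same construct.
```

With the invented scores above, r comes out at roughly 0.97, which would indicate good agreement between the two forms.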


Validity

Moser and Kalton (1971) provide the following definition of validity:

By validity is meant the success of the scale in measuring what it sets out to measure, so that differences between individuals' scores can be taken as representing true differences in the characteristic under study. It is clear that to the extent that a scale is unreliable it also lacks validity. But a reliable scale is not necessarily valid for it could be measuring something other than what it is designed to measure.


Sociologists distinguish several different measures of validity, which I shall not discuss in great detail here. If you consider you need to know more about them, check them out at Bill Trochim's site.

Face validity is essentially a fairly commonsensical, subjective judgment as to whether or not the common thread you are looking for runs through all the items. If in your project you are dealing with a subject area you are unfamiliar with, it would be a good idea to get someone with greater expertise to check your survey for this face validity.

Content validity is a similarly subjective measurement, but it adds to the requirement that a common thread should be present the further requirement that this thread should be covered in its full range. Again, if you are investigating an unfamiliar area, it would be a good idea to consult an expert in an attempt to ensure content validity.

Predictive validity and concurrent validity are concerned with how well the measure can predict a future criterion and how well it can describe a present one, respectively.

What do you need to know?

Don't omit anything

A general warning is that you should not leave out any questions simply because the answers are "obvious" to you. It may be obvious to you that a certain kind of background music is just right for your video but it could be the kind of music which makes your audience throw up.

That example also makes it clear that you need to sit down and think carefully before you draw up your audience research - if background music is vital to your video and you haven't asked them what they like, then you won't have a clue how to choose it.

Remember also that your work is going to be assessed. You will be assessed for the quality of your "evaluative decision-making" - how can the examiner know if your decisions make sense for your audience if you haven't provided any information about the audience's musical preferences?

Pilot surveys

It might be wise to conduct a pilot questionnaire with some sample members of your target audience. It may well turn out that their understanding of 'funky', 'hip-hop', 'easy listening', 'classical' and 'jazz' is quite different from yours. It may turn out to be more productive to ask them to name favourite pieces of music, composers or performers, rather than asking them to choose their preferred style.

It's helpful if pilot surveys are conducted on members of your target audience, but it's not absolutely essential. The main purpose is to flush out any ambiguities and misunderstandings. Suppose you've given them a list of newspapers from which to choose the one they read most often: when they choose 'Guardian', do they mean the national Guardian or their local Cornish Guardian? When they choose 'Times', are they referring to Murdoch's Times or their local Lake District Times?

Still, you should give some thought to getting pilot respondents who are in crucial ways similar to your target audience - there's no point in testing an elaborate and complex questionnaire on a highly literate pilot group if your intended audience can barely read.

Lateral thinking

There is also the problem that it is not always clear what questions you need to ask. For example if for some reason (limited budget perhaps) you have to produce something in print for people who do not normally read much, you can certainly ask them about their reading preferences, but the information they give you may not really be very helpful. So, in such a case, although your artefact is going to be in print, it may be sensible to ask your audience about their television and film preferences - hardly the most obvious question.

It might also turn out that you intend to produce a video artefact but your audience clearly state a preference for something in print - if you haven't asked them any questions about their reading preferences, how can you know what's appropriate?

Give some careful thought to the way that you formulate your questions. Suppose you are intending to produce a booklet to be made available in public libraries to people who have a fair amount of money. You make the assumption that the wealthy professionals you are after are well educated and therefore read a lot and frequently visit public libraries. As a result, you concentrate your questions on the kinds of things they read, trying to get a feel for the subject matter, layout and design etc. That could be quite misguided, though. If instead you determine your respondents'

  • Educational level
  • Newspaper reading habits
  • Book reading habits
  • Use of public libraries
  • Use of bookshops

then you are more likely to obtain useful information. After all, it is possible that wealthy people who enjoy reading buy their books instead of borrowing from libraries.

It could even be the case that it's not appropriate to ask any direct questions at all. When we think of 'surveys', most of us think of questionnaires, but there are lots of different methods of audience research. If you are interested in finding out what people think of their college, it might be more productive to ask them to draw pictures expressing their feelings, or to go through a selection of magazines and newspapers and choose photographs which match their feelings about the college. Certainly, if you are aiming for a very young audience, this is probably the only practicable way of getting the information you need, but it can be very revealing with adults too.

You may find consideration of 'psychographics' useful.

Study examples

It would be a good idea to take a close look at the kinds of questions which are asked in the market research surveys which arrive as junk mail at your door. You could also take a look at some examples of the research conducted for the National Readership Survey, which is an excellent example of its kind. Typical of the information which would be asked for in such a questionnaire is the following:

  • Sex
  • Age
  • Town/area of residence
  • Occupation
  • Income/disposable income
  • Marital status
  • Number of children
  • Type of house
  • Educational qualifications
  • Number of hours spent watching TV
  • Preferred TV programmes
  • Number of hours spent listening to radio
  • Preferred radio programmes
  • Regular newspaper reader?
  • Preferred newspaper
  • Type of music preferred
  • Magazines read
  • Leisure activities

What have you missed?

Is there anything you've missed? Suppose you want to produce a video for your audience. Can they get to a public showing? Can they afford to buy a copy for themselves? If they can't afford it, you may have to include advertising: will your audience tolerate that? If they won't, you may have to produce something in print instead. So a whole new set of questions arises: What is their reading level? What typeface do they prefer? How would you deliver the printed artefact to them?

Take some time out now to write down a list of the essential information you need from your audience.


By now, then, you should have a detailed list of the objectives of your survey.

It would be advisable to run a pilot survey before you do the real thing because you need to check whether your questions are understood in the way you intend them. For example, if you want to know people's income, do you mean total income or disposable income? Do they know what you mean by "disposable income"?

Once you have run your pilot survey and tidied up such loose ends, you can proceed to figure out how to select people who are representative of your target audience. There is a wide variety of techniques used by sociologists and advertising agencies. You may not have time to use them, but you should show an awareness of them. You can find details of such methods in most introductory works on statistics; a useful book is Success in Statistics (2nd edition) by Fred Castle (John Murray, 1989).

Quota sampling

The method you are most likely to use is one which is often used by such organisations as Gallup and MORI when they conduct opinion polls. It is not really a random sampling method, but is used because it is cheap and easy. The method is called "quota sampling" because a quota is set for different sections of the population according to sex, age, income, social class, occupation and so on.

The first thing you need to do is to establish your sampling frame. Suppose you are producing a student guide for your college. If you go along to administration they can probably tell you what proportion of students are A-level students, GNVQ students, evening class students etc. Using that information, you can ensure that you have the same proportions in the 100 people you survey. That is a simple example only - there may be many other variables which you need to take into account, for instance the educational level of the students, their age, their sex, how many live locally etc. If you are lucky, you may also acquire such information from the college authorities, which will enable you to ensure that your quota is representative of the target population.
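The arithmetic of setting quotas is straightforward. Here is a minimal sketch, using invented proportions for the student-guide example above, of how population proportions translate into quota counts for a sample of 100:

```python
def quota_counts(proportions, sample_size):
    """Convert population proportions into quota counts for a sample.

    Rounds each quota down, then tops up the groups with the largest
    remainders so the counts sum exactly to sample_size.
    """
    raw = {group: p * sample_size for group, p in proportions.items()}
    counts = {group: int(x) for group, x in raw.items()}
    shortfall = sample_size - sum(counts.values())
    # Give any leftover places to the groups with the largest remainders.
    for group in sorted(raw, key=lambda g: raw[g] - counts[g], reverse=True)[:shortfall]:
        counts[group] += 1
    return counts

# Invented student-body proportions, as you might get from administration.
population = {"A-level": 0.55, "GNVQ": 0.30, "evening class": 0.15}
quotas = quota_counts(population, 100)
# → {'A-level': 55, 'GNVQ': 30, 'evening class': 15}
```

The same function copes with proportions that don't divide evenly: the counts always add up to the sample size, which is the property your quota plan needs.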

Having established the appropriate quotas, you would probably leave the actual selection of respondents to your interviewers' discretion. Be careful - bias can creep in here. Your interviewer may choose to go and interview all the business studies students she can find in the library. Or she may choose to ask all those she can find in the canteen. Or she may ask all those she can find in the local pool hall. Those in the library may be "swots"; those in the pool hall may be "lads". In each case, the selection is biased. You could hope to overcome that bias by asking business studies lecturers to give their students your questionnaire. But what about those students who habitually skive? You may not be able to overcome such bias completely and, depending on the nature of your audience, bias may be quite high and difficult to overcome. You must, however, make it clear in your log and commentary that you are well aware of such sources of bias and should explain what you did to cope with them.

Possible research methods

You could conduct your research by any one or more of the following methods:

  • Using existing research (so-called secondary sources)
  • Printed questionnaire (this and all the following constitute primary sources)
  • Interview surveys
      • Face to face
      • By telephone
  • Observation
  • Internet

Now let's take a look at each of those:

Secondary sources

Don't think you have to find out everything about your audience yourself. It is quite possible that such details as you require may already have been gathered. For example, if you intend to conduct an advertising campaign in the local press, it is quite likely that the local newspaper can provide you with an audience profile. If you are providing marketing materials for a new course in your college, the marketing department may be able to show you materials for existing courses which they know to be successful with potential students.

Although this is certainly useful material for your research, you should beware of believing research data simply because they are given to you by people in authority. Your college marketing department may well have conducted no research whatsoever and may simply be telling you what is attractive to them, on the assumption that it must also be attractive to potential students. Try to establish whether the research really has been carried out and, if you can, get hold of the research results so that you can check them for yourself.

There are, though, a number of secondary sources which you can generally rely on, such as the industry research organisations mentioned earlier (NRS, BARB and the like).

Bear in mind, though, that many of these organisations have spent a lot of money on their research, so you will have to adopt a very subtle approach if you want them to give you their results for nothing.

Printed questionnaires

Under this heading we shall consider all questionnaires where the respondent has her own printed copy to look at, tick boxes on, write comments on and so on.

Possible ways of reaching your respondents

The first question you have to answer is: how do you get your questionnaire to your respondents? You could

  • Stop people in the street
  • Leave the questionnaires in some central place where people can fill them in and place them in a box
  • Deliver them door to door
  • Mail them
  • Distribute them via the Internet

Stopping people in the street or leaving questionnaires for completion


Disadvantages

  • Self-selection - your sample may, to an extent, be self-selecting. If you conduct your survey in your local shopping centre on a Monday morning, you will miss most of those people who are employed and, if your questionnaire is lengthy, all of those who are in a hurry. The time of day or week at which your survey is conducted is a commonly recognised problem. For example, in the run-up to the 1992 General Election, Labour appeared to surge ahead every weekend. It seems unlikely that the general population became more left-wing each weekend; a more likely explanation is that women (who tend in Britain to be more right-wing) were more easily reached at home during weekdays, skewing the weekday samples. Place also has a considerable effect - if you leave your questionnaires in the local library, you will miss all of those people who don't use the library.
  • Conducting research in this way can be inefficient as well because a great proportion of your respondents may turn out not to be part of your target audience.
  • You need a team of reliable helpers if you are going to stop people in the street.
  • Any follow-up research you need to conduct by these methods can be very time-consuming as well for this same reason.
Door-to-door delivery

It might be better to take your questionnaires to a certain area where you think you will find the kind of person you are after. You may find that your local council can help you here, or perhaps a local political party will have some sort of demographic profiling which can help you. It is normally less time-consuming to leave a note saying you will pick up the questionnaires later, rather than to wait while they are filled in.


Advantages

  • Some of the randomness of the previous two methods will be removed
  • People will be able to spend more time on the questionnaire


Disadvantages

  • Anonymity - for one reason or another your respondents may wish to remain anonymous; you can hardly promise anonymity if you are going to their house to take the completed questionnaire
  • Risk - there may be a risk to you; perhaps you should consider going to the houses with a companion
  • Identification - some people may think you are a risk to them and will demand some form of ID from you
  • Who completed the form? - if you are not present when the form is completed, how do you know who completed it? It may say on the form that it was completed by the husband, but perhaps he just gave it to his kids to fill in.
Postal questionnaire


Advantages

  • People can complete these questionnaires in their own time and in their own environment, so, if you have a lengthy questionnaire, it is perhaps more likely to be filled in under such circumstances
  • In principle you are less likely to miss responses from certain types of people, for example those who are temporarily away from home
  • You have no travelling problems
  • Providing the respondent can be identified, any follow-up research can be carried out more efficiently
  • People may be more inclined to give you confidential information if they complete your questionnaire at home


Disadvantages

  • You may have to wait a long time before questionnaires are returned
  • If your questionnaires are not reply-paid, they may not be returned at all, so it can be extremely expensive
  • Who completed the questionnaire?

Disadvantages of all printed questionnaires

  • Your respondents must be literate
  • Your questions must be absolutely unambiguous because you are not there to offer clarification
Internet questionnaire


Advantages

  • A very efficient and cheap way of delivering the equivalent of a printed survey
  • Can include automatic processing of results


Disadvantages

  • Limited to respondents with Internet access
  • How do you know who completed the questionnaire?
  • Your respondents must be literate
  • Your questions must be absolutely unambiguous because you are not there to offer clarification

If you have your own website, you may find that the website hosting service provides some free questionnaire software which you could use. Depending on how competent you are with computer software, you could perhaps rig up a system whereby the respondent has to enter a PIN (which you might send to them by email) in order to complete the questionnaire.

You could, of course, simply send out your questionnaire by email. If you do so, then it's generally best to send the questionnaire as an attachment, rather than in the body of the email, because it's difficult to predict how the email will be formatted in the recipient's email client and it can be very difficult and confusing for them to complete. Microsoft Word makes it easy to produce questionnaires in which the respondent can type their responses into text fields, checkboxes etc., but then you have to be sure that they have Microsoft Word. The ideal solution would be to produce a form which the user can complete using Adobe Acrobat Reader (PDF), but you need the appropriate (not cheap) software to produce it. This is changing all the time, of course, so I suggest you consult, say, the technicians and/or the marketing department in your college to see what they advise.
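If your responses end up in a spreadsheet export (a CSV file), the automatic processing that Internet questionnaires make possible need not be elaborate. A hypothetical sketch, with invented column names and sample data:

```python
import csv
from collections import Counter
from io import StringIO

def tally(csv_text, question):
    """Count the answers given to one question in a CSV of responses."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row[question] for row in reader)

# Invented sample export: one row per respondent, one column per question.
responses = """respondent,preferred_newspaper,reads_daily
1,Guardian,yes
2,Times,no
3,Guardian,yes
"""

print(tally(responses, "preferred_newspaper"))
# → Counter({'Guardian': 2, 'Times': 1})
```

A dozen lines like this will tabulate any question in the survey, which is far quicker (and less error-prone) than counting ticks by hand.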

Interview surveys

By "interview surveys" we refer to those where the interviewer has a number of questions to ask and it is she who records the answers in some way, not the respondents. This could involve simply reading out the questions which are on a printed questionnaire. The interviewer then simply ticks boxes, makes notes etc. They could also be much less structured than that - the interviewer could have a list of questions which need to be asked, but is free to work them into an informal discussion as she sees fit. This method is often referred to as semi-structured or unstructured interviewing.


Advantages

  • Many people object to the impersonality of printed questionnaires, so this method lets them know who is asking the questions
  • You have control over the way the questions are asked
  • It allows you to elicit the respondents' views on matters which can't be noted down in tick boxes
  • You can deal with respondents whose literacy is poor
  • You know who answered the questionnaire
  • Depending on how you present yourself, you may be able to ask more questions than your respondents would be willing to answer if they were faced with a printed questionnaire


Disadvantages

  • It is time-consuming
  • Interviewer variability may be a problem - there are all kinds of reasons why the interviewer may have an influence on the answers: age, gender, attractiveness, pronunciation, intonation, gestures etc. You can overcome this by conducting all the interviews yourself, but that will reduce your sample size
  • It's not always easy to find a suitable location for this kind of interview
By telephone

The advantages and disadvantages of telephone surveys are similar to those of face-to-face interviews. In addition, they are prohibitively expensive and respondents tend to run out of patience quickly.


Observation

The major problem with surveys, of whatever kind, is that people, for one reason or another, will lie. A recent example is the research which shows that eleven per cent of men make love while watching television whereas only five per cent of women do. It could be that those women are especially sexually active, but I rather suspect that some of the men are exaggerating because they like to see themselves as studs. It makes no difference that the survey is confidential and no one can identify them. Another example is some research carried out in Exeter to find out how often people go to the theatre. The responses suggested that more people go to the theatre than there are seats available. Presumably, respondents wanted to make the right impression on the student interviewers.

If you ask respondents about their income, they could exaggerate it to make an impression on you, or they could deliberately underestimate it in case the taxman finds out - even if no one could possibly identify them. The 1992 General Election is another example. The many polls which were conducted daily suggested a slight lead for Labour; in the event, the Conservatives won a comfortable victory. Why? No one really knows, but it seems likely that respondents thought that they gave a better image of themselves if they said they would vote Labour. There are numerous examples of products being designed in response to consumers' expressed wishes, only for it to be found that they hardly sold at all - the Ford Edsel being the classic case.

Therefore it may be desirable for you to attempt to overcome this problem by observing how people actually behave. There may be other good reasons too. If you are going to produce an artefact for 5-year-olds, it is unlikely that they will be able to give comprehensive responses to a written questionnaire, so observation might be the only reasonable choice. There is, of course, always the problem that your presence can have an effect on the subjects' behaviour, but you can reduce this by simply remaining quiet and adopting a low profile so that in time your audience forget your presence. Alternatively, you could simply leave a video or audio recorder running.

This kind of research can be very time-consuming and you may well find that you simply cannot make enough time available to do it thoroughly. However, in most cases it is at least worth trying. If you find that the results you come up with are inadequate, then you should make clear in your final assessment of your research what the shortcomings are and why they occurred.

Questionnaire Design

Before you start to design your questionnaire, make sure that you examine a number of existing questionnaires and try to determine what they have in common. Note how many of them make use of Schramm's fraction of selection - often a reward is offered for the return or completion of the questionnaire, and some effort is made to make the questionnaire appear easy to complete, for example by means of tick boxes, by grouping question types, or even by using a relatively small typeface so that the questionnaire appears short.
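Schramm's fraction of selection is usually expressed as a simple ratio:

```latex
\text{likelihood of selection} = \frac{\text{expectation of reward}}{\text{effort required}}
```

Anything that raises the expected reward (a prize draw, obvious relevance to the respondent) or lowers the perceived effort (tick boxes, an apparently short form) increases the likelihood that the questionnaire will be picked up and completed.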


Make it attractive. That means attractive to your audience, not necessarily to you. One student, who was herself a benefits claimant, was intending to produce a guide to benefits for other claimants. She produced a very tidy-looking questionnaire printed on a fancy laser printer. When she attempted to interview respondents at the DHSS, nearly all refused. She tried again, this time using a hand-written questionnaire, which was much more successful. Presumably, in the first case the questionnaire looked so official that the potential respondents thought she was some kind of DHSS 'spy'.

Consider what technology is available to improve the presentation of your questionnaire and make sure you know how to use it efficiently. You may even find that in your college there is already software for compiling and processing questionnaires - ask around before you start.


Put the title of your study at the top of the first page. Avoid calling it "questionnaire", "audience research" or similar. That will really turn people off. Try to think of a title which makes it appear of some relevance to your respondents. If you can think of a title which suggests that they will get something out of participating, so much the better (think of the fraction of selection).

Write a brief introduction. People like to know what the whole thing is about. Again, if you can suggest that the respondents will benefit in some way from the completed artefact, so much the better. Should you let respondents know that you are a student? It all depends on who your audience are - other students are often only too willing to help someone who has to do assessed work, but other people may be inclined to take students' work less seriously than they otherwise might.


If you can possibly get someone in a key position or some organisation to endorse your research (perhaps even allowing you to use their logo), so much the better. But be careful who you choose. If the student we referred to above had said that she had the endorsement of the DHSS, that would probably have reduced her chances.

General format considerations
  • Normally you should start with "non-threatening" questions which are easily answered, e.g. age, sex, marital status etc. In any case, these are the sort of opening questions which most respondents expect, so the questionnaire starts off easily for them.
  • If you expect respondents to complete the questionnaire by themselves, make it absolutely clear how the questions are to be answered. For example, if you give them a choice from a list, make it clear how they should indicate their choice and whether or not they can choose more than one from the list. It is a good idea to keep such instructions in italics or bold or in a different typeface throughout the questionnaire so that they are clearly distinguishable from the questions themselves.
  • Group items together into sections which belong together, preferably with an appropriate heading for each section. "Chunking" questions in this way makes people feel a sense of achievement when they have completed a section, so the questionnaire does not seem so long. However, if you have to use a variety of different question types (e.g. "tick a box", "rate in order of importance" and so on) within your questionnaire, it may be preferable to group the questions according to format rather than topic so as to avoid confusion.
  • It may be desirable to explain why you are asking certain questions, especially if they are seeking information of a confidential nature. Keep such explanations in italics, bold or whatever.
  • Number the questions so that the respondent is not confused and is also given confirmation of their progress through the questionnaire.
  • If it is necessary to turn over at the end of a page, say so.
  • If a respondent needs to skip certain questions as a result of an answer to a question, make that absolutely clear. If you can, avoid having too many such sections, because some respondents will come to feel that the questionnaire is irrelevant to them and will probably give up completing it.
Interview questionnaires
  • Print questions on one side of a page only. It is very fiddly for interviewers to turn over pages during an interview.
  • You need to reduce interviewer variability, so try to make it quite clear what your interviewers should say when they introduce themselves and when they introduce each section of the questionnaire.
  • Use italics, bold or a different typeface for the instructions to the interviewer.
  • If you use a lot of open questions, in answer to which the interviewee can say whatever they want, try to anticipate the likely answers to those questions (pre-testing can help you determine what likely answers will be). The advantage of such questions is that the interviewee feels free to express herself and not strait-jacketed into choosing pre-processed answers. However, the disadvantage is that the interviewer has to write a lot and that the answers are difficult to process. You can reduce these problems by providing the interviewer with a number of ready-made answers which they simply need to tick, leaving a gap for any responses which do not fit.
Formulating questions
  • The main concern should be to avoid any ambiguity. You must therefore pre-test and trial your questions before you conduct your survey.
  • Keep the language as simple as you can. You may want to know how your respondents "prefer to access these data", but you should ask how they prefer to "get at this information".
  • Avoid leading questions which suggest a response to the respondent. The question "should benefits for single parents be reduced?" suggests that they should. Instead, ask "should benefits for single parents be reduced, left alone or increased?".
  • Watch out for emotive language. If you ask "Do you consider that people should have the freedom to send their children to the school of their choice or should they be forced to send them to a particular school?", you suggest the answer by the use of the words "freedom" and "forced".
  • Don't use questions which assume that a certain state of affairs exists now or existed in the past. The classic example is "When did you stop beating your wife?". If you ask the question "Do you want the government to stop dismantling the welfare state?", how does a respondent answer if she does not agree that the government is "dismantling" the welfare state?
  • Don't ask double questions. "Would you approve of students being allowed to use their own motorbikes and cars to come to college?" is impossible for a respondent to answer if she approves of motorbikes but not cars. "Do you consider these brochures attractive and interesting?" cannot be answered if some are attractive and interesting and others are not, or some are attractive but not interesting, and so on.
  • Don't be vague. "Should students have to pay their tuition fees?" is difficult to answer if the respondent thinks that higher education students should, but further education students should not, or British students should not and overseas students should. Be wary of terms like "several", "most". Try to suggest a range of percentages which the respondent can choose from. When asking if a respondent engages in an activity, try to find out how often. You could consider using the terms "never", "rarely", "sometimes", "often" or something similar for respondents to choose from. If you ask respondents which national daily they read, when they tick the Times and the Guardian, how do you know that they read the Times every weekday and the Guardian only on Saturdays?
  • Keep "rating" questions (for example Likert Scale) to a minimum. They can provide very useful information, but if respondents are confronted with too many of them, they will simply go for a middle value every time.
  • Always consider allowing "other" as an option, as well as "don't know". For example, if you ask "How many cylinders does your car have?" and allow only 4, 6 and 8, what do you do with those respondents who have a 5-cylinder Audi or a 16-cylinder Bentley, and what do you do with those who don't know whether their car has cylinders at all?
  • If you can, try to make answers mutually exclusive. However, if you can't do that, make it clear that respondents can choose more than one answer.


From at least some of your respondents you may very well require a considerable commitment of time and energy, with the prospect of gaining little in return for helping you. Maybe you could usefully implement what Tom Peters (1995: 74) calls 'foot-in-the-door research'. He quotes a fascinating study in which subjects were induced to put a very small sign in their front window supporting the cause of traffic safety. Later, they were asked to display a large billboard outside their home, which required letting outsiders dig holes in their front lawn. Most agreed, whereas 95% of those who had not been asked to make the first small commitment refused to allow the billboard. See if you can somehow adopt this incremental approach.


You must test your questionnaire on a sample of your audience before you start conducting your survey. Here is an example of the sort of thing which can go wrong: a student is commissioned to produce a guide for students and staff to the facilities available in the college; one of her questions is formulated as follows:

Have you ever had any problems using any of the following:
Please enter a tick (✓) where appropriate
Fax machine
Library computer

It looks very neat and professional, but there are two problems with this. The first is that many respondents take a tick (✓) to mean "I can handle this OK - no problems". It might have been better to ask respondents to enter a cross (✗), which is more commonly associated with "something wrong".

The second problem is that many respondents will reply that they have had "no problems using" the hardware simply because they have never tried to use it - perhaps precisely because they anticipated that they would have problems if they did. Pre-testing on a few people and discussing their answers with them will help to reveal such problems before you conduct your survey in earnest.

  • Use people as close as possible to your intended respondents.
  • Leave room for them to make comments on each question.
  • Leave room for them to make comments on the whole questionnaire.
  • When analysing the results of pre-testing, look out for any similar responses which the testers have given in response to a "please state" or "other" option. You can then include such responses as an option in your final design.
  • When analysing the results of pre-testing, look out for any totally dissimilar responses which the testers have given in response to a "please state" or "other" option. If you consider that such disparate responses might be difficult to process, you might prefer to abandon the question completely.
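The last two points can be partly mechanised: pool the free-text answers your pre-testers gave under "please state" or "other", normalise them lightly, and see which recur often enough to promote to fixed options in the final design. A minimal Python sketch (the threshold and any example answers are assumptions):

```python
from collections import Counter

def frequent_other_answers(answers, threshold=2):
    """Return free-text 'other' answers given by at least `threshold` testers,
    after light normalisation (case and surrounding spaces)."""
    counts = Counter(a.strip().lower() for a in answers)
    return [answer for answer, n in counts.items() if n >= threshold]
```

Answers that recur become candidate tick-box options; one-off answers can safely stay under "other".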
