Don't be misled by the fact that only ten marks are available for the section on "Purpose" into thinking that audience research is unimportant. It is vital to the success of your project.
Your audience research must be very thorough and you must demonstrate a broad knowledge of the techniques available to you, even if you don't necessarily use them yourself. In particular, you should be aware of the difference between qualitative and quantitative research. It is worthwhile investigating the research carried out by, say, NRS and BARB as examples of the quantitative approach and the kinds of reception research carried out under the heading of New Audience Research as examples of the qualitative approach. You might also like to take a look at the Survey Question Bank's 'Data collection zone - SQB methods fact sheets', which has some excellent guidance on survey design.
All research methods have their advantages and disadvantages and you should show awareness of those. You may also find it useful to consider the shortcomings of even highly professional research. Normally we would expect you to attempt to combine a variety of methods, always allowing for the time and money constraints.
Bear in mind that you will need to show evidence of having conducted the research. Where printed questionnaires are concerned, that is quite easy but where interviews or observations are concerned you will need to supply sample tape recordings, or letters arranging and confirming the observations or interviews, or a statement from someone in authority that you actually carried out the observation/interview.
This section on audience research is arranged under the following headings:
Note: under the section on the project commentary I have suggested that some students might wish to consider the implications of post-modernism for the research they will have carried out throughout the project. In my experience, most students do not become fully confident with the ideas and concepts of post-modernism until towards the end of the course, so it would seem appropriate to leave it until then. However, there's no harm in looking at that section now if you wish to do so.
In sociological surveys, it is generally considered essential to establish that the surveys conducted produce results which may be considered both reliable and valid. I have not seen much attention paid to these two concepts in the marking scheme for Communication Studies. You should in principle certainly pay attention to them, but they are not necessarily easy to get to grips with. I would suggest you check out with your communication lecturer how important she considers it that you investigate these two linked concepts. If you do not already have a hand-out or notes on them, I have provided a brief overview below. However, if you wish to investigate the concepts more fully, you'll find very thorough descriptions at Bill Trochim's Research Methods Knowledge Base. There's also a rather simpler and shorter overview of reliability and validity here at Colorado University.
In their discussion of scaling methods used for attitude measurement, Moser and Kalton (1971) provide the following definition of reliability:
A scale or test is reliable to the extent that repeat measurements made by it under constant conditions will give the same result (assuming no change in the basic characteristics - e.g. attitude - being measured).
A parallel with the method of the natural sciences is evident here - a laboratory experiment from which researchers claim to draw new knowledge will be subjected to attempts in other laboratories around the world to repeat the original experiment under very similar circumstances and attempt to observe the claimed results. Thus, for example, the claims made a few years ago for cold fusion have been largely discounted since no other researchers, despite exhaustive efforts, have been able to replicate the results claimed for the original experiment. It's certainly not immediately evident, though, how this natural science approach can be readily transferred to the social sciences. The natural science model would seem to imply that you should survey your audience sample and then survey them again; if you come up with the same results each time, then the survey may be said to be reliable. The problem here of course is that we are not dealing with bacteria in a Petri dish, but with thinking, emotional, rational, irrational people. If you repeat your survey immediately after first conducting it, it may appear to be more reliable than it is because the respondents will have remembered what they said first time around. If you repeat the survey a couple of days later, it may turn out that the original survey had caused them to reflect more deeply on the kinds of questions you asked and the survey itself then proves to be the cause of its own unreliability. If you conducted your survey about attitudes to forcible régime change in Iraq before the first attacks and repeated it after the blanket media coverage of the ensuing 'insurgency', then the intervening events may well have caused the respondents to change their views. So, by repeating your test early, you run the risk of encountering the 'memory effect'; by repeating it too late, you run the risk of intervening events having changed respondents' views.
One way to attempt to overcome this problem is to use the parallel forms (or alternate forms) method. Using this method a large set of questions is established which deal with the same set of constructs several times over. The questions are then randomly separated into two sets and each set is administered in turn to the same sample of people. The two sets can be administered at the same time, thereby avoiding the potential pitfalls mentioned above. The major problem is that, if it is to be genuinely reliable, this method requires that a large number of questions be generated and probably is not something you could reasonably be required to undertake.
Moser and Kalton (1971) provide the following definition of validity:
By validity is meant the success of the scale in measuring what it sets out to measure, so that differences between individuals' scores can be taken as representing true differences in the characteristic under study. It is clear that to the extent that a scale is unreliable it also lacks validity. But a reliable scale is not necessarily valid for it could be measuring something other than what it is designed to measure.
Sociologists distinguish several different measures of validity, which I shall not discuss in great detail here. If you consider you need to know more about them, check them out here at Bill Trochim's site.
Face validity is essentially a fairly commonsensical, subjective judgment as to whether or not a common thread you are looking for runs through all the items. If in your project you are dealing with a subject area you are unfamiliar with, it would be a good idea to get someone with greater expertise to check your survey for this face validity.
Content validity is a similarly subjective measurement, but adds to the requirement that a common thread should be covered that this thread should also be covered in its full range. Again, if you are investigating an unfamiliar area, it would be a good idea to consult an expert in an attempt to ensure content validity.
Predictive validity and concurrent validity are concerned with how well the measure can predict a future criterion and how well it can describe a present one.
A general warning is that you should not leave out any questions simply because the answers are "obvious" to you. It may be obvious to you that a certain kind of background music is just right for your video but it could be the kind of music which makes your audience throw up.
That example also makes it clear that you need to sit down and think carefully before you draw up your audience research - if background music is vital to your video and you haven't asked them what they like, then you won't have a clue how to choose it.
Remember also that your work is going to be assessed. You will be assessed for the quality of your "evaluative decision-making" - how can the examiner know if your decisions make sense for your audience if you haven't provided any information about the audience's musical preferences?
It might be wise to conduct a pilot questionnaire with some sample members of your target audience. It may well turn out that their understanding of 'funky', 'hip-hop', 'easy listening', 'classical', 'jazz' are quite different from yours. It may turn out to be more productive to ask them to name favourite pieces of music, composers or performers, rather than asking them to choose their preferred style.
It's helpful if pilot surveys are conducted on members of your target audience, but it's not absolutely essential. The main purpose is to avoid any ambiguities and misunderstandings - you've given them a list of newspapers from which to choose the one they read most often: when they choose 'Guardian', do they mean the national Guardian or their local Cornish Guardian, when they choose Times, are they referring to Murdoch's Times or their local Lake District Times?
Still, you should give some thought to getting pilot respondents who are in crucial ways similar to your target audience - there's no point in testing an elaborate and complex questionnaire on a highly literate pilot group if your intended audience can barely read.
There is also the problem that it is not always clear what questions you need to ask. For example if for some reason (limited budget perhaps) you have to produce something in print for people who do not normally read much, you can certainly ask them about their reading preferences, but the information they give you may not really be very helpful. So, in such a case, although your artefact is going to be in print, it may be sensible to ask your audience about their television and film preferences - hardly the most obvious question.
It might also turn out that you intend to produce a video artefact but your audience clearly state a preference for something in print - if you haven't asked them any questions about their reading preferences, how can you know what's appropriate?
Give some careful thought to the way that you formulate your questions. Suppose you are intending to produce a booklet to be made available in public libraries to people who have a fair amount of money. You make the assumption that the wealthy professionals you are after are well educated and therefore read a lot and frequently visit public libraries. As a result, you concentrate your questions on the kinds of things they read, trying to get a feel for the subject matter, layout and design etc. That could be quite misguided, though. If instead you determine your respondents' actual reading and library-borrowing habits, then you are more likely to obtain useful information. After all, it is possible that wealthy people who enjoy reading buy their books instead of borrowing from libraries.
It could even be the case that it's not appropriate to ask any direct questions at all. When we think of 'surveys', most of us think of questionnaires, but there are lots of different methods of audience research. If you are interested in finding out what people think of their college, it might be more productive to ask them to draw pictures, expressing their feelings, or to ask them to go through a selection of magazines and newspapers and ask them to choose photographs which match their feelings about the college. Certainly, if you are aiming for a very young audience, this is probably the only practicable way of getting the information you need, but it can be very revealing with adults too.
You may find consideration of 'psychographics' useful.
It would be a good idea to take a close look at the kinds of questions which are asked in the market research surveys which arrive in the junk mail to your door. You could also take a look at some examples of the research conducted for the National Readership Survey, which is an excellent example of its kind. Typical of the information which would be asked in such a questionnaire are the following:
Is there anything you've missed? Suppose you want to produce a video for your audience. Can they get to a public showing? Can they afford to buy a copy for themselves? If they can't afford it, you may have to include advertising: will your audience tolerate that? If they won't, you may have to produce something in print instead. So a whole new set of questions arises: What is their reading level? What typeface do they prefer? How would you deliver the printed artefact to them?
Take some time out now to write down a list of the essential information you need from your audience.
By now, then, you should have a detailed list of the objectives of your survey.
It would be advisable to run a pilot survey before you do the real thing because you need to check whether your questions are understood in the way you intend them. For example, if you want to know people's income, do you mean total income or disposable income? Do they know what you mean by "disposable income"?
Once you have run your pilot survey and tidied up such loose ends, you can proceed to figure out how to select people who are representative of your target audience. There is a wide variety of techniques used by sociologists and advertising agencies. You may not have time to use them, but you should show an awareness of them. You can find details of such methods in most introductory works on statistics, for example Success in Statistics (2nd edition) by Fred Castle (John Murray 1989).
The method you are most likely to use is one which is often used by such organisations as Gallup and MORI when they conduct opinion polls. It is not really a random sampling method, but is used because it is cheap and easy. The method is called "quota sampling" because a quota is set for different sections of the population according to sex, age, income, social class, occupation and so on.
The first thing you need to do is to establish your sampling frame. Suppose you are producing a student guide for your college. If you go along to administration they can probably tell you what proportion of students are A-level students, GNVQ students, evening class students etc. Using that information, you can ensure that you have the same proportions in the 100 people you survey. That is a simple example only - there may be many other variables which you need to take into account, for instance the educational level of the students, their age, their sex, how many live locally etc. If you are lucky, you may also acquire such information from the college authorities, which will enable you to ensure that your quota is representative of the target population.
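The arithmetic behind setting quotas is simple enough to sketch. In the Python fragment below the student proportions are invented for illustration; substitute whatever figures your own college office can supply.

```python
# Turn population proportions (e.g. from the college office) into quota
# sizes for a survey of a given total sample size.
def quotas(proportions, sample_size):
    """Round each group's share of the sample, then adjust the largest
    group so the quotas still sum exactly to the sample size."""
    sizes = {g: round(p * sample_size) for g, p in proportions.items()}
    largest = max(sizes, key=sizes.get)
    sizes[largest] += sample_size - sum(sizes.values())
    return sizes

# Invented proportions for a fictional college.
student_mix = {"A-level": 0.55, "GNVQ": 0.30, "Evening class": 0.15}
print(quotas(student_mix, 100))
# prints {'A-level': 55, 'GNVQ': 30, 'Evening class': 15}
```

With several variables (age, sex, locality and so on) you would repeat the same calculation for each, or for the combined categories, but the principle is identical: the sample mirrors the known proportions of the target population.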
Having established the appropriate quotas, you would probably leave the actual selection of respondents to your interviewers' discretion. Be careful - bias can creep in here. Your interviewer may choose to go and interview all the business studies students she can find in the library. Or she may choose to ask all those she can find in the canteen. Or she may ask all those she can find in the local pool hall. Those in the library may be "swots"; those in the pool hall may be "lads". Either way, the selection is biased. You could hope to overcome that bias by asking business studies lecturers to give their students your questionnaire. But what about those students who habitually skive? You may not be able to overcome such bias completely and, depending on the nature of your audience, bias may be quite high and difficult to overcome. You must however make it clear in your log and commentary that you are well aware of such sources of bias and should explain what you did to cope with them.
You could conduct your research by any one or more of the following methods:
Now let's take a look at each of those:
Don't think you have to find out everything about your audience yourself. It is quite possible that such details as you require may already have been gathered. For example, if you intend to conduct an advertising campaign in the local press, it is quite likely that the local newspaper can provide you with an audience profile. If you are providing marketing materials for a new course in your college, the marketing department may be able to show you materials for existing courses which they know to be successful with potential students.
Although this is certainly useful material for your research, you should beware of believing research data simply because they are given to you by people in authority. Your college marketing department may well have conducted no research whatsoever and may simply be telling you what is attractive to them, on the assumption that it must therefore be attractive to potential students. Try to establish whether the research really has been carried out and, if you can, get hold of the research results so that you can check them for yourself.
There are, though, a number of secondary sources which you can generally rely on:
Bear in mind, though, that many of these organisations have spent a lot of money on their research, so you will have to adopt a very subtle approach if you want them to give you their results for nothing.
Under this heading we shall consider all questionnaires where the respondent has her own printed copy to look at, tick boxes on, write comments on and so on.
The first question you have to answer is: how do you get your questionnaire to your respondents? You could
It might be better to take your questionnaires to a certain area where you think you will find the kind of person you are after. You may find that your local council can help you here, or perhaps a local political party will have some sort of demographic profiling which can help you. It is normally less time-consuming to leave a note saying you will pick up the questionnaires later, rather than to wait while they are filled in.
Disadvantages of all printed questionnaires
If you have your own website, you may find that the website hosting service provides some free questionnaire software which you could use. Depending on how competent you are with computer software, you could maybe rig up some system whereby the respondent has to enter a PIN (which perhaps you send to them by email) in order to complete the questionnaire.
You could, of course, simply send out your questionnaire by email. If you do so, then it's generally best to send the questionnaire as an attachment, rather than in the body of the email, because it's difficult to predict how the email will be formatted in the recipient's email client and it can be very difficult and confusing for them to complete. Microsoft Word makes it easy to produce questionnaires in which the respondent can type their responses into text fields, checkboxes etc., but then you have to be sure that they have Microsoft Word. The ideal solution would be to produce a form which the user can complete using Adobe Acrobat Reader (PDF), but you need the appropriate (not cheap) software to produce it. This is changing all the time, of course, so I suggest you consult, say, the technicians and/or the marketing department in your college to see what they advise.
By "interview surveys" we refer to those where the interviewer has a number of questions to ask and it is she who records the answers in some way, not the respondents. This could involve simply reading out the questions which are on a printed questionnaire. The interviewer then simply ticks boxes, makes notes etc. They could also be much less structured than that - the interviewer could have a list of questions which need to be asked, but is free to work them into an informal discussion as she sees fit. This method is often referred to as semi-structured or unstructured interviewing.
The advantages and disadvantages of telephone surveys are similar to those of face-to-face interviews. In addition, they are prohibitively expensive and respondents tend to run out of patience quickly.
The major problem with surveys, of whatever kind, is that people, for one reason or another, will lie. A recent example is the research which shows that eleven percent of men make love while watching television whereas only five percent of women do. It could be that those women are especially sexually active, but I rather suspect that some of the men are exaggerating because they like to see themselves as studs. It makes no difference that the survey is confidential and no one can identify them. Another example is some research carried out in Exeter to find how often people go to the theatre. The responses suggested that more people go to the theatre than there are seats available. Presumably, respondents wanted to make the right impression on the student interviewers. If you ask a respondent about their income, they could exaggerate it to make an impression on you, or they could deliberately underestimate it in case the taxman finds out - even if no-one could possibly identify them. The 1992 General Election is another example. The many polls which were conducted daily suggested a slight lead for Labour; in the event Conservatives won a comfortable victory. Why? No-one really knows, but it seems likely that respondents thought that they gave a better image of themselves if they said they would vote Labour. There are numerous examples of products being designed in response to consumers' expressed wishes only for it to be found that they hardly sold at all - the Ford Edsel being the classic case.
Therefore it may be desirable for you to attempt to overcome this problem by observing how people actually behave. There may be other good reasons too. If you are going to produce an artefact for 5 year olds, it is unlikely that they will be able to give comprehensive responses to a written questionnaire, so observation might be the only reasonable choice. There is, of course, always the problem that your presence can have an effect on the subjects' behaviour, but you can overcome this by simply remaining quiet and adopting a low profile so that in time your audience forget your presence. Alternatively, you could simply leave a video or audio recorder running.
This kind of research can be very time-consuming and you may well find that you simply cannot make enough time available to do it thoroughly. However, in most cases it is at least worth trying. If you find that the results you come up with are inadequate, then you should make clear in your final assessment of your research what the shortcomings are and why they occurred.
Before you start to design your questionnaire, make sure that you examine a number of existing questionnaires and try to determine what they have in common. Note how many of them will make use of Schramm's fraction of selection - often a reward is offered for the return/completion of the questionnaire and some effort is made to make the questionnaire appear easy to complete, for example by means of tick boxes, grouping of question types, even by using a relatively small typeface so that the questionnaire appears short.
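For reference, Schramm's fraction of selection is usually stated as:

```latex
\text{likelihood of selection} = \frac{\text{expectation of reward}}{\text{effort required}}
```

The implication for questionnaire design is that anything which raises the expected reward (a prize, a promise of useful results) or lowers the apparent effort (tick boxes, a short-looking layout) makes completion more likely.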
Make it attractive. That means attractive to your audience, not necessarily to you. One student who was herself a benefits claimant, was intending to produce a guide to benefits for other claimants. She produced a very tidy-looking questionnaire printed on a fancy laser printer. When she attempted to interview respondents at the DHSS, nearly all refused. She tried again, this time using a hand-written questionnaire, which was much more successful. Presumably, in the first case the questionnaire looked so official that the potential respondents thought she was some kind of DHSS 'spy'.
Consider what technology is available to improve the presentation of your questionnaire and make sure you know how to use it efficiently. You may even find that in your college there is already software for compiling and processing questionnaires - ask around before you start.
Put the title of your study at the top of the first page. Avoid calling it "questionnaire", "audience research" or similar. That will really turn people off. Try to think of a title which makes it appear of some relevance to your respondents. If you can think of a title which suggests that they will get something out of participating, so much the better (think of the fraction of selection).
Write a brief introduction. People like to know what the whole thing is about. Again, if you can suggest that the respondents will benefit in some way from the completed artefact, so much the better. Should you let respondents know that you are a student? It all depends on who your audience are - other students are often only too willing to help someone who has to do assessed work, but other people may be inclined to take students' work less seriously than they otherwise might.
If you can possibly get someone in a key position or some organisation to endorse your research (perhaps even allowing you to use their logo), so much the better. But be careful who you choose. If the student we referred to above had said that she had the endorsement of the DHSS, that would probably have reduced her chances.
From at least some of your respondents you may very well require a considerable commitment of time and energy, with the prospect of gaining little in return for helping you. Maybe you could usefully implement what Tom Peters (1995 : 74) calls 'foot-in-the-door research'. He quotes a fascinating study where subjects were induced to put a very small sign in their front window supporting the cause of traffic safety. Later, they were asked to display a great billboard outside their home, which required letting outsiders dig holes in their front lawn. Most agreed, whereas 95% of those who had not been asked to make the first small commitment refused to allow the billboard. See if you can somehow adopt this incremental approach.
You must test your questionnaire on a sample of your audience before you start conducting your survey. Here is an example of the sort of thing which can go wrong: a student is commissioned to produce a guide for students and staff to the facilities available in the college; one of her questions is formulated as follows:
Have you ever had any problems using any of the following:
Please enter a ✓ where appropriate
It looks very neat and professional, but there are two problems with this. The first is that many respondents take a ✓ to mean "I can handle this OK - no problems". It might have been better to ask respondents to enter a ✗, which is more commonly associated with "something wrong".
The second problem is that many respondents will reply that they have no problems with such hardware. One reason many respondents will say that they had "no problems using" the hardware is that they have never tried to do so, perhaps because they anticipate that they would have problems if they did. Pre-testing on a few people and discussing their answers with them will help to reveal such problems before you conduct your survey in earnest.