– Other research-related activities that should be addressed at the fine-tuning stage are:
  – Examine the variables to be studied.
  – Review the research questions with the intent of breaking them down into specific second- and third-level questions.
  – If hypotheses (tentative explanations) are used, be certain they meet the quality test.
  – Determine what evidence must be collected to answer the various questions and hypotheses.
  – Set the scope of the study by stating what is NOT a part of the research question.

Investigative questions are questions the researcher must answer to arrive satisfactorily at a conclusion about the research question. Typical investigative question areas include:
– Performance considerations.
– Attitudinal issues (like perceived quality).
– Behavioural issues.

Measurement questions are the questions asked of participants, or the observations that must be recorded. Measurement questions should be outlined by the completion of the project planning activities. Two types of measurement questions are common in business research:
– Predesigned measurement questions are questions that have been formulated and tested previously by other researchers. Such questions provide enhanced validity and can reduce the cost of the project.
– Custom-designed measurement questions are questions formulated specifically for the project at hand.

Every evaluation, like any other research, starts with one or more questions. Sometimes the questions are simple and easy to answer. (Will we serve something close to the 50 people we expect to?) Often, however, the questions can be complex, and the answers less easy to find. (Which, or which combination, of the three parts of our intervention will affect which of the two behaviour changes we seek within participants?)

The questions you ask will guide not only your evaluation, but your program as well. By your choice of questions, you're defining what it is you're trying to change.[7]

[7] Source: Community Toolbox, https://ctb.ku.edu/en/table-of-contents/evaluate/evaluate-communityinterventions/choose-evaluation-questions/main, as at 12 March 2018.

You choose your evaluation questions by analyzing the community problem or issue you're addressing, and deciding how you want to affect it. Why do you want to ask this particular question in relation to your evaluation? What is it about the issue that is the most pressing to change? What indicators will tell you whether that change is taking place? Is that all you're concerned with? The answer to each of these and other questions helps to define what it is you're trying to do, and, by extension, how you'll try to do it.

For example, what's the real goal of a program to introduce healthier foods in school lunches? It could be simply to convince children to eat more fruits, vegetables, and whole grains. It could be to get them to eat less junk food. It could be to encourage weight loss in kids who are overweight or obese. It could be to educate them about healthy eating, and to persuade them to be more adventurous eaters.
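One way to keep such questions honest is to pair each candidate evaluation question with the indicator that would answer it. The short Python sketch below does this for the school-lunch example; the pairings and indicator definitions are our own illustrative assumptions, not part of the Community Toolbox material.

```python
# Hypothetical pairing of candidate evaluation questions with the
# indicators that would answer them, for the school-lunch example.
evaluation_plan = {
    "Do children eat more fruits, vegetables, and whole grains?":
        "servings of healthy foods consumed per child per week",
    "Do children eat less junk food?":
        "servings of junk food consumed per child per week",
    "Do overweight or obese children lose weight?":
        "change in BMI percentile over the school year",
}

for question, indicator in evaluation_plan.items():
    print(f"Q: {question}\n   indicator: {indicator}")
```

Whichever pairings you settle on, the questions with no measurable indicator attached are the ones most likely to drift.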
The evaluation questions you ask both reflect and determine your goals for the program. If you don't measure weight loss, for instance, then clearly that's not what you're aiming for. If you only look at an increase in children's consumption of healthy foods, you're ignoring the fact that if they don't cut down on something else (junk food, for instance), they'll simply gain weight. Is that still better than not eating the healthy foods? You answer that question by what you choose to examine - if it is better, you may not care what else the children are eating; if it's not, then you will care.

Things to consider when choosing evaluation questions

What do you want to know?

Academics and other researchers may approach choosing research questions differently from those involved in community programs. In addition to their practical and social applications, they may choose problems to research simply because they are interesting, or because they tie into other work that they or their colleagues are doing. Community service workers and others directly involved in programs, on the other hand, are concerned specifically with improving what they're doing so they can help to enhance the quality of life for the participants in their programs, and often for the community as a whole. Since we assume that most people using this chapter of the Tool Box are likely to be practitioners in the community, let's look at some of the reasons they might pick a particular area to evaluate.

If you're running, or about to run, a program to affect a community issue or problem, you might want to know one or more of the following:
– Is there a cause-and-effect relationship (i.e., does one action or condition directly cause another) between a particular action and a particular change? Usually, you'll be concerned with this in terms of your program. (Does our smoking-cessation support group help members to quit smoking?) Sometimes, however, it might be important to look at it in terms of the community. (Does a smoking ban in public buildings, bars, and restaurants lead to a decrease in the number of community residents who smoke?)
– If we try this new method, what will happen? Will the program that worked in the next town, or the one that we read about in a professional journal, work with our population, or with our issue?

Why are you interested?

Some of the same differences between the concerns of researchers and the concerns of practitioners may hold here. Those interested primarily in research may simply be moved by curiosity or by the urge to solve a difficult problem.
As a practitioner, on the other hand, you'll want to know the effects of what you're doing on the lives of participants or the community. Your interest, therefore, might grow from:
– Your experience with an issue and its consequences in a particular population or community
– Your knowledge of promising interventions and their effects on similar issues
– The uniqueness of the issue to your particular community or population
– The similarity of the issue to other issues in your community, or the issue's interaction with other issues

Your interest as a community worker has to be considered in relation to your evaluation and the purpose of your program. Your basic intent is probably to improve things for the population or the community, but in what ways and by what means? Are you trying out some new things in the hope of making an already-successful program more successful? Are you importing a promising practice to see if it works with your population? Are you trying to solve a particularly difficult professional problem?

A community mediation program found that it was having little success in cases involving adolescents. After conferring with other similar programs - all of which were struggling with the same issue - mediators in the program devised a number of strategies to try to reach youth. The overall question they were concerned with - "Will these strategies make it possible to mediate successfully where teens are involved?" - was one with real consequences.

Is the issue you're addressing important to the community or to the society?

Media reports about the issue, or community attempts to address it, are clear indicators that it is socially important. If it affects a particular group - violence in a given neighbourhood, a high rate of heart disease among middle-aged black males - it has an obvious impact on the community and society. If your program or intervention has the potential to help resolve the issue in other places, to be used by community workers in other fields, or to be applied in a number of ways, the importance of your analysis increases even further. If addressing the issue can lead to long-term positive social change, then the analysis is vitally important.

All of this affects your evaluation and the questions you ask. If the issue is one of social importance, then your evaluation of your work is socially important as well. Are you addressing the aspects of your program or intervention that are of the greatest value to participants, the community, and society? If not, how might you begin to do so?

How does the issue relate to the field?

The real question here is not whether the issue is important to the field - if it's important to the community, that's what matters.
However, you should explore whether there's evidence from the field to apply to the issue. Is what you're doing likely to be more effective than other approaches that have been tried? If your approach isn't effective, are there other approaches out there that hold more promise? Can the published material about the issue help you understand it better, and give you better ideas about how to address it?

Is the issue general, rather than specific to your population or community?

Consider whether there is evidence that the issue occurs with a variety of populations and under a range of conditions. Also consider whether the observations or methods used to determine the issue's existence are accurate, and whether they can be used in different situations and with different groups. Your evaluation may give you valuable information to pass on to practitioners in different fields or different circumstances.

Who might use the results of your evaluation?

If evaluation shows that your program or intervention is successful, that's obviously valuable information, especially if what you're evaluating is innovative and hasn't been tried before. Even if the evaluation turns up major problems with the intervention, that's still important information for others - it tells them what won't work, or what barriers have to be overcome in order to make it work. Some of those who might use your results include individuals and groups affected by the issue; service providers and others who have to deal with the problem (in the case of youth violence, for instance, this last group might include police, school officials, small business owners, parents, and medical personnel, among others); advocates and community activists; and public officials and other policy makers.

Whose issue is it?

Who has to change in order to address the issue? The focus of the intervention will tell you whom the evaluation should focus on. Some possibilities:
– Those directly affected by the problem
– Those in direct personal contact with those directly affected: parents, spouses and children, other relatives, friends, neighbours, co-workers
– Those who serve or otherwise deal with those directly affected: medical professionals, police, teachers, social workers, therapists, etc.
– Administrators and others who serve or deal with those indirectly affected: hospital or clinic directors, police chiefs, school principals, agency directors, etc.
– Appointed or elected officials and other policy makers
Why is it necessary to choose evaluation questions carefully?

You know why you're running your program. Evaluating it should just be a matter of deciding whether things are better when you evaluate than they were before you started, right? Well, actually...wrong. It's not that simple. First of all, you need to determine what "things" you are actually looking at (remember the school lunch example?). Second, you will need to consider how you will determine what you're doing right, and what you need to change. Here's a partial list of reasons why choosing questions beforehand is important.

– It helps you understand what effects different parts of your effort are having. By framing questions carefully, you can evaluate different parts of your effort. If you add an element after the start of the program, for instance, you may be able to see its effect separate from that of the rest of the program...if you focus on examining it. By the same token, you can look at different possible effects of the program as a whole. (Do adult basic education learners read more as a result of being in a program? Are they more likely to register to vote? Do their children improve their school performance?)
– It makes you clearly define what it is you're trying to do. What you decide to evaluate defines what you hope to accomplish. Choosing evaluation questions at the start of a program or effort makes clear what you're trying to change, and what you want your results to be.
– It shows you where you need to make changes. Carefully choosing questions and making them specific to your real objectives should tell you exactly where the program is doing well and where it isn't having the intended effect.
– It highlights unintended consequences. When you find unusual answers to the questions you choose, it often means that your program has had some effects you didn't expect. Sometimes these effects are positive - not only did people in the heart-healthy exercise program gain in fitness, but a majority of them report changing their diet for the better and losing weight as well - sometimes negative - obese children in a healthy eating program actually gained weight, even though they were eating a healthier diet - and sometimes neither. Like the side effects of medication, the unintended consequences of a program can be as important as the program itself. (In the case of the exercise program, the changes in diet might do as much as or more than the exercise to maintain heart health, for instance, and may point toward changing the focus of the program in some way.)
– It guides your future choices. If you find that your program is particularly successful in certain ways and not in others, for example, you may decide to emphasize the successful areas more, or to completely change your approach in the unsuccessful areas. That, in turn, will change the emphasis of future evaluation as well. Participatory evaluation involves stakeholders in setting the course of the program, making it more likely that it will meet community needs.
– It provides focus for the evaluation and the program. Choosing evaluation questions carefully keeps you from becoming scattered and trying to do too many things at once, thereby diluting your effectiveness at all of them.
– It determines what needs to be recorded in order to gather data for evaluation. A clear choice of evaluation questions makes the actual gathering of data much easier, since it usually makes obvious what kinds of records must be kept and what areas need to be examined.

When should you choose questions and plan the evaluation?

Evaluation questions, since they help shape your work, should be chosen, and the evaluation planned, when planning the overall program or effort. That gives you time and room for a participatory process, and gives you the chance to use the evaluation as an integral part of the program. As the program unfolds, you might find yourself adjusting or adding questions to reflect the reality of what is happening, but unless your original questions were misguided (you were wrong about what behaviour had to change in order to produce certain results, for instance), they should serve you well.

Now let's discuss reality for many community-based and grassroots programs. They're often understaffed and underfunded. Staff members may be underpaid, and may often work many more hours a week than they're paid for, because of their dedication to social justice and social change. Most or all program staff may even be volunteers, with full-time jobs and family responsibilities aside from their work in the program. Initial evaluation in these circumstances is often anecdotal - i.e., based on participants' comments and stories about their progress and staff members' personal, informal observations. A formal evaluation will probably wait until there's funding for it, or until someone has the time to coordinate or take charge of it. In that case, the "when" becomes "as soon as you can."

You may be dealing with a program that has just started, or with one that's been operating for a long time. You may know that changes need to be made, or it may seem that the program is in fact meeting its goals. Whatever the situation, evaluation questions need to be chosen, and an evaluation planned, that will give you the information you need to improve your work. Even with a program that's been going on for a while, the questions can still help you define or redefine your work, and will certainly help you improve it over the long term.

Who should be involved in choosing questions and planning the evaluation?

If you've consulted other sections of the Tool Box concerned with evaluation, you probably know that we advocate that all stakeholders be involved in planning the evaluation. We believe that the best evaluation is participatory: it represents the views and knowledge of the people affected by the issue to be addressed. The list of potential participants is essentially the same as that under "Whose issue is it?" in the first part of this section: those directly affected and their close contacts; those who work with those directly affected, or who deal directly or indirectly with them and the issue; and public officials. To these groups, we might add other concerned citizens, and those indirectly affected by the issue. (A shop owner may not be a victim of neighbourhood violence, but fear of that violence might nonetheless keep customers away from his shop, for instance.)
Evaluations that involve all stakeholders have a number of advantages over those conducted in a vacuum by outside evaluators or agency or program staff. They're more likely to reflect the real needs of the community, and they bring to bear the community's knowledge of its own context - history, relationships, culture, etc. - without which a program and its evaluation can go astray.

Participation can range from simple consultation before the fact to complete involvement in every aspect of an evaluation - assessment, planning, data gathering, analysis, and passing on the information. In general, the greater the involvement of stakeholders, the better, but in-depth involvement of the stakeholders may not always be possible. There are time disadvantages to participatory evaluation - it takes longer - and there are logistical concerns as well. Participants may have nothing in their backgrounds to prepare them for research, so training in a number of areas may be necessary, requiring skill, careful planning, and yet more time. The level of participation your evaluation can sustain, therefore, relies to some extent on your time constraints and your capacity to train and support participants.

How do you choose questions and plan the evaluation?

Choosing questions

When you choose evaluation questions, you're really choosing a research problem - what you want to examine with your research. (Evaluation, whether formal or informal, is in fact research.) You have to analyze the issue and your program, consider various ways they can be looked at, and choose the one(s) that most nearly tell you what you want to know about what you're doing. Are you just trying to determine whether you're reaching the right people in sufficient numbers with your program? Do you want to know how well an intervention is working with specific populations? What kinds of behaviour changes, if any, are taking place as a result? What the actual outcomes are for the community? Each of these - as well as each of the many other things you might want to know - implies a different set of evaluation questions. To find the questions that best suit your evaluation, there is a series of steps you can follow.

Describe the issue or problem you're addressing

A problem is a difference between some ideal condition (all people 10 years of age or older should be able to read; people should be able to find a decent job) and some actual condition in the community or society (a 25% illiteracy rate among those attending a particular high school; 50% unemployment among minority youths in a particular city). This may mean the absence of some positive factor (qualified teachers and adequate educational facilities; entry-level jobs that are reachable from minority neighbourhoods) or the presence of some negative factor (students' difficulty with English; discrimination against minority job applicants), or some combination of these.
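Because a problem is defined here as a measurable discrepancy, it can be useful to write it down in exactly those terms. Below is a minimal Python sketch using the illiteracy figures from the paragraph above; the class and field names are our own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    """A problem framed as a gap between an ideal and an actual condition."""
    description: str
    ideal: float   # the condition you want, as a proportion (e.g., all can read)
    actual: float  # the condition observed in the community

    @property
    def gap(self) -> float:
        # The discrepancy between ideal and actual is the problem itself.
        return self.ideal - self.actual

literacy = ProblemStatement(
    description="Reading ability among students at one high school",
    ideal=1.00,    # everyone aged 10 or older able to read
    actual=0.75,   # a 25% illiteracy rate
)

print(f"{literacy.description}: gap of {literacy.gap:.0%}")
```

Writing the ideal and actual conditions side by side like this also forces the question of whether the gap is large enough, and important enough, to be worth addressing - the subject of the next step.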
To describe the issue or problem:
– Describe the ideal condition, including the positive factors present and the negative factors absent. What should it look like if everything was as you'd want it to be?
– Describe the actual conditions that constitute the problem of interest, including the negative conditions present and the positive conditions absent. What are conditions really like?
– Describe the actual problem in terms of what you're hoping to change. What positive factors do you want to produce and/or what negative factors do you want to eliminate?

Describe the importance of the problem

To be sure that this is a problem you really should be addressing, consider its importance to those affected and to the community. Is the discrepancy between ideal and actual conditions of the kind and size to be considered important? What are the consequences (positive and negative) of the problem? Who experiences these consequences (e.g., program participants; their families, friends, and peers; service providers, policymakers, and others)? How many people are affected? How often and for how long are they affected? What is the intensity of the effect? How much does the fact that the problem is experienced to this degree by these people matter to them?

You might also ask whether the effects of the problem matter to society, but in fact, that shouldn't make a difference. If they matter to the people who experience them, they're important. Society doesn't always consider a problem important if it's only a problem for a minority, or for a group that's generally ignored (the poor, the homeless). In light of these factors, decide whether the problem is important to the evaluation.

Describe those who contribute to the problem

Whose behaviour, by its presence or absence, contributes to the problem? Are they in the program participants' personal environment (participants themselves, family, friends), service environment (teachers, police), or broader environment (policymakers, media, general public)? For each of them, consider the types of behaviour that, by their presence or absence, contribute to the discrepancy that constitutes the problem.

Assess the importance and feasibility of changing those behaviours

How important is each of these behaviours to solving the problem? What are the chances that your effort can have any effect on each of them?
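When several behaviours could be targeted, one pragmatic way to compare them is to score each on importance and on feasibility and rank the products. The Python sketch below is a hypothetical prioritisation aid, not a method prescribed by the Toolbox; the candidate behaviours and all the scores are invented for illustration.

```python
# Candidate behaviours, each scored 1-5 for importance to solving the
# problem and 1-5 for the feasibility of changing it (all scores invented).
candidates = [
    ("Employers adopt fairer screening of minority applicants", 5, 2),
    ("Job seekers complete interview-skills training",          3, 5),
    ("Policy makers offer tax incentives to local businesses",  4, 3),
]

# Rank by the product of importance and feasibility, highest first.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

print("score  behaviour")
for behaviour, importance, feasibility in ranked:
    print(f"{importance * feasibility:5d}  {behaviour}")
```

A two-number score is crude, of course; its value is mainly in making the reasoning behind your choice of change objective explicit and discussable.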
Describe the change objective

Based on the above analysis, choose behaviour changes to target in specific people. Where you can, specify the desired levels of change in targeted behaviours and outcomes (those changes in conditions that should occur if the problem were to be solved). For example, a behaviour change goal might be an increase in pre-employment capacity - self-presentation, job-seeking, interview skills, interpersonal competence, resume writing, basic skills, etc. - for minority job seekers aged 18-24. Or you might instead, or in addition, target policy makers, with the goal of having them offer tax incentives to businesses that locate in or close to minority communities.

This is a way of defining your work. If you're planning the evaluation as you plan the program - as you would in the ideal situation - then the questions you're asking the evaluation to examine reflect the problems you're trying to solve, and this kind of analysis is important. If you're starting an evaluation of a program that has been in place for some time, then you're going to have to do some figuring after the fact about what consequences you think (hope) the program is having, and what they will lead to. You may be talking about changes in specific participant behaviours, about behaviours that act as indicators of other changes, or about results of another sort (participants gaining employment, for instance, which may have a direct relationship to participant behaviour or may have more to do with local economic conditions).

Make sure that the expected changes would constitute a solution or substantial contribution to the problem

If you conclude that they would not result in a substantial contribution, revise your choice of problem and/or your selection of targeted people and actions as necessary. If you think that what you're looking at in an evaluation doesn't address the problem, then you should be looking at something else. If the objectives you've chosen do constitute all or a substantial part of a solution, you've found your questions.

Setting

Now that you've chosen your questions, there may be other factors to consider, such as the settings in which the evaluation will be conducted. If your program is relatively small and/or has only one site, this wouldn't be an issue. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of setting may be important:
– If your program is very large and/or has multiple sites
– If different sites provide different services, activities, or conditions, or use different methods
Multiple sites

Multiple sites can present a challenge for an evaluation because, although every effort may be made to make the program at all sites exactly the same, it will seldom be so. If the program relies on human interaction - teacher/learner, counsellor/counselee, trainer/trainee, doctor/patient, etc. - there will be differences from site to site depending on the people staffing each. (The exception is when the same people staff all sites, providing the same services at each site at different times or on different days.) Even if all are equally competent, no two staff members or teams will do things in exactly the same way or relate to participants in exactly the same way, and the differences can be reflected in differences in outcomes. If methods or other factors vary from site to site, that will further complicate the situation.

Furthermore, the physical character of a site can influence not only program effectiveness, but also the recruitment of participants and whether or not they remain in the program long enough for it to have some effect (often called "retention"). The site's layout, comfort, apparent safety and security, and - often most important - how easy it is to get to, all affect whether participants enrol and stay in the program.

Where you do have the capacity to evaluate all sites, it will be helpful to build into the evaluation a method of comparing them. This will allow you to identify, and adopt at all sites, the methods, conditions, or activities that seem to make one site particularly successful, and to identify, and change at all sites, the methods, conditions, or activities that seem to create barriers to success at others.

If you can't evaluate each site separately, you'll have to decide which one(s) will give you the information that will most help in adjusting and improving your program. If you're most concerned with assessing your overall effectiveness, this may mean evaluating the site(s) closest to the program norm, in terms of methods, conditions, activities, goals, participant/staff interaction, etc. If, on the other hand, your chief consideration is learning whether a particular new or unusual method or situation is working, you may find yourself evaluating the site(s) least like the others. If sites appear only minimally different, some other considerations that may come into play are:
– The number and character of participants at the site. Participants at a particular site may be experiencing the effects of the issue more severely, or may have a particularly important characteristic, such as a language barrier.
– The ability and willingness of participants and staff to support the evaluation research. If staff at a particular site are unable or unwilling to record observations, attendance, and other key information, or if site participants are unable or unwilling to be interviewed or monitored, evaluation at that site might be difficult.
– The stability of the population at the site. If participants at a site come and go at a rapid rate - unless that's the program's intent - it can be difficult to gain information that contributes to an accurate evaluation. An exception, of course, occurs here if one point of the evaluation is to find out why participants stay for so short a time, and to try to develop methods or create conditions to assist them to remain in the program long enough to reach their goals.
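Even a very simple cross-site comparison can surface the kind of differences discussed above. The Python sketch below compares retention rates across sites and flags outliers; the site names, counts, and the 60% threshold are all hypothetical assumptions for illustration.

```python
# Hypothetical enrolment and completion counts for three program sites.
sites = {
    "Downtown":  {"enrolled": 80, "retained": 62},
    "Northside": {"enrolled": 45, "retained": 21},
    "Riverview": {"enrolled": 60, "retained": 49},
}

THRESHOLD = 0.60  # flag sites retaining fewer than 60% of enrollees (invented)

for name, counts in sites.items():
    rate = counts["retained"] / counts["enrolled"]
    flag = "  <- investigate barriers here" if rate < THRESHOLD else ""
    print(f"{name}: retention {rate:.0%}{flag}")
```

A flagged site is a prompt for questions, not a verdict: low retention might reflect the site's accessibility, its staffing, or simply a more transient population, as noted above.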
Sites with different methods, conditions, activities, or services

Programs are sometimes organized so that different methods are used, or different services provided, at different sites. In other cases, conditions may vary from site to site because of the sites' geographical locations or the available space. The ideal situation is to evaluate all sites and compare the effects of the different methods, conditions, or services. When that's not possible, you'll have to decide what's most important to find out. If the methods, services, or conditions at a particular site are new or innovative, you may want to evaluate them, rather than those that have a track record. There may be a particular method or service that you want to evaluate, in which case the decision about which site to choose is obvious. The decision should be based on what makes the most sense for your program, and what will give you the best information to improve its effectiveness.

When you have the capacity to choose more than one site to evaluate, it often makes sense to choose two or three sites that are different - especially if each is representative of other sites in the program or of program initiatives - so that you can compare their effectiveness. Even where sites are essentially similar, you'll get more information by evaluating as many as you can.

Participants

Another factor to consider is the participants whose behaviour, activity, or circumstances will be evaluated. If your program is relatively small, this might not be an issue - the participants will simply be all those in the program. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of participants may be important:
– If your program includes different groups of participants (groups that are in different stages of the program, or that are exposed to different methods or services).
– If groups of participants belong to populations with distinctly different cultures, stemming from race, ethnicity, class, religion, or other factors.

Multiple groups

There are a number of reasons why there might be multiple groups of participants in a program. You might start different groups at different times, either because the program has a rolling start schedule (when there are enough people for a class or training group, one will begin), or because the program is aimed at different groups (for example, 5-year-olds, 8-year-olds, and 14-year-olds).
You might also be trying different strategies with different groups.

The Brookline Early Education Project (BEEP), a program aimed at school readiness for children from before birth through age 5, recruited expectant families in three cohorts over the course of three years. In addition, families in each cohort were assigned to one of three levels of service. Thus, there were actually nine different groups among BEEP participants, even though, by the third year, all were receiving services at the same time.

Once again, if there's no problem in evaluating the whole program, participants will simply include everyone. If that's not possible, there are a number of potential choices:
– Evaluate your work with only one group, with the expectation that work with the others will be evaluated in the future. In this case, you'd probably want to choose the one for whom you consider the program most crucial. They might be at greater risk (of heart attack, of school failure, of homelessness, etc.) or might be experiencing the issue at a high level of intensity (daily shooting incidents in the neighbourhood, high rates of teen pregnancy, massive unemployment).
– Include a small number (2-4) of groups in your evaluation. You might want to choose groups with contrasting characteristics (different ages, for example, or groups addressed by different strategies). On the other hand, depending on the focus of your evaluation, you might want groups that are essentially similar, to see whether your work is consistent in its effects.
– Choose a few participants from each group to focus your evaluation on. While this won't give you a complete picture, it should give you enough information to tell where your program is accomplishing its goals and where it needs improvement. The differences in the ways participants in different groups respond to the program (assuming there are differences) can also give you ideas for ways to change what you're doing.

Participants from different populations and cultures

Cultural factors can have an enormous effect on participants' responses to a program. They can govern conceptions of social roles, family responsibilities, acceptable and unacceptable behaviour, attitudes toward authority (and who constitutes authority), allowable topics of conversation, morality, the role of religion - the list goes on and on. In planning a program that involves members of different populations and cultures, you essentially have three choices:
– Plan your program and implement it in the same way for everyone. If the program involves groups - classes, support groups, etc. - participants' membership is determined not by population group but by when they sign up, what time of day they can attend, what they sign up for, or whatever other criteria make sense logistically.
– Plan your program to be as culturally sensitive as possible, and try to screen out anything that might be offensive to or difficult for any group. In this instance, you might be prepared to respond if participants from a particular population requested a group of their own.
– Divide participants by cultural group and plan different culturally sensitive approaches for each. Your overall approach might be the same for everyone, but the way you apply it might differ by culture.

In any of these instances, it would probably be important to understand how well your approach is working with members of the various populations. If you can evaluate the whole program, make sure that you include enough members of each group so that you can compare results (and their opinions of the program) among them. If your evaluation possibilities are limited, then your choices are similar to those for multiple groups of other kinds, and will depend on what exactly is most useful for you.

There are interactions between the choice of sites and the choice of participants here. You may be concerned about the effects of your program on a particular population, which may be largely concentrated at one site. In that case, if you have limited resources, you may want to evaluate only that site, or that site and one other.

Regardless of other considerations, you may want to set some guidelines about whom you include in the evaluation. How long do people have to be in the program, for instance, before they're included? In other words, what constitutes participation? (This also sets a criterion for who should be counted as a drop-out: anyone who starts, but leaves before meeting the standard for participation.) What about those whose attendance is spotty - a few days here, a few days there, sometimes with weeks in between? Do they have to have attended a certain number of hours to be considered participants?

These issues can be more complex than they seem. People may start and drop out of a program numerous times, and then finally come back and complete it. Many others start programs numerous times, and never complete them. It's usually impossible to tell the difference until someone actually gets to the point of completion, whatever that means for the particular program. In a reversal of the start-many-times-before-completing scenario, there can be a few people who stay in a program right up till the end and then drop out. This may have to do with the fear of having to cope with success and a change in self-image, or it may simply be a pattern the person has learned to follow, and will have to unlearn before being able to complete the program. Should any or all of these people be included in or excluded from an evaluation, either before (because of their history in the program) or after the fact? That's a decision you'll have to make, based on what their inclusion or exclusion will tell you. Just be sure that your evaluation clearly describes the criteria that you decide to use for your participants.
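Criteria like these are easier to apply consistently once they're written down as explicit rules. Here is a minimal Python sketch of such a rule; the 20-hour threshold and the classification labels are invented assumptions standing in for whatever standard your program actually adopts.

```python
def classify_enrollee(hours_attended: float, completed: bool,
                      min_hours: float = 20.0) -> str:
    """Classify an enrollee against invented participation criteria.

    Anyone who completes the program is a completer; anyone meeting the
    hours threshold counts as a participant; anyone who started but fell
    short of the threshold is counted as a drop-out.
    """
    if completed:
        return "completer"
    if hours_attended >= min_hours:
        return "participant"
    return "drop-out"

print(classify_enrollee(hours_attended=35, completed=True))   # completer
print(classify_enrollee(hours_attended=25, completed=False))  # participant
print(classify_enrollee(hours_attended=6, completed=False))   # drop-out
```

Whatever rule you choose, recording it alongside the evaluation results lets later readers judge exactly who was, and was not, counted.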
If you're an outside evaluator or academic or other independent researcher

Up to this point, we've largely ignored the evaluation difficulties faced by evaluators not directly connected with the organization or institution running the program they're evaluating. If you've been hired or designated by the organization or a funder to evaluate the program, you have to establish trust, both with the organization and its staff and with participants, if you hope to get accurate information to work with. You also have to learn enough, in a short period, about the community, the organization, the program, and the participants to devise a good evaluation plan, and to analyze the data you and others gather.

If you're an independent researcher - a graduate student, an academic, a journalist - you face even greater obstacles. First, you have to find a place to conduct your research - a program to evaluate - that fits in with your research interests. Then, you have to convince the organization running that program to allow you to do the research. Once you've jumped that hurdle, you're still faced with all the same tasks as an outside evaluator: establishing trust, understanding the context, etc.

Let's look first at the process you as an independent researcher might follow in order to choose and gain access to a setting appropriate to your interests. Once you've gained that access, you've become an outside evaluator, so from that point on, the course of preparing for the evaluation will be the same for both.

Choose a setting

If you're an academic or student, you can probably find an appropriate program by asking colleagues, professors, and other researchers at your institution. If none of them knows of one offhand, someone can almost undoubtedly put you in touch with human service agencies and others who will. Other possible sources of information include the Internet, funders, professional associations, health and human service coalitions, and community organizations. Public funding information is often available on the web, in libraries, or in newspaper archives. The wider you spread your net, the more likely you are to find the program you're looking for.

The right program will obviously vary depending on your research interests, but some questions that will inform your choice include:
– Does the setting include people who are actually experiencing the problem that is of interest to you?
– Is the setting similar to others of this type? (If not, its program might not be useful to others dealing with the issue, even if it works well in its own context.)
– Does the setting provide support for the research? Will staff, participants, and others help with data gathering, be forthcoming about context questions, and cooperate with you?
– Does the setting have the resources to maintain the program after your evaluation is done?
– Does the setting permit the changes in operation required by the research? If the planning of the evaluation and choosing of questions point to doing things differently, can and will the program make the necessary changes?
– Is the setting accessible? Accessibility includes not only access for people with disabilities, but whether a site is in a neighbourhood that feels welcoming or safe to participants, whether it is easily reachable by public transportation or on foot from the areas from which participants are drawn, and whether it is in a building or institution that doesn't feel intimidating or strange (a university campus or building can seem as threatening as a fortress to someone who is insecure about his educational background, for example). Accessibility can be the determining factor in whether participants consider a program, or whether they stay in it.
– Is the setting stable? Are the program and organization stable enough that you know they'll be able to support their work at the current level, at least until the evaluation is completed?

Once you've found an appropriate setting, you'll have to convince the organization to collaborate with you on an evaluation. The next three steps are directed toward that goal.

Learn as much as you can about the organization you've chosen

Just as you wouldn't go to a job interview without doing some research about the employer, you shouldn't try to gain the cooperation of an organization without knowing something about it - its mission, its goals, whom it serves, who the director and board members are, etc. If someone told you about the organization, she may have, or may know someone who has, much of the information you need. If the organization maintains a website, much of that information will be available there. If it's incorporated, the office of the Secretary of State (or equivalent) in the state of incorporation, and/or other state offices, will have information about the officers (i.e., the Board of Directors) and other aspects of the organization. Funding agencies may also have information that's a matter of public record, including proposals.

Contact the appropriate person(s) and request an interview

Find out whom (by name as well as position) you should talk to about conducting a research project in the organization you've chosen. Depending on the organization, this could be the board president, the executive director, or the program director (if the program you're interested in is only part of a larger organization). In any case, it might be wise to involve the program director even if he's not the final decision-maker, since his cooperation will be crucial for the completion of your research.

If you can, get a personal introduction. It's always best if you come recommended by someone familiar with the person you need to speak with. If you can't get a personal introduction, it's usually best to send a letter requesting a meeting and explaining why, and follow it up with a phone call. Before the meeting, send a proposal outlining what you want to do. This should be substantive enough to help the organization decide whether it wants to work with you, but not so specific that it doesn't allow for collaborative planning of the evaluation.
Plan and prepare for the initial meeting

There are several purposes for this meeting, besides the ultimate one of getting permission and support for your project (or at least an agreement to continue to discuss the possibility). They include:
– Establishing your credentials - the experience, educational background, and any other factors that equip you to conduct this evaluation. This might include references from colleagues, professors, or other organizations you've worked with.
– Learning more about the program and the organization
– Explaining what you want to do and why, what form the evaluation results are likely to take, what you'll do with them, who'll have access, etc. This explanation should also cover issues of confidentiality and the permission of participants.
– Explaining what you need from the organization and/or program - the participation of participants and staff, for instance, any logistical support, access to records, or access to program activities
– Explaining what you're offering in return - your services for a comprehensive formal evaluation, any stipends, equipment or materials, other support services, or whatever else you may have to offer
– Clarifying the organization's needs, and discussing how they fit with your own - and how both can be satisfied

Assuming that your presentation has been convincing, and you're now the program evaluator, the rest of the steps here apply to both independent researchers and outside evaluators.

Find out all you can about the context

This may play out differently for outside evaluators than it does for independent researchers, but it's equally important for both. It means finding out all you can about the community, the organization, the program, and the participants beforehand - the social structure of the community and where participants fit in it, the history of the issue in question, how the organization is viewed, relationships among groups and individuals, community politics, etc.

If you're an outside evaluator, you can pick the brains of program administrators, staff, and participants about the community, the organization, and the issue. Ask them to steer you to others - community leaders, officials, longtime residents, clergy, trusted members of particular groups - who can give you their perspectives as well. If possible, get to know the community physically: walk and/or drive around it, visit businesses, parks, restaurants, the library. Understanding how the issue plays out in the community, the nature of relationships among groups and individuals, and what life is like in the neighbourhoods where participants live will help a great deal in analyzing the evaluation of the program.
If you're an independent researcher, learn as much about the context as you can before you contact the program. Websites (for the organization and/or the community) and libraries are two possible sources of information, as are community and organization literature and people who know the community. Learning about the community, the organization, and the participants beforehand will both help you determine whether this program fits with your research and help you advocate for its cooperation with your project. Once you have that cooperation, you can follow the same path as an outside evaluator (since that's what you are) to learn as much about the context of the program as you can.

Establish trust with program administrators, staff, and participants

This can be the most difficult part of an evaluation for someone from outside the organization. There's no magic bullet or predictable timeline, but there are several things you can do:
– Be yourself. Don't feel you have to act a certain way: deal with people in the program as you do with friends and acquaintances in other circumstances. People can tell when you're being false, and are unlikely to trust you if you are.
– Treat everyone with equal respect, as colleagues in a research project. Don't assume you know more than anyone else just because you're the professional. Share freely what you do know, but don't tie yourself to any one process or method, especially in response to an opposite stance from a key individual.
– Ask administrators, staff, and participants what they want from the evaluation, and discuss how the evaluation could provide it.
– Don't be afraid to say "I don't know, but I'll find out" - and then do.
– Follow through on whatever you say you'll do. Don't promise anything you can't deliver on, and make deadlines reasonable, so you can meet them.

General tips for all evaluators

These steps apply to everyone, internal evaluators as well as external.

Aim for a participatory evaluation

We've discussed above the involvement of all stakeholders to the extent possible. Involving participants, program staff, and other stakeholders in participatory planning and research can often get you the most accurate data, and may give you entry to people and places you normally might not have. On the other hand, participatory planning and research, as we've explained, take time and energy. If you have limited time, you may not be able to set up a fully participatory project. You can, however, still consult with stakeholders, and involve them in ways that don't necessarily involve training or large amounts of your time. They can help you line up interviews with participants or other important informants, for instance, and/or act as informants themselves about community conditions and relationships.
At least the people in charge of the program, and probably those implementing it as well, will expect to be part of the planning of the evaluation. They are, after all, the ones who need to know whether their work is effective, and how to improve it. Involving participants as well, in roles ranging from informants about context to actual researchers, is likely to enrich the quantity and quality of the information you can obtain.

Plan the evaluation, in collaboration with stakeholders

That collaboration should be at the highest level of participation possible, given the nature of the program, the time available, and the capacity of those involved. (If program participants are five-year-olds, they probably have relatively little to contribute to evaluation planning...but their parents might want to be involved.) The actual planning involves ten different areas, each of which will be the subject of one of the remaining sections in this chapter:
– Information gathering and synthesis
– Designing an observational system
– Developing and testing a prototype intervention
– Selecting an appropriate experimental design
– Collecting and analyzing data
– Gathering and interpreting ethnographic information
– Collecting and using archival data
– Encouraging participation throughout the research
– Refining the intervention based on the evaluation
– Preparing the evaluation results for dissemination

Once the planning is done, it's time to get started on conducting the evaluation. And when you're finished - having analyzed the information and planned and made the changes that were needed - it's time to start the process again, so that you can determine whether those changes had the effects you intended. Evaluation, like so much of community work, is a process that goes on as long as the work itself does. It's absolutely essential to the continued improvement of your program.

What kind of business problems might need a research study?[8]

[8] Source: Dr. Sue Greener, http://web.ftvs.cuni.cz/hendl/metodologie/introduction-to-researchmethods.pdf, as at 13 March 2018.

Most work in business organisations, in whatever sector or ownership, will require research activities. We have already discussed the idea that business research in the context of this course is likely to involve some theory or concept as well as purely practical questions such as "how does the product range compare in terms of contribution to profit?" or "which method of training has produced more output - coaching or a group training course?"
Both these questions have potential for theory application as well as simple numerical survey, but some research problems are more obviously underpinned by theoretical ideas - for example, those which seek to generalise or to compare one organisation with another: "what are the most effective ways of introducing a new employee to the organisation?" or "how do marketing strategies differ in the aerospace industry?"

When choosing an area for research, we usually start either with a broad area of management which particularly interests us, e.g. marketing or operations management, or with a very practical question like those in the last paragraph, which needs answers to help with managerial decision-making. Refining from this point to a researchable question or objective is not easy. We need to do a number of things:
– Narrow down the study topic to one which we are both interested in and have the time to investigate thoroughly
– Choose a topic context where we can find some access to practitioners if possible: either a direct connection with an organisation or professional body, or a context which is well documented either on the web or in the literature
– Identify relevant theory or domains of knowledge around the question for reading and background understanding
– Write and re-write the question or working title, checking thoroughly the implications of each phrase or word to check assumptions and ensure we really mean what we write. This is often best done with other people to help us check assumptions and see the topic more clearly.
– Use the published literature and discussion with others to help us narrow down firmly to an angle or gap in the business literature which will be worthwhile to explore
– Identify the possible outcomes from this research topic, both theoretical and practical. If they are not clear, can we refine the topic so that they become clear?

Determine policies and procedures in relation to conducting applied research

Aside from organisational policies and procedures, the Australian Code for the Responsible Conduct of Research provides a solid basis for conducting applied research. An excerpt follows:

Part A: Principles and Practices to Encourage Responsible Research Conduct[9]

[9] Source: The University of Adelaide, https://www.adelaide.edu.au/researchservices/oreci/integrity/code/code.html#1, as at 12 March 2018.
1. General principles of responsible research

Introduction

Responsible research is encouraged and guided by the research culture of the organisation. A strong research culture will demonstrate:
– honesty and integrity
– respect for human research participants, animals and the environment
– good stewardship of public resources used to conduct research
– appropriate acknowledgment of the role of others in research
– responsible communication of research results.

This section discusses the responsibilities of institutions and researchers to maintain an environment that fosters responsible research.

Responsibilities of institutions

1.1 Promote the responsible conduct of research

Institutions are expected to:
– promote awareness of all guidelines and legislation relating to the conduct of research
– provide documents setting out clearly the policies and procedures based on this Code
– actively encourage mutual cooperation with open exchange of ideas between peers, and respect for freedom of expression and inquiry
– maintain a climate in which responsible and ethical behaviour in research is expected.

1.2 Establish good governance and management practices

Good institutional governance and management practices encourage responsible conduct by researchers. Such practices promote quality in research, enhance the reputation of the institution and its researchers, and minimise the risk of harm for all involved.

1.2.1 Each institution should provide an appropriate research governance framework through which research is assessed for quality, safety, privacy, risk management, financial management and ethical acceptability. The framework should specify the roles, responsibilities and accountabilities of all those who play a part in research.

1.2.2 The research governance framework should demand compliance with laws, regulations, guidelines and codes of practice governing the conduct of research in Australia. Common law obligations also arise from the relationships between institutions, researchers and participants, while contractual arrangements may impose further obligations.
1.2.3 Each institution must ensure the availability of the documents that help guide good research governance, conduct and management.

1.2.4 There must be a clear policy on collaborative research projects with other organisations, which requires arrangements to be agreed before a project begins. As a minimum, these arrangements should cover financial management, intellectual property, authorship and publication, consultancies, secondments, ethics approval, and ownership of equipment and data.

1.2.5 Each institution must have a well-defined process for receiving and managing allegations of research misconduct.

1.2.6 There must be a process for regular monitoring of the institution's performance with regard to these guidelines.

1.3 Train staff

It is important that institutions provide induction, formal training and continuing education for all research staff, including research trainees. Training should cover research methods, ethics, confidentiality, data storage and records retention, as well as regulation and governance. Training should also cover the institution's policies regarding responsible research conduct, all aspects of this Code, and other sources of guidance that are available. Institutions may make arrangements for joint induction and training with other institutions.

1.4 Promote mentoring

Institutions should promote effective mentoring and supervision of researchers and research trainees. This includes advising on research ethics, research design and methods, and the responsible conduct of research.

1.5 Ensure a safe research environment

Each institution must ensure a safe working environment in which to conduct each research project.

Responsibilities of Researchers

1.6 Maintain high standards of responsible research

Researchers must foster and maintain a research environment of intellectual honesty and integrity, and scholarly and scientific rigour. Researchers must:
– respect the truth and the rights of those affected by their research
– manage conflicts of interest so that ambition and personal advantage do not compromise ethical or scholarly considerations
– adopt methods appropriate for achieving the aims of each research proposal
– follow proper practices for safety and security
– cite awards, degrees conferred and research publications accurately, including the status of any publication, such as under review or in press
– promote adoption of this Code and avoid departures from the responsible conduct of research
– conform to the policies adopted by their institutions and bodies funding the research.

1.7 Report research responsibly
Researchers should ensure that research findings are disseminated responsibly.

1.8 Respect research participants
Researchers must comply with ethical principles of integrity, respect for persons, justice and beneficence. Written approval from appropriate ethics committees, safety and other regulatory bodies must be obtained when required. The National Statement on Ethical Conduct in Human Research and Values and Ethics — Guidelines for Ethical Conduct in Aboriginal and Torres Strait Islander Health Research (or any replacement documents) set out principles for protecting human participants in research (see Appendix 3).

1.9 Respect animals used in research
Researchers must respect the animals they use in research, in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes (see Appendix 3).

1.10 Respect the environment
Researchers should conduct their research so as to minimise adverse effects on the wider community and the environment.

1.11 Report research misconduct
A researcher who considers that research misconduct may have occurred must act in a timely manner, having regard to the institution's policies.

Special Responsibilities

1.12 Aboriginal and Torres Strait Islander peoples
It is acknowledged that research with Aboriginal and Torres Strait Islander peoples spans many methodologies and disciplines. There are wide variations in the ways in which Aboriginal and Torres Strait Islander individuals, communities or groups are involved in, or affected by, research to which this Code applies. This Code should be read in conjunction with Values and Ethics — Guidelines for Ethical Conduct in Aboriginal and Torres Strait Islander Health Research (NHMRC 2003) and the Guidelines for Ethical Research in Indigenous Studies (Australian Institute of Aboriginal and Torres Strait Islander Studies 2012).

1.13 Consumer and community participation in research
Appropriate consumer involvement in research should be encouraged and facilitated by research institutions and researchers. This Code should be read in conjunction with the Statement on Consumer and Community Participation in Health and Medical Research (NHMRC and Consumers' Health Forum of Australia Inc., 2002).

2. Management of research data and primary materials

Introduction

Policies are required that address the ownership of research materials and data, their storage, their retention beyond the end of the project, and appropriate access to them by the research community. The responsible conduct of research includes the proper management and retention of the research data. Retaining the research data is important because it may be all that remains of the research work at the end of the project.

While it may not be practical to keep all the primary material (such as ore, biological material, questionnaires or recordings), durable records derived from them (such as assays, test results, transcripts, and laboratory and field notes) must be retained and accessible. The researcher must decide which data and materials should be retained, although in some cases this is determined by law, funding agency, publisher or by convention in the discipline. The central aim is that sufficient materials and data are retained to justify the outcomes of the research and to defend them if they are challenged. The potential value of the material for further research should also be considered, particularly where the research would be difficult or impossible to repeat.

Responsibilities of Institutions

2.1 Retain research data and primary materials
Each institution must have a policy on the retention of materials and research data. It is important that institutions acknowledge their continuing role in the management of research material and data. The institutional policy must be consistent with practices in the discipline, relevant legislation, codes and guidelines.

2.1.1 In general, the minimum recommended period for retention of research data is 5 years from the date of publication. However, in any particular case, the period for which data should be retained should be determined by the specific type of research. For example:
– for short-term research projects that are for assessment purposes only, such as research projects completed by students, retaining research data for 12 months after the completion of the project may be sufficient
– for most clinical trials, retaining research data for 15 years or more may be necessary
– for areas such as gene therapy, research data must be retained permanently (eg patient records)
– if the work has community or heritage value, research data should be kept permanently at this stage, preferably within a national collection.

2.1.2 A policy is required that covers the secure and safe disposal of research data and primary materials when the specified period of retention has finished.
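Retention rules of this kind amount to a simple lookup from record category to minimum period, so an institutional records register can encode them directly. Below is a minimal Python sketch of one way to do this; the category names and the function are illustrative assumptions, not part of the Code itself.

from datetime import date
from typing import Optional

# Minimum retention periods (in years), keyed by hypothetical record
# categories following the guideline text above. None means "retain
# permanently" (e.g. gene therapy records, work of heritage value).
RETENTION_YEARS = {
    "student_assessment": 1,   # short-term projects, assessment only
    "general_research": 5,     # general minimum: 5 years from publication
    "clinical_trial": 15,      # most clinical trials: 15 years or more
    "gene_therapy": None,
    "heritage_value": None,
}

def earliest_disposal_review(category: str, published: date) -> Optional[date]:
    """Earliest date a record may be reviewed for disposal under the
    schedule above, or None if it must be kept permanently."""
    years = RETENTION_YEARS[category]
    if years is None:
        return None
    # note: a 29 February publication date would need special handling
    return published.replace(year=published.year + years)

print(earliest_disposal_review("clinical_trial", date(2018, 3, 12)))  # 2033-03-12

A real policy would also record the trigger event for each category (date of publication versus project completion) rather than treating them uniformly as this sketch does.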
2.2 Provide secure research data storage and record-keeping facilities
Institutions must provide facilities for the safe and secure storage of research data and for maintaining records of where research data are stored.

2.2.1 There must be a policy on research data ownership and storage. This policy must cover all situations that arise in research, including when researchers move between institutions or employers and when data are held outside Australia. Agreements covering ownership and storage of research data should be reviewed whenever there is movement or departure of research staff.

2.2.2 Wherever possible and appropriate, research data should be held in the researcher's department or other appropriate institutional repository, although researchers should be permitted to hold copies of the research data for their own use. Arrangements for material held in other locations should be documented.

2.2.3 In projects that span several institutions, an agreement should be developed at the outset covering the storage of research data and primary materials within each institution.

2.2.4 Research data and primary materials must be stored in the safe and secure storage provided.

2.3 Identify ownership of research data and primary materials
Each institution must have a policy on the ownership of research materials and data during and following the research project. The ownership may also be influenced by the funding arrangements for the project. As a general rule, the most satisfactory arrangement will be that the materials and data retained at the end of a project are the property of the institution that hosted the project, another institution with an interest in the research, or a central repository.

2.4 Ensure security and confidentiality of research data and primary materials
Each institution must have a policy on the ownership of, and access to, databases and archives that is consistent with confidentiality requirements, legislation, privacy rules and other guidelines.

2.4.1 The policy must guide researchers in the management of research data and primary materials, including storage, access, ownership and confidentiality.

2.4.2 The processes must ensure that researchers are informed of relevant confidentiality agreements and restrictions on the use of research data.

2.4.3 Computing systems must be secure, and information technology personnel must understand their responsibilities for network security and access control.

2.4.4 Those holding primary material, including electronic material, must understand their responsibilities for security and access.

Responsibilities of Researchers

2.5 Retain research data and primary materials
When considering how long research data and primary materials are to be retained, the researcher must take account of professional standards, legal requirements and contractual arrangements.

2.5.1 Researchers should retain research data and primary materials for sufficient time to allow reference to them by other researchers and interested parties. For published research data, this may be for as long as
interest and discussion persist following publication.

2.5.2 Research data should be made available for use by other researchers unless this is prevented by ethical, privacy or confidentiality matters.

2.5.3 Research data should be retained for at least the minimum period specified in the institutional policy.

2.5.4 If the results from research are challenged, all relevant data and materials must be retained until the matter is resolved. Research records that may be relevant to allegations of research misconduct must not be destroyed.

2.5.5 The institutional policy on the secure and safe disposal of primary materials and research data must be followed.

2.6 Manage storage of research data and primary materials
Researchers must manage research data and primary materials in accordance with the policy of the institution. To achieve this, researchers must:

2.6.1 Keep clear and accurate records of the research methods and data sources, including any approvals granted, during and after the research process.

2.6.2 Ensure that research data and primary materials are kept in the safe and secure storage provided, even when not in current use.

2.6.3 Provide the same level of care and protection to primary research records, such as laboratory notebooks, as to the analysed research data.

2.6.4 Retain research data, including electronic data, in a durable, indexed and retrievable form.

2.6.5 Maintain a catalogue of research data in an accessible form.

2.6.6 Manage research data and primary materials according to ethical protocols and relevant legislation.
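Clauses 2.6.4 and 2.6.5 call for data to be kept in an indexed, retrievable form with an accessible catalogue. A minimal sketch of such a catalogue follows, kept as a CSV register with one row per dataset; the field names and example values are invented, and an institution's own policy would dictate the metadata actually required.

import csv
import os
from datetime import date

# Illustrative catalogue fields only; real policies will differ.
FIELDS = ["dataset_id", "project", "description", "storage_location",
          "format", "date_archived", "retention_until", "custodian"]

def append_catalogue_entry(path: str, entry: dict) -> None:
    """Append one dataset record to a CSV catalogue, writing the
    header row the first time the file is created."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

append_catalogue_entry("data_catalogue.csv", {
    "dataset_id": "DS-0001",
    "project": "Community nutrition survey",
    "description": "De-identified questionnaire responses, wave 1",
    "storage_location": "//research-store/nutrition/wave1/",
    "format": "CSV",
    "date_archived": date.today().isoformat(),
    "retention_until": "2029-06-30",
    "custodian": "School of Public Health",
})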
2.7 Maintain confidentiality of research data and primary materials
Researchers given access to confidential information must maintain that confidentiality. Primary materials and confidential research data must be kept in secure storage. Confidential information must only be used in ways agreed with those who provided it. Particular care must be exercised when confidential data are made available for discussion.

Refer also to Attachment 1 – Australian Market and Social Research Society Code of Professional Behaviour.

Activity 4
Locate, research and outline one organisational policy for conducting applied research.
Establish mechanisms for collecting and maintaining data in a systematic manner

Data collection is the process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes. The data collection component of research is common to all fields of study, including the physical and social sciences, humanities and business. While methods vary by
discipline, the emphasis on ensuring accurate and honest collection remains the same10.

10 Source: The Office of Research Integrity, as at https://ori.hhs.gov/education/products/n_illinois_u/datamanagement/dctopic.html, as on 12th March, 2018.

The importance of ensuring accurate and appropriate data collection

Regardless of the field of study or preference for defining data (quantitative, qualitative), accurate data collection is essential to maintaining the integrity of research. Both the selection of appropriate data collection instruments (existing, modified, or newly developed) and clearly delineated instructions for their correct use reduce the likelihood of errors occurring. Consequences of improperly collected data include:
– inability to answer research questions accurately
– inability to repeat and validate the study
– distorted findings resulting in wasted resources
– misleading other researchers to pursue fruitless avenues of investigation
– compromised decisions for public policy
– harm to human participants and animal subjects.

While the degree of impact from faulty data collection may vary by discipline and the nature of the investigation, there is the potential to cause disproportionate harm when such research results are used to support public policy recommendations.

Issues related to maintaining integrity of data collection

The primary rationale for preserving data integrity is to support the detection of errors in the data collection process, whether they are made intentionally (deliberate falsifications) or not (systematic or random errors). Most, Craddick, Crawford, Redican, Rhodes, Rukenbrod, and Laws (2003) describe 'quality assurance' and 'quality control' as two approaches that can preserve data integrity and ensure the scientific validity of study results. Each approach is implemented at a different point in the research timeline (Whitney, Lind, Wahl, 1998):
1. Quality assurance – activities that take place before data collection begins
2. Quality control – activities that take place during and after data collection

Quality Assurance

Since quality assurance precedes data collection, its main focus is 'prevention' (i.e., forestalling problems with data collection). Prevention is the most cost-effective activity to ensure the integrity of data collection. This proactive measure is best demonstrated by the standardization of protocol developed in a comprehensive and detailed procedures manual for data collection.
Poorly written manuals increase the risk of failing to identify problems and errors early in the research endeavor. These failures may be demonstrated in a number of ways:
– uncertainty about the timing, methods, and identity of the person(s) responsible for reviewing data
– partial listing of items to be collected
– vague description of the data collection instruments to be used, in lieu of rigorous step-by-step instructions on administering tests
– failure to identify specific content and strategies for training or retraining staff members responsible for data collection
– obscure instructions for using, making adjustments to, and calibrating data collection equipment (if appropriate)
– no identified mechanism to document changes in procedures that may evolve over the course of the investigation.

An important component of quality assurance is developing a rigorous and detailed recruitment and training plan. Implicit in training is the need to effectively communicate the value of accurate data collection to trainees (Knatterud, Rockhold, George, Barton, Davis, Fairweather, Honohan, Mowery, O'Neill, 1998). The training aspect is particularly important for addressing the potential problem of staff who may unintentionally deviate from the original protocol. This phenomenon, known as 'drift', should be corrected with additional training, a provision that should be specified in the procedures manual.

Given the range of qualitative research strategies (non-participant/participant observation, interview, archival, field study, ethnography, content analysis, oral history, biography, unobtrusive research), it is difficult to make generalized statements about how one should establish a research protocol in order to facilitate quality assurance. Certainly, researchers conducting non-participant/participant observation may have only the broadest research questions to guide the initial research efforts. Since the researcher is the main measurement device in such a study, there are often few or no other data collection instruments. Indeed, instruments may need to be developed on the spot to accommodate unanticipated findings.

Quality Control

While quality control activities (detection/monitoring and action) occur during and after data collection, the details should be carefully documented in the procedures manual. A clearly defined communication structure is a necessary precondition for establishing monitoring systems. There should not be any uncertainty about the flow of information between principal investigators and staff members following the detection of errors in data collection. A poorly developed communication structure encourages lax monitoring and limits opportunities for detecting errors.

Detection or monitoring can take the form of direct staff observation during site visits, conference calls, or regular and frequent reviews of data reports to identify inconsistencies, extreme values or invalid codes.
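The kind of automated report review just described can be sketched as a set of simple record checks. Below is a minimal Python illustration; the field names, plausible range and valid-code list are hypothetical stand-ins for whatever a study's procedures manual defines.

# Scan incoming records for missing values, extreme values and
# invalid codes, and report flags back to staff for follow-up.
VALID_SMOKING_CODES = {"never", "former", "current"}
AGE_RANGE = (18, 99)  # plausible range for an adult survey

def review_record(record: dict) -> list:
    """Return a list of quality-control flags for one data record."""
    flags = []
    age = record.get("age")
    if age is None:
        flags.append("missing value: age")
    elif not AGE_RANGE[0] <= age <= AGE_RANGE[1]:
        flags.append(f"extreme value: age={age}")
    if record.get("smoking_status") not in VALID_SMOKING_CODES:
        flags.append(f"invalid code: smoking_status={record.get('smoking_status')!r}")
    return flags

records = [
    {"id": 1, "age": 34, "smoking_status": "former"},
    {"id": 2, "age": 340, "smoking_status": "XX"},  # data-entry errors
]
for r in records:
    for flag in review_record(r):
        print(f"record {r['id']}: {flag}")  # route to staff per the manual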
While site visits may not be appropriate for all disciplines, failure to regularly audit records, whether quantitative or qualitative, will make it difficult for investigators to verify that data collection is proceeding according to the procedures established in the manual. In addition, if the structure of communication is not clearly delineated in the procedures manual, transmission of any change in procedures to staff members can be compromised.

Quality control also identifies the required responses, or 'actions', necessary to correct faulty data collection practices and minimize future occurrences. These actions are less likely to occur if data collection procedures are vaguely written and the necessary steps to minimize recurrence are not implemented through feedback and education (Knatterud, et al., 1998). Examples of data collection problems that require prompt action include:
– errors in individual data items
– systematic errors
– violation of protocol
– problems with individual staff or site performance
– fraud or scientific misconduct.

In the social/behavioural sciences, where primary data collection involves human subjects, researchers are taught to incorporate one or more secondary measures that can be used to verify the quality of the information being collected from the human subject. For example, a researcher conducting a survey might be interested in gaining better insight into the occurrence of risky behaviours among young adults, as well as the social conditions that increase the likelihood and frequency of these risky behaviours. To verify data quality, respondents might be queried about the same information but asked at different points of the survey and in a number of different ways. Measures of 'social desirability' might also be used to gauge the honesty of responses. Two points need to be raised here: 1) cross-checks within the data collection process, and 2) data quality being as much an observation-level issue as a complete-data-set issue. Thus, data quality should be addressed for each individual measurement, for each individual observation, and for the entire data set.
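A within-survey cross-check of this kind can be as simple as comparing two answers that should agree. Here is a minimal sketch, assuming two hypothetical survey items that capture the respondent's birth year in different ways, once directly and once via a stated age.

SURVEY_YEAR = 2018  # assumed year the survey was fielded

def crosscheck_birth_year(response: dict, tolerance: int = 1) -> bool:
    """Return True if the directly reported birth year agrees with the
    year derived from the stated age, within `tolerance` years."""
    direct = response["birth_year"]
    derived = SURVEY_YEAR - response["stated_age"]
    return abs(direct - derived) <= tolerance

responses = [
    {"id": "R01", "birth_year": 1990, "stated_age": 28},  # consistent
    {"id": "R02", "birth_year": 1990, "stated_age": 45},  # inconsistent
]
for r in responses:
    if not crosscheck_birth_year(r):
        print(f"{r['id']}: answers disagree; review this observation")

Flags raised this way are observation-level: they identify the individual respondent whose record needs follow-up, rather than only a problem with the data set as a whole.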
Each field of study has its preferred set of data collection instruments. The hallmark of the laboratory sciences is the meticulous documentation of the lab notebook, while social sciences such as sociology and cultural anthropology may prefer the use of detailed field notes. Regardless of the discipline, comprehensive documentation of the collection process before, during and after the activity is essential to preserving data integrity.

Whether it is business, marketing, humanities, physical sciences, social sciences, or another field of study or discipline, data plays a very important role, serving as the starting point. That is why, in all of these processes that involve the usage of information and knowledge, one of the very first steps is data collection11.

11 Source: Cleverism, as at https://www.cleverism.com/qualitative-and-quantitative-data-collection-methods/, as on 12th March, 2018.

Data collection is described as the "process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer queries, stated research questions, test hypotheses, and evaluate outcomes." Depending on the discipline or field, the nature of the information being sought, and the objective or goal of users, the methods of data collection will vary. The approach to applying the methods may also vary, customized to suit the purpose and prevailing circumstances, without compromising the integrity, accuracy and reliability of the data.

There are two main types of data that users find themselves working with – and having to collect.

1. Quantitative Data. These are data that deal with quantities, values or numbers, making them measurable. Thus, they are usually expressed in numerical form, such as length, size, amount, price, and even duration. The use of statistics to generate and subsequently analyze this type of data adds credence or credibility to it, so that quantitative data is overall seen as more reliable and objective.

2. Qualitative Data. These data, on the other hand, deal with quality, so that they are descriptive rather than numerical in nature. Unlike quantitative data, they are generally not measurable, and are gained mostly through observation. Narratives often make use of adjectives and other descriptive words to refer to data on appearance, color, texture, and other qualities.

In most cases, these two data types are used as preferences in choosing the method or tool to be used in data collection. As a matter of fact, data collection methods are classified into two categories based on these types of data. Thus, we can safely say that there are two major classifications of data collection methods: quantitative data collection methods and qualitative data collection methods.

IMPORTANCE OF DATA COLLECTION

From the definition of "data collection" alone, it is already apparent why gathering data is important: to come up with answers, which come in the form of useful information, converted from data. But for many, that still does not mean much. Depending on the perspective of the user and the purpose of the information, there are many concrete benefits that can be gained from data gathering. In general terms, here are some of the reasons why data collection is very important.
The first question that we will address is: "why should you collect data?"

Data collection aids in the search for answers and resolutions. Learning and building knowledge is a natural inclination for human beings. Even at a very young age, we are in search of answers to a lot of things. Take a look at toddlers and small children: they are the ones with so many questions, their curious spirit driving them to repeatedly ask about whatever piques their interest.

A toddler curious about a white flower in the backyard will start collecting data. He will approach the flower in question and look at it closely, taking in the color, the soft feel of the petals against his skin, and even the mild scent that emanates from it. He will then run to his mother and pull her along until they get to where the flower is. In baby speak, he will ask what the flower's name is, and the mother will reply, "It's a flower, and it is called a rose."

It's white. It's soft. It smells good. And now the little boy even has a name for it: a rose. When his mother wasn't looking, he reached for the rose by its stem and tried to pluck it. Suddenly, he felt a prickle in his fingers, followed by a sharp pain that made him yelp. When he looked down at his palm, he saw two puncture marks, and they were bleeding. The little boy starts to cry, thinking how roses, no matter how pretty and good-smelling, are dangerous and can hurt you. This information will now be embedded in his mind, sure to become one of the most enduring pieces of information or tidbits of knowledge that he will have about the flower called "rose".

The same goes for marketing research, for example. A company wants to learn a few things about the market in order to come up with a marketing plan, or to tweak an already existing marketing program. There is no way it will be able to do these things without collecting the relevant data.

Data collection facilitates and improves decision-making processes, and the quality of the decisions made. Leaders cannot make decisive strategies without facts to support them. Planners cannot draw up plans and designs without a basis. Entrepreneurs could not possibly come up with a business idea – much less a viable business plan – out of nothing at all. Similarly, businesses won't be able to formulate marketing plans, and implement strategies to increase profitability and growth, if they have no data to start from. Without data, there won't be anything to convert into useful information that will provide the basis for decisions. All that decision-makers are left with is their intuition and gut feeling, but even gut feeling and instinct have some basis in facts.
Decision-making processes become smoother, and decisions are definitely better, if there is data driving them. According to a survey by Helical IT, the success rate of decisions based on gathered data is 79% higher than that of decisions made using pure intuition alone. In business, one of the most important decisions that must be made concerns resource allocation and usage. Businesses that collect the relevant data will be able to make informed decisions on how to use their resources efficiently.

Data collection improves the quality of expected results or output. Just as having data will improve decision-making and the quality of the decisions, it will also improve the quality of the results or output expected from any endeavor or activity. For example, a manufacturer will be able to produce high-quality products after designing them using reliable gathered data. Consumers will also find the claims of the company about the product more reliable, because they know it has been developed after a significant amount of research.

Through collecting data, monitoring and tracking progress will also be facilitated. This gives a lot of room for flexibility, so responses can be made accordingly and promptly. Adjustments can be made and improvements effected.

Now we move to the next question, which concerns the manner of collecting data. Why is there a need to be particular about how data is collected? Why does it have to be systematic, and not just done on the fly, using whatever makes the data gatherer comfortable? Why do you have to pick certain methodologies of data collection when you can simply be random about it?

Collecting data is expensive and resource-intensive. It will cost you money, time, and other resources. Thus, you have to make sure you make the most of it. You cannot afford to be random and haphazard about how you gather data when there are large amounts of investment at stake.

Data collection methods help ensure the accuracy and integrity of the data collected. It's common sense, really. Using the right data collection method – and using it properly – will allow only high-quality data to be gathered. In this context, high-quality data refers to data that is free from errors and bias arising from subjectivity, thereby increasing its reliability. High-quality and reliable data will then be processed, resulting in high-quality information.

METHODS OF DATA COLLECTION

We'll now take a look at the different methods or tools used to collect data, and some of their pros (+) and cons (-). You may notice some methods falling under both categories, which means that they can be used in gathering both types of data.

I. Qualitative Data Collection Methods
Exploratory in nature, these methods are mainly concerned with gaining insights into, and an understanding of, underlying reasons and motivations, so they tend to dig deeper. Since the data they produce cannot be quantified, measurability becomes an issue. This lack of measurability leads to a preference for methods or tools that are largely unstructured or, in some cases, structured only to a very small, limited extent. Generally, qualitative methods are time-consuming and expensive to conduct, so researchers try to lower the costs incurred by decreasing the sample size or number of respondents.

Face-to-Face Personal Interviews

This is considered to be the most common data collection instrument for qualitative research, primarily because of its personal approach. The interviewer collects data directly from the subject (the interviewee), in one-on-one, face-to-face interaction. This is ideal when the data to be obtained must be highly personalized.

The interview may be informal and unstructured – conversational, even – as if taking place between two casual or close friends. The questions asked are mostly unplanned and spontaneous, with the interviewer letting the flow of the interview dictate the next questions to be asked. However, if the interviewer wants the data to be standardized to a certain extent for easier analysis, a semi-structured interview can be conducted, in which the same series of open-ended questions is asked of all the respondents. If the subject instead chooses her answer from a set of options, what has taken place is a closed, structured and fixed-response interview.

(+) This allows the interviewer to probe further, by asking follow-up questions and getting more information in the process.
(+) The data will be highly personalized (particularly when using the informal approach).
(-) This method is subject to certain limitations, such as language barriers, cultural differences, and geographical distances.
(-) The person conducting the interview must have very good interviewing skills in order to elicit responses.

Qualitative Surveys

Paper surveys or questionnaires. Questionnaires often utilize a structure comprised of short questions and, in the case of qualitative questionnaires, they are usually open-ended, with the respondents asked to provide detailed answers in their own words. It is almost like answering essay questions.
(+) Since questionnaires are designed to collect standardized data, they are ideal for use with large populations or sample sizes of respondents.
(+) The high amount of detail provided will aid analysis of the data.
(-) On the other hand, the large number of respondents (and data), combined with the high level and amount of detail provided in the answers, will make data analysis quite tedious and time-consuming.

Web-based questionnaires. This is basically a web-based or internet-based survey, involving a questionnaire uploaded to a site, which the respondents log into and complete electronically. Instead of paper and pen, they use a computer screen and mouse.
(+) Data collection is definitely quicker. This is often due to the questions being shorter, requiring less detail than in, say, a personal interview or a paper questionnaire.
(+) It is also uncomplicated, since the respondents can be invited to answer the questionnaire simply by sending them an email containing the URL of the site where the online questionnaire is available.
(-) There is a limitation on the respondents, since the only ones able to answer are those who own a computer, have an internet connection, and know their way around online surveys.
(-) The lesser amount of detail provided means the researcher may end up with mostly surface data, with little depth or meaning, especially when the data is processed.

Focus Groups

The focus group method is basically an interview method, but done in a group discussion setting. When the object of the data is behaviors and attitudes, particularly in social situations, and resources for one-on-one interviews are limited, the focus group approach is highly recommended. Ideally, a focus group should have from around 3 people to a maximum of 10 to 13 people, plus a moderator.

Depending on the data being sought, the members of the group should have something in common. For example, a researcher conducting a study on the recovery of married mothers from alcoholism will choose women who are (1) married, (2) have kids, and (3) recovering alcoholics. Other parameters, such as age, employment status, and income bracket, do not have to be similar across the members of the focus group. The topic that data will be collected about is presented to the group, and the moderator opens the floor for debate.

(+) There may be a small group of respondents, but the setup or framework in which data is delivered and shared makes it possible to come up with a wide variety of answers.
(+) The data collector may also get highly detailed and descriptive data by using a focus group.
(-) Much of the success of the discussion within the focus group lies in the hands of the moderator, who must be highly capable and experienced in controlling these types of interactions.
Document Review

This method involves the use of previously existing and reliable documents and other sources of information as a source of data to be used in a new research or investigation. It is akin to a data collector going to a library and going over the books and other references for information relevant to the current research.
(+) The researcher will gain a better understanding of the field or subject being looked into, thanks to the reliable and high-quality documents used as data sources.
(+) Looking into other documents or research as a source provides a glimpse of the subject from different perspectives or points of view, allowing comparisons and contrasts to be made.
(-) Unfortunately, this relies heavily on the quality of the documents used, and on the ability of the data collector to choose the right and reliable documents. If he chooses wrongly, the quality of the data collected later on will be compromised.

Observation

In this method, the researcher takes a participatory stance, immersing himself in the setting where his respondents are, observing everything while taking down notes. Aside from note-taking, other documentation methods may be used, such as video and audio recording, photography, and the use of tangible items such as artifacts, mementoes, and other tools.
(+) The participatory nature may lead to the researcher obtaining more reliable information.
(+) The data is more reliable and representative of what is actually happening, since it was observed under normal circumstances.
(-) The participation may end up influencing the opinions and attitudes of the researcher, making it difficult for him to remain objective and impartial once the data he is looking for comes in.
(-) Validity issues may arise due to the risk that the researcher's participation may affect the naturalness of the setting: the observed may become reactive to the idea of being watched. If the researcher planned to observe recovering alcoholic mothers in their natural environment (e.g. at their homes with their kids), his presence may cause the subjects to behave differently, knowing that they are being observed. This may impair the results.

Longitudinal Studies

This is a research or data collection method that is performed repeatedly, on the same data sources, over an extended period of time. It is an observational research method that can cover a span of years and, in some cases, even decades. The goal is to find correlations through an empirical or observational study of subjects with a common trait or characteristic.

An example of this is the Terman Study of the Gifted, conducted by Lewis Terman at Stanford University. The study aimed to gather data on the characteristics of gifted children – and how they grow and develop – over their lifetime. Terman started in 1921, and the study extended over the lifespan of its subjects: more than 1,500 boys and girls aged 3 to 19 years old, with IQs higher than 135. To this day, it is the world's "oldest and longest-running" longitudinal study.

(+) This is ideal when seeking data meant to establish a variable's pattern over a period of time, particularly an extended one.
(+) As a method for finding correlations, it is effective in suggesting connections and cause-and-effect relationships.
(-) The long duration can be a setback, since the probability that all the subjects present at the beginning of the research will still be available 10, 20, or 30 years down the road is very low.
(-) Over the extended period, attitudes and opinions of the subjects are likely to change, which can dilute the data, reducing its reliability in the process.
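The correlational analysis a longitudinal design supports can be illustrated with a toy calculation. The figures below are invented; the point is only that repeated waves of measurement on the same subjects allow an early measure to be related to a later outcome.

from statistics import correlation  # Python 3.10+

# Hypothetical panel: the same six subjects measured at two waves,
# years apart (hours of weekly reading at age 8; vocabulary at age 18).
reading_age8 = [2.0, 5.5, 1.0, 7.0, 3.5, 6.0]
vocab_age18  = [61,  78,  55,  90,  70,  82]

r = correlation(reading_age8, vocab_age18)
print(f"Pearson r across waves: {r:.2f}")
# A positive r suggests an association over time; on its own it does
# not prove cause and effect.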
Case Studies

In this qualitative method, data is gathered by taking a close look at, and an in-depth analysis of, a "case study" or "case studies" – the unit or units of research, which may be an individual, a group of individuals, or an entire organization. This methodology's versatility is demonstrated in how it can be used to analyze both simple and complex subjects. However, the strength of a case study as a data collection method is attributed to how it utilizes other data collection methods, and captures more variables than a single methodology would. In analyzing a case study, the researcher may employ other methods, such as interviewing, circulating questionnaires, or conducting group discussions, in order to gather data.
(+) It is flexible and versatile, analyzing both simple and complex units and occurrences, even over a long period of time.
(+) Case studies provide in-depth and detailed information, thanks to how they capture as many variables as they can.
(-) The reliability of the data may be put at risk when the case study or studies chosen are not representative of the sample or population.

II. Quantitative Data Collection Methods

Quantitative data can be readily generated in numerical form, and then converted and processed into useful information mathematically. The result is often in the form of statistics that are meaningful and, therefore, useful. Unlike qualitative methods, these quantitative techniques usually make use of larger sample sizes, because the measurable nature of the data makes that possible and easier.
Quantitative Surveys

Unlike the open-ended questions asked in qualitative questionnaires, quantitative paper surveys pose closed questions, with the answer options provided. The respondents only have to choose their answer from among the choices provided on the questionnaire.
(+) Similarly, these are ideal for use when surveying large numbers of respondents.
(+) The standardized nature of questionnaires enables researchers to make generalizations from the results.
(-) This can be very limiting for respondents, since it is possible that their actual answer to a question is not in the list of options provided on the questionnaire.
(-) While data analysis is still possible, it will be restricted by the lack of detail.

Interviews

Personal one-on-one interviews may also be used for gathering quantitative data. When collecting quantitative data, the interview is more structured than when gathering qualitative data, comprising a prepared set of standard questions. These interviews can take the following forms:

Face-to-face interviews. Much like interviews conducted to gather qualitative data, these can also yield quantitative data when standard questions are asked.
(+) The face-to-face setup allows the researcher to make clarifications on any answer given by the interviewee.
(-) This can be quite a challenge when dealing with a large sample size or group of interviewees. If the plan is to interview everyone, it is bound to take a lot of time, not to mention a significant amount of money.

Telephone and/or online, web-based interviews. Conducting interviews over the telephone is no longer a new concept. Rapidly rising to take the place of telephone interviews is the video interview via internet connection and web-based applications, such as Skype.
(+) The net for data collection may be cast wider, since there is no need to travel long distances to get the data. All it takes is to pick up the phone and dial a number, or to connect to the internet and log on to Skype for a video call or video conference.
(-) The quality of the data may be questionable, especially in terms of impartiality. The net may be cast wide, but it will only reach a specific group of subjects: those who have telephones and internet connections and are knowledgeable about using such technologies.

Computer-assisted interviews. This is called CAPI, or Computer-Assisted Personal Interviewing, where, in a face-to-face interview, the data obtained from the interviewee is entered directly into a database through the use of a computer.
(+) The direct input of data saves a lot of time and other resources in converting it into information later on, because the processing takes place immediately after the data has been obtained from the source and entered into the database.
(-) The use of computers, databases and related devices and technologies does not come cheap. It also requires a certain degree of tech-savviness on the part of the data gatherer.
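How CAPI removes the separate data-entry step can be sketched with a small script. The SQLite schema and function below are hypothetical illustrations, not any particular CAPI product: each answer the interviewer keys in is written straight to the database, so no transcription happens afterwards.

import sqlite3

conn = sqlite3.connect("interviews.db")
conn.execute("""CREATE TABLE IF NOT EXISTS responses (
    respondent_id TEXT, question_id TEXT, answer TEXT,
    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def record_answer(respondent_id: str, question_id: str, answer: str) -> None:
    """Store one keyed-in answer immediately, as CAPI systems do."""
    conn.execute(
        "INSERT INTO responses (respondent_id, question_id, answer) VALUES (?, ?, ?)",
        (respondent_id, question_id, answer))
    conn.commit()

record_answer("R014", "Q3", "2")  # e.g. option 2 of a closed question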
Quantitative Observation

This is straightforward enough. Data may be collected through systematic observation by, say, counting the number of users present and currently accessing services in a specific area, or the number of services being used within a designated vicinity. When quantitative data is being sought, the approach is naturalistic observation, which mostly involves using the senses and keen observation skills to get data about the "what", and not really about the "why" and "how".
(+) It is a quite simple way of collecting data, and not as expensive as the other methods.
(-) The problem is that the senses are not infallible. The observer may unwittingly rely on how his senses perceive the situations and people around him, so bias on the part of the observer is very possible.

Experiments

Have you ever wondered where clinical trials fall? They are considered to be a form of experiment, and are quantitative in nature. These methods involve the manipulation of an independent variable, while maintaining varying degrees of control over other variables, most likely the dependent ones. Usually, this is employed to obtain data that will be used later on for the analysis of relationships and correlations. Quantitative researchers often make use of experiments to gather data. The types of experiments are:

Laboratory experiments. This is the typical scientific experiment setup, taking place within a confined, closed and controlled environment (the laboratory), with the data collector having strict control over all the variables. This level of control also implies that he can fully and deliberately manipulate the independent variable.

Field experiments. These take place in a natural environment, "in the field", where the data collector may not be in full control of the variables but is still able to control them up to a certain extent. Manipulation is still possible, although not as deliberate as in a laboratory setting.

Natural experiments. This time, the data collector has no control over the independent variable whatsoever, which means it cannot be manipulated. Therefore, all that can be done is to gather data by letting the independent variable occur naturally and observing its effects.
You can probably name several other data collection methods, but the ones discussed are the most commonly used approaches. At the end of the day, the choice of a collection method is only 50% of the whole process. The correct usage of these methods will also have a bearing on the quality and integrity of the data being sought.

Activity 5
How can data be collected through observation?