Activity 5
Analyse factors affecting the reliability and validity of data

What is Reliability?

The idea behind reliability is that any significant result must be more than a one-off finding and be inherently repeatable. Other researchers must be able to perform exactly the same experiment, under the same conditions, and generate the same results. This will reinforce the findings and ensure that the wider scientific community will accept the hypothesis. Without this replication of statistically significant results, the experiment and research have not fulfilled all of the requirements of testability. This prerequisite is essential to a hypothesis establishing itself as an accepted scientific truth.

For example, if you are performing a time-critical experiment, you will be using some type of stopwatch. Generally, it is reasonable to assume that the instruments are reliable and will keep true and accurate time. However, diligent scientists take measurements many times, to minimize the chances of malfunction and maintain validity and reliability.

At the other extreme, any experiment that uses human judgment is always going to come under question. For example, if observers rate certain aspects, as in Bandura's Bobo Doll Experiment, then the reliability of the test is compromised. Human judgment can vary wildly between observers, and the same individual may rate things differently depending upon time of day and current mood. This means that such experiments are more difficult to repeat and are inherently less reliable.

Reliability is a necessary ingredient for determining the overall validity of a scientific experiment and enhancing the strength of the results. Debate between social and pure scientists concerning reliability is robust and ongoing.
(Source: Explorable, https://explorable.com/validity-and-reliability, as on 12th March 2018.)

When we examine a construct in a study, we choose one of a number of possible ways to measure that construct (if you are unsure what constructs are, or how constructs differ from variables, it may help to review these terms first). For example, we may choose to use questionnaire items, interview questions, and so forth. These questionnaire items or interview questions are part of the measurement procedure. This measurement procedure should provide an accurate representation of the construct it is measuring if it is to be considered valid. For example, if we want to measure the construct of intelligence, we need a measurement procedure that accurately measures a person's intelligence. Since there are many ways of thinking about intelligence (e.g., IQ, emotional intelligence, etc.), this can make it difficult to come up with a measurement procedure that has strong validity (Source: Laerd, http://dissertation.laerd.com/reliability-in-research.php, as on 12th March 2018).

In quantitative research, the measurement procedure consists of variables, whether a single variable or a number of variables that may make up a construct. When we think about the reliability of these variables, we want to know how stable or constant they are. This assumption, that the variable you are measuring is stable or constant, is central to the concept of reliability. In principle, a measurement procedure that is stable or constant should produce the same (or nearly the same) results if the same individuals and conditions are used.

So what do we mean when we say that a measurement procedure is constant or stable? Some variables are more stable (constant) than others; that is, some change significantly, whilst others are reasonably constant. However, the measurement procedure that is used to measure a variable introduces some amount/degree of error, whether small or large. Therefore, the score measured (e.g., 0-100 in an exam) for a given variable consists of the true score plus error.
The true score is the actual score that would reliably reflect the measurement (e.g., for a person) on a given construct. For example, a score of 76 out of 100 in an IQ test actually reflects the intelligence of the person taking the test; if that person took another IQ test the next day, we would expect them to get 76 out of 100 again, assuming that we are only seeing that person's true score and not any error. The error reflects conditions that result in the measured score not reflecting the true score, but a variation on the actual score (e.g., a person whose true score on an IQ test should be 76 out of 100 gets 74 one day, but 79 the next, with the difference in the scores between the two days reflecting the error component).

This error component within a measurement procedure will vary from one measurement to the next, increasing and decreasing the score for the variable. It is assumed that this happens randomly, with the error averaging zero over time; that is, the increases and decreases in error over a number of measurements even themselves out, so that we end up with the true score (e.g., if the person whose true score should be 76 out of 100 took the IQ test 20 times, we would eventually see an average score of 76, despite the fact that the scores obtained were sometimes higher than 76 and sometimes lower).

However, not all measurement procedures have the same amount/degree of error (i.e., some measurement procedures are prone to greater error than others). Provided that the error component within a measurement procedure is relatively small, the scores that are attained over a number of measurements will be relatively consistent; that is, there will be small differences in the
scores between measurements. As such, we can say that the measurement procedure is reliable. Take the following example:

EXAMPLE #1
Error component: Small
Measurement of: Intelligence using IQ
True score: Actual level of intelligence
Error: Caused by factors including current mood, level of fatigue, general health, and luck in guessing answers to questions you don't know
Impact of error on scores: We would expect measurements of IQ to be a few points up or down of your actual IQ, not 105 to 135 points, for example (i.e., a small error component)
NOTE: You can learn more about reliability, error and intelligence/IQ by reading Schuerger and Witt (1989) and Bartholomew (2004).

By comparison, where the error component within a measurement procedure is relatively large, the scores that are obtained over a number of measurements will be relatively inconsistent; that is, there will be large differences in the scores between measurements. As such, we can say that the measurement procedure is not reliable. Take the following example:

EXAMPLE #2
Error component: Large
Measurement of: Reaction time, by measuring the speed of pressing a button when a light bulb goes on (i.e., the difference between the light appearing and the time when the button was pressed)
True score: Actual reaction speed of the person
Error: Level of alertness/focus (i.e., focus, distraction), level/focus of attention, fatigue of hand/finger, guessing behaviour
Impact of error on scores: Potential for time to be significantly different from one measurement to the next (e.g., 50% longer, or possibly 100% longer)
Solution: Take multiple measurements rather than a single measurement, and then average the scores.
NOTE: You can learn more about reliability, error and reaction times by reading Yellott (1971), Ratcliff (1993), and Salthouse and Hedden (2002).

All measurement procedures involve error. However, it is the amount/degree of error that indicates how reliable a measurement is.
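The true-score-plus-error model above can be sketched in a few lines of Python. This is only an illustrative simulation: the true score of 76 and the sizes of the two error components are assumptions chosen to mirror the examples, not figures from any real test.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TRUE_SCORE = 76  # assumed true level on the construct (illustrative)

def measure(error_sd):
    """One measurement = true score + random, mean-zero error."""
    return TRUE_SCORE + random.gauss(0, error_sd)

# A reliable procedure has a small error component: scores cluster near 76.
reliable = [measure(error_sd=2) for _ in range(20)]

# An unreliable procedure has a large error component: scores swing widely.
unreliable = [measure(error_sd=15) for _ in range(20)]

# Because the error is assumed to average zero, repeated measurements
# averaged together drift toward the true score in both cases.
print(round(sum(reliable) / len(reliable), 1))
print(round(sum(unreliable) / len(unreliable), 1))
```

Note how the average of the unreliable procedure sits much closer to 76 than its individual measurements typically do; this is why taking multiple measurements and averaging them, as suggested in Example #2, improves consistency.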
When the amount of error is low, the reliability of the measurement is high. Conversely, when the amount of error is large, the reliability of the measurement is low. However, there are solutions to help improve measurement procedures that may be prone to large error components. For example, multiple measurements can be taken instead of a single measurement, with the scores from the multiple measurements being averaged. This will increase the consistency/stability of the measurement procedure.

What is Validity?

Validity encompasses the entire experimental concept and establishes whether the results obtained meet all of the requirements of the scientific research method. For example, there must have been randomization of the sample groups and appropriate care and diligence shown in the allocation of controls.

Internal validity dictates how an experimental design is structured and encompasses all of the steps of the scientific research method. Even if your results are great, sloppy and inconsistent design will compromise your integrity in the eyes of the scientific community. Internal validity and reliability are at the core of any experimental design.

External validity is the process of examining the results and questioning whether there are any other possible causal relationships. Control groups and randomization will lessen external validity problems, but no method can be completely successful. This is why the statistical proofs of a hypothesis are called significant, not absolute truth. Any scientific research design only puts forward a possible cause for the studied effect. There is always the chance that another unknown factor contributed to the results and findings. This extraneous causal relationship may become more apparent as techniques are refined and honed.

INTERNAL VALIDITY is affected by flaws within the study itself, such as not controlling some of the major variables (a design problem), or problems with the research instrument (a data collection problem) (Source: BYU, http://linguistics.byu.edu/faculty/henrichsenl/ResearchMethods/RM_2_18.html, as on 12th March 2018).
"Findings can be said to be internally invalid because they may have been affected by factors other than those thought to have caused them, or because the interpretation of the data by the researcher is not clearly supportable" (Seliger & Shohamy 1989, 95).

Here are some factors which affect internal validity:
Subject variability
Size of subject population
Time given for the data collection or experimental treatment
History
Attrition
Maturation
Instrument/task sensitivity

EXTERNAL VALIDITY is the extent to which you can generalize your findings to a larger group or other contexts. If your research lacks external validity, the findings cannot be applied to contexts other than the one in which you carried out your research. For example, if the subjects are all males from one ethnic group, your findings might not apply to females or other ethnic groups. Or, if you conducted your research in a highly controlled laboratory environment, your findings may not faithfully represent what might happen in the real world. "Findings can be said to be externally invalid because [they] cannot be extended or applied to contexts outside those in which the research took place" (Seliger & Shohamy 1989, 95).

Here are seven important factors which affect external validity:
Population characteristics (subjects)
Interaction of subject selection and research
Descriptive explicitness of the independent variable
The effect of the research environment
Researcher or experimenter effects
Data collection methodology
The effect of time

Measures to ensure the validity of a research study include, but are not limited to, the following points (Source: Research Methodology, https://research-methodology.net/research-methodology/reliability-validity-and-repeatability/, as on 12th March 2018):
a) An appropriate time scale for the study has to be selected;
b) An appropriate methodology has to be chosen, taking into account the characteristics of the study;
c) The most suitable sampling method for the study has to be selected;
d) The respondents must not be pressured in any way to select specific choices among the answer sets.

It is important to understand that although threats to research reliability and validity can never be totally eliminated, researchers need to strive to minimize these threats as much as possible.

Steps in Ensuring Validity

(Source: Statistics Solutions, http://www.statisticssolutions.com/conducting-qualitative-research-validity-in-qualitative-research/, as on 12th March 2018.)

The first step in ensuring validity is choosing a well-trained and skilled moderator (or facilitator). A good moderator will check personal bias and expectations at the door. He or she is interested in learning as much candid information from the research participants as possible, and respectful neutrality is a must if the goal is valid qualitative research. For this reason, organizations often employ moderators from outside the group or organization to help ensure that the responses are genuine and not influenced by "what we want to hear." For some academic applications, the moderator will disclose his or her perspectives and biases in the reporting of the data as a matter of full disclosure.

While a good moderator is key, a good sample group is also essential. Are the participants truly members of the segment from which they are recruited? Ethical recruiting is an important issue in qualitative research, as data collected from individuals who are not truly representative of their segment will not lead to valid results.

Another way to promote validity is to employ a strategy known as triangulation. To accomplish this, the research is done from multiple perspectives. This could take the form of using several moderators, different locations, or multiple individuals analyzing the same data; essentially, any technique that would inform the results from different angles. For some applications, for example, an organization may choose to run focus groups in parallel through two entirely different researchers and then compare the results.

Validity in qualitative research can also be checked by a technique known as respondent validation. This technique involves testing initial results with participants to see if they still ring true. Although the research has been interpreted and condensed, participants should still recognize the results as authentic and, at this stage, may even be able to refine the researcher's understanding.

When the study permits, deep saturation into the research will also promote validity. If responses become more consistent across larger numbers of samples, the data becomes more reliable.

Another technique to establish validity is to actively seek alternative explanations for what appear to be research results. If the researcher is able to exclude other scenarios, he or she is able to strengthen the validity of the findings. Related to this technique is asking questions in an inverse format.
While the techniques to establish validity in qualitative research may seem less concrete and defined than in some of the other scientific disciplines, strong research techniques will, indeed, assure an appropriate level of validity in qualitative research.

Threats to reliability

Threats to reliability are those factors that cause (or are sources of) error. After all, the instability or inconsistency in the measurement you are using comes from such error. Some of the sources of error in your dissertation may include: researcher (or observer) error, environmental changes and participant changes.

Researcher (or observer) error

There are many situations during the dissertation process where you are responsible for taking measurements. During this measurement process, as the researcher, you can introduce error when carrying out measurements. This is known as researcher (or observer) error. Even when a measurement process is considered to be precise (e.g., a stopwatch), your judgement will often be involved in the use of the measurement (e.g., when to start and stop the stopwatch). Human error (or human differences) is also a factor (e.g., the reaction time to start the watch). This becomes a greater problem as the number of researchers (observers) increases and/or the number of measurements increases (e.g., 10 people using stopwatches, making 100 time measurements).

Environmental changes

During the time between measurements (e.g., recording time on a stopwatch), there may be small environmental changes that influence the measurements being taken, creating error. These changes in the environment make it impossible to ensure that the same individual is measured in the same way (i.e., under identical conditions). For example, even two closely timed measurements may be affected by environmental conditions/variables (e.g., light, day, time, temperature, etc.). However, it should be noted that ensuring that individuals are measured in the same way each time (i.e., with the same/identical environmental conditions), without any environmental change, is an ideal.

Participant changes

Between measurements, it is also possible for research participants to change in some way. Whilst this potential for change is generally reduced if the time between measurements is short, this is not necessarily the case. It depends on the nature of the measurement (e.g., focus/attention affects reaction times, hunger/tiredness leads to reduced physical/mental performance, etc.). These participant changes can create error that reduces the reliability (i.e., consistency or stability) of measurements.
Types and methods/measures of reliability

The type of reliability that you should apply in your dissertation will vary depending on the research methods you select. In the sections below, we look at (a) successive measurements, (b) simultaneous measurements by more than one researcher, and (c) a single measurement point.

Successive measurements

It is common in quantitative research for successive measurements to be taken. After all, in experimental research and quasi-experimental research, researchers often conduct a pre-test, followed by a post-test. In such cases, we want to make sure that the measurement procedures that are used (e.g., a questionnaire, survey) produce measurements that are reliable, both for the pre-test and the post-test. Sometimes the measurement procedures are the same for the pre-test and the post-test, whilst on other occasions a different measurement procedure is used in the post-test. In both cases, we need to make sure that the measurement procedures that are used are reliable. However, we use different tests of reliability to achieve this: (a) test-retest reliability on separate days; and (b) parallel-forms reliability. Each of these tests of reliability is discussed in turn.

Test-retest reliability on separate days

Test-retest reliability on separate days assesses the stability of a measurement procedure (i.e., reliability as stability). We emphasize the fact that we are interested in test-retest reliability on separate days because test-retest reliability can also be assessed on the same day, where it has a different purpose (i.e., it assesses reliability as internal consistency rather than reliability as stability). A test (i.e., measurement procedure) is carried out on day one, and then repeated on day two or later. The scores between these two tests are compared by calculating the correlation coefficient between the two sets of scores. The same version of the measurement procedure (e.g., a survey) is used for both tests. The samples (i.e., the people being tested) for each test should be the same (or very similar); that is, the characteristics of the samples should be closely matched (e.g., on age, gender, etc.). If there is a strong relationship between the two sets of scores, highlighting consistency between the two tests, the measurement procedure is considered to be reliable (i.e., stable). Where the measurement procedure is reliable in this way, we would expect to see identical (or very similar) results from a similar sample under similar conditions when this measurement procedure is used in future.

Test-retest reliability on separate days is particularly appropriate for studies of physical performance, but it can also be used with written tests/survey methods. However, in such cases, there is greater potential for learning effects to result in spuriously high correlations (i.e., the reliability is exaggerated because the method cannot mitigate learning effects; it simply takes into account the two sets of scores).
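As a sketch, test-retest reliability can be quantified by correlating the two sets of scores, as described above. The day-one and day-two scores below are invented for illustration; in practice they would come from your own test and retest.

```python
# Hypothetical scores for the same ten participants, tested on two days.
day1 = [72, 65, 88, 91, 56, 77, 83, 69, 74, 80]
day2 = [70, 66, 90, 89, 58, 75, 85, 67, 73, 82]

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

r = pearson_r(day1, day2)
print(round(r, 3))  # a coefficient close to 1 suggests a stable (reliable) procedure
```

The same calculation underlies parallel-forms and inter-rater reliability, discussed below; only the source of the two score lists changes (two versions of the instrument, or two observers, rather than two days).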
The interval between the test and retest (i.e., between measurement procedures) will be determined by a number of factors. In physical performance tests, for example, you may need to assess the amount of rest participants require, especially if the test is physically demanding. In written tests/survey methods, greater time between the test and retest will likely increase the threat from learning effects. Therefore, you will need to assess what the appropriate interval between the test and retest is: too short, and there is the potential for memory effects from the first test; too long, and there is the potential for extraneous/confounding effects. Ultimately, with any length of interval, factors such as maturation, learning effects, changes in ability, outside influences/situational factors, participant interest, and so on, could affect the retest.

Parallel-forms reliability

Parallel-forms reliability (also known as the parallel-forms method, alternate-forms method or equivalence method/forms) is used to assess the reliability of a measurement procedure when different (alternate/modified) versions of the measurement procedure are used for the test and retest. The same group of participants is used for both test and retest. The measurement procedures, whilst different, should address the same construct (e.g., intelligence, depression, motivation, etc.). Where the test-retest reliability method is more appropriate for physical performance measures, the parallel-forms reliability method is more frequently used in written/standardised tests. It is seldom used in physical performance tests because designing two measurement procedures that measure the same thing is more challenging there than designing two sets of standardised test questions. The reliability of the measurement procedure is determined by the similarity/consistency of the results between the two versions of the measurement instrument (i.e., reliability as equivalence). Such reliability is tested using a t-test and the similarity of means and standard deviations (i.e., between the two sets of scores from the two versions of the measurement instrument), together with a high correlation coefficient.

Simultaneous measurements by more than one researcher

In quantitative research, sometimes more than one researcher is required when collecting measurements, which makes it important to assess the reliability of the simultaneous measurements that are taken. There are two common reasons for this: (a) experimenter bias and instrumental bias; and (b) experimental demands. Let's look at each in turn.

Experimenter bias and instrumental bias

Sometimes, we can think of the measurement device as the researcher collecting the data, since it is the researcher that is making the
assessment of the measurement. This is more likely to occur in qualitative research designs than quantitative research because qualitative research generally involves less structured and less standardised measurement procedures, such as unstructured and semi-structured interviews and observations. However, quantitative research also involves research methods where the score on the dependent variable that is given on a particular measurement procedure is determined by the researcher. In such cases, you want to avoid the potential for experimenter bias and instrumental bias, which are threats to internal validity and reliability.

For example, let's imagine that a researcher is using structured participant observation to assess social awkwardness (i.e., the dependent variable) in two different types of profession (i.e., the independent variable). For simplicity, let's imagine that two researchers monitor these two different groups of employees, and score their level of social awkwardness on a scale of 1-10 (e.g., 10 = extremely socially awkward). The way that a researcher scores may change during the course of an experiment for two reasons. First, the researcher can gain in experience (i.e., become more proficient) or become fatigued during the course of the experiment, which affects the way that observations are recorded. This can happen across groups, but also within a single group (even between pre- and post-tests). Second, a different researcher may be used for the pre-test and post-test measurement. In quantitative research using structured participant observation, it is important to consider the ability/experience of the researchers, and how this, or other factors relating to the researcher's scoring, may change over time. However, this will only lead to instrumental bias if the way that the researcher scores is different for the groups that are being measured (e.g., the control group versus the treatment group).
One of the goals of reliability as equivalence is to assess such experimenter bias and instrumental bias by comparing the similarity/consistency of the simultaneous measurements that are being taken.

Experimental demands

Sometimes there are too many measurements to be taken by one researcher (e.g., lots of participants), or the measurements are geographically dispersed (e.g., measurements have to be taken at different locations). This may also result in simultaneous measurements being taken. Since the judgement of researchers is not perfect, we cannot assume that different researchers will record a measurement of something in the same way (e.g., measure the social awkwardness of a person on a scale of 1-10 simply by observing them). In order to assess how reliable such simultaneous measurements are, we can use inter-rater reliability. Such inter-rater reliability is a measure of the correlation between the scores provided by the two observers, which indicates the extent of the agreement between them (i.e., reliability as equivalence).

Single measurement point

Unlike test-retest reliability, parallel-forms reliability and inter-rater reliability, testing for internal consistency only requires the measurement procedure to be completed once (i.e., during the course of the experiment, without the need for a pre- and post-test). This may reflect post-test-only designs in experimental and quasi-experimental research, as well as single tests in non-experimental research (e.g., relationship-based research) that have no intervention/treatment. When faced with such a scenario (i.e., where the measurement procedure is only completed once), we examine the reliability of the measurement procedure that has been created in terms of its internal consistency; that is, the internal consistency of the different items that make up the measurement instrument. Reliability as internal consistency can be determined using a number of methods. We look at the split-half method and Cronbach's alpha.

Split-half reliability

Split-half reliability is mainly used for written/standardized tests, but it is sometimes used in physical/human performance tests (albeit ones that require a number of trials). However, it is based on the assumption that the measurement procedure can be divided (i.e., split) into two matched halves. Split-half reliability is assessed by splitting the measures/items from the measurement procedure in half, and then calculating the scores for each half separately. Before calculating the split-half reliability of the scores, you have to decide how to split the measures/items from the measurement procedure (e.g., a written/standardized test). How you do this will affect the values you obtain.
o One option is simply to divide the measurement procedure in half; that is, take the scores from the measures/items in the first half of the measurement procedure and compare them to the scores from those measures/items in the second half of the measurement procedure. This can be problematic because of (a) issues of test design (e.g., easier/harder questions are in the first/second half of the measurement procedure), (b) participant fatigue/concentration/focus (i.e., scores may decrease during the second half of the measurement procedure), and (c) different items/types of content in different parts of the test.

o Another option is to compare odd- and even-numbered items/measures from the measurement procedure. The aim of this method is to try to match the measures/items that are being compared in terms of content, test design (i.e., difficulty), participant demands, and so forth. This helps to avoid some of the potential biases that arise from simply dividing the measurement procedure in two.

After dividing the measures/items from the measurement procedure, the scores from each of the halves are calculated separately, before the internal consistency between the two sets of scores is assessed, usually through a correlation (e.g., using the Spearman-Brown formula). The measurement procedure is considered to demonstrate split-half reliability if the two sets of scores are highly correlated (i.e., there is a strong relationship between the scores).

Cronbach's alpha

Cronbach's alpha coefficient (also known as the coefficient alpha technique or alpha coefficient of reliability) is a test of reliability as internal consistency (Cronbach, 1951). At the undergraduate and master's dissertation level, it is more likely to be used than the split-half method. It is most likely to be used in written/standardized tests (e.g., a survey). Cronbach's alpha can also be used to measure split-half reliability. However, rather than simply examining two sets of scores (that is, computing the split-half reliability on the measurement procedure only once), Cronbach's alpha does this for each measure/item within a measurement procedure (e.g., every question within a survey). Therefore, Cronbach's alpha examines the scores between each measure/item and the sum of all the other relevant measures/items you are interested in. This provides us with a coefficient of inter-item correlations, where a strong relationship between the measures/items within the measurement procedure suggests high internal consistency (e.g., a Cronbach's alpha coefficient of .80). Cronbach's alpha is often used when you have multi-item scales (e.g., a measurement procedure, such as a survey, with multiple questions). It is also a versatile test of reliability as internal consistency because it can be used for attitudinal measurements, which are popular amongst undergraduate and master's level students (e.g., attitudinal measurements include Likert scales with options such as strongly agree, agree, neither agree nor disagree, disagree, strongly disagree). However, Cronbach's alpha does not determine the unidimensionality of a measurement procedure (i.e., that a measurement procedure only measures one construct, such as depression, rather than being able to distinguish between multiple constructs that are being measured within a measurement procedure, perhaps depression and employee burnout). This is because you could get a high Cronbach's alpha coefficient (e.g., .80) when testing a measurement procedure that involves two or more constructs.

How do I use these tests of reliability?

In order to examine reliability, a number of statistical tests can be used. These
include the Pearson correlation, Spearman's correlation, the independent t-test, the dependent t-test, one-way ANOVA, repeated measures ANOVA and Cronbach's alpha.

Activity 6

Select one of: Pearson correlation, Spearman's correlation, independent t-test, dependent t-test, one-way ANOVA, repeated measures ANOVA, or Cronbach's alpha. Research the technique and outline how it may be used to examine reliability.
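By way of illustration, here is a minimal sketch of the last technique on that list, Cronbach's alpha, computed from first principles. The Likert-style responses (six participants answering four items on a 1-5 scale) are invented for the example; in real work you would normally use a statistics package rather than hand-rolled code.

```python
# Hypothetical responses: six participants x four Likert items (1-5).
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

def variance(scores):
    """Population variance of a list of scores."""
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # scores regrouped per item
    item_variances = sum(variance(item) for item in items)
    total_variance = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_variances / total_variance)

alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # prints 0.93 for this made-up data
```

A coefficient of .80 or above is often read as high internal consistency. Note, as the section above cautions, that a high alpha here does not prove the four items measure a single construct; it only indicates that the items vary together.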
Review relevant research ethics and codes of conduct
How ethical issues arise in business research at every stage

Discussions of ethics tend to sound worthy, sometimes border on the philosophical, and occasionally stray right off the point. Why should this be? Ethics relate to moral choices affecting decisions, standards and behaviour, and it is quite hard to lay down a set of clear rules that cover all possible moral choices. This is especially true in research, where the practical aspects of a study (e.g. how and when to meet people for interview, which data to sample, how to deal with someone changing their mind about being part of a study, or coming across information which you are not really supposed to have), the potential isolation of you as the researcher (not being in a group or class all doing the same thing, but following your own research with your own objectives and contacts), and your possible inexperience of research at this stage of your studies can all contribute to a feeling of doubt and worry about what to do for the best. Sometimes it can be quite a shock, when you have been used to getting pretty clear ideas about how to do something, to find you have to make your own decisions about how things will be done. Ethical choices we have never imagined can just creep up and hit us. An obvious example would be when, as a very honest student, we start to collect some data together and realize that one source of data is completely out of step with the rest. For a professional researcher, that is an interesting challenge, which will create its own new pattern of research and investigation. But for a business student with a fast-approaching hand-in deadline, the temptation to lose the odd piece of data can be great. We are not suggesting that we have to be great moral advocates here; perhaps that is a matter for our own consciences. But we must anticipate as much as we can the moral choices and dilemmas which the practice of research will bring, and try to find appropriate ethical ways of dealing with them.
Codes of Ethical Conduct

The National Statement on Ethical Conduct in Human Research [http://www.nhmrc.gov.au/guidelines-publications/e72] sets out the national guidelines for ethical conduct in research involving human participants. Australian human research ethics committees use these guidelines as the basis for approving research, and researchers should design their projects in accordance with them. The purpose of the Statement is to promote ethically good research that accords participants the respect and protection that is due to them, and is of benefit to the wider community. The Statement clarifies the responsibilities of researchers in the ethical design, conduct and dissemination of results of human research.17

17 Source: University of Melbourne, as at https://staff.unimelb.edu.au/research/ethics-integrity/humanethics/apply-for-ethics-approval/getting-started, as on 12th March, 2018.
Key ethical concerns18

Informed consent

Informed consent is a key ethical requirement. Participants must understand what the research involves and what will be done with their data before they consent to take part (see the National Statement). The usual way to obtain informed consent is in writing, by use of a consent form that is signed by the participant and retained by the researcher as a record of the agreement. Because the researcher retains the consent form, there also needs to be an information sheet, with the same details, for participants to keep. Both the consent form and the information sheet should include the researcher's name and contact details, the title and a brief description of the project, details of how the identities of participants will be protected (both when storing the raw research data and in its published form), a statement that participation is voluntary and that participants can withdraw at any time, and provision for signature and date.

Surveys

Provided that a privacy statement (i.e. warning) is prominently displayed, completion of the survey can be deemed to constitute consent. The University values diversity and seeks to include all people in the community. To ensure that all people are treated in a dignified and non-discriminatory manner, in all cases where gender data is collected, the researcher should provide a gender-inclusive option on surveys. In practice this may require, at a minimum, providing an 'Other' or 'O' option for any question on gender identity.

Confidentiality

The Sponsoring Organisation needs to know how the researcher will address the issue of confidentiality, i.e. how the identities of participants will be protected in the raw research data and in any published material. Researchers must ensure that the privacy of their participants is adequately protected. The Sponsoring Organisation is bound by the provisions of the Commonwealth Privacy Act 1988.
Of specific relevance are its 13 Australian Privacy Principles (APPs), set out in Schedule 1 of the Act as amended by the Privacy Amendment (Enhancing Privacy Protection) Act 2012. The APPs detail the requirements for collection, storage, use and disclosure of personal information.

18 Source: Australian National University, as at https://services.anu.edu.au/research-support/ethicsintegrity/key-ethical-concerns, as on 12th March, 2018.

The term "anonymous" is sometimes used incorrectly by researchers when they mean that identities will be suppressed in published material. If individuals are identified or potentially identifiable in the raw research data, then it is not accurate to refer to them as "anonymous", even if they are not identified in any
publications. In the consent form and information sheet, researchers need to explain to participants how their privacy will be protected. Blanket guarantees of confidentiality (e.g. assurances of "strict confidentiality") are not helpful. If the term "confidential" is used in information provided to participants, a full description of precisely what confidentiality means in the context of the given research project should be given. Researchers should be aware that, under Australian law, any data they collect can potentially be subpoenaed. Depending on the nature of the research, it may be helpful to qualify promises of confidentiality with terms such as "as far as possible" or "as far as the law allows".

Privacy and the Internet

Increasingly, the web is being used for surveys, but this raises particular privacy concerns. The Office of the Federal Privacy Commissioner has issued Guidelines for Federal and ACT Government Websites. These guidelines reflect the widespread concern among net users about a lack of transparency regarding the use and disclosure of personal information by websites, the tracking of individuals' activities at websites, and the security of their information in the Internet environment. The Privacy Act requires that a person be given details about what information is being collected, the purpose for which it is being collected, how it will be used and, if the information is to be disclosed, to whom it will be disclosed. It is important that a person be given sufficient information to enable them to decide whether or not they wish to participate in the project. Apart from the ethical issues involved, organisations should require that any email or web-based questionnaire include a privacy statement in order to meet the requirements of federal, State and Territory privacy legislation.
The following is needed:

- The privacy statement or warning for potential respondents must be prominently displayed with any web-based survey, usually on the same page as the questionnaire or prominently linked to it.
- Researchers using this methodology must familiarise themselves with the Office of the Federal Privacy Commissioner's Guidelines for Federal and ACT Government Websites.
- If the survey is located on The Australian National University website, there should also be a hyperlink to The Australian National University's own privacy statement and its information on security.

At a minimum, the privacy statement should include the following information:

- What information is being collected about individuals when they visit the website or use email
- Why this information is being collected
- How it will be used
- If it will be disclosed, and to whom
- A warning that there are risks associated with using the Internet as a transmission medium (this applies also to emails, if this medium is to be used)
- An offer to provide other options, if possible, for providing information, e.g. telephone or paper response
- If any security measures, such as encryption, are provided, information about this (this could include a hyperlink to a brief statement on web security)

At a minimum, the following should be included in the privacy statement; additional information may be needed depending on each case:

Privacy statement

Security of the website
Users should be aware that the World Wide Web is an insecure public network that gives rise to a potential risk that a user's transactions are being viewed, intercepted or modified by third parties, or that data which the user downloads may contain computer viruses or other defects.

Purpose of data collection
This information is being sought for a research project entitled (TITLE). The researcher is (NAME AND CONTACT DETAILS). The project aims to (BRIEF DESCRIPTION OF PROJECT AIMS). The information you provide will only be used for the purpose for which you have provided it. It will not be disclosed without your consent.

Security of the data
The data will be kept secure by (DESCRIBE METHODOLOGY IN BRIEF). At the completion of the research project the data will be (DESCRIBE HOW THE RAW DATA WILL BE KEPT, FOR HOW LONG, AND WHAT WILL HAPPEN WITH PUBLISHED DATA) [e.g. will names be used, or other identifying details?]. As the web can be an insecure medium, you may choose to complete this survey by [provide alternative methods, e.g. telephone or mail out. If any security measures are being used, then provide information about these.]

Recruitment: How, Who, and What to do when it doesn't go well

Recruitment is a critical ethical concern in human research, and it is one element of research on which the Ethics Committee particularly focusses.
When preparing an ethics protocol, it is important that you describe recruitment plans and processes in detail, and that you be as realistic as possible when thinking about who might be willing and able to participate in your research. Remember, the central principle of voluntary participation underpins all recruitment efforts, and great care needs to be taken not to create conditions under which potential participants feel pressure to join in the research.
Sampling. For quantitative studies, you may have calculated a required sample size (e.g. to achieve a desired level of precision in your research outcomes), but such calculations generally rely on assumptions such as random sampling and independence between subjects, neither of which may genuinely hold. For qualitative research, where the object may not be to generalise findings from a sample to a population (ethnographic studies are a useful example), other forms of "sampling" may be used. A term the Ethics Committee often encounters is "snowball sampling", and researchers need to be aware that such an approach may create conditions whereby potential participants feel pressure to join the research because they have been referred by a friend or colleague. It is particularly important in such cases to ensure that the principle of voluntary participation is stressed in information given to potential participants. It is also important, in this vein, to explain to potential participants that the research typically will not benefit them directly. Even though this advice may seem to limit the prospect that they will join, it is essential to the ethical conduct of research.

Who to recruit. Think carefully about your target participant group, and be realistic about whether people will be happy to join your research. Sometimes a small incentive (e.g. a gift card) can help, but it needs to be small enough not to be regarded as coercive. "Small" is relative, as well: if you are recruiting medical professionals, you may need to pay hundreds of dollars as an incentive; if you are working in an overseas community with much lower incomes than we in Australia enjoy, even $20 could be considered a coercive amount. If recruitment does not proceed as you planned, you may need to submit a variation to expand recruitment efforts, perhaps by using online platforms like Facebook, or by adding an incentive.
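As a concrete illustration of the sample-size calculation mentioned under Sampling, the sketch below uses the common textbook formula for estimating a proportion under simple random sampling, n = z^2 * p(1-p) / E^2. Remember that, as noted above, the underlying assumptions may not genuinely hold; the confidence level and margin of error chosen here are purely illustrative:

```python
# Required sample size for estimating a proportion, under simple
# random sampling: n = z^2 * p * (1 - p) / E^2, rounded up.
import math

z = 1.96   # z-score for 95% confidence
p = 0.5    # assumed proportion; 0.5 is the most conservative choice
E = 0.05   # desired margin of error (plus or minus 5 percentage points)

n = math.ceil(z ** 2 * p * (1 - p) / E ** 2)
# With these inputs, n works out to 385, the familiar textbook figure
# for 95% confidence and a 5% margin of error.
```

Changing `E` to 0.03 roughly triples the required sample, which is one practical reason recruitment plans need to be realistic about how many participants can actually be reached.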
The bottom line on recruitment is that participants are to be respected and valued: they give freely of their time and effort, and recruitment is seldom as simple as it may initially seem. When submitting a protocol, be mindful that the Committee will want to know as much detail as possible about how recruitment will proceed: who, how, how many, and what you will do if initial efforts don't work as well as you had hoped. Vague, overly optimistic descriptions of recruitment will generally not be accepted at face value, so time spent thinking through recruitment strategies before applying for ethics approval will be time well spent.

Use of lotteries or raffles

The HREC will not normally allow lotteries or raffles and does not want to encourage their use. However, they will be allowed when:

- There is clear potential that the research won't attract the required number of participants.
- There is no danger to participants, i.e. the research is not about addictive behaviour, gambling, etc.

Intentional Recruitment of Aboriginal and Torres Strait Islander Peoples

Research involving Aboriginal and Torres Strait Islander peoples is particularly sensitive. Central to such research are principles of respectful engagement and
consultation. The AIATSIS Guidelines for Ethical Research in Australian Indigenous Studies state that "it is essential that Indigenous people are full participants in research projects that concern them, share an understanding of the aims and methods of the research, and share the results of this work. At every stage, research with and about Indigenous peoples must be founded on a process of meaningful engagement and reciprocity between the researcher and Indigenous people." Accordingly, ethical review of protocols involving Indigenous peoples requires evidence to be presented by researchers of such consultation and, wherever possible, letters of support from community leaders or organisations that attest to the willingness of the community to be engaged in the research.

Activity 7

Review the Office of the Australian Information Commissioner website, https://www.oaic.gov.au/, and locate the 13 Australian Privacy Principles. List the principles.
How ethical issues can arise right through the research process

Here is a brief list of the kinds of issue which can arise at different points in the research process:

- Access (physical, cognitive, continuing): just getting at the appropriate people can be frustrating and tempt researchers to cut corners. Don't be tempted.
- Participant acceptance/access (not just those in authority): for example, you have permission to ask people in customer-facing positions some questions, but they don't know you and are not sure how far to trust you. Are you a representative of management?
- Time: people just don't respond in time for you to achieve project deadlines.
- Your identity as researcher: what do they know about your study, and about how the data you collect will be used? And whose data is it, if they spoke or wrote it?
- Re-phrasing research questions on the basis of feasibility (not wrong): you find that your initial idea won't work because you cannot gain access to the right people, so you may need to revise your research question to one
which is feasible, provided it is still valid and ethical.

- Convenience sampling: e.g. using people we know to take part, which could produce participants who simply want to please you with their answers; or excluding troublesome views or statistics, e.g. omitting a poor sales year in an otherwise rising trend. Reality is messy: do we want to smooth the mess and create simple answers, or do we want to understand messy reality in order to change or anticipate it?
- Data recording: what if the tape or digital recorder doesn't work? Can the data be recreated from your notes? Do we pretend it worked?
- Interviewing: e.g. what if the first interview turns up new ideas, which are then used in subsequent interviews? Can you include that first one in your data set? What if an interviewee starts to see things in a new light and uncovers painful memories or ideas? The latter can also happen in focus groups, where conflict and personal animosity could develop. How can this be handled?
- Your role in the data: we have already mentioned this; the researcher is not an object but a human being to whom people will react. What effect does this have on your data? Does it affect the validity of the results?
- Transcripts: if you transcribe an interview or conversation, what happens to it? Whose is it? How do you label it (Jo Bloggs' interview?)? And how exactly do you transcribe? Do you include repeated phrases or words? Do you attempt to record body language which may affect the meaning of what is said?
- Cheating in analysis when results don't fit: this can affect both quantitative and qualitative research methods. Remember that, provided the process was justified and conducted ethically and professionally, a not very exciting outcome does not really matter. We cannot all discover gravity or relativity, but we can all design sound research plans and carry them out professionally.
- Confidentiality in the report of your research: how do you ensure it?
- Anonymity in the report: how do you deal with it?
- Use of research data for new purposes: can you recycle data? How could you get ethical approval for this?

Strategies to ensure ethical issues in business research are addressed appropriately

Some key themes, and strategies to anticipate and deal with them, are given below.

Stakeholder analysis

For ethics generally in business research, it can be helpful to start by working out who the stakeholders are in your proposed study. These may include the research participants, their managers and other team members, "gatekeepers" (who may be senior managers or specific post-holders who can authorise your research), shareholders (in a publicly quoted company) who may be impacted if your research is detrimental to the company and is leaked, customers (for the same reason, in any kind of organisation), yourself as researcher, yourself as student, competitors to this organisation or activity, and suppliers. Can you identify
any more stakeholders? Some will be specific to the kind of research study undertaken; for example, a study of recruitment practices could affect potential employees. Once you know who might be affected by your research study, you can design a simple risk analysis: for each stakeholder, identify the type of risk from your research, its potential impact (low, medium or high) and the probability that it will happen (unlikely, possible, probable). Entering this into a grid will give you a clear idea of priorities in designing an ethical study, and should lead you to think about strategies to reduce undesirable impacts.

Participant anonymity

Participant anonymity is usually a basic requirement in business research, unless you are using a research method where a particular identity is relevant to the results and participants agree to their association with the research. So what does participant anonymity involve? It is not usually just a case of not putting names in the final report. It will be important to decide whether you need to devise a code for each participant (so you know who they are but they cannot be named by others), or whether this is not needed by the study, so no-one will have a code or a name. Can you refer to their title, role, function, department, site, etc.? All of these, in conjunction with your results, may reveal identity. Is it appropriate to record participants' names on questionnaires? (This issue is not just to do with ethics, since anonymity also affects how we answer questionnaires.) Can you stop yourself referring to someone in your study, to others in their company, who might try to identify them? If you have, for good reason, collected personal details, have you checked whether you comply with the requirements of any data protection legislation in your country? Why do you need to know someone's age or gender or ethnic group? Does it really affect the research outcomes and thus will be important data to collect?
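One minimal way to implement the participant-coding idea described above is sketched below; the names, the code format and the data fields are all invented for illustration, and in a real project the code-to-name key would live in a secured location governed by your data-management plan:

```python
# Participant coding for anonymity: each participant gets a neutral
# code (P01, P02, ...), and the code-to-name key is kept separately
# from the research data itself. All names here are invented.
participants = ["Alice Smith", "Bob Jones", "Carol White"]

# Key linking codes to identities. Store this securely and separately
# from transcripts and results, and destroy it per your retention plan.
key = {f"P{i + 1:02d}": name for i, name in enumerate(participants)}

# Only the neutral codes appear in the analysed data and final report.
coded_data = {code: {"responses": []} for code in key}
```

The design choice worth noting is the separation: the analysis files never contain names, so an accidental disclosure of results does not, by itself, identify anyone.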
Or could you redesign your study so that this kind of data is not important and need not be collected?

Informed consent

Once you know where to look for participants in your study, and you have identified how to achieve ethical involvement for them and their organisation, there is the practical business of obtaining their consent. Informed consent requires you to prepare, for all research participants, documentation which shows them what you are doing and why, what their role in the research is, what will happen to the data you collect from them, and what they are agreeing to do. It will also usually set out how you will keep and dispose of the data and how the required confidentiality will be ensured. It will also set out how participants can withdraw their consent at any time, in which case you will not proceed with their data/interview, etc. This is very detailed and seems like a lot of work, but in fact a short text can often achieve all the requirements of informed consent. This, or a brief statement referring to this documentation, must then be signed by your participants. Remember that no undue pressure should be brought to bear on any participant or gatekeeper, since this, however well-intentioned, will influence
their involvement in your research and will prove not only unethical, but may also invalidate results.

Objectivity

Let us assume that your motives are honest, in which case there are just two issues to tackle here. The first issue is the way data are collected and recorded. You may be using a specially designed relational database in which to record observations and related information, or we may be talking about a highlighter pen and notes in the margin of an interview transcript, or a clipboard and pencil. Whatever method is used to collect data and transfer them to a retrievable record, it must be designed for purpose, systematic, and capable of capturing all relevant details. Take, for example, a semi-structured interview method: what kind of system could be used to record the interview? A video recorder? Digital recorder? Tape recorder? Notepad and pen? A pro-forma with the main questions and spaces to record answers? Reflect for a moment on what kinds of issues could arise which might affect research objectivity, depending on the choice of system. Could any of these systems fail? If so, what would you do? How could you ensure continuing objectivity? The second issue arises when a research study is under way and something unexpected happens to cause a problem with your data. This might be a rogue result which doesn't fit the rest of the data, or a failed tape recording, or a key participant withdrawing from the study, as they have a right to do. At this stage of the research, however honest we are, there will be a temptation to fix the problem. So we should anticipate this temptation and understand, before it happens, that that is the road to failure in research. Academic and professional audiences will not be fooled, because they will understand and look for such issues. The moral responsibility of the researcher is considerable, and when researchers are found to have transgressed, they are likely to be held to account in the media.
To test this, search the web for media coverage of "fixed" or falsified research and its implications. Sadly, there are many examples to be found, but at least these will have been held to public account.
Activity 8

When is informed consent required? Why?
Practitioner researcher or internal researcher

This is an extreme case of having a potential unintended effect on the outcomes of your research. If you are researching an organization of which you are part, then you already have an understood role or status within that organization. It will be difficult for you suddenly to put on an "objective researcher" hat, and even if you could do this successfully, how easy would it be for your colleagues, subordinates or managers to see you differently in this role? However, an internal researcher may be in a position to conduct a kind of research which may be impossible from an external perspective. Can you think of an example in business research?
It can be very tempting to undertake participant observation in a covert way in your own organization, but this clearly raises ethical issues and possible bias. Could you possibly find more useful and reliable data covertly than by openly declaring your intention and gaining official agreement for access? In a few cases, the answer may be yes, but if so, there must be approval from any research ethics committee relating to your studies or research (or professional body ethics approval, e.g. relating to your work function), and in retrospect you must inform those involved that the study took place and why access was not officially sought in advance. Assurances must then be given about the use to which the research data will be put and to what extent it will be anonymised. Spying is not research!

Activity 9

In what circumstances might covert research be justified? How would you deal ethically with this?