Research Methodology
Statistics for description
Humans vary tremendously: not only on a worldwide scale, with regard to cultural differences and appearance, but also within cultures, nations and families. Variation is an inescapable fact of life. In our studies of living beings and their activities we will often be working with several individuals at any one time. In surveys the numbers may run into thousands, although there will normally be smaller numbers in the more carefully controlled experimental type of investigation. Inevitably our efforts reward us with sets of data, which usually take the form of numbers. It is in conveying information about, and trying to interpret, these large sets of numbers in an efficient and convenient manner that we really need descriptive statistics.
Hence the principal purpose of statistics is the drawing of conclusions about large populations - human or otherwise - from comparatively small amounts of data. In a phrase such as “on average, there are about 100 accidents per week in Dodge City”, the word ‘average’ denotes one kind of descriptive statistic: an approximate number which indicates a typical or central figure for a group of numbers, formally called a ‘measure of central tendency’. Another type of descriptive statistic qualifies the word ‘about’: it is called a ‘measure of spread’ (or sometimes a ‘measure of dispersion’) and indicates just how much the word ‘about’ means for a particular set of figures.
The other main use of statistics is in decision-making, where one is not confident that the ‘truth’ has been revealed. In an experiment, certain events take place, changes are recorded, and the findings, which will usually comprise numbers of some sort, are used as a basis for drawing conclusions about the underlying events. Statistics used in this way are called ‘inferential statistics’.
The similar groups used in comparative research studies are called the ‘experimental group’ and the ‘control group’; the control group serves as the reference against which the recorded changes are compared. Research is carried out in a set-up called an ‘experiment’, at the completion of which the investigator will be the proud owner of sets of ‘scores’ (the results) obtained from the victims, who are usually referred to as ‘subjects’. Another piece of jargon used in experimental work is the verb ‘run’ (‘running’ either subjects or experiments), which describes the participation of subjects in an experiment.
The purpose of summary statistics is to replace a huge, indigestible mass of numbers (data) by just one or two numbers that, together, convey most of the essential information. For univariate data (a single quantity) there are two main types of summary statistic: measures of location and measures of spread. Measures of location answer the question ‘what sort of size values are we talking about?’; measures of spread answer the question ‘how much do the values vary?’
The ‘mode’ of a set of discrete data is the single value that occurs most frequently. If there are two such values the data are called bimodal; with three or more they are called multimodal. For qualitative data we refer not to a mode, but to a modal class. The word ‘median’, meaning middle, is used for the value in the middle of the ordered list, which gives a good idea of the general size of the data. The mean, often called the average, is equal to the sum of all the observed values divided by the total number of observations.
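As a rough illustration, the three measures of location can be computed directly with Python's standard statistics module; the set of scores below is invented for the example:

```python
from statistics import mean, median, mode

scores = [3, 5, 5, 6, 7, 8, 9]   # a small, made-up set of discrete scores

print(mode(scores))    # the single value occurring most frequently
print(median(scores))  # the middle value of the ordered list
print(mean(scores))    # the sum of the values divided by their number
```

For this set the mode is 5, the median is 6, and the mean is 43/7, a little over 6.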
The term ‘variable’ refers to the description of the quantity being measured, and the term ‘observed value’ or ‘observation’ is used for the result of the measurement. The word ‘data’, plural of ‘datum’, means pieces of information. There are three common types of data: qualitative, discrete and continuous. Qualitative data consist of descriptions using names. Discrete data consist of numerical values from a list of possible outcomes, whereas continuous data consist of numerical values in cases where it is not possible to make a list of the outcomes. For example, when measuring people we may record their heights to the nearest centimetre, in which case observations of a continuous quantity are being recorded using discrete values. The data are discrete in cases where the scores are all whole numbers.

Diagrams are an effective way of conveying information. If only a small number of discrete values are possible, then the best approach is often to use a tally chart, followed by a summary in a frequency table and representation using a bar chart. If a large number of discrete values are possible, then the best approach is often a ‘stem-and-leaf diagram’ - a convenient alternative to the tally chart for listing the data, in which the stem represents the most significant digit(s) and the leaves are the less significant digits - followed by a summary in a grouped frequency table and representation using a histogram.
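A stem-and-leaf diagram of this kind takes only a few lines to sketch in Python; the height data below are invented for the example, with each value split into a stem (the leading digits) and a leaf (the final digit):

```python
from collections import defaultdict

# made-up heights recorded to the nearest centimetre
heights = [172, 168, 181, 175, 168, 170, 172, 169]

stems = defaultdict(list)
for h in sorted(heights):
    stems[h // 10].append(h % 10)   # stem = tens and above, leaf = units

for stem in sorted(stems):
    print(stem, '|', ' '.join(str(leaf) for leaf in stems[stem]))
```

This prints one row per stem (16 | 8 8 9, then 17 | 0 2 2 5, then 18 | 1), so the shape of the distribution is visible at a glance while every original value is still recoverable.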
Pie charts and compound bar charts are useful when the features of interest are the relative sizes of the frequencies in alternative categories. There are many ways of portraying data. Whatever method is used, try to make it self-explanatory for the reader, and preferably artistic.
The ‘range’ tells you how widely a distribution of numbers is spread. It is easily obtained by subtracting the smallest score from the largest in the particular distribution of numbers under consideration. The problem with the range, as with the mean, is that extreme values have a very big effect on the descriptive statistic. The ‘mean deviation’ is a number which indicates how much, on average, the scores in a distribution differ from a central point, the mean. In the set of numbers 8, 9, 10, 11, 12, the mean is 10 and the range is 4. The number 8 is 2 points away from the mean, and so is the number 12. The numbers 9 and 11 are both 1 point away from the mean, and 10, the remaining number in the set, is the mean and so does not differ at all. Listing these differences: 2 + 1 + 0 + 1 + 2 = 6. There are five numbers in the group, so the average (mean) amount they all vary from the mean is 6 divided by 5, or 1.2 points. The differences 2, 1, 0, … which were obtained are called ‘deviations’, and 1.2 is the mean of the deviations.
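The arithmetic above can be checked with a short Python function (a sketch written for these notes, not taken from the source texts):

```python
def mean_deviation(numbers):
    m = sum(numbers) / len(numbers)                         # the mean
    deviations = [abs(x - m) for x in numbers]              # distance of each score from the mean
    return sum(deviations) / len(numbers)                   # the mean of the deviations

print(mean_deviation([8, 9, 10, 11, 12]))  # 1.2
```

The absolute value is what makes the deviations of 8 and 12 both count as 2 rather than cancelling out as -2 and +2.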
In principle, the standard deviation, often shortened to SD, is very similar to the mean deviation. It summarises an average distance of all the scores from the mean of a particular set, but it is calculated in a slightly different manner. By squaring the above deviation figures, adding them, and then finding their mean, we obtain 2, which is the mean of the squared deviations, called the variance. This must now be ‘un-squared’ (square-rooted) to bring it back into the right perspective. The resulting figure, 1.4142, is the standard deviation. It is slightly higher than the figure of 1.2 which was the value of the mean deviation for the same set of numbers. Accordingly the variance is the standard deviation squared, and as it changes exactly in step with the standard deviation, we can often use it as an alternative measure of spread. A statistical test called the variance ratio or F test is used to compare the spreads of different distributions by looking at their variances. In fact a whole bunch of statistical tests, coming under the umbrella term ‘Analysis of Variance’ (shortened to ANOVA), exists and, as the name implies, they focus on variance.
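The same worked example in Python (a sketch; note that this divides by n, giving the population variance rather than the n − 1 sample version):

```python
from math import sqrt

def variance(numbers):
    m = sum(numbers) / len(numbers)
    return sum((x - m) ** 2 for x in numbers) / len(numbers)  # mean of the squared deviations

data = [8, 9, 10, 11, 12]
print(variance(data))        # 2.0  (the variance)
print(sqrt(variance(data)))  # 1.4142...  (the standard deviation)
```

Squaring before averaging and square-rooting afterwards is why the SD (about 1.4142) comes out slightly higher than the mean deviation (1.2) for the same numbers: squaring gives the larger deviations proportionally more influence.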
If a population is approximately symmetric then in a sample of reasonable size the mean and median will have similar values. Typically their values will also be close to that of the mode of the population. A population that is not symmetric is said to be skewed. A distribution with a long tail of high values is said to be positively skewed, in which case the mean is usually greater than the mode or the median. If there is a long tail of low values then the mean is likely to be the lowest of the three location measures and the distribution is said to be negatively skewed. Various measures of skewness exist. One, known as Pearson’s coefficient of skewness, is given by (mean - mode) divided by the standard deviation. There is also the alternative ‘quartile coefficient’ of skewness. The ‘weighted mean’ of values x1, x2, … with weights w1, w2, … is used, for example, to calculate the average wage of employees not as a simple average of the possible wages, but by weighting each wage by the number of employees who receive it.
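The wage example can be sketched as follows (the wage figures and employee counts are invented for illustration):

```python
def weighted_mean(values, weights):
    # each value counts in proportion to its weight
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

wages = [300, 400, 600]   # the possible wages (made up)
employees = [5, 3, 2]     # how many employees receive each wage

print(weighted_mean(wages, employees))  # 390.0
```

Here the weighted mean is 390, lower than the simple average of the three wages (433.33), because most employees are on the lowest wage.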
As a general rule, the more subjects a person has to interview or use in an experiment, the less likely he or she is to carry out the job carefully. The less well trained people are, the less seriously they will take a study, and the less likely they are to understand the subtle nuances of the task in hand. This all comes back to cost again, although there must be a point at which, even with all the financial resources one could wish for, it is still impossible or impracticable to study the complete population of subjects.
Sources:
Graham Upton and Ian Cook, Introducing Statistics, Oxford University Press, 1998 (B 873)
Frances Clegg, Simple Statistics, Cambridge University Press, 1990 (B 902)
On Qualitative Research Method
In dealing with empiricism - broadly defined as all research in which pure data or uninterpreted facts are the solid bedrock of research - account must be taken of the objections raised by researchers who render life difficult for the supporters of either quantitative or mainstream qualitative methods. Hermeneuticians, critical theorists, poststructuralists, linguistic philosophers, discourse analysts, feminists, constructivists, reflectivists and other troublemakers may leave their readers irresolute vis-a-vis empirical research. In qualitative method the consideration of open, equivocal empirical material, and the focus on such material, is a central criterion, although of course some qualitative methods do stress the importance of categorizations. The distinction between standardization and non-standardization as the dividing line between quantitative and qualitative methods thus becomes a little blurred, which does not prevent it from being useful. The debate, however, does appear to be dying down, partly because the arguments have run dry and partly because polarization no longer seems to be popular in discussions about method. If we can avoid the trap of regarding quantitative results as robust and unequivocal reflections of a reality ‘out there’, there is no reason to be anti-quantitative (Silverman, 1985).
Nonetheless, a number of practices which originate from quantitative studies may seem inappropriate to qualitative research. These include the assumptions that social science research can only be valid if based on experimental data, official statistics or the random sampling of populations, and that quantified data are the only valid or generalizable social facts. There are areas of social reality which quantitative statistics cannot measure. The introduction of a reflective approach means that due attention is paid to the interpretive, political and rhetorical nature of empirical research; it could thus be characterized as an intellectualization of qualitative method - a competence that lies in the space between abstract philosophy and narrow questions of method. This in turn calls for an awareness among researchers of a broad range of insights: into interpretive acts, into the political, ideological and ethical issues of the social sciences, and into their own construction of the ‘data’ or empirical material about which they have something to say. It also means introducing these insights into their empirical work. Reflection is thus a question of recognizing fully the ambivalent relation of a researcher’s text to the realities studied. Reflection means interpreting one’s own interpretations, looking at one’s own perspectives from other perspectives, and turning a self-critical eye onto one’s own authority as interpreter and author. At the same time, it is pragmatically fruitful to assume the existence of a reality beyond the researcher’s egocentricity and the ethnocentricity of the research community - its paradigms, consciousness, texts (the research results) and rhetorical manoeuvring.
Summarizing the preferences of qualitative researchers:
1- A preference for qualitative data - understood simply as the analysis of words and images rather than numbers
2- A preference for naturally occurring data - observation rather than experiment, unstructured rather than structured interviews
3- A preference for meanings rather than behaviour - attempting ‘to document the world from the point of view of the people studied’ (Hammersley, 1992: 165)
4- A rejection of natural science as a model
5- A preference for inductive, hypothesis-generating research rather than hypothesis testing (cf. Glaser and Strauss, 1967)
Source: adapted from Hammersley, 1992: 160-72
Reflective research has two basic characteristics: careful interpretation and reflection. The first implies that all references to empirical data are the results of interpretation. Thus the idea that measurements, observations, the statements of interview subjects, and the study of secondary data such as statistics or archival data have an unequivocal or unproblematic relationship to anything outside the empirical material is rejected on principle. Consideration of the fundamental importance of interpretation means that an assumption of a simple mirroring thesis of the relationship between reality or empirical facts and research results (text) has to be rejected. Interpretation comes to the forefront of the research work. This calls for the utmost awareness of the theoretical assumptions, the importance of language and pre-understanding, all of which constitute major determinants of the interpretation. The second element, reflection, turns attention inwards towards the person of the researcher, the relevant research community, society as a whole, intellectual and cultural traditions, and the central importance, as well as problematic nature, of language and narrative (the form of presentation) in the research context.
Systematic reflection on several different levels can endow the interpretation with a quality that makes empirical research of value. Reflection can, in the context of empirical research, be defined as the launching of a critical self-exploration of one’s own interpretations of empirical material, including its construction. Thus, in reflective empirical research the centre of gravity is shifted from the handling of empirical material towards, as far as possible, a consideration of the perceptual, cognitive, theoretical, linguistic, textual, political and cultural circumstances that form the backdrop to the interpretations. These circumstances make the interpretations possible, but to a varying degree they also mean that research becomes in part an unconscious undertaking. For example, it is difficult for researchers to clarify the taken-for-granted assumptions and blind spots in their own social culture, research community and language. These issues are also relevant to quantitative research; indeed, a good deal of the criticism - such as that directed at the adoption of a naïve view of language - touches to an even greater extent on quantitative methods. In brief, the contributions that have emerged from the different orientations are: systematics and techniques in research procedures, whereby qualitative research should follow some well-reasoned logic in interacting with the empirical material and use rigorous techniques; clarification of the primacy of interpretation, indicating that research is a fundamentally interpretive activity - the recognition that all research work includes and is driven by an interpreter; awareness of the political and ideological character of research, for what is explored, and how it is explored, can hardly avoid either supporting or challenging existing social conditions - the interpretations are not neutral but are part of political and ideological conditions; and reflection in relation to the problem of representation and the authority of the results.
Finally, good research should build upon a general awareness and a systematic, explicit treatment of these positions and the problems, as well as the possibilities, which they indicate.
Data and Sources of Data
Grounded theory proceeds from empirical data. Generally speaking, data in grounded theory can be described in vague terms as something empirical: often some event, often in the form of an incident, often in the form of some social interaction. A first prerequisite for grounding a theory on data is that some data do exist. It is thus important to consider where data can be found - that is, the sources of data. Data can sometimes be thin or inaccessible in those places where they are traditionally sought in the social sciences. As regards the sources of data, grounded theory provides a whole range of unconventional tips, apart from the traditional sources such as participant observation, interviews, etc. Relevant in this context are documentary sources such as letters, biographies, autobiographies, memoirs, speeches, novels, diaries, jokes, photographs and city plans. It is also noted that various tactics in library research are reminiscent of those in fieldwork. These include going to the right shelves for sources about events versus choosing the right locale for observation or the right interview person; studying symposium proceedings versus taking part in symposia; and checking what the people involved say about events afterwards, through documentary sources versus interviews.
Grounded theory starts from data in order to create categories, a procedure referred to as coding. The categories in turn have properties, which are simply properties or determinations of the concepts. In coding, data are assigned to a particular category; similarly, the category is construed from the data. By minute examination (Strauss, 1987) of data and categories - that is, by shifting them around in our minds in all possible ways, always with concrete everyday practice in mind - we think out possible properties of the categories, which can enrich them. More specifically, the use of a special coding paradigm is recommended: to code data for relevance to whatever phenomena are referenced by a given category, in terms of the following: conditions, interaction among the actors, strategies and tactics, and consequences. The first and fourth concepts refer to causality: conditions simply refers to causes, and consequences to effects. The other two are largely self-explanatory; interaction among the actors refers to such relations as are not directly concerned with the use of strategies and tactics.
Field research is essentially a matter of immersing oneself in a naturally occurring set of events in order to gain firsthand knowledge of the situation. (Singleton et al., 1988: 11)
The inspection of nonquantified data may be particularly helpful if it is done periodically throughout a study rather than postponed to the end of the statistical analysis. Frequently, a single incident noted by a perceptive observer contains the clue to an understanding of a phenomenon. If the social scientist becomes aware of this implication at a moment when he can still add to his material or exploit further the data he has already collected, he may considerably enrich the quality of his conclusions. (Selltiz et al., 1964: 435)
Some qualitative researchers argue that a concern for the reliability of observations arises only within the quantitative research tradition. Because what they call the positivist position sees no difference between the natural and social worlds, reliable measures of social life are only needed by such positivists. Conversely, it is argued, once we treat social reality as always in flux, it makes no sense to worry about whether our research instruments measure accurately. A second criticism of qualitative research relates to how sound the explanations it offers are. This complaint questions the validity of much qualitative research. Validity is another word for truth, and it is threatened when, for example, the researcher fails to deal with contrary cases, or when extended immersion in the field leads to a certain preciousness about the validity of the researcher’s own interpretation. There is also the issue of summarizing by using only telling examples for their pragmatism.
Criteria for the evaluation of research:
Are the methods of research appropriate to the nature of the question being asked?
Is the connection to an existing body of knowledge or theory clear?
Are there clear accounts of the criteria used for the selection of cases for study, and of the data collection and analysis?
Does the sensitivity of the methods match the needs of the research question?
Was the data collection and record keeping systematic?
Is reference made to accepted procedures for analysis?
How systematic is the analysis?
Is there adequate discussion of how themes, concepts and categories were derived from the data?
Is there adequate discussion of the evidence for and against the researcher’s arguments?
Is a clear distinction made between the data and their interpretation?
Source: adapted from criteria agreed and adopted by the British Sociological Association Medical Sociology Group, Sept 1996
We are not faced, then with a stark choice between words and numbers, or even between precise and imprecise data; but rather with a range from more to less precise data. Furthermore, our decisions about what level of precision is appropriate in relation to any particular claim should depend on the nature of what we are trying to describe, on the likely accuracy of our descriptions, on our purposes, and on the resources available to us; not on ideological commitment to one methodological paradigm or another. (Hammersley, 1992: 163)
The basic points concluded are as follows. First, qualitative research involves a variety of quite different approaches. Second, although some quantitative research can be properly criticized or found insufficient, the same may be said about some qualitative research. Third, in these circumstances it is sensible to make pragmatic choices between research methodologies according to your research problem. Finally, doing qualitative research should offer no protection from the rigorous, critical standards that should be applied to any enterprise concerned to sort fact from fancy.
References:
Robin Flowerdew and David Martin, Methods in Human Geography (B 779)
Norman K. Denzin and Yvonna S. Lincoln, Handbook of Qualitative Research, Sage Publications (B 749)
Guy M. Robinson, Methods and Techniques in Human Geography, Wiley, 1998, www.wiley.co.uk (B 856)
David Silverman, Doing Qualitative Research, Sage Publications, 2000 (B 831)
Mats Alvesson and Kaj Skoldberg, Reflexive Methodology: New Vistas for Qualitative Research, Sage Publications, 2000 (B 839)
Thomas R. Black, Evaluating Social Science Research: An Introduction, Sage Publications, 1993 (B 734)