Friday, February 29, 2008

Strength of Association

The obvious mathematical extension, and more general form, of the non-parametric Mann-Whitney test is the Kruskal-Wallis test, which carries the same problems of interpretation. Its test statistic has a different distribution from the other methods: when the null hypothesis is true the test statistic follows the Chi squared distribution, which is used mainly for the analysis of categorical data.

X2 = (observed difference in percentages / standard error of the difference)2
The interpretation of X2 is as follows: the larger the value of X2 the smaller the probability P and hence the stronger the evidence that the null hypothesis is untrue.

Any variation among the groups will increase the test statistic, so we are concerned only with the upper tail of the Chi squared distribution. The idea of one and two sided tests does not apply with three or more groups. One important comment on interpretation is the reminder that the size of X2 (or P) does not indicate the strength of the association, but rather the strength of the evidence against the null hypothesis of no association.

Suppose the null hypothesis of no treatment difference is true, and consider the hypothetical situation where one repeats the whole clinical trial over and over again with different patients each time. Then on average 5% of such repeat trials would produce a treatment difference large enough to make X2 > 3.84 and hence P < 0.05. Note that one common pitfall is to misinterpret P as being the probability that the null hypothesis is true.
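This 5% property can be checked by simulation. A minimal sketch, assuming only that under the null hypothesis the standardised treatment difference behaves like a standard normal variable, so that X2 is chi squared with one degree of freedom:

```python
import random

random.seed(42)

# Under the null hypothesis the standardised difference is approximately
# standard normal, so X^2 = z^2 follows a chi-squared distribution with
# one degree of freedom.
n_trials = 100_000
exceed = sum(1 for _ in range(n_trials) if random.gauss(0, 1) ** 2 > 3.84)

proportion = exceed / n_trials
print(f"Proportion of repeat trials with X^2 > 3.84: {proportion:.3f}")
```

Across the simulated repeat trials, roughly 5% produce X2 > 3.84, matching the interpretation of P < 0.05.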


Pocock S., Clinical Trials, 2003



Probability

All probabilities are conditional and so, if the situation changes, then probabilities may change.



Weight of evidence

Log (likelihood ratio) [the ratio - e.g. the sensitivity of an HIV test divided by one minus its specificity] is termed the 'weight of evidence', a quantity used by Alan Turing in the statistical techniques for breaking the Enigma codes at Bletchley Park during WWII.
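As a small illustration (the test characteristics below are invented for the example; the positive likelihood ratio is conventionally sensitivity divided by one minus specificity):

```python
import math

# Hypothetical test characteristics, chosen only for illustration.
sensitivity = 0.99   # P(positive test | disease present)
specificity = 0.98   # P(negative test | disease absent)

# Positive likelihood ratio: how much more likely a positive result is
# in someone with the disease than in someone without it.
lr_positive = sensitivity / (1 - specificity)

# Turing's "weight of evidence" is the logarithm of the likelihood ratio.
weight_of_evidence = math.log(lr_positive)

print(f"LR+ = {lr_positive:.1f}")
print(f"Weight of evidence = {weight_of_evidence:.2f}")
```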

Myles J., Abrams K., Spiegelhalter D., Bayesian Approaches to Clinical Trials and Health Care Evaluation, 2004

Sample Selection: Case-control and Cohort Studies

Sample Selection

It is always desirable for the sample in a study to be representative of the population of interest, but this is not as important in experiments as in observational studies. The sample should be chosen to be as similar as possible to the relevant population, so it is essential to be able to describe just how the sample was chosen. Another way of combating variability is to increase the sample size: larger samples enable us to evaluate effects of interest more precisely. In a designed experiment there may be several conditions, called factors, being controlled by the investigator. The distinction between within subject and between subject comparisons is important. It is not possible to say what the best design is in any given circumstance. The choice of factors to control, which factors are between subject and which within, and how many observations to take for each subject is difficult, and it will often take much thought to arrive at a satisfactory design. Expert statistical help is particularly valuable at this stage. Any weaknesses in the design cannot be rectified later.

Random allocation is used to prevent bias where we want to compare treatments between groups which do not differ in any systematic way. While simple randomisation removes bias from the allocation procedure, it does not guarantee, for example, that the subjects in each group have similar age distributions. Indeed, in small studies it is highly likely that some chance imbalance will occur, which might complicate the interpretation of results. Even in studies with over 100 subjects there may be some substantial variation by chance, especially for characteristics that are quite rare. We can use stratified randomisation to achieve approximate balance of important characteristics (such as passive smoking) without sacrificing the advantages of randomisation. The method is to produce a separate block randomisation list for each subgroup (stratum). It is essential that stratified treatment allocation is based on block randomisation within each stratum rather than simple randomisation; otherwise there will be no control of the balance of treatments within strata, and the object of stratification will be defeated. Stratified randomisation can be extended to two or more stratifying variables. In small studies it is not practical to stratify on more than one or perhaps two variables, as the number of strata can quickly approach the number of subjects. In some studies it is either impossible or impractical to allocate treatments to individual subjects. Suppose that we wish to evaluate the effectiveness of a health education campaign in the newspapers to increase awareness of the dangers of drugs, or indeed to change behaviour. We cannot target individuals at random, but we can randomly assign whole areas to receive different media coverage. With a large number of small areas this cluster randomisation should give reliable results, but with a small number of very large areas there are problems in ensuring the comparability of the areas.
Here, it is valuable to obtain baseline data before the study starts so that changes within areas over the time of the study can be compared. Other clusters sometimes used in experimental research are schools, hospitals and families.
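The stratified block randomisation described above can be sketched as follows (the block size of four and the two strata are illustrative assumptions):

```python
import random

random.seed(7)

def block_randomisation_list(n_blocks, block=("A", "A", "B", "B")):
    """Permuted-block list: every block of four contains two of each treatment."""
    allocations = []
    for _ in range(n_blocks):
        b = list(block)
        random.shuffle(b)
        allocations.extend(b)
    return allocations

# A separate block randomisation list is produced for each stratum,
# e.g. passive smokers versus non-smokers.
strata = {
    "passive smoker": block_randomisation_list(5),
    "non-smoker": block_randomisation_list(5),
}

for stratum, allocations in strata.items():
    print(stratum, "".join(allocations))
```

Within each stratum the treatments stay balanced after every complete block, which is exactly the control of balance that simple randomisation within strata would fail to guarantee.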

Observational Studies


Many studies are carried out to investigate possible associations between various factors and the development of a particular disease or condition. There is no logical difference between comparing the outcome of two groups of patients given alternative treatments and comparing the outcome of groups receiving different exposures. In general, however, many areas of epidemiological research are not amenable to being investigated by randomised trials. We cannot randomise individuals to smoke or not to smoke, nor to work in particular jobs, and other factors such as age and race are not controllable by the individual. We must therefore use observational studies to study factors or exposures which cannot be controlled by the investigators. Nevertheless, the goal of an observational study should be to arrive at the same conclusions that would have been obtained by an experimental trial.
There are two main types of observational study that are used to investigate the causal factors - the case-control study and the cohort study. In a retrospective case-control study a number of subjects with the disease in question (the cases) are identified along with some unaffected subjects (controls). The past history of these groups in relation to the exposures of interest is then compared. In contrast, in a prospective cohort study a group of subjects is identified and followed prospectively, perhaps for many years, and their subsequent medical history recorded. The cohort may be subdivided at the outset into groups with different characteristics, or the study may be used to investigate which subjects go on to develop a particular disease.

In the case control study we identify a group of subjects (cases) with the disease or condition of interest, say lung cancer, and an unaffected group (controls), and compare their past exposure to one or more factors of interest, such as consumption of carrots. If the cases report greater exposure than the controls we may infer that exposure is causally related to the disease of interest, for example that consumption of carrots affects the risk of developing lung cancer.

Disadvantages of case control studies:
- They are inefficient for the evaluation of rare exposures: it can be time consuming and expensive to find people with the relevant condition.
- We cannot compute incidence rates of disease in exposed/non-exposed individuals unless the study is population based, because we are deliberately selecting a subset of people with the condition of interest rather than studying the population at large.
- The temporal relationship between exposure and disease may be difficult to establish. This is a particular problem in retrospective studies when people are asked to recall periods of time and sequences of events.
- They may be prone to bias arising from difficulties in ensuring that cases and controls are similar.

Advantages of case control studies
Despite their weaknesses, case control designs are widely used due to a number of advantages over longitudinal designs:

- Rare diseases can be studied, as the specific recruitment of affected individuals can ensure that a sufficiently large number of cases is included in the study for meaningful analysis.
- Diseases with long latency can be studied efficiently since recruitment can focus on subjects that already have the medical condition of interest.
- They allow the effects of multiple exposures on the development of the disease to be studied at the same time.
- They are relatively quick and cheap to undertake due to their retrospective nature.



Altman, Medical Statistics, Oxford Univ.


Tuesday, February 26, 2008

Active Research

Importantly, appropriate [research] methods and disciplinary approaches always follow from the central questions and puzzles to be addressed, not the other way round. Researchers should select those methods that will best address the question in hand. However, methodological pragmatism does not mean that vague approaches to method are permissible. Questions of the validity of measures and inferences, and of the reliability of data sources and analysis, must always be at the forefront of your thinking. Students will be expected to explain and justify their methods and approaches rigorously and precisely.

In addition to internal and external validity researchers need to pay close attention to construct validity. There are numerous ways to go about establishing the construct validity of a test, but the basic strategy is always the same. The test developer sets up an experiment to demonstrate that a given test is indeed testing the construct that it claims to be testing. (Brown 1988: 103-104)

In all cases, whatever the particular methodological approach used, a critically important aspect of scholarly method is rigour in the use of sources. This means learning to be discriminating and judicious in the use of sources; distinguishing what is true from what merely appears to be true; striving, wherever possible, for an authentic source for any given fact, quotation, viewpoint or data-set; avoiding exclusive reliance on a second- or third-hand source when a better one can be found; using ‘triangulation’ to verify the reliability of sources; and making it unambiguously clear (for example in footnotes) which sources you have used, and exactly how you have used them. These simple guidelines are important whether you are engaged in historical and archival research, interviewing and fieldwork, quantitative or qualitative work. Following good practice regarding sources not only reduces the risks of factual error and misquotation, but also reduces the danger of failing to understand and communicate the original context of any fact, view or statement. (Research Methods, Oxford Univ. 2008) (1)

A form of research which is becoming increasingly significant in education is Action Research. John Elliott (1991:69) defines action research as “the study of a social situation with a view to improving the quality of action within it.“ Three defining characteristics of action research are highlighted: first, that it is carried out by practitioners rather than outside researchers; secondly, that it is collaborative; and thirdly, that it is aimed at changing things. A distinctive feature of action research is that those affected by planned changes have the primary responsibility for deciding on courses of critically informed action which seem likely to lead to improvement, and for evaluating the results of strategies tried out in practice. For some, action research is a group activity, and the essential impetus for carrying out action research is to change the system (Kemmis and McTaggart 1988:6).

The research bit in Action Research has been best defined by Stenhouse as systematic and sustained enquiry, planned and self-critical, which is subjected to public criticism. It does not have to include statistics or experimental and control groups, and it does not have to produce any kind of definitive proof of anything: it merely has to investigate an aspect of teaching in an orderly and objective fashion, to find out something new, however limited the context. The action bit means that the teacher is not just looking at someone else; he or she is actually acting in the classroom in order to try out the ideas that the research is throwing up, and to come to some conclusions that can help his or her own teaching as well as, through publication, that of others (Rudduck and Hopkins, 1985:18).

Action research is a worthwhile enterprise for novice teachers - indeed, for us all. The fact that it is very difficult to do well does not mean that it should not be done at all. It merely means that these difficulties should not be underestimated, and that we have to invest a lot of work - perhaps more than we might have expected - in tutoring and supporting the teachers or trainees who are embarking on it (Ur P. 1996).


(1) http://www.politics.ox.ac.uk/teaching/res_meths/introduction.asp










The policy idea which attracted the most comment was the proposal to support marriage in the tax and benefits system. There were many messages of support for the policy, as well as some cautioning against penalising single people or single parent families.

Steven Crabb, Stand up Speak up, www.conservatives.com





...Research backs up their assessment that social services and health visitors did not appear to have the will to engage with young fathers, and the aspirations of young men to be better fathers than the ones they had themselves are certainly not encouraged (Speak et al. 1997). The near collapse of marriage in such communities has almost completely eroded its function as a meaningful and beneficial life script (Hymowitz 2006), especially for men. Early fatherhood does not draw disadvantaged young men into dependable and responsible adulthood. A lack of purpose continues the cycle of worklessness, addiction and crime. Instead there appears to be an easy dependency on the state which people will not willingly give up.

This is an environment where young women routinely express the attitude that "everyone else is a single parent anyway, so what’s the big deal if I become one." However, as the boxed quote from one social worker indicates, the gap left by an absent father yawns more, not less, widely with time.

Breakthrough Britain, Policy Recommendations to the Conservative Party, 2008

Matched Studies


Matched designs are for studies where the outcome is observed on the same individual on two separate occasions, under different exposure or treatment, or where two different methods are applied. In case-control studies, we have binary outcome observations that follow a matched or paired design in selecting individuals. Each case is then matched with one or more controls chosen to have the same values for major confounding variables; being of similar age or living in the same area are two examples of matching criteria. However, in case-control studies matched designs often have few advantages, and many have serious disadvantages. Unless the matching factor is strongly associated with both the outcome and the exposure, the increase in efficiency may not be large. In some case-control studies it is difficult to define the population that gave rise to the cases. It is essential to note that if matching was used in the design, then the analysis must always take this into account. Stratifying on the case-control sets can be used to estimate exposure odds ratios, but this approach is severely limited because it does not allow for further control of the effects of confounding variables that were not also matching variables. This is because each stratum is a single case and its matched controls, so that further stratification is not possible. For example, if cases were individually matched with neighbourhood controls then it would not be possible to stratify additionally on age group. Stratification can be used to control for additional confounders only by restricting attention to those case-control sets that are homogeneous with respect to the confounders of interest. (Kirkwood, 214-223, 410-412)
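For the simplest case of 1:1 matching with a binary exposure, the stratified (matched-pair) analysis reduces to the ratio of the two kinds of discordant pairs. A minimal sketch with invented counts:

```python
# Each pair records (case exposed?, matched control exposed?).
# The counts below are invented for illustration.
pairs = [(1, 0)] * 30 + [(0, 1)] * 10 + [(1, 1)] * 25 + [(0, 0)] * 35

# Concordant pairs carry no information about the exposure odds ratio;
# only discordant pairs contribute.
case_exposed_only = sum(1 for case, control in pairs if case == 1 and control == 0)
control_exposed_only = sum(1 for case, control in pairs if case == 0 and control == 1)

odds_ratio = case_exposed_only / control_exposed_only
print(f"Matched exposure odds ratio = {odds_ratio:.1f}")  # 30 / 10 = 3.0
```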

Monday, February 25, 2008

Poisson Regression

Poisson regression models comparing two exposure groups give identical rate ratios, confidence intervals and P-values to those derived using other methods. Poisson regression to control for confounding is closely related to other methods for rate ratios. We can estimate and control for the effects of variables that change over time by splitting the follow-up time for each subject. Poisson regression models are fitted on a log scale; the results are then antilogged to give rate ratios and confidence intervals. The principles and the approach are exactly the same as those for logistic regression (Kirkwood, p. 249).
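For two exposure groups, the log-scale calculation that Poisson regression reproduces can be sketched directly (the event counts and person-years are invented for illustration; the standard error of the log rate ratio is sqrt(1/d1 + 1/d0)):

```python
import math

# Invented follow-up data for two exposure groups.
d1, pyears1 = 30, 1000.0   # events and person-years, exposed group
d0, pyears0 = 15, 1500.0   # events and person-years, unexposed group

rate_ratio = (d1 / pyears1) / (d0 / pyears0)

# Work on the log scale, as Poisson regression does, then antilog.
log_rr = math.log(rate_ratio)
se_log_rr = math.sqrt(1 / d1 + 1 / d0)
ci_lower = math.exp(log_rr - 1.96 * se_log_rr)
ci_upper = math.exp(log_rr + 1.96 * se_log_rr)

print(f"Rate ratio = {rate_ratio:.2f}, 95% CI ({ci_lower:.2f}, {ci_upper:.2f})")
```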

May You Know, That You are Known

Fear not, for I have redeemed you;
I have summoned you by name; you are mine.
When you pass through the waters
I will be with you
The flames will not set you ablaze
For I am the Lord, your God,.....
I will bring your children from the east
And gather you from the west
I will say to the north, give them up!
And to the south: do not hold them back
Bring my sons from afar
And my daughters from the ends of the earth…..

Isaiah:43

Saturday, February 23, 2008

The Odds of an Event

Many of the statistical methods for the analysis of binary outcome variables are based on the odds of an event, rather than on its probability.

The Odds of event A are defined as the probability that A does happen divided by the probability that it does not happen:

Odds(A) = prob(A happens) / prob(A does not happen) = prob(A) / (1 - prob(A))

Here 1 - prob(A) is the probability that A does not happen. By manipulating this equation, it is also possible to express the probability in terms of the odds:

Prob(A) = Odds(A) / (1 + Odds(A))

Thus it is possible to derive the odds from the probability and vice versa.

It can be seen that while probabilities must lie between 0 and 1, odds can take any value between 0 and infinity.

Thus the odds are always bigger than the probability (since 1 - prob(A) is less than one).
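The two conversions can be written directly as a small sketch:

```python
def odds(probability):
    """Odds of an event from its probability: p / (1 - p)."""
    return probability / (1 - probability)

def probability(odds_value):
    """Probability of an event from its odds: o / (1 + o)."""
    return odds_value / (1 + odds_value)

for p in (0.25, 0.5, 0.75, 0.9):
    o = odds(p)
    # The odds always exceed the probability, and the round trip recovers p.
    print(f"prob {p:.2f} -> odds {o:.3f} -> prob {probability(o):.2f}")
```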

Friday, February 22, 2008

About e




A graph of the hyperbola y = 1/x shows that e is the unique number with the property that the area of the region bounded by the hyperbola, the x axis, and the vertical lines x = 1 and x = e is 1.



e = 2.718... The natural log is the inverse of the exponential function e^x (inverse in the sense of the opposite operation). The Latin name is logarithmus naturalis, hence ln. In brief, e is the unique base for which the growth rate of the curve e^x at any point equals the value of the curve itself.
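The defining property stated above can be checked numerically, here with a simple trapezoidal approximation:

```python
import math

# Trapezoidal estimate of the area under y = 1/x between x = 1 and x = e.
n = 100_000
a, b = 1.0, math.e
h = (b - a) / n
area = sum((1 / (a + i * h) + 1 / (a + (i + 1) * h)) / 2 * h for i in range(n))

print(f"Area under 1/x from 1 to e: {area:.6f}")  # approximately 1
```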



see graph y=ex at:

http://economics.hertford.ox.ac.uk/Micro1/maths_formula_sheet.pdf


source: mathworld.wolfram.com/NaturalLogarithm.html









Oxford University scientists hope to uncover the secret life of an important British seabird using technology developed with Microsoft Research Cambridge.

The species travels thousands of kilometres over the sea in search of food and only returns to land at night, making traditional field observation difficult. Yet monitoring wild seabird populations is increasingly important as they are particularly sensitive to environmental change and give an indication of the health of our oceans.


Big brother bird watching boosts ecology
A wireless surveillance network will be used to monitor the nesting and mating rituals of a remote North Atlantic seabird colony, providing scientists with unprecedented access to their behaviour and ecology.
Researchers from Oxford University and Microsoft Research in the UK developed the network to monitor more than 100,000 seabirds that breed during the summer on Skomer Island, off the west coast of Wales in the North Atlantic.

Butterfly mimicry riddle solved
Two separate research papers this week show that scientists are beginning to understand how butterflies achieve their extraordinary powers of mimicry, both as caterpillars and adults.
www.ft.com
www.ox.ac.uk/media










A unique centre for studying autism spectrum disorders and the brain opened in Oxford on 12 October; attendees included the Chancellor of Oxford University, Lord Patten.

A special brain scanner at the centre will allow researchers to study the brains of children and adults with autism in ‘real time’, as they think or complete tasks. This approach will lead to a better understanding of how the brains of people with autism spectrum disorders work differently, and ultimately perhaps to better treatments.

MEG is a non-invasive brain imaging technique which measures neuronal activity indirectly by recording induced magnetic fields. The scanner does not create any magnetic fields; it is silent and subjects sit in comfort under a helmet-shaped array of detectors. Thus the technique is particularly suitable for recording brain activity in children and adults with neurodevelopmental difficulties. Uniquely the Oxford MEG Centre houses a mock scanner and shielded room to enable participants to become accustomed to the imaging environment.

The autism research group at the University of Oxford leads an international study to identify autism susceptibility genes; uses several imaging techniques to understand the brain basis of autism; and is investigating how computer-generated worlds can be used to develop social skills. The team is currently looking for children and adults with autism to take part in their studies.
contact autism.research@psych.ox.ac.uk.

http://www.ox.ac.uk/media/news_releases_for_journalists/princess_megscanner.html

Tuesday, February 19, 2008

The causal theory




This humbleness of the all-mighty, all-wise, all-noble, animal-innocence character of Oxford is utterly irresistible!





“the chance of an event is the degree to which it is determined by its cause.” cited in The Causal Theory of Chance, Eagle A., Exeter college, Oxford Univ.




Chancellor of Oxford University to launch St Mary's Appeal

Lord Patten of Barnes, Chancellor of the University of Oxford, will host the launch of the St Mary's Development Appeal on 22 February 2008.
The appeal has three separate aims:

Firstly, a comprehensive restoration of much of the fabric of the church, and in particular of the tower and spire.

Secondly, an improvement of access and amenity for the Old Library to allow for disabled access and for greater use by church and outside groups.

Thirdly, to re-open the West Doors and to create a purpose-built office area on the North side of the church.

source: http://www.university-church.ox.ac.uk/
news_and_events/news.htm#Development


The Third, and Fourth Moments

Sample variance is calculated from the sum of the squared differences between each value and the sample mean; hence it is called the second moment. The third and fourth moments of a distribution are calculated in a similar way, based on the third and fourth powers of the differences.

Third moment = Sum (X - mean)³ / n
Fourth moment = Sum (X - mean)⁴ / n
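The formulas above can be sketched as follows (using the n-denominator form given here; some texts use n - 1 for the sample variance):

```python
def central_moment(values, k):
    """k-th central moment: mean of the k-th powers of deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** k for x in values) / n

data = [2, 4, 4, 4, 5, 5, 7, 9]
print("second moment (variance):", central_moment(data, 2))
print("third moment:", central_moment(data, 3))
print("fourth moment:", central_moment(data, 4))
```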

Medical Statistics, Kirkwood B., Blackwell Publishing, 2006










The pupil-teacher ratio


A study of a new school-level panel dataset, constructed from information provided by the Independent Schools Information Service (ISIS), shows a consistent negative relationship between the pupil-teacher ratio at a school and the average examination results at that school. The estimates indicate that the relationship persists even in “added-value” models conditional on previous exam results.

Pupil-teacher ratios vary between 7.9 at the 10th percentile and 14.2 at the 90th percentile, with correspondingly large differences in fees. In a sub-sample consisting of only secondary schools, the pupil-teacher ratio is 7.6 at the 10th percentile and 12.6 at the 90th percentile. The variables used for Pst (pupil characteristics) are dummy variables for boys’ and girls’ schools (other dummy variables include school size and capital spending).

The Impact of School Inputs on Student Performance: An Empirical Study of Private Schools in the United Kingdom, Kathryn Graddy, University of Oxford and CEPR
Margaret Stevens, University of Oxford, 2003





Persons, Minds and Bodies Project

E. Cohen, J. Barrett. We often think of people as consisting of minds and bodies, but is this a culturally specific way of thinking or a more widespread and fundamental way of thinking? Are there cross-culturally recurrent ways of conceptualizing what it is to be a person and of representing the relationships between persons and bodies? The area of cognitive science concerning how people develop to understand minds and mental states is termed ‘Theory of Mind.’

‘Theory of Mind’ typically focuses on how humans understand human minds (including beliefs, goals, emotions, motivations, and so on), but at CAM the area enjoys a unique treatment: we consider conceptualizations of nonhuman minds and the relationship of minds to bodies. We consider the myriad different ways in which humans use theory of mind to conceptualize spirit possessions, deities, and animals’ mental states. Further, we explore how Theory of Mind interacts with intuitive notions of the nature of bodies, especially in cultural contexts in which minds (or souls, identities, volition) are understood as separable from bodies, or displaced from bodies, or in which they are believed to intrude into bodies, as in spirit possession.

Many cognitive scientists would agree that the claims made by experimental psychologists and others regarding the universality of the mechanisms underlying notions of persons, understandings of bodies, and Theory of Mind (and their developmental trajectories) will remain speculative until these claims are tested cross-culturally, most obviously through collaboration with anthropologists.

The project has received funding from the British Academy.
source: http://www.icea.ox.ac.uk/research/cam/



The Principal Principle (Lewis, 1980): that the chance of A is the reasonable credence one would have in A given the history of a world up until some time, and the theory of chance for that world. Causal Theory of Chance, Eagle A., Exeter college, Oxford Univ.








THE WATER POVERTY INDEX WPI

Purpose of the WPI
Water management is a complex and difficult task. As populations grow and water resources become more scarce, this task will become even more complex. The WPI is mainly designed to help improve the situation for the over two billion people facing poor water endowments and poor adaptive capacity. Through its application, this tool can provide:

- A better understanding of the relationship between the physical extent of water availability, its ease of abstraction, and the level of community welfare it provides

- A mechanism for the prioritisation of water investments

- A means by which progress in the water sector can be monitored (e.g. towards the Millennium Development Goals)

When water allocation systems fail, poor people often have to use insecure or polluted sources, and conflicts over water use can arise. By making water management decisions more equitable and transparent, the WPI can contribute to the eradication of conditions which strengthen the poverty trap.

http://ocwr.ouce.ox.ac.uk/research/wmpg/wpi/wpi_leaflet.pdf

Sunday, February 17, 2008

The power of test

Multiple linear regression: The control of confounding factors

The inclusion of exposure variables that are strongly associated with the outcome variable will reduce the residual variation and hence decrease the standard error of the regression coefficients for other exposure variables. This means that it will increase both the precision of the estimates of the other regression coefficients and the likelihood that the related hypothesis tests will detect any real effects that exist. The latter attribute is called the power of the test.


Medical statistics, Kirkwood B., and Sterne J., Blackwell publishing, 2006



Systematic reviews

Systematic reviews of research are always preferred. For reliable evidence on rare harms, for example, we need a systematic review of case reports rather than a haphazard selection of them. Qualitative studies can also be incorporated in reviews.
Different types of question require different types of evidence: randomised trials can give good estimates of treatment effects but poor estimates of overall prognosis; comprehensive non-randomised inception cohort studies with prolonged follow up, however, might provide the reverse.

Hierarchies can lead to anomalous rankings. For example, a statement about one intervention may be graded level 1 on the basis of a systematic review of a few, small, poor quality randomised trials, whereas a statement about an alternative intervention may be graded level 2 on the basis of one large, well conducted, multi-centre, randomised trial. This ranking problem arises because of the objective of collapsing the multiple dimensions of quality (design, conduct, size, relevance, etc) into a single grade.

Whatever evidence is found, this should be clearly described rather than simply assigned to a level. Such considerations have led the authors of the BMJ’s Clinical Evidence to use a hierarchy for finding evidence but to forgo grading evidence into levels. Instead, they make explicit the type of evidence on which their conclusions are based.

To overcome flaws in evidence hierarchies we need either, firstly, to extend, improve, and standardise current evidence hierarchies; or, secondly, to abolish the notion of evidence hierarchies and levels of evidence and concentrate instead on teaching practitioners general principles of research, so that they can use these principles to appraise the quality and relevance of particular studies.

- Different types of research are needed to answer different types of clinical questions

- Irrespective of the type of research, systematic reviews are necessary

- Adequate grading of quality of evidence goes beyond the categorisation of research design

- Risk-benefit assessments should draw on a variety of types of research

- Clinicians need efficient search strategies for identifying reliable clinical research

Source: Assessing the quality of research; BMJ 2004;328;39-41
searched in www.ouls.ox.ac.uk

Friday, February 15, 2008

The law of excluded third

In logic, the law of the excluded third (more commonly known as the law of the excluded middle) states that the formula
"P ∨ ¬P" ("P or not-P") can be deduced from the calculus under investigation.
It is one of the defining properties of classical systems of logic.
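In a two-valued setting the law can be verified exhaustively, as this tiny sketch does:

```python
# "P or not-P" is True for every truth value of P in classical logic,
# so the formula is a tautology.
excluded_middle_holds = all(p or not p for p in (True, False))
print(excluded_middle_holds)  # True
```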



T = 16th February 2006 L = Oxford

Regret testing

1. Choose q ∈ Qd uniformly at random.
2. Play q for s periods in succession.
3. For each action a, compute the regret r(a) from not having played action a over these s periods.
4. If max_a r(a) > τ, go to step 1; otherwise retain the current q and go to step 2.
Given any two-person game G and any ε > 0, if both players use regret testing with sufficiently large s and d and sufficiently small τ, their behaviors constitute an ε-equilibrium of G in at least 1 – ε of all play periods (Foster and Young, 2006).
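A minimal one-player sketch of the procedure, with invented payoffs, grid and parameter values: here action 1 always pays 1 and action 0 pays 0, so the strategies q with low regret are those putting nearly all probability on action 1.

```python
import random

random.seed(1)

payoff = {0: 0.0, 1: 1.0}          # illustrative payoffs, independent of an opponent
Qd = [i / 10 for i in range(11)]   # discretised strategies: probability of action 1
s, tau = 200, 0.1                  # periods per test and regret tolerance

q = random.choice(Qd)              # step 1: choose q uniformly at random
retained = 0
for _ in range(10_000):            # safety cap on test cycles
    if retained >= 50:
        break                      # q has survived 50 consecutive tests
    # Step 2: play q for s periods in succession.
    plays = [1 if random.random() < q else 0 for _ in range(s)]
    realised = sum(payoff[a] for a in plays) / s
    # Step 3: regret from not having played action a throughout.
    regret = {a: payoff[a] - realised for a in payoff}
    # Step 4: resample q if the maximum regret exceeds tau, else retain it.
    if max(regret.values()) > tau:
        q = random.choice(Qd)
        retained = 0
    else:
        retained += 1

print(f"Retained strategy q = {q}")
```

With overwhelming probability the retained q ends up at the regret-minimising strategy q = 1. In the two-player setting of Foster and Young both players run this test simultaneously, and the theorem above bounds how often their joint behaviour is far from an ε-equilibrium.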








PMG

The Pooled Mean Group (PMG) technique takes account of parameter heterogeneity: given the relatively small sizes of the data sets used in empirical growth analyses, allowing full parameter heterogeneity implies estimating many parameters, with the associated imprecision. The pooled mean group estimator provides a middle path: it allows different short-run adjustment coefficients for each cross-sectional unit but restricts the long-run steady-state parameters to be constant across units. The econometric methodology, specifically the rationale and mechanics of the pooled mean group estimator applied to an unconstrained autoregressive distributed lag model, shows how to obtain common long-run or steady-state relationships between the dependent and independent variables.

The pmg estimation procedure provides a sagacious middle path between assuming identical coefficients and allowing complete parameter heterogeneity.

The pmg estimator allow the intercepts, short run coefficients and error variances to differ freely across parameters.

Pmg estimation implies that the long run coefficient (Q) is a non-linear function of the short term adjustment parameters.

source: www.ox.ac.uk - working paper
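As a hedged illustration of that non-linearity (the ARDL(1,1) form and all numbers are my own assumptions, writing the long-run coefficient, called Q above, as theta): in the model y_t = lam*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t, setting y and x to steady-state values gives theta = (b0 + b1) / (1 - lam).

```python
def long_run_coefficient(lam, betas):
    """Long-run coefficient of an ARDL model
    y_t = lam*y_{t-1} + sum_j betas[j]*x_{t-j} + e_t,
    found by setting y and x to their steady-state values:
    theta = sum(betas) / (1 - lam)."""
    return sum(betas) / (1.0 - lam)

# Two hypothetical cross-sectional units: different short-run dynamics,
# identical long-run coefficient -- the restriction PMG imposes.
print(long_run_coefficient(0.5, [0.6, 0.4]))  # 2.0
print(long_run_coefficient(0.2, [1.1, 0.5]))  # 2.0 (up to floating point)
```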


A pooled mean group analysis on aid and growth

The paper uses the pooled mean group estimator and an extended annual dataset to examine the effectiveness of aid on growth. The results indicate a significant long-run impact of aid on growth, but conditioning aid on ‘good’ policy reduces the long-run growth rate.

Keywords: Aid impact, Economic growth, Pooled mean group estimators

Date: November 2006 | Reference number: WPS/2006-14

www.economics.ox.ac.uk

Education strategy

What exactly do we mean by “learning” in a multi-agent context?

A natural definition is that players “learn” if they eventually succeed in predicting their opponents’ behavior with a high degree of accuracy (Foster and Young, 2001).

There is a well-known condition in statistics that guarantees that all players will learn to predict in the strong sense. Namely, it suffices that each player’s forecast of the others’ behavior, conditional on his own behavior, never exclude events that have positive probability under their actual joint behavior. This is the absolute continuity condition (Blackwell and Dubins, 1962; Kalai and Lehrer, 1993).

Interactive learning is inherently more complex than single-agent learning, because the act of learning changes the object to be learned. If agent A is trying to learn about agent B, A’s behavior will naturally depend on what she has learned so far, and also on what she hopes to learn next. But A’s behavior can be observed by B, hence B’s behavior may change as a result of A’s attempts to learn it. The same holds for B’s attempts to learn about A.

This feedback loop is a central and inescapable feature of multi-agent learning situations. It suggests that methods which work for single-agent learning problems may fail in multi-agent settings. It even suggests that learning could fail in general, that is, there may exist situations in which no rules allow players to learn one another’s behavior in a completely satisfactory sense.



THE POSSIBLE AND THE IMPOSSIBLE IN MULTI-AGENT LEARNING, H. Peyton Young, Dept. of Economics Working paper, Oxford Univ., Jan 2007








Over-educated?

In recent years, alongside the rapid expansion in higher education, there have been important changes in the types of qualifications being awarded by universities. While many of these new qualifications have emerged in response to changing economic needs, not all of them are career related. Thus, over-educated workers could essentially be comprised of those who have non-professional qualifications, a low quality of education, or both. Pryor and Schaffer (1999) showed that US workers who experienced downward occupational mobility generally had lower cognitive skills irrespective of educational credentials.

Beyond all aspects of human capital and job characteristics, several factors may give rise to labour market rigidities that limit the capacity of the market to fully utilise and reward highly educated workers. Such constraints could arise from family commitments, regional immobility or restrictive work practices. The Newcastle Alumni Survey containing a wealth of information on individual family circumstances and personal commitments was examined to see how far they may result in some graduates taking jobs that require less than their educational credentials.

The most distinguishing feature of the Newcastle Alumni Survey is that it is the only British data set that contains two direct questions measuring the extent of education under-utilisation. The first question is: “What is/was the minimum formal qualification level required for entering this job?” and the second question is: “What do you believe to be the education level required to actually do this job?” Answers to both questions are on a four-point scale as follows: postgraduate qualification, degree, sub-degree qualification, and no qualifications required. The first question provides a match between acquired and required qualifications to get the job, whereas the second question provides a direct measure of over-education in terms of job content. As all previous studies for the UK have relied on questions framed as in the first question, the incidence of over-education may have been overestimated by past researchers for this reason.

Looking at family commitment variables, having children prior to first job decreases (at the 10 percent level of significance) the probability of being over-educated. High debt commitments (i.e., debts in excess of £1000 upon leaving the University) raise the probability of being over-educated in the first employment. Data used from the Newcastle Alumni Survey was especially commissioned for research into over-education in the graduate population, to employ a subjective measure based upon a question that measures the education level needed to do the job. Our descriptive statistics revealed that about one in five university graduates were not employed in graduate- level positions after spending some time in the labour force. Unobserved ability differences between similarly educated workers and the quality of education are usual conjectures offered to explain these findings.

Source: The Determinants of Graduate Over-Education,
Peter Dolton (a) and Mary Silles (b)
(a) Department of Economics, University of Newcastle, Newcastle upon Tyne, NE1 7RU, England.
(b) St. John’s College, University of Oxford, Oxford, OX1 3JP, England.








In the field of education, the vitality of institutions and organisations is a function of the vitality of teachers, administrators and other stakeholders; hence, the vitality of students is directly related to the vitality of schools and institutions of higher learning. Teachers need to recognise their responsibility for helping to prepare every student to:
- look forward to a future that will bring many changes, and to accept the responsibility for helping to shape that future;
- recognise his special aptitudes and abilities, and thus facilitate the development of his individuality;
- develop a sound basis for accepting a defensible and evolving system of moral and ethical values for guidance in exercising his responsibilities as a citizen;
- recognise that happiness comes primarily from progress in achieving and helping others to achieve worthwhile goals and objectives;
- seek and utilise learning and knowledge as a basis for understanding meanings in relationship to his life and to society;
- learn to use the scientific method as a basis for studying and resolving the problems he encounters.

Labels:

Thursday, February 14, 2008

Educational values

"What are the qualities, attitudes, understandings and capacities which all young people should acquire through their education?" We need to give young learners far more than skills for employment alone, even if such skills are key to the country’s economy.

The paper highlights how the government’s use of business jargon, when describing the aims and values of education, demonstrates the changed understanding of education over the last few decades. Terms such as ‘inputs’, ‘measurable outputs’, ‘targets’, ‘curriculum delivery’, or ‘performance indicators’ sound business-like, as compared with only a few decades ago. In 1972, education was defined in terms of an ‘engagement’ between teacher and learner and, in 1931, as ‘the source of common enlightenment and common enjoyment’ .

The paper says that the government quite rightly defines its reforms as an attempt ‘to raise standards’, but the concept of what ‘standards’ means is rarely examined. Should it not be defined in terms of the overall aim or purpose of educating young people? If so, the paper says it makes it difficult to understand the ‘equivalence of standards’ between learning standards geared to the more efficient working of, say, a budget airline, as compared with those geared to grasping complex concepts of nuclear physics, or appreciating the poetry of Hopkins.

The Review believes that the central aims and values of education should be about making young people think intelligently and critically about the physical, social, economic and moral worlds they inhabit. It also recommends that recognition be given to ‘competence, to coping, to creativity, and to co-operation with others’ (as set out in Capability Manifesto of the Royal Society of the Arts in 1980); with respect for the experiences, concerns and aspirations of the learners; and provide the preparation for responsible and capable citizenship.

Education should also be about ideas and values which inspire and prepare young people to face actively the ‘big issues’ that affect them and their community, such as environmental change, racism and injustices of many kinds, it says.

The Review recommends that the aims and values of education be constantly appraised, and that teachers should play a central role in such deliberation. It also urges further discussion on this issue, through forums, that involve teachers, learners, parents and members of the community.


source: www.ox.ac.uk/media
Nuffield Review on Education



The Nuffield Review is an independent review of all aspects of 14-19 education and training: aims; quality of learning; curriculum; assessment; qualifications; progression to employment, training and higher education; providers; governance; policy. The question that the Review has posed is: what are the qualities, attitudes, understandings and capacities which, in different degrees, an educated 19 year old should have in this day and age?

For the government, the question has been how to improve the nation’s skills, considered one of the defining political, economic and social issues of our age, while looking at the wider aims of greater social inclusion, a more just society and personal fulfilment.

The review points to the task of education, which is to help young people realise some potentials and not others; it does so by drawing upon the cultural resources which we have inherited and through which those potential strengths and interests are directed. But the selection of this and not that potential (e.g. the potential for cooperation rather than conflict), and the choice of the cultural resources through which to develop ‘selected potentials’ (e.g. through the introduction to a particular literature), depend upon the values which are embodied in the underlying and often unexamined aims of education.

It’s always necessary, then, when the time comes to make changes to our education system, to pay attention to its broader aims. If we don’t do that, if we work on too narrow a front, then we risk damaging the values that ultimately define an educated and humane society. The pursuit of economic prosperity, for example, could be at the expense of social values, such as greater community cohesion, or of personal values such as those of personal fulfilment and growth. Derek Morrell (the civil servant who in effect was the architect of the Schools Council) argued in 1966 that, since there was lack of consensus over the aims of education at a time of rapid social change, we must find ways of living with diversity:

‘Jointly, we need to recognise that freedom and order can no longer be reconciled
through implicit acceptance of a broadly ranging and essentially static consensus on
educational aims and methods.’

One reason for the neglect of public deliberation in what are morally controversial issues is the changed language of education – one which recently has come to be dominated by the language of management. But, if one speaks the language of
management, one is in danger of treating young people and their teachers as objects to be managed. Cuban, in The Blackboard and the Bottom Line: Why Schools can’t be Businesses, refers to a successful businessman who, dedicated to improving public schools, told an audience of teachers, ‘If I ran a business the way you people operate your schools, I wouldn’t be in business very long’. Cross-examined by a teacher, he declared that he collected his blueberries, sending back those that did not meet the high quality he insisted on. To this the teacher replied, ‘We can never send back our blueberries. We take them rich, poor, gifted, exceptional, abused, frightened, confident, homeless, rude, and brilliant. We take them with attention deficit disorder, junior rheumatoid arthritis, and English as their second language. We take them all.’

There is an emphasis on a dualism between the academic and the vocational (or applied) that is questionable. In general, of course, both education and training are about the promotion of learning. That clearly is what the system is set up to bring about. But not all learning counts as education. The central meaning of education is evaluative: it picks out certain kinds of learning as worthwhile. In that sense, an educational activity is to be contrasted with mere training, or with indoctrination, or with activities which deaden the mind and the capacity to think.

There is a common association between ‘education’ and the initiation into the different forms of knowledge which constitute what it means to think intelligently – the acquisition and appropriate application of the concepts, principles and modes of enquiry to be found in the physical and social sciences, in the study of literature and history, in mathematics, in language and in the arts.

There is a need to recognize the importance, at every level of policy making and practice, of constant deliberation over these aims and values and their manifestation in the particular context of school or college; and to see the central role of teachers in such deliberation.

Source: Nuffield Review of 14-19 Education and Training, Issues Papers 6, Oxford Univ. Feb 2006

Labels:

Monday, February 11, 2008

How different are the variances?

In measuring separation, if F is much greater than one the null hypothesis is rejected, and hence we conclude that the means of the treatment populations differ. We take account of the magnitude of skewness in the F test. Exercise No 5: I have F = 1.889 ???

If the variances are not all equal, ANOVA will produce an anti-conservative p-value, yielding an increase in the probability of a type I error. Therefore, it is important to test, either formally or informally, whether the homogeneity of variance assumption is satisfied.
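An informal version of such a check, as a sketch: compare the largest and smallest group variances. The data and the cutoff of 4 here are illustrative assumptions of mine; a formal alternative would be Bartlett's or Levene's test.

```python
from statistics import variance

def variance_ratio_check(groups, max_ratio=4.0):
    """Informal homogeneity-of-variance check: the ratio of the largest to the
    smallest group variance; a common rule of thumb treats ratios below about
    4 as unproblematic for ANOVA."""
    variances = [variance(g) for g in groups]
    ratio = max(variances) / min(variances)
    return ratio, ratio <= max_ratio

groups = [[5.1, 5.4, 4.9, 5.0], [6.2, 6.0, 6.5, 6.1], [5.8, 5.5, 6.1, 6.0]]
ratio, ok = variance_ratio_check(groups)
print(f"variance ratio = {ratio:.2f}, homogeneity plausible: {ok}")
```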

In two-tailed settings, when the only aspect of interest is the trend above or below the chosen value, we can apply the Wilcoxon matched-pairs test (the paired-data counterpart of the Mann-Whitney test) to identify the magnitude of differences, to trace the trends and reveal the power of the suspects!

If there were no difference on average between the sample values and the hypothesized specific value, we would expect an equal number of observations above and below that value. We thus use the Binomial distribution to evaluate the probability of the observed frequencies when the true probability p is ½.
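This is the sign test, and the Binomial calculation can be sketched directly (the counts below are illustrative):

```python
from math import comb

def sign_test_p(n_above, n_below):
    """Two-sided sign test: the probability, under p = 1/2, of a split at
    least as extreme as the one observed among the non-tied values."""
    n = n_above + n_below
    k = min(n_above, n_below)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # P(X <= k), X ~ Bin(n, 1/2)
    return min(1.0, 2 * tail)

# Example: 9 of 10 observations lie above the hypothesized value.
print(sign_test_p(9, 1))  # 0.021484375
```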

Sunday, February 10, 2008

ANOVA

The basic idea underlying ANOVA relies on an important theorem in mathematical statistics which states that the total variance of the pooled sample can be divided into two components: the within groups variance and the between groups variance.

The within groups variance is simply the sum of the variances calculated for the individual groups; the between groups variance is the variance obtained using the means of each group as data. The within groups variance represents the internal variability of the groups; the between groups variance measures how well separated these groups are.



Medical statistics, Kirkwood, B, Blackwell Science publishing, 2003
www.ox.ac.uk
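The within/between decomposition described above can be sketched numerically (the data are invented for illustration):

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F statistic from the within/between decomposition."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    # Between-groups: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups: variability of observations around their own group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[6, 8, 4, 5, 3, 4], [8, 12, 9, 11, 6, 8], [13, 9, 11, 8, 7, 12]]
print(round(anova_f(groups), 2))  # 9.26
```

A large F means the groups are well separated relative to their internal variability.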

Friday, February 08, 2008

Confidence notes




ln as the natural logarithm: e ≈ 2.718. The natural log is the inverse of the exponential function e^x. The Latin name is logarithmus naturalis, hence ‘ln’. In brief, e is the constant that arises when the growth rate of a curve is proportional to its current value.
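Both facts can be checked quickly (a sketch; the exponent 2.5 and the value of n are arbitrary choices of mine):

```python
import math

# ln undoes exponentiation: log(e**x) = x.
print(math.isclose(math.log(math.e ** 2.5), 2.5))  # True

# e emerges from continuous proportional growth: (1 + 1/n)**n -> e as n grows.
print(round((1 + 1 / 1_000_000) ** 1_000_000, 5))  # 2.71828
```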



Confidence Interval: Importantly, there is a trade-off between precision (interval length) and reliability (coverage).

Source: Dept. of Continuing Education: Statistics for Health Researchers Course, Oxford Univ.
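The precision/reliability trade-off can be sketched for a mean with known standard deviation (sd = 10 and n = 25 are illustrative numbers of mine):

```python
from statistics import NormalDist

def ci_half_width(sd, n, level):
    """Half-width of a normal-theory confidence interval for a mean:
    z * sd / sqrt(n), where z is the two-sided critical value for the level."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return z * sd / n ** 0.5

# Higher coverage buys reliability at the price of a longer interval.
for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%}: +/- {ci_half_width(sd=10, n=25, level=level):.2f}")
```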



Null Hypothesis

The p-value is the probability, assuming the null hypothesis is true, of obtaining a result at least as extreme as the one observed; it is not the probability that the null hypothesis is true. We typically fail to reject the null hypothesis if the p-value is above the threshold of 0.05, which corresponds to a test with a significance level of 5%; the significance level is the probability of a type I error when the null hypothesis is true.

Statistics course, Oxford Univ.
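To see the 5% in action, a small simulation (a toy z-test with known sigma = 1; the setup is my own) shows the test rejecting a true null about 5% of the time:

```python
import random
from statistics import NormalDist, mean

random.seed(1)
trials, n = 2000, 30
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]   # the null is true: mu = 0
    z = mean(sample) * n ** 0.5                       # z = xbar / (sigma/sqrt(n)), sigma = 1
    p = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    rejections += p < 0.05
print(rejections / trials)  # close to 0.05
```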

Sample quantiles

In general the kth centile is the point below which k% of the values lie. In other words, we are looking for a subset of the values.

Think of the dividing point as the 'neutral zone', above which and below which the subsets of interest lie. That is why you don't find an 'equal' anywhere.

Let's try something pictorial, considering deciles (division into 10 equal groups):

See the 10 dots below. They enclose only 9 spaces, 'space' being the quantity of interest here.

. . . . . . . . . .

So to have 10 spaces you need 11 dots. So now you have 10 spaces in 10 equally marked divisions. The star-dot represents the boundary (I used a star as I couldn't use another colour), above and below which you have, in this case, 50% of the spaces (and in fact 50% of the dots if you exclude the boundary).

. . . . . * . . . . .


I hope this helps to make it a little clearer why there isn't an 'equal'.
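The dots-and-spaces picture can be checked numerically; Python's 'inclusive' quantile method, which treats the eleven dots as the whole population, puts the nine interior dots exactly at the deciles:

```python
from statistics import quantiles

# Eleven equally spaced dots enclose ten equal spaces; the nine interior
# cut points are then exactly the deciles.
dots = list(range(11))  # 0, 1, ..., 10
print(quantiles(dots, n=10, method='inclusive'))  # [1.0, 2.0, ..., 9.0]
```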

Marialena

Labels:

Maths for a sense of wonder

1089 and All That

Think of a three-figure number.
Any three-figure number will do, so long as the first
and last figures differ by 2 or more.
Now reverse it, and subtract the smaller number from
the larger. So, for example,
782 - 287 = 495.

Finally, reverse the new three-figure number, and add:

495 + 594 = 1089.

At the end of this procedure, then, we have a final answer
of 1089, though we have to expect, surely, that this final
answer will depend on which three-figure number we
start with.

But it doesn’t.

The final answer always turns out to be 1089.
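A brute-force check of the claim, as a sketch:

```python
def trick(n):
    """Reverse-and-subtract, then reverse-and-add, as described above."""
    diff = abs(n - int(str(n).zfill(3)[::-1]))
    return diff + int(str(diff).zfill(3)[::-1])

# Every three-figure number whose first and last figures differ by 2 or more:
results = {trick(n) for n in range(100, 1000) if abs(n // 100 - n % 10) >= 2}
print(results)  # {1089}
```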

It was the element of mystery and surprise, I think,
that put this result into a different league from some of
the work we were doing in school.

David Acheson, 1089 and all That, Published by Oxford University Press 2002
Reprinted 2003, 2004 (twice), 2005, 2006

Labels:

Saturday, February 02, 2008

Identity Analysis

There is only one answer to myriad questionings of who, where, what: that is I am an Oxford aficionado!

Friday, February 01, 2008

Health Research

Trials are underway for a new vaccine to combat the most deadly form of malaria. For the first time ever, researchers will use a virus found in chimpanzees to boost the efficacy of the vaccine. The trials will take place at the University of Oxford's Jenner Institute, led by its Director, Professor Adrian Hill, and are funded by the Wellcome Trust.

Malaria is one of the world's deadliest killers, killing over a million people each year, mainly women and young children in Africa and SE Asia. The most deadly strain of the disease, P. falciparum, is responsible for 80% of malaria infections and 90% of deaths. As yet, no vaccine exists against this strain. This is because, for much of their life-cycle, the parasites responsible for infection live inside cells, where they cannot be reached by antibodies.

‘We urgently need a vaccine to help in the fight against this deadly killer,’ says Professor Hill. ‘Malaria parasites are able to outwit our immune system by hiding out in the body's cells, however. Finding a way to generate enough immune cells and antibodies to identify and destroy the parasites will be the key to preventing infection.’

The vaccine being developed and trialled by Professor Hill's team in collaboration with Okairòs uses the company’s genetically-modified chimpanzee adenovirus to produce the malaria antigen and to stimulate a response to the vaccine in the body. Adenoviruses appear to be particularly potent for increasing the immune response to the malaria vaccine. However, because human adenoviruses, which include the common cold and gastroenteritis, are widespread, most people have developed some immunity towards them. Using a chimpanzee adenovirus ensures that a recipient is unlikely to have resistance to this component of the vaccine.

‘Chimpanzees have their own set of adenoviruses which rarely infect humans, so we have not built up immunity to them,’ explains virologist Dr Sarah Gilbert at the Jenner Institute, University of Oxford. ‘This is why we have chosen such a virus to form the backbone of the new vaccine.’
Professor Hill's team is currently recruiting for more volunteers for the first trials, which are to assess the safety of the vaccine. Because the active component of the adenovirus is removed, however, there is no danger of transmission to the human of the original chimpanzee virus.

The trial will also be measuring the response of the immune system. The team hopes to generate a response from CD8+ T-cells (sometimes known as killer cells) that should kill the parasites when they enter the liver, where they multiply undetected. However, if the T-cells do not kill all of the parasites, any that escape from liver into the bloodstream will still be able to enter red blood cells and cause illness.

The group plans to test a second vaccine which would then target the parasites in the bloodstream and red blood cells.

‘Our ultimate goal is a combination product which targets the parasite at both the liver stage and the blood stage,’ says Professor Hill. ‘Few people still think that you can get really strong protection from malaria based on a single component.’

Over a dozen vaccines have now been made by scientists at the University of Oxford and taken into clinical trials, but this is the first vaccine to have also been manufactured within a UK university.

Media at www.ox.ac.uk

Analysis: Desire as Belief

Desire as Belief

Lewis claims to reduce to absurdity the supposition that (some) desires are belief-like. He construes and attempts to refute the claim within a Bayesian framework (Jeffrey 1983), assuming that agents assign probabilistic degrees of belief (credences) and values to propositions, which jointly satisfy the following axiom, ADDITIVITY, where the Ai’s are a partition of A:

V(A) = Σi V(Ai)·P(Ai) / Σi P(Ai)

The value of a state of affairs, A, is the probability-weighted average of the values of all the ways it can come about.
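A quick numerical check of the axiom (the credences and values are invented for illustration):

```python
# Partition A into two ways it can come about, A1 and A2.
P = {"A1": 0.3, "A2": 0.2}   # credences in the cells of the partition
V = {"A1": 1.0, "A2": 0.0}   # values of the cells (good = 1, bad = 0)

# ADDITIVITY: V(A) = sum_i V(Ai)*P(Ai) / sum_i P(Ai)
v_A = sum(V[a] * P[a] for a in P) / sum(P.values())
print(v_A)  # 0.6: the probability-weighted average of the cell values
```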

Lewis assumes, harmlessly simplifying, that there are just two objective values (1 and 0), depending on whether a state of affairs is good or bad. The supposition that (some) desires are belief-like is now rendered as the claim that there is a function assigning to each proposition A a proposition Å (‘A is good’), such that the agent’s value for A always equals his credence in Å: V(A) = P(Å).

Bayesian decision theory in conjunction with Desire-as-Belief, imposes two different constraints on the value a rational agent may ascribe to the truth of a given proposition. The two constraints are incompatible. Lewis claims the culprit is Desire-as-Belief. ‘Decision theory is an intuitively convincing and well worked-out formal theory of belief, desire, and what it means to serve our desires according to our beliefs……if an anti-Humean Desire-as-Belief Thesis collides with Decision Theory, it is the Desire-as-Belief Thesis that must go’ (Lewis 1988:325).

Lewis suggests that the desire-as-belief doctrine has met ethical implications. If there were some propositions belief or disbelief in which was necessarily connected with desire, some of them presumably would be true; then we surely would want to say that the true ones were the objective truth about ethical reality (1996:60).

…..it might seem as if desire-as-belief isn’t necessary for objectivism. Even if there are evaluative propositions, they might not align properly with an agent’s motivations, since agents can act against their evaluative judgement. Almost everyone admits the occurrence of weakness of will. Some even go further. Evaluative beliefs, they claim, can be motivationally inert. Whereas the akratic agent is insufficiently motivated by his judgement, the immoralist is perfectly indifferent to moral considerations.

Desire as Belief, Lewis notwithstanding, ANALYSIS, Vol 67, No 294, April 2007, Blackwell publishing

Analysis: Permission to cheat

Prof.: God forbids cheating because cheating is wrong, cheating is not made wrong by God forbidding it.

Theology student : according to the divine command theory of morality, some acts are made obligatory by virtue of God’s command. On this theory, if God commands cheating, then cheating would actually be a duty.

Prof.: this just shows a defect with the divine command theory. If God orders us to cheat then the theory implies that cheating is permissible. But as you earlier noted, if cheating is permissible, then it is impossible to cheat. So God would be requiring the impossible.

Roy Sorensen, Permission to Cheat; Analysis, No 295, vol 67, July 2007, Blackwell publishing.

Labels: