Wednesday, January 31, 2007

Pension

Annuity Market

As recently as a hundred years ago, nearly all workers could expect to earn a wage for almost their entire lives, usually withdrawing from the labour force only when ill-health left them unable to work. Throughout the 20th Century this pattern was gradually replaced by a model whereby individuals stop working some time before the end of their life, while still relatively healthy. In many cases this long period of retirement would have been financed by a Defined Benefit (DB) occupational pension scheme, under which the employer, albeit indirectly, continued to pay a pension to the retired worker.


Provided the individual remains alive, an annuity provides a higher return than a standard savings product, because the annuity is an insurance product in which the individuals who die early cross-subsidise those who survive – a phenomenon called mortality drag. It is this cross-subsidy that allows the annuitant a higher level of consumption than a comparable savings product could support.
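A stylised one-period illustration of this cross-subsidy (hypothetical numbers, not taken from the report): a pool of annuitants who each face survival probability p can pay its survivors the pooled gross return (1 + r)/p, which beats the ordinary savings return.

```python
# Stylised one-period illustration of the mortality cross-subsidy
# (hypothetical numbers, not taken from the report).
r = 0.04   # return on a standard savings product
p = 0.95   # probability the annuitant survives the period

# A fair annuity pools everyone's funds and pays only survivors, so each
# survivor receives the pooled gross return (1 + r) spread over the
# surviving fraction p of the pool.
annuity_return = (1 + r) / p - 1

print(f"savings return:            {r:.2%}")              # 4.00%
print(f"survivor's annuity return: {annuity_return:.2%}") # ~9.47%
```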


Nearly all annuities purchased in the UK – probably about 95 per cent – are paid for with a single premium (this follows automatically in the compulsory-purchase market) and most of these provide an income which is constant in nominal terms (‘level’ annuities), although other annuity products exist, for example, annuities which provide an income constant in real terms (‘index-linked’ annuities).


One concern that annuitants might have is the possibility of dying very soon after purchasing the annuity. This would mean, ex post, that neither the annuitant nor the annuitant’s estate received much benefit from the transaction. It is an inherent feature of all insurance products that if the insured event does not occur, the insured person loses the premium; however, this is often thought to be particularly problematic in the context of annuities. Partly to allow for this, it is possible to have an annuity with a guarantee period (of up to ten years), in which case the income payments for the guarantee period are paid regardless of whether the annuitant is alive or not – if the annuitant dies, the payments are made to their estate. Under new legislation an alternative form of annuity, called ‘value-protected’, will be available from 2006.

In this annuity product, the difference between the initial premium paid and the cumulated payments made to the annuitant (assuming this difference is positive) will be paid to the estate if the annuitant dies early, though the value-protection element expires at age 75 under current legislation. All of the annuities discussed so far involve the payment of a premium followed by the receipt of an income that starts immediately. ‘Deferred’ annuities involve payment of a premium followed by an income stream that starts only at some point in the future. Such annuities are virtually never purchased by individuals: instead, they are purchased in bulk by firms as part of the process of closing an occupational pension scheme.
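A minimal sketch of the value-protection payout just described (illustrative figures only; actual contract terms vary):

```python
# Sketch of the value-protection payout described above: the estate
# receives the premium less the cumulated income payments, if positive.
# Figures are illustrative; real contracts differ in detail.
def value_protection_payout(premium: float,
                            monthly_income: float,
                            months_survived: int) -> float:
    cumulated = monthly_income * months_survived
    return max(0.0, premium - cumulated)

# An annuitant who paid 100,000 for 500 a month and died after 3 years:
print(value_protection_payout(100_000, 500, 36))  # 82,000 to the estate
```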
The price of annuities depends upon a variety of factors, which can be summarized as follows (a stylised pricing sketch follows the list):

• prevailing interest rates at the time the annuity is purchased;
• information available to the life insurer about the life expectancy of the annuitant;
• the size of the premium paid for the annuity, which may also be related to life expectancy, as wealthier individuals tend to live longer;
• the type of annuity purchased;
• the mark-up paid to the life insurer to cover its costs and profits.
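A stylised sketch of how these factors combine (an illustration, not any insurer’s actual formula): the single premium is roughly the expected discounted value of the income payments, scaled up by the insurer’s mark-up.

```python
# Stylised single-premium, level-annuity pricing sketch:
#   premium = markup * payment * sum_t s_t / (1 + r)**t
# where s_t is the insurer's estimate of the probability that the
# annuitant is still alive in year t (reflecting the information it
# holds about life expectancy) and r is the prevailing interest rate.
def annuity_premium(payment, survival_probs, r, markup=1.05):
    epv = sum(s / (1 + r) ** t for t, s in enumerate(survival_probs, start=1))
    return markup * payment * epv

# Hypothetical 30-year survival curve for a new annuitant:
survival = [0.99 ** t for t in range(1, 31)]
print(round(annuity_premium(5_000, survival, r=0.045), 2))
```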

It is predicted that the demand for annuity products will increase due to the demographics of an ageing population, and because of the continuing shift of workers from Defined Benefit (DB) to Defined Contribution (DC) pension schemes. The supply situation is more complicated, because current annuity products are based on bonds and the state of the bond market is determined, to a large extent, by the UK government’s policies on the size and management of the national debt.

A life annuity converts a stock of wealth at retirement into a flow of income that is payable to the annuitant until death. An annuitant pays a premium to a life-assurance company, which then undertakes to pay an agreed income to the annuitant, usually on a monthly basis. Because the life annuity is paid until the annuitant dies, it insures them against longevity risk; in other words, it insures them against running out of savings to support consumption expenditure in old age. As with all insurance products, the effect is to redistribute between individuals: those who are unlucky – paradoxically in this case it is unlucky to live too long – are subsidised by those who are lucky (i.e. those who live for a relatively short period).

Annuities are supplied in the UK by life-assurance companies, which match their annuity liabilities with bonds or similar assets. The reason is that annuities typically pay a constant stream of income and, setting aside mortality considerations, an annuity product is very similar to a bond product; it is also possible to have annuity-type products which are more similar to equity, but this market is underdeveloped. Given the types of bonds currently purchased, life assurers can be seen as producers who take bonds as an input and produce annuities as an output.

A significant determinant of annuity rates is the economy-wide interest rate, in particular the bond market. Since rates of return on bonds are currently low, it follows that annuity rates are also low. Of course, low bond yields are the result of a variety of factors, including overall government borrowing, monetary policy, international rates of return and the low-inflation environment since the mid-nineties. So it is possible – at least in principle – that the government could influence annuity rates through either monetary or fiscal policy. In practice, however, these policy instruments are used to meet other objectives, and monetary policy is undertaken by the Bank of England. Furthermore, the long-run effects of government policy on both the level and the shape of the term structure of interest rates – especially if we consider real rather than nominal interest rates – are not well understood by economists.

The existence of annuities can be traced back to Roman times, and a table of annuity rates calculated by Domitius Ulpianus from about 230 AD was used as late as the early modern period in Europe (Haberman and Sillett, 1995; Dyson, 1969). Annuities were used throughout the Middle Ages and became popular with governments in the 17th Century as a method of raising money. The bases of modern actuarial science were developed during the eighteenth and nineteenth centuries alongside advances in probability theory and the increasing availability of empirical mortality tables (Poterba, 2004). Because annuity products are illiquid, the UK government stopped using annuities as a primary means of finance from the 1690s onwards, converting the national debt into equity and bond instruments between 1694 (with the foundation of the Bank of England) and 1753 (with the consolidation of government bonds into a uniform issue of perpetual bonds – consols). Annuities were then increasingly issued by private companies, such as the Equitable Life Assurance Society (founded in 1762), and from then into the 19th Century there was continuous growth of life-assurance companies and societies (although predominantly concerned with life assurance rather than annuity business).

The government continued to sell small quantities of annuities and life insurance as a means of financing the national debt, but it increasingly realised the possible benefits of annuities, especially deferred annuities purchased with multiple premiums, in assisting the elderly poor, and sought to encourage sales by allowing them through friendly societies (1819) or savings banks (1833) (Wilson and McKay, 1941). Gladstone introduced legislation in 1864 to sell annuities and life insurance through the post office, primarily due to the financial weakness of the savings banks (Morley, 1903). Additional stimuli for the legislation were elements of empire building within the post office and paternalism towards the poor (Perry, 1992).

The provision of government annuities through the post office meant that the government was engaged more directly in competition with both private life assurers and friendly societies. These competitors were politically powerful enough to ensure that minimum and maximum limits were placed on life insurance sales to restrict effective competition, though the restrictions on annuity purchases were less important. However, sales of immediate annuities from 1865 to 1884 numbered only 13,897, and sales of deferred annuities for the same period were even fewer, totalling 1,043 (Perry, 1992). Even after the removal of the restrictions in 1882, sales remained poor: by 1907 the total number of insurance policies in force at the post office was 13,269, compared with 2,397,915 at life-assurance companies (Daunton, 1985). This was despite government insurance being sold at better prices and being virtually immune to default risk. With continuing low sales of both forms of insurance, and losses on government annuities, sales ceased in 1928.

The cessation of sales of government annuities may have had some effect on the private market (Norwich Union started selling annuities again in 1928), but by this time two further considerations had also reduced demand for annuities. Many workers were now covered by either occupational pension schemes or state pensions. Among the more affluent middle classes, demand would have been reduced by the tax treatment of annuities: the entire annuity payment was treated as income and taxed accordingly, despite the fact that some of the payment was implicitly a return of capital.

The 1956 changes introduced a new compulsory-purchase annuities market for those who had built up a personal pension fund, distinct from the existing voluntary annuities market. As noted in Finkelstein and Poterba (2002), these are likely to be quite different markets: only individuals who expect to live for a long time are likely to purchase a voluntary annuity, whereas compulsory annuities are purchased as part of the terms of the pension contract. Typical voluntary annuitants are female and relatively old (over 70), whereas typical compulsory annuitants are male and recently retired (about 65).

Hannah (1986) explains the evolution of the tax-free lump sum of 25 per cent of the pension fund. ‘The chapter of accidents which led in absurd progression to this situation, [the tax-free lump sum] which was initially desired by no one, began in the early years of [the 20th] century’ (Hannah, 1986, p. 115). He notes that at the turn of the last century, occupational pensions varied widely in whether they paid a pension as a lump sum or as an annuity. There were arguments that a lump sum would ease the progression from working to retirement, but against this was the concern that a lump sum would be frittered away. The Ridley Commission on the Civil Service said in 1888: ‘The payment...of a lump sum is open to the obvious objection that in the event of improvidence or misfortune in the use of it, the retired public servant may be reduced to circumstances which might lead to his being an applicant for public or private charity’. Under the 1921 Act, the Inland Revenue directed that tax-exempt occupational funds were not allowed to pay lump sums, though these could be paid by the pension scheme out of non-tax-exempt funds.

In 1971 one-third of private sector schemes paid a lump sum as part of the pension entitlement. This proportion had risen to more than 90 per cent by 1979. Immediately after its introduction in 1956 the compulsory-purchase annuity market had zero sales, since it would have been the young working cohort of the late 1950s who started saving through a personal pension, and it is unlikely that this cohort would have annuitised immediately. By the 1990s this compulsory annuity market was ten times larger than the voluntary annuities market, and it will continue to grow as the percentage of the population with personal pensions grows.

Source:
- Edmund Cannon and Ian Tonks, Survey of annuity pricing, Department for Work and Pensions, Research Report No 318
- Association of British Insurers (2004) The Pension Annuity Market: Developing a Middle Market (ABI, London).
- Johnson, P. (1985) Saving and Spending: The Working-class Economy in Britain 1870-1939 (Oxford University Press).
- World Bank (1994) Averting the Old Age Crisis (Oxford University Press).

Performance Assessment

One of the responsibilities of government is to deliver high-quality, relevant services that meet the needs of citizens, communities, businesses, and other organizations. To do so, national governments have started to modernise their service offerings by introducing alternative systems of delivery. This move has been accompanied by the development of sophisticated schemes to monitor and oversee how well services are delivered. Given that in most industrialised economies public sector current expenditure represents between 35 and 50 per cent of GDP (38.5 per cent in the case of the UK), and that prosperous states in particular have come under increasing fiscal pressure to cut their spending, the need for policy makers to be able to evaluate what government gets for its money is clear.

Deprivation may affect authority performance in many ways. Some service functions may be put under particular strain if large sections of the population suffer from low income, unemployment, poor health, or low educational attainment levels. Similarly, in areas with deprivation and large black economies, local tax collection tends to fall sharply as citizens are keen to conceal their existence from their council, which ultimately feeds through to the relevant CPA indicators measuring tax-collection performance.

Oversight has a long tradition in many countries and refers to scrutiny and steering from some point ‘above’ or ‘outside’ the individuals or organisations scrutinised. Traditionally, it has been effectuated by law courts or elected legislatures, but increasing use has been made of (‘quasi-independent’) reviewers, watchdogs, inspectors, regulators, auditors or monitors that are to some degree detached from executive government and line management.

The 1980s and 1990s have witnessed an increase in oversight and audit activity by governments that has led some authors to herald a ‘new age of inspection’ (Day and Klein 1990) and the advent of the ‘audit society’ (Power 1997).

The explosion of oversight has been far from uniform, but the responsibilities and resources allocated to overseers have grown. In the UK, for example, formal arm’s-length overseers doubled in size and real-terms resources during the 1980s and 1990s, at a time when the UK civil service was cut by more than 30 per cent and local government by about 20 per cent (Hood et al. 1999).

In terms of mechanisms, oversight has shifted its emphasis away from mere fiscal audits towards value-for-money and performance audits. Since 2002, the UK government has assessed the delivery of public services provided by English local authorities through a regime called the Comprehensive Performance Assessment (CPA). Performance in six service blocks (benefits, social care, environment, libraries and leisure, use of resources, education and housing) is monitored through inspections and audits in order to determine whether central government grant (currently £120bn per annum, i.e. almost a quarter of UK public expenditure) is money spent effectively.

Andrews et al. (2005) investigated the extent to which success or failure in service provision is attributable to circumstances that are beyond the control of local managers and politicians. The explanatory variables used were: quantity of service need; age diversity; ethnic diversity; social class diversity; discretionary resources; lone-parent households; population change; population; population density; and political disposition. The authors found that the ten constraint variables collectively explain around 35 per cent of the inter-authority differences in service performance and 28 per cent of the differences in the ‘ability to improve’ score. They concluded that these are “satisfactory levels of statistical explanation” (p. 650). More specifically, they found that higher ethnic and social class diversity appear to place additional burdens on service providers and thereby result in lower performance; that authorities with a high percentage of single-parent households, the authors’ proxy for deprivation, found it more difficult to climb the CPA ladder (p. 651); that large authorities found it easier to achieve good CPA results; and that no differences exist between the four types of authorities. The authors concluded that “the CPA process is flawed by the failure to take account of circumstances beyond the control of local policy makers”.

We agree with Andrews’s conclusion that there appears to be an impact of deprivation on authority performance that needs exploring, and we reject the Audit Commission’s initial claim that no significant correlation can be detected. We also concur with Palmer and Kenway’s conclusion that, in order to test this hypothesis in a statistical model, performance indicators are the wrong choice for the dependent variable because of their limited bearing on final CPA ratings. For the analysis we therefore selected, as explanatory variables, a number of variables that controlled for external influences (e.g. political, economic, social and environmental).

CPA scores are more meaningful because, like the IMD 2004, they show the amount of variation across authorities. The conversion into percentages (of the maximum score possible), in turn, is required because 32 of the 148 authorities (viz. the ‘shire’ counties, which have ‘shire districts’ below them) are assessed in only four of the six service blocks and therefore have lower minimum and maximum scores (an approach already employed by Andrews et al. 2005, p. 649).
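To illustrate the conversion (stylised scores; the real CPA scoring rules and block weightings are more involved):

```python
# Sketch: express a CPA score as a percentage of the maximum possible,
# so that authorities assessed on four service blocks are comparable
# with those assessed on six. Scores and the per-block maximum are
# stylised assumptions, not the actual CPA scoring rules.
def cpa_percentage(score: float, blocks_assessed: int,
                   max_per_block: float = 4.0) -> float:
    return 100 * score / (blocks_assessed * max_per_block)

print(cpa_percentage(18, blocks_assessed=6))  # 75.0, single-tier authority
print(cpa_percentage(12, blocks_assessed=4))  # 75.0, shire county
```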

Irrespective of the policy implications, we addressed the statistical issue by constructing a new deprivation domain for education that measures the education deprivation of adults only. The new domain is based on indicators measuring the proportion of those aged under 21 not entering Higher Education (1999-2002), the secondary school absence rate (2001-2002), and the proportion of young people not staying in school-level education above the age of 16 (2001). In order to test our choice, we later compared the statistical model using the modified domain with the model using the original domain and found that the resultant panel data estimates of all variables had the same sign, and the differences in their magnitude were so minor that both variables could be used interchangeably without affecting the resultant tables.
To conclude the explanation for our choice, and modification, of variables and data, the following list gives an overview of all explanatory and dependent variables used:
‐ Quantity of service need 2001 (measured in logarithms)
‐ Age diversity 2001 (log)
‐ Ethnic diversity 2001 (log)
‐ Social class diversity 2001 (log)
‐ Discretionary expenditure 2002, 2003, 2004 (log)
‐ Population size 2002, 2003, 2004 (log)
‐ Population Density 2002 (log)
‐ Overall Index of Multiple Deprivation scores 2004 (log)
‐ Education, Skills and Training Deprivation 2004 (log) or Adults Education Deprivation 2004 (log)
‐ Barriers to Housing and Services Deprivation 2004 (log)
‐ Crime Deprivation 2004 (log)
‐ Living Environment Deprivation 2004 (log)
‐ Income Deprivation 2004 (log)
‐ Type of Local Authority:
o County Councils,
o Inner London Boroughs,
o Metropolitan Districts,
o Outer London Boroughs, and
o Unitary Authorities.
‐ Political Control 2001, 2002, 2003:
o Labour,
o Conservative,
o Liberal,
o Independent, and
o No overall control

The comparative government literature shows that nations with proportional representation and (typically) coalition government have higher welfare spending, but worse fiscal discipline, than nations with plurality electoral rules and (typically) single-party government (Persson and Tabellini 2005, pp. 270-3).

During the elite interviews we conducted with auditors, auditees, and other stakeholders, repeated mention was made of the need to carry out comparative analyses between “like cases” of authorities. We followed this advice and observed interesting differences with regard to the effect of some of the explanatory variables, as evidenced by the data shown in table 2. For instance, quantity of service need has a negative effect on the CPA score in county councils, but a positive effect in the remaining types of authorities. Similarly, the effect of age diversity on CPA scores is positive when all authorities are grouped together, but negative in all types of authorities except inner London Boroughs.


The higher the level of deprivation, the lower the CPA performance score – a conclusion that contradicts, to differing extents, some earlier studies (Audit Commission 2003b; Palmer and Kenway 2004) but confirms others (Andrews 2004; Andrews et al. 2005).

This is not the end of the story, however, as this first high-level overview is only the beginning, and more detailed insights can be gained from a second stage, during which the different domains of deprivation are mapped onto the different types of authorities. This more extensive analysis is best approached with visual help.

The picture does not change much for the next set of deprivation domains to the right of the first, which shows that, if analysed separately, the 46 Unitary Authorities are relatively homogeneous and equally deprived across the seven domains. What is more, they do not deviate much from the national average.

However, divergences of some significance emerge within the third group, comprising the 34 County Councils. As the first column on the left of this group indicates, they are the least deprived authorities in the country when measured across all seven deprivation domains through the composite IMD 2004. Yet, when split up into the individual domains, upward deviations surface for income deprivation and for barriers to housing and services, the latter presumably due to the long distances that residents of rural areas must travel to reach the nearest shop, post office or hospital.

The metropolitan districts and, more strikingly, the inner and outer London boroughs display very drastic deviations between the individual deprivation domains. For instance, Inner London Boroughs are on average more deprived than the rest of the authorities in the deprivation domains of income and crime, whereas they are on average less deprived in barriers to housing and services (presumably for reasons to do with relatively good proximity in metropolitan areas) and adult education.

Deprivation in the domain of education has a negative effect on the overall CPA score and in all CPA service blocks, except for social care (children) and libraries and leisure; in these two service blocks, education has a positive but insignificant effect. Deprivation in the domain of barriers to housing and services has a significant negative effect in the CPA service blocks of social care (adults), environment, use of resources and benefits. Crime has a consistently negative effect on the overall CPA score and all its service blocks, except benefits, where crime has a positive but insignificant effect. Similarly, deprivation in the domain of living environment has a negative effect on overall CPA and all its service blocks, except social care (children and adults) and environment; only in these latter blocks does deprivation in living environment seem to have a positive and significant effect. Income deprivation has a negative and significant effect on CPA in the service blocks of education and social care (children), a positive and significant effect in the service block of housing, and a positive but insignificant effect on the rest of the service blocks.


Source:
Prof Iain McLean, The limits of performance assessments of public bodies: the case of deprivation as an environmental constraint on English local government, Public Services Programme, Nuffield College, University of Oxford, Nov 2006

Tuesday, January 30, 2007

Examining power politics

Examining power politics in provision of public services

A citizen-based approach to social work encourages user involvement in public policy through greater participation in political activity. It emerged in initiatives such as self-help, campaigning and community action, offering a different interpretation of social work and the provision of public services. This has brought about a change in the power politics of public service management, where service users’ demands and perceptions are the main elements in devising and prioritising the agenda. By encouraging people to engage in active citizenship and political participation, social care professionals become active change agents, while at the same time service users enjoy the shift of power.





Reflective logs of social work

The fieldwork placement is recognized as one of the major components of social work education and a major determinant of its quality. A key aspect of the learning process in the fieldwork placement is the exposition of practice encounters to the students’ critical reflection. Given the importance of the process of ‘reflection’ or ‘reflective learning’, a qualitative study based on the reflective logs of social work students was conducted to explore the meaning of social work field education and the learning experiences of social work students during their placement. The study findings revealed that disturbing events experienced by students in their fieldwork were a catalyst to their reflective process. Meanwhile, their undue concern with knowledge and skills application within a circumscribed knowledge frame suggests the dominant influence of scientism and competence-based practice in social work, in which learning outcomes and instrumental and technical reasoning are highly emphasized. Discovery of ‘self’ was also the major premise in the students’ reflection logs, in which a majority of them took their prevailing self-identity as a constant state to be verified in interaction with others in the fieldwork placement. Reflexivity is manifested in asking fundamental questions about assumptions generated by formal and practice theories; it addresses the multiple interrelations between power and knowledge, and acknowledges the inclusion of self in the process of knowledge creation in social work practice. Its realization in social work education requires the social work educators’ reflexive examination of the dynamics that influence the construction of curriculum, which in turn construct our prospective social workers.
Keywords: social work placement, reflection, reflexivity
Source: British Journal of Social Work

Environmental Alarm

Worldwide, economic losses due to extreme weather events increased ten-fold, from about $4 billion per year during the 1950s to about $40 billion per year during the 1990s (IPCC 2001b).

Monday, January 29, 2007

Environmental Challenges

Within decades, technological progress, funded by growth, will break the relationship between GDP and carbon emissions. Our approach to India and China, and other emerging economies, must be more savvy than trying to beat them into an international agreement that is not in their interests. Government should create prize funds to support the development of new green technologies and tariffs on green technologies should be scrapped.

The importance of dealing with environmental issues is widely recognized, as is the importance of not taking a negative view, in order to deal positively with both the environment and economic growth. The positive trends of the past century should be recognized. People no longer worry about the ozone layer, acid rain or the cleanliness of Britain’s rivers. Londoners don’t complain about smog; indeed, air quality in London is the cleanest since records began in 1581. These gains have been achieved not by curbing living standards but hand in hand with rising affluence. All the evidence shows that after the early stages of development, environmental trends improve because people are wealthy enough to pay for the improvements. Instead of fearing economic growth, policymakers should see it as a force for good. Within decades, technological progress, funded by growth, will break the relationship between GDP and carbon emissions.

Moreover, an approach to climate change that emphasises technological progress hand in hand with growth offers the best way to tackle the issue of the developing economies. Our approach to India and China, and other emerging economies, must be more savvy than trying to beat them into an international agreement that is not in their interests. The British public has made clear that the needs of the world’s poorest must be taken into account.

The principles upon which science is built are doubt and constant inquiry. In the 20th century, this view of science was most clearly expressed by the philosopher Sir Karl Popper, who once put the issue thus: “Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve”. To Popper, science was an ever-evolving process of discovery and refinement.

Kyoto was ‘the signal that governments and industry have been waiting for. There is now a price on climate pollution and penalties for polluters. The switch to a low carbon economy begins here.’

However, it proved to be disappointing. Even with extra measures, Spain is projected to exceed its 1990 emissions by 51.3% in 2010, compared with an allowed increase under Kyoto of 15%. Ireland is projected to reach 30% above 1990 levels by 2010, against an allowance of 13%, and Portugal 42.7% higher, against an allowance of 27%. In fact, according to the Institute for Public Policy Research (IPPR), the UK is almost alone in Europe in honouring its Kyoto pledges to cut greenhouse gases, accompanied only by Sweden. We have only been able to meet our targets due to the replacement of coal-fired power stations by cleaner gas ones. This shift was a natural, market-led one – the energy industry did it simply to cut costs.

In the cases of India and China, both of which have ratified the protocol, neither is required to reduce carbon emissions under the present agreement because they are classified as non-Annex I developing countries. However, China and India will soon be the top contributors to greenhouse gases. Also, there is evidence that without Kyoto restrictions on these countries, industries in developed countries will be driven towards their non-restricted economies, resulting in no net reduction in carbon emissions.

Yet how can the international community hope to convince these countries to commit to agreements that will radically retard their economic growth? Indeed, would it even be morally right for developed countries to ask this of China or India, when the resultant decline in economic growth would condemn millions of their population to further years of ongoing poverty and deprivation? On the face of it Kyoto can be painted as a success. As of October 2006, a total of 166 countries and other governmental entities had ratified the agreement, representing over 61.6% of emissions from Annex I countries. Yet, as we have seen, most of these countries do not stand to meet their targets. Moreover, even if Kyoto were successful, its environmental benefits would be marginal.

The science surrounding climate change remains uncertain, and such forecasts are difficult to make with any degree of accuracy. Nevertheless, it is clear that international agreements are not the solution we seek. They are both unworkable and, realistically, inadequate. A new approach must be found. Historically, innovation has been our saviour, but today there is a widespread notion, felt rather than understood, that technological solutions can only ever lead to greater problems. This is an unfortunate misunderstanding. Technology, in essence, is simply applied problem solving, and such pessimistic objections must, in all fairness, apply equally to political, social and economic solutions as well.

Source:
Tom Clougherty (ed.), Positive Environmentalism
Selling Hot Air, The Economist, Sep. 7th 2006
http://www.economist.com/surveys/displaystory.cfm?story_id=E1_SRVPDNN
http://www.greenpeace.org.uk/contentlookup.cfm?CFID=6263881&CFTOKEN=78823873&ucidparam=20051212090405 [Last accessed 23/11/06]
http://www.nationalcenter.org/KyotoSenate.html [Last accessed 23/11/06]


Subject: Creating Smart Hubs in Rural Post Offices (agriculture + internet + environment + post offices)





Dear Gareth,



I was wondering whether this idea would be feasible to include in the FP7 ITN research programme or in land-contamination remediation initiatives.



Creating SMART HUBS means bringing together different players for the purpose of the sustainable growth of local economies and attracting city dwellers to invest in the environment. There is potential for development in combining the use of post offices in rural areas and turning them into satellite internet hubs + smart agriculture centres + environmental monitoring focal points. This is a functional idea that makes use of our assets (rather than closing down rural post offices) and combines integrated communication facilities and data collection (internet hubs), sustainable economic growth through smarter local farming and better yields (smart agriculture), and waste management and environmental policy making (focal points).


Research proposal

Environmental democracy, the core subject of my studies, builds on our identities in relation to other diverse bodies in nature. The environmental crisis is above all a problem of knowledge, one that enriches our democratic approach. It leads to rethinking identities in search of connecting with, managing and improving our complex relation with the environment.

One dimension that has been much neglected in the past emanates from the impression that environmental resources are costless and infinite, and therefore prone to over-exploitation. Environmental rationality seeks to re-establish the links between knowing, certainty and purpose in our surroundings and in the way we get on with them.

There is more to the environment than has previously been assumed. The environment dominates and controls the evolution of life forms through processes of interaction which carry vital values. The power of the environment dominates that of hereditary constitution. Given the environment, the organism only makes of itself what in reality it receives.

We are defining new roles, concepts, values and spaces. The consequences of ignoring environmental signals and delaying environmental democracy have proved to be much greater than expected. For example, global warming and water scarcity have become major motivating factors for countries in central Asia to escalate conflicts over access to water. Studies have found that we survive today by borrowing from the future.

A pressing environmental rationality implies the reconstitution of identities beyond instrumental modern thinking, calculating and planning. The solution to the global environmental crisis is based upon revising mindsets, perceptions and values, which in turn brings institutional change.

Environmental democracy modifies the logic of the scientific control of the world, of the technological domination of nature, and of the rational administration of the environment, and in so doing develops a culture of adaptation to nature. Over-reliance on the science that helped liberate man from underdevelopment and oppression has generated a one-dimensional, alienated society. We ought to return to nature and cherish its values in a sustainable manner.

My studies focus on the environment as an ‘identity issue’ revolving around environmental democracy, particularly access to water and its state of affairs. For this I need to study a variety of reliable information enabling me to compare behaviour and decision making, and to examine the barriers and risks involved in different social contexts, in our democratic response to global warming, over-population and resource depletion.

Key words: Environment, democracy, identity, values, resource depletion

Saturday, January 27, 2007

Environment as Political Priority

It is fascinating to watch a shoal of inshore fish in tropical waters. They all face one way, then in a flash they all point in another direction. They move with smooth synchronicity. No doubt slow-motion cameras could capture which one moved first, but it's all so swift and sudden that I doubt if even they know. They remind me of traders. One moment they're all chasing energy shares, then suddenly they're all going for mining companies. Whoosh again and it's commodities; just like the fish.

Business obsessions are also like that, thinks Graham Searjeant, financial editor of The Times. He writes that protecting the environment is the top political priority for business leaders. Speculators have dumped mineral oil in favour of vegetable-sourced energy, with maize prices doubling in a year on the back of its role in ethanol.

Looking back at the top priorities of earlier years, recorded by the UPS Europe Business Monitor, we see that computers were on everyone's mind, and the hype spawned the dot-com bubble. Then, following the terrorist attacks of 2001, the war against terror and the need for security were uppermost. After that, in the wake of the Enron and Worldcom scandals, it was corporate governance and business malpractice, with an over-reaction that has seen New York's financial pre-eminence slip back under the weight of over-regulation. Like the fish, their habit is to point the same way, then change.

The obsessions can play a role in overcoming inertia and getting a trend going, says Searjeant, but they should not be mistaken for sustainable market action.

In economic terms, going green is more akin to the war on terror than the online revolution. There are more minuses than pluses and it will be a long while before cuts in carbon emissions are meshed fully with free market forces. If cutting carbon emissions is really the top priority, for instance, other governments will follow France’s big switch to nuclear power. Biofuels for transport will be as prominent in Europe as in Brazil. But complex and sustained interference with current free market pricing will be needed to achieve this without economically crippling costs.

Rather than "complex and sustained interference," governments might find it more attractive to offer financial incentives for the development and adoption of cleaner technologies, and concentrate on speeding their movement to the market mainstream. It's possible to have cars that don't pollute or burn fossil fuels. It might be possible, though certainly harder, to make relatively green aircraft. The incentives have to be there to encourage people to commit to long-term investment in such things. Long before then, of course, the fish will have changed direction again and be facing a new priority.

Source: www.adamsmith.org

Giving People Opportunity to Choose - Public Services

The historical context for the current debate on public service reform goes back at least as far as the creation of the modern welfare state in the years after 1945. This saw the establishment of a broad and long-lasting consensus that whole areas of activity, previously in the private sector, should now be regulated or directly owned by the state in the public interest to secure both efficiency and equity. Variations to the post-war model of provision of public services have tended to involve either target-setting, benchmarking and performance-related pay; or competitive tendering and external contracting for defined, often stand-alone, services ranging from cleaning to IT. Both of these approaches are now well established as part of public service reform. The Committee has in the past few years examined many aspects of this reform, including the culture of performance targets and league tables.

The third variant, less developed but increasingly important in the debate on the public services, is choice, often defined as giving individuals the opportunity to choose from among alternative suppliers, whether or not entirely within the public sector. The choice that often matters most to those who are more reliant on the provision of good local services is the ability to make decisions which have a direct and immediate impact on the quality of their lives. It is clear that if choice is to succeed it will have implications for the Government’s wider objective of containing costs and increasing the efficiency of the public sector. For choice to be effective we found it was necessary to ensure additional capacity in the appropriate places. This not only comes at a cost, but expanding a successful school or closing a hospital cannot be an immediate, or even a practical, response to user choice. The complementary point was made that choice without “voice” is much less effective. As David Miliband MP, when Schools Minister, said “choice and voice are strengthened by the presence of each other: the threat of exit makes companies and parties listen; the ability to make your voice heard provides a tool to the consumer who does not want to change shops, or political parties, every time they are unhappy”.

Established methods of recording user satisfaction, handling complaints and offering redress are far from satisfactory. More care and more imaginative consideration need to be given to making such ‘voice’ mechanisms more effective. We therefore propose the development of a measurable and comprehensible Public Satisfaction Index. It is concluded that if users of the public services have the right to choose they should also have the right to expect a guaranteed minimum level of service. A choice between several poor schools or hospitals is no real choice at all. We commend the idea of ‘Public Service Guarantees’ which build on service standards schemes, such as the Charter Mark, which already exist. Public Service Guarantees would articulate the expectation of good customer service and provide the means to ensure that they are met by providers.

Three services where the debate on choice is especially lively were examined —health, secondary education and social housing—but it is believed that much of the analysis can also be applied more widely.

Source: Choice, Voice and Public Services, House of Commons Public Administration Select Committee, Fourth Report, 2004/5

Sewer blockages

Recipe for winter bird cakes

Please don't pour your cooking fat down the plughole - fat and grease cause over 60% of the 60,000 sewer blockages in our region every year.

Instead, why not make a tasty cake for birds?

These make excellent winter food. You will need:
Good quality bird seed; raisins; peanuts; grated cheese; leftover cooking fat; yoghurt pots; string; mixing bowl; scissors

1. Carefully make a small hole in the bottom of a yoghurt pot. Thread string through the hole and tie a knot on the inside. Leave enough string so that you can tie the pot to a tree or your bird table.
2. Allow the leftover fat to solidify at room temperature, then cut it up into small pieces and put it in the mixing bowl.
3. Add the other ingredients to the bowl and mix them together with your finger tips. Keep adding the seed/raisin/cheese mixture and squidging it until the fat holds it all together.
4. Fill your yoghurt pots with bird cake mixture and put them in the fridge to set for an hour or so.
5. Hang your bird cakes from trees or your bird table. Watch for greenfinches, tits and possibly even great spotted woodpeckers.

Source: http://www.thames-water.com/UK

Longitudinal Survey Method

Barriers in Use of Longitudinal Survey

The ESRC examined the most common barriers that prevent social scientists from making more frequent use of longitudinal survey data resources. It found the most important challenges to be:
• Lack of appropriate software skills and good habits in software programming. In response, the LDA online materials include a number of introductory resources to working with relevant software through textual ('syntax') programming. Most resources refer to the packages SPSS and Stata.
• Lack of confidence in undertaking data management tasks in the handling of complex combinations of longitudinal data files and variables. Our resources are heavily oriented to training in the data management tasks common to longitudinal survey resources, such as merging data between different files and summarising 'key' variables in a longitudinal context (see the sketch after this list).
• Lack of appreciation of the qualities of appropriate longitudinal survey data resources. Our materials use illustrative analyses of secondary survey resources, and feature links to numerous information resources on relevant survey data.
• Lack of confidence in statistical techniques for the analysis of complex survey data. Our materials feature general training in issues of working with complex survey data as well as links to further training resources ('complex' survey data is not necessarily longitudinal, but longitudinal survey data is usually complex).
• Lack of a balanced array of skills in the statistical techniques of quantitative longitudinal data analysis. Our materials attempt to demonstrate a wide range of methods for the analysis of longitudinal data. We also try to point readers to other resources which can further develop such skills.
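The LDA materials themselves use SPSS and Stata; purely as an illustration of the kind of data-management task described above, here is a hypothetical wave-on-wave merge in Python:

```python
# Hypothetical illustration of a core longitudinal data-management task:
# merging two survey waves on a person identifier and deriving a change
# variable. (The LDA materials use SPSS/Stata; pandas is shown only as
# a sketch with made-up file contents.)
import pandas as pd

wave1 = pd.DataFrame({"pid": [1, 2, 3], "income_w1": [21000, 18500, 30000]})
wave2 = pd.DataFrame({"pid": [1, 3, 4], "income_w2": [22000, 31000, 17000]})

# Keep only people observed in both waves, then summarise change over time.
panel = wave1.merge(wave2, on="pid", how="inner")
panel["income_change"] = panel["income_w2"] - panel["income_w1"]
print(panel)
```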

Source: www.longitudinal.stir.ac.uk/materials_summary.html




Study of Survey Methods
A number of studies have outlined the relative advantages and disadvantages of online research. The key advantages nearly always quoted first are greater speed and lower cost. In a number of circumstances these will be significant – particularly for multinational research and research with specialist audiences. The cost advantages are also general, although the cost of building and maintaining a panel is quite substantial at the outset. The advantages can be considerable, as it is possible to accumulate very large volumes of interviews in a short space of time. Having said this, a minimum fieldwork period is often recommended for online surveys to ensure good coverage.

Another suggested advantage is that online surveys do not require interviewers to be present, so interviewer effects are avoided. A prominent example given is the higher admission of undesirable behaviour in online surveys than in interviewer-administered surveys (Comley 2003). Within political polls, the anonymity afforded by internet-based approaches is particularly highlighted as a way around the problem of the ‘spiral of silence’. The increasing individualism and selectiveness of potential respondents, as well as their use of new technology such as voicemail and caller ID to avoid telephone surveys, also work in favour of online research. Online surveys get around these problems by fitting in with respondents’ lives: people can fill them in at their convenience, and can partially complete and return them whenever they like. It is argued that this may help explain the more ‘socially liberal’ attitudes seen in many online surveys, as respondents on average tend to lead less home-based lives and so are less cautious (Kellner 2003b). It is suggested that online interviewing reaches ‘busy people – often educated and well-off – who systematically repel or ignore cold callers but are willing to answer questions posted on their computer screen’ (Kellner 2004).

As for reliability, it is known that online respondents use scales differently from respondents in other modes. There is conflicting research on this: some studies show that online respondents are more likely to choose midpoints in scales and ‘don’t know’ options in general, while others find, on the contrary, that they favour extreme options. It is possible to correct for this to an extent through modelling. Unlike face-to-face surveys, which can be sampled from reasonably comprehensive databases, online surveys are most often conducted among respondents from a panel who have agreed to be contacted for market research. No simple database of everyone who is online exists, and it looks unlikely to exist for the foreseeable future. Furthermore, even if there were such a list, prohibitions against ‘spamming’ online users would prevent it from being used as a sampling frame.

There are therefore three main issues relating to coverage bias or selection error that are raised with the sampling approach of online panels: first, of course, they can reach only those who are online; second, they can reach only those who agree to become part of the panel; and, third, not all those who are invited respond (Terhanian 2003). What makes online surveys different from other survey approaches, such as telephone in the USA and face to face in the UK, is that such a large proportion of the population is excluded before the survey begins, and that these people are known to be different from those who are included. Although internet access in the UK covers around six in ten of the adult population and is rising, the demographic profile of internet users is not representative of the UK adult population as a whole, tending towards younger age groups. Those who choose to sign up for online panels may also have a younger, more male profile (Terhanian 2005).

It has been observed that online data tend to paint a more active picture of the population: online survey respondents tend to be more politically active, more likely to be earlier adopters of technology, and tend to travel and eat out more than face-to face survey respondents.

The technique behind propensity score weighting – propensity score matching (Rosenbaum & Rubin 1984) – has been used since the early 1980s, most commonly in evaluations of social policy, to ensure that experiment and control groups have similar characteristics (where random assignment is not possible).

The propensity score matching process is as follows.
• Parallel online and telephone or face-to-face surveys are conducted where the same questions are asked at the same time using different modes (an online survey and a telephone or face-to-face survey).
• Logistic regression is then employed to develop a statistical model that estimates the probability that each respondent, conditional on his or her characteristics, participated in the telephone or face-to-face study rather than the online one. The probability, or ‘estimated propensity score’, is based on answers to several socio-demographic, behavioural, opinion and attitudinal questions.
• Next, in the ‘propensity score adjustment’ step, respondents are grouped by propensity score within the survey group (telephone/face-to-face or online) they represent.
Statistical theory (Rosenbaum & Rubin 1984) shows us that when the propensity score groupings are developed methodically, the distribution of characteristics within each internet grouping will be asymptotically the same as the distribution of characteristics within each corresponding telephone or face-to-face grouping.
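A minimal sketch of the estimation and grouping steps (hypothetical data and variable names; the published studies used a richer set of socio-demographic, behavioural and attitudinal items):

```python
# Sketch of propensity score estimation and grouping. 'mode' is 1 for
# face-to-face respondents and 0 for online ones; the X columns stand in
# for the socio-demographic/attitudinal questions (all hypothetical).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mode": rng.integers(0, 2, 1000),
    "age": rng.integers(18, 80, 1000),
    "online_shopper": rng.integers(0, 2, 1000),
    "risk_averse": rng.integers(1, 6, 1000),
})

X, y = df[["age", "online_shopper", "risk_averse"]], df["mode"]
model = LogisticRegression().fit(X, y)
df["pscore"] = model.predict_proba(X)[:, 1]  # P(face-to-face | traits)

# Propensity score adjustment step: group respondents into score
# quintiles; within each cell the online cases can then be weighted
# towards the face-to-face distribution.
df["cell"] = pd.qcut(df["pscore"], 5, labels=False, duplicates="drop")
print(df.groupby(["cell", "mode"]).size().unstack())
```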

One of the first major UK studies comparing online and face-to-face data (as opposed to online and telephone research) was conducted as parallel surveys comparing an online panel survey (Harris Interactive) with a face-to-face CAPI omnibus survey (MORI). Five ‘propensity score’ questions asked on each survey covered issues such as online purchasing behaviour, views on the amount of information respondents receive, and personal attitudes towards risk, social pressure and rules.

Question wordings on both surveys were kept as similar as possible, but some adaptations were required to reflect the different interviewing methods. Show cards were used in the face-to-face survey for all questions except for those with a simple ‘yes/no’ or numerical response, and the order of response scales and statements was rotated in both surveys.

The objective of the study is to establish whether data from an online panel survey can be successfully matched to data from a nationally representative face-to-face survey. Specifically, the study aims to make comparisons at a number of levels.

Once the surveys had been completed, both sets of data were weighted to the correct demographic profile (UK adults aged 15+). In the case of the omnibus survey this involved applying simple rim weights on region, social class, car ownership, and age and work status within gender. For the online survey the demographic weights that were applied were age within gender, ITV region, education level, income level and internet usage (ranging from high to low, measured in number of hours per week).
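Rim weighting of the kind mentioned here is iterative proportional fitting (raking): the weights are scaled to each target margin in turn until all margins match. A minimal two-margin sketch with hypothetical targets:

```python
# Minimal rim-weighting (raking) sketch: iteratively scale weights so
# the weighted sample matches each population margin. Data and target
# margins are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["m", "m", "f", "f", "f", "m"],
    "region": ["north", "south", "north", "south", "south", "north"],
})
df["w"] = 1.0

targets = {
    "gender": {"m": 0.49, "f": 0.51},
    "region": {"north": 0.40, "south": 0.60},
}

for _ in range(50):  # iterate until the margins converge
    for var, target in targets.items():
        current = df.groupby(var)["w"].sum() / df["w"].sum()
        df["w"] *= df[var].map(lambda v: target[v] / current[v])

print(df.groupby("gender")["w"].sum() / df["w"].sum())  # ~0.49 / 0.51
```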

Several questions were placed on both surveys, with the target questions covering voting intention, socio-political activism, knowledge of/attitudes towards cholesterol, views of immigration and access to technology. These questions were selected to provide a relatively stern test of how close an online survey can get to a face-to-face survey, given that there are likely to be significant mode effects (particularly interviewer effects) and a noticeable impact from any attitudinal bias in the online sample.

The first question area looked at was voting intention. Comparison of unweighted face-to-face and online data shows us what previous studies of online research methodologies have suggested: online respondents are more likely to say they would vote Liberal Democrat or Conservative than their face-to-face counterparts. This is likely to be because of two competing effects seen throughout the study.

It has been hypothesised and shown to some degree that online panels tend to achieve samples that are more educated and active. The application of demographic weighting to both sets of data does serve to close the gap between online and face-to-face results. While the face-to-face weighting has had very little effect on data (increasing Conservative and Liberal Democrat support by just one percentage point), the propensity score weighting has had a significant impact on online data (for example, increasing Labour support by eight percentage points) (see Table 1).
It should also be noted that the design effect of propensity score weighting has nearly halved the effective sample size of the fully weighted online data, whereas demographic weighting has very little effect on the face-to-face effective sample size. However, as the original online sample was very large, comparisons are still relatively robust (a difference greater than +/– three percentage points would be significant).
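The effective sample size under weighting is conventionally approximated by Kish's formula, n_eff = (sum of weights)² / (sum of squared weights); a small sketch of how unequal weights shrink it:

```python
# Kish's approximation to the effective sample size under weighting:
#   n_eff = (sum of weights)^2 / (sum of squared weights)
def effective_sample_size(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

equal = [1.0] * 1000
uneven = [0.3] * 500 + [1.7] * 500  # hypothetical propensity-style weights

print(effective_sample_size(equal))   # 1000.0 - no design effect
print(effective_sample_size(uneven))  # ~671   - heavy weighting shrinks n_eff
```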

Attitudes towards immigration have been surveyed by MORI a number of times, and findings have varied greatly by education, social class and general world-view. Further, these questions cover sensitive issues and are likely to be susceptible to eliciting socially desirable responses, particularly when an interviewer is present. These questions were therefore interesting to repeat in the online vs face-to-face experiment, as large differences could be expected. Weighting does not have much effect on either online or face-to-face survey data, and the key finding from these questions is that online survey respondents seem much more inclined to select the neutral point (‘neither agree nor disagree’) than face-to-face respondents. It could therefore be argued that the face-to-face results artificially emphasise opinions, when actually there are few strongly held views on these sensitive, complex issues.

The results on understanding of issues surrounding cholesterol (Tables 10–15) appear to confirm that online respondents are generally better informed than face-to-face samples, with a significantly higher number of online respondents correctly saying that cholesterol is ‘a type of fat that circulates in the bloodstream’. The rating of the seriousness of cholesterol as a health risk clearly illustrates the pattern seen in other studies, where online respondents are less likely to choose extreme options.

Conclusion
We have put forward some theories as to why data from online and face-to-face surveys might differ; however, we need to understand more about why weighting by both demographics and attitudes has varying degrees of success. There seem to be two main competing effects at play when comparing online and face-to-face methodologies. Online research using panel approaches appears to attract a more knowledgeable, viewpoint-orientated sample than face-to-face surveys. This could be because it is a prior characteristic of those with access to the internet or of those who join online panels, or it could be a learned behaviour from taking part in a number of surveys. However, face-to-face respondents are more susceptible to social-desirability bias due to the presence of an interviewer. Sometimes these effects appear to balance, bringing the outcomes from the two methodologies together, but sometimes they don't. Voting intention is an example of a question area that has been successfully matched online, suggesting that, for some areas of study, well-designed internet-based surveys with appropriate weighting strategies can produce similar results to well-designed face-to-face surveys. However, a number of other question areas are not so encouraging, particularly where the issues are sensitive.

A further note of caution when applying relatively heavy weighting to data sets, such as the propensity score weights used in this study, relates to the design effect and its impact on effective sample size. If such weights are to be used, the sample must be large enough to ensure that the resulting effective sample size will stand up to significance testing. Of course, as the cost per interview is low when a large number of online interviews are conducted, this may not be a problem. Despite these limitations it seems likely that online surveys will grow substantially over the next few years. This is partly because there are doubts over either the capacity or the methodological advantages of traditional methods. Face-to-face interviewing resources are limited and increasingly expensive. Landline telephone penetration is dropping, with 7% of households currently having no phone or only a mobile. This figure is likely to grow significantly and, more importantly, there is significant bias involved, with young households in particular much more likely to have mobiles alone. It is therefore important to continue to think about how, and in what circumstances, internet-based methodologies can be used for data collection, and to develop approaches that are as robust and representative as possible.

Source: Bobby Duffy and Kate Smith (MORI Online), George Terhanian and John Bremer (Harris Interactive), Comparing data from online and face-to-face surveys

Friday, January 26, 2007

Wednesday, January 24, 2007

MY PIECE OF ENGLAND




The deer
There is no specific mention in College archives of the origin of the Magdalen herd. E.P. Shirley, in his definitive account of English deer parks in 1867, found that whilst the hardy dark animals had long been in Britain, the more delicate yellow-red – hence 'fallow' – deer with which they interbred had been native to Greece and Italy, and were introduced to England by Saxon times.

It seems that, by a philosophical convolution arising from deer being herbivores, our magnificent animals were officially reclassified as vegetables! Though with an awful loss of dignity, the herd was thus safeguarded under College control.

The swan
The area where New Building stands, and where the Grove had been planted, was formerly thirteen gardens, orchards, and fish ponds. Macray records for 1491: 'for the keeping of two swans, of which one belonged to the President, 10d'; these were for his table, of course. The last entry for keeping swans is in 1617, though a gift of two black swans for ornamental purposes was accepted in 1904.

Oxford - Headington, 5 am (source: BBC Oxford)

Tuesday, January 23, 2007

Research Methods

Benchmarking good practices

A systematic investigation into current perceptions of qualitative methods in management research focused on issues such as barriers to their use; the ways in which they are assessed; how people define good practice in this area; and any skills deficits in the researcher community. The research was designed to find out how quality is assessed in qualitative research in the business and management field.

The project involved a review of the literature and a series of 45 in-depth interviews with members of four different groups who have an interest in this area. The groups were:
• key gatekeepers, such as journal editors and funders of qualitative research;
• practitioners who use qualitative research, such as opinion pollsters and consultants;
• university doctoral programme leaders;
• qualitative researchers.
A key issue that emerged was the extent to which qualitative research was viewed as credible. Sources of research credibility varied widely and in general definitions of credibility were seen to be associated with the quantification of data and were therefore seen to disadvantage qualitative research. Various elements of good practice in relation to qualitative management research were identified (and sometimes disputed) including flexible research design; epistemologically coherent analysis; reflexivity concerning process and product of research; and a persuasive, engaging presentation.

Assessment of the quality of qualitative research appeared to be more of an intuitive decision-making process than the application of known and agreed criteria.

Assessment of the need for training provision for qualitative research
The provision of training for qualitative research was generally seen as scarce and of poor quality. There was a view that researchers needed to be aware of the complexity associated with conducting qualitative research, and of the wide variety of techniques available. More specific training needs included: ‘technical’ skills, such as data analysis techniques and writing up qualitative research appropriately; knowledge of the philosophical issues underlying qualitative research, and of the variety of approaches available to the researcher; reviewing skills; and skills in supervising qualitative research.

A series of workshops were designed to address deficits:

Skills of the qualitative researcher
Reflexivity
Philosophies that inform qualitative research
Analysis
Range of methods
Writing up and publishing reports
Assessment criteria
Reviewing qualitative papers and research grants
Supervision for qualitative research

Quantitative and qualitative research are increasingly viewed as compatible, after a period of ‘paradigm wars’ in which they were set against each other. Indeed, multi-strategy research has increasingly come to be perceived as a position that offers the best of both worlds. A project was set up to provide a comprehensive assessment of the state of the field with regard to the integration of quantitative and qualitative research; to identify areas or contexts in which the integration of quantitative and qualitative research is not obviously beneficial; and to explore an area of research in which quantitative and qualitative research co-exist as separate research strategies or traditions, analysing the prospects for linking the two sets of findings.

The methods used consisted of:
1. Content analysis of case studies of the integration of quantitative and qualitative research across the social sciences. Articles in refereed journals in five fields between 1994 and 2003 were analysed.
2. Examination of discursive strategies employed in making the case for combining quantitative and qualitative research.
3. Examination of the prospects for combining published accounts of research combining quantitative and qualitative research in the field of leadership.
4. Interviews with researchers.

The research itself thus entailed a combination of quantitative (analysing content) and qualitative research (exploration of discursive practice), in order to explore different aspects of the overall project.

A cross-sectional design was by far the most common design for the collection of both quantitative and qualitative data. Thus multi-strategy research is typically being carried out with a much more limited range of research designs and research methods than stock phrases such as ‘multi’ and ‘mixed’ might lead one to expect. The modally typical article comprises quantitative data deriving from a survey instrument administered within a cross-sectional design and qualitative data deriving from individual interviews within a cross-sectional design.
Among the rationales given to justify the use of multi-strategy research, complementarity and expansion were the most frequently cited, mentioned as a primary rationale by 29% and 25% of all articles respectively. When ‘practice’ is examined, it is striking that nearly half of all articles can be subsumed into the complementarity category. Multi-strategy research is something of a moveable feast.

The main and most striking feature of these findings is that nearly one half of all articles using both quantitative and qualitative research do not in fact integrate the findings.
Finally, each article was analysed to explore whether any vestiges of the paradigm wars were still operating.

Leadership is a field in which quantitative research has dominated for many decades, but over the last 15 years more and more qualitative studies have appeared. When the studies are examined it is clear that some qualitative studies can be combined with the dominant quantitative paradigm but some cannot. Most difficult to integrate with the still dominant quantitative research paradigm are those qualitative investigations that problematize leadership. Some qualitative research on leadership was similar to much quantitative research in terms of character and the kinds of research questions explored but did not include any quantification. Such studies were particularly easy to merge with quantitative findings.


Source:
ESRC Research Methods Programme

Monday, January 22, 2007

Water & Sanitation

The world is running out of water and needs a radical plan to tackle shortages that threaten the ability of humanity to feed itself. The breadbaskets of India and China are facing severe water shortages, and neither Asian giant can continue to use the same strategies for increasing food production that have fed millions over the last few decades. "In 2050 we will have 9 billion people and average income will be four times what it is today. India and China have been able to feed their populations because they use water in an unsustainable way. That is no longer possible."
Guardian, 22 Jan

A report published by the Office for National Statistics shows some striking changes in household food consumption in the year to April 2006. We are spending more on fruit (up 13%) and vegetables (up 6%, not including potatoes). Purchases of sweets and soft drinks are down by 8% and 6% respectively. Vitamin C intake rose by nearly 7%, driven by the increased fruit and vegetable consumption.


Water & Sanitation Education

Preparatory investigations

A water and sanitation education programme should be informed by the local situation and by people’s perceptions of problems and solutions. The reasons for an investigation into social and health-related aspects do not differ fundamentally from the reasons for a technical survey: who would waste money on borehole drilling without a thorough investigation of soil conditions and water availability? Another reason for investigating social and health aspects is that they have an important influence on the successful development of the technical component.

Type of information
The type and depth of information required depend on the project phase and on the reason why the information is needed. In an early stage of the project, when basic decisions have to be made about the scope of the hygiene education programme and the inputs required, it may be sufficient to have a tentative assessment of people’s health problems related to water and sanitation and of possible ways to reduce these problems. For a detailed hygiene education action plan, however, it will be necessary to know more about how people behave, the difficulties people face when trying to make improvements, and what openings there are for the project to help overcome these difficulties.

Type of investigation
The collection of information on social and health-related aspects in the first phases of a project is known by several names. These different names only partly reflect different types of investigation, as will be clear from the short descriptions below. All are a sort of baseline study or formative evaluation, and all aim to help the hygiene education programme take shape and to provide a baseline for monitoring and evaluation.

Situation analysis
A situation analysis is usually a rather broad undertaking to collect the data necessary for rational planning and programming. It seeks to identify the main water- and sanitation-related problems affecting health, and the opportunities for action and improvement. Often it is a rather distant activity, oriented more towards getting the project informed than towards getting the community motivated and involved. It typically results in a series of quantitative data (for example, population figures) together with broad qualitative impressions (for example, on the general health situation, or on the need for improved water supply and user practices).

Knowledge & attitude study
This aims to provide project staff with a more intimate understanding of people’s knowledge, attitudes and practices with regard to water, sanitation and health. Some people tend to use this type of study on the wrong assumption that proper knowledge will lead to proper attitudes and then to proper behaviour. Transfer of information does not automatically lead to a change of behaviour. However, when knowledge, attitudes and practices are not put in a causal sequence but regarded as three important influencing factors, such a comprehensive study of the three issues can be very valuable for the design of a hygiene education programme.

Sources of information
Information can be collected in a variety of ways, such as:
- informal discussions with individuals and groups;
- checklist-based interviews with individuals, such as a household member, primary-school teacher, community representative, health worker, or women’s leader;
- checklist-based group interviews, for example with mothers of small children, members of a local organisation, neighbourhood groups, or school-aged children;
- focus group interviews, in which a homogeneous group freely exchanges views on a specific subject;
- household surveys using a questionnaire, in which case care should be taken to include both male and female household members, to avoid a distorted picture;
- observation at household and community level, for example by visiting water and sanitation sites during environmental walks;
- participant observation, in which the investigator remains some weeks or months in the community, observing and recording the activities and events of daily life;
- screening of available documentation and statistical data.

Such a first investigation takes time, but it is a one-off affair, only repeated for evaluation or in preparation for a new project period. People’s involvement in investigations can range from passive information providers to active participants. At one extreme, project staff decide what information is collected, from whom, by whom, where and when; at the other, community groups and local workers participate actively in all activities. The highest form of community participation is community self-study, in which the project only provides advice on request.

Source: www.oxfam.org.uk

Humanitarian co-ordination

Humanitarian co-ordination is based on the belief that a coherent, co-operative response to an emergency by those actors engaged in humanitarian response will maximise the benefits and minimise the potential pitfalls of that response. All activities that involve more than one actor require some way of dividing activities among the different actors, and some way of managing the interdependencies between the different activities. These interdependencies can be managed by a variety of co-ordination mechanisms, such as: standardisation, where predetermined rules govern the performance of each activity; direct supervision, where one actor manages interdependencies on a case-by-case basis; and mutual adjustment, where each actor makes ongoing adjustments to manage the interdependencies.

However, co-ordination is not an end in itself, but rather a tool to achieve the goal of saving lives and reducing suffering. This must be achieved by delivering the right assistance, to the right place, and at the right time – enabling those affected by conflict and disasters to achieve their rights to protection and assistance.

Each State has the responsibility first and foremost to take care of the victims of natural disasters and other emergencies occurring on its territory. The affected State has the primary role in the initiation, organisation, co-ordination, and implementation of humanitarian assistance within its territory.
The magnitude and duration of many emergencies may be beyond the response capacity of many affected countries. International co-operation to address emergency situations and to strengthen the response capacity of affected countries is thus of great importance. Such co-operation should be provided in accordance with international law and national laws.
Effective humanitarian response requires effective co-ordination of national and international responses. Since States hold the primary responsibility for meeting the basic needs of their people, it is critical that national actors (government and civil society) be included in humanitarian responses occurring within their territory. Any model aiming to enhance humanitarian response capacity must integrate with, and build on, existing national capacities, although in some cases such capacity or will is limited. Where significant government and civil society capacity exists, effective responses depend on creating genuine partnership between international and national efforts.
• Co-ordination of humanitarian assistance must use a common understanding of rights-based responses to humanitarian crises, as outlined in the Sphere Humanitarian Charter and, in particular, the Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief. These rights are articulated in international human rights law, humanitarian law, and refugee law. These should guide co-operative planning, monitoring and evaluation of responses.
Good co-ordination of international humanitarian assistance requires:
• close co-operation with government, local civil society institutions, and disaster- and/or conflict-affected communities, identifying capacities and needs;
• effective joint assessment of needs and elaboration of a joint strategy; a division of labour among humanitarian actors so that all needs are met; removal of gaps; good information sharing; good leadership on standards and accountability; adherence to best practice and codes of conduct; and an efficient use of resources;
• assigned leadership, trust, shared information, and co-operation by groups of agencies working in the same sector of humanitarian assistance;
• inclusiveness of several operational actors (UN and non-UN), with a lead co-ordination agency given responsibility to convince and actively ensure appropriate participation;
• clearly assigned responsibility to assess capacities in the sector, identify gaps, and decide how to fill them;
• a lead co-ordination agency that fulfils its responsibilities and does all it can to fill remaining gaps, i.e. acts as the provider of last resort;
• commitment from sectors or clusters to work co-operatively with other groups/sectors to ensure coherence of response, and to ensure that cross-cutting issues (such as protection, gender, and the environment) are addressed; and
• people with the right profile and skills provided to lead and facilitate.
Co-ordination of humanitarian action is enhanced by recognising the complementarity of different agencies’ modes of action. Co-ordination can be improved by developing common criteria for assessing needs and measuring impact, and by establishing clear arrangements among humanitarian organisations regarding the geographic and thematic division of roles and responsibilities in a given context, based on the capacity and competencies of each organisation.
Co-ordination of humanitarian assistance must be based on quality programming and should ensure accountability to beneficiaries. Co-ordination of quality assistance must be based on the Sphere principles, standards, and indicators. These should form the common reference for co-ordination (see OI humanitarian quality policy in this series).
Co-ordination of humanitarian assistance must create bridges to the transition phase following conflicts and disasters. Better co-ordination of the exit strategies of humanitarian organisations and the entry strategies of development agencies is critical.
Humanitarian reform must not only focus on UN reform. Serious reform will ultimately lead to more effective and more reliable humanitarian response where it counts most in the field, for the people affected by disaster or conflict. It must engage international and local non-governmental actors. Engagement of civil society is a prerequisite to humanitarian reform.
The particular needs of women and children must be addressed at all stages of response. Cross-cluster co-ordination is required as are the continued links between sectors and clusters. Protection must be seen as the role of all humanitarian agencies, as a cross-cutting issue.
There is a need for improved co-ordination of the food security cluster, in order to improve response and to ensure coherence between the actors and interventions that aim to ensure people can access their minimum food needs in emergencies, as well as complementarity with longer-term food security strategies. Currently, emergency food security responses are dominated by food aid. There is a need to promote inter-agency mechanisms which ensure that a range of interventions (food aid, cash transfers, and livelihood support) are considered and utilised in humanitarian responses, according to needs and context (see OI’s ‘Causing Hunger’ Briefing Paper, 2006, and the food aid policy in this series).
Oxfam welcomes the expanded role of UNHCR as the lead agency in protection for conflict-related emergencies, in addition to its mandated role with refugees, and urges UNHCR to apply this approach consistently to all situations involving internally displaced people (IDPs). This must not, however, detract from effective implementation of its ongoing core mandated role of providing international protection for refugees.
UN Humanitarian Coordinators (HCs) should be appointed on the basis of their demonstrated humanitarian experience and performance. For this reason, the HC post should be separated from the post of UN Resident Coordinator (RC), unless the RC already has all the relevant skills and is clearly accountable to the UN Emergency Relief Coordinator for her/his humanitarian performance.
The pool of HCs and cluster leads with the requisite skills must be rapidly expanded, through drawing from a variety of sources including humanitarian networks outside the UN. Enhanced training including knowledge of NGOs’ roles, principles, and standards is critical.
While recognising the importance of military logistics capacity in the early stages of some emergencies, and the necessity of co-ordinating the delivery of material assistance, humanitarian co-ordination must remain apart from the military and political operations of the UN. It must ensure ‘humanitarian space’ in which assistance can be provided impartially and independently.

Source:
http://www.oxfam.org.uk/what_we_do/issues/conflict_disasters/
downloads/oi_hum_policy_coordination.pdf

Research objectives

Environment an identity issue

Environmental democracy, the core subject of my studies, builds on our identities in relation to other diverse bodies in nature. The environmental crisis is above all a problem of knowledge and of our democratic approach. It leads to rethinking identities in search of connecting with, managing and improving our complex relationship with the environment. One dimension that has been much neglected in the past stems from the impression that nature is costless and infinite. Environmental rationality seeks to re-establish the links between knowledge, certainties, and the purpose of our surroundings and the way we get on with them. We are defining new roles, concepts and spaces. The consequences of ignoring environmental signals and delaying environmental democracy have proved to be much greater than expected: global warming and water scarcity, for example, have become major motivating factors for countries in Central Asia to escalate conflicts over access to water. Studies have found that we survive today by borrowing from the future. Pressing environmental rationality implies the reconstitution of identities beyond instrumental modern thinking, calculating and planning. The solution to the global environmental crisis rests upon revising mindsets, perceptions and values, bringing with it institutional change. Environmental democracy modifies the logic of the scientific control of the world, of the technological domination of nature, and of the rational administration of the environment, and in so doing develops a culture of adaptation to nature. Over-reliance on the science that helped liberate man from underdevelopment and oppression has generated a one-dimensional, alienated society. We ought to return to nature and cherish its values in a sustainable manner.

My studies focus on the environment as an ‘identity issue’ revolving around environmental democracy, particularly access to water and its current state of affairs. For this I need reliable sources to compare behaviour and decision-making, and to examine the barriers and risks involved, across different social contexts, in our democratic response to global warming, over-population and resource depletion.

Un-rooted

Humanistic geography interprets the life-world as the contextualization of experience in place, with place conceptualized not as a point in space, nor even as landscape, but as community, field of care, and centre of significance, all linked to human identity. The place, the property of Britain on old Shemiran road, will always have a meaningful presence in the historic craving for constitutional establishment in this country. The place is not immune from the raid of barbarians who are set to destroy roots, principles, dignity and whatever else would signify nobility for this nation. The place is set to be filled by the parasites of this society: petty materialism, greed, the voracity of vulgar rulers in robbing identities, traditions, historic pledges and commitments, and all that was left as a centre of significance for the community; a place that might have sparked some hopes for emancipation…..

Sunday, January 21, 2007

Quantitative Method

Modelling Discrete Data: An Overview

Discreteness
Categories: e.g., single, married, divorced
Counts: e.g., number of children
Rounded measurement: e.g., earnings to the nearest $1000.
The last of these should only really be regarded as discrete if the rounding is very coarse. (It is ‘in principle’ continuous.)

Models
(a) Multivariate: describe economically the joint distribution of several variables
(b) Dependence: describe economically the conditional distribution of one or more variables Y given fixed values of other (explanatory, or predictor) variables X.
Many research questions are answered via (b).
Models of kind (a) are often exploratory, suggestive of research questions.
Counting
(A) ‘Pure’ counts: the number of ‘events’ (e.g., children, criminal arrests, squash courts) in a given amount of ‘exposure’ (years, police-officer-years, km²). Interest is in the rate per unit of exposure, and in how that rate depends on other variables.
(B) Category counts: the number of individuals (e.g., men, constituencies, sporting events) falling into the categories of a cross-classification (father’s class by own class, party of MP, sport by period by nation). Interest is in the interdependence of the cross-classifying variables.
Counting of type (B) converts qualitative data to quantitative, for statistical analysis.

Variation
• from time to time
• from place to place
• from sample to sample
etc.
Variation may be:
systematic: in which case it is either the object of interest, or needs to be taken into account to avoid biased conclusions
random: sometimes of substantive interest, but more often a nuisance to be allowed for in reporting the precision of conclusions (e.g., sampling error)
Statistical models represent both kinds of variation.

Distributions
• ways of describing random variation
• for counts, the most important are
– Poisson (for ‘pure’ counts)
– binomial, multinomial (for category counts)
Others (e.g., negative binomial, beta-binomial) may be used where there is overdispersion relative to the Poisson or binomial.
Poisson distribution
Consider events occurring in time, or space, or whatever:
• singly
• independently
• at a constant rate (λ, say)
The number of events Y in time t then has distribution
Y ∼ Poisson(λt).
Mean and variance are E(Y) = var(Y) = λt.
Large counts vary more than small counts. But the coefficient of variation is sd(Y)/E(Y) = 1/√(λt), so large counts are more informative. (Obviously!)
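As a quick numerical check of these properties, here is a minimal Python simulation sketch (the rate, exposure and sample size are arbitrary illustrations):

    import numpy as np

    rng = np.random.default_rng(0)
    lam, t = 2.5, 4.0                  # rate per unit exposure, and exposure
    y = rng.poisson(lam * t, size=100_000)

    print(y.mean(), y.var())           # both close to lam * t = 10
    print(y.std() / y.mean())          # CV, close to 1/sqrt(lam*t) ~ 0.316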

Binomial distribution
When m independent individuals are allocated at random to one of two categories, the number Y in category 1 has the binomial distribution:
Y ∼ binomial(m; π)
where π is the (assumed constant) probability of being allocated to category 1. The interpretation of π is often as the population proportion that would be allocated to category 1.
Mean and variance are E(Y) = mπ, var(Y) = mπ(1 − π).
Multinomial: if there are k categories,
(Y1, ..., Yk) ∼ multinomial(m; π1, ..., πk)
with π+ = π1 + ... + πk = 1.
The binomial is simply the special case with k = 2.
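A similar sketch for the binomial and multinomial moments (the values of m and the probabilities are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    m, pi = 50, 0.3
    y = rng.binomial(m, pi, size=100_000)
    print(y.mean(), y.var())           # close to m*pi = 15 and m*pi*(1-pi) = 10.5

    pis = [0.2, 0.3, 0.5]              # k = 3 categories, pi+ = 1
    ym = rng.multinomial(m, pis, size=100_000)
    print(ym.mean(axis=0))             # close to [10, 15, 25], i.e. m * pis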
Some relationships
1. Poisson variables conditional on their total: if Yi ∼ Poisson(μi) (i = 1, ..., k), then
(Y1, ..., Yk) | (Y+ = m) ∼ multinomial(m; π1, ..., πk)
with πi = μi/μ+.
So there is a formal equivalence between some Poisson and multinomial models. This is sometimes exploited to fit multinomial models in software designed for (univariate-response) generalized linear models; more modern software provides more explicit facilities for multinomial models.
2. Subtotals: if (Y1, ..., Yk) ∼ multinomial(m; π1, ..., πk), and Y* = Σi∈S Yi for some subset S of the categories, then
Y* ∼ binomial(m; π*)
where π* = Σi∈S πi.
3. Conditional multinomial: e.g.,
(Y1, ..., Yt) | (Y1 + ... + Yt = m*) ∼ multinomial(m*; π*1, ..., π*t)
where π*i = πi / (π1 + ... + πt).
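Relationship 1 lends itself to a simulation check: draw independent Poisson variables, keep only the draws whose total equals m, and compare the conditional category proportions with μi/μ+. A sketch (the means are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([2.0, 3.0, 5.0])             # Poisson means mu_i
    y = rng.poisson(mu, size=(200_000, 3))     # independent Poisson draws

    m = 10
    cond = y[y.sum(axis=1) == m]               # condition on the total Y+ = m
    print(cond.mean(axis=0) / m)               # close to mu/mu+ = [0.2, 0.3, 0.5]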

Poisson response
Suppose Yi is the number of events in time (or other exposure quantity) ti. The aim is to relate the distribution of Yi to explanatory variables xi1, ..., xip.
The distribution is Yi ∼ Poisson(λi ti), determined entirely by the mean λi ti. The rate λi is usually the object of interest.
The most standard model is log-linear:
log λi = xi1 β1 + ... + xip βp
Interpretation: exp(βr) is the factor by which the rate of occurrence λi is multiplied when xir increases by one unit (with other explanatory variables held constant).
Log-linear models for Poisson counts thus embody the notion that effects are multiplicative on the rate of occurrence. This is very natural for many applications.
It will not always be appropriate to assume that effects are multiplicative. For example, it may be that there is a ‘background’ rate which is additive:
λi = exp(α) + λ*i, say,
where λ*i perhaps satisfies the log-linear model above. That is, most of the effects are rate-multipliers, but the background effect is additive.
The appropriate specification to use in any particular application demands some thinking about the data-generating process. The choice may sometimes need to be informed by fitting competing specifications to the data, followed by suitable diagnostics.
The log-linear model is an example of a generalized linear model (more later). The mixed additive-multiplicative model is not.
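To make the log-linear specification concrete, here is a sketch using Python’s statsmodels (the variable names and ‘true’ parameter values are invented for illustration); note how the exposure ti enters as a log-offset:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5_000
    x = rng.normal(size=n)
    t = rng.uniform(0.5, 5.0, size=n)       # exposure t_i
    lam = np.exp(0.2 + 0.7 * x)             # true rate per unit exposure
    y = rng.poisson(lam * t)

    X = sm.add_constant(x)
    # log E(Y_i) = log t_i + beta_0 + beta_1 * x_i
    fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(t)).fit()
    print(fit.params)                       # close to [0.2, 0.7]
    print(np.exp(fit.params[1]))            # rate multiplier per unit increase in x

The exponentiated coefficient is exactly the multiplicative effect on the rate described above.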
Logits: pros and cons
1. In practice, with binomial-response data there is very little difference between using logit and probit link functions. Coefficients are on a different scale, but conclusions will be similar.
2. The logit is symmetric (π and 1 − π can be interchanged, and only the sign of the coefficients is affected). So is the probit. The log-log links are not (and this provides some flexibility if needed: one of the log-log links may fit better than the other).
3. Can yield nonsensical predictions (fitted values), namely probabilities implausibly close to 0 or 1. (A criticism more usually levelled at linear probability models, but logit-linear models are not necessarily better.)
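Point 1 can be illustrated directly. In the sketch below (simulated data, invented parameter values), the logit and probit coefficient vectors differ by a roughly constant scale factor while the fitted probabilities are almost identical:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5_000
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))   # true logit model
    y = rng.binomial(1, p)

    X = sm.add_constant(x)
    logit = sm.Logit(y, X).fit(disp=0)
    probit = sm.Probit(y, X).fit(disp=0)

    print(logit.params / probit.params)      # roughly constant ratio, about 1.6-1.8
    print(np.abs(logit.predict(X) - probit.predict(X)).max())  # tiny difference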
Generalized linear models
The notion of relating the expected value of a response (such as a count or binomial proportion) to explanatory variables, through a link function, is very general.
We have seen that it may not always be the right thing to do (there may be nonlinearity to account for). But still it provides a useful starting-point for thinking about dependence in situations where the standard linear model is problematic.
The generalized linear model is simplest thought of in terms of mean and variance:
E(Yi) = μi = g⁻¹(Σr xir βr), var(Yi) = φ V(μi),
where g is the link function, V is the variance function, and φ is a dispersion parameter.

Why is it useful to think of Poisson loglinear models and logit/probit/etc regressions as generalized linear models?
• It’s not essential. A good understanding of those models can be had without the full GLM framework.
• BUT generalized linear models all have various useful features in common:
– linear predictor
– efficient algorithm for computing maximum likelihood estimates (iterative weighted least squares)
– ‘analysis of deviance’ for model screening and model choice
– definitions of residuals and other diagnostics
These common features are especially well exploited in good software programs, which provide the same interface to all GLMs both in terms of model specification and model criticism. The earliest and most famous example was GLIM, introduced in the 1970s. A good modern example is glm() in R or S-Plus.
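For instance, with statsmodels in Python the deviance and the standard residual definitions come with every GLM fit; a minimal sketch on simulated Poisson data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.normal(size=2_000)
    y = rng.poisson(np.exp(0.1 + 0.5 * x))

    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
    print(fit.deviance)               # residual deviance, for model screening/choice
    print(fit.resid_pearson[:3])      # Pearson residuals, for diagnostics
    print(fit.resid_deviance[:3])     # deviance residuals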
Overdispersion
The standard distributions all have variances determined by the mean, e.g., the Poisson has variance = mean.
Often in practice (most often, in fact) the residual variance after fitting a model is larger than it should be under the relevant (e.g., Poisson or binomial) model. A standard measure of such overdispersion is X²/(n − p) = φ̃, say, where n − p is the residual degrees of freedom for the model and
X² = Σi (yi − μ̂i)² / V(μ̂i)
is the ‘Pearson chi-squared statistic’, the sum of squared ‘Pearson’ residuals. An alternative to X² here is the model deviance D (sometimes labelled G²).
The cause of overdispersion is either positive correlation of responses, or missing explanatory variables (and these alternative causes are hard or impossible to distinguish).
The effect of overdispersion is to make the ‘usual’ reports of precision (standard errors, etc.) too small. An approximate remedy is to multiply all standard errors by √φ̃.
If φ̃ is appreciably less than 1 it is an indication of underdispersion, a less common phenomenon in social-scientific work, usually caused by inhibition of events by other events, or by regularity. (For example, the number of buses passing my house each hour is, thankfully, severely underdispersed relative to the Poisson distribution.)
The approximate ‘fix factor’ √φ̃ is justified theoretically by the notion of quasi-likelihood, based on the simple assumption that φ in the GLM formulation takes a value different from 1.
A more elaborate approach is to use a ‘robust’ or ‘sandwich’ estimator of the standard errors of estimated coefficients, and this is implemented in many common software systems. However, such estimators are rather unstable (non-robust!) except with very large datasets.
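As a sketch of the quasi-likelihood ‘fix factor’ in practice (the gamma mixing below is just one way to manufacture overdispersed counts; statsmodels is used for the fit):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2_000
    x = rng.normal(size=n)
    # gamma-mixed Poisson counts: overdispersed relative to the Poisson
    mu = np.exp(0.3 + 0.6 * x) * rng.gamma(shape=2.0, scale=0.5, size=n)
    y = rng.poisson(mu)

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

    phi = fit.pearson_chi2 / fit.df_resid    # phi-tilde = X^2 / (n - p)
    print(phi)                               # noticeably > 1 here
    print(fit.bse * np.sqrt(phi))            # standard errors scaled by sqrt(phi-tilde)

    # equivalent built-in: rescale by Pearson X^2/(n - p) at fit time
    print(sm.GLM(y, X, family=sm.families.Poisson()).fit(scale='X2').bse)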



Source: ESRC Oxford Spring School in Quantitative Method for Social Research
http://springschool.politics.ox.ac.uk/
springschool/archive.asp

Lakes in Danger

Save Radley Lakes has joined communities trying to prevent the destruction of a 12-hectare site located between Radley and Abingdon.

RWE NPower and Oxfordshire County Council have between them concocted a scheme which, if approved by the Secretary of State, will destroy a beautiful lake and deprive the community of an essential part of the landscape.

The area is rich in bio-diversity and should be protected.

Instead, 500,000 tonnes of waste pulverised fuel ash (PFA) will be dumped here, to the detriment of the wildlife and the environment.

The CPRE have supported us and are appalled at the Government's stance, particularly in view of the speech made recently by Ruth Kelly.

http://www.saveradleylakes.org.uk/



...Waste Recycling Group have applied to Oxfordshire County Council to vary their planning permission for Sutton Courtenay landfill site. Because the landfill site is so much larger than originally thought (there is around 8 million cubic metres of space remaining) and because the amount of waste going to landfill is falling as a result of recycling policies, WRGL is asking Oxfordshire County Council to extend the life of the site from 2012 until 2021. This means they are asking Oxfordshire County Council for approval to increase the number of lorries so they can fill the void with more London Waste before their licence expires!!!

Why can't NPower put their fuel ash in the spare capacity WRG have and save a lake from destruction? Will Oxfordshire County Council act in the interests of the public instead of big business? Write and ask them to reconsider the current planning application by NPower - whilst there is still time.

Energy Efficiency

In November, the International Energy Agency projected that China will become the world's largest source of carbon dioxide emissions in 2009, overtaking the United States nearly a decade earlier than previously anticipated. Coal is expected to be responsible for three-quarters of that carbon dioxide.

And the problem will get worse. Between now and 2020, China's energy consumption will more than double, according to expert estimates. Ratcheting up energy efficiency, tapping renewable resources with hydro dams and wind turbines, and building nuclear plants can help, but, at least in the coming two decades, only marginally. Since China has very little in the way of oil and gas reserves, its future depends on coal. With 13 percent of the world's proven reserves, China has enough coal to sustain its economic growth for a century or more. The good news is that China's leaders saw the coal rush coming in the 1990s and began exploring a range of advanced technologies. Chief among them is coal gasification. "It's the key for clean coal in China," says chemical engineer Li Wenhua, who directed advanced coal development for Beijing's national high-tech R&D program (better known in China as the "863" program) from 2001 through 2005.

Gasification transforms coal's complex mix of hydrocarbons into a hydrogen-rich gas known as synthesis gas, or "syngas." Power plants can burn syngas as cleanly as they can natural gas. In addition, with the right catalysts and under the right conditions, the basic chemical building blocks in syngas combine to form the hydrocarbon ingredients of gasoline and diesel fuel. As a result, coal gasification has the potential both to squelch power plants' emission of soot and smog and to decrease China's growing dependence on imported oil. It could even help control emissions of carbon dioxide, which is more easily captured from syngas plants than from conventional coal-fired plants.

Despite China's early anticipation of the need for coal gasification, however, its implementation of the technology in power plants has lagged. The country's electricity producers lack the economic and political incentives to break from their traditional practices.

In contrast, large-scale efforts to produce liquid transportation fuels using coal gasification are well under way. China's largest coal firm, Shenhua Group, plans to start up the country's first coal-to-fuels plant in 2007 or early 2008, in the world's most ambitious application of coal liquefaction since World War II. Shenhua plans to operate eight liquefaction plants by 2020, producing, in total, more than 30 million tons of synthetic oil annually--enough to displace more than 10 percent of China's projected oil imports.

If the new plant works, Shenhua stands to earn a substantial profit. The company predicts that its synthetic oil will turn a profit at roughly $30 a barrel, though many analysts say $45 is more realistic. (The U.S. Department of Energy's most recent price forecast predicts that crude oil will dip to $47 a barrel in 2014, then climb steadily to $57 a barrel in 2030.)

Beyond the risks inherent in the large-scale deployment of unproven technology, the gasification building boom also is an environmental gamble. Indeed, what may ultimately check China's coal-to-oil ambitions is water. China's Coal Research Institute estimates that Shenhua's plant will consume 10 tons of water for every ton of synthetic oil produced (360 gallons of water per barrel of oil), and the ratio is even worse for Fischer-Tropsch plants. Last summer, China's National Development and Reform Commission, the powerful body charged with regulating China's economy and approving large capital projects, issued a warning about the environmental consequences of the "runaway development" of synthetic-oil and chemical plants, which it said will consume tens of millions of cubic meters of water annually.
That prediction sounds particularly ominous in northern China, where water is scarce. Erdos is a mix of scrub and desert whose meager water supplies are already overtaxed by population growth and existing power plants. Zhou Ji Sheng, who as vice manager of ZMMF, one of Shenhua's Erdos-based competitors, is seeking financing for a gasification project, acknowledges that water scarcity could put an end to coal gasification in the area. "Even though we have so much coal, if we have no water, we will just have to use the traditional way--to dig it out and transport it," he says. "Water is the key factor for us to develop this new industry." Zhou says his firm plans to supplement its water supply by building a 120-kilometer pipeline to the Yellow River. But evaporation from hydroelectric reservoirs, the increased demand of growing cities and industries, and the effects of climate change mean that in the summer, the Yellow River barely reaches the sea.

Source: http://www.technologyreview.com
/Energy/17963/page2/