Monday, October 20, 2008

We cannot be ruled by law

Vagueness in law

The argument is that vagueness, and resultant indeterminacies, are essential features of law. Although not all laws are vague, legal systems necessarily include vague laws. When the law is vague, the result is that people's legal rights and duties and powers are indeterminate in some (not in all) cases.

The indeterminacy claim seems to make the ideal of the rule of law unattainable: to the extent that legal rights and duties are indeterminate, we cannot be ruled by law. The indeterminacy claim is a threat to what shall be called the "standard view of adjudication": the view that the judge's task is just to give effect to the legal rights and duties of the parties. These drastic consequences have made the indeterminacy claim an important focus of controversy in legal theory in this century.

To put those controversies in a new light, the second characteristic mark of vagueness is addressed: the tolerance principle.

It would be senseless to try to quantify the indeterminacies that arise from vagueness in any legal system, but we should accept the general claim that they are significant. Unlike radical indeterminacy claims, the argument casts no doubt on the sense of the practice of law, or on the meaningfulness of statements of law.

The application of vague language is indeterminate in some cases but not in all cases.

Vagueness is a paradigmatic source of indeterminacy in law, and a very important one. Along with express grants of discretion and conventions giving judges power to develop the law, it is one of the most important sources of judicial discretion. And unlike other sources of indeterminacy, such as ambiguity, it is a necessary feature of law.

The similarity model claims that there is no more satisfactory way of picturing the application of vague expressions than to say that they apply to objects that are sufficiently similar to paradigms. The similarity model is barely a model, and is not a theory: it gives no general explanatory account of the application of vague expressions.


Professor Timothy Endicott, Dean of Oxford Law, 2007
http://fds.oup.com/www.oup.co.uk/pdf/0-19-826840-8.pdf

Surface characteristics

Within the crystal the bonding orbitals of all the atoms are satisfied and each type of atom is in the same environment as its counterparts. In contrast to this, the surface atoms have dangling bonds that are not satisfied, and this deficiency of chemical bonding can be characterized quantitatively by a surface energy per unit area.

The [111] surface of a silicon or germanium crystal consists of a hexagonal array of Si atoms in the layer below the surface. The energy is lowered by surface reconstruction, in which surface atoms move together and bond to each other in pairs to satisfy the broken dangling bonds. Another phenomenon that sometimes takes place is surface structure relaxation, in which the outer layer of atoms moves slightly toward or away from the layer below. Contraction takes place at most metal surfaces. The Si [111] surface layer contracts by about 25%, and the three interlayer spacings below compensate by expanding by between 1% and 5%.
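As a rough illustration of the relaxation figures above, the surface spacings can be computed from a bulk interlayer distance; the 0.314 nm value below is an assumed, illustrative number, while the percentages are those quoted in the text.

```python
# Illustrative sketch only: apply the quoted relaxation percentages
# to an assumed bulk interlayer spacing d0 (hypothetical value).
d0 = 0.314  # nm, assumed bulk interlayer spacing

top = d0 * (1 - 0.25)  # outermost spacing contracts by about 25%
deeper = [d0 * (1 + f) for f in (0.01, 0.03, 0.05)]  # deeper layers expand 1-5%

print(f"top spacing: {top:.3f} nm")
for i, d in enumerate(deeper, start=2):
    print(f"spacing {i}: {d:.3f} nm")
```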

Sunday, October 19, 2008

Hydrogen fuel cells

Researchers developing enzymes to replace platinum in hydrogen fuel cells

10 October 2008

Chemists at the University of Oxford have developed a hydrogen fuel cell which uses enzymes to catalyse the reactions on electrodes, producing electricity.

Current fuel cells use expensive precious-metal catalysts, such as platinum, to drive the fuel cell reaction. In 2003 Dr Kylie Vincent and Professor Fraser Armstrong were awarded the Carbon Trust Innovation Award for their proposal that enzymes from bacteria, which oxidize hydrogen, and enzymes called laccases, from fungi, which reduce oxygen to water, could be used to replace platinum catalysts at both electrodes.

"Enzymes are extremely good electrocatalysts," said Armstrong. "The activity of hydrogenase enzymes is at least as high as platinum catalysts, and laccase catalysts are also more efficient."

Vincent added: “Enzymes are also much more selective than traditional catalysts, meaning that enzymatic fuel cells can run on a safe mixture of low-level hydrogen and oxygen, rather than running on separate supplies of these gases.” This was demonstrated recently.

"Enzyme-powered fuel cells would have some big advantages, in that they would be biodegradable. The cost of producing the enzymes could eventually be brought down so that they are cheaper than conventional catalysts."

The team is aiming to develop fuel cells for niche applications, such as self-powered sensors. "A commercial product will require a number of further improvements, including attaching the enzymes permanently and in a fully stable manner on the electrodes. The structures of enzymes could also be optimised for particular applications," said Prof Armstrong.

The researchers are working with the University's technology transfer company, Isis Innovation, to commercialise the invention. A patent has been filed, and Isis welcomes discussions with interested commercial partners.

For more information:
Professor Fraser Armstrong and Kylie Vincent (technical enquiries)
Inorganic Chemistry Laboratory, University of Oxford
Tel: +44 (0) 1865 287182/272647
Fax: +44 (0) 1865 287182/272690

Dr Stuart Wilkinson
(commercial enquiries)
Project Manager
Isis Innovation

http://www.isis-innovation.com/news/news/HydrogenFuelCells.html

Nitrides blue



Atomic arrangement of GaN

Nitrides have the wurtzite atomic structure shown in the figure, which is different from that of conventional semiconductor materials such as Si and GaAs and leads to anisotropic physical properties.


About "Nitride"? You may know the device lighting blue at an antenna of handy phone, which is made of nitride. In this device called the light-emitting diode (LED), injected current is directly converted into light. This light source has no heating parts nor discharging parts, therefore, it has very effective and long lifetime. If all traffic signals in Japan were replaced into LEDs, it is possible to save energy corresponding to 7 atomic power plants. The blue LED, which emits one of three primary colors, had not been achieved until the 2nd half of 80s.

http://www.meijo-u.ac.jp/ST/coe/ENGLISH/NNtoha.html

TiO2 eradicating cancer cells

.........Interestingly, the photocatalytic properties of TiO2 have been shown to mediate toxicity that can eradicate cancer cells.16,17 It is now well established that TiO2 particles, on exposure to ultraviolet (UV) light, produce electrons and holes, leading subsequently to the formation of ROS such as hydrogen peroxide, hydroxyl radicals, and superoxides.18 These oxygen species are highly reactive with cell membranes and the cell interior, with the damaged areas depending on particle location upon excitation. Such oxidative reactions affect cell rigidity and the chemical arrangement of surface structures, leading to cell toxicity.19 Despite promising outcomes in killing cancer cells, such treatments would be difficult to implement in clinical settings for the following reasons. First, UV light cannot penetrate deeply into human tissues, limiting this technique to superficial tumors.20 Second, UV-mediated production of ROS has a very short life span and thus could not provide a continuous, prolonged cancer-killing effect.19

Surface functionality has been shown to affect cell-particle interactions. Although it has been suggested that surface functionality should be the determining factor concerning cell uptake and subsequent activity inside the cell, studies that have varied surface functionality to investigate membrane binding, uptake, and internalization of nanoparticles are limited. We thus hypothesize that, by varying surface functionality, the cell toxicity of TiO2 particles can be altered. Three functional groups with various surface charges (-OH, -NH2, and -COOH) were included in this investigation. We found the effect of particle surface functionality on cell toxicity to be cell-dependent. 3T3 fibroblasts and B16F10 melanoma cells showed no significant response to functionalized or untreated particles at concentrations as high as 1 mg/mL.

These findings are in agreement with recent findings that TiO2 nanoparticle surface functionality (hydrophilic vs. hydrophobic) had insignificant effects on cell toxicity in an intratracheal rat model. These differences may be due to protein composition of the cell membrane and the manner in which these proteins interact with the TiO2 particles. In the case of the melanoma and 3T3 cells, weaker particle-membrane interactions may explain the insignificant influence of surface functionality and higher survival rates of particle-exposed cells. In contrast, surface functionality exerts moderate influence on LLC cell toxicity, possibly as a result of increased interaction between the TiO2 particle surface and the cell membrane. The most significant variances were seen in the JHU prostate tumor cells.



The influence of particle concentration on survival rates of cells. TiO2 particles at various concentrations were added to culture plates with confluent cells. After incubation for 24 hours, cell viability was quantified with the Live/Dead cytotoxicity/viability stain (Molecular Probes) and normalized to untreated cells. Vertical lines denote ±1 SD (n = 4 for all test samples and cells). Significance of differences between cancer cells and 3T3 cells (▲): **P < .05.
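The normalization described in the caption (treated-cell viability expressed relative to the untreated control, reported as mean ± 1 SD) can be sketched as follows; the viability values here are hypothetical placeholders, not data from the study.

```python
# Sketch of the caption's normalization, with made-up viability values.
import statistics

untreated = [0.98, 0.97, 0.99, 0.96]  # raw viability of control wells, n = 4
treated = [0.60, 0.58, 0.63, 0.59]    # raw viability at one TiO2 dose, n = 4

control_mean = statistics.mean(untreated)
normalized = [v / control_mean for v in treated]  # survival relative to control

mean = statistics.mean(normalized)
sd = statistics.stdev(normalized)  # the +/- 1 SD plotted as vertical lines
print(f"normalized survival: {mean:.2f} +/- {sd:.2f}")
```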

The basis for the observed differential effects of surface functional groups on cell survival is largely unclear because of the complex interaction between cell-specific membrane properties and nanoparticle surface chemistry. The JHU prostate tumor cells showed a significant susceptibility to -NH2-coated nanoparticles. JHU cells exhibited relatively high cell toxicity at low particle concentrations (0.1 and 1 mg/mL), with cell survival only around 60% of that seen with uncoated particles. At high concentration (10 mg/mL) there is no difference in cell toxicity between -NH2-coated and uncoated nanoparticles. The detailed mechanism of these responses has yet to be determined. Because it is well established that positively charged nanoparticles have a high affinity for negatively charged cell membrane proteins,48 it is probable that JHU cell membranes were saturated with -NH2-functionalized particles at the lowest concentration (0.1 mg/mL) used. In addition, using polypropylene microparticles, we found that the density of surface functionality has little influence on cell-particle interactions. Therefore, the increase in surface NH2 concentration, or the increased exposure to NH2 groups in the case of nanoparticles, may not have a significant effect on cell survival.



Ref.

16 N.-P. Huang, M.-H. Xu, C.-W. Yuan and R.-R. Yu, The study of the photokilling effect and mechanism of ultrafine TiO2 particles on U937 cells, J Photochem Photobiol A: Chem 108 (2-3) (1997), pp. 229–233.

17 A.P. Zhang and Y.P. Sun, Photocatalytic killing effect of TiO2 nanoparticles on LS-174-T human colon cancer cells, World J Gastroenterol 10 (21) (2004), pp. 3191–3193.

18 C. Ogino, M. Farshbaf Dadjour, K. Takaki and N. Shimizu, Enhancement of sonocatalytic cell lysis of Escherichia coli in the presence of TiO2, Biochem Eng J 32 (2) (2006), pp. 100–105.

19 D.M. Blake, P.-C. Maness, Z. Huang, E.J. Wolfrum and J. Huang, Application of the photocatalytic chemistry of titanium dioxide to disinfection and the killing of cancer cells, Sep Purif Methods 28 (1) (1999), pp. 1–50.

20 R. Cai, Y. Kubota, T. Shuin, H. Sakai, K. Hashimoto and A. Fujishima, Induction of cytotoxicity by photoexcited TiO2 particles, Cancer Res 52 (8) (1992), pp. 2346–2348.



Phagocytes


Phagocytes are cells that are found in the blood, bone marrow and other tissues of vertebrates.[1] These cells ingest and destroy foreign particulate matter, such as microorganisms and debris, by a process called phagocytosis. They are important in immunity and resistance to infection.

Wikipedia

Saturday, October 18, 2008

Nanos skill gaps

A survey aimed at identifying the skill gaps of people who work in nanotechnology found that 42% of respondents reported human resource problems in their organisation, such as the availability of manpower with appropriate skills and the right depth of knowledge. While 58% of responses indicated that both generalist and specialist skill sets were valued by employers, 24% indicated a preference for generalist skills against 13% for specialist skills. Most organisations use a mixed approach to employee training and development; the most preferred training methods were on-the-job training (26%), continual professional development (22%), and short courses (15%).

The survey recommends the following actions:

-- greater practical experience during post-graduate training, with a focus on important competencies such as sol-gel processing, lithography, bottom-up assembly, and training in the use of SPMs and EMs


-- integrating competencies in materials science, the biology interface with nanomaterials, and nanoscale effects into post-graduate programmes.

-- inclusion of knowledge of new materials, their properties and selection, and design methodologies for new product development.

-- development of short sector-based modular courses to allow continued training of the workforce, including toxicology, health and safety, intellectual property rights and important societal issues such as ethics

--training to include research and development management, project management, technology strategy, technology marketing, sustainability, and risk assessment as elective modules for postgraduates

--training and professional development investigating specific training needs of sectors such as information and communication, medical devices and health care, electronics, aerospace, automotive, energy and power in relation to nanotechnology

--government bodies to increase funding to encourage knowledge partnerships through the creation of more science-to-business roles

Institute of Nanotechnology
www.nano.org.uk



THE FALL AND RISE OF DEVELOPMENT

THE FALL AND RISE OF DEVELOPMENT: The big push


The important point is that any kind of model of a complex system -- a physical model, a computer simulation, or a pencil-and-paper mathematical representation -- amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.

Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.

Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn't empty, but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained.
……We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious -- to be aware that your models are maps rather than reality.

If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.

In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.


KRUGMAN, ECONOMICS NOBEL PRIZE WINNER 2008

http://web.mit.edu/krugman/www/dishpan.html

Nano fluidic devices and carbon nanohorns



The distinct structural properties of carbon nanoparticles, in particular their high aspect ratio and propensity to functional modification and subsequent use as carrier vectors, as well as their potential biocompatibility, make them useful for pharmaceutical nanodelivery. Carbon nanotubes (CNTs) have the added advantage of being potential nanofluidic devices for controlled drug delivery.

Great interest has been generated in fullerenes in general, but especially in CNTs and carbon nanohorns (CNHs) as biologically compatible materials and drug carriers mainly because of their distinct architecture, hollow interior, and cagelike structures.

Application of CNTs in biological systems depends on their compatibility with hydrophilic environments; therefore, the solubilization of CNTs in pharmaceutical solvents is essential. Furthermore, because it is becoming increasingly important that the relevant chemical, physicochemical, and pharmaceutical properties of CNTs be identified, we have prepared a "mini-monograph" of CNTs that compiles their pertinent properties.





.......MWNTs generally have a larger outer diameter (2.5–100 nm) than SWNTs (0.6–2.4 nm) and consist of a varying number of concentric SWNT layers, with an interlayer separation of about 0.34 nm. SWNTs have a better defined diameter, whereas MWNTs are more likely to have structural defects, resulting in a less stable nanostructure. CNTs combine high stiffness with resilience and the ability to buckle and collapse reversibly. The high C-C bond stiffness of the hexagonal network produces an axial Young's modulus (a measure of stiffness) of approximately 1 TPa and a tensile strength of 150 GPa,17 making CNTs one of the stiffest materials known, yet with the capacity to deform (buckle) elastically under compression.
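A back-of-envelope check of the figures above: within simple linear elasticity (a strong idealization for a real nanotube), the elastic strain a CNT could sustain at the quoted tensile strength is sigma/E.

```python
# Back-of-envelope check using the values quoted in the text,
# assuming simple linear elasticity.
E = 1.0e12     # Pa, axial Young's modulus (~1 TPa)
sigma = 150e9  # Pa, tensile strength (150 GPa)

strain = sigma / E  # elastic strain at the quoted tensile strength
print(f"elastic strain at failure: {strain:.0%}")
```

A strain of this order (around 15%) is consistent with the text's claim that CNTs are extremely stiff yet can deform elastically far more than conventional stiff materials.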

CNT dispersion and solubility
The solubility of CNTs in aqueous solvents is a prerequisite for biocompatibility; hence, CNT composites in therapeutic delivery should meet this basic requirement.
Similarly, it is important that such CNT dispersions be uniform and stable to obtain accurate concentration data. In this regard, the solubilization of pristine CNTs in aqueous solvents remains an obstacle to realizing their potential as pharmaceutical excipients because of the rather hydrophobic character of the graphene sidewalls, coupled with the strong π-π interactions between the individual tubes, which cause CNTs to assemble into bundles. To successfully disperse CNTs, the dispersing medium should be capable of both wetting the hydrophobic tube surfaces and modifying the tube surfaces to decrease tube aggregation. Four basic approaches have been used to obtain a dispersion: (1) surfactant-assisted dispersion, (2) solvent dispersion, (3) functionalization of CNT sidewalls, and (4) biomolecular dispersion.


Summary of pharmaceutically relevant properties

Even though pharmaceutical excipients have been regarded as inert or nonactive components of dosage forms, they are essential and necessary components of the formulation. Hence, it is becoming increasingly important that the pharmaceutically relevant properties of CNTs be identified.

Organoleptic properties refer to the appearance and physical description of a substance. Both SWNTs and MWNTs appear as granular or fluffy black powders, although SWNT samples may also have a shiny metallic appearance. Aligned CNTs (also known as vertically aligned nanotubes, or VANTs) appear as velvety sheets. EM images of SWNTs and MWNTs show CNTs in aggregated bundles, whereas in VANTs the CNTs are ordered in an array. Raman spectral analysis of CNTs is useful to distinguish samples of SWNTs from those of MWNTs and/or VANTs, because only SWNTs have the diagnostic RBM (radial breathing mode) peak. Raman spectra are also useful in estimating the diameter of individual CNTs in a SWNT sample.
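The diameter estimate mentioned above is often made with an empirical relation of the form ω_RBM (cm⁻¹) ≈ A / d (nm), with A around 248 for isolated SWNTs; the constant varies with the sample environment, so this sketch gives only an estimate.

```python
# Hedged sketch: estimate SWNT diameter from the RBM peak position
# using the empirical relation omega_RBM ~= A / d. The constant A
# depends on the tube environment; 248 is one commonly quoted value.
def diameter_from_rbm(omega_cm1: float, A: float = 248.0) -> float:
    """Estimate SWNT diameter (nm) from an RBM peak position (cm^-1)."""
    return A / omega_cm1

print(f"{diameter_from_rbm(186.0):.2f} nm")
```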

We have investigated the dispersibility of CNTs in a series of pharmaceutical solvents and present visible, microscopic, and SEM images of CNTs in five of the most important simple solvents, using a three-category assessment of dispersibility: insoluble, swollen, and soluble. SWNTs are insoluble in water and ethanol, where they aggregate and sediment soon after sonication, seen as black sediment at the bottom of the vials....... For propylene glycol and DMSO dispersions, the inset photographs, light micrographs, and SEMs show swollen or intermediate dispersions of SWNTs in solution. Here the SWNT clusters appear smaller and more loosely aggregated. For the sodium dodecyl sulfate (SDS) dispersion, the inset photograph shows the black/brown uniform color characteristic of a homogeneous dispersion, consistent with the light micrograph, which shows an even distribution with few aggregates of CNTs. The SEM image of CNTs in SDS shows debundled CNTs and very small SWNT bundles.

Carbon nanotubes as functional excipients for nanomedicines: I. pharmaceutical properties; science direct, 2008




Stacking in biology


In DNA, pi stacking occurs between adjacent nucleotides and adds to the stability of the molecular structure. The nitrogenous bases of the nucleotides are built from purine or pyrimidine ring systems, which are aromatic. Within the DNA molecule, the aromatic rings are positioned nearly perpendicular to the length of the DNA strands, so the faces of the rings lie parallel to each other, allowing the bases to participate in aromatic interactions. Through these interactions, the pi bonds extending from atoms participating in double bonds overlap with the pi bonds of adjacent bases. This is a type of non-covalent chemical bond. Though a non-covalent bond is weaker than a covalent bond, the sum of all pi stacking interactions within the double-stranded DNA molecule creates a large net stabilizing energy.

Uses in materials

Many discotic liquid crystals can form columnar structures by π-π interactions. In addition, π-π interactions are an important factor in molecular self-assembly techniques in bottom-up nanotechnology.

Aromatic stacking interaction

Aromatic stacking interaction, sometimes called phenyl stacking, is a phenomenon in organic chemistry that affects aromatic compounds and functional groups. Because of especially strong Van der Waals bonding between the surfaces of flat aromatic rings, these groups in different molecules tend to arrange themselves like a stack of coins. This bonding behavior affects the properties of polymers as diverse as aramids, polystyrene, DNA, RNA, proteins, and peptides. The effect can be exploited in gas sensors to detect the presence of aromatic chemicals.

T-stacking

A related effect called T-stacking is often seen in proteins where the partially positively charged hydrogen atom of one aromatic system points perpendicular to the center of the aromatic plane of the other aromatic system.






Pi bond

In chemistry, pi bonds (π bonds) are covalent chemical bonds where two lobes of one involved electron orbital overlap two lobes of the other involved electron orbital. Only one of the orbital's nodal planes passes through both of the involved nuclei.

The Greek letter π in their name refers to p orbitals, since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. P orbitals usually engage in this sort of bonding. D orbitals are also assumed to engage in pi bonding but this is not necessarily the case in reality, although the concept of bonding d orbitals still accounts well for hypervalence.

Pi bonds are usually weaker than sigma bonds because their (negatively charged) electron density is farther from the positive charge of the atomic nucleus, which requires more energy. From the perspective of quantum mechanics, this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation.

Although the pi bond by itself is weaker than a sigma bond, pi bonds are often components of multiple bonds, together with sigma bonds. The combination of a pi and a sigma bond is stronger than either bond by itself. The enhanced strength of a multiple bond versus a single (sigma) bond is indicated in many ways, most obviously by a contraction in bond length. For example, in organic chemistry the carbon-carbon bond lengths are 154 pm in ethane, 133 pm in ethylene, and 120 pm in acetylene.

Wikipedia
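The bond-length contraction quoted in the last paragraph can be checked quickly; the contractions here are simply computed relative to the ethane single bond.

```python
# Quick check of the bond-length contraction from single to triple bond,
# relative to the ethane single bond (values quoted in the text).
lengths_pm = {"ethane": 154, "ethylene": 133, "acetylene": 120}

single = lengths_pm["ethane"]
for name, d in lengths_pm.items():
    contraction = (single - d) / single
    print(f"{name}: {d} pm ({contraction:.0%} shorter than ethane)")
```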



Employing Raman spectroscopy to qualitatively evaluate the purity of carbon single-wall nanotube materials.


Raman spectroscopy may be employed to differentiate between metallic and semi-conducting nanotubes, and may also be employed to determine SWNT diameters and even the nanotube chirality. Single-wall carbon nanotubes are generated in a variety of ways, including arc-discharge, laser vaporization and various chemical vapor deposition (CVD) techniques. In all of these methods, a metal catalyst must be employed to observe SWNT formation. Also, all of the current synthesis techniques generate various non-nanotube carbon impurities, including amorphous carbon, fullerenes, multi-wall nanotubes (MWNTs) and nano-crystalline graphite, as well as larger micro-sized particles of graphite. For any of the potential nanotube applications to be realized, it is, therefore, necessary that purification techniques resulting in the recovery of predominantly SWNTs at high-yields be developed. It is, of course, equally important that a method for determining nanotube wt.% purity levels be developed and standardized.

Dillon AC et al., J Nanosci Nanotechnol, 2004 Sep;4(7):691-703

http://www.ncbi.nlm.nih.gov/pubmed/15570946?dopt=abstract

Friday, October 17, 2008

Oxford's first spin-off

Oxford GlycoSystems


In October 1988, with the help of Monsanto and Searle (which had just been acquired by Monsanto), the University of Oxford launched its first ever spin-off company in which the University had a shareholding. The idea was to develop the sugar technology further in order to make it available to users all over the world. Oxford GlycoSystems was born as a technology company and succeeded in making several different kinds of instruments to release sugars from proteins and then allow the full oligosaccharide sequence to be obtained. Within a few years, most of the major drug companies in the world had Oxford GlycoSystems' instruments, as glycosylation became more important in the 'quality control' of glycoproteins. It was realized that in the production process any slight variation, such as changing oxygen levels or cell culture conditions, could lead to a change in glycosylation pattern.


Biotechnology and glycosylation

In 1985, Raymond Dwek's group at Oxford University published a landmark patent on tissue plasminogen activator, a drug that dissolves clots after heart attacks and strokes.5 This patent, which was largely based on the PhD work of Raj Parekh, taught that the actual glycoforms of a protein were important, rather than only the protein's amino acid sequence. It was possible to distinguish different glycoforms, and therefore different products, from the same gene when expressed in two different cell lines. In terms of biotechnology, this put glycosylation very much at the forefront.


Glycosylation and hepatitis B and C: glycoprotein folding

In about 1990, Baruch Blumberg, who had received a Nobel Prize for his work on a vaccine for hepatitis B, joined the Glycobiology Institute while he was Master of Balliol College, Oxford. Professor Tim Block, from Thomas Jefferson University, PA, came for a sabbatical with Blumberg and me, and we started an antiviral programme in the Institute. Initially, we studied hepatitis B and demonstrated that the secretion of the virus was inhibited in the presence of the drug NB-DNJ. At the same time in the Glycobiology Institute, work was underway by Stefana Petrescu (from the Bucharest Institute of Biochemistry, Romania), using NB-DNJ to inhibit the metalloglycoprotein tyrosinase, which is involved in melanin biosynthesis. This pointed to the involvement of calnexin in the ER (endoplasmic reticulum) in glycoprotein folding, and this is still a pivotal result for glycobiology; it soon became clear that many viruses also achieve the three-dimensional structure of their surface glycoproteins using the calnexin pathway. We showed that NB-DNJ acts as an inhibitor of glucosidases 1 and 2, and thus could prevent proper folding by inhibiting the interaction with calnexin.

Thus a large antiviral programme was begun, using imino sugars to create this misfolding. Today those studies have been expanded to hepatitis B and C, under Nicole Zitzmann at the Glycobiology Institute, and she has a programme to develop a series of morphology inhibitors. Her article in this issue of The Biochemist (pp. 23–26) outlines an important aspect of the future programme. A clinical trial on hepatitis C has already been undertaken by United Therapeutics, USA, and more are planned.

Raymond Dwek, Glycobiology at Oxford, A personal view; www.bioch.ox.ac.uk

Wednesday, October 15, 2008

Nanopores





Research into nanopores has been conducted at leading academic institutions worldwide for 15 years, revealing their unique capacity to detect and analyse specific molecules, i.e. to act as 'biosensors'. Oxford Nanopore was founded to develop this technology into potential commercial applications. Existing methods of detecting single molecules are complex and expensive, and an improved method could potentially enable applications such as:

* Fast and inexpensive sequencing of DNA or other nucleic acids, with multiple applications in medicine, agriculture, energy, and more.
* Rapid detection of chemical or biological molecules for security and defence, including chemical or biological weapons such as anthrax.
* Accurate detection of biological markers for diagnostics, including infectious diseases with high impact on global health, such as Hepatitis C and the flu virus.
* Ion channel screening for drug development
* The label-free analysis of interactions between biological molecules. For example, nanopore technology may be adapted to analyse antibody-epitope, protein-DNA, protein-protein or protein-sugar interactions.


Research from Professor Hagan Bayley's group has led to the pioneering use of biological nanopores as sensors of a range of targets from small organic molecules to proteins, antibodies and viral agents. More recently it has been shown that nanopores may form the basis of a label-free, amplification-free DNA sequencing system. This world-leading science is being developed into a proprietary system specifically for DNA analysis, called BASE™ Technology. While Oxford Nanopore is currently focusing its R&D in this area, the resulting technology developments will also be fundamental to other molecular analysis applications.

Oxford Nanopore also has partnerships with other leading nanopore research groups around the world. The relationships within this global network contribute to the existing development process, and will contribute to future generations of nanopore technology.

A nanopore is, essentially, a very small hole. This hole may be formed either by a protein pore set into a membrane (biological nanopores) or by artificially creating a hole in a solid material (solid-state nanopores). Oxford Nanopore is currently working with protein nanopores that have an inner diameter of 1 nm, about 100,000 times smaller than the diameter of a human hair. The inner diameter of the nanopore is on the same scale as many single molecules, including DNA.
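The scale comparison above can be checked with simple arithmetic (the hair diameter of roughly 100 µm is an assumed typical value, not from the text):

```python
# Illustrative scale arithmetic; hair diameter of 100 um is an assumed typical value
pore_diameter_m = 1e-9      # 1 nm protein nanopore, as quoted above
hair_diameter_m = 100e-6    # ~100 um human hair (assumption)

ratio = hair_diameter_m / pore_diameter_m
print(round(ratio))  # -> 100000, matching the "100,000 times smaller" figure
```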

Similar protein pores are found naturally in cell membranes, where they act as channels for ions or molecules to be transported in and out of cells. For example, the bacterium S. aureus produces protein nanopores such as α-hemolysin as a tool to extract the contents of the cells of other organisms.

The α-hemolysin nanopore can be adapted, using protein engineering techniques, to act as a sensor for a range of specific molecules. This can be done in a variety of ways, including:

* The incorporation of a specific binding site within the nanopore that binds transiently with the molecule being detected.
* The incorporation of a DNA probe to detect an organism with the matching DNA code.

http://www.oxtrust.org.uk/news/306714

http://www.nanoporetech.com/sections/first/14

Tuesday, October 14, 2008

Foam production process: TiC and TiH2

Porous Metals and Metallic Foams: Current Status and Recent Developments

....Foam stabilisation can be obtained by adding ceramic particles to the metallic melt; these adhere to the gas/metal interfaces during foaming and prevent pore coalescence. One foam production process (sometimes referred to as the ‘Alcan process’) uses liquid metal matrix composites (MMCs) containing 10–20 vol.% particles (typically 10 µm silicon carbide or alumina particles) into which a blowing gas is injected. Very regular and highly porous metal foams can be produced with this method. However, the high particle content makes the solid foams very brittle and hard to machine. Replacing the large particles in the MMCs with nanometric particles is a solution to this problem. Using nanoparticles, melts can be foamed at much lower particle loadings.

Indeed, 5% of 70 nm SiC particles dispersed ultrasonically in the melt were shown to be sufficient to stabilise aluminium foams. Particles formed in situ by chemical reactions have also been used with success for the same purpose. As an example, 4 wt.% of TiC particles (200–1000 nm) formed in situ in liquid aluminium resulted in stable aluminium alloy foams. Another area of investigation involves the development and improvement of the blowing agents. TiH2 has been in use since the 1950s and is presently considered the most powerful blowing agent available for the production of aluminium and magnesium alloy foams. Efforts to replace or to improve this agent are important for two reasons. The first is the high cost of this hydride (≈80 €/kg, 2008 price). Since 0.5 to 1.5 wt.% TiH2 is needed to produce an aluminium foam, the blowing agent contributes significantly to the final cost of the material. Taking into account the price of Al melt (1.50 €/kg) or powder (3 €/kg), the blowing agent contributes up to 25% of total raw material costs. Replacing TiH2 with a less expensive blowing agent, namely CaCO3, is being investigated by several research groups.
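The cost contribution of the blowing agent can be checked with simple arithmetic using the prices quoted above; the exact share depends on the TiH2 loading and on whether aluminium melt or powder is used:

```python
# Illustrative raw-material cost split for 1 kg of precursor,
# using the 2008 prices quoted in the text
tih2_price = 80.0                           # EUR/kg, blowing agent
al_prices = {"melt": 1.50, "powder": 3.0}   # EUR/kg

for route, al_price in al_prices.items():
    for w_tih2 in (0.005, 0.015):           # 0.5 and 1.5 wt.% TiH2
        cost_tih2 = w_tih2 * tih2_price
        cost_al = (1 - w_tih2) * al_price
        share = cost_tih2 / (cost_tih2 + cost_al)
        print(f"{route}, {w_tih2:.1%} TiH2: blowing agent is {share:.0%} of raw-material cost")
```

Depending on the assumed loading and aluminium price, the blowing agent accounts for a double-digit percentage of the raw-material cost, which illustrates the text's point that it contributes significantly to the final cost.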

A second reason for replacing TiH2 is its decomposition behaviour, which does not perfectly match the melting characteristics of most aluminium alloys used for foaming. In fact, TiH2 decomposition starts at temperatures lower than the melting temperature of aluminium or magnesium alloys. This negatively affects foaming through premature hydrogen release in the solid state and losses through the porosity network between the aluminium particles. Various strategies are currently under study to overcome this problem. One option is to modify the TiH2 powder in order to alter its decomposition characteristics. This can be done by oxidation or by coating the particles with a thin layer of nickel. Another option is to work without a blowing agent and rely on gas residues contained in powder or scrap that is added to the melt. In this case, pressure is used to control the foaming of the melt.

Another cost reduction strategy is to use chip waste, e.g. from machining, as a replacement for the expensive aluminium alloy powders. Chips are mixed with ceramic additives and TiH2 and densified by compressive torsion processing or thixocasting, after which the compacted materials are foamed in the usual manner. An open question remains, however, whether the need for more expensive compaction techniques offsets the savings in material cost.


Advanced Engineering Materials, Wiley InterScience, 2008

Porous Metals and Metallic Foams

Porous Metals and Metallic Foams: Current Status and Recent Developments


Cellular metals and metallic foams are metals with pores deliberately integrated into their structure. The terms cellular metal and porous metal are general expressions referring to metals with a large volume of porosity, while the terms foamed metal and metallic foam apply to porous metals produced by processes in which foaming takes place. In addition, the term metal sponge refers to highly porous materials with complex, interconnected porosity that cannot be subdivided into well defined cells.

Porous metals and metallic foams have combinations of properties that cannot be obtained with dense polymers, metals and ceramics, or with polymer and ceramic foams. For example, the mechanical strength, stiffness and energy absorption of metallic foams are much higher than those of polymer foams. They are thermally and electrically conductive and maintain their mechanical properties at much higher temperatures than polymers. They are also generally more stable in harsh environments than polymer foams. As opposed to ceramics, they have the ability to deform plastically and absorb energy. If they have open porosity, they are permeable and can have very high specific surface areas, characteristics required for flow-through applications or when surface exchange is involved.

Louis-Philippe Lefebvre et al, Advanced Engineering Materials, Wiley InterScience, 2008

Oxford Gargoyle

Sunday, October 12, 2008

Defects in solids

All solids, even the most ‘perfect’ crystals, contain defects. Defects are of great importance as they can affect properties such as mechanical strength, electrical conductivity, chemical reactivity and corrosion. There are several terms used to describe defects which we must consider:

Intrinsic defects – present for thermodynamic reasons.

Extrinsic defects – not required by thermodynamics and can be controlled by purification or synthetic conditions.

Point defects – occur at single sites. Random errors in a periodic lattice, eg the absence of an atom from its usual place (vacancy) or an atom in a site not normally occupied (interstitial).

Extended defects – ordered in one, two or three dimensions. Eg errors in the stacking of planes.

Every solid has a thermodynamic tendency to acquire point defects, as they introduce disorder and therefore increase entropy.

The Gibbs free energy, G = H – TS, of a solid, is contributed to by the entropy and enthalpy of the sample (fig. 14). Entropy is a measure of disorder within a system, hence, a solid with defects has a higher entropy than a perfect crystal.
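The thermodynamic argument implies an equilibrium defect concentration that grows rapidly with temperature. A minimal sketch using the standard textbook expression for the vacancy fraction, n/N ≈ exp(−ΔH/kT); the 1 eV formation enthalpy is an assumed illustrative value and this formula is not from the quoted page:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(dh_ev, temp_k):
    """Equilibrium fraction of vacant sites, n/N ~ exp(-dH / kT)."""
    return math.exp(-dh_ev / (K_B_EV * temp_k))

# Assumed formation enthalpy of 1 eV, purely illustrative
for t in (300, 1000):
    print(t, "K:", vacancy_fraction(1.0, t))
```

Even with a modest formation enthalpy the vacancy fraction climbs by many orders of magnitude between room temperature and 1000 K, which is why every real crystal carries an equilibrium population of point defects.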

Intrinsic point defects:

Point defects are not easy to detect directly. Several techniques have been used to study them. Two physicists, Frenkel and Schottky, used conductivity and density data to identify specific types of point defects.

Schottky defect – a vacancy in an otherwise perfect lattice; a point defect where an atom / ion is missing from its usual point in the lattice. The overall stoichiometry is usually unaffected, as there are normally equal numbers of vacancies at both M and X sites, preserving charge balance.

These defects are encountered more commonly when metal ions are able to easily assume multiple oxidation states.

Frenkel defect – a point defect where an atom / ion has been displaced into an interstitial site, eg in AgCl some Ag+ ions occupy tetrahedral sites (fig. 16) which are normally unoccupied. Stoichiometry is unchanged.

Encountered in open structures (wurtzite, sphalerite, etc.) where coordination numbers are low and the open structure provides room for interstitial sites to be occupied.

Extrinsic point defects:

These are inevitable because perfect purity is unattainable in crystals of any significant size.


Oxford Dept of Chemistry
http://www.chem.ox.ac.uk/vrchemistry/solid/Page17.htm



Oxygen vacancy


While the surface of the anatase phase is well known for efficient photocatalytic effects, the high dielectric constants of the rutile phase (ɛ =30–80) have made it a candidate material as a nanoscale insulator such as an ultrathin gate oxide in field-effect transistors or a dielectric layer in capacitors for dynamic random access memory.

Positive charges at the vacancy site are not strong enough to hold electrons locally or create an F center within the band gap. This is in part due to the ionic displacements, especially those of nearby Ti atoms. In fact, when atoms are fixed at the bulk positions, we find that the oxygen vacancy creates a defect state within the energy gap. The outward relaxation of Ti atoms effectively screens the positive charge of the oxygen vacancy. The shift of localized defect levels with increasing supercell size is similar to the case of an oxygen vacancy in SrTiO3, as recently reported. The dispersion in the defect state for the 3×3×5 supercell indicates that one still needs a larger supercell to accurately characterize the localized level. However, the qualitative nature of the defect state, such as its position relative to the conduction band minimum, is well captured in the 3×3×5 supercell. It is well known that ionized oxygen vacancies effectively dope TiO2 with electrons, resulting in an n-type transport behavior. In our calculations, the oxygen vacancy is created simply by taking out one oxygen atom from the supercell. After relaxation, the total energy is lowered by 1.69 eV from the initial energy and surrounding Ti atoms are displaced by 0.27–0.30 Å outward from the vacancy site. This is due to
the effectively positive charges of the vacancy site which interact repulsively with nearby cations. We calculate the electronic population at each atomic site by integrating the total electronic charges inside a sphere centered on each atom with effective ionic radii, the so-called Shannon-Prewitt radii. They are 0.61 and 1.36 Å for Ti and O atoms, respectively. We find that the electron population of three Ti
atoms surrounding the oxygen vacancy substantially increases from the bulk value of 0.968e to 1.021e while those at other Ti atoms change less than 0.02e, consistent with the positive character of the vacancy charge.

The analysis of the occupied states in the conduction band indicates that they are uniformly distributed with Ti d character. In other words, the additional electrons
at nearby Ti atoms are contributed by valence states. To confirm this, we carry out a calculation with two fewer electrons. This shifts down the Fermi level to be within the original band gap. However, the charge accumulation at Ti atoms around the vacancy is almost unchanged, indicating that extra electrons mainly originate from the small polarization of the valence states.

To determine the relative stability between point defects, we compare defect formation energies (Efor) of two point defects using the following formula:
Efor = Etot(defect) − nTi μTi − nO μO,

where Etot(defect) is the total energy of the supercell containing a defect, and n_i and μ_i are the number and chemical potential of the constituent atoms, respectively, satisfying the relation μTi + 2μO = Etot(bulk)/2. Assuming μO is half the total energy of an oxygen molecule, the formation energies are 7.09 and 4.44 eV for the Ti interstitial and oxygen vacancy, respectively, indicating that formation of the oxygen vacancy is energetically favoured. However, the relative stability is reversed at μO = Etot(O2)/2 − 2.64 eV, corresponding to a Ti-richer environment, where the formation energies of the two defects are both 1.8 eV. It is noted that the defect level for the Ti interstitial is of a localized d character and unphysical
self-interactions of occupied electrons may influence the formation energy of the Ti interstitial unfavorably. We also add that our calculations are for neutral defects in the sense that charge neutrality is maintained without uniform background charges. However, the delocalized character of the doped electrons implies that the charge states of the point defects are more consistent with +2 for both the oxygen vacancy and the Ti interstitial. The Ti2+ interstitial found in our calculation is rather at variance with the traditional picture of this defect, i.e., Ti4+ or Ti3+. There are several possible causes for the discrepancy, such as defect clustering or the accuracy of the local density approximation (LDA).
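The quoted numbers can be cross-checked against the formation-energy formula: removing one O atom shifts Efor by +μO, while the constraint μTi + 2μO = const means that lowering μO by 2.64 eV raises μTi by 5.28 eV, lowering the Ti-interstitial formation energy by the same amount. A small sketch using only values from the text:

```python
# Cross-check of the quoted formation energies (all values from the text above)
E_vac_O_rich = 4.44   # eV, oxygen vacancy at mu_O = Etot(O2)/2
E_int_O_rich = 7.09   # eV, Ti interstitial at the same mu_O
d_mu_O = -2.64        # eV, shift of mu_O toward the Ti-rich limit

# Vacancy: one fewer O atom in the supercell, so E_for shifts by +d_mu_O
E_vac = E_vac_O_rich + d_mu_O
# Interstitial: one extra Ti; with mu_Ti + 2*mu_O fixed, mu_Ti shifts by
# -2*d_mu_O, so E_for shifts by +2*d_mu_O
E_int = E_int_O_rich + 2 * d_mu_O

print(E_vac, E_int)  # both come out at ~1.8 eV, matching the quoted crossover
```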

In summary, we perform first-principles density functional calculations on the oxygen vacancy and Ti interstitial in rutile TiO2. The defect level associated with the oxygen vacancy is not identified within the energy gap while the Ti interstitial gives rise to a defect level that can be related to the infrared experiment.


First-principles study, Physical Review B 73, 193202, 2006
http://drm.kist.re.kr/CSC/publication/pdf/p-63.pdf

Doping of Semiconductors: n-type and p-type

Doping of semiconductors is achieved by introducing atoms with more or fewer electrons than the parent element. Doping is substitutional: the dopant atoms directly replace the original atoms. Surprisingly low levels of dopant are required, only 1 atom in 10⁹ of the parent atoms.

Looking at silicon: if phosphorus atoms are introduced into a silicon crystal then extra electrons will be available (one for each dopant atom introduced, as P has one extra valence electron). The dopant atoms form a set of energy levels that lie in the band gap between the valence and conduction bands, but close to the conduction band. The electrons in these levels cannot move directly as there are not enough of them to form a continuous band. However, the levels can act as donor levels because the electrons have enough thermal energy to get up into the conduction band, where they can move freely.

Such semiconductors are known as n-type semiconductors, representing the negative charge carriers or electrons.

What if, instead of doping with phosphorus, we doped silicon with an element with one fewer valence electron, such as gallium? Now for every dopant atom there is an electron missing, and the atoms form a narrow, empty band consisting of acceptor levels which lie just above the valence band. Electrons from the valence band may have enough thermal energy to be promoted into the acceptor levels, which are discrete levels if the concentration of gallium atoms is small. Therefore, electrons in the acceptor levels cannot contribute to the conductivity of the material. However, the positive holes left behind in the valence band by the promoted electrons are able to move.

These types of semiconductors are known as p-type semiconductors, representing the positive holes.

There are two fundamental differences between extrinsic and intrinsic semiconductors:

1) At standard temperatures extrinsic semiconductors tend to have significantly greater conductivities than comparable intrinsic ones.

2) The conductivity of an extrinsic semiconductor can easily and accurately be controlled simply by controlling the amount of dopant which is introduced. Therefore materials can be manufactured to exact specifications of conductivity.
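The impact of such tiny dopant levels can be made concrete with rough numbers; the silicon atom density and intrinsic carrier concentration below are standard approximate textbook values, not from the quoted page:

```python
# Rough illustration of why 1 dopant atom in 10^9 matters
si_atom_density = 5e22   # atoms per cm^3 in silicon (approximate textbook value)
dopant_ratio = 1e-9      # 1 dopant atom per 10^9 Si atoms, as quoted above
n_intrinsic = 1e10       # intrinsic carriers per cm^3 in Si near 300 K (approximate)

n_donors = si_atom_density * dopant_ratio   # donor concentration, ~5e13 cm^-3
enhancement = n_donors / n_intrinsic
print(f"{n_donors:.0e} donors/cm^3, roughly {enhancement:.0f}x the intrinsic carrier density")
```

Even at one part per billion, the donors outnumber the intrinsic carriers by orders of magnitude, which is why extrinsic conductivity dominates and can be dialled in via the dopant level.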


Controlled valency semiconductors:

Some transition metal compounds can be conductors due to the presence of an element in more than one oxidation state. NiO is a very good example. On oxidation the compound goes black and becomes a relatively good conductor. Some of the Ni2+ ions have been oxidised to Ni3+ and some Ni2+ ions diffuse out to maintain charge balance leaving cation holes.

The reason for the conduction is the ability of electrons to transfer from Ni2+ to Ni3+ ions. This effectively allows the Ni3+ ions to move, and black NiO is therefore a p-type semiconductor. Slightly different from the p-type discussed earlier, this type is known as a hopping semiconductor because the transfer process is thermally activated and therefore highly dependent on temperature.

This makes controlling the conductivity a tricky process. Controlled valency semiconductors therefore rely on controlling the concentration of, in this case, Ni3+ ions by the controlled addition of a dopant (such as lithium). Instead of NiO, you now have Li+x Ni2+(1−2x) Ni3+x O; hence the concentration of Li+ ions controls the conductivity.
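A quick check that the Li+x Ni2+(1−2x) Ni3+x O formula stays charge-neutral for any doping level x (illustrative arithmetic only):

```python
# Charge-balance check for Li(+)_x Ni(2+)_(1-2x) Ni(3+)_x O
def net_charge(x):
    # cation charges minus the O2- charge
    return x * 1 + (1 - 2 * x) * 2 + x * 3 - 2

for x in (0.0, 0.01, 0.05, 0.1):
    assert abs(net_charge(x)) < 1e-12
print("charge neutral for all x")  # each Li+ converts one Ni2+ into Ni3+
```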
The P-N junction:

This occurs where a crystal has been doped such that half of it is n-type and the other half is p-type. The two halves have different Fermi levels (the n-type's is higher) and electrons flow from the n-type section to the p-type section to equalize the electron concentrations (fig. 9). This creates a positive charge on the n-type region and a negative charge on the p-type region, which leads to an electric field pushing electrons back to the n-type region. Eventually a balance is reached (fig. 10).

If you apply an external potential difference to make the p-type region positive and the n-type region negative, a continuous current can now flow. Electrons enter from the n-type electrode, travel through the conduction band of the n-type region, drop into the valence band of the p-type region, continue through the positive holes and then leave at the other electrode. Current cannot flow the other way at a relatively low voltage, as the electrons are unable to jump up to the n-type conduction band.
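This one-way behaviour is what the ideal (Shockley) diode equation captures, I = Is(exp(V/VT) − 1). A minimal sketch; the saturation current used is an arbitrary illustrative value:

```python
import math

def diode_current(v_volts, i_sat=1e-12, v_thermal=0.02585):
    """Ideal Shockley diode equation; i_sat is an arbitrary illustrative value."""
    return i_sat * (math.exp(v_volts / v_thermal) - 1)

forward = diode_current(0.6)    # p-type biased positive: current flows
reverse = diode_current(-0.6)   # reversed polarity: essentially no current
print(f"forward: {forward:.3e} A, reverse: {reverse:.3e} A")
```

The forward current is many orders of magnitude larger than the reverse leakage, mirroring the asymmetry described above.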


The Mott-Hubbard gap and breakdown of the band model

The simple band approach says that a compound with partially filled d orbitals should form a metallic solid. However, this is often not the case: a large number of halides, oxides and compounds with less electronegative ligands form non-metallic solids, which have magnetic and spectroscopic properties associated with partially filled levels.

A good example is NiO. Pure NiO is green and shows d-d transitions associated with octahedral ligand-field splitting of the 3d orbitals, as is the case with an isolated ion, eg [Ni(H2O)6]2+ (fig. 11).

The magnetic properties reveal two unpaired electrons on each Ni2+ ion. There is a strong interaction between neighbouring ions which leads to antiferromagnetic ordering of the spins at lower temperatures. In compounds of this type the d orbitals appear localized, as opposed to forming a conduction band as in metallic compounds. This non-metallic behaviour stems from strong repulsion between electrons in d orbitals. The band model relies on a simple approximation to deal with electron repulsion, which isn’t good enough for many transition metal compounds.


http://www.chem.ox.ac.uk/vrchemistry/solid/Page12.htm

Molecular conformation

http://physchem.ox.ac.uk/~jps/


Molecular conformation plays a crucial role in the selectivity and function of biologically active molecules. Molecular shape and the interactive forces between the molecule and its nearest neighbours also control molecular recognition processes. These are involved in virtually all aspects of biological function, ranging from neurotransmission and specific drug-receptor interactions to enzyme catalysis. Enzyme function, in its turn, is dependent upon specific interactions between neighbouring molecules, bound together at the active site of the enzyme, and between the active site and the reactive substrate. It can also depend upon the formation of chemically reactive intermediates (transition states) and upon charge migration within the enzyme-substrate complex.

The factors which control the conformational landscape involve a subtle balance between 'through bond' and 'through space' interactions within the molecule, and their modification by 'non-bonded' interactions with the environment. Hydrogen-bonded interactions are ubiquitous, operating both within the molecules and externally, especially with neighbouring water molecules. Together, these interactions determine the molecular architecture, the electronic charge distributions and the network of pathways for electron and proton transfer within the molecular structure. Their relative influence and the way in which their co-operative behaviour may control conformational and supra-molecular structure and the specificity of molecular function remain very unclear.

In the last few years, we have developed and exploited very powerful strategies for exploring and mapping the conformational landscapes of small biomolecules, e.g., neurotransmitters and β-blockers; amino acids, amides and peptides; sugars and oligo-saccharides; and the supra-molecular structures of their size-selected hydrates and molecular complexes. Our approach exploits:

(1) The non-invasive, very low temperature environment of a pulsed nozzle, helium jet gas expansion. This provides an ideal 'laboratory' for resolving individual conformers, preparing size-selected supra-molecular clusters in a controlled way, and facilitating the spectral resolution of complex molecular structures.

(2) The selectivity, resolution and precision of tunable i.r., u.v., and multiple laser excitation methods, coupled with optical and mass spectrometric detection, which provides the experimental input for identifying individual conformers and clusters and assigning their conformational and supra-molecular structures.

(3) The power of ab initio structural computation, which provides the crucially important theoretical input, through which the experimental data can be analysed and interpreted. In this strategy, theory and experiment enjoy a symbiotic relationship - their interaction is truly a co-operative one. Theory provides the 'à la carte' menu of structural possibilities and the experiments tell us which ones are actually chosen.

(4) The correlation of gas phase structural data with electronic and vibrational CD spectra (e.g., of chiral neurotransmitters) recorded in solution to explore the way in which hydrogen-bonded and non-bonded interactions determine the molecular and electronic structures of both isolated and solvated biomolecular assemblies.

J P Simons, Physical and theoretical chemistry laboratory, Oxford Univ
http://www.chem.ox.ac.uk/researchguide/jpsimons.html



.......... The last few years have seen a very rapid growth of spectroscopic and computational studies exploring the conformational and structural landscapes of neutral and protonated biomolecules, their dimers and molecular complexes, isolated in the gas phase, in order to characterize their conformational and supramolecular structures. They include neurotransmitters, amino acids, amides and oligo-peptides, nucleic acid bases and carbohydrates, determined principally through laser-based vibrational spectroscopy in combination with density functional theory and ab initio calculations. These provide direct, bond-specific information about local interactions, particularly those involving hydrogen bonding. OH and NH stretch bands in free amino acids, in peptide amino acid residues, and especially in carbohydrates, oligosaccharides and their hydrated complexes, are extraordinarily sensitive to their local H-bonded environments, reflecting local and cooperative interactions as well as secondary and supramolecular structures. It should also be possible to use the local carbohydrate CH stretch bands to probe the dispersion forces which support the stacking interactions with aromatic amino acid residues, often involved in selective protein-carbohydrate molecular recognition processes.

Thursday, October 09, 2008

Point defects on MgO surface

Pd particle deposition on the MgO surface is one of the more thoroughly studied metal-oxide systems. It has been found that point defects on the MgO surface, such as the neutral oxygen vacancy, greatly strengthen the binding between the metal cluster and the MgO surface, dominate the cluster structures, and are responsible for the dispersion of metal islands during metallic particle deposition. Experimental efforts have investigated the nucleation and growth modes of metal clusters on MgO surfaces. It was suggested experimentally that nucleation mostly occurs at point defects and that cluster growth corresponds to the nearly complete condensation regime, i.e. the clusters grow as islands. Moreover, charging of Pd clusters upon adsorption is expected on an ultrathin MgO film but not on MgO single crystals.

Cluster Formation Model in Vapor Deposition of Pd Atoms on the Perfect MgO(100)
Surface and on Its Surface Oxygen Vacancy, Journal of physical chemistry, 2008
http://pubs.acs.org/cgi-bin/article.cgi/jpccck/2008/112/i35/pdf/jp804082d.pdf

Colour compounds



We observe color as varying frequencies of electromagnetic radiation in the visible region of the electromagnetic spectrum. Different colors result from the changed composition of light after it has been reflected, transmitted or absorbed by a substance. Because of their structure, transition metals form many differently colored ions and complexes. Color even varies between different ions of a single element - MnO4− (Mn in oxidation state 7+) is purple, whereas Mn2+ is pale pink.

Coordination by ligands can play a part in determining color in a transition metal compound, due to changes in the energy of the d orbitals. Ligands remove the degeneracy of the orbitals and split them into higher and lower energy groups. The energy gap between the lower and higher energy orbitals determines the color of light that is absorbed, as electromagnetic radiation is only absorbed if it has energy corresponding to that gap. When a ligated ion absorbs light, some of the electrons are promoted to a higher energy orbital. Since different frequencies of light are absorbed, different colors are observed.

The color of a complex depends on:

* the nature of the metal ion, specifically the number of electrons in the d orbitals
* the arrangement of the ligands around the metal ion (for example geometric isomers can display different colors)
* the nature of the ligands surrounding the metal ion: the stronger the ligands, the greater the energy difference between the split high and low 3d groups.

The complex ion formed by the d block element zinc (though not strictly a transition element) is colorless, because the 3d orbitals are full - no electrons are able to move up to the higher group.
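The link between the d-orbital splitting and the colour of absorbed light is just the Planck relation, λ = hc/ΔE. A small sketch; the 2.2 eV splitting is an assumed illustrative value, not taken from the text:

```python
# Wavelength of light absorbed for a given d-orbital splitting, lambda = hc/E
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def absorbed_wavelength_nm(splitting_ev):
    return HC_EV_NM / splitting_ev

# Assumed illustrative splitting of 2.2 eV
lam = absorbed_wavelength_nm(2.2)
print(f"{lam:.0f} nm absorbed")  # visible-range absorption; the complex
                                 # shows the complementary colour
```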
Wikipedia







Free Electron Laser instability

The free-electron laser (FEL) is in a sense an extension of the undulator radiation source that has proven so useful to the synchrotron community. An undulator is a periodic magnet array that imposes a periodic deflection on a relativistic electron beam. Interference effects enhance the probability of each electron emitting radiation at wavelengths selected by a phase match between the electron energy and the undulator period. Ordinarily, these interference effects apply independently to the radiation probability for each electron, with no inter-electron effects. However, with a very long undulator and a carefully prepared electron beam, an effect
arises that is known as the FEL instability. It introduces correlations between the electrons, and opens the possibility of greatly enhanced peak x-ray brightness. This instability produces exponential growth of the intensity of the emitted radiation at a particular wavelength. The radiation field that initiates the instability can be either the spontaneous undulator radiation or an external seed field. In the case of FEL action arising from spontaneous radiation, the process is called self-amplified spontaneous emission (SASE). If an external seed is used then the FEL is referred to as an FEL amplifier.
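The exponential growth described above is commonly written P(z) ≈ P0·exp(z/Lg), where Lg is the gain length. A minimal sketch; the seed power and gain length are arbitrary illustrative values:

```python
import math

def fel_power(z_m, p0_w=1e-6, gain_length_m=1.0):
    """Exponential FEL power growth; p0 and Lg are arbitrary illustrative values."""
    return p0_w * math.exp(z_m / gain_length_m)

# Over ten gain lengths the radiated power grows by a factor e^10 (~22,000x)
ratio = fel_power(10.0) / fel_power(0.0)
print(f"growth over 10 gain lengths: {ratio:.3g}x")
```

This is why a very long undulator is needed: the instability only delivers its dramatic brightness enhancement once the beam has propagated through many gain lengths.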

X ray free-electron lasers, Journal of Physics, (2005), http://iopscience.iop.org/0953-4075/38/9/023/pdf?ejredirect=.iopsciencetrial



Coriolis Effect
In physics, the Coriolis effect is an apparent deflection of moving objects when they are viewed from a rotating frame of reference.

The effect is named after Gaspard-Gustave Coriolis, a French scientist who described it in 1835, though the mathematics appeared in the tidal equations of Pierre-Simon Laplace in 1778. The Coriolis effect is caused by the Coriolis force, which appears in the equation of motion of an object in a rotating frame of reference. The Coriolis force is an example of a fictitious force (or pseudo force), because it does not appear when the motion is expressed in an inertial frame of reference, in which the motion of an object is explained by the real impressed forces, together with inertia. In a rotating frame, the Coriolis force, which depends on the velocity of the moving object, and centrifugal force, which does not depend on the velocity of the moving object, are needed in the equation to correctly describe the motion.

Perhaps the most commonly encountered rotating reference frame is the Earth. Freely moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the left in the southern. Movements of air in the atmosphere and water in the ocean are notable examples of this behavior: rather than flowing directly from areas of high pressure to low pressure, as they would on a non-rotating planet, winds and currents tend to flow to the right of this direction north of the equator, and to the left of this direction south of the equator. This effect is responsible for the rotation of large cyclones (see Coriolis effects in meteorology).
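The rightward deflection in the northern hemisphere follows directly from the Coriolis acceleration a = −2 Ω × v. A minimal sketch in local east-north-up coordinates; the latitude and speed are arbitrary illustrative values:

```python
import math

OMEGA_EARTH = 7.2921e-5  # rad/s, Earth's rotation rate

def coriolis_accel(v_enu, lat_deg):
    """Coriolis acceleration a = -2 * Omega x v in a local (east, north, up) frame."""
    lat = math.radians(lat_deg)
    # Earth's rotation vector expressed in the local east-north-up frame
    ox, oy, oz = 0.0, OMEGA_EARTH * math.cos(lat), OMEGA_EARTH * math.sin(lat)
    vx, vy, vz = v_enu
    cross = (oy * vz - oz * vy, oz * vx - ox * vz, ox * vy - oy * vx)
    return tuple(-2 * c for c in cross)

# Object moving east at 10 m/s at 45 deg N: deflected southward, i.e. to its right
a = coriolis_accel((10.0, 0.0, 0.0), 45.0)
print(a)  # the north component a[1] is negative, i.e. a southward push
```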

Wikipedia

Wednesday, October 08, 2008

Packing Spheres

Is there another way of packing spheres that is more space-efficient?

In 1611 Johannes Kepler asserted that there was no way of packing equivalent spheres at a greater density than that of a face-centred cubic arrangement. This is now known as the Kepler Conjecture.



FCC, 4 atoms in the unit cell: (0, 0, 0), (0, 1/2, 1/2), (1/2, 0, 1/2), (1/2, 1/2, 0)

HCP, 2 atoms in the unit cell: (0, 0, 0), (2/3, 1/3, 1/2)

http://www.chem.ox.ac.uk/icl/heyes/structure_of_solids/Lecture1/Lec1.html#anchor5




How to stack oranges

..... Kepler settled on an arrangement known as the face-centred cubic, which also happens to be the way greengrocers stack oranges.

Using this arrangement, oranges occupy 74.04 per cent of the total space. Kepler could not find a more efficient way to stack spheres, but nor could he be sure that no such arrangement exists. With an infinite number of possible arrangements, the challenge has been to prove categorically whether Kepler's suggested arrangement is best.
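The 74.04 per cent figure is the packing fraction of the face-centred cubic arrangement, π/(3√2), which can be checked directly:

```python
import math

# Packing fraction of the face-centred cubic arrangement: pi / (3 * sqrt(2))
fcc_fraction = math.pi / (3 * math.sqrt(2))
print(f"{fcc_fraction:.4%}")  # -> 74.0480%, the 74.04 per cent quoted above
```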

Prof Hales's approach to the problem is based on a single equation with more than 150 variables, which can be changed to describe every conceivable arrangement, thereby allowing the equation to calculate the packing efficiency for each one. Traditionally, mathematicians would alter the variables to maximise the packing efficiency for the equation, and then see which arrangement is associated with the variables. However, the equation is hugely complex, which puts the maximisation process beyond paper and pencil calculations, and even challenges the limits of computers.

Over the past decade, Prof Hales, helped by his research student Samuel Ferguson, has been studying the maximisation process, inventing shortcuts which bring it within the realm of computability. At last, having thrown enough computer power at the problem and testing all possible arrangements, Prof Hales has concluded that no arrangement beats the face-centred cubic for efficiency. In other words, Kepler and greengrocers have been right all along.
Simon Singh

http://www.chem.ox.ac.uk

Mind Control

Pete Wilton

In a fascinating article in Scientific American, Oxford's Gero Miesenböck explores the history of optogenetics - combining optics and genetic engineering to study specific types of cells.

Gero's particular interest is in combining genes that encode proteins that either emit or respond to light with neurons, in order to study brain circuitry.

He recently found a brain circuit in the olfactory system of fruit flies that produces noise - a discovery that has wider implications, as the basic architecture of a fruit fly's olfactory system is the same as a human's. Before that he had shown how stimulating the brains of fruit flies using a laser can cause female flies to perform a male courtship dance.

The Scientific American piece is well worth reading in full, but one particular point caught my eye when he goes on to discuss how the benefits of such research might one day impact on medicine:

'it would seem arbitrary and hypocritical to draw a sharp boundary between physical means for influencing brain function and chemical manipulations... In fact, physical interventions can arguably be targeted and dosed more precisely than drugs, thus reducing side effects.'

It's easy to fall prey to the understandable fear that physical interventions in our brains risk turning us into zombies or will-sapped cyborgs but are we all too easily overlooking the same risks associated with the drug cocktails patients routinely swallow?

I think we might look on such physical 'mind control' approaches differently if, in the future, they could offer relief from movement disorders such as Parkinson's, from debilitating behavioural disorders, and maybe eventually restore our lost senses.

As Gero comments, such direct approaches are still some way off. But right now optogenetics offers the promise of revealing new targets for drugs that could tackle anything from obesity to insomnia and anxiety.

Thanks to the work of Gero and others the 21st Century may genuinely turn out to be the 'Century of the Brain'.

www.ox.ac.uk/science_blog

Scientific American Magazine - September 24, 2008
Neural Light Show: Scientists Use Genetics to Map and Control Brain Functions
A clever combination of optics and genetics is allowing neuroscientists to identify and control brain circuits with unprecedented precision



In 1937 the great neuroscientist Sir Charles Scott Sherrington of the University of Oxford laid out what would become a classic description of the brain at work. He imagined points of light signaling the activity of nerve cells and their connections. During deep sleep, he proposed, only a few remote parts of the brain would twinkle, giving the organ the appearance of a starry night sky. But at awakening, “it is as if the Milky Way entered upon some cosmic dance,” Sherrington reflected. “Swiftly the head-mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.”

Although Sherrington probably did not realize it at the time, his poetic metaphor contained an important scientific idea: that of the brain revealing its inner workings optically. Understanding how neurons work together to generate thoughts and behavior remains one of the most difficult open problems in all of biology, largely because scientists generally cannot see whole neural circuits in action. The standard approach of probing one or two neurons with electrodes reveals only tiny fragments of a much bigger puzzle, with too many pieces missing to guess the full picture. But if one could watch neurons communicate, one might be able to deduce how brain circuits are laid out and how they function. This alluring notion has inspired neuroscientists to attempt to realize Sherrington’s vision.

Their efforts have given rise to a nascent field called optogenetics, which combines genetic engineering with optics to study specific cell types. Already investigators have succeeded in visualizing the functions of various groups of neurons. Furthermore, the approach has enabled them to actually control the neurons remotely—simply by toggling a light switch. These achievements raise the prospect that optogenetics might one day lay open the brain’s circuitry to neuroscientists and perhaps even help physicians to treat certain medical disorders.

Enchanting the Loom
Attempts to turn Sherrington’s vision into reality began in earnest in the 1970s. Like digital computers, nervous systems run on electricity; neurons encode information in electrical signals, or action potentials. These impulses, which typically involve voltages less than a tenth of those of a single AA battery, induce a nerve cell to release neurotransmitter molecules that then activate or inhibit connected cells in a circuit. In an effort to make these electrical signals visible, Lawrence B. Cohen of Yale University tested a large number of fluorescent dyes for their ability to respond to voltage changes with changes in color or intensity. He found that some dyes indeed had voltage-sensitive optical properties. By staining neurons with these dyes, Cohen could observe their activity under a microscope.

Dyes can also reveal neural firing by reacting not to voltage changes but to the flow of specific charged atoms, or ions. When a neuron generates an action potential, membrane channels open and admit calcium ions into the cell. This calcium influx stimulates the release of neurotransmitters. In 1980 Roger Y. Tsien, now at the University of California, San Diego, began to synthesize dyes that could indicate shifts in calcium concentration by changing how brightly they fluoresced. These optical reporters have proved extraordinarily valuable, opening new windows on information processing in single neurons and small networks.

Synthetic dyes suffer from a serious drawback, however. Neural tissue is composed of many different cell types. Estimates suggest that the brain of a mouse, for example, houses many hundreds of types of neurons plus numerous kinds of support cells. Because interactions between specific types of neurons form the basis of neural information processing, someone who wants to understand how a particular circuit works must be able to identify and monitor the individual players and pinpoint when they turn on (fire an action potential) and off. But because synthetic dyes stain all cell types indiscriminately, it is generally impossible to trace the optical signals back to specific types of cells.

Genes and Photons
Optogenetics emerged from the realization that genetic manipulation might be the key to solving this problem of indiscriminate staining. An individual’s cells all contain the same genes, but what makes two cells different from each other is that different mixes of genes get turned on or off in them. Neurons that release the neurotransmitter dopamine when they fire, for instance, need the enzymatic machinery for making and packaging dopamine. The genes encoding the protein components of this machinery are thus switched on in dopamine-producing (dopaminergic) neurons but stay off in other, nondopaminergic neurons.

In theory, if a biological switch that turned a dopamine-making gene on was linked to a gene encoding a dye and if the switch-and-dye unit were engineered into the cells of an animal, the animal would make the dye only in dopaminergic cells. If researchers could peer into the brains of these creatures (as is indeed possible), they could see dopaminergic cells functioning in virtual isolation from other cell types. Furthermore, they could observe these cells in the intact, living brain. Synthetic dyes cannot perform this type of magic, because their production is not controlled by genetic switches that flip to on exclusively in certain kinds of cells. The trick works only when a dye is encoded by a gene—that is, when the dye is a protein.

The first demonstrations that genetically encoded dyes could report on neural activity came a decade ago, from teams led independently by Tsien, Ehud Y. Isacoff of the University of California, Berkeley, and me, with James E. Rothman, now at Yale University. In all cases, the gene for the dye was borrowed from a luminescent marine organism, typically a jellyfish that makes the so-called green fluorescent protein. We tweaked the gene so that its protein product could detect and reveal the changes in voltage or calcium that underlie signaling within a cell, as well as the release of neurotransmitters that enable signaling between cells.

Armed with these genetically encoded activity sensors, we and others bred animals in which the genes encoding the sensors would turn on only in precisely defined sets of neurons. Many favorite organisms of geneticists—including worms, zebra fish and mice—have now been analyzed in this way, but fruit flies have proved particularly willing to spill their secrets under the combined assault of optics and genetics. Their brains are compact and visible through a microscope, so entire circuits can be seen in a single field of view. Furthermore, flies are easily modified genetically, and a century of research has identified many of the genetic on-off switches necessary for targeting specific groups of neurons. Indeed, it was in flies that Minna Ng, Robert D. Roorda and I, all of us then at Memorial Sloan-Kettering Cancer Center in New York City, recorded the first images of information flow between defined sets of neurons in an intact brain. We have since discovered new circuit layouts and new operating principles. For example, last year we found neurons in the fly’s scent-processing circuitry that appear to inject “background noise” into the system. We speculate that the added buzz amplifies faint inputs, thus heightening the animal’s sensitivity to smells—all the better for finding food.

The sensors provided us with a powerful tool for observing communication among neurons. But back in the late 1990s we still had a problem. Most experiments probing the function of the nervous system are rather indirect. Investigators stimulate a response in the brain by exposing an animal to an image, a tone or a scent, and they try to work out the resulting signaling pathway by inserting electrodes at downstream sites and measuring the electrical signals picked up at these positions. Unfortunately, sensory inputs undergo extensive reformatting as they travel. Consequently, knowing exactly which signals underlie responses recorded at some distance from the eye, ear or nose becomes harder the farther one moves from these organs. And, of course, for the many circuits in the brain that are not devoted to sensory processing but rather to movement, thought or emotion, the approach fails outright: there is no direct way of activating these circuits with sensory stimuli.

From Observation to Control
An ability to stimulate specific groups of neurons directly, independent of external input to sensory organs, would alleviate this problem. We wondered, therefore, if we could develop a package of tools that would not only provide sensors to monitor the activity of nerve cells but would also make it possible to readily activate only selected neuron types.

My first postdoctoral fellow, Boris V. Zemelman, now at the Howard Hughes Medical Institute, and I took on this problem. We knew that if we managed to program a genetically encoded, light-controlled actuator, or trigger, into neurons, we could overcome several obstacles that had impeded electrode-based studies of neural circuits. Because only a limited number of electrodes can be implanted in a test subject simultaneously, researchers can listen to or excite only a small number of cells at any given time using this approach. In addition, electrodes are difficult to aim at specific cell types. And they must stay put, encumbering experiments in mobile animals.

If we could tap a genetic on-off switch to help us find all the relevant neurons (those producing dopamine, for instance) and if we could use light to control these cells in a hands-off manner, we would no longer have to know in advance where in the brain these neurons were located to study them. And it would not matter if their positions changed as an animal moved about. If stimulation of cells containing the actuators evoked a behavioral change, we would know that these cells were operating in the circuit regulating that behavior. At the same time, if we arranged for those same cells to carry a sensor gene, the active cells would light up, revealing their location in the nervous system. Presumably, by rerunning the experiment repeatedly on animals engineered to each have a different cell type containing an actuator, we would eventually be able to piece together the sequence of events leading from neural excitation to behavior and to identify all the players in the circuit. All we needed to do was discover a genetically encodable actuator that could transduce a light flash into an electrical impulse.

To find such an actuator, we reasoned that we should look in cells that normally generate electrical signals in response to light, such as the photoreceptors in our eyes. These cells contain light-absorbing antennae, termed rhodopsins, that when illuminated instruct ion channels in the cell membrane to open or close, thereby altering the flow of ions and producing electrical signals. We decided to transplant the genes encoding these rhodopsins (plus some auxiliary genes required for rhodopsin function) into neurons grown in a petri dish. In this simple setting we could then test whether shining light onto the dish would cause the neurons to fire. Our experiment worked—in early 2002, four years after the development of the first genetically encoded sensors able to report neural activity, the first genetically encoded actuators debuted.

Remote-Controlled Flies
More recently, investigators have enlisted other light-sensing proteins, such as melanopsin, which is found in specialized retinal cells that help to synchronize the circadian clock to the earth’s rotation, as actuators. And the combined efforts of Georg Nagel of the Max Planck Institute for Biophysics in Frankfurt, Karl Deisseroth of Stanford University and Stefan Herlitze of Case Western Reserve University have shown that another protein, called channelrhodopsin-2—which orients the swimming movements of algae—is up to the job. There are also a variety of genetically encoded actuators that can be controlled via light-sensitive chemicals synthesized by us and by Isacoff and his U.C. Berkeley colleagues Richard H. Kramer and Dirk Trauner.

The next step was to demonstrate that our actuator could work in a living animal, a challenge I posed to my first graduate student, Susana Q. Lima. To obtain this proof of principle, we focused on a particularly simple circuit in flies, one consisting of just a handful of cells. This circuit was known to control an unmistakable behavior: a dramatic escape reflex by which the insect rapidly extends its legs to achieve liftoff and, once airborne, spreads its wings and flies. The trigger initiating this action sequence is an electrical impulse emitted by two of the roughly 150,000 neurons in the fly’s brain. These so-called command neurons activate a subordinate circuit called a pattern generator that instructs the muscles moving the fly’s legs and wings.

We found a genetic switch that was always on in the two command neurons but no others—and another switch that was on in neurons of the pattern generator but not in the command neurons. Using these switches, we engineered flies in which either the command neurons or the pattern-generator neurons produced our light-driven actuator. To our delight, both kinds of flies took off at the flash of a laser beam, which was strong enough to penetrate the cuticle of the intact animals and reach the nervous system. This confirmed that both the command and pattern-generating cells participated in the escape reflex and proved that the actuators worked as intended. Because only the relevant neurons contained the genetically encoded actuator, they alone “knew” to respond to the optical stimulus—we did not have to aim the laser at specific target cells. It was as if we were broadcasting a radio message over a city of 150,000 homes, only a handful of which possessed the receiver required to decode the signal; the message remained inaudible to the rest.

One nagging quandary remained, however. The command neurons initiating the escape reflex are wired to inputs from the eyes. These inputs activate the escape circuit during a “lights-off” transition, as happens when a looming predator casts its shadow. (You know this from your fly-swatting attempts: whenever you move your hand into position, the animal annoyingly jumps up and flies away.) We worried that in our case, too, the escape reflex might be a visual reaction to the laser pulse, not the result of direct optical control of command or pattern-generating circuits.

To eliminate this concern, we performed a brutally simple experiment: we cut the heads off our flies. This left us with headless drones (which can survive for a day or two) that harbored the intact pattern-generating circuitry within their thoracic ganglia, which form the rough equivalent of a vertebrate’s spinal cord. Activating this circuit with light propelled the otherwise motionless bodies into the air. Although the drones’ flights often began with tumbling instability and ended in spectacular crashes or collisions, their very existence proved that the laser controlled the pattern-generating circuit itself—there was no other way these headless animals could detect and react to light. (The drones’ clumsy maneuvers also illustrated vividly that the Wright brothers’ great innovation was the invention of controlled powered flight, not simply powered flight.)

We also engineered flies with light switches attached only to neurons that make the neurotransmitter dopamine. When exposed to the laser’s flash, these flies suddenly became more active, walking all around their enclosures. Previous studies had indicated that dopamine helps animals predict reward and punishment. Our fly findings are consistent with this scenario: the animals not only became more active, they also explored their environment differently, as if reacting to an altered expectation of gain or loss.

An Unexpected Forerunner
Three days before the paper reporting these experiments was scheduled for publication in the journal Cell, I was flying to Los Angeles to deliver a lecture. A friend had given me Tom Wolfe’s recently published coming-of-age novel I Am Charlotte Simmons, thinking I would enjoy its depiction of neuroscientists, not to mention the material that had earned the book the Literary Review’s Bad Sex in Fiction Award. On the plane I came across a passage in which Charlotte attends a lecture on the work of one José Delgado, who also remotely controls animal behavior—not with light-driven, genetically encoded actuators but with radio signals transmitted to electrodes he has implanted in the brain. A Spaniard, Delgado risked his life to demonstrate the power of his approach by stopping an angry bull in midcharge. This, Wolfe’s fictional lecturer declares, is a turning point in neuroscience—a decisive defeat of dualism, the notion that the mind exists as an entity separate from the brain. If Delgado’s physical manipulations of the brain could change an animal’s mind, so the argument went, the two must be one and the same.

I almost fell out of my seat. Was Delgado a fictional character, or was he real? Immediately after landing in L.A., I did a Web search and was directed to a photograph of the matador with the remote and his bull. Delgado, I learned, had been a professor at my very own institution, Yale, and had written a book entitled Physical Control of the Mind: Toward a Psychocivilized Society, which appeared in 1969. In it, he summarized his efforts to control movements, evoke memories and illusions, and elicit pleasure or pain [see “The Forgotten Era of Brain Chips,” by John Horgan; Scientific American, October 2005]. The book concludes with a discussion of what the ability to control brain function might imply for medicine, ethics, society and even warfare. Against this background, I should probably not have been surprised when the phone rang the day our paper was published and a U.S.-based journalist asked, “So, when are we going to invade another country with an army of remote-controlled flies?”

The media attention did not stop there. The next day the headline of the Drudge Report screamed, “Scientists Create Remote-Controlled Flies,” topping news of Michael Jackson’s latest court appearance. I assume it was this source that inspired a sketch on the Tonight Show a week or so later, in which host Jay Leno piloted a remote-controlled fly into President George W. Bush’s mouth—the first practical application of our new technology.

Since then, researchers have used the light-switch approach to control other behaviors. Last October, Deisseroth and his Stanford colleague Luis de Lecea announced the results of a mouse study in which they used an optical fiber to deliver light directly to neurons that produce hypocretin—a neurotransmitter in the form of a small protein, or peptide—to see whether these neurons regulate sleep. Researchers had suspected that hypocretin plays this role because certain breeds of dogs lacking hypocretin receptors suffer sudden bouts of sleepiness. The new work revealed that stimulating hypocretin neurons during sleep tended to awaken the mice, bolstering that hypothesis.

And in my lab at Yale, postdoctoral fellow J. Dylan Clyne used genetically encoded actuators to gain insights into behavioral differences between the sexes. The males of many animal species go to considerable lengths in wooing the opposite sex. In the case of fruit flies, males vibrate one wing to produce a “song” that females find quite irresistible. To probe the neural underpinnings of this strictly male behavior, Clyne used light to activate the pattern generator responsible for the song. He found that females, too, possess the song-making circuitry. But under normal circumstances they lack the neural signals required for turning it on. This discovery suggests that male and female brains are wired largely the same way and that differences in sexual behaviors arise from the action of strategically placed master switches that set circuits to either male or female mode.

Light Therapy
Thus far investigators have typically engineered animals to carry either a sensor or an actuator in neurons of interest. But it is possible to outfit them with both. And down the road, the hope is that we will be able to breed subjects that have multiple sensors or actuators, which would allow us to study assorted populations of neurons simultaneously in the same individual.

Our newfound authority over neural circuits is creating enormous opportunities for basic research. But are there practical benefits? Perhaps, although I feel they are sometimes overhyped. Delgado himself identified several areas in which direct control of neural function could lead to clinical benefits: sensory prosthetics, therapy for movement disorders (as has now become reality with deep-brain stimulation for Parkinson’s disease), and regulation of mood and behavior. He saw these potential uses as a direct and rational extension of existing medical practice, not as an alarming foray into the ethical quicksands of “mind control.” Indeed, it would seem arbitrary and hypocritical to draw a sharp boundary between physical means for influencing brain function and chemical manipulations, be they psychoactive pharmaceuticals or the cocktail that helps you unwind after a hard day. In fact, physical interventions can arguably be targeted and dosed more precisely than drugs, thus reducing side effects.

Some studies have already begun to probe the applicability of optogenetics to medical problems. In 2006 researchers used light-activated ion channels to restore photosensitivity to surviving retinal neurons in mice with photoreceptor degeneration. They used a virus to deliver the gene encoding channelrhodopsin-2 to the cells, injecting it directly into the animals’ eyes. The patched-up retinas sent light-evoked signals to the brain, but whether the procedure actually brought back vision remains unknown.

Despite their theoretical appeal, optogenetic therapies face an important practical obstacle in humans: they require the introduction of a foreign gene—the one encoding the light-controlled actuator—into the brain. So far gene therapy technology is not up to the challenge, and the Food and Drug Administration is sufficiently concerned about the associated risks that it has banned such interventions for the time being, except for tightly restricted experimental purposes.

The immediate opportunity afforded by our control over brain circuits—or even other electrically excitable cells, such as those that produce hormones and those that make up muscle—lies in revealing new targets for drugs: if experimental manipulations of cell groups X, Y and Z cause an animal to eat, sleep or throw caution to the wind, then X, Y and Z are potential targets for medicines against obesity, insomnia and anxiety, respectively. Finding compounds that regulate neurons X, Y and Z may well lead to new or better treatments for disorders that have no therapies at the moment or to new uses for existing drugs. Much remains to be discovered, but the future of optogenetics shines brightly.

Gero Miesenböck, Scientific American

http://www.sciam.com/article.cfm?id=neural-light-show&print=true


Purifying Carbon Nanotubes


Moore’s Law states that the number of transistors that can be fitted on a chip doubles every eighteen months. However, current silicon technologies are approaching the limits imposed by quantum mechanics, which will stop Moore’s Law in its tracks. Therefore, new materials and techniques must be found to complement and extend the capabilities of current silicon technologies if the semiconductor industry is to maintain its growth and profitability.

2.1 The Oxford Invention (July 2004)

Semiconducting carbon nanotubes (tube-like atomic structures) can be made to act like silicon and are among the best candidate materials for replacing current semiconductors. A nanotube is about 1/500th the size of a current transistor and has excellent electrical properties. However, due to current production constraints, graphitic and general metallic impurities have, to date, impeded progress in this area. The Oxford Invention, available from Isis-innovation Ltd, a wholly owned subsidiary of Oxford University, is a new technique for purifying carbon nanotubes which solves this problem. A product containing 90% semiconducting nanotubes can now be produced, with further purification expected as development continues. The technology may be used for both single-walled and multi-walled nanotubes. (Further information from: www.isis-innovation.com.)

2.2 Optical Computers – nano-drill creates smallest lenses (July 2004)

Another potential route to future growth in computing is through computers which harness light instead of electrons to carry bits of information on a chip. On today’s conventional electronic chips, full of transistors, the most basic requirement is copper wire to conduct electronic impulses. Using light instead of electrons opens many possibilities, but brings the challenge of how to move light around without using wires. Optical fibre is ruled out because a typical fibre, at 50 microns wide, is as large as the chip itself.
Scientists from the UK and Spain believe that they may have an answer to harnessing this new medium and focusing it to the right places on a chip, using a ‘nanometre drill’. They described in Science Express (8 July 2004) how artificial materials with tiny grooves and holes drilled into their surfaces could channel and focus light beams on a chip. When light beams hit the surface of a metal such as silver, as well as being partly reflected they excite surface plasmons, a hybrid of light and electron oscillations, on the substrate. In their published paper, the scientists show that microscopic holes created by an ultra-fine nanometre drill called an ion beam can stimulate the creation of plasmon-like phenomena. These may then be harnessed to channel information-carrying light at tiny scales, using the holes, measured in tens of nanometres, to direct it. Professor Sir John of Imperial College London, the lead author of the paper, said: “It opens up a new dimension of design for people looking to use surface plasmons to put light on a chip. Instead of etching a path on a chip, now we can make holes to make a path to control light on a chip. The plasmons contain the same signals as the light exciting them and therefore can be used to transport information across the surface.”

UK Trade and Investment
http://ibb.gov.uk/wweurope/site_turk/index.cfm?langID=8&siteid=36