
Tuesday, July 03, 2007

Scientific Communications in Web 2.0 Context

This post is slightly out of context: instead of a particular aspect of science, it covers some recent developments that may change the paradigms of scientific communication.

Despite the stereotype of lab-coat-wearing geeks buried in their work with little connection to the outside world, communication is an extremely important part of a scientist's job. Modern scientific research cannot be conducted in isolation, so scientists need to disseminate information effectively, whether presenting data in the informal setting of a lab meeting or in more formal talks and posters at seminars and scientific conferences. Then there is the matter of publishing scientific findings in technical journals, and of convincing peers of the importance of their work when applying for grants. In a broader scope, scientists also need to spread knowledge to the general lay audience (especially in the current atmosphere of countries like the US, where scientists are being broadly discredited through active political agendas).

Traditionally, scientists have used the internet as a tool to read and respond to e-mail, to search and read journal articles (the old practice of going to the library for paper copies of journals is all but obsolete), and to look up information on products, procedures and so on (not to mention keeping bench scientists occupied while they wait for reactions to incubate or gels to finish running). However, the role of the internet in science communication is rapidly expanding. The advent of the hyper-networked platform of the so-called Web 2.0 has opened up excellent opportunities for scientists both to reach out to wider audiences and to improve communication within their own community.

A major advance has been in the communication of science to wider audiences through media such as blogs (this blog itself is a humble attempt in that direction). Previously, only a select group of science writers and a small number of publications could reach this audience. But given how easy it is to set up and maintain a blog, and its potential reach, scientists now have unprecedented access to audiences, whether to discuss technical topics or science policy and the future of their fields. A good example is the wide assortment of blogs hosted under the banner of Scienceblogs, the majority written by active science researchers.

On the technical side, two exciting portals that could revolutionize scientific communication have recently come online. Late last year, the Public Library of Science (PLoS), a non-profit organization championing 'open access' in science publishing, launched a web-based journal called PLoS One. Other than being openly accessible to anyone with an internet connection (as opposed to journals that require paid subscriptions), this online journal is distinguished by its criterion for acceptance: the peer-review process considers only the technical and methodological soundness of the experiments, and accepts papers without any subjective judgment of the perceived importance or relevance of the work.

While PLoS One accepts completed manuscripts, the highly reputed journal Nature recently launched a site called Nature Precedings where scientists can submit pre-publication data and ideas in the form of 'presentations, posters, white papers, technical papers, supplementary findings, and manuscripts'. Precedings has no peer-review system other than a check for completeness and scientific relevance (i.e., to make sure no non-scientific or pseudo-scientific material is posted). Also, while PLoS One accepts manuscripts related to any 'science or medicine', Precedings is restricted to 'biology, medicine (except clinical trials), chemistry and the earth sciences'.

Both Precedings and PLoS One submissions are assigned a unique number called the Digital Object Identifier (DOI) which enables other researchers to cite the articles in their own communications. Additionally, in both cases, the authors retain copyright of the articles through a Creative Commons License. Both sites also have Web 2.0 features such as RSS feeds and tags enabled.

But perhaps the most exciting feature of both Precedings and PLoS One is the ability of readers to comment on the published papers or posts, the idea being that science should be interactive and that the connectivity of the web should let researchers participate actively in discussions with a broad audience. Additionally, the ability to vote on papers and submissions provides an alternative form of peer review (a scientific equivalent of Digg?).

Therefore, unlike publishing in traditional journals, a process that takes a few months, or presenting at a conference, of which only a few are held each year and typically to a restricted audience, these portals allow rapid dissemination of information to a large, geographically unrestricted group of scholars. In some ways it is like presenting your data at a big conference, without the actual travel. A huge potential beneficiary is science in economically poorer countries (or even scientists with sparse budgets in developed countries), where researchers do not have the resources or funding needed to attend many high-quality conferences.

Another benefit of scientists widely using these services is a potential reduction in research redundancy. Especially in today's interdisciplinary environment, there are often two or more research groups employing similar methods toward the same end. While competition is good in some cases, in this day and age of restricted budgets for science it is perhaps better to collaborate than to compete.

However, the major concern for the success of such initiatives is whether enough researchers will participate in submitting, commenting and engaging in meaningful discussion. Old mindsets are difficult to change; currently, scientific scholarship is judged by the number of publications, and even more by the quality of the journals they appear in, as measured by their Impact Factor. Many researchers would therefore prefer to publish in traditional, arguably more prestigious journals. Moreover, in the case of Precedings, many laboratories around the world may be wary of releasing novel findings or new ideas for fear of being scooped. Then there is the concern about participation in the discussions: for example, while a significant number of papers have been published in PLoS One, very few are commented on, let alone actively discussed [1]. Nature's previous attempt at an 'open' peer-review system was a failure of sorts as well. Some scientists may even view such activities as time-wasting diversions from real work. Another criticism, mainly of PLoS One, is that the publication fee is rather high - 1,250 US dollars - which might be too steep for scientists with low research budgets.

Still, one can hope that with time scientists will come to embrace online resources for rapid sharing and discussion of their research. In physics and mathematics, the Cornell University-maintained preprint portal arXiv has achieved this goal with great success. It is time for all branches of science, especially the ever-expanding biomedical sciences, to welcome the concept. Publishing or pre-publishing at sites like PLoS One or Precedings, obtaining high votes, or fostering active discussions should be looked upon as meaningful scholarly achievements. One can also hope for further engagement of internet technologies in science, e.g. laboratories using a wiki-like platform to update their results, experimental protocols and so on. Fittingly, I will cite this presentation posted on Precedings on what such a communication scheme might look like.

--------------------------------------------------------


[1]: PLoS has recently engaged the services of an Online Community Manager to encourage commenting. The job is, incidentally, held by a very active science blogger, who got it in a very Web 2.0 manner: the initial contact occurred through his blog!

Thursday, June 28, 2007

Could life have started with Simplicity?

One of the perplexing questions about the origin of life is how such complexity ever evolved from a simple broth of chemicals in the prebiotic world. Among the first to attempt an experimental answer were Stanley Miller and Harold Urey, who created a chemical soup of water, ammonia (reduced nitrogen), methane (reduced carbon), and hydrogen (expected in a reducing atmosphere) and subjected it to electric discharges simulating lightning. The experiment was performed in the 1950s to simulate early-Earth conditions. After the discharge passed through the soup, simple amino acids, sugars, and raw materials for nucleic acid bases such as adenine were found in the mixture [1]. These are the raw ingredients for biochemistry to start, and the result brought the origin of life into the realm of experimental science for the first time. Even though the assumed conditions of the early Earth have since come into question, the experiment remains a landmark (contrary to a common belief, it did not earn a Nobel prize; Urey's Nobel was awarded earlier, for the discovery of deuterium). In fact, the experiment was repeated recently with nitrogen gas instead of ammonia, carbon dioxide instead of methane, and hydrogen or water (the currently accepted conditions for the early Earth), and the products were similar in nature to those of the original Miller-Urey experiment.

In the prebiotic world envisioned by most scientists, chemistry would have dominated the changing landscape of the Earth. Chemistry, unlike biochemistry, is very non-specific and would have created a huge pool of chemicals. If we assume that the precursors of the reactions found in modern cellular organisms were present in that pool (a big assumption, made out of necessity), then all or most of today's biochemical reactions would have been a small subset of all the reactions occurring in this pool, a set referred to as protometabolism [2]. Somehow, once the first catalysts formed (nowhere near as efficient as modern enzymes), they favored a subset of these reactions and made them run faster, setting up a feedback by which those reactions became dominant and led to the biochemicals, and the life, we know now.

One such theory of the origin of life states that an autocatalytic reaction cycle was present in the chemical gemisch of the prebiotic world and, by virtue of being autocatalytic, came to dominate it, leading to the first signs of life [3-6]. One candidate autocatalytic cycle is the tricarboxylic acid cycle (the TCA, Krebs, or citric acid cycle), which is present in all modern organisms in one form or another [7]. Run in the reductive direction, the cycle fixes carbon into biochemicals with carbon dioxide as the sole source of carbon [8,9]. In this reverse TCA cycle (found in a few organisms), the overall reaction can be visualized as one molecule of citrate reacting with six molecules of carbon dioxide (available on the prebiotic Earth) and hydrogen to form two molecules of citrate and water. The important thing to note is that two molecules of citrate are formed from one molecule of citrate, producing more of the reactant: both molecules can serve as reactants in the next round, which is why the cycle is called autocatalytic. Once prebiotic conditions existed under which this cycle could run to completion (every reaction in it has to take place), it would, being autocatalytic, have sped up over time and slowly come to dominate early prebiotic metabolism.
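For the bookkeeping, here is one way the net doubling reaction can be balanced, written for fully protonated citric acid (C6H8O7) with molecular hydrogen as the reductant. This is an illustrative balance of my own, not the stoichiometry given in the cited papers; the exact coefficients depend on the protonation states and the reductant assumed.

```latex
% Net autocatalytic doubling of citrate, assuming citric acid and H2 as the reductant:
\mathrm{C_6H_8O_7} + 6\,\mathrm{CO_2} + 9\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{C_6H_8O_7} + 5\,\mathrm{H_2O}
% Check: C: 6 + 6 = 12;   O: 7 + 12 = 14 + 5;   H: 8 + 18 = 16 + 10.
```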

In addition, in modern cells the TCA (or rTCA) cycle sits at the center of metabolism: its intermediates are the starting points for the amino acids, nucleotides, and cofactors used by the rest of the cellular machinery. So once this cycle began to dominate the prebiotic world, side reactions off its intermediates could start producing amino acids and nucleotides, building up the complexity required for biochemistry to begin [8]. However, conditions under which the full cycle runs to completion have not yet been found. Moreover, the source of energy for these reactions, and the compartmentalization needed to build up significantly higher local concentrations of these biochemicals, are still matters of speculation and further research.

It has been postulated that under early prebiotic conditions these reactions could have taken place on clay or on metal sulfide surfaces such as FeS, with the mineral itself being oxidized (for example, FeS to pyrite, FeS2), releasing the energy needed to drive the reactions to completion [3,4]. Another proposal is that the cycle at the origin of metabolism was not the TCA cycle but some other cycle, such as the ribose cycle [5]. The advantage of the ribose cycle is that, unlike the TCA cycle, only one or two of its reactions fail to proceed at an appreciable rate without a catalyst, so only one or two steps would need the clay or metal surface as a catalyst.

In either case, it is a fair question whether an autocatalytic cycle should be considered life. In my opinion it should not, even though it produces more of itself (a chemical form of reproduction) and involves energy conversion (a form of metabolism). Life is very specific and directed, unlike early chemistry, which would have been highly non-specific. But this is certainly a matter of speculation and discussion.

[1] Biochemistry - Stryer.
[2] Singularities - de Duve.
[3] Wächtershäuser - Evolution of the first metabolic cycles - PNAS, 87:200-204, 1990.
[4] Wächtershäuser - On the chemistry and evolution of the pioneer organism - Chemistry and Biodiversity, 4:584-602, 2007.
[5] Orgel - Self-organizing biochemical cycles - PNAS, 97:12503-12507, 2000.
[6] Smith and Morowitz - Universality in intermediary metabolism - PNAS, 101:13168-13173, 2004.
[7] Wikipedia entry on Citric acid cycle.
[8] Morowitz, Kostelnik, Yang, and Cody - The origin of intermediary metabolism - PNAS, 97:7704-7708, 2000.
[9] Srinivasan and Morowitz - Ancient genes in contemporary persistent microbial pathogens - Biol. Bull., 210:1-9, 2006.

PS: Stanley Miller passed away this year at the age of 77 and this post is dedicated to him.

Tuesday, May 29, 2007

Resolving the panorama

This post is about the image stitching methods used to make a panoramic image. Panoramic images have become important in the digital age. Originally, panoramas were developed to increase the field of view of a photograph. In the digital age, because prints look poor below roughly 200 dots per inch (explained here), a single frame is often too small for a poster-sized print (at 200 dpi, a 3,000 x 2,000-pixel frame prints at only 15 x 10 inches). The workaround is to take a number of photographs with at least 15% overlap and stitch them together later in software. To take the individual pictures that make up a panorama, the best technique is to use a tripod so that the camera rotates about a (nearly) fixed point, minimizing parallax error. In addition, the aperture and shutter speed should not vary between the pictures. More tips on shooting panoramas can be found readily online, or by sending me an e-mail. This post is about the science behind stitching the images of a panoramic picture together.

The idea of image stitching is to take multiple images and make a single image from them with an invisible seam, such that the mosaic remains true to the individual images (in other words, it does not change the lighting too much). This is different from simply placing the images side by side, because differences in lighting between the two images would leave a prominent seam in the mosaic.



This figure shows three source photos, with the seam locations marked by black boxes on each, and the final mosaic formed from all three.

The first step is to find points that correspond between two overlapping pictures [1]. This can be done by comparing a small neighborhood of pixels around a candidate point in each picture and finding regions whose colors match. The images are then warped onto a common surface such as a cylinder (a panoramic picture is often a two-dimensional unrolling of the overlapping pictures projected onto a cylinder). After this step, a seam curve is found through the region where the corresponding pixels of the two images agree most closely, and the images are stitched together with color correction. The rest of this post deals with the various algorithms for this blending, or color-correction, step.
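As a rough illustration of the alignment step (a sketch under my own assumptions, not the exact method of [1]), the snippet below uses OpenCV to detect ORB features in two overlapping photos, match them, estimate a homography with RANSAC, and warp the second image into the first image's frame. The file names are placeholders, and a real panorama tool would project onto a cylinder and then blend as described below.

```python
# Minimal alignment sketch (assumes OpenCV and NumPy are installed).
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")    # placeholder file names
img2 = cv2.imread("right.jpg")

# 1. Detect features and descriptors in both images.
orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match descriptors (brute force, Hamming distance for ORB).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# 3. Estimate a homography from the matched points, rejecting outliers with RANSAC.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

# 4. Warp the second image into the first image's coordinate frame.
h, w = img1.shape[:2]
warped2 = cv2.warpPerspective(img2, H, (2 * w, h))
canvas = warped2.copy()
canvas[:h, :w] = img1              # naive paste; the seam still needs blending
cv2.imwrite("naive_mosaic.jpg", canvas)
```

OpenCV also ships a high-level Stitcher class (cv2.Stitcher_create) that wraps the whole pipeline, including the blending step, in a single call.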



This figure is an example of the Feathering approach.


1. Feathering (Alpha Blending): In this method, at the seams (the regions of overlap), the pixels of the blended image are given colors that are linear combinations of the corresponding pixel colors of the first and second images. The effect is to blur away the differences between the two images near the seam. An optimal window size is chosen so that the blurring is least visible.




This figure shows the optimal blend between the two images in the previous figure.
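A minimal NumPy sketch of the feathering idea, assuming two already-aligned images of the same size that overlap in a vertical band around a known seam column (the function name and parameters are illustrative):

```python
import numpy as np

def feather_blend(img1, img2, seam_x, window):
    """Alpha-blend two aligned images across a vertical seam at column seam_x.

    img1 fills the left side, img2 the right; within +/- window/2 columns of
    the seam, the weight ramps linearly from 1 (all img1) to 0 (all img2).
    """
    h, w = img1.shape[:2]
    alpha = np.ones(w, dtype=np.float64)
    lo, hi = seam_x - window // 2, seam_x + window // 2
    alpha[hi:] = 0.0
    alpha[lo:hi] = np.linspace(1.0, 0.0, hi - lo)   # linear ramp in the overlap
    alpha = alpha.reshape(1, w, 1)                  # broadcast over rows and channels
    return (alpha * img1 + (1.0 - alpha) * img2).astype(img1.dtype)
```

Choosing the window width is exactly the tradeoff described above: too narrow and the seam shows, too wide and detail in the overlap gets ghosted.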


2. Pyramid Blending: In addition to the raw pixel representation, an image can also be stored as a pyramid: a multi-scale representation in which the image is kept as a hierarchy of progressively low-pass-filtered (and downsampled) versions of the original, so that successive levels correspond to lower spatial frequencies (in effect, the image is divided into layers that vary over smaller or larger regions of space, and their sum gives back the original image). During blending, the low frequencies (which vary over large distances) are blended over a spatially large window and the high frequencies over a spatially small window [1], producing a more realistic blended image. In practice, a Laplacian pyramid is used: the detail bands, obtained as differences between successive blurred levels and closely related to second derivatives of the image, are blended level by level, and the blended pyramid is then collapsed (reintegrated) to form the final image.



This figure shows the pyramid representation of the pixels in an image and the pyramid blending approach.
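A compact sketch of Laplacian-pyramid blending with OpenCV, under the assumptions that the two images are already aligned, that their width and height are divisible by 2 to the power of the number of levels, and that a mask of the same shape marks which image should dominate where (all names are illustrative):

```python
import cv2
import numpy as np

def pyramid_blend(img1, img2, mask, levels=5):
    """Blend img1 and img2 using Laplacian pyramids.

    mask is a float array in [0, 1] with the same shape as the images:
    1 where img1 should dominate, 0 where img2 should. Dimensions are
    assumed divisible by 2**levels so pyrDown/pyrUp sizes line up exactly.
    """
    a, b, m = img1.astype(np.float64), img2.astype(np.float64), mask.astype(np.float64)

    # Gaussian pyramids of both images and of the mask.
    ga, gb, gm = [a], [b], [m]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    # Laplacian pyramids: detail = level - upsampled(next coarser level).
    la = [ga[i] - cv2.pyrUp(ga[i + 1]) for i in range(levels)] + [ga[-1]]
    lb = [gb[i] - cv2.pyrUp(gb[i + 1]) for i in range(levels)] + [gb[-1]]

    # Blend each level with the mask at the matching resolution, then collapse.
    blended = [gm[i] * la[i] + (1 - gm[i]) * lb[i] for i in range(levels + 1)]
    out = blended[-1]
    for lvl in reversed(blended[:-1]):
        out = cv2.pyrUp(out) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```

With a hard-edged mask, the coarse levels effectively get a wide blending window and the fine levels a narrow one, which is the behaviour described above.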

3. Gradient Domain Blending: Instead of building a low-resolution hierarchy of the image as above, gradient domain blending works with the first derivatives (gradients) of the images. The image resolution is not reduced before blending; instead, the gradients of the two images are combined in the overlap region and a final image is reconstructed whose gradients match the combined field as closely as possible. The idea is the same as above: the method effectively adapts the blending window to how quickly each region varies, spreading slowly varying differences over large areas while keeping fast-varying detail sharp.


This figure shows the gradient blend approach.
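OpenCV exposes a gradient-domain (Poisson) compositing routine, cv2.seamlessClone, which gives a quick way to experiment with this approach; the sketch below is illustrative (file names and the mask construction are placeholders) rather than the specific algorithm of [1]:

```python
import cv2
import numpy as np

dst = cv2.imread("mosaic_so_far.jpg")       # placeholder: panorama built so far
src = cv2.imread("next_photo_warped.jpg")   # placeholder: new image, already warped/aligned

# Mask marking the region of src to composite (here: its non-black pixels).
mask = np.where(src.sum(axis=2) > 0, 255, 0).astype(np.uint8)

# Center of the region in dst where src should land.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))

# Gradient-domain composite: solves a Poisson equation so that the pasted
# region's gradients match src while its boundary matches dst, hiding the seam.
result = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("gradient_blended.jpg", result)
```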

Sources:
[1]: http://www.cs.huji.ac.il/course/2005/impr/lectures2005/Tirgul10_BW.pdf
[2]: http://rfv.insa-lyon.fr/~jolion/IP2000/report/node78.html

Wikipedia article on feathering.

All figures taken from http://www.cs.huji.ac.il/course/2005/impr/lectures2005/Tirgul10_BW.pdf

Thursday, May 24, 2007

Biological Control - Doing it yourself.

It was not too long ago that the whole of biology was very protein- and DNA-centric. The reasoning was that proteins do all the work in the cell, be it chemical work (enzymes) or physical work (motors and pumps). DNA was important because it provides all the information to make the proteins and contains the genetic instructions that are passed on from generation to generation. For a long time there was a battle over whether DNA or proteins were more important, neglecting DNA's chemical cousin, RNA.

RNA was long considered merely a step required in modern organisms to convert the information in DNA into proteins. RNA is made up of nearly the same chemical constituents as DNA, but it is more flexible and can adopt wide-ranging three-dimensional structures, unlike DNA's double helix. This increased flexibility comes at a price: RNA is less stable, and in modern cells a single molecule of RNA does not remain functional for long (the mean lifetime is roughly five minutes in E. coli).

Of course, all this changed when it was found that RNA molecules could act as catalysts; even in modern-day cells there are RNA catalysts, called ribozymes (and the list of known ribozymes keeps growing). RNA captured the imagination of biologists because here was a molecule that could both store genetic information and act as a catalyst, taking on the dual roles of enzyme and information storage. All of a sudden, RNA was considered to be at the origin of life as we know it. One should keep in mind, however, that the RNA world hypothesis does not claim that only RNA was present; it postulates that RNA was present and dominant, while other biochemicals such as peptides (small proteins) and DNA oligomers (short DNA molecules) were also present and aiding life (an idea originally proposed in [1]).

One of the biggest arguments against the RNA world hypothesis has been that RNA does not seem to play that big a role in modern cells. More recently, however, many RNA control elements have been found in the cell. One such control element is the riboswitch. For a gene to be expressed, the DNA is first copied into a message called mRNA (messenger RNA), which is later converted into the protein equivalent of that message. It has increasingly been found that mRNAs do not contain only the message to be read: certain control elements can also be present in the mRNA. These control elements are called riboswitches.

Let's take an example. Suppose a cell needs to make vitamin B1. A key product of its biosynthetic pathway is thiamine pyrophosphate (TPP), the active form of the vitamin. TPP also matters for nucleotide (the chemical constituents of RNA and DNA) and amino acid (the chemical constituents of proteins) biosynthesis, so it is important for the cell to channel the right amount of TPP into the different biochemical pathways. When too much TPP is present in the cell, TPP binds to a riboswitch in the mRNA of its own biosynthetic pathway. This binding causes the riboswitch [2] to adopt a defined three-dimensional structure (from a previously loosely structured RNA element), and that structure blocks production of the protein needed to make more TPP. The switch in the mRNA thus turns production of the TPP-making protein on or off depending on whether enough TPP is present, regulating the production of TPP itself. So far, riboswitches have been found mostly in the microbial world and are only now being discovered in eukaryotes.

Now, in the latest issue of Nature, the first riboswitch that controls splicing in a eukaryote (a fungus) has been reported [3]. Splicing is the mechanism by which parts of the mRNA are removed before the protein is made, so that parts of the gene are never translated into protein. Alternative splicing is the mechanism by which a single gene at the DNA level can be translated into multiple protein products: different parts of the mRNA are excised in one situation and not another before it is converted to protein. Splicing and alternative splicing occur only in eukaryotes and have also been discussed here.

In any case, the first riboswitch found to act through alternative splicing sits in an mRNA of the TPP pathway discussed above. In this case, when TPP is present, the riboswitch forms a three-dimensional structure that changes the splicing outcome so that the protein needed to make more TPP is not produced. The objective is again control of the TPP concentration in the cell, but the means is alternative splicing rather than simply blocking formation of the protein. The implications of these results will only become clear with time, but there is speculation that this opens up a whole Pandora's box of riboswitches waiting to be found in eukaryotes.

[1] The Genetic Code - Carl Woese, 1968.
[2] Thiamine derivatives bind messenger RNAs directly to regulate bacterial gene expression. Wade Winkler, Ali Nahvi & Ronald R. Breaker. Nature 419, 952-956 (2002).
[3] Control of alternative RNA splicing and gene expression by eukaryotic riboswitches. Ming T. Cheah, Andreas Wachter, Narasimhan Sudarsan & Ronald R. Breaker. Nature 447:497 (2007), and its companion discussion article: Molecular biology: RNA in control. Benjamin J. Blencowe & May Khanna. Nature 447:391 (2007).

PDFs of all cited articles are available on request.

Monday, May 21, 2007

Battle of sexes

Human beings are diploid - that is, each of us carries one set of chromosomes from Mom and one from Dad. This gives us the advantage of having a spare copy of any given gene. However, certain genes are "marked" in the embryo in such a way that either Mom's or Dad's copy is selectively silenced. The end result is that some genes in our body come with instructions attached: "I am from Mom, use only me!" or vice versa. The process that does this is called imprinting, and a gene can be maternally or paternally imprinted depending on which parent's copy is silenced.

Why develop this curious phenomenon? On the surface it seems counterproductive. If the marked (imprinted) gene is defective, there is no working copy left, since the silenced copy from the other parent can never be used. So why evolve such a complex yet dangerous mechanism? Since the phenomenon became known to scientists in the early 1960s, several hypotheses have been put forth as to why it occurs. One of the most popular points to a peculiar property inherent in genes: their selfishness.

The Haig hypothesis is simple - it relates the development of a baby to the parents' inherent fidelity. The hypothesis, put forth by David Haig, predicts that in any non-monogamous species Mom and Dad have different interests when it comes to the development of their baby, and hence imprint genes involved in the growth of the embryo. Simply put, Mom and Dad fight a genetic war over the baby, all the more so if either of them is prone to promiscuity!

Is there evidence for this prediction?
There is an excellent study done with the "deer mice" of the genus Peromyscus. The genus is perfect for the purpose, as it contains both monogamous and polygamous species that can interbreed, namely P. maniculatus and P. polionotus. The females of the dark brown Peromyscus maniculatus are promiscuous (babies within a single litter often have different fathers). Peromyscus polionotus, the sandy mouse, however, pairs for life.

Check scenario one - Dad screws around but Mom is faithful.
In this case, the dad 'knows' that the chances that all the offspring his mate carries are his are slim. So his genes favor making his own babies grow faster, at the cost of all the other siblings and even of mom.

This is exactly what you see when you mate the faithful Peromyscus polionotus female with the P. maniculatus male: the pups obtained are huge and the mothers die giving birth.
The reason? The Peromyscus maniculatus dad has passed on gene copies that make his babies grow faster, since the females of his species are promiscuous. But the poor, faithful Peromyscus polionotus mom is not used to playing this war and has no defense against the signals he is sending. So the babies, prompted by Dad's genes, grow unchecked, use up mom's resources, and kill her.


Check Scenario two - Mom is promiscuous but Dad is not.
The mom 'knows' that all the litter she carries has her genes, so she can spread her genes in the population by restricting the growth of any one fetus and conserving resources for her offspring with other males. So the genes she imprints will slow fetal growth.
That is what happens when you mate a promiscuous P. maniculatus female with a steadfast Peromyscus polionotus male - you get tiny pups.
What happens? In this case the mom is using her imprinted copies to slow down the growth of the babies, but the counterpart signal to grow is never received from the dad. The result is puny babies.

What if both parents are promiscuous, or neither is?
The offspring of a P. maniculatus x P. maniculatus cross, or of a Peromyscus polionotus x P. polionotus cross, are healthy and similar in size. The reason? Within each species the partners have co-evolved matching defenses. In the promiscuous pair, the dad signals the babies to grow faster and the mom signals them to grow slower, and the effects balance out. In the faithful pair, each parent has the same vested interest in the offspring. The end result is a normal-sized litter.

What about humans?
So far, about 80 of the 30,000 or so genes in the human genome are known to be imprinted. More importantly, most of these genes seem to play a role in directing fetal growth, and in the direction the hypothesis predicts if humans are not assumed to be strictly monogamous: genes expressed from the dad's copy generally increase resource transfer to the child, whereas maternally expressed genes reduce it. So our genes behave much like those of the promiscuous mice! However, imprinted loci are also implicated in behavioural and neurological conditions (such as Prader-Willi syndrome), indicating that there is more to understand about this phenomenon.

More support for the theory comes from early indications that there is very little imprinting in fish, amphibians, reptiles and birds. Since the hypothesis links imprinted genes to the acquisition of resources from the mother during development, and the offspring of these groups develop in eggs whose provisions are fixed at laying, this makes sense. But imprinting does also exist in seed plants, where the endosperm tissue acts as a placenta to feed the embryo; why this is the case is still unclear. There is a lot of research ongoing and more that needs to be done. As molecular tools improve, we will be able to dissect the roles of imprinted genes much more easily.

Ref:
1. Dawson, W.D. Fertility and size inheritance in a Peromyscus species cross. Evolution 19, 44-55.
2. Vrana et al. Genomic imprinting is disrupted in interspecific Peromyscus hybrids. Nature Genetics 20, 362-365.

Sunday, May 13, 2007

Global Warming Facts - Part 1

(I'm referring to news articles rather than scientific articles, and avoiding technical discussions in order to keep this article readable to everybody.)

If I told you that the Ganges and the Brahmaputra will both dry up by the year 2035, how hard would you laugh at me? Now, what if it was the world's leading scientific authority on climate change that told you?

I'm sure every one of us knows at least a little bit about global warming: that it is primarily caused by the greenhouse effect, and that greenhouse gas levels in the atmosphere have been rising because of industrialization and deforestation, that rising global temperatures will melt polar ice caps thus causing sea levels to rise, and so on. However, until recently, we've all been led to believe that we have a century or two to cut greenhouse emissions and quell the problem. The key phrase there is "until recently", because climate science has now progressed enough to tell us how bad the situation really is.

How bad will India be hit?
The first sentence of this article must have sent alarm bells ringing in your head. But a little thought will tell you why the Ganges will dry up, if not when: the Ganges, and indeed all perennial rivers in North India, are fed by glaciers in the Himalayas. As global temperatures rise, the glaciers receive snow later and start melting earlier, causing them to gradually retreat to colder regions. This news article [1] in the Hindu has a detailed discussion of the effect of global warming on glaciers. The world's leading authority on climate change, the Intergovernmental Panel on Climate Change (IPCC), believes that all North Indian rivers will turn seasonal, and ultimately dry up by the year 2035 itself, if global warming remains unchecked.

But there's more. Another news article [2] confirms our worst fears: inundation of low-lying areas along the coastline owing to rising sea levels; drastic increase in heat-related deaths; dropping water tables; decreased crop productivity are some of the horrors outlined for us. Falling crop productivity due to the change in the length of the seasons is of particular concern, because there is an acute shortage of arable land in our country. With the population still growing rapidly, and crop productivity dropping, combined with the fact that we are already facing a grain shortage this year and have been forced to procure from abroad, the situation appears dire.

Is it fair? The major contributors to the greenhouse effect thus far are the developed nations, and even on an absolute basis (let us not even go into a per-capita basis), India's contribution to global warming is very little. And yet, we will be among the first to suffer its effects, as the change in climate will decrease crop productivity near the equator but actually increase it in the temperate regions. Effectively, the third world has been offered a very raw deal: suffer for something you didn't do, and still bear the yoke of cutting emissions because, frankly, at this point our planet needs all the help it can get.

How high is safe?
Let us leave India's concerns aside for now, take a step back and look at the global picture. Global temperatures have risen about 0.6 C on average in the past century. There is a worldwide consensus in scientific circles that the adverse effects of global warming will probably be manageable for a rise in temperature of up to 2 C, but beyond that, melting ice caps, unbalanced ecosystems, drastically reduced crop yields and the like will cause worldwide disaster of monstrous proportions. If I haven't painted the picture clearly enough for you, read this article [3] and this article [4] detailing exactly what countries like Canada and Australia can expect in terms of "disaster".

But is this where you heave a sigh and think that, if it takes a century for the temperature to rise 0.6 C, we have plenty of time to remedy the situation before the rise reaches 2 C? Wrong. There is a lag between a rise in greenhouse gases and the corresponding rise in global temperature. Scientists give the analogy of heating a metal plate directly, and then indirectly by placing a metal block between the plate and the heat source: with the block in place, it takes some time before an increase in temperature at the source affects the plate, and if the source stabilizes or cools, the plate continues to warm for a while before stabilizing or cooling. Thus the warming we see now is largely the effect of greenhouse gas levels from earlier in the 20th century; we have yet to reap the effect of the carbon dioxide we are currently dumping into the atmosphere. And the rate at which greenhouse gases are being added to the atmosphere has been steadily accelerating over the past century.
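The lag can be illustrated with a toy 'one-box' energy-balance model, a standard teaching device rather than anything the IPCC uses; all parameter values below are illustrative round numbers of my own choosing.

```python
import math

# Toy one-box energy balance model: C * dT/dt = F(t) - lam * T
C   = 2.0e9   # effective heat capacity (ocean included), J per m^2 per K - illustrative
lam = 1.25    # climate feedback parameter, W per m^2 per K (~3 C per CO2 doubling)
dt  = 3.15e7  # one year in seconds

def forcing(year):
    """Illustrative CO2-only forcing: 280 ppm in 1900 rising to ~459 ppm CO2e by 2007."""
    co2e = 280.0 + (459.0 - 280.0) * max(0.0, (year - 1900) / 107.0) ** 2
    return 5.35 * math.log(co2e / 280.0)   # widely used simplified CO2 forcing formula

T = 0.0
for year in range(1900, 2101):
    F = forcing(min(year, 2007))           # freeze concentrations after 2007
    T += dt * (F - lam * T) / C
    if year % 25 == 0:
        print(year, round(T, 2), "C above 1900")
```

Even in this crude sketch, the temperature keeps climbing for decades after concentrations are frozen at their 2007 level - the 'plate and block' lag described above.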

So, at what level should we hold greenhouse gas concentrations in order to hold the global temperature rise to 2 C? The answer cannot be given in one sentence, because it involves probabilities: we cannot yet predict the temperature rise from carbon dioxide levels precisely. A recent study by Meinshausen et al. [5] gives some startling numbers, explained in much simpler terms in this press article [6]. The gist is that we are already past the safe limit. The current level of greenhouse gases in the atmosphere stands at about 459 ppm of carbon dioxide equivalent (the actual concentration of CO2, corrected to include the effect of the other greenhouse gases). According to the Meinshausen study, if atmospheric greenhouse concentrations are stabilized at 450 ppm, the probability of the global temperature rise crossing 2 C already reaches unacceptable levels (greater than 50%). The current EU target is 550 ppm - at that level, we would be looking at a rise of around 3 C! In other words, emissions across the world should already be decreasing, not increasing at an accelerating pace. Countries around the world should be spending a significant fraction of their GDPs to save the planet, yet everyone seems reluctant to move.
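For a back-of-envelope feel for these numbers (this is not the Meinshausen methodology, which propagates full probability distributions), one can combine the widely used simplified CO2 forcing formula with an assumed equilibrium climate sensitivity; with a mid-range sensitivity of about 3 C per doubling, the results land close to the figures quoted above.

```python
import math

def equilibrium_warming(co2e_ppm, sensitivity_per_doubling=3.0, preindustrial=280.0):
    """Rough equilibrium warming for a stabilized CO2-equivalent concentration.

    Uses the simplified forcing formula F = 5.35 * ln(C / C0) W/m^2 and scales by
    the assumed climate sensitivity (warming per doubling, i.e. per ~3.7 W/m^2).
    """
    forcing = 5.35 * math.log(co2e_ppm / preindustrial)
    return sensitivity_per_doubling * forcing / 3.7

for level in (450, 459, 550):
    print(level, "ppm CO2e ->", round(equilibrium_warming(level), 1), "C")
# 450 ppm -> ~2.1 C and 550 ppm -> ~2.9 C, in line with the article's figures;
# the large uncertainty in the sensitivity is why the study speaks in probabilities.
```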

Panels and Reports
I mentioned the IPCC earlier. The IPCC was formed by the UN and has been around since 1988. Over the years it has established itself as the world's leading authority on climate change. It publishes its findings periodically; the assessment reports published this year are the fourth set, and the most controversial, because they read more like a disaster-movie script than a scientific report. In fact, there had been protests that the IPCC was being alarmist in its previous report, and the UK government commissioned an independent study (a committee was appointed, led by the economist Nicholas Stern), whose findings were released at the end of October 2006. The Stern Review actually reported that the IPCC had understated the situation in the third assessment report. Climate science is far from exact, and the IPCC tends to err on the conservative side; there are already publications arguing that the IPCC has been conservative even in the fourth report - read this news article [7].

Perhaps the most important thing the fourth assessment report has accomplished is that it has finally laid to rest claims that global warming is a myth. Yes, until a few years ago there wasn't even a consensus on whether global warming is the fault of man, because the waters were muddied by studies showing that particulate pollution (aerosols), unlike the greenhouse gases that absorb heat radiated by the Earth, reflects incoming sunlight and so masks part of the warming. Further, it is believed that, geologically, the world is headed towards an ice age, and increasing global temperatures were attributed by some to periodic changes in the Sun's output. Now, at last, these speculations have been laid to rest: the IPCC has stated that there is a 90% probability that the observed increase in global temperatures is anthropogenic (caused by man), primarily through greenhouse gases - what we've suspected all along. India, too, has finally woken up to the threat, and has set up a panel [Citation needed] to investigate the specific effects of global warming on India over the next few decades and what remedial measures are feasible. The panel is to be headed by Dr. Pachauri himself, the current head of the IPCC.

To be continued...
In the next part: The Kyoto Protocol, Emissions Trading, Extreme weather events, Bush-bashing, cows, bees and more!

References

[1] The Great Himalayan Meltdown
[2] Climate Change Will Devastate India
[3] Dire consequences if global warming exceeds 2 degrees says IUCN release
[4] Two degrees of separation from disaster
[5] M. Meinshausen "What Does a 2 C Target Mean for Greenhouse Gas Concentrations? A Brief Analysis Based on Multi-Gas Emission Pathways and Several Climate Sensitivity Uncertainty Estimates." in H. Schellnhuber, et al., eds. Avoiding Dangerous Climate Change (Cambridge University Press, New York, 2006)
[6] The rich world's policy on greenhouse gas now seems clear: millions will die
[7] Some scientists protest draft of warming report

Wednesday, May 09, 2007

Attention Concerned Scientists in IN, KY and OH

If you are a scientist in Kentucky, Indiana or Ohio and are concerned about the scientifically inaccurate materials at Ken Ham's creationist museum, please sign this.

Statement of Concern
We, the undersigned scientists at universities and colleges in Kentucky, Ohio, and Indiana, are concerned about scientifically inaccurate materials at the Answers in Genesis museum. Students who accept this material as scientifically valid are unlikely to succeed in science courses at the college level. These students will need remedial instruction in the nature of science, as well as in the specific areas of science misrepresented by Answers in Genesis.

Via Pharyngula
