Newswise News Feed


Tuesday, March 31, 2009

Most Popular: Octo-Moms at Risk for Postpartum Depression

The tabloid media has kept a close eye on the slow-motion train wreck of Octo-Mom Nadya Suleman, a California woman who recently gave birth to octuplets, bringing her grand total of children to 14. TV interviews, absurd calls to 911, and scuffles with paparazzi have shown that this woman may not be entirely stable. New research reveals that one more complication for the Octo-Mom could be a severe case of postpartum depression.

Researchers at the Johns Hopkins Bloomberg School of Public Health recently examined the relationship between multiple births and maternal depressive symptoms. They found that multiple births increased the odds of maternal depression: mothers of multiples had 43 percent greater odds of moderate to severe depression compared to mothers of single-born children.

Complicating the issue further is evidence that few mothers with depressive symptoms, regardless of multiple-birth status, reported talking to a mental health specialist or a general medical provider.

"The low numbers of women receiving mental health counseling despite symptoms reinforces the need for facilitating better referral of patients with depressive symptoms," said Cynthia Minkovitz, MD, MPP, senior author of the study

From the article:

Mothers of multiples have 43 percent increased odds of having moderate to severe depressive symptoms nine months after giving birth compared to mothers of single-born children, according to researchers at the Johns Hopkins Bloomberg School of Public Health. Researchers examined the relationship between multiple births and maternal depressive symptoms and found that multiple births increased the odds of maternal depression, and that few mothers with depressive symptoms, regardless of multiple-birth status, reported talking to a mental health specialist or a general medical provider. The results are published in the April 1, 2009, issue of Pediatrics.

"Our findings suggest that 19 percent of mothers of multiples had moderate to severe depressive symptoms nine months after delivery, compared to 16 percent among mothers of singletons," said Yoonjoung Choi, DrPH, lead author of the study and a research associate with the Bloomberg School's Department of International Health. "Mothers with a history of hospitalization due to mental health problems or a history of alcohol or drug abuse also had significantly increased odds. Non-Hispanic black mothers had higher odds compared to non-Hispanic white mothers. Mothers who were currently married, Hispanic, or with a high household socioeconomic status were less likely to have depressive symptoms." Choi, along with colleagues, used data from the Early Childhood Longitudinal Study--Birth Cohort, a nationally representative sample of children born in 2001. They measured depressive symptoms in mothers using an abbreviated version of the Center for Epidemiologic Studies Depression (CES-D) scale. Researchers examined the association between multiple births and maternal mental health, given the rapidly increasing multiple births rate in the U.S. over the last two decades. They also found that, among the mothers of both singleton and multiples, only 27 percent reported talking to a mental health specialist or a general medical provider when experiencing depressive symptoms. Researchers believe greater attention is needed in pediatric settings to address maternal depression in families with multiple births.

"The low numbers of women receiving mental health counseling despite symptoms reinforces the need for facilitating better referral of patients with depressive symptoms," said Cynthia Minkovitz, MD, MPP, senior author of the study and an associate professor with the Bloomberg School's Department of Population, Family and Reproductive Health. "Pediatric practices should make an additional effort to educate new and expecting parents of multiples regarding their increased risk for maternal postpartum depression. Furthermore, well-child visits are potentially valuable opportunities to provide education, screening and referrals for postpartum depression among mothers of multiples; such efforts require linkages between pediatric and adult systems of care and adequate community mental health resources."

"Multiple Births Are a Risk Factor for Postpartum Maternal Depressive Symptoms" was written by Yoonjoung Choi, David Bishai and Cynthia Minkovitz.

read the full article...

Most Popular: Predicting Breast Cancer Metastasis

Predicting breast cancer metastasis may get easier with a newly identified marker called TMEM, short for Tumor Microenvironment of Metastasis.

TMEM density is determined by measuring cellular activity in a concentrated area of the tumor, identifying three types of cells and how they interact: invasive carcinoma cells, perivascular white blood cells (macrophages), and the endothelial cells that line vessel walls. A higher TMEM density was associated with the development of distant organ metastasis via the bloodstream -- the most common cause of death from breast cancer.

"If patients can be better classified as either low risk or high risk for metastasis, therapies can be custom tailored to patients, preventing over-treatment or under-treatment of the disease," adds first author Dr. Brian D. Robinson, resident in Anatomic Pathology at NewYork-Presbyterian Hospital/Weill Cornell Medical Center.

From the article:

Researchers at NewYork-Presbyterian Hospital/Weill Cornell Medical Center have identified a new marker for breast cancer metastasis called TMEM, for Tumor Microenvironment of Metastasis. As reported in the March 24 online edition of the journal Clinical Cancer Research, density of TMEM was associated with the development of distant organ metastasis via the bloodstream -- the most common cause of death from breast cancer.


The National Cancer Institute (NCI)-funded translational study could lead to the first test to predict the likelihood of breast cancer metastasis via the bloodstream -- a development that could change the way breast cancer is treated. An estimated 40 percent of breast cancer patients relapse and develop metastatic disease. About 40,000 women die of metastatic breast cancer every year.


"Currently, anyone with a breast cancer diagnosis fears the worst -- that the cancer will spread and threaten their lives. A tissue test for metastatic risk could alleviate those worries, and prevent toxic and costly measures like radiation and chemotherapy," says senior author Dr. Joan G. Jones, professor of clinical pathology and laboratory medicine at Weill Cornell Medical College and director of Anatomic Pathology at NewYork-Presbyterian Hospital/Weill Cornell Medical Center.


"If patients can be better classified as either low risk or high risk for metastasis, therapies can be custom tailored to patients, preventing over-treatment or under-treatment of the disease," adds first author Dr. Brian D. Robinson, resident in Anatomic Pathology at NewYork-Presbyterian Hospital/Weill Cornell Medical Center.


The Weill Cornell investigators set out to build on previous research by co-author Dr. John S. Condeelis of the Albert Einstein College of Medicine. Working in animal models, he identified a link between blood-borne or systemic metastasis and a three-part association between invasive carcinoma cells, perivascular white blood cells (macrophages) and the endothelial cells that line vessel walls. To confirm this finding in humans, Drs. Jones and Robinson developed a triple immunostain for human breast cancer samples that simultaneously labels the three cell types, a combination they named TMEM (Tumor Microenvironment of Metastasis).


In a case-control study, they performed a retrospective analysis of tissue samples from 30 patients with invasive ductal carcinoma of the breast who developed systemic, distant-organ metastases. These samples were compared to matched controls that had only localized disease (i.e., invasive ductal carcinoma limited to the breast or with regional lymph node metastasis only). All patients were female and underwent primary resection of their breast cancer at NewYork-Presbyterian Hospital/Weill Cornell Medical Center between 1992 and 2003.


They found that TMEM density was more than double in the group of patients who developed systemic metastases compared with the patients with only localized breast cancer (median of 105 vs. 50, respectively). Offering further evidence in support of the TMEM concept, they found that in well-differentiated tumors, where the outcome is generally good, the TMEM count was low.
Notably, TMEM density was associated with the development of distant-organ metastasis, independent of lymph node status and tumor grade.


"Traditionally, the likelihood of breast cancer metastasis is estimated based on tumor size, tumor differentiation -- how similar or dissimilar the tumor is compared to normal breast tissue -- and whether it has spread to the lymph nodes. While these are useful measures, TMEM density directly reflects the blood-borne mechanism of metastasis, and therefore may prove to be more specific and directly relevant," says Dr. Jones.


The researchers say the next step will be to validate the findings in a larger sample group. Also on the agenda is identifying a threshold TMEM density for metastasis risk, and streamlining the process for measuring TMEM.


Breast cancer is the most prevalent malignant disease of women in the developed world, apart from non-melanoma skin cancers, with approximately one in eight women in the United States being diagnosed with breast cancer at some time in their lives. While an estimated 10 percent to 15 percent of patients have an aggressive form of the disease that metastasizes within three years after initial diagnosis, metastasis can take 10 years or longer to occur. To decrease the risk for the emergence of metastatic tumors, approximately 80 percent of breast cancer patients are treated with adjuvant chemotherapy. The clinical benefit is a 3 percent to 10 percent increase in 15-year survival, depending upon the age of the patient at diagnosis.

read the full article...

Monday, March 30, 2009

Diabetes Self-Care Might Not Mean Lower Blood Sugar

Proactive people with diabetes might take good care of themselves and believe they have the condition under control, but that doesn't necessarily mean their blood sugar is well controlled. Others believe that whatever they do, they have no control over their diabetes. According to a study in Health Services Research, these and other factors affect how people manage their diabetes risk.

"People are not always adherent in managing their diabetes care, which affects overall health and the risk of diabetic complications," said lead study author Frank Sloan, Ph.D.

From the article:

People with diabetes who feel they have better control over life events are more likely to take good care of themselves and to believe they have the condition under control, but these factors do not translate to improved blood sugar levels, according to a new study of 1,034 adults.

Participants' responses to survey items on their risk tolerance, concern about their future and beliefs about their longevity had no correlation to clinical measures of their hemoglobin A1c levels, which reflect average blood glucose (or blood sugar) during the previous two to three months. The study, which appears online in the journal Health Services Research, also found no differences by race or Hispanic ethnicity in how people took charge of their self-care.
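
For context (this conversion is not part of the study), hemoglobin A1c is often translated into the average glucose level it reflects using the linear relationship from the ADAG trial, eAG (mg/dL) ≈ 28.7 × A1c − 46.7. A rough sketch, assuming that standard conversion:

```python
# Rough sketch (assumed, not from the study): translate a hemoglobin A1c
# percentage into the estimated average glucose (eAG) it reflects over the
# prior two to three months, using the commonly cited ADAG regression.

def estimated_average_glucose(a1c_percent: float) -> float:
    """eAG in mg/dL from A1c (%), per the ADAG linear fit."""
    return 28.7 * a1c_percent - 46.7

for a1c in (6.0, 7.0, 8.0):
    print(f"A1c {a1c:.1f}% ~= {estimated_average_glucose(a1c):.0f} mg/dL average glucose")
```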

People are not always adherent in managing their diabetes care, which affects overall health and the risk of diabetic complications, said lead study author Frank Sloan, Ph.D.

"What we are able to do here is bring some new measures to bear," said Sloan, a professor of health policy and management at the Center for Health Policy at Duke University.

Some people believe that whatever they do, they have no control over their diabetes; others are very tolerant of the risks of diabetes; and, some have a philosophy that they will live for today and not care about the future, Sloan said. "One result that comes through is that people who have self-control over life in general are more likely to adhere," he said.

"This area of study is valuable as we attempt to better understand the relationship between how people from all ethnic and cultural backgrounds perceive their destinies with diabetes," said Sue McLaughlin, a registered dietitian and president of health care and education with the American Diabetes Association.

"This study illustrates the insidious nature of hyperglycemia: it is a silent and deadly killer," added Miller, who had no affiliation with the study. Many people with diabetes assume they are in good health because they do not feel bad, she said.

read the full article...

Embrace the Dwight Schrutes in Your Office for Better Performance

Nobody wants to share a cubicle with a new hire like Dwight Schrute. The beet-farming volunteer sheriff's deputy/paper salesman creates many awkward moments because of his differences with co-workers on NBC's "The Office."

But according to new research, better decisions come from teams that include a "socially distinct newcomer." That's psychology-speak for someone who is different enough to bump other team members out of their comfort zones.

Researchers noticed this effect after conducting a traditional group problem-solving experiment. The twist was that a newcomer was added to each group about five minutes into their deliberations. And when the newcomer was a social outsider, teams were more likely to solve the problem successfully.

From the article:

The research is published in the Personality and Social Psychology Bulletin.

"One of the most-cited benefits of diversity is the infusion of new ideas and perspectives," said study co-author Katie Liljenquist, assistant professor of organizational leadership at BYU's Marriott School of Management. "And while that very often is true, we found the mere presence of a newcomer who is socially distinct can really shake up the group dynamic. That leads to discomfort, but also to a better process that ultimately yields superior outcomes."

The key factor is simply whether newcomers are distinct in some way from the other group members.

"Remember, socially 'distinct' doesn't necessarily mean socially 'inept,'" says Liljenquist, whose co-authors on the paper are Northwestern's Katherine Phillips and Stanford's Margaret Neale. "Dwight's upbringing and past work history - in addition to his bobblehead doll collection - all contribute to the measure of diversity he brings to 'The Office' melting pot."

The paper adds a new wrinkle to the wealth of research on teams, says Melissa Thomas-Hunt, associate professor at Cornell's Johnson School of Management.

"[This research] is groundbreaking in that it highlights that the benefits of disparate knowledge in a team can be unleashed when newcomers actually share opinions of knowledge with old-timers but are socially different," Thomas-Hunt says. "It is the tension between social dissimilarity and opinion similarity that prompts heightened effectiveness in diverse teams."

What explains the results?

According to Liljenquist, newcomers in the experiment didn't necessarily ask tougher questions, possess novel information, or doggedly maintain a conflicting point of view. Just being there was enough to change the dynamic among old-timers who shared a common identity.

When a member of the group discovered that he agreed with the new outsider, he felt alienated from his fellow old-timers -- consequently, he was very motivated to explain his point of view on its merits so that his peers wouldn't lump him in with the outsider.

The person who found himself disagreeing with the in-group -- and instead agreeing with an outsider - felt very uncomfortable. An opinion alliance with an outsider put his social ties with other team members at risk.

"Socially, that can be very threatening," Liljenquist says. "These folks are driven to say, 'Wait, the fact that I disagree with this outsider doesn't make me weird. Something more is going on here; let's figure out what's at the root of our disagreement.' The group then tends to analyze differing opinions and critical information much more thoroughly, and that facilitates much better decision-making results."

Another revelation

The experiment also revealed a fallacy in the assumptions we make about our own effectiveness in groups. The subjects in the experiment were members of different fraternities and sororities. In general, when the newcomer was from the same sorority or fraternity as the other team members, the group reported that it worked well together, but was less likely to correctly solve the problem.

In contrast, when the newcomer was a member of a rival sorority or fraternity, the opposite was true -- these groups felt they worked together less effectively, yet they significantly outperformed socially homogenous groups.

"What's really distinct about this research is that, from a self-reporting perspective, what people perceive to be beneficial turns out to be dead wrong," Liljenquist says. "The teams that felt they worked least effectively together were ironically the top performers!"

In the workplace

Common "social distinctions" in today's workplace, Liljenquist says, would include:

* One employee from accounting working on a team in which everyone else is from sales
* An employee of a company that had just been bought out finding herself on a team of people from the acquiring firm
* An out-of-stater finding himself on a team full of natives of the company's home state

To help employees in those situations cope, managers would be wise to explain that such conflict can actually generate better results.

"Without that information people just assume, 'This is really uncomfortable. My team obviously must not being working effectively,'" Liljenquist says. "The experience in diverse teams may not always be a feel-good session, but if employees know that from the outset, they can better deal with inevitable conflicts and recognize the potential benefits -- that the affective pains can translate to real performance gains."

Although Liljenquist acknowledges many other cases for diversity in the workplace, she contends that "reaping the benefits of diverse workgroups doesn't necessarily require that newcomers bring unique perspectives or expertise to the table. Simply having people around us who differ on some dimension -- whether it is functional background, education, race or even a different fraternity -- drives a very different decision-making process at a group level because of the social and emotional conflict we experience in their presence."

read the full article...

Friday, March 27, 2009

Most Popular: "First Economical Process" for Making Biodiesel Fuel from Algae

Chemists have developed the first economical, eco-friendly process to convert algae oil into biodiesel fuel. Turning algae into biofuel was previously a high-cost process, but this new process utilizes a re-usable, waste-free catalyst that reduces production costs by about 40%.

This new process makes algae "the most promising source for mass biodiesel production to replace transportation fuel in the United States," says Ben Wen, Ph.D., lead researcher on the project.

From the article:
Chemists reported development of what they termed the first economical, eco-friendly process to convert algae oil into biodiesel fuel -- a discovery they predict could one day lead to U.S. independence from petroleum as a fuel.

One of the problems with current methods for producing biodiesel from algae oil is the processing cost, and the New York researchers say their innovative process is at least 40 percent cheaper than that of others now being used. Supply will not be a problem: There is a limitless amount of algae growing in oceans, lakes, and rivers throughout the world. Another benefit from the "continuously flowing fixed-bed" method to create algae biodiesel, they add, is that there is no wastewater produced to cause pollution.

"This is the first economical way to produce biodiesel from algae oil," according to lead researcher Ben Wen, Ph.D., vice president of United Environment and Energy LLC, Horseheads, N.Y. "It costs much less than conventional processes because you would need a much smaller factory, there are no water disposal costs, and the process is considerably faster."

A key advantage of this new process, he says, is that it uses a proprietary solid catalyst developed at his company instead of liquid catalysts used by other scientists today. First, the solid catalyst can be used over and over. Second, it allows the continuously flowing production of biodiesel, compared to the method using a liquid catalyst. That process is slower because workers need to take at least a half hour after producing each batch to create more biodiesel. They need to purify the biodiesel by neutralizing the base catalyst by adding acid. No such action is needed to treat the solid catalyst, Wen explains.
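
The throughput argument can be made concrete with a toy comparison; the batch size and reaction time below are illustrative assumptions, not figures from Wen's team, and only the half-hour of between-batch cleanup comes from the article.

```python
# Toy comparison of batch (liquid catalyst) vs. continuous fixed-bed (solid
# catalyst) processing. The batch volume and reaction time are made-up
# assumptions for illustration; the 30-minute cleanup is the figure cited above.

HOURS_PER_DAY = 24
batch_volume_l = 1_000   # assumed liters of biodiesel per batch
reaction_min = 60        # assumed reaction time per batch
cleanup_min = 30         # neutralization/purification downtime per batch

minutes = HOURS_PER_DAY * 60
batch_output = (minutes // (reaction_min + cleanup_min)) * batch_volume_l
continuous_output = (minutes // reaction_min) * batch_volume_l  # no downtime

print(f"Batch line:      {batch_output:,} L/day")
print(f"Continuous line: {continuous_output:,} L/day")
# Under these assumptions, eliminating the downtime yields about 50% more output.
```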

He estimates algae has an "oil-per-acre production rate 100-300 times the amount of soybeans, and offers the highest yield feedstock for biodiesel and the most promising source for mass biodiesel production to replace transportation fuel in the United States." He says that his firm is now conducting a pilot program for the process with a production capacity of nearly 1 million gallons of algae biodiesel per year. Depending on the size of the machinery and the plant, he said it is possible that a company could produce up to 50 million gallons of algae biodiesel annually.

Wen also says that the solid catalyst continuous flow method can be adapted to mobile units so that smaller companies wouldn't have to construct plants and the military could use the process in the field.

The National Science Foundation funded Wen's research.

read the full article...

Most Popular: Health IT Stimulus Could Lead to Major Boom

A computerized health care software platform would improve delivery of health care, increase physician productivity, foster advances in science, as well as stimulate job and economic growth, according to researchers writing in The New England Journal of Medicine.

With $19 billion in funding provided for health care information technology (IT) in President Obama's economic stimulus package, this health care software platform may soon become a reality.

Authors Kenneth Mandl and Isaac Kohane recommend that the government mandate the development of a platform that will support applications for clinical care, public health and research. They cite the success of software platforms such as the one created for the iPhone as an example of the future possibilities for such a technology.

From the article:

The more than $19 billion in funding provided for health care information technology (IT) in President Obama's economic stimulus package offers a unique opportunity to deliver on the promise of computerized health care, say researchers from Children's Hospital Boston in a Perspective article published in the March 26 issue of The New England Journal of Medicine (NEJM). The co-authors argue that the development of a platform model - drawing on the success of software platforms such as the one created for the iPhone - could create a flexible health information infrastructure that will improve delivery of health care, increase physician productivity, foster advances in science, as well as stimulate job and economic growth.

"This is a critical point for health IT," said co-author Kenneth Mandl, MD, PhD, of the Children's Hospital Informatics Program. "Pouring money into existing health IT systems would be the most natural approach to take with the stimulus, but it may also be the wrong approach. We need to pause and make decisions that will have long-term benefits. Rather than simply deploying existing technologies we need to establish clarity on what the characteristics of an ideal system would be."

In their NEJM article, Mandl and Isaac Kohane, MD, PhD, also of CHIP, propose that the government, through the Department of Health and Human Services (DHHS), mandate the development of a platform that will support applications for clinical care, public health and research. Much like the software platform developed by Apple for the iPhone, or the Indivo platform created by CHIP researchers that has emerged as a model for personal health records, the co-authors encourage DHHS to mandate the creation of a platform that will support an ecosystem of applications (e.g., order entry systems, medication reconciliation systems, patient communication systems) which can be developed by existing vendors or new health IT developers.

The platform they suggest would support:
  • Liquidity of data
  • Substitutability of applications
  • Open standards
  • A diversity of applications

"Current systems are monolithic, inflexible and not able to be easily customized to meet health care providers' needs," said Kohane. "A model that supports substitutability will encourage competing applications, market innovation and evolution, and give providers the freedom to use/not use applications as they wish. With such a model, if one application doesn't have what you need at your institution or practice, you could easily find and download one that does, or send out an RFP and one could be developed."

The development of such a model could stimulate health IT and the economy, the co-authors note, as more vendors and developers are able to participate in its success and growth. Such an approach and investment by the government, they predict, would be catalytic to health care as we know it.
Mandl K. and Kohane I. No Small Change for the Health Information Economy. New England Journal of Medicine, March 26, 2009.

read the full article...

Most Popular: We Saw It Coming: Asteroid Monitored from Space to Impact

"We knew that locating an incoming object while still in space could be done, but it had never actually been demonstrated until now," says Mark Boslough, a member of the research team that successfully tracked an asteroid in space before it entered the atmosphere, broke up, and bits of it landed on the ground.

On October 6, the Catalina Sky Survey telescope detected the object, and numerous observatories then captured images of it. Computations correctly predicted the impact would occur 19 hours after discovery in the Nubian Desert of northern Sudan, where the object arrived as a fireball in the predawn hours of October 7. Analyses were performed while the asteroid was en route, and its surviving pieces were later located by meteorite hunters in an intense search.

The event tested the ability of society to respond very quickly to a predicted impact, as well as predict the arrival time and location on Earth of the asteroid's surviving parts.

From the article:

Reports by scientists of meteorites striking Earth in the past have resembled police reports of so many muggings -- the offenders came out of nowhere and then disappeared into the crowd, making it difficult to get more than very basic facts.

Now an international research team has been able to identify an asteroid in space before it entered Earth's atmosphere, enabling computers to determine its area of origin in the solar system as well as predict the arrival time and location on Earth of its shattered surviving parts. "I would say that this work demonstrates, for the first time, the ability of astronomers to discover and predict the impact of a space object," says Sandia National Laboratories researcher Mark Boslough, a member of the research team.

Perhaps more importantly, the event tested the ability of society to respond very quickly to a predicted impact, says Boslough. "In this case, it was never a threat, so the response was scientific. Had it been deemed a threat -- a larger asteroid that would explode over a populated area -- an alert could have been issued in time that could potentially save lives by evacuating the danger zone or instructing people to take cover."

The profusion of information in this case also helps meteoriticists learn the orbits of parent bodies that yield various types of meteorites.

Such knowledge could help future space missions explore or even mine the asteroids in Earth-crossing orbits, Boslough says.

The four-meter-diameter asteroid, called 2008 TC3, was initially sighted by the automated Catalina Sky Survey telescope at Mount Lemmon, Ariz., on Oct. 6. Numerous observatories, alerted to the invader, then imaged the object. Computations correctly predicted impact would occur 19 hours after discovery in the Nubian Desert of northern Sudan.

According to NASA's Near Earth Object program, "A spectacular fireball lit up the predawn sky above Northern Sudan on October 7, 2008."

A wide variety of analyses were performed while the asteroid was en route and after its surviving pieces were located by meteorite hunters in an intense search.

Researchers, listed in the paper describing this work in the March 26 issue of the journal Nature, range from the SETI Institute, the University of Khartoum, Juba University (Sudan), Sandia, Caltech, NASA Johnson Space Center and NASA Ames, to other universities in the U.S., Canada, Ireland, England, Czech Republic and the Netherlands.

Sandia researcher Dick Spalding interpreted recorded data about the atmospheric fireball, and Boslough estimated the aerodynamic pressure and strength of the asteroid based on the estimated burst altitude of 36 kilometers.

Searchers have recovered 47 meteorites so far -- offshoots from the disintegrating asteroid, mostly immolated by its encounter with atmospheric friction -- with a total mass of 3.95 kilograms.

The analyzed material showed carbon-rich materials not yet represented in meteorite collections, indicating that fragile materials still unknown may account for some asteroid classes. Such meteorites are less likely to survive due to destruction upon entry and weathering once they land on Earth's surface.

"Chunks of iron and hard rock last longer and are easier to find than clumps of soft carbonaceous materials," says Boslough.

"We knew that locating an incoming object while still in space could be done, but it had never actually been demonstrated until now," says Boslough. "In this post-rational age where scientific explanations and computer models are often derided as 'only theories,' it is nice to have a demonstration like this."

read the full article...

Wednesday, March 25, 2009

Most Shared: Omit Needless Words: The Elements of Style Turns 50

Grammarians rejoice! The classic little book "The Elements of Style" - the English classroom staple that urges omitting needless words, explains subject-verb agreement and savors the active voice - turns 50. The story behind it began at Cornell University.

In a 1957 New Yorker column, writer E.B. White (Cornell Class of 1921) praised "The Elements of Style" by William Strunk Jr., his former Cornell English professor, as an "attempt to cut the vast tangle of English rhetoric down to size and write its rules and principles on the head of a pin." Further, he described the 43-page treatise, first published in 1918, as a "case for cleanliness, accuracy, and brevity in the use of English." White's endorsement of "The Elements of Style" quickly led his publisher Macmillan to ask White to update and expand Strunk's terse primer on grammar and usage.



From the article:

Published April 16, 1959, Strunk and White's "The Elements of Style" went on to win critical acclaim and total sales of more than 10 million copies. The tome proved so popular that it earned the nickname "Strunk and White." In August 1959, the book appeared on The New York Times best-seller list along with another Cornell professor's book, "Lolita," by Vladimir Nabokov. Strunk and White peaked at No. 3.

New editions followed in 1972, 1979 and 1999; an illustrated edition was issued in 2005, and a 50th anniversary edition appears this month, published by Pearson Education Inc.

Teacher and book exerted a profound influence on White, author of the children's classics "Charlotte's Web" and "Stuart Little" and essayist for the New Yorker for six decades.

After earning his Ph.D. from Cornell in 1896, Strunk taught at the university for his entire career, from 1899 to 1937. He first published his book privately for use in his own English classes. Among Strunk's legendary commandments: "Omit needless words!"; "Do not break sentences in two"; and "Use the active voice."

"He was a memorable man, friendly and funny," White wrote of Strunk in a 1957 New Yorker column. "Under the remembered sting of his kindly lash, I have been trying to omit needless words since 1919," he wrote, adding in his introduction that the book "still seems to maintain its original poise, standing in a drafty time, erect, resolute and assured."

White gave his papers to Cornell during his lifetime and more came after his death in 1985. The publisher's pitch letter to White (which he could not resist editing) as well as subsequent correspondence is in the library's Division of Rare and Manuscript Collections' E.B. White Collection. It includes Strunk's first edition and thousands of letters. White's papers bear the rigorous editing he imposed on his own work. In minute detail, letters cover typesetting, royalties and pricing the book (hardcover, $2.50; paperback, $1).

In one letter, White attested that Strunk took brevity seriously. Because he was concise, his lectures were very short - so he would repeat himself three times.

"The manuscripts let you see White's writing process," said University Archivist Elaine Engst. "The most impressive White manuscript we have is 'Charlotte's Web,' because the first draft is completely different from the finished book. What's wonderful about looking at his drafts is that you see that even for somebody who was pretty close to a perfect writer, how much work went into his books: a ton of it.

"People felt they knew White," said Engst of White's thousands of correspondents. "They feel a very personal connection. He's somebody you would like to have known, and that comes through."

read the full article...

Most Popular: The First Domesticated Maize

Maize was domesticated from its wild ancestor more than 8700 years ago.

Studies confirmed some time ago that maize derived from teosinte, a large wild grass that has five species growing in Mexico, Guatemala and Nicaragua. This recent discovery in Mexico's Central Balsas River Valley is the oldest biological evidence of domesticated maize.

"We went to the area where the closest relative to maize grows, looked for the earliest maize and found it," said lead researcher Anthony Ranere of Temple University.

Researchers found maize and squash phytoliths -- rigid microscopic bodies found in many plants -- in lakeside sediments, and traces of maize starch in crevices of many of the tools unearthed at an excavated site dating back at least 8700 years.

From the article:
Maize was domesticated from its wild ancestor more than 8700 years ago, according to biological evidence uncovered by researchers in Mexico's Central Balsas River Valley. This is the earliest dated evidence -- by 1200 years -- for the presence and use of domesticated maize.


The researchers, led by Anthony Ranere of Temple University and Dolores Piperno of the Smithsonian National Museum of Natural History, reported their findings in two studies -- "The cultural and chronological context of early Holocene maize and squash domestication in the Central Balsas River Valley, Mexico" and "Starch grain and phytolith evidence for early ninth millennium B.P. maize from the Central Balsas River Valley, Mexico" -- being published in the PNAS Early Edition, March 24. According to Ranere, recent studies have confirmed that maize derived from teosinte, a large wild grass that has five species growing in Mexico, Guatemala and Nicaragua. The teosinte species that is closest to maize is Balsas teosinte, which is native to Mexico's Central Balsas River Valley.


"We went to the area where the closest relative to maize grows, looked for the earliest maize and found it," said Ranere. "That wasn't surprising since molecular biologists had determined that Balsas teosinte was the ancestral species to maize. So it made sense that this was where we would find the earliest domestication of maize."


The study began with Piperno, a Temple University anthropology alumna, finding evidence in the form of pollen and charcoal in lake sediments that forests were being cut down and burned in the Central Balsas River Valley to create agricultural plots by 7000 years ago. She also found maize and squash phytoliths -- rigid microscopic bodies found in many plants -- in lakeside sediments.


Ranere, an archaeologist, joined in the study to find rock shelters or caves where people lived in that region thousands of years ago. His team carried out excavations in four of the 15 caves and rock shelters visited in the region, but only one of them yielded evidence for the early domestication of maize and squash.


Ranere excavated the site and recovered numerous grinding tools. Radiocarbon dating showed that the tools dated back at least 8700 years. Although grinding tools were found beneath the 8700 year level, the researchers were not able to obtain a radiocarbon date for the earliest deposits. Previously, the earliest evidence for the cultivation of maize came from Ranere and Piperno's earlier research in Panama where maize starch and phytoliths dated back 7600 years.


Ranere said that maize starch, which is different from teosinte starch, was found in crevices of many of the tools that were unearthed.


"We found maize starch in almost every tool that we analyzed, all the way down to the bottom of our site excavations," Ranere said. "We also found phytoliths that comes from maize or corn cobs, and since teosinte doesn't have cobs, we knew we had something that had changed from its wild form."
Ranere said that their findings also supported the premise that maize was domesticated in a lowland seasonal forest context, as opposed to being domesticated in the arid highlands as many researchers had once believed.


"For a long time, I though it strange that researchers argued about the location and age of maize domestication yet never looked in the Central Balsas River Valley, the homeland for the wild ancestor," said Ranere. "Dolores was the first one to do it.'

read the full article...

Tuesday, March 24, 2009

Most Popular: Red and Processed Meat Increase Risk of Death

Eating red meat and processed meat modestly increases risk of death from cancer or heart disease. In a pool of 500,000 study participants, the 20% who consumed the most red and processed meat had higher rates of death than the 20% who consumed the least.

So, how much is too much? The highest-consumption group ate roughly 6 times more red meat and 14 times more processed meat than the lowest-consumption group (62.5 grams vs. 9.8 grams per 1,000 calories for red meat, and 22.6 grams vs. 1.6 grams per 1,000 calories for processed meat).

In contrast, a higher intake of white meat appeared to be associated with a slightly decreased risk for overall death and cancer death.

Though researchers have yet to pin down the specific mechanisms linking high-meat diets to death, many of us may want to practice more restraint when it comes to red meat and processed meats.


From the article:

Individuals who eat more red meat and processed meat appear to have a modestly increased risk of death from all causes and also from cancer or heart disease over a 10-year period, according to a report in the March 23 issue of Archives of Internal Medicine, one of the JAMA/Archives journals. In contrast, a higher intake of white meat appeared to be associated with a slightly decreased risk for overall death and cancer death.

"Meat intake varies substantially around the world, but the impact of consuming higher levels of meat in relation to chronic disease mortality [death] is ambiguous," the authors write as background information in the article. Rashmi Sinha, Ph.D., and colleagues at the National Cancer Institute, Rockville, Md., assessed the association between meat intake and risk of death among more than 500,000 individuals who were part of the National Institutes of Health-AARP Diet and Health Study. Participants, who were between 50 and 71 years old when the study began in 1995, provided demographic information and completed a food frequency questionnaire to estimate their intake of white, red and processed meats. They were then followed for 10 years through Social Security Administration Death Master File and National Death Index databases.

During the follow-up period, 47,976 men and 23,276 women died. The one-fifth of men and women who ate the most red meat (a median or midpoint of 62.5 grams per 1,000 calories per day) had a higher risk for overall death, death from heart disease and death from cancer than the one-fifth of men and women who ate the least red meat (a median of 9.8 grams per 1,000 calories per day), as did the one-fifth of men and women who ate the most vs. the least amount of processed meat (a median of 22.6 grams vs. 1.6 grams per 1,000 calories per day).
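
Because the study normalizes intake per 1,000 calories, the absolute amounts depend on how much a person eats; a quick sketch of the conversion, assuming an illustrative 2,000-calorie daily diet:

```python
# Quick sketch: convert the study's intake figures (grams per 1,000 calories)
# into absolute grams per day. The 2,000-calorie diet is an assumption for
# illustration; the study itself reports intake normalized per 1,000 kcal.

DAILY_CALORIES = 2_000

intake_g_per_1000_kcal = {
    "red meat, highest fifth": 62.5,
    "red meat, lowest fifth": 9.8,
    "processed meat, highest fifth": 22.6,
    "processed meat, lowest fifth": 1.6,
}

for group, grams in intake_g_per_1000_kcal.items():
    print(f"{group}: {grams * DAILY_CALORIES / 1000:.0f} g/day")
# Highest vs. lowest fifth works out to roughly 6x for red meat and 14x for processed meat.
```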

When comparing the one-fifth of participants who ate the most white meat to the one-fifth who ate the least white meat, those with high white meat intake had a slightly lower risk for total death, death from cancer and death from causes other than heart disease or cancer.

"For overall mortality, 11 percent of deaths in men and 16 percent of deaths in women could be prevented if people decreased their red meat consumption to the level of intake in the first quintile [one-fifth]. The impact on cardiovascular disease mortality was an 11 percent decrease in men and a 21 percent decrease in women if the red meat consumption was decreased to the amount consumed by individuals in the first quintile," the authors write. "For women eating processed meat at the first quintile level, the decrease in cardiovascular disease mortality was approximately 20 percent."

There are several mechanisms by which meat may be associated with death, the authors note. Cancer-causing compounds are formed during high-temperature cooking of meat. Meat also is a major source of saturated fat, which has been associated with breast and colorectal cancer. In addition, lower meat intake has been linked to a reduction in risk factors for heart disease, including lower blood pressure and cholesterol levels.

"These results complement the recommendations by the American Institute for Cancer Research and the World Cancer Research Fund to reduce red and processed meat intake to decrease cancer incidence," the authors conclude. "Future research should investigate the relation between subtypes of meat and specific causes of mortality."

"The publication by Sinha et al is timely," writes Barry M. Popkin, Ph.D., of the University of North Carolina, Chapel Hill, in an accompanying editorial. "There is a global tsunami brewing, namely, we are seeing the confluence of growing constraints on water, energy and food supplies combined with the rapid shift toward greater consumption of all animal source foods."

"Not only are components of the animal-source foods linked to cancer, as shown by Sinha et al, but many other researchers have linked saturated fat and these same foods to higher rates of cardiovascular disease," Dr. Popkin writes. "What do we do?"

Because there are health benefits to eating some red and white (although not processed) meats, the consensus is not for a complete shift to vegan or vegetarian diets, Dr. Popkin concludes. "Rather, the need is for a major reduction in total meat intake, an even larger reduction in processed meat and other highly processed and salted animal source food products and a reduction in total saturated fat." 

read the full article...

Most Popular: Setting Your Optimum Running Speed

Runners, if your body is telling you that your pace is too fast or too slow, you should listen up. 

The long-standing view that the metabolic cost of running a given distance is the same no matter the speed has officially been debunked. It turns out that running efficiency varies with speed, and each individual has an optimal pace. By measuring runners' metabolic rates at a range of speeds, researchers showed that energy costs rise at both fast and slow speeds; an intermediate pace provides maximal efficiency.

From the article:

Runners, listen up: If your body is telling you that your pace feels a little too fast or a little too slow, it may be right.

A new study, published online March 18 in the Journal of Human Evolution, shows that the efficiency of human running varies with speed and that each individual has an optimal pace at which he or she can cover the greatest distance with the least effort. The result debunks the long-standing view that the metabolic cost of running is fixed per unit of distance -- in other words, that the energy needed to run a given distance is the same whether sprinting or jogging.

Though sprinting feels more demanding in the short term, the longer time and continued exertion required to cover a set distance at a slower pace were thought to balance out the difference in metabolic cost, says Karen Steudel, a zoology professor at the University of Wisconsin-Madison.
However, Steudel and Cara Wall-Scheffler of Seattle Pacific University have now shown that the energetic demands of running change at different speeds. "What that means is that there is an optimal speed that will get you there the cheapest," metabolically speaking, Steudel says.

Peak efficiency was determined by measuring runners' metabolic rates at a range of speeds enforced by a motorized treadmill. Metabolic energy costs increased at both fast and slow speeds and revealed an intermediate pace of maximal efficiency.
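
The existence of an optimum follows from dividing energy per minute by speed to get energy per mile; the sketch below uses a made-up convex metabolic-rate curve (the coefficients are illustrative, not the study's data) to show how the per-mile cost becomes U-shaped, with a minimum at an intermediate pace.

```python
# Illustrative sketch of why an optimal running speed exists. The quadratic
# metabolic-rate curve below is invented for illustration; only the idea of
# minimizing energy per distance comes from the study.

def metabolic_rate(speed_mph: float) -> float:
    """Hypothetical energy cost per minute at a given speed."""
    return 0.18 * speed_mph ** 2 - 2.0 * speed_mph + 12.0

def cost_per_mile(speed_mph: float) -> float:
    minutes_per_mile = 60.0 / speed_mph
    return metabolic_rate(speed_mph) * minutes_per_mile

speeds = [s / 10 for s in range(45, 101)]   # 4.5 to 10.0 mph
best = min(speeds, key=cost_per_mile)
print(f"Cheapest pace under this toy model: {best:.1f} mph")
# Both very slow and very fast paces cost more per mile than the intermediate optimum.
```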

The most efficient running speed determined in the study varied between individuals but averaged about 8.3 miles per hour for males and 6.5 miles per hour for females in a group of nine experienced amateur runners. Much of the gender difference may be due to variations in body size and leg length, which have been shown to affect running mechanics, Steudel says. In general, the larger and taller runners had faster optimum speeds.

Interestingly, the slowest speeds -- around 4.5 miles per hour, or about a 13-minute mile -- were the least metabolically efficient, which Steudel attributes to the gait transition between walking and running. For example, she points out, both a very fast walk and a very slow run can feel physically awkward.

While holding great interest for athletes and trainers, the mechanics of running may also hold clues to the evolution of the modern human body form: tall and long-limbed with broad chests and defined waists.

Modern humans are very efficient walkers and fairly good runners, Steudel says, and efficient locomotion probably provided our ancestors with an advantage for hunting and gathering food. Distant ancestral forms, the australopithecines, had shorter, boxier frames with stubbier legs.

"They wouldn't have had noticeable waists -- their torso looked more like the torso of an ape, except they were walking on two legs," Steudel says. "With the genus Homo, you start getting taller individuals, larger individuals, and they started developing a more linear body form" with distinct waists that pivot easily, allowing longer and more efficient strides.

Human walking is also known to have an optimally efficient speed, so the new findings may help researchers determine the relative importance of the different gaits in driving human evolution, Steudel says. "This is a piece in the question of whether walking or running was more important in the evolution of the body form of the genus Homo."

read the full article...

Monday, March 23, 2009

Most Popular: Pro-Biotic Yogurt Fights Ulcer-Causing Bacteria

A new yogurt has proved it can fight the bacteria that cause gastritis and stomach ulcers, with an almost vaccine-like effect.

Yogurt, a fermented milk product containing live bacteria, has long been known as a healthy source of calcium, protein, and other nutrients, and some brands now also add probiotics intended to improve health.

A new brand of yogurt in Japan goes a step further, using an antibody harvested from chicken eggs to attack and prevent infection by the ulcer-causing bacterium Helicobacter pylori.

After research revealed that H. pylori relies on a protein called urease to attach to the stomach lining, researchers developed an antibody against urease and added it to yogurt. Human trials now show this antibody-fortified yogurt is an effective, tasty way of protecting ourselves from H. pylori and the ulcers it can cause.

From the article:

Results of the first human clinical studies confirm that a new yogurt fights the bacteria that cause gastritis and stomach ulcers with what researchers describe as almost vaccine-like effects, scientists in Japan will report here today at the 237th National Meeting of the American Chemical Society.


Researchers have long known that yogurt, a fermented milk product containing live bacteria, is a healthy source of calcium, protein, and other nutrients. Some brands of yogurt are now made with "probiotics" -- certain types of bacteria -- intended to improve health. The new yogurt represents a unique approach to fighting stomach ulcers, which affect 25 million people in the United States alone, and is part of a growing "functional food" market that now generates $60 billion in sales annually.


"With this new yogurt, people can now enjoy the taste of yogurt while preventing or eliminating the bacteria that cause stomach ulcers," says study coordinator Hajime Hatta, Ph.D., a chemist at Kyoto Women's University in Kyoto, Japan.


The new yogurt is already on store shelves in Japan, Korea, and Taiwan. The study opens the door to possible arrival of the product in the U.S., the researchers suggest.


A type of bacteria called Helicobacter pylori (H. pylori), or overuse of aspirin and other nonsteroidal anti-inflammatory drugs, causes most stomach ulcers. H. pylori ulcers can be effectively treated and eliminated with antibiotics and acid suppressants. However, that simple regimen is unavailable to millions of poverty-stricken people in developing countries who are infected with H. pylori. New research also links childhood H. pylori infection to malnutrition, growth impairment and other health problems. As a result, scientists have been seeking more economical and convenient ways of dealing with these bacteria.


In the new study, Hatta and colleagues point out that H. pylori seems to rely on a protein called urease to attach to and infect the stomach lining. In an effort to thwart that protein, or antigen, the team turned to classic vaccine-making technology. They injected chickens with urease and allowed the chickens' immune systems to produce an antibody to the protein. The researchers then harvested the antibody, called IgY-urease, from chicken eggs. Hatta and colleagues theorized that yogurt containing the antibody may help prevent the bacteria from adhering to the stomach lining.


To test their theory, the scientists recruited 42 people who tested positive for H. pylori. The volunteers consumed two cups daily of either plain yogurt or yogurt containing the antibody for four weeks. Levels of urea, a byproduct of urease, decreased significantly in the antibody group when compared with the control group, indicating reduced bacterial activity, the researchers say.


"The results indicate that the suppression of H. pylori infection in humans could be achieved by drinking yogurt fortified with urease antibody," Hatta states. The antibody was eventually destroyed by stomach acid, but not before having its beneficial effect.


Although the yogurt appears less effective than antibiotics for reducing levels of H. pylori, it is a lot easier to take than medicine and can be eaten daily as part of regular dietary routine, Hatta says. The antibody does not affect the yogurt's overall taste and does not cause any apparent adverse side effects, he notes.


But anti-ulcer yogurt is not for everyone, Hatta cautions. He notes that people who are allergic to milk or eggs should avoid the product. Although the yogurt contains egg yolk, which tends to have lower allergen levels than egg white, an allergy risk still exists, he adds.

read the full article...

Most Popular: Top Ten Tech Cars: Carmaggedon

With these difficult economic conditions, automakers face what some gearheads call carmaggedon.

In IEEE Spectrum's annual survey of trends in car technology, experts predict there will soon be fewer car companies making fewer kinds of cars. For example, Volkswagen builds more than a dozen separate models, totaling well over 1 million units a year, on the basic components of its Golf/Jetta/Rabbit line. And the upstart Chinese automaker BYD, the offspring of a battery company that began making cars only five years ago, has already launched the world's first production plug-in hybrid.

Spectrum predicts that the day of electric-drive cars has come round at last, and it welcomes the trend toward radical experimentation, new electronic goodies, and high-tech mechanisms that supplement or even supplant the driver's judgment.

This year, IEEE Spectrum's survey of the hottest new car technology shows a trend toward radical experimentation. Desperate times mean that even today's most outrageous concept cars may soon seem tame, even quaint.

As the price of oil shot up only to crash down, the credit markets imploded, and consumers closed their wallets, automakers faced what some wags call carmaggedon. Overnight, Detroit's giants were pushed to the edge of insolvency, and even mighty Toyota recorded its first operating loss in 70 years. Consolidation is in the air. Soon there will be fewer car companies, and they will be making fewer kinds of cars. For a glimpse of that future, consider Volkswagen. It builds more than a dozen separate models, totaling well over 1 million units a year, on the basic components of its Golf/Jetta/Rabbit line. For another view, look at upstart Chinese automaker BYD. It is the offspring of a battery company that began making cars only five years ago, yet in December BYD launched the world's first production plug-in hybrid.

The day of electric-drive cars seems to have come round at last. Even Europe's diesel diehards seem to be throwing in the towel, and the Chinese government appears to have shifted its development priority from diesel to electric drive.

Of course, all sorts of new electronic goodies continue to crop up inside the passenger compartment. The delivery of data via cell-phone technology is getting better, media and other entertainment systems are getting more sophisticated, and there are signs that mechanisms can begin to supplement or even supplant the driver's judgment. Still, many of these goodies seem to come from a time that suddenly looks very distant, the time before the twin meltdowns of the automotive industry and the world economy.

read the full article...

Most Popular: Gold Nanoparticles Destroy Cancer

Hollow gold particles called nanospheres, smaller than the finest flecks of dust, can be targeted to search out and "cook" cancer cells.

As a minimally invasive treatment in skin cancer, for example, the hollow gold nanospheres are equipped with a special "peptide" that draws the nanospheres directly to melanoma cells, avoiding healthy skin cells. After collecting inside the cancer, the nanospheres are induced to heat up by exposing them to near-infrared light.

[Image: electron micrograph of a gold nanosphere. The darker ring shows the "wall" of the nanosphere, while the lighter area inside the ring shows the hollow interior of the shell.]

"This technique is very promising and exciting," explains study co-author Jin Zhang, Ph.D., a professor of chemistry and biochemistry at the University of California in Santa Cruz. "It's basically like putting a cancer cell in hot water and boiling it to death. The more heat the metal nanospheres generate, the better."

From the article:

Researchers are describing a long-awaited advance toward applying the marvels of nanotechnology in the battle against cancer. They have developed the first hollow gold nanospheres -- smaller than the finest flecks of dust -- that search out and "cook" cancer cells. The cancer-destroying nanospheres show particular promise as a minimally invasive future treatment for malignant melanoma, the most serious form of skin cancer, the researchers say. Melanoma now causes more than 8,000 deaths annually in the United States alone and is on the increase globally.


The topic of a report presented here today at the American Chemical Society's 237th National Meeting, the hollow gold nanospheres are equipped with a special "peptide." That protein fragment draws the nanospheres directly to melanoma cells, while avoiding healthy skin cells. After collecting inside the cancer, the nanospheres heat up when exposed to near-infrared light, which penetrates deeply through the surface of the skin. In recent studies in mice, the hollow gold nanospheres did eight times more damage to skin tumors than the same nanospheres without the targeting peptides, the researchers say. "This technique is very promising and exciting," explains study co-author Jin Zhang, Ph.D., a professor of chemistry and biochemistry at the University of California in Santa Cruz. "It's basically like putting a cancer cell in hot water and boiling it to death. The more heat the metal nanospheres generate, the better."


This form of cancer therapy is actually a variation of photothermal ablation, also known as photoablation therapy (PAT), a technique in which doctors use light to burn tumors. Since the technique can destroy healthy skin cells, doctors must carefully control the duration and intensity of treatment.


Researchers now know that PAT can be greatly enhanced by applying a light-absorbing material, such as metal nanoparticles, to the tumor. Although researchers have developed various types of metal nanoparticles to help improve this technique, many materials show poor penetration into cancer cells and limited heat-carrying capacities. These particles include solid gold nanoparticles and nanorods that lack the desired combination of spherical shape and strong near-infrared light absorption for effective PAT, scientists say.


To develop more effective cancer-burning materials, Zhang and colleagues focused on hollow gold nanospheres -- each about 1/50,000th the width of a single human hair. Previous studies by others suggest that gold "nanoshells" have the potential for strong near-infrared light absorption. However, scientists have been largely unable to produce them successfully in the lab, Zhang notes.


After years of research toward this goal, Zhang announced in 2006 that he had finally developed a nanoshell or hollow nanosphere with the "right stuff" for cancer therapy: Gold spheres with an optimal light absorption capacity in the near-infrared region, small size, and spherical shape, perfect for penetrating cancer cells and burning them up.


"Previously developed nanostructures such as nanorods were like chopsticks on the nanoscale," Zhang says. "They can go through the cell membrane, but only at certain angles. Our spheres allow a smoother, more efficient flow through the membranes."


The gold nanoshells, which are nearly perfect spheres, range in size from 30 to 50 nanometers -- thousands of times smaller than the width of a human hair. The shells are also much smaller than other nanoparticles previously designed for photoablation therapy, he says. Another advantage is that gold is safer and has fewer side effects in the body than other metal nanoparticles, Zhang notes.
In collaboration with Chun Li, Ph.D., a professor at the University of Texas M.D. Anderson Cancer Center in Houston, Zhang and his associates equipped the nanospheres with a peptide that binds a protein receptor abundant in melanoma cells, giving the nanospheres the ability to target and destroy skin cancer. In tests using mice, the resulting nanospheres were found to be significantly more effective than solid gold nanoparticles, owing to the hollow nanospheres' much stronger near-infrared light absorption, the researchers say.


The next step is to try the nanospheres in humans, Zhang says. This requires extensive preclinical toxicity studies. The mouse study is a first step, and there is a long way to go before the treatment can be put into clinical practice, Li says.

read the full article...

Friday, March 20, 2009

Most Emailed: Weighing the Options after Life-Altering Stroke

After a catastrophic stroke, it's always painful and difficult to decide whether aggressive surgery is worth it. Even under the best circumstances after a severe infarction, patients may be left with paralysis, loss of speech, or breathing problems, and at the very least they will likely need help bathing, cooking, and taking care of themselves. But recent research findings may make the decision over surgery a little easier.

The research done by neurologists Adam G. Kelly and Robert Holloway shows that aggressive brain surgery after a severe stroke does improve the patients' quality of life and increases their life expectancy.

From the article:

Choosing to have aggressive brain surgery after suffering a severe stroke generally improves the patients' lives and allows them to live longer, according to research by neurologists at the University of Rochester Medical Center.

The findings should help patients and families put into perspective a decision that is nearly always painful and difficult to make - whether putting a patient through aggressive surgery after a catastrophic stroke is worth it. "For families facing this difficult choice, the more information we can provide, the better for their decision-making," said neurologist Adam G. Kelly, M.D., who has helped hundreds of families chart a course after severe stroke. Kelly presented the findings last month at the International Stroke Conference in San Diego.

Kelly and colleague Robert Holloway, M.D., studied three separate analyses of patients who had had a serious type of stroke known as a malignant middle cerebral artery infarction, in which blood flow to a large part of the brain is cut off. Further damage occurs when the damaged brain swells in the days immediately following the initial stroke, as delicate brain tissue is pushed up against the hard inner skull. Increased swelling and pressure in the brain diminish blood flow even further.

Even under the best circumstances, patients who have suffered such strokes face at least moderate disability, and often their challenges are severe, no matter what type of treatment is chosen. A patient might be paralyzed on one side of the body. They may have lost their ability to speak or even to comprehend what is said to them. They may need a breathing or feeding tube. At the very least, they'll likely need help each day with tasks like bathing, cooking, and taking care of themselves.

Kelly and Holloway looked at patients who were treated with medical options alone and compared their outcomes to those of patients who had a surgical procedure known as a hemicraniectomy. In that procedure, doctors remove a piece of the skull temporarily, limiting further damage by giving the brain room to swell. The portion of the skull that is removed is put back in place a few months later.

Typically the decision whether or not to go ahead with the surgery is made by families grappling with the catastrophic effects of severe stroke on their loved one, and they must make it in the face of great uncertainty, under duress, and under time pressure.

"There is no reliable early predictor of how most people will do after a stroke," said Kelly, an instructor and fellow in the Department of Neurology.

"Some people have a small stroke, with few effects, and we can predict that outcome fairly well. Others have strokes that are immediately catastrophic. But the vast majority of patients are in the middle, and we have a hard time predicting what the outcome will be. There is a lot we don't know about how the brain responds to injury, and how it recovers," Kelly said.

Kelly and Holloway studied data from three studies in Europe that looked at the outcomes of 93 people after stroke. The studies demonstrated that patients who didn't have the surgery were about three times more likely to die within a year than patients who did have the surgery, though many of the surgical patients were left with considerable disability.

Using a technique called decision analysis, Kelly and Holloway re-evaluated the outcomes of these trials to incorporate quality-of-life ratings for surviving patients. The authors found that even in the face of significant stroke-related disability, as a whole, patients who had undergone the surgical procedure had an improvement in their quality of life. The physicians then used a related method called sensitivity analysis to determine situations in which surgery might not be the preferred treatment. Only under circumstances where patients valued the outcome after stroke extremely poorly - almost as a fate worse than death - was surgery not the preferred option.
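
To make the logic of that decision analysis concrete, here is a minimal Python sketch of an expected-value comparison with a simple sensitivity sweep. Every probability and quality-of-life utility below is a hypothetical placeholder chosen for illustration; none are figures from the study.

# Minimal decision-analysis sketch: compare two treatment strategies by
# expected quality-adjusted outcome. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of probability * utility over the possible outcomes."""
    return sum(p * u for p, u in outcomes)

def strategies(utility_if_survive):
    """Build the two options for a given quality-of-life utility assigned to
    surviving with post-stroke disability
    (0.0 = death, 1.0 = full health, negative = 'worse than death')."""
    surgery      = [(0.70, utility_if_survive), (0.30, 0.0)]  # hypothetical 70% survival
    medical_only = [(0.30, utility_if_survive), (0.70, 0.0)]  # hypothetical 30% survival
    return expected_value(surgery), expected_value(medical_only)

# Sensitivity analysis: vary how poorly patients rate life with disability.
for utility in (0.6, 0.3, 0.1, 0.0, -0.1):
    s, m = strategies(utility)
    better = "surgery" if s > m else ("medical" if m > s else "tie")
    print(f"utility={utility:+.1f}  surgery={s:.2f}  medical={m:.2f}  -> {better}")

# Mirroring the study's qualitative conclusion, surgery stops being preferred
# only when survival with disability is rated at or below zero, i.e. valued
# as no better than death.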

"There has been debate for a long time about the effects of this surgery for patients. It definitely saves lives, but we're asking whether the surgery really paid off for these patients. Did the patients value their health state? Was it worth it to them?" asks Kelly. "The answer most often is 'yes.'"

The authors say there are several types of patients for whom surgery clearly isn't a good option. These include people who were already in poor health before the stroke, people whose chances of surviving the surgery are questionable, and patients who have clearly stated that they would not want such measures to be taken.

For those who do opt for surgery, Kelly and Holloway found that living longer due to the surgery was only part of the benefit. Those patients also valued their health. That might seem difficult to understand for people who are healthy, but it does not surprise Holloway, who has worked with hundreds of patients severely limited by stroke.

"For years I've witnessed families wrestle with these decisions," said Holloway. "If the families make the decision based on what they imagine the future to be, they may decide it's a surgery the patient wouldn't want. But nearly always, when they've made the decision to go ahead with the surgery, the families are subsequently happy they made that decision.

"People who survive devastating stroke often do much better than people think they will do. People who haven't experienced a health condition, such as stroke, almost always provide a lower estimate of their quality of life compared to people who are actually living with that condition or disease," Holloway added.

Kelly has witnessed the same dynamics himself.

"People have a remarkable ability to compensate for whatever problems they face," said Kelly, whose fellowship is supported by the National Institute of Neurological Disorders and Stroke. "Oftentimes, the fear of the unknown, and the anxiety it causes, is worse than the actual situation. Over time, people adapt to their deficits, and as they do, they value their quality of life more highly."

read the full article...

Most Emailed: Spiritual Kids Are Healthier

Like adults, kids who are more spiritual or religious tend to be healthier, according to a researcher examining the correlation between physical health and spiritual faith.

"One significant finding was that children who attended church were more likely to have higher T-cell counts than non churchgoing children," says Dr. Barry Nierenberg, Ph.D., ABPP, associate professor of psychology at Nova Southeastern University in Fort Lauderdale, Florida. 

"A number of studies have shown a positive relationship between participatory prayer and lower rates of heart disease, cirrhosis, emphysema and stroke in adults," he says. "Prayer has been shown to correlate to lower blood pressure, cortisol levels, rates of depression, as well as increased rates of self-described well being."

From the article:

Like adults, kids who are more spiritual or religious tend to be healthier.

That's the conclusion of Dr. Barry Nierenberg, Ph.D., ABPP, associate professor of psychology at Nova Southeastern University in Fort Lauderdale, Florida, who has been studying the relationship between faith and health.  He presented on the topic at the American Psychological Association's Division of Rehabilitation Psychology national conference on February 27, in Jackson, Fla. "A number of studies have shown a positive relationship between participatory prayer and lower rates of heart disease, cirrhosis, emphysema and stroke in adults," he says. "Prayer has been shown to correlate to lower blood pressure, cortisol levels, rates of depression, as well as increased rates of self-described well being."

"But very few studies have attempted to examine how children's spiritual beliefs impact their health," he says.  Initially, Nierenberg conducted a study of HIV positive pediatric patients (ages seven to 17), comparing religious development, church attendance and prayer to health measures such as symptoms, T-cell counts and number of hospitalizations.

"One significant finding was that children who attended church were more likely to have higher T-cell counts than non churchgoing children," he says, "but that finding is difficult to interpret.  It's likely that the more ill a child is, the less ability they have to attend church."

"We needed a second study to more precisely examine religious faith and behavior," he says.

So they examined 16 children (ages six to 20) who were undergoing hemodialysis due to End-Stage Renal Disease (ESRD).  The patients were questioned on a scale of spirituality behaviors and attitudes, and responses were correlated to dialysis-related blood levels, including: blood urea nitrogen (BUN), lymphocytes, albumin, phosphorus, parathyroid hormone (PTH), and urea reduction ratio.

"There was a significant negative correlation between spiritual attitudes and BUN levels," he says. "As children reported more agreement with statements like, 'I am sure that God cares about me,' and 'God has a plan for me," their average BUN levels over the past year were lower."

"We have a deeper understanding of why there is so little in the literature exploring the relationship between health spirituality in children and adolescents," he says. "It's challenging to measure in this population.  It can be difficult getting all the necessary permission. The pool of children is limited, and the interviews can be time consuming.  But it's important it's done for the same reason we study it in adults."

read the full article...

Most Emailed: Tips for Dealing with Osteoarthritis of the Knee

Do you suffer from osteoarthritis (OA), the painful “wear and tear” arthritis that can affect any joint in the body? You're not alone. The American Academy of Orthopaedic Surgeons (AAOS) estimates that some 33 million Americans are affected by osteoarthritis, which most commonly occurs in people 65 years of age or older. OA of the knee can have a major effect on a person’s ability to engage in daily activities, like walking or climbing stairs.

The AAOS offers some guidelines for treatment, and some things that patients should avoid entirely, such as:

• Patients who are overweight, with a Body Mass Index (or BMI) greater than 25 should lose a minimum of five percent of their body weight.
• Patients should be encouraged to begin or increase their participation in low-impact aerobic fitness.

For pain management, the AAOS recommends:
• Acetaminophen (not to exceed 4 grams per day)
• Non-steroidal anti-inflammatory drugs (NSAIDs)
• Intra-articular corticosteroids (for short term pain relief)

More tips and things to avoid after the jump.


From the article:


Osteoarthritis <http://www.orthoinfo.org/topic.cfm?topic=A00212> (OA) is known as the “wear and tear” arthritis. It can affect any joint in the body, particularly after years of use. A clinical practice guideline for the treatment of Osteoarthritis of the Knee was presented today at the 2009 Annual Meeting of the American Academy of Orthopaedic Surgeons (AAOS) <http://www.aaos.org/>.

The guideline was explicitly developed to include only treatments which are less invasive than knee replacement <http://orthoinfo.aaos.org/topic.cfm?topic=A00389> surgery.

The Guideline and Evidence Report recommends:
• Not performing an arthroscopic lavage if a patient displays only symptoms of osteoarthritis and no other problems, such as loose bodies or meniscus tears. If those mechanical problems are present, then arthroscopy can potentially be beneficial. “The current science shows us that just washing out the joint does not decrease the patient’s osteoarthritis symptoms and can expose the patient to additional risk,” said John Richmond, MD, chair of the AAOS OA of the Knee guideline work group.

Other important recommendations include:
• Patients who are overweight, with a Body Mass Index (or BMI) greater than 25 should lose a minimum of five percent of their body weight.
• Patients should be encouraged to begin or increase their participation in low-impact aerobic fitness.

“These two recommendations are very important because patients can self manage the progression of their OA, and take more control of what their issues are,” said Dr. Richmond. “As far as losing weight, this has the highest potential to actually slow the progression of the disease.”
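
As a back-of-the-envelope illustration of those two self-management numbers, the short Python sketch below computes BMI and the five-percent weight-loss target for a hypothetical patient; the height and weight are made up, and the thresholds are the ones quoted above.

# Illustrative arithmetic for the guideline's self-management thresholds.
# Patient measurements below are hypothetical.

height_m  = 1.70   # hypothetical height
weight_kg = 88.0   # hypothetical weight

bmi = weight_kg / (height_m ** 2)   # BMI = kg / m^2
print(f"BMI: {bmi:.1f}")            # ~30.4, above the guideline's cutoff of 25

if bmi > 25:
    target_loss_kg = 0.05 * weight_kg   # minimum 5 percent of body weight
    print(f"Minimum recommended weight loss: {target_loss_kg:.1f} kg "
          f"(goal weight {weight_kg - target_loss_kg:.1f} kg)")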

After a thorough analysis of the current scientific literature, the work group recommends against using the following treatments:
• Glucosamine and/or chondroitin sulfate or hydrochloride
• Needle lavage (aspiration of the joint with injection of saline)
• Custom made foot orthotics

The work group does suggest that patients with symptomatic OA of the knee receive one of the following analgesics for pain unless there are contraindications to this treatment:
• Acetaminophen (not to exceed 4 grams per day)
• Non-steroidal anti-inflammatory drugs (NSAIDs)
• Intra-articular corticosteroids (for short term pain relief)

In addition, the clinical practice guideline does not recommend for or against the use of:
• Bracing
• Acupuncture
• Intra-articular hyaluronic acid

Osteoarthritis of the knee is a leading cause of physical disability. Some 33 million Americans are affected by osteoarthritis, but it most commonly occurs in people who are 65 years of age or older. OA of the knee can have a major effect on a person’s ability to engage in daily activities, like walking or climbing stairs.

“The Academy created this clinical practice guideline to improve patient care for those suffering from osteoarthritis of the knee,” stated Dr. Richmond. “This serves as a point of reference and educational tool for both primary care physicians and orthopaedic surgeons, streamlining possible treatment processes for this ever-so common ailment.” While a wide range of treatment options are available, they should always be tailored to individual patients after discussions with their orthopaedic surgeons.

read the full article...

Most Shared: Bloggers and Independent Media Gaining Clout

Ithaca College journalism professor Jeff Cohen sums things up quite plainly when he says the mainstream media simply cannot "dismiss independent media voices as 'unprofessional'."  In fact, according to Cohen, the mainstream media is "struggling to catch up and learn from the independents."

Citing recent groundbreaking investigative reporting by bloggers and awards presented to leading members of the independent media, Cohen puts into a nutshell what's changing about the landscape of the new and old media. The take-home message: the new media is here to stay.


From the article:

"The crisis inside mainstream media, especially daily papers, is economic - but also cultural," says Jeff Cohen, director of the Park Center for Independent Media and associate professor of journalism at Ithaca College.

"Many successful bloggers and new independent media are building active communities; they don't see their readers/viewers as mere 'audience' or 'consumers' or 'customers.' Blogger Josh Marshall of Talking Points Memo, for example, relied on his community to help uncover the still-reverberating story of political firings of federal prosecutors by the Bush White House. Marshall won the Polk award http://www.nytimes.com/2008/02/25/business/media/25marshall.html; Attorney General Gonzalez lost his job," said Cohen.  Recently Ithaca College announced the first-ever Izzy Awards for special achievement in independent media, naming Amy Goodman of "Democracy Now!" http://www.democracynow.org/  and blogger Glenn Greenwald http://www.salon.com/opinion/greenwald/ the winners.

"[Goodman and Greenwald] have each built loyal, active communities desperate for accurate, quality journalism that sparks civic action on issues like civil liberties and racial justice," said Cohen.

The voices of independent media existed before the Internet, but new technologies have expanded their reach and ability to engage their readers/listeners/viewers as interactive communities. "No longer can mainstream media dismiss independent media voices as 'unprofessional,' instead they find themselves struggling to catch up and learn from the independents," said Cohen.

read the full article...

Most Shared: Worms, Flies, and Yeast Age Just Like Humans on the Genetic Level

By studying the genetic make-up of simple organisms like yeast, nematode worms, and fruit flies, researchers have found some answers to questions about the human aging process. Many of these organisms' longevity-related genes have corresponding human versions, and these so-called longevity proteins are highly connected "hubs" involved in complex cellular functions, with a substantial influence on how people grow old.

"This establishes a similarity in aging process among diverse species that is perhaps a lot broader than many of us may have expected," says Robert E. Hughes, lead author of the research results published in PLoS Genetics.

From the article:

When it comes to the aging process, yeast, nematode worms and fruit flies have more in common with humans than previously expected. In addition to highlighting the similarities between species, a large-scale human protein network reveals a complex web of interactions among the human equivalents of the many longevity genes found in simple animals. The network indicates that these human versions of longevity proteins are highly connected "hubs" involved in complex cellular functions. The paper also reports that the activity of genes encoding network proteins changes during human aging. These results point to a surprisingly close relationship between aging processes in humans and simpler organisms. The findings appear in the March 13, 2009 edition of the online, open-access journal PLoS Genetics.

Buck faculty member Robert E. Hughes, lead author of the study, says that while hundreds of longevity-related genes have been identified in simple animals, there has always been a question of how relevant those genes are to humans. "This really demonstrates that there's a strong relationship between the kinds of genes that appear to be important in human aging, at the level of protein interaction and changes in gene expression, and the kinds of genes that have been identified in large-scale genetic screens in invertebrate species," said Hughes. "This establishes a similarity in aging process among diverse species that is perhaps a lot broader than many of us may have expected." The longevity protein network was assembled from a large-scale map of protein interactions developed at Prolexys Pharmaceuticals in Salt Lake City, UT. The network comprises 175 human equivalents of proteins known to influence life span in yeast, nematode worms or flies, plus 2,163 additional human proteins that interact with them. Overall, the network consists of 3,271 interactions among 2,338 different proteins.

Hughes likened the connections and interactions between proteins to those commonly found in human social networking. One striking result from the network analysis was the finding that longevity proteins had an average of 19 connections, compared with an average of 14 observed for proteins in general. "These longevity proteins were unquestionably the 'social butterflies' of the interaction network, and therefore are likely to function as 'hubs' or interfaces among groups of proteins," said Hughes. "This really suggests that life spans are determined by complex interactions among cellular systems and that this complexity can be observed at the level of protein interactions." Curiously, "knocking out" the aging genes used in this study resulted in increased life span in simple organisms. Hughes says it's possible that removing these highly connected "hub" genes may increase life span by preventing dysfunctional events from spreading through the cell.
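
To show how a "connections per protein" comparison like the 19-versus-14 figure is computed over an interaction network, here is a toy Python sketch. The graph and protein names are invented; the real analysis used the Prolexys interaction map described above, and the 19 and 14 averages are the paper's.

# Toy protein-interaction network as an adjacency mapping (invented data).
from collections import defaultdict

edges = [
    ("SIRT_like", "P1"), ("SIRT_like", "P2"), ("SIRT_like", "P3"),
    ("SIRT_like", "P4"), ("TOR_like", "P2"),  ("TOR_like", "P5"),
    ("TOR_like", "P6"),  ("P1", "P5"),        ("P3", "P6"),
]
longevity_proteins = {"SIRT_like", "TOR_like"}  # hypothetical hub candidates

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def avg_degree(nodes):
    """Average number of interaction partners over the given proteins."""
    return sum(len(graph[n]) for n in nodes) / len(nodes)

print("avg degree, longevity proteins:", avg_degree(longevity_proteins))
print("avg degree, all proteins:      ", avg_degree(graph.keys()))
# In the published network the corresponding averages were 19 vs. 14,
# marking the longevity proteins as unusually well-connected hubs.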

A second major conclusion of the study emerged when the protein interaction network was compared with gene expression studies done with younger and older volunteers. Statistical analysis clearly demonstrated that the network was enriched for proteins encoded by genes whose expression levels change during human aging. This surprising result further demonstrated a functional connection between human and invertebrate aging.

"This work demonstrates the value of combining high-throughput screening for protein interactions with genetic and functional validation to understand complex biological processes such as aging. Furthermore, we would like to encourage scientists interested in aging and longevity to mine the data made available in the study", said Dr. Sudhir Sahasrabudhe, Chief Scientific Officer and the scientific founder of Prolexys Pharmaceuticals. The many proteins which have previously not been implicated in the aging process are a valuable resource for the scientific community.

read the full article...

Thursday, March 19, 2009

Most Clipped: Prostate Screenings Prevent Few Deaths in Older Patients

The prostate cancer screening tests that have become an annual ritual for many men do not appear to reduce deaths from the disease among those with a limited life-expectancy. Study results show six years of aggressive, annual screening for prostate cancer led to more diagnoses of prostate tumors but did not lead to fewer deaths from the disease.

"The important message is that for men with a life expectancy of seven to 10 years or less, it is probably not necessary to be screened for prostate cancer," says the study's lead author and principal investigator Gerald Andriole, M.D., chief urologic surgeon at the Siteman Cancer Center at Washington University School of Medicine and Barnes-Jewish Hospital.

For younger men with a longer life-expectancy, annual prostate screenings may still be the best way to detect cancer in its earliest, most treatable stages.

"We don't have enough data yet about the youngest men in the study - those in their 50s," Andriole says, "and it may be that over time, we will, in fact, see a benefit from screening."



From the article:

The prostate cancer screening tests that have become an annual ritual for many men don't appear to reduce deaths from the disease among those with a limited life-expectancy, according to early results of a major U.S. study involving 75,000 men.

Results released today from the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial show that six years of aggressive, annual screening for prostate cancer led to more diagnoses of prostate tumors but not to fewer deaths from the disease. The study, led by researchers at Washington University School of Medicine in St. Louis and conducted at 10 sites, will appear online March 18 in the New England Journal of Medicine (and in the journal's print edition on March 26).

"The important message is that for men with a life expectancy of seven to 10 years or less, it is probably not necessary to be screened for prostate cancer," says the study's lead author and principal investigator Gerald Andriole, M.D., chief urologic surgeon at the Siteman Cancer Center at Washington University School of Medicine and Barnes-Jewish Hospital.

But it's too soon, he added, to make broad screening recommendations for all men based on the study's initial findings.

"So far, only a minority of men enrolled in the PLCO study have died, so it may be premature to make generalizations about the ultimate results of the trial," he says. "We don't have enough data yet about the youngest men in the study - those in their 50s - and it may be that over time, we will, in fact, see a benefit from screening."

The PLCO trial began in 1992 with funding from the National Cancer Institute and was designed to determine whether prostate cancer screening reduces deaths from the disease. It involves men ages 55 to 74 who received either annual PSA blood tests and digital rectal exams or "routine care," meaning they had the screening tests only if their physicians recommended them. After seven to 10 years of follow up, deaths from prostate cancer were very low in both groups and did not differ significantly between the groups.

Health guidelines issued last year by the U.S. Preventive Services Task Force recommend against prostate cancer screening for men age 75 or older and concluded there is insufficient evidence to assess the balance of benefits and harms of prostate cancer screening in men younger than 75. However, the American Urological Association and the American Cancer Society recommend annual prostate cancer screening tests beginning at age 50 for most men.

More than 186,000 U.S. men will be diagnosed with prostate cancer this year, and nearly 29,000 will die from the disease, according to the National Cancer Institute. PSA blood tests, introduced in 1988, have increasingly been used as a screening tool for prostate cancer, despite a lack of evidence showing they reduce death rates from disease.

The controversy over prostate cancer screening has arisen because most men who undergo a biopsy for an abnormal PSA test do not have prostate cancer. For those who have cancer, the tumors generally grow so slowly that most men die of other causes. Furthermore, prostate cancer treatment can result in incontinence and impotence. However, some tumors can be aggressive, and the difficulty has been distinguishing aggressive cancers from those that are slow growing.

"We definitely need to find better ways to detect and treat aggressive tumors, those that are truly life-threatening, so that men with slow-growing tumors can avoid unnecessary treatments," says Andriole.

Today's results are the first to detail death rates from prostate cancer among men in the PLCO study and are being released to coincide with the presentation of the data at the European Association of Urology meeting in Stockholm, Sweden.

The PLCO data are being made public now because the study's Data and Safety Monitoring Board, an independent review committee that meets every six months, saw a continuing lack of evidence that screening reduces deaths due to prostate cancer as well as the suggestion that screening may cause men to be treated unnecessarily. The PLCO investigators will continue to follow patients for several more years to see whether annual screening eventually reduces prostate cancer deaths.

The trial involved 76,693 men, who were randomly assigned to receive either annual PSA blood tests for six years and digital rectal exams for four years or routine care, which included physical checkups but no mandate for annual prostate cancer screening.

The new report includes data for all participants seven years after they joined the trial and for 67 percent of participants 10 years after they joined the trial.

At seven years, there were 22 percent more prostate cancer diagnoses in the men screened annually (2,820 men in the screening group vs. 2,322 in the routine-care group). This trend has continued in data collected up to 10 years (currently there are 17 percent more prostate cancer diagnoses in the screening group).

Deaths from prostate cancer did not differ significantly between the groups. Seven years after the start of screening, there were 50 deaths from prostate cancer in the screening group and 44 deaths in the routine-care group. Ten years after the start of screening, there were 92 prostate cancer deaths in the screening group and 82 in the routine-care group.
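
To see where figures like these come from, here is a small arithmetic sketch in Python using the counts quoted above. Note that the published 22 percent figure is based on incidence rates, so a raw count comparison lands in the same ballpark rather than matching it exactly.

# Arithmetic on the counts quoted in the article.

# Prostate cancer diagnoses at seven years
diag_screened, diag_routine = 2820, 2322
excess_pct = 100 * (diag_screened - diag_routine) / diag_routine
print(f"Excess diagnoses with screening: {excess_pct:.0f}%")  # ~21% from raw counts

# Prostate cancer deaths at seven and ten years (screened vs. routine care)
deaths = {"7 years": (50, 44), "10 years": (92, 82)}
for horizon, (screened, routine) in deaths.items():
    print(f"{horizon}: {screened} deaths (screening) vs. {routine} (routine care)")
# With roughly 38,000 men per arm, both death counts are small, and the
# article reports the difference was not statistically significant.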

"My recommendation is that, for now, men with a life expectancy of more than seven to 10 years continue to be screened for prostate cancer," says Andriole.

"On the other hand, screening is probably not necessary for elderly men and men with significant health issues. These men should have a conversation with their doctors to make an individual decision about whether they want to be screened, because clearly there can be harmful side effects related to treatment, while for these men, there has been no demonstration that screening will prolong their lives."

Of the men who received annual screening, 85 percent had PSA tests and 86 percent had digital rectal exams. Men in the routine-care arm sometimes had prostate cancer screening tests: PSA screening ranged from 40 percent of men at the beginning of the study to 52 percent of men by the last screening year, and screening with rectal exams ranged from 41 percent initially to 46 percent by the last screening year. The exam involves a doctor inserting a lubricated, gloved finger into the rectum to feel for anything that is not normal.

Men were referred for follow up testing for prostate cancer if their PSA level was higher than 4.0 ng/ml or if the rectal exam was abnormal.

The researchers noted that the vast majority of men in both groups who developed prostate cancer were diagnosed with stage II disease (out of IV). The number of later-stage cases was similar in the two groups. However, men in the routine-care group had more aggressive tumors (Gleason score 8-10). The reduced number of men with prostate cancer with a Gleason score of 8-10 in the intervention group may eventually lead to a mortality difference, but data analyzed so far have not shown such a benefit.

Additionally, men in both groups received similar treatments for their disease, and treatment was not dictated by participation in the PLCO.

Another study reported in this same online issue of the NEJM is the large European Randomized Trial of Screening for Prostate Cancer (ERSPC), which shows a 20 percent reduction in the rate of death from prostate cancer but with a high risk of overdiagnosis. In the ERSPC, unlike the PLCO trial, men were referred for follow-up testing if their PSA level was 3.0 ng/mL or higher and were also screened, on average, every four years as opposed to annually in the PLCO.

Lowering the threshold for what is considered an abnormal PSA to 3.0 ng/ml is likely to diagnose more tumors but not necessarily identify those that are more likely to be aggressive, Andriole says.

read the full article...

Top Story: Effects of Educational Video Games on Student Achievement

Educational video games have gained acceptance as an instructional tool in our nation's schools, prompting researchers to examine their direct effects on student mathematical achievement.

To that end, lead researcher Albert Ritzhaupt and his team at the University of North Carolina Wilmington's Watson School of Education will observe nearly 500 middle school students and 15 teachers, technology trainers, and administrators using standards-based educational video games produced by Tabula Digita, an educational gaming company.

The DimensionM instructional video games being used in the study help students absorb the complexities of math by presenting them in a format that is fun and engaging.

"We hope our research will serve to explain further how playing serious, high quality, interactive games influences mathematics achievement and self-efficacy in math," said Ritzhaupt.

From the article:

The rapid growth of educational video games as a viable instructional tool has prompted academic researchers at the University of North Carolina Wilmington's Watson School of Education to implement a comprehensive study designed to learn more about the direct effects of educational video games on student mathematical achievement.

Initiated earlier this month, the research effort, led by Albert Ritzhaupt, assistant professor in the Watson School of Education, will observe nearly 500 middle school students and 15 teachers, technology trainers and administrators using Tabula Digita's DimensionM(tm) standards-based educational video games. The participating middle schools in the eastern North Carolina region include: West Pender Middle School and Cape Fear Middle School in Pender County and Trask Middle School and D.C. Virgo Middle School in New Hanover County. "We hope our research will serve to explain further how playing serious, high quality, interactive games influences mathematics achievement and self-efficacy in math," said Ritzhaupt. "Equally important will be to gain a greater understanding of how students react to and interact with gaming in the classroom and how teachers respond to those unique student actions."

The DimensionM instructional video games being used in the study are designed to teach and reinforce key math concepts through a series of cutting-edge, first-person action adventure missions that incorporate three-dimensional graphics, sound, animation and storylines comparable to those in popular video games. Students practice and master math concepts previously discussed in class by successfully navigating a myriad of middle school level math and algebra lessons embedded in the game. The purpose of the game is to help students absorb the complexities of math by presenting them in a format that is fun and engaging.

Joining Ritzhaupt in the research endeavor will be assistant professor Heidi Higgins and technology coordinator/lecturer Beth Allred, both in the Watson School of Education. The study is expected to run until the end of May, with the findings disseminated in a full report in late summer.

read the full article...