Saturday, August 15, 2009

Gene Therapy Creates a New Fovea

Treatment leads to an unexpected improvement in vision for one patient.

By Emily Singer


Twelve months after receiving an experimental gene therapy for a rare, inherited form of blindness, a patient discovered that she could read an illuminated clock in the family car for the first time in her life. The unexpected findings suggest that the brain can adapt to new sensory capacity, even in people who have been blind since birth.

Correcting vision: To deliver the corrective gene to the eye, surgeons cut the vitreous gel of the eye and then inject a virus loaded with corrective genes underneath the retina (a model of the eye is shown here).
Credit: Sarah Kiewel/University of Florida

The patient, who remains anonymous, suffers from a disease called Leber congenital amaurosis, in which an abnormal protein in sufferers' photoreceptors severely impairs their sensitivity to light. "It's like wearing several pairs of sunglasses in a dark room," says Artur Cideciyan, a researcher at the University of Pennsylvania in Philadelphia, who oversaw the trial.

At the start of the study, physicians injected a gene encoding a functional copy of the protein into a small part of one eye--about eight to nine millimeters in diameter--of three patients, all in their twenties and blind since birth. In preliminary results published last year, Cideciyan and colleagues found that all three patients showed substantial improvements in their ability to detect light three months after treatment.

The researchers have now published new results of the study in the journal Human Gene Therapy, showing that these improvements remained stable after one year. And in a letter to the New England Journal of Medicine, they describe surprising gains in one patient's vision. "It was unexpected because the major improvement of vision had occurred within weeks after the treatment," says Cideciyan.

Probing further, the researchers found that the patient appeared to be using the treated part of her eye like a second fovea--the part of the retina that is most densely populated with photoreceptors and is typically used for detailed vision, such as reading. The patient could detect dimmer light using the treated region than she could with her natural fovea. "We realized she was slowly adapting to her newfound vision by subconsciously focusing her attention to the treated area as opposed to the untreated central fovea," says Cideciyan. "It suggests that there is a plasticity, an ability to adapt in the adult visual brain."

"It's very encouraging," says Kang Zhang, an ophthalmologist and director of the Institute for Genomic Medicine at the University of California, San Diego, who was not involved in the study. "The formation of almost another vision center has implications as we go forward for patients with congenital blindness. They might not be able to use their normal fovea, but they might be able to develop a new center of vision."

Researchers now plan to study other patients in the trial to determine if they have experienced similar improvements. They also hope to figure out how to accelerate these gains, perhaps by using visual training targeted to the area treated with gene therapy.

The scientists also say that the fact that patients' visual improvements held for a year after injection is promising. "It means that for congenital or childhood blindness," says Zhang, "there is the potential to at least stabilize, if not improve, visual function."


http://www.technologyreview.com/biomedicine/23239/


Nanowire Advance for Lithium Batteries

Electrodes made of carbon-silicon nanowires could boost the life and performance of lithium-ion batteries.

By Prachi Patel


Lithium ion has become the battery of choice for electric vehicles, driving researchers to improve the technology's performance, longevity, and reliability. A new type of nanowire electrode developed by materials science and engineering professor Yi Cui at Stanford is a step toward that goal.

Nanowire Boost: Carbon nanowires coated with silicon (bottom) produce a material that can store six times as much charge as the graphite used in today's lithium battery electrodes. (Bare carbon nanowires are shown at top.)
Credit: Yi Cui, Stanford University

The new electrodes, discussed in last week's Nano Letters, can store six times as much charge as the graphite electrodes in current lithium batteries--which could mean electric cars that travel farther on a single charge.

When a lithium battery is charged, lithium ions move from the positive electrode (cathode) to the negative electrode (anode). Silicon is a promising material for anodes because it can store over 10 times as many ions as graphite at the same weight. But when silicon absorbs lithium ions, it swells to four times its original volume, cracking after a few charging cycles.

The new nanowires exploit the best properties of both silicon and carbon. Cui and his colleagues make the material by depositing amorphous silicon on carbon nanowires. The wires can store a charge of about 2,000 milliamp hours per gram, while graphite anodes store less than 360 milliamp hours per gram. Meanwhile, the carbon core makes them robust. "Lithium ions can also get absorbed into carbon," says Cui, "but the volume expansion of carbon is 10 percent or smaller, so it provides a stable backbone." In tests, the nanowires performed well for more than 50 charging cycles.

The researchers had previously made electrodes from pure crystalline silicon nanowires. Those had triple the storage capacity of graphite electrodes but lasted through only 20 cycles.

The carbon-silicon nanowires are also easier to make. They don't require the high temperatures that are needed to grow the silicon-only nanowires. "Carbon nanofiber is already commercially available and you can produce tons," Cui says. "The coating process could be made a lot faster and is easy for large-scale manufacturing."

For use in commercial electric vehicles, lithium battery electrodes need to last through at least 300 charge cycles. In this respect, the nanowires could face stiff competition. In December 2008, a team from Hanyang University in Ansan, South Korea, unveiled nanoporous silicon anodes that lasted for more than 100 charging cycles and could store more charge than the nanowires. Chemist Jaephil Cho, who led the work, says that the nanoporous material has more silicon per unit volume than nanowires do, so it can hold more charge per unit volume. However, he says, "carbon fiber [manufacturing] is easy to scale up and therefore [Cui's] method for making carbon-silicon nanowires is believed to be very practical."

General Motors and Applied Sciences, meanwhile, are developing nanowire anodes that are very similar to those of the Stanford team. The companies coat carbon nanofibers with silicon particles, as opposed to amorphous silicon, resulting in anodes that can store charge of 1,000 to 1,500 milliamp hours per gram. Gholam-Abbas Nazri, who is leading the work at the GM Research and Development Center in Warren, MI, says that the anode capacity can be increased by making the silicon layer thicker, but right now it's best to stabilize the capacity at 1,000 milliamp hours per gram. Anodes that store more charge need cathodes that can supply higher charge, Nazri says, and "at the moment, there is no cathode [material] with enough capacity to match carbon-silicon anode."

Cui is confident in the success of silicon as an anode material for lithium batteries. "In the next five years or less, we'll see a battery with silicon anodes," he says. However, cost will be the deciding factor. In the end, he says, it all depends on "whoever can come up with a low-cost, large-scale manufacturing process, produce the best performance, and put out products."


http://www.technologyreview.com/energy/23240/

A Cool Micro Fuel Cell

A high-power cell uses materials already on the market and operates at lower temperatures.

By Katherine Bourzac


Solid-oxide fuel cells run efficiently on a wide variety of conventional fuels and biofuels, but their high operating temperatures have limited their applications. Many researchers are working on this problem, developing new electrode and electrolyte materials that operate at lower temperatures without compromising performance. Now researchers in Japan have demonstrated a high-performance micro fuel cell that operates at lower temperatures, thanks to a restructured electrode.

Cool fuel: This solid-oxide fuel cell has a power output of one watt at 600 degrees Celsius and is about two millimeters in diameter. Its size and operating temperature could make it a suitable power source for quick-starting portable devices.
Credit: Science/AAAS

"The cell is suitable for portable power sources, which require quick start-up," as well as auxiliary power for automotives, says Toshio Suzuki, a research scientist at Japan's National Institute of Advanced Industrial Science and Technology. Suzuki led the development of the new fuel cell, which is described today in the journal Science. The cell is tube shaped and about two millimeters in diameter; its power output is about one watt at 600 degrees Celsius. Conventional solid-oxide fuel cells operate at temperatures above 700 degrees.

Solid-oxide fuel cells generate an electrical current by pulling oxygen from the air and using it to oxidize fuel. Oxygen enters through the cathode side and fuel through the anode side; the two react across the electrolyte, forming water and, depending on the fuel type, carbon dioxide as waste products. The process is more efficient than conventional power generation. Solid-oxide fuel cells are also more efficient than the other predominant fuel-cell type, which uses expensive platinum catalysts and a polymer membrane that can become contaminated, and runs only on hydrogen fuel.

Solid-oxide fuel cells are "more flexible, more powerful, and don't have the problem of getting contaminated," says Eric Wachsman, director of the Florida Institute for Sustainable Energy and chair of materials science and engineering at the University of Florida. The problem with these devices, says Wachsman, is their high operating temperatures, which can mean a long warm-up time and rule out use in something like a cellular phone. The high temperatures also cause the fuel cells to wear out.

Suzuki's group created a power source with a lower operating temperature by improving the structure of the anode, where the fuel comes in. The Japanese group used conventional techniques including lithography and etching to make anodes with varying degrees of porosity. The best-performing anode was a very porous structure based on nickel oxide, a conventional material for these electrodes. Suzuki says they chose to use existing materials because their performance over time has been proven. "These are reliable materials for long-term stability, and have a cost advantage compared with other new materials for low-temperature solid-oxide fuel cells," he explains.

"The performance is no doubt quite good," says Harry Tuller, professor of ceramics and electronic materials at MIT. "This is a nice systematic study showing the evolutionary impact of demonstrated improvements" in the electrode, he says. However, Tuller cautions that the electrodes and the electrolyte are doped with small amounts of expensive materials, which could add expense to the cells. The anode contains, in addition to nickel oxide, a small amount of the rare element scandium.

Wachsman says that it's difficult to bring down the operating temperature of these cells without compromising on power output. He's also working on new solid-oxide fuel-cell electrode structures. Using a different set of materials and a similar approach, Wachsman recently demonstrated a fuel cell with a restructured anode and a new electrolyte for a power output of two watts per square centimeter at 650 degrees. This work is described in the journal Electrochemistry Communications.

Suzuki says his group is in discussions with several companies about commercializing the cells.


http://www.technologyreview.com/energy/23223/

Making Android More Secure

The open platform calls for a different approach to security.

By Erica Naone


Google entered the mobile phone market with a splash, promising that its Android operating system would be wide open to developers. This was very different from the traditional approach--mobile phone carriers in the United States have typically exercised tight control over which software can be run on their devices. Apple's popular iPhone, though a recent entry, was no exception. Apple closed off aspects of its device to third-party applications and had to approve all applications sold through its market.

Credit: Technology Review

But as phones become more like desktop computers, they become subject to the same security risks that abound on the Internet. To ensure Android's success, Google had to come up with a new approach to security for mobile phones. Rich Cannings, Android Security Lead at Google, described the company's design this week at the Usenix Security Conference in Montreal, Quebec.

There's always a balance to strike between being open and being secure, Cannings says. "I could make the most secure mobile phone ever, but no one would use it." A truly secure mobile phone certainly couldn't access the Internet, he says, and it might not even be able to send text messages or receive calls.

Instead of eliminating all risks and, therefore, all features, Google's approach is to minimize what attackers can do if they are able to get access to a device. For inspiration, Google looked to the Web, Cannings says. Web applications are protected by the "same origin policy," which under normal circumstances prevents one website from exchanging information with any other website that a user may have open.

To translate this type of approach to an operating system, Cannings says, the company treated each application as a different user of the device. When multiple users share a single desktop machine, the operating system is designed to protect them from each other by giving each its own account. From one account, it's not possible to see files in other accounts, or to affect another user's data. In the same way, the Android operating system treats each application as a separate user, so that if an attacker breaks into the Web browser, for example, he won't be able to access the address book.
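
To make the idea concrete, here is a minimal Python sketch of per-application isolation. It is a toy model with invented names, not Android's actual implementation, but it captures the property described above: data owned by one application's ID is off limits to every other application.

```python
# Toy model only -- not Android source code. Each installed app gets its own
# user ID; data written under one ID cannot be read under another, which is
# the property that keeps a compromised browser out of the address book.

class AppSandbox:
    def __init__(self):
        self._files = {}        # uid -> {filename: contents}
        self._names = {}        # uid -> app name (for readable error messages)
        self._next_uid = 10000  # each newly installed app gets a fresh ID

    def install(self, app_name):
        uid = self._next_uid
        self._next_uid += 1
        self._files[uid] = {}
        self._names[uid] = app_name
        return uid

    def write(self, owner_uid, filename, data):
        self._files[owner_uid][filename] = data

    def read(self, requesting_uid, owner_uid, filename):
        # Deny access unless the requester owns the file, mirroring
        # per-user file permissions on a multiuser operating system.
        if requesting_uid != owner_uid:
            raise PermissionError("%s may not read %s's data"
                                  % (self._names[requesting_uid],
                                     self._names[owner_uid]))
        return self._files[owner_uid][filename]

sandbox = AppSandbox()
browser = sandbox.install("browser")
contacts = sandbox.install("contacts")
sandbox.write(contacts, "address_book", ["alice", "bob"])
try:
    sandbox.read(browser, contacts, "address_book")
except PermissionError as err:
    print(err)  # even a compromised browser cannot reach the address book
```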

But just separating each application wasn't secure enough. There's no reason, for example, for a Pac-Man application to be able to access the Internet, Cannings says. So the Android security team limited each application's access to the phone unless it asked permission from the user. Here, they were faced with another challenge.

"Most humans have a difficult time analyzing unknown risks," he says. When users have to handle their own security, they often become numb to the risks and click OK every time a dialogue box alerts them to a problem. Android is designed to ask once, when the application is being installed. It also shows the user only the most important alerts, while offering an option to see the full list.

It isn't only applications that Android secures. The team also looked at bits of software that are common entry points for attackers. For example, Cannings says, the software that runs media, such as audio and video on a Web browser, is very complex and a common target. In Android, that software runs apart from the browser in a separate media server, so that if it is compromised, an attacker can't access the passwords and cookies stored in the browser.

Charlie Miller, a security researcher at Independent Security Evaluators who has found and reported several bugs in the Android platform, says that Google's technique of placing each application on an Android phone into a separate sandbox can certainly be effective. For example, Miller did find a bug in the software that Android used to play mp3s, but found that the access he gained with his exploit didn't allow him to attack other applications on the phone.

However, Miller thinks Google relies too heavily on this one method of protection. "It is a good security piece, but in my opinion, there should be more layers," he says. An attacker could find a bug in the operating system that allowed him to break through the walls between applications, which would make bugs in media software just as dangerous as before, he says.

Miller adds that systems such as the iPhone will stop unauthorized applications from executing code. Google's system, on the other hand, allows any type of code to run, which puts more tools in the hands of the attacker.

Finally, Miller says, "Google has this other obstacle: that they make the operating system, but they don't control the phone." The first time he spotted a bug in Android, Miller notified Google and the company patched the Android source code the same day. This solution, however, didn't protect phones already in use. "They were basically at the mercy of T-Mobile [which currently offers the Android phones for sale in the US] to roll the patch out and push it out to all the phones that were in the world," Miller says. While some vendors may be responsive to security concerns on their phones, he believes that others might never roll out patches at all.

Cannings says that when a bug is found, Google notifies its carriers--currently 32 companies in 21 countries--and works to provide them with test builds of its proposed solutions. When the carriers are satisfied, they push the fix out to their customers.

No product is ever truly secure, Cannings says, but Google is working to prepare Android for the malware attacks that will inevitably come as smartphones become more popular.



http://www.technologyreview.com/computing/23241/

Friday, August 14, 2009

Killing Cancer Stem Cells

A new screening method identifies drugs that selectively target these elusive cells in tumors.

By Courtney Humphries


Recent evidence suggests that certain cancers may persist or recur after treatment because a small population of cells, called cancer stem cells, remains behind to seed new tumors. Though scientists are not yet certain about the role cancer stem cells play in disease, evidence is accumulating that these cells are particularly resistant to chemotherapy and radiation, and can linger in the body even after treatment.

Cell attack: Cancer cells in tumors treated with salinomycin--a drug that specifically targets cancer stem cells--have a less malignant appearance (right) and look more like differentiated cells than untreated cells (left).
Credit: Piyush Gupta, Kai Tao, and Charlotte Kuperwasser

Several research groups have begun looking for substances that kill these cells. A new approach, developed by researchers at the Whitehead Institute for Biomedical Research and the Broad Institute of MIT and Harvard, makes use of high-throughput screening methods to identify chemicals that selectively target these elusive cells. In a study published today in Cell, the researchers identify one particular drug that kills breast cancer stem cells in mice. Although it is still unclear whether the drug will be useful in humans, the researchers believe their study demonstrates that it's possible to target these cells selectively.

Because cancer stem cells, which have the ability to give rise to new tumors, may remain behind after chemotherapy and radiation treatments, finding ways to target these cells specifically may offer a way to make treatment more effective. But accessing and studying cancer stem cells has been challenging because very few are present in tumors and they are difficult to generate and maintain outside the body. Other groups have recently screened for drugs that target leukemia stem cells and brain cancer stem cells. In the Cell paper, a team led by the labs of Eric Lander at the Broad Institute and Robert Weinberg at the Whitehead Institute developed a way to generate a large number of cells that mimic naturally occurring epithelial cancer stem cells; these cells can be maintained in this state for long periods of time.

Epithelial cancers are the most common types of cancer in adults and affect the skin and inner lining of organs in the body. Using epithelial breast cancer cells, the researchers introduced a genetic change in these cells, causing them to take on the properties of mesenchymal cells, which form connective tissue in the body. Piyush Gupta, a co-author at the Broad Institute, says that for reasons not completely understood, when this "epithelial-to-mesenchymal transition" is performed on breast cancer cells, it promotes the development of a large number of cells that he says are "indistinguishable from cancer stem cells." These cells can then be grown in tiny wells on plates and screened robotically for their response to large collections of chemicals.

The researchers used a library of 16,000 chemicals at the Broad Institute to look for compounds that killed these transformed breast cancer stem cells more effectively than they killed normal breast cancer cells. Gupta explains that since cancer stem cells are usually resistant to drugs, relatively few chemicals are effective--a mere 32 compounds emerged from the screen as preferentially killing breast cancer stem cells.

After some initial testing of several compounds, the researchers focused on one drug called salinomycin. They compared it to the actions of a drug commonly given in breast cancer chemotherapy, paclitaxel (also known by its brand name, Taxol), in cultured cells and in mice. While paclitaxel treatment led to a higher proportion of drug-resistant cancer stem cells, salinomycin had the opposite effect, reducing the number of breast cancer stem cells in cultured cells more than 100 times more effectively than paclitaxel did. The drug also reduced breast tumor growth in mice, although the reduction was less dramatic.

Gupta says that it's not clear whether salinomycin will be a clinically useful drug, because it has not yet been tested in humans. The team is continuing to study this initial candidate drug, but he also notes, "we're following up on several others that we think may be promising."

Jeffrey Rosen, a breast cancer researcher at Baylor College of Medicine, in Houston, TX, says that the study is an early example of a promising new turn in the hunt for cancer therapies. "It's very exciting that some groups are starting not to view tumors as homogeneous entities but to target subpopulations of cells we think are important for drug resistance," he says. However, Rosen notes that the results in mice were not as promising as the drug's performance in cells. He says that the cancer field is hampered by a lack of good animal models to determine which drugs will be relevant for therapies. The problem, he says, is "once you pull out a compound or drug, then how do you actually go the next step and show that it's really going to work?"

Weinberg calls the study "the first step in the direction of trying to eliminate these cells in tumors." He believes that even if the role of cancer stem cells in different kinds of cancer has not been resolved, "we have no doubt that getting rid of them is going to be an important part of creating cures."

Although this study focused on breast cancer, the researchers anticipate that the screen could be applied to any kind of epithelial cancer. Gupta says that while targeting cancer stem cells may not necessarily be a "magic bullet" in cancer treatment, "if you have a certain subpopulation of cancer cells that are resistant to standard treatment, you would want to find a compound that targets these cells." He adds that a drug that targets cancer stem cells could be used in combination with standard treatments to ensure that resistant cells are not left behind.


http://www.technologyreview.com/biomedicine/23222/

Microsoft Team Traces Malicious Users

Three researchers find a way to trace compromised machines used to attack other computers.

By Robert Lemos


Anonymity on the Internet can be both a blessing and a curse. While the ability to hide behind anonymous proxies and fast-changing Internet protocol (IP) addresses has enabled freer speech in nations with repressive regimes, the same technologies allow cybercriminals to hide their tracks and pass off malicious code and spam as legitimate communications.

Credit: Technology Review

In a paper to be presented next week at SIGCOMM 2009 in Barcelona, Spain, three researchers from Microsoft's research center in Mountain View, CA, demonstrate a way to remove the shield of anonymity from such shadowy attackers. Using a new software tool, the three computer scientists were able to identify the machines responsible for malicious activity, even when the host's IP address changed frequently.

"What we are really trying to get at is the host responsible for an attack," said Yinglian Xie, a member of the Microsoft team. "We are not trying to track those identifiers but associate them with a particular host."

The prototype system, dubbed HostTracker, could result in better defenses against online attacks and spam campaigns. Security firms could, for example, build a better picture of which Internet hosts should be blocked from sending traffic to their clients, and cybercriminals would have a harder time camouflaging their activities as legitimate traffic.

Xie and her colleagues, Fang Yu and Martin Abadi, analyzed a month's worth of data--330 gigabytes--collected from a large e-mail service provider, in an attempt to determine which users were responsible for sending out spam. To trace the origins of multiple spam outbreaks, the scientists studied records including more than 550 million user IDs, 220 million IP addresses, and a time stamp for events such as sending a message or logging into an account.

Tracing the origins of messages--a key task for tracking spam and other kinds of Internet attack--involved reconstructing relationships between account IDs and the hosts from which users connected to the e-mail service. To do this, the researchers clumped together all the IDs accessed from different hosts over a certain time period. The HostTracker software then combed through this data to resolve any conflicts. For example, sometimes more than one user appeared to originate from the same IP address, or a single user had multiple IP addresses during overlapping periods of time.
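
Conceptually, that grouping step might look something like the following Python sketch. It is our illustration of the general idea rather than the HostTracker code itself, and the one-hour windows and field names are assumptions.

```python
# Our illustration of the general idea, not Microsoft's HostTracker code.
# Account IDs are grouped by the host IP they were seen from within a time
# window; conflicts (one account on several IPs in the same window) are the
# cases the real system has to resolve.

from collections import defaultdict
from datetime import datetime

def hour_bucket(ts: datetime) -> datetime:
    # Coarse one-hour windows; the real system's windowing is more refined.
    return ts.replace(minute=0, second=0, microsecond=0)

def bind_accounts_to_hosts(events):
    """events: iterable of (timestamp, user_id, ip_address) records for
    actions such as logging in or sending a message."""
    bindings = defaultdict(set)          # (ip, window) -> user IDs seen there
    for ts, user, ip in events:
        bindings[(ip, hour_bucket(ts))].add(user)
    return bindings

def find_conflicts(bindings):
    """Return users that appear on more than one IP within the same window,
    e.g. because of proxies or fast-changing dynamic addresses."""
    ips_per_user = defaultdict(set)      # (user, window) -> IPs
    for (ip, window), users in bindings.items():
        for user in users:
            ips_per_user[(user, window)].add(ip)
    return {key: ips for key, ips in ips_per_user.items() if len(ips) > 1}
```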

HostTracker resolves the conflicts by cross referencing the data to identify proxy servers, which allow several hosts to appear as a single IP address, and to determine when a guest was using a legitimate host. "The fact that we are able to trace malicious traffic to the proxy itself is an improvement because we are able to pinpoint the exact origin," Xie says.

The researchers also created a way to automatically blacklist traffic from a particular IP address, once the HostTracker system has determined that the host at that address is compromised. Using this method in simulation, the researchers were able to block malicious traffic with an error rate of five percent--in other words, 5 out of 100 IP addresses classified as malicious were actually legitimate. Using additional information to identify good user behavior reduced that false-positive rate to less than one percent.

The results suggest that HostTracker would be a good way to refine the current way of defending against distributed denial-of-service attacks and spam campaigns, says Gunter Ollmann, vice president of research and development at Damballa, a firm that helps companies find and eliminate compromised hosts in a computer network.

"Using this technique will help find botnets that have a high frequency of traffic, such as spam campaigns, DDoS attacks, and maybe click-through attacks," Ollmann says. "Other attacks, such as password-stealing and banking trojans, where the attack is more host-centric--this sort of technique would not be as effective."

Xie acknowledges that while the technique is useful for creating lists of hosts to track, it may be less useful for law enforcement agencies attempting to identify the attackers behind online crime. "The accountability we are talking about is not court accountability," she says. "We want to separate the two notions. The accountability that we are talking about is the ability to identify the hosts."


http://www.technologyreview.com/computing/23224/


Wednesday, August 12, 2009

Molecular Condom Blocks HIV

A novel gel that filters out HIV could protect women from infection.

By Emily Singer


A polymer gel that blocks viral particles could one day provide a way for women to protect themselves against HIV infection. The gel reacts with semen to form a tight mesh that blocks the movement of virus particles. The material, which is still in early development, could eventually be combined with antiviral gels currently in clinical trials to provide a dual defense against HIV.


Scientists have been working on microbicide gels for HIV for more than a decade. This type of prophylactic, which women could use without relying on their partners, is of particular interest in areas such as Sub-Saharan Africa, where HIV-infection rates are high and use of condoms is relatively low. But development has been slow--a number of products have failed clinical trials.

Most of the topical microbicides being tested for HIV prevention contain antiviral drugs designed to block replication of the virus once it infects a cell. The new gel, which is being developed by Patrick Kiser and colleagues at the University of Utah, in Salt Lake City, acts at the first stage of infection--when the virus moves from semen to the surface of vaginal tissue.

"This research stresses improvement not in the drugs but in the vehicle used to deliver the drugs," says Ian McGowan, a physician and scientist at the University of Pittsburgh Medical Center who was not involved in the research. "That's a relatively neglected area, and the technology is quite exciting."

Kiser and colleagues developed a gel from two polymers--PBA (phenylboronic acid) and SHA (salicylhydroxamic acid)--that can be spread around the vagina prior to intercourse. With the introduction of semen, the vagina reaches a higher pH level, causing molecules in the gel to bind together, creating a finer mesh that prevents HIV particles from passing through. "The idea is to use the trigger of semen to activate the gel and create a more effective barrier," says Kiser.

In research published this week in the journal Advanced Functional Materials, the researchers showed in lab tests that the gel can block the movement of HIV particles, and that it appears safe when tested in human vaginal cells. The next step is to test the gel on human tissue collected from women who have had hysterectomies to show that it can prevent infection.

"It's a very interesting approach to take advantage of normal vaginal physiology and alter it to inhibit HIV transmission," says Craig Hoesley, an infectious-disease specialist at the University of Alabama, in Birmingham. But this might also prove troublesome. McGowan points out that the change in pH after intercouse can be variable, so researchers need to show that the gel can react under different chemical conditions.

Kiser and his team ultimately want to combine this type of gel with an antiviral drug in order to block both the movement of HIV and its replication. But extensive testing, including safety testing, remains to be done. For example, for use in Sub-Saharan Africa, the gel must be stable at different temperatures. "We will also need to see if it is compatible with antiviral drugs," says McGowan.


http://www.technologyreview.com/biomedicine/23214/

Lithium Battery Recycling Gets a Boost

The DOE funds a company that recycles plug-in vehicle batteries.

By Tyler Hamilton


The US Department of Energy has granted $9.5 million to a company in California that plans to build America's first recycling facility for lithium-ion vehicle batteries.

Waste materials: Recycling worn-out batteries from electric cars produces a mix of finely shredded metals, consisting of cobalt, aluminum, nickel, and copper (shown on the left), and a slurry that is processed into a cobalt cake (on the right).
Credit: Tesla Motors

Anaheim-based Toxco says it will use the funds to expand an existing facility in Lancaster, OH, that already recycles the lead-acid and nickel-metal hydride batteries used in today's hybrid-electric vehicles.

There is currently little economic need to recycle lithium-ion batteries. Most batteries contain only small amounts of lithium carbonate as a percentage of weight and the material is relatively inexpensive compared to most other metals.

But experts say that having a recycling infrastructure in place will ease concerns that the adoption of vehicles that use lithium-ion batteries could lead to a shortage of lithium carbonate and a dependence on countries such as China, Russia, and Bolivia, which control the bulk of global lithium reserves. "Right now it hardly pays to recycle lithium, but if demand increases and there are large supplies of used material, the situation could change," says Linda Gaines, a researcher at the Argonne National Laboratory's Transportation Technology R&D Center.

Toxco's DOE grant may seem like pocket change--last week the DOE awarded a total of $2.4 billion to companies developing batteries and systems for electric vehicles--but it's also early days for the project. Sales of plug-in hybrids and all-electric vehicles have yet to take off, and though President Barack Obama has pledged to get a million plug-in hybrids on US roads by 2015, it will likely be a decade before any large-scale recycling capability is required.

Demonstrating the capacity to recycle, however, will be key to showing that electric vehicles are truly "green"--both emission-free in operation and sustainable in design. "Management of these batteries has to be done in an environmentally responsible way and in an economic way," says Todd Coy, executive vice president of Kinsbursky Brothers, Toxco's parent company.

Toxco also has an edge over newcomers to the market. The company is already North America's leading battery recycler and has been recycling single-charge and rechargeable lithium batteries used in electronics devices and industrial applications since 1992 at its Canadian facility in Trail, British Columbia. "We're managing the bulk of the batteries already out there," says Coy.

The Trail facility is also the only one in the world that can handle different sizes and chemistries of lithium batteries. When old batteries arrive they go into a hammer mill and are shredded, allowing components made of aluminum, copper, and steel to be separated easily. Larger batteries that might still hold a charge are cryogenically frozen with liquid nitrogen before being hammered and shredded; at -325 degrees Fahrenheit, the reactivity of the cells is reduced to zero. Lithium is then extracted by flooding the battery chambers in a caustic bath that dissolves lithium salts, which are filtered out and used to produce lithium carbonate. The remaining sludge is processed to recover cobalt, which is used to make battery electrodes. About 95 percent of the process is completely automated.

The DOE grant will help Toxco transfer the Trail recycling process to its Ohio operations, laying the foundation for an advanced lithium-battery recycling plant that can expand to accommodate expected growth in the US electric-vehicle market. The electric-car maker Tesla Motors, like most major automakers, already sends old or defective battery packs to Toxco's Trail facility for recycling. "It's very important for us," says Kurt Kelty, director of energy storage technologies at Tesla. "The recycling issue is a key issue and we need to get it right."

But Kelty says the economics of recycling depend largely on the chemistries of the lithium-ion batteries being used. He adds that lithium is currently one of the least valuable metals to retrieve. For example, the lithium in a Tesla Roadster battery pack would represent roughly $140 of a system with a replacement cost of $36,000. For most lithium-ion batteries, the lithium represents less than 3 percent of production cost.

"The lithium part is a really negligible cost when you compare it to other metals; nickel, cobalt, those are going to be the biggest drivers [of recycling]," says Kelty, adding that Tesla actually makes money by recycling just the nonlithium recycled components of its batteries. "So while we've been reading plenty of articles about the industry running out of lithium, it's totally missing the mark. There's plenty of lithium out there."

Estimates vary, but cobalt sells on the market for about $20 per pound, compared with $3 per pound for lithium carbonate. Cobalt, a byproduct of nickel and copper mining, is also scarcer, and half of the world's supply comes from the Democratic Republic of Congo, a politically unstable country.

Some lithium-ion chemistries are less cost-effective to recycle. For example, the lithium iron phosphate batteries produced by A123 Systems yield less valuable material when recycled. The lower-cost materials in A123's batteries give the company an edge over competitors but also make its batteries less economical to recycle.

The lithium situation could change. Industry research consultant Tru Group says the global recession has led to a large surplus of lithium in the market, keeping prices low. The consultancy, however, expects that by 2013, supply and demand will be in balance again and that a production crunch could occur around 2017 and beyond.

Over the long term, some observers believe the mass introduction of plug-in hybrid and electric vehicles, combined with the fact that much of the world's lithium reserves lie in foreign and potentially unfriendly countries, could lead to a large spike in the price of lithium carbonate. The concern is that we could end up trading "peak oil" for "peak lithium."

Gaines is looking at the scarcity issue closely. She is overseeing a four-year project at Argonne that will assess the long-term demand for lithium-ion battery materials and recycling infrastructure. Gaines says research to date shows that demand could be met until 2050, even if plug-in vehicle sales grow dramatically. But recycling will be crucial to helping the US become less dependent on foreign sources of lithium. "We show that recycling would alleviate potentially tight supplies," she says.


http://www.technologyreview.com/energy/23215/

Finding the Right Piece of Sky

Researchers design a search system that recognizes the features of pictures of the sky.

By Erica Naone


Last week at SIGGRAPH, an international conference on computer graphics, a group presented an innovative system designed to analyze images of the sky. Most commercial image-search systems figure out what's in an image by analyzing the associated text, such as the words surrounding a picture on a Web page or the tags provided by humans. But ideally, the software would analyze the content of the image itself. Much research has been done in this area, but so far no single system has solved the problem. The new system, called SkyFinder, could offer important insight into how to make an intuitive, automatic, scalable search tool for all images.

Searching the skies: SkyFinder automatically divides images into pieces and assigns tags to each, allowing users to find images like the ones above.
Credit: Hamed Saber, CoreBurn, Beyond the Lens, pfly (CC2.0 license)

Jian Sun, who worked on SkyFinder and is the lead researcher for the Visual Computing Group at Microsoft Research, says that the traditional approach to image search sometimes leads to nonsensical results when a computer misinterprets the surrounding text. Typically, engines that analyze the content of images instead of text need a picture to guide the search--something submitted by the user that looks a lot like their intended result. Unfortunately, such an image may not be easy for the user to find. Sun says SkyFinder, in contrast, provides good results while also letting the user interact intuitively with the search engine.

To search for a specific kind of sky image, the user simply enters a request in fairly natural language, such as "a sky covered with black clouds, with the horizon at the very bottom." SkyFinder will offer suggested images matching that description.

Each image is processed after it is added to the database. Using a popular method called "bag of words," Sun explains, the image is broken into small patches, each of which is analyzed and assigned a codeword describing it visually. By analyzing the patterns of the codewords, the system classifies the image in categories such as "blue sky" or "sunset," and determines the position of the sun and horizon. By doing this work offline, Sun says, the system can easily be scaled to search very large image databases. (The SkyFinder database currently contains half a million images.)
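
For readers who want a concrete picture, the generic bag-of-words pipeline can be sketched in a few lines of Python; this is an illustration of the technique, not SkyFinder's implementation, and the patch size, codebook size, and function names are our own assumptions.

```python
# A sketch of the generic bag-of-words pipeline for images, not SkyFinder's
# own code. Requires NumPy and scikit-learn.

import numpy as np
from sklearn.cluster import KMeans

PATCH = 16  # patch size in pixels (an assumption)

def extract_patches(image):
    """Split a grayscale image (2-D array) into flattened PATCH x PATCH tiles."""
    h, w = image.shape
    return np.array([image[y:y + PATCH, x:x + PATCH].ravel()
                     for y in range(0, h - PATCH + 1, PATCH)
                     for x in range(0, w - PATCH + 1, PATCH)])

def build_codebook(training_images, n_codewords=256):
    """Cluster patches from a training set; the cluster centers act as codewords."""
    all_patches = np.vstack([extract_patches(img) for img in training_images])
    return KMeans(n_clusters=n_codewords, n_init=4).fit(all_patches)

def codeword_histogram(image, codebook):
    """Describe an image by the frequency of its patches' nearest codewords;
    a classifier can then label the histogram ('blue sky', 'sunset', ...)."""
    words = codebook.predict(extract_patches(image))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```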

It's also possible to fine-tune search terms using a visual interface. The system offers a screen, for example, where the user can adjust icons to show the desired positions of the sun and horizon. Those coordinates are added to the search.

SkyFinder arranges images logically on the screen--for example, from blue sky to cloudy sky, or from daytime to sunset. Once the user has found an image she likes, she can use it to guide a more targeted search to find similar images.

The system also includes tools to help a user replace the sky in one image with the sky from another picture.

"Computer graphics has had enormous successes in the past decades, but it is still impossible for an average computer user to synthesize an arbitrary image or video to their liking," says James Hays, who was not involved with the research and has a PhD in computer science from Carnegie Mellon University. He believes it's important to develop more-sophisticated tools for inexperienced users. Such people could use a tool like SkyFinder to find an image they want or to make adjustments to an existing image. Hays believes SkyFinder's main contribution is its user interface.

Ritendra Datta, an engineer at Google who has studied machine learning and image search, says that allowing computers to understand automatically what's being shown in an image remains one of the major open problems in image search. "SkyFinder seems to be an interesting new approach" that works for one type of image. Datta believes that advances in specialized applications could eventually be applied on a broader scale.

He thinks, however, that thorough usability studies are badly needed for search systems that rely on automatic analysis of images.

Sun plans to improve SkyFinder by adjusting it to analyze more attributes of the sky and by expanding the database. For now, he says, systems that automatically analyze images have to be trained completely differently depending on what type of image they're working with. However, he says his work with SkyFinder could be used to identify pictures of the sky among a general bank of images.



http://www.technologyreview.com/web/23213/

Tuesday, August 11, 2009

Supercritical Fuel Injection

A supercritical diesel engine could increase efficiency and cut emissions.

By Duncan Graham-Rowe


Researchers in New York have demonstrated a supercritical diesel fuel-injection system that can reduce engine emissions by 80 percent and increase overall efficiency by 10 percent.

Going supercritical: This laboratory equipment is being used to study supercritical diesel fuel.
Credit: George Anitescu, Syracuse University

Diesel engines tend to be more efficient than gasoline engines, but the trade-off is that they are usually more polluting. Because diesel is heavier, more viscous, and less volatile than gasoline, not all the fuel is burned during combustion, and some carbon compounds are released as harmful particulate soot. The higher combustion temperatures required to burn diesel also lead to increased emissions of nitrogen oxides.

A fluid becomes supercritical when its temperature and pressure exceed a critical boundary point, causing it to take on novel properties between those of a liquid and a gas. George Anitescu, a research associate at the Department of Biomedical and Chemical Engineering at Syracuse University in New York state, who developed the new engine design, says that supercritical diesel can be burned more efficiently and cleanly.

By raising diesel to a supercritical state before injecting it into an engine's combustion chamber, viscosity becomes less of a problem, says Anitescu. Additionally, the high molecular diffusion of supercritical fluids means that the fuel and air mix together almost instantaneously. So instead of trying to burn relatively large droplets of fuel surrounded by air, the vaporized fuel mixes more evenly with air, which makes it burn more quickly, cleanly, and completely. In a sense, it is like an intermediate between diesel and gasoline, but with the benefits of both, says Anitescu, who presented his work last week at Directions in Engine-Efficiency and Emissions Research, a conference held in Dearborn, MI.

In the past, another related approach, called homogeneous charge compression ignition, has been used to improve the performance of diesel. This involves premixing diesel and air before injecting it as a vapour into a combustion chamber under high pressure. But while this mixture burns more efficiently, it also makes combustion more difficult to control, which can lead to engine knocking: shockwaves within the engine's cylinders caused by pockets of unburned fuel and air. In contrast, supercritical diesel injection produces very small vapour-like droplets, but with fuel densities equivalent to a liquid, says Anitescu.

Andreas Birgel, a researcher with the Internal Combustion Engines and Fuel Systems Research Group at University College London, UK, says there is plenty of interest in producing diesel that vaporizes more easily, for example, by using corn or rapeseed oil to make biodiesel, which has a relatively low viscosity. Another approach is to treat conventional diesel with additives, he says.

In order for the diesel to reach a supercritical state, Anitescu's fuel system first has to heat it to around 450 degrees Celsius at a pressure of about 60 megapascals. Achieving the pressure is not a problem, Anitescu says, but increasing the temperature is more demanding.

Because fuel systems usually operate at temperatures below 80 degrees Celsius, Anitescu and his colleagues used the heat from the engine's exhaust to raise the fuel's temperature. This causes further complications. "You need to prevent it from coking," he says. Coking occurs when hydrocarbons in the fuel react, producing sticky deposits that can lead to fuel-system failures. The phenomenon can be avoided by diluting the fuel with an additive, such as carbon dioxide or water. In the Syracuse engine, a small amount of exhaust gas is introduced to act as an anticoking agent, a technique known as exhaust-gas recirculation.

The system has only been tested in a laboratory setup, but a prototype could be ready for testing by the end of the year, says Anitescu. The fuel system is designed to use conventional fuel injectors, even though these are designed to work with regular fluids. Anitescu says it may be possible to improve the performance by switching to a fluid state just below supercritical. This may allow vaporization to occur while getting better performance out of the injectors. "We have many options here," he says.

At the same conference, Transonic Combustion, a company based in Camarillo, CA, presented details of an alternative way to use supercritical fuels that involves a novel fuel injector and a redesign of the engine's entire valve system and combustion chamber.

But with either approach, going supercritical does not come without a cost, says Birgel. "You still need the viscosity because most diesel fuel systems depend upon the fuel for lubrication," he says.

"This is an issue which has yet to be addressed," admits Anitescu. He says it may be possible to introduce lubricants, but this would only be necessary in the final stage of the fuel system, where the fluid is at its hottest. For subcritical fuels, it may not be an issue, he says.


http://www.technologyreview.com/energy/23156/


Nanowires That Behave Like Cells

Transistors with lipid membranes could make better interfaces for neural prosthetics.

By Katherine Bourzac


Researchers at the Lawrence Livermore National Laboratory have sealed silicon-nanowire transistors in a membrane similar to those that surround biological cells. These hybrid devices, which operate similarly to nerve cells, might be used to make better interfaces for prosthetic limbs and cochlear implants. They might also work well as biosensors for medical diagnostics.

Hybrid nanowire: The silicon nanowire shown in the microscope image (top) is covered in a fatty membrane similar to those that surround biological cells. The bottom image is an illustration depicting the two layers of lipid molecules that surround the nanowire, sealing it from the surrounding environment. Ions can pass through the membrane via an ion channel, depicted here in lavender.
Credit: Aleksandr Noy

Biological communication is sophisticated and remains unmatched in today's electronics, which rely on electrical fields and currents. Cells in the human body use many additional means of communication including hormones, neurotransmitters, and ions such as calcium. The nexus of biological communication is the cell membrane, a double layer of fatty molecules studded with proteins that act as gatekeepers and perform the first steps in biological signal processing.

Aleksandr Noy, a chemist at the national lab, gave silicon nanowires a cell membrane in the hopes of making better bioelectronics. "If you can make modern microelectronics talk to living organisms, you can make more-efficient prosthetics or new types of biosensors for medical diagnostics," says Noy. For example, if the electrodes connecting a prosthetic device with the nervous system could read chemical signals instead of just electrical ones, the person wearing it might have better control over the prosthetic.

Noy started by making arrays of silicon-nanowire transistors--rows of 30 nanometer-diameter wires bounded at either end by electrical contacts--using methods developed by other researchers. The arrays were placed in a microfluidic device. Noy's group used the microfluidics to deliver hollow spheres of fatty membrane molecules. The spheres are attracted to the negatively charged surfaces of the nanowires, where they accumulate and fuse together to form a continuous membrane that completely seals each nanowire just as a biological membrane seals the contents of a cell. Bare nanowire transistors exhibit a measurable change in their electrical properties when exposed to acidic or basic solutions; the membrane-protected nanowires do not, because the fatty layer seals out the harsh solution--just like a biological cell membrane.

To give the coated nanowires electrical gates--essentially, a means of making them responsive to the surrounding chemical environment--Noy added proteins to form ion channels, which control the flow of charged atoms and molecules across cell membranes. When put into solution with the nanowires, these proteins insert themselves into the membrane. Noy's group tested the devices with two types of ion channels: one that always allows small, positively charged ions to pass through and one that does so only in response to a voltage change that can be produced by the nanowire. This voltage-responsive protein is often used to mimic nerve-cell electrical signals. The nanowires with ion channels were able to sense the presence of ions in the solution. By using the nanowire to create a voltage difference across the membrane, the voltage-responsive protein can be opened and closed, effectively allowing the nanowire to turn its chemical-sensing ability on or off. "The neuron is a good analog in some ways," Noy says of these devices.

Noy's work, described this week in the Proceedings of the National Academy of Sciences, opens new avenues because it makes the nanowires more like cells, says Yi Cui, assistant professor of materials science and engineering at Stanford University. With Charles Lieber, a chemist at Harvard University, Cui has made silicon nanowires into very sensitive sensors by coating the nanowires with antibodies. The sensors could, for example, detect blood proteins characteristic of cancer. Noy's work, Cui says, "is a really creative way to integrate a transistor with a cell membrane." By coating the nanowires, Noy can take advantage of everything that biological cell membranes have to offer, including the ability to sense and respond to voltage changes, as well as ions, proteins, and other biomolecules. This range of functionality can't be achieved with antibodies, says Cui.

Next, Noy plans to develop more-sophisticated nanowire-hybrid devices. So far, each device has been equipped with only one type of ion channel, which limits the complexity of the functions they can carry out. (Biological cells are coated with many different membrane proteins.)

The researchers will also begin testing the devices' interactions with living cells. Other researchers, including Peidong Yang at the University of California, Berkeley, and Harvard's Lieber, have used bare silicon nanowires to interface with neurons, stem cells, heart cells, and other tissues. They've shown that the nanowires can send and receive electrical signals with very high spatial resolution, even within single cells. Noy's initial work remains a proof of concept.



http://www.technologyreview.com/computing/23157/

Nanoconstruction with Curved DNA

A breakthrough in DNA origami creates twisted and curved shapes to order.

By Courtney Humphries


DNA nanotechnology uses the unique physical properties of DNA molecules to design and create nanoscale structures, with the hope of one day creating tiny machines that work together just like the parts of a cell. But one of the challenges of the field is finding ways to design and engineer DNA structures with high precision. A recent study published in Science marks a breakthrough in researchers' ability to shape DNA: it describes a method, developed by scientists at Harvard and the Technical University of Munich in Germany, for building three-dimensional DNA shapes with elaborate twists and curves at unprecedented precision.

Fixed gear: A new method for designing three-dimensional shapes from DNA makes it possible to create curved parts, including this nanoscale "gear" with twelve teeth.
Credit: Hendrik Dietz/Science

Hao Yan, a biochemistry professor at Arizona State University who was not involved in the study, says that the work adds a key level of control over previous methods. "I think we can say that it is possible to create any kind of architecture using DNA," he says.

A key advantage of using DNA as a construction material is that it is programmable. DNA molecules consist of strings of linked nucleotide bases of four types: A, T, G, and C. These bases stick to the bases on another DNA strand following a simple rule: A pairs with T, and C pairs with G. By creating DNA sequences with complementary bases on different strands, it is therefore possible to design DNA molecules that self-assemble into certain shapes according to predictable rules.
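
As a small illustration of how mechanical that rule is, the strand that will bind to any given sequence can be computed directly; the Python sketch below is ours, and real staple design (see the design software mentioned later in the story) involves much more than taking complements.

```python
# A minimal illustration of the base-pairing rule (ours, not from the paper).

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that binds antiparallel to the given strand."""
    return "".join(PAIR[base] for base in reversed(strand))

scaffold_segment = "ATGCCGTTA"           # a made-up stretch of scaffold
staple = reverse_complement(scaffold_segment)
print(staple)  # "TAACGGCAT" -- a staple with this sequence sticks to that segment
```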

Previous work used a method called "DNA origami" to design two-dimensional shapes from DNA; further studies have built upon this approach to create shapes in three dimensions. DNA origami uses one very long strand of DNA, called the scaffold, and hundreds of shorter strands, called staples. The staples bind to the scaffold at certain sites based on their sequence, pinching the scaffold and forcing it to double back many times over to create a sheet in a particular shape.

The Science study extends work by the same team of researchers, adapting the DNA origami method to create more-complex three-dimensional shapes. Previously, the team designed DNA to form helices bundled by cross-linked staple strands in a honeycomb-like lattice. In the current study, the researchers introduced bends and twists into these shapes by adding or deleting bases at certain points in the scaffold, changing the local forces that the helices exert on one another and forcing the entire structure to curve to the right or left. They found that they could control the degree of curvature with a great deal of precision, achieving sharp bends similar to those of the tightly wound DNA found in cells.

The researchers created objects including nanoscale "gears," a wireframe beach ball-shaped capsule, and triangles with either concave or convex sides. Shawn Douglas, a co-author at Harvard University, developed a publicly available computer-aided design program that can serve as a visual interface for designing the DNA shapes.

Bendable molecules: A bundle of DNA helices (top row) can be made to bend at precise angles (the other rows) by introducing or deleting base pairs in the DNA sequence.
Credit: Hendrik Dietz/Science

William Shih, a co-author of the study and assistant professor of biological chemistry and molecular pharmacology at Harvard Medical School, says that the ability to make curved structures adds an important element to the DNA nanoscience toolbox. He points out that objects like rings, springs, and gears are important for machines at the macroscale, while cells also contain elements with curved parts, suggesting that these properties are important on the nanoscale. "If we didn't have this general building capability, we would be handicapped in our ability to build useful devices," he says.

Chengde Mao, an associate professor of analytical chemistry at Purdue University, calls the achievement "surprising" and says that his own lab has attempted to make similar structures and failed. He says the work shows not only that DNA can be twisted and bent to extreme degrees but also that "one of the nice things is that it's a really smooth curve," whereas other attempts have resulted in shapes that look pixelated.

The practical applications of the technique are still unclear, but there are many possibilities. Since the DNA shapes described in the Science paper are the size of an average virus, Shih says they could perhaps be designed to enter a cell like a virus in order to release a drug. DNA parts might also be used to design molecular electronics, which could someday offer a new level of miniaturization for faster computers.

Yan says that the study adds to the impressive abilities of DNA, but cautions that scientists need to study these structures further to see how stable they are and how well they hold up over time.


http://www.technologyreview.com/biomedicine/23155/

An Operating System for the Cloud

Google is developing a new computing platform equal to the Internet era. Should Microsoft be worried?

By G. Pascal Zachary


From early in their company's history, Google's founders, Larry Page and Sergey Brin, wanted to develop a computer operating system and browser.

Credit: Brian Stauffer

They believed it would help make personal computing less expensive, because Google would give away the software free of charge. They wanted to shrug off 20 years of accumulated software history (what the information technology industry calls the "legacy") by building an OS and browser from scratch. Finally, they hoped the combined technology would be an alternative to Microsoft Windows and Internet Explorer, providing a new platform for developers to write Web applications and unleashing the creativity of programmers for the benefit of the masses.

But despite the sublimity of their aspirations, Eric Schmidt, Google's chief executive, said no for six years. Google's main source of revenue, which reached $5.5 billion in its most recent quarter, is advertising. How would the project they envisioned support the company's advertising business? The question wasn't whether Google could afford it. The company is wonderfully profitable and is on track to net more than $5 billion in its current fiscal year. But Schmidt, a 20-year veteran of the IT industry, wasn't keen on shouldering the considerable costs of creating and maintaining an OS and browser for no obvious return.

Finally, two years ago, Schmidt said yes to the browser. The rationale was that quicker and more frequent Web access would mean more searches, which would translate into more revenue from ads. Then, in July of this year, Schmidt announced Google's intention to launch an operating system as well. The idea is that an OS developed with the Internet in mind will also increase the volume of Web activity, and support the browser.

Google's browser and OS both bear the name Chrome. At a year old, the browser holds a mere 2 to 3 percent share of a contested global market, in which Microsoft's Internet Explorer has a majority share and Firefox comes in second. The Chrome operating system will be released next year. Today, Windows enjoys around 90 percent of the global market for operating systems, followed by Apple's Mac OS and the freeware Linux. Does Google know what it's doing?

Ritualized Suicide
Going after Microsoft's operating system used to be hopeless. When I covered the company for the Wall Street Journal in the 1990s, I chronicled one failed attempt after another by software innovators to wrest control of the field from Bill Gates. IBM failed. Sun failed. Borland. Everybody. By the end of the 1990s, the quest had become a kind of ritualized suicide for software companies. Irresistible forces seemed to compel Gates's rivals, driving them toward self-destruction.

The networking company Novell, which Schmidt once ran, could have been one of these casualties. Perhaps Schmidt's managerial experience and intellectual engagement with computer code immunized him against the OS bug. In any case, he knew that the task of dislodging Microsoft was bigger than creating a better OS. While others misguidedly focused on the many engineering shortcomings of Windows, Schmidt knew that Microsoft was the leader not for technical reasons but for business ones, such as pricing practices and synergies between its popular office applications and Windows.

So for Schmidt to finally agree to develop an OS suggests less a technological shift than a business revolution. Google's new ventures "are game changers," he now says.

What has changed? Google has challenged the Microsoft franchise, further diminishing a declining force. The latest quarter gave Microsoft the worst year in its history. Revenue from its various Windows PC programs, including operating systems, fell 29 percent in the fiscal quarter that ended in June. Some of the decline stems from the global economic slowdown. But broad shifts in information technology are also reducing the importance of the personal computer and its central piece of software, the OS. In many parts of the world, including the two most populous countries, China and India, mobile phones are increasingly the most common means of reaching the Web. And in the rich world, netbooks, which are ideal for Web surfing, e-mailing, and Twittering, account for one in every 10 computers sold.

Another powerful trend that undercuts Microsoft is toward programs that look and function the same way in any operating system. "Over the past five years there's been a steady move away from Windows-specific to applications being OS-neutral," says Michael Silver, a software analyst at the research firm Gartner.

One example would be Adobe Flash. Such popular social applications as Facebook and Twitter are also indifferent to operating systems, offering users much the same experience no matter what personal computer or handheld device they use. Since so many people live in their social-media sites, the look and feel of these sites has become at least as important as the user interface of the OS. The effect is to shrink the role of the OS, from conductor of the orchestra to merely one of its soloists. "The traditional operating system is becoming less and less important," says Paul Maritz, chief executive of VMware, who was once the Microsoft executive in charge of the operating system. By and large, he has noted, "people are no longer writing traditional Windows applications."

Microsoft's troubles make the company's OS doubly vulnerable. Vista, its current version, has been roundly criticized, and it has never caught on as widely as the company anticipated; many Microsoft customers continue to use the previous version of Windows, XP. A new version being released this fall, Windows 7, promises to remedy the worst problems of Vista. But even 7 may not address a set of technical issues that both galvanize Microsoft's critics and stoke the appetites of Brin and Page to create a more pleasing alternative. In their view, the Microsoft OS takes too long to boot up, and it slows down even the newest hardware. It is too prone to viral attacks and too complicated.

Exactly how Google plans to solve these problems is still something of a mystery. Technical details aren't available. Google has said so little about the innards of its forthcoming OS that it qualifies as "a textbook example of vaporware," wrote John Gruber on his blog Daring Fireball. Information is scarce about even such basic things as whether it will have a new user interface or rely on an existing open-source one, and whether it will support the drivers that make printers and other peripherals routinely work with Windows PCs.

The mere announcement of Chrome already threatens Microsoft, however. The imminence of Google's entry into the market--following the delivery of its Android OS for mobile phones--gives Microsoft's corporate customers a reason to ask for lower prices. After all, Google's OS will be free, and the buyers of Windows are chiefly PC makers, whose profit margins are already ultra-slim.

"It's all upside for Google and no downside," says Mitchell Kapor, a software investor and the founder of Lotus, a pioneer supplier of PC applications that was bloodied by Microsoft in the 1990s.

Legacy Code
Fifteen years ago, I wrote a book on the making of Windows NT--still the foundation of Microsoft's OS family. At the time, I wrongly concluded that developing the dominant operating system was proof of technological power, akin to building the greatest fleet of battleships in the early 20th century, or the pyramids long ago. Windows NT required hundreds of engineers, tens of millions of development dollars, and a huge marketing effort. By the mid-1990s, Microsoft was emphasizing features over function, complexity over simplicity.

In doing so, Microsoft and its cofounder, Bill Gates, seemed to be fulfilling the company's historical destiny. The operating system as a technological showpiece goes back to OS/360, a program designed by IBM that was immortalized in The Mythical Man-Month, a book by the engineer Frederick Brooks. The historian Thomas Haigh explains, "That was a huge scaling up of ambition of what the OS was for."


IBM's 360 mainframe was the first computer to gain widespread acceptance in business, and the popularity of the machine, first sold in 1965, depended as much on its software as its hardware. When IBM used Microsoft's DOS as the operating system for its first PC, introduced in 1981, it was the first time Big Blue had gone outside its own walls for a central piece of code. Soon, technologists (including, belatedly, IBM) realized that control of the OS had given Microsoft control of the PC. IBM tried and failed to regain that control with a program called OS/2. But Microsoft triumphed with Windows in the 1990s--and became the most profitable company on earth, turning Gates into the world's richest person. Thus, the OS came to be viewed as the ultimate technological product, a platform seemingly protean enough to incorporate and control every future software innovation and at the same time robust enough to drag outdated PC machines and programs into the present.

It couldn't last. The main reason why control of the OS no longer guarantees technological power, of course, is the ascent of the Internet. Gates made few references to the Internet in the first edition of his book The Road Ahead, published in November 1995. Neither Windows NT nor its mass-market incarnation, Windows 95, was intimately connected to the Web. With the spread of Netscape's browser, though, Gates began to realize that the individual PC and its operating system would have to coöperate with the public information network. By bringing a browser into the OS and thus giving it away, Microsoft recovered its momentum (and killed off a new generation of competitors). Then, preoccupied once again with control of the OS, Microsoft missed the sudden, spectacular rise of search engines. When Google's popularity persisted, Microsoft was unable to do with the search engine what it had done with the browser.

In one sense, this failure to adapt to a networked world reflected the integrity of Gates's vision of the PC as a tool of individual empowerment. In the mid-1970s, when the news of the first inexpensive microprocessor-based computers reached Gates at Harvard, he instantly understood the implications. Until then, computers had been instruments of organizations and agents of bureaucratization. The PC brought about a revolution, offering the little guy a chance to harness computing power for his personal ends.

Technology is now moving away from the individualistic and toward the communal--toward the "cloud" (see our Briefing on cloud computing, July/August 2009). Ray Ozzie, Microsoft's chief software architect, who has been the most influential engineer at the company since Gates retired from executive management, describes the process under way as a return to the computing experience of his youth, in the 1970s, when folks shared time on computers and the network reigned supreme. Cloud technologies "have happened before," he said in June. "In essence, this pendulum is swinging." Similarly, Schmidt recalls how, in the early 1980s, Sun Microsystems' OS was developed for a computer that lacked local storage.

The return to the network has big implications for the business of operating systems. Computer networks used to be closed, private: in the 1960s and '70s they revolved around IBM mainframe operating systems and, later, linked Windows machines on desktops and in back rooms. Today's computer networks are more like public utilities, akin to the electricity and telephone systems. The operating system is less important. Why does Google want to build one?

Successful operating-system designs continue to pay off big, though increasingly in cases where the system is well integrated with hardware. Apple's experience is illustrative. For years, people advised Steve Jobs, Apple's cofounder and chief, to decouple the Mac OS from the company's hardware. Jobs never did. Indeed, he moved in the opposite direction. With the iPod and then the iPhone, he built new operating systems ever more integrated with hardware--and these products have been even more successful than the Macintosh. "For Apple, software is a means to an end," says Jean-Louis Gassée, who once served as the company's chief of product development and who has since founded his own OS and hardware company, Be. "They write a good OS so they can have nice margins on their aluminum laptop."

The effort to create a good OS carries risks. The biggest one for Google is that expectations will outstrip results. Even though the company plans to use a number of freely available pieces of computer code--most notably the Linux "kernel," which delivers basic instructions to hardware--its new system can't be assembled, like a Lego plaything, out of existing pieces. Some pieces don't exist, and some existing ones are deficient. There is the real chance that Google might tarnish its reputation with an OS that disappoints.

Then there is the risk that cloud computing won't deliver on its promise. Privacy breaches could spoil the dream of cheap and easy access to personal data anywhere, anytime. And applications that demand efficient performance may founder if they are drawn from the cloud alone, especially if broadband speeds fail to improve. These unknowns all present substantial threats.

Magic Blends
David Gelernter, a computer scientist at Yale University, has described the chief goal of the personal-computer OS as providing a " 'documentary history' of your life." Information technology, he argues, must answer the question "Where's my stuff?" That stuff includes not only words but also photos, videos, and music.

For a variety of good reasons--technical, social, and economic--the cloud will probably never store and deliver enough of that "stuff" to render the OS completely irrelevant. You and I will always want to store and process some information on our local systems. Therefore, the next normal in operating systems will probably be a hybrid system--a "magic" blend, to quote Adobe's chief technology officer, Kevin Lynch. Predicting just how Microsoft and Google will pursue the magic blend isn't possible. "We hope we are in the process of a redefinition of the OS," Eric Schmidt told me in an e-mail. But one thing is certain: the new competition in operating systems benefits computer users. Microsoft will do more to make Windows friendlier to the new networked reality. No longer a monopoly, the company will adapt or die. It's worth remembering that in the 1970s, AT&T, then the most powerful force in the information economy, "made a set of decisions that doomed it to slow-motion extinction," says Louis Galambos, a historian of business and economics at Johns Hopkins. "Microsoft is not immune to 'creative destruction.' "

Neither is Google. To completely ignore operating systems in favor of the cloud might be an efficient route to failure. And there is much to admire in the very attempt to create a new one. For Brin and Page, it is as much an aesthetic and ethical act as it is an engineering feat.

G. Pascal Zachary wrote Showstopper on the making of Windows NT.


http://www.technologyreview.com/web/23140/

Supercomputer Visuals Without Graphics Chips

Computer scientists are visualizing the world's most gigantic datasets without graphics clusters.

By Christopher Mims


Before specialized graphics-processing chips existed, pioneers in the field of visualization used multicore supercomputers to render data in three dimensions. Today, however, the speed at which supercomputers can process data is rapidly outstripping the speed at which they can move that data in and out, so shipping results to a separate graphics-processing cluster for rendering is becoming the bottleneck. Such clusters are becoming obsolete.

Core collapse: This image--step 1492 of a simulation of a core-collapse supernova--was generated on Argonne National Laboratory's supercomputer, Intrepid, without the use of a graphics cluster.
Credit: Argonne National Laboratory

Researchers at Argonne National Laboratory and elsewhere are working on a solution. Rather than moving massive datasets to a specialized graphics-processing cluster for rendering, which is how things are done now, they are writing software that allows the thousands of processors in a supercomputer to do the visualization themselves.

Tom Peterka and Rob Ross, computer scientists at Argonne National Laboratory, and Hongfeng Yu and Kwan-Liu Ma of the University of California at Davis, have written software for Intrepid, an IBM Blue Gene/P supercomputer, that bypasses the graphics-processing cluster entirely. "It allows us to [visualize experiments] in a place that's closer to where data reside--on the same machine," says Peterka. His team's solution obviates the need to take the time-consuming step of moving the data from where it was generated to a secondary computer cluster.

Peterka's test data, obtained from John Blondin of North Carolina State University and Anthony Mezzacappa of Oak Ridge National Laboratory, represent 30 sequential steps in the simulated explosive death of a star, and are typical of the sort of information a supercomputer like Argonne's might tackle. Peterka's largest test with the data maxed out at a three-dimensional resolution of 89 billion voxels (three-dimensional pixels) and resulted in two-dimensional images 4,096 pixels on a side. Processing the data required 32,768 of Intrepid's 163,840 cores. Two-dimensional images were generated with a parallel volume-rendering algorithm, a classic approach to creating a two-dimensional snapshot of a three-dimensional dataset.
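
To get a sense of the scale, a quick calculation shows how the volume divides across the cores; the bytes-per-voxel figure is an assumption on our part, since the article doesn't give one.

```python
# Rough scale of Peterka's largest test. The 4-bytes-per-voxel figure is assumed
# (one 32-bit value per voxel), not reported in the article.

voxels = 89_000_000_000        # ~89 billion voxels in the 3-D volume
cores = 32_768                 # Intrepid cores used for the rendering run
bytes_per_voxel = 4            # assumption: a single 32-bit value per voxel

voxels_per_core = voxels / cores
volume_bytes = voxels * bytes_per_voxel

print(f"{voxels_per_core:,.0f} voxels per core")        # ~2.7 million
print(f"{volume_bytes / 1e12:.2f} TB per time step")    # ~0.36 TB
```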

Normally, visualization and post-processing of data generated by Intrepid, which, at 557 teraflops, is the world's seventh-fastest supercomputer, requires a separate graphics-processing cluster known as Eureka. (A teraflop is the equivalent of a trillion calculations per second.) Built from NVIDIA Quadro Plex S4 GPUs (graphics-processing units), Eureka runs at 111 teraflops. More-powerful supercomputers, in the petaflop range, present even bigger challenges.

"The bigger we go, the more the problem is bounded by [input/output speeds]," says Peterka. Merely writing to disk the amount of data produced by a simulation run on a petaflop supercomputer could take an unreasonable amount of time. The reason is simple: from one generation of supercomputer to the next, storage capacity and storage bandwidth aren't increasing as quickly as processing speed.

This disparity means that future supercomputing centers simply might not be able to afford separate graphics-processing units. "At petascale, [separate graphics-processing units] are less cost-effective," says Hank Childs, a computer systems engineer and visualization expert at Lawrence Berkeley National Laboratory. Childs points out that a dedicated visualization cluster, like the one for Argonne's Intrepid supercomputer, often costs around $1 million, but in the future that cost might increase by a factor of 20.

Pat McCormick, who works on visualization on the world's fastest supercomputer, the AMD Opteron and IBM Cell-powered "Roadrunner" at Los Alamos National Laboratory, says that Peterka's work on direct visualization of data is critical because "these machines are getting so big that you really don't have a choice." Existing, GPU-based methods of visualization will continue to be appropriate only for certain kinds of simulations, McCormick says.

"If you're going to consume an entire supercomputer with calculations, I don't think you have a choice," says McCormick. "If you're running at that scale, you'll have to do the work in place, because it would take forever to move it out, and where else will you be able to process that much data?"

Peterka, McCormick, and Childs envision a future in which supercomputers perform what's known as in-situ processing, in which simulations are visualized as they're running, rather than after the fact.

"The idea behind in-situ processing is you bypass I/O altogether," says Childs. "You never write anything to disk. You take visualization routines and link them directly to simulation code and output an image as it happens."

This approach is not without its pitfalls, however. For one thing, it would take a whole second or more to render each image, precluding the possibility of interacting with three-dimensional models in a natural fashion. Another pitfall is the fact that interacting with data in this way burns up cycles on the world's most expensive machines.

"Supercomputers are incredibly valuable resources," notes Childs. "That someone would do a simulation and then interact with the data for an hour--that's a very expensive resource to hold hostage for an hour."

As desktop computers follow supercomputers and GPUs into the world of multiple cores and massively parallel processing, Peterka speculates that there could be a trend away from processors specialized for particular functions. Already, AMD supports OpenCL, an open standard that makes it possible to run code designed for a GPU on any x86 chip--and vice versa.
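
As a rough illustration of that portability, here is a minimal sketch using the pyopencl bindings (our choice of wrapper; the article doesn't mention any particular toolkit). The same kernel source is compiled for whichever OpenCL device the context selects, CPU or GPU.

```python
# Minimal OpenCL portability sketch via pyopencl (illustrative, not from the article):
# one kernel source, compiled at runtime for a CPU or a GPU.

import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void scale(__global const float *src, __global float *dst, float factor) {
    int i = get_global_id(0);
    dst[i] = src[i] * factor;
}
"""

ctx = cl.create_some_context()        # picks an available device: CPU or GPU
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL_SRC).build()

data = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

program.scale(queue, data.shape, None, src_buf, dst_buf, np.float32(2.0))

result = np.empty_like(data)
cl.enqueue_copy(queue, result, dst_buf)
print(result[:4])                     # [0. 2. 4. 6.]
```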

Xavier Cavin, founder and CEO of Scalable Graphics, a company that designs software for the largest graphics-processing units used by businesses, points out that the very first parallel volume-rendering algorithm ran on the CPUs of a supercomputer. "After that, people started to use GPUs and GPU clusters to do the same thing," Cavin says. "And now it comes back to CPUs. It's come full circle."


http://www.technologyreview.com/computing/23139/