Friday, July 31, 2009

Smoothing the Way for Light

A technique makes smooth metal films for optical computing and imaging.

By Katherine Bourzac


Researchers at the University of Minnesota have developed a cheap way to repeatedly make very smooth nanopatterned thin films. The advance could have implications for making devices--such as more efficient solar cells, higher-resolution microscopes, and optical computers--that use light in an unconventional way.

Guiding light: Silver films patterned with structures like this pyramid guide light along their surface and concentrate it at the tips. This structure’s surface is very smooth, which prevents scattering.
Credit: Science/American Association for the Advancement of Science

Surface waves of light called plasmons can do things that ordinary light waves can't--squeezing into much smaller spaces for high-resolution imaging or miniaturized optical circuits, for example. These surface waves can be generated and controlled by shining light on thin, smooth, patterned metal films. But plasmons scatter easily, so the nanopatterned metal films that guide plasmons must be very smooth. And such smooth metal patterns are difficult to make.

"People have shown useful effects with plasmons, but the problem is doing it on a substrate you could cheaply and reproducibly make," says David Norris, professor of chemistry at the University of Minnesota. Up till now, researchers have been making plasmonic devices one at a time using techniques such as blasting out metal patterns using beams of high-energy ions or electrons. Because each of these devices is "handmade," says Norris, each is different, making standardization difficult. And while these methods are good for carving out nanoscale features in metal, they have the unintended consequence of making the surface rougher. As a result, harnessing plasmons has remained largely a laboratory curiosity and not a practical technology.

The way plasmons move through a metal film can be controlled by patterning the film. Plasmons travel along the surface of metal films just like a wave travels on the surface of a pond. Surface roughness in the metal is like a leaf on the pond's surface, causing the waves to scatter. Today in the journal Science, Norris's group describes a way of making very smooth metal patterns using silicon molds. These surfaces are incredibly smooth--if they were pond surfaces, the leaves would be only four-tenths of a nanometer thick.

The Minnesota researchers use the lithography techniques honed by the semiconductor industry for patterning silicon to make a very smooth mold, which they cover with a metal film. "The top surface of the metal is now rough, but the bottom surface in contact with the silicon is quite smooth," says Norris. He then covers the film with a strong adhesive and peels off the patterned metal so that the smooth side is now exposed. The silicon molds can be used again and again. The Minnesota researchers have used the technique to make bull's-eyes, arrays of bumps and pyramids, and long ridges.

Targeting light: This silver bull’s-eye was patterned using a new molding technique for making very smooth nanostructures. When this type of structure is illuminated, light travels along the ridges’ surfaces, which can be used to concentrate light for imaging.
Credit: Science/American Association for the Advancement of Science

There are many competing processes for making smooth films, says Nicholas Fang, assistant professor of mechanical science and engineering at the University of Illinois at Urbana-Champaign. Norris's method for smoothing metal surfaces is "quite unique," says Fang, and should prove useful for making plasmonic structures, particularly if the molds prove durable over the long term. However, surface roughness is only one source of problems in plasmonics, says Fang. What's needed now are methods for smoothing the edges of the features in these patterned metal films.

Harry Atwater, professor of applied physics and materials science at Caltech, agrees. "When you're making waveguides, the edges are just as important as the surfaces." Atwater is developing plasmonic concentrators for solar cells. Silicon solar cells are usually about 100 micrometers thick; thinner cells would be cheaper, but their performance suffers. Atwater has found that adding a patterned layer of metal that can interact with plasmons makes it possible to collect and concentrate light from wider angles and improves performance in thin silicon solar cells. Techniques for printing plasmonics for solar cells will have to be cheap and scalable because cost per unit area is such an important consideration for photovoltaics. Norris's technique is "a useful idea," says Atwater, but only time will tell whether it can work repeatedly over the large areas required for solar cells.

The future of plasmonics, says Atwater, will probably be in new materials besides metal. Metals like gold and silver, which have been used in plasmonics for about a decade, have an intrinsic electrical resistance that causes plasmons to scatter, no matter how smooth the surface and edges. The carbon nanomaterial graphene, which has a low resistance, might fit the bill. Atwater says scientists will also have to "pull out the metallurgy textbooks" to look for other materials.


http://www.technologyreview.com/computing/23097/



A New Approach to Fusion

A startup snags funding to start early work on a low-budget test reactor.

By Tyler Hamilton


General Fusion, a startup in Vancouver, Canada, says it can build a prototype fusion power plant within the next decade and do it for less than a billion dollars. So far, it has raised $13.5 million from public and private investors to help kick-start its ambitious effort.

Power pistons: General Fusion's reactor is a metal sphere with 220 pneumatic pistons designed to ram its surface simultaneously. The ramming creates an acoustic wave that travels through a lead-lithium liquid and eventually accelerates toward the center into a shock wave. The shock wave compresses a plasma target, called a spheromak, to trigger a fusion burst. The thermal energy is extracted with a heat exchanger and used to create steam for electricity generation. To produce power, the process would be repeated every second.
Credit: General Fusion

Unlike the $14 billion ITER project under way in France, General Fusion's approach doesn't rely on the expensive superconducting magnets used in tokamak reactors to contain the superheated plasma necessary to achieve and sustain a fusion reaction. Nor does the company require powerful lasers, such as those within the National Ignition Facility at Lawrence Livermore National Laboratory, to confine a plasma target and compress it to extreme temperatures until fusion occurs.

Instead, General Fusion says it can achieve "net gain"--that is, create a fusion reaction that gives off more energy than is needed to trigger it--using relatively low-tech, mechanical brute force and advanced digital control technologies that scientists could only dream of 30 years ago.

It may seem implausible, but some top U.S. fusion experts say General Fusion's approach, which is a variation on what the industry calls magnetized target fusion, is scientifically sound and could actually work. It's a long shot, they say, but well worth a try.

"I'm rooting for them," says Ken Fowler, professor emeritus of nuclear engineering and plasma physics at the University of California, Berkeley, and a leading authority on fusion-reactor designs. He's analyzed the approach and found no technical showstoppers. "Maybe these guys can do it. It's really luck of the draw."

The prototype reactor will be composed of a metal sphere about three meters in diameter containing a liquid mixture of lithium and lead. The liquid is spun to create a vortex inside the sphere that forms a vertical cavity in the middle. At this point, two donut-shaped plasma rings held together by self-generated magnetic fields, called spheromaks, are injected into the cavity from the top and bottom of the sphere and come together to create a target in the center. "Think about it as blowing smoke rings at each other," says Doug Richardson, chief executive of General Fusion.

On the outside of the metal sphere are 220 pneumatically controlled pistons, each programmed to simultaneously ram the surface of the sphere at 100 meters a second. The force of the pistons sends an acoustic wave through the lead-lithium mixture, and that accelerates into a shock wave as it reaches the plasma, which is made of the hydrogen isotopes deuterium and tritium.

If everything works as planned, the plasma will compress instantly and the isotopes will fuse into helium, releasing a burst of energy-packed neutrons that are captured by the lead-lithium liquid. The rapid heat buildup in the liquid will be extracted through a heat exchanger, with half used to create steam that spins a turbine for power generation, and the rest used to recharge the pistons for the next "shot."
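
As a rough illustration of that pulsed energy budget, the back-of-the-envelope sketch below assumes a per-shot thermal yield and a steam-turbine efficiency; both numbers are placeholders for illustration, not General Fusion figures.

```python
# Back-of-the-envelope sketch of the pulsed energy budget described above.
# All figures are assumptions for illustration, not General Fusion numbers.
thermal_yield_per_shot_mj = 600.0  # heat captured by the lead-lithium per pulse (assumed)
shots_per_second = 1.0             # the design goal: one compression "shot" per second
steam_fraction = 0.5               # half the heat drives the steam cycle,
                                   # the other half recharges the pistons
turbine_efficiency = 0.35          # typical steam-cycle efficiency (assumed)

thermal_power_mw = thermal_yield_per_shot_mj * shots_per_second  # MJ/s = MW
electrical_power_mw = thermal_power_mw * steam_fraction * turbine_efficiency
print(f"average electrical output: {electrical_power_mw:.0f} MW")  # ~105 MW with these assumptions
```

Numbers in that ballpark would be consistent with the 100-megawatt grid-capable reactor the company describes later in the article.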

The ultimate goal is to inject a new plasma target and fire the pistons every second, creating pulses of fusion reactions as part of a self-sustaining process. This contrasts with ITER, which aims to create a single fusion reaction that can sustain itself. "One of the big risks to the project is nobody has compressed spheromaks to fusion-relevant conditions before," says Richardson. "There's no reason why it won't work, but nobody has ever proven it."

He says it took longer than expected to raise the money for the prototype project, but the company can now start the first phase of building the test reactor, including the development of 3-D simulations and the technical verification of components. General Fusion aims to complete the reactor and demonstrate net gain within five years, assuming it can raise another $37 million.

If successful, the company believes it can build a grid-capable fusion reactor rated at 100 megawatts four years later, for about $500 million, beating ITER by about 20 years at a fraction of the cost.

"I usually pass up these quirky ideas that pass my way, but this one really fascinated me," says Fowler. He notes that there are immense challenges to overcome, but the culture of a private startup may be what it takes to tackle them with a sense of urgency. "In the big programs, especially the fusion ones, people have gotten beat up so much that they've become so risk averse."

General Fusion's basic approach isn't entirely new. It builds on work done during the 1980s by the U.S. Naval Research Laboratory, based on a concept called Linus. The problem was that scientists couldn't figure out a fast-enough way to compress the plasma before it lost its donut-shaped magnetic confinement, a window of opportunity measured in milliseconds. Just like smoke rings, the plasma rings maintain their shape only momentarily before dispersing.

Nuclear-research giant General Atomics later came up with the idea of rapidly compressing the plasma using a mechanical ramming process that creates acoustic waves. But the company never followed through--likely because the technology to precisely control the speed and simultaneous triggering of the compressed-air pistons simply didn't exist two decades ago.

Richardson says that high-speed digital processing is readily available today, and General Fusion's mission over the next two to four years is to prove it can do the job. Before building a fully functional reactor with 220 pistons on a metal sphere, the company will first verify that smaller rings of 24 pistons can be synchronized to strike an outer metal shell.

Glen Wurden, program manager of fusion energy sciences at Los Alamos National Laboratory and an expert on magnetized target fusion, says General Fusion has a challenging road ahead and many questions to answer definitively. Can they produce spheromaks with the right densities, temperature, and life span? Can they inject two spheromaks into opposite ends of the vortex cavity and make sure they collide and merge? Will the acoustic waves travel uniformly through the liquid metal?

"You can do a good amount of it through simulations, but not all of it," says Wurden. "This is all very complex, state-of-the-art work. The problem is you're dealing with different timescales and different effects on materials when they're exposed to shock waves."

Los Alamos and General Fusion are collaborating as part of a recently signed research agreement. But Richardson isn't planning on a smooth ride. "The project has many risks," he says, "and we expect most of it to not perform exactly as expected." However, if the company can pull off its test reactor, it hopes to attract enough attention to easily raise the $500 million for a demonstration power plant.

Says Fowler, "Miracles do happen."


http://www.technologyreview.com/business/23102/


A Better Way to Rank Expertise Online

New software distinguishes between experts and spammers, showing who can be trusted.

By Brittany Sauser


Websites where users can organize and share information are flourishing, but it can be hard to know which users and information to trust. Now a team of European researchers has developed an algorithm that ranks the expertise of users and can spot those who are using a site only to spam.

Credit: Technology Review

The technique works in a way similar to Amazon's reputation engine or the ratings of Wikipedia pages, but it evaluates users based on a new set of criteria that makes intuitive assumptions about experts.

The algorithm draws on a method applied in ranking Web pages, but takes it an interesting step further, says Jon Kleinberg, a professor of computer science at Cornell University in Ithaca, NY, who was not involved with the work. "It distinguishes between 'discoverers' and 'followers,'" Kleinberg says, "focusing on users who are the first to tag something that subsequently becomes popular."

The new work focuses on collaborative tagging systems such as Delicious, a social bookmarking website, and Flickr, a photo-sharing site. These sites let users add relevant keywords to "tag" Web links or photos and then share them. Normally, users are ranked by how frequently or how recently they add content to the system. "It's quantity over quality, so the more you do, the more credit you get," says Michael Noll, a professor of computer science at Hasso Plattner Institute in Potsdam, Germany, who led the research on the new software. "But the fact is [that] quantity does not imply quality."

The conventional approach also leaves the system very vulnerable to Web spammers, says Ciro Cattuto, a researcher at the Complex Network and Systems Group of the Institute for Scientific Interchange Foundation in Italy. Spammers adapt to the social behavior of other users, Cattuto says, so they see the most popular tags and start loading advertising content with those tags. To combat this, you need an algorithm that can search, rank, and present information in a usable way, says Cattuto. "The new method performs better than anything currently available--spammers rank very low, their content is not exposed, and eventually they stop polluting the system."

The new algorithm is called Spamming-resistant Expertise Analysis and Ranking (SPEAR) and is based on the well-known information-retrieval algorithm called HITS that is used by search engines like Google to rank Web pages. Like HITS, SPEAR is a method of "mutual reinforcement," says Kleinberg. In other words, the algorithm evaluates popular users and popular content and declares expert users to be the ones who identify the most important content, while important content is that which is identified by the most expert users. "The result is a way of identifying both expert users and high-quality content," he says.

To rate a person's level of expertise--as "good," "average," or "novice"--Noll's team integrated a second factor into their algorithm: temporal information. "The idea is that the early bird gets the worm," says Ching-man Au Yeung, a researcher in electronics and computer science at the University of Southampton in the U.K., who helped develop the algorithm. Those people who first discover content that subsequently receives a lot of tagging can be identified as trend setters in a community. "They are finding the usefulness of a document before others do," says Au Yeung, who compares their acquisition of influence to the way a knowledgeable academic builds a reputation.

In contrast, followers find useful content later and tag it because it is already popular. These are more likely to be spammers, "people who identify a topic that grows in importance and use it to point to their own stuff," says Scott Golder, formerly a research scientist at Hewlett Packard and currently a graduate student at Cornell. Golder adds that the SPEAR algorithm employs "a very smart set of criteria that has not been used before in computer science."
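
The combination of HITS-style mutual reinforcement with an "earliness" bonus for discoverers can be sketched in a few lines of Python. The tagging log and the credit function below are invented for illustration; the published SPEAR algorithm differs in its details.

```python
import numpy as np

# Toy tagging log: (user, document, timestamp). Early taggers of a document
# that later becomes popular should earn more expertise credit than latecomers.
tags = [
    ("alice", "doc1", 1), ("bob", "doc1", 5), ("carol", "doc1", 9),
    ("bob", "doc2", 2), ("carol", "doc2", 3),
    ("spammer", "doc1", 20), ("spammer", "doc2", 21), ("spammer", "doc3", 22),
]

users = sorted({u for u, _, _ in tags})
docs = sorted({d for _, d, _ in tags})
A = np.zeros((len(users), len(docs)))

# Earliness credit: 1 plus the number of people who tagged the same document later.
for u, d, t in tags:
    later = sum(1 for _, d2, t2 in tags if d2 == d and t2 > t)
    A[users.index(u), docs.index(d)] = 1.0 + later

# HITS-style mutual reinforcement: expert users discover good documents,
# and good documents are the ones tagged (early) by expert users.
expertise = np.ones(len(users))
quality = np.ones(len(docs))
for _ in range(50):
    quality = A.T @ expertise
    expertise = A @ quality
    quality /= np.linalg.norm(quality)
    expertise /= np.linalg.norm(expertise)

for u, score in sorted(zip(users, expertise), key=lambda p: -p[1]):
    print(f"{u:8s} {score:.3f}")
```

On this toy data the prolific but always-late "spammer" account lands at the bottom of the expertise ranking, which is the behavior the researchers report for real spammers on Delicious.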

The researchers tested their algorithm using data from Delicious, analyzing over 71,000 Web documents, 0.5 million users, and 2 million shared bookmarks. "We set the algorithm to find JavaScript experts, for example, and it produced a list of users; the top two were professional software developers," says Noll. "None of the spammers ranked in the top 200."

Noll says that the algorithm can be adjusted for any online community, including Twitter and music-sharing sites. The work was presented last week at the SIGIR Conference in Boston. Noll says that companies including Microsoft were interested in using the algorithm for social Web search, where documents are ranked based on users' bookmarks.

"I'd expect ... this combination of mutual reinforcement with the distinction between discoverers and followers to be useful in many domains," says Kleinberg.


http://www.technologyreview.com/web/23100/


Mining Social Networks for Clues

A researcher shows how programming tools can be used to track users' real-life movements and behavior.

By Erica Naone


The dangers of posting sensitive personal information on social-networking sites are well known, but a researcher has now revealed how data mining these sites can dig up undisclosed personal information.

Credit: Technology Review

On Wednesday, in a presentation at the Black Hat computer-security conference in Las Vegas, Nitesh Dhanjani detailed how the information posted on social websites like Facebook and Twitter can be mined to find out a person's whereabouts and activities.

Dhanjani showed data-collection programs that can be created using the programming tools released by such sites. For instance, he showed how to track the movements of politicians and celebrities using Twitter, by mining the service for relevant geographical information. Earlier this year, Republican congressman Pete Hoekstra was criticized for posting information on Twitter that revealed his location while traveling in Iraq.

Dhanjani also showed how to work out what software a person uses to post to Twitter; this information could help an attacker hack into that person's account, he said.

Sensitive business information can also be revealed by mining social-network connections, Dhanjani said. For example, if there's a rumor that two companies are in talks for a merger, an interested party could watch the business-networking site LinkedIn for connections between company employees. If a higher-than-average number of connections start forming, this might help to confirm the rumors.

With some social sites, a snooper needs to befriend someone in order to view her personal connections. But last year, two computer-security consultants--Nathan Hamiel of Hexagon Security Group and Shawn Moyer of Agura Digital Security--showed how this can be done by finding a friend of the target who doesn't yet have a profile and creating a fake one. At that point, the target's friends will often initiate a social connection themselves.

"The more powerful you are, the more the secrecy of your address book is important," Dhanjani said, since an attacker can build up significant information about a target just by gaining access to the network.

However, Dhanjani also demonstrated more positive uses of social-network mining. He showed a tool that can filter posts on Twitter by geographic area and search for particular keywords, such as "fire" or "smoke," to provide earlier warnings for emergency responders.
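
A minimal sketch of that kind of filter, written against already-collected posts rather than a live Twitter feed, might look like the following; the bounding box, keywords, and sample posts are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    text: str
    lat: float
    lon: float

KEYWORDS = {"fire", "smoke"}
# Rough bounding box around a hypothetical area of interest (Las Vegas here).
LAT_MIN, LAT_MAX = 35.9, 36.4
LON_MIN, LON_MAX = -115.4, -114.9

def alerts(posts):
    """Yield posts inside the bounding box whose text mentions a keyword."""
    for p in posts:
        in_area = LAT_MIN <= p.lat <= LAT_MAX and LON_MIN <= p.lon <= LON_MAX
        mentions = any(k in p.text.lower() for k in KEYWORDS)
        if in_area and mentions:
            yield p

posts = [Post("a", "Smoke rising near the Strip", 36.1, -115.15),
         Post("b", "Great dinner tonight", 36.1, -115.15),
         Post("c", "Fire downtown", 40.7, -74.0)]

for p in alerts(posts):
    print(p.user, p.text)   # only the in-area post mentioning "smoke" is flagged
```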

Dhanjani suggested that social networks could also assist with criminal investigations. Today, investigators talk with friends and associates of known criminals to identify accomplices. The connections on social sites could help reveal which people are closest to a target, he said.

Previously, Dhanjani identified a likely credit-card criminal by watching his behavior across a number of social-networking sites. Dhanjani was able to connect the criminal's profile to a suspected true identity, partly through an analysis of his postings.

Other researchers agree that social-networking sites reveal far more than users often intend. "Legitimate software now does everything malware used to do that we freaked out about," says Hamiel of Hexagon Security.

Dhanjani noted that social networks have both positive and negative qualities. "I think social media is beautiful, and I use it, too," he said. As a security researcher, he tries to be careful about what he reveals when he uses Twitter, but he's discovered that it isn't possible to fully protect his privacy. "Social media is like a cocktail party," he said. "In order to get something out of it, you have to give something up."


http://www.technologyreview.com/computing/23101/

Thursday, July 30, 2009

Search Spammers Hacking More Websites

The head of Google's Web-spam-fighting team warns that spammers are increasingly attacking websites.

By Kristina Grifantini


The head of Google's Web-spam-fighting team, Matt Cutts, warned last week that spammers are increasingly hacking poorly secured websites in order to "game" search-engine results. At a conference on information retrieval, held in Boston, Cutts also discussed how Google deals with the growing problem of search spam.

Credit: Technology Review

Search spammers try to gain unfair prominence for their Web pages in search results, thereby making money from the products that these sites offer or from advertising posted on them. The practice, also known as "spamdexing," exploits the way search engines' algorithms figure out how to rank different pages for a particular search query. Google's PageRank algorithm, for instance, in part gives prominence to pages that many other pages on the Web link to. Spammers can exploit this by adding links to their sites on message boards and forums and by creating fake Web pages filled with these links. Garth Bruen, creator of the Knujon software that keeps track of reported search spam, says that some campaigns involve creating up to 10,000 unique domain names.
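
A toy run of the textbook PageRank iteration shows why injected links matter. This is the generic algorithm, not Google's production ranking, and the four-page link graph is invented.

```python
import numpy as np

pages = ["news", "blog", "forum", "spam"]
links = {                        # who links to whom
    "news":  ["blog"],
    "blog":  ["news", "forum"],
    "forum": ["news", "spam"],   # a spam comment injected into the forum
    "spam":  ["forum"],
}

n = len(pages)
idx = {p: i for i, p in enumerate(pages)}
M = np.zeros((n, n))             # column-stochastic link matrix
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d = 0.85                         # standard damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):             # power iteration
    rank = (1 - d) / n + d * M @ rank

for p, r in sorted(zip(pages, rank), key=lambda x: -x[1]):
    print(f"{p:6s} {r:.3f}")     # with no inbound links, spam would score only (1-d)/n ~ 0.04
```

Here a single injected link roughly quadruples the spam page's score over the baseline it would get with no inbound links, which is why spammers scatter links across forums and fake pages at scale.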

"We're getting better at spotting spammy pages," said Cutts after his talk, adding that spammers are increasingly hacking legitimate websites and filling their pages with spam links or redirecting users to other sites.

"As operating systems become more secure and users become savvier in protecting their home machines, I would expect the hacking to shift to poorly secured Web servers," said Cutts. He expects "that trend to continue until webmasters and website owners take precautions to secure Web-server software as well."

"I've talked to some spammers who have large databases of websites with security holes," Cutts said. "You definitely see more Web pages getting linked from hacked sites these days. The trend has been going on for at least a year or so, and I do believe we'll see more of this."

Bruen agrees. "We've seen an increase in spam e-mail and spam domains that not only sell illicit products, but that attempt to download malware and infect the visitor's PC," he says. Such malware could use an unknowing victim's computer to send out e-mail spam.

"It really is an arms race," says Daniel Tunkelang, one of the conference organizers and the chief scientist at search company Endeca.

To prevent such attacks, Cutts recommended that anyone running her own website regularly patch the Web server and any software running on it. "In the same way that you wouldn't browse the Web with an unpatched copy of Internet Explorer, you shouldn't run a website with an unpatched or old version of WordPress, cPanel, Joomla, or Drupal," said Cutts. He also suggested that website owners consider handing management of their Web software to a third party: "Using a cloud-based service where the server software is managed by someone else can often be more secure," he said.

During his talk, Cutts also explained that Google's efforts to identify dubious Web sites now include parsing the JavaScript code that underlies pages. Code may contain hidden instructions that record users' data, for example.

"It wasn't obvious to me that Google can do this," says Endeca's Tunkelang. "And apparently some spammers were saying that Google can't do that."

Cutts noted that spammers and hackers are also finding new ways to spam, with the rise of social networking sites like Facebook and Twitter. These sites "bring identity into the equation, but don't really have checks to verify that a profile or person sending you a message is who you think they are," said Cutts.

"Authentication [across the Web] would be really nice," says Tunkelang. "The anonymity of the Internet, as valuable as it is, is also the source of many of these ills." Having to register an e-mail before you can comment on a blog is a step in this direction, he says, as is Twitter's recent addition of a "verified" label next to profiles it has authenticated.

Danah Boyd, a Microsoft Research scholar who studies social media, suggests that spammers take advantage of the fact that people don't always adhere to the rules on social-networking sites--for example, they sometimes provide fake information about themselves. "The variability of average users is precisely what spammers rely on when trying to trick the system," says Boyd. "All users are repurposing systems to meet their needs, and the game of the spammer keeps changing. That makes the work that Matt does very hard but also very interesting."


http://www.technologyreview.com/web/23095/


Crowdsourcing Closer Government Scrutiny

Web volunteers are helping to make the U.S. government more accountable.

By David Talbot


A new Web-based effort promises to track the sources of congressional earmarks, compile databases of the Twitter posts of state lawmakers, and add sharper perspective to the Obama administration's open-government efforts.

Government watchdogs: With an online tool developed by Sunlight Labs, Web-based volunteers can comb through earmark requests.
Credit: Sunlight Labs


"Government puts out a ton of data that is really interesting about what it does, but people can't understand it," says Clay Johnson, director of Sunlight Labs, an arm of the open-government group Sunlight Foundation, based in Washington, DC.

The foundation has already tapped open-source developers to help process the often-fragmented and cryptic data released by the government. "We are doing anything we can to celebrate the opening of this data--and making it so it's useful," says Johnson.

Now the group is raising an army of Web-based volunteers to go through all the information contained in those releases. Congressional earmarks--funds for projects inserted anonymously as line items in various bills, without any hearings or reviews--are a big initial focus. In 2004, members of Congress wrote more than 14,000 earmarks costing more than $50 billion. Technically, it's already possible to find the sources of earmarks, but this involves going through all 535 congressional websites and reading PDFs of the earmark requests posted.

The new Sunlight Labs transparency corps invites users to log in and join the effort to analyze this information collaboratively. Users are presented with the PDFs and prompted to read them carefully and then enter the pertinent information--the date and dollar amount of a request, name of the requester, description of the project, and so on--into fields on the screen. These then become part of a searchable database.
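
The records volunteers transcribe from the earmark PDFs map naturally onto a small searchable table. The sketch below uses SQLite with invented field names and data purely to illustrate the idea; it is not Sunlight Labs' actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE earmarks (
    requester    TEXT,    -- member of Congress who made the request
    request_date TEXT,
    amount_usd   INTEGER,
    description  TEXT)""")

# One transcribed request (made-up example data).
conn.execute("INSERT INTO earmarks VALUES (?, ?, ?, ?)",
             ("Rep. Jane Doe", "2009-03-12", 2500000,
              "Bridge rehabilitation, Springfield"))
conn.commit()

# Once the fields are in a database, simple keyword searches become possible.
for row in conn.execute(
        "SELECT requester, amount_usd FROM earmarks WHERE description LIKE ?",
        ("%bridge%",)):
    print(row)
```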

Another corps project aims to track the Twitter statements of all state lawmakers. Volunteers who log on are asked to seek out and enter the Twitter addresses of state senators and representatives; Sunlight Labs will seek verification from three or four people that the address is correct, and then start recording the lawmakers' tweets (the short messages they send through Twitter).

The project is in its infancy, but with this data in hand, it will later become possible to search the tweets for the most popular words that each lawmaker uses, and to search their statements by topic and date. It may even be possible to compare their tweets with statements made in other contexts, such as officially recorded speeches.

The project's launch roughly coincided with the launch earlier this month of a White House effort to chart the progress of information-technology (IT) projects in various federal agencies. The new IT Dashboard, an online tool accessible to the public, allows users to see which IT projects are under way, check their status, and provide feedback to the chief information officers at different federal agencies.

The tool revealed particularly startling delays at the Department of Veterans Affairs, so the White House halted 45 over-budget or behind-schedule projects for review. "We were able to catch these contracts, in part, thanks to our new tool," Vivek Kundra, the White House chief information officer, wrote in his blog earlier this month.

Keeping track: This month the White House launched its new IT Dashboard, which provides information about the status of IT-related contracts in federal agencies. The tool includes analysis of whether contracts are on schedule or incurring cost overruns.
Credit: it.usaspending.gov


"The dashboard may be just the tip of an iceberg that will herald a new-age transparency regarding federal spending," says Andrew Rasiej, founder of the Personal Democracy Forum, a website that covers politics and technology. "Once people get used to this type of information being so readily accessible, they will demand to see [it] for all other federal spending too, and then the genie will be completely out of the bottle."

But the IT Dashboard also shows the limitations of the government's own open-government efforts, says Johnson. It helps users find the primary recipients of funding, but not subcontractors. Furthermore, it's not easy to discern the origins of contracts or their geographic distribution, and it's almost impossible to see how they are connected to elected officials. "The IT Dashboard is a tool for government to audit itself, but it isn't a particularly good tool for citizens to look at," Johnson says.

Johnson says the transparency corps could be mobilized to fill that gap. He notes that the dashboard is based on government forms that track the progress of government contracts and the milestones reached. These raw forms, which are available through the site, could be a gold mine for further work, he says.

For example, it's possible to extract the names of all contractors and subcontractors from these forms and plot their locations geographically, to see if they happen to reside in a particular congressional district. It's also possible to trace contractors' contributions to lawmakers, by identifying the company board members from Securities and Exchange Commission filings, and then cross-referencing their names to Federal Election Commission records of campaign donors. Thanks to the dashboard's own analyses, it may also be possible to highlight which low-performing companies are most closely tied to which politicians.
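
The kind of cross-referencing Johnson describes is essentially a series of joins across data sets. The sketch below uses made-up records standing in for the dashboard's contractor list, SEC board-member filings, and FEC donation data.

```python
# firm -> congressional district where the contractor is located (made-up data)
contractors = {"Acme Systems": "VA-11", "Widget Corp": "TX-07"}

# (firm, board member), as might be extracted from SEC filings
board_members = [("Acme Systems", "J. Smith"), ("Widget Corp", "L. Chen")]

# (donor, lawmaker, amount), as might come from FEC campaign-finance records
donations = [("J. Smith", "Rep. A", 2400),
             ("L. Chen", "Rep. B", 1000),
             ("J. Smith", "Rep. B", 500)]

firm_of = {person: firm for firm, person in board_members}

for donor, lawmaker, amount in donations:
    firm = firm_of.get(donor)
    if firm:
        print(f"{lawmaker} received ${amount} from {donor} "
              f"({firm}, district {contractors[firm]})")
```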

"The IT Dashboard is just one way of looking at the data," says Raseij, "and shows the government is trying to partner with the public in a transparent way. It's up to groups like the Sunlight Foundation and others to take the government's lead and make even more sense out of the available information and data for the public good."

Meanwhile, if you happen to hold public office, be careful what you ask for, and watch what you tweet.


http://www.technologyreview.com/web/23096/

Wednesday, July 29, 2009

How to Land Safely Back on the Moon

A hazard-detection system promises safe landings for next-generation lunar explorers.

By Anne-Marie Corley


Engineers at the Charles Stark Draper Laboratory in Cambridge, MA, are developing a guidance, navigation, and control system for lunar landings that includes an onboard hazard-detection system able to spot craters, slopes, and rocks that could be dangerous to landing craft. In the Apollo missions of 40 years ago, astronauts steered the lander to a safe spot by looking out the window; the lander itself "had no eyes," says Eldon Hall, a retired Draper engineer and one of the original electronics designers for Apollo's navigation computer.

Back to the moon: Altair is NASA’s next-generation lunar lander, larger than the Apollo lander but with similar design features. It will carry four astronauts.
Credit: NASA


That meant there were some close calls with Apollo, says Tye Brady, the technical director for lunar landing at Draper, who demonstrated his team's automated-landing and hazard-avoidance technology at last week's celebration of the 40th anniversary of Apollo 11. "They were really close," Brady says, "and one- to two-meter craters are deadly. You don't see them till the last minute." Apollo 11 astronaut Neil Armstrong had to steer past a field of rocks that didn't show up on any recon photos beforehand, and Apollo 14 landed at a precarious tilt with one footpad resting about a meter away from a crater.

The new navigation and guidance system is being developed for NASA's Altair lunar lander, which is scheduled to land on the moon by 2020 as part of the Constellation program. The project is headed by NASA's Johnson Space Center, with support from other NASA research facilities in addition to Draper Laboratory. The Jet Propulsion Laboratory recently completed a field test of the sensors and mapping algorithms, and it plans to begin full systems tests in May 2010.

Brady says that even the best cameras available today, such as those on the orbiter now circling and photographing the moon, cannot resolve smaller holes or boulders at projected landing sites, even in smooth, well-lit areas--which aren't the targets for NASA's future landings. Altair is meant to be capable of landing at any site on the moon's surface, and the lunar terrain will vary. For that, Brady says, "you need real-time hazard detection" to adjust as you go.

Draper's system will use LIDAR laser technology to scan an area for hazards like craters or rocks before the lander touches down on the moon's surface. Raw data from LIDAR is processed and assembled into a 3-D map of the moon's surface, using algorithms developed by the Jet Propulsion Laboratory. One advantage of using LIDAR is that "it's the only type of sensor that measures the 3-D shape of what's on the ground at high resolution and from high altitude," says Andrew Johnson, the JPL lead for the hazard-detection system. That allows the system to build a terrain and elevation map of potential landing sites onboard the spacecraft, but from high enough up that there is time to respond to obstacles or craters at the landing site.

Landing in a pinch: Draper Laboratory’s simulated guidance, navigation, and control system prioritizes landing sites (areas 1, 2, 3, 4) in this representative display. Astronauts may designate a first-choice site or default to site number 1. Hazards such as boulders and craters are highlighted in red for real-time decisions about safe landing sites.
Credit: Draper Laboratory

Once the map is built, the system designates safe sites based on factors like the tilt angle of the surface, the distance and fuel cost to get to a site, the position of the lander's footpads, and the crew's margin for safe distance from hazards. Based on that information, the navigation system presents astronauts with a prioritized list of three to four safe landing sites. The astronauts can then designate any of the sites as first choice, or if they are incapacitated, the system will navigate the lander automatically to the first site on its list.
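
The site-prioritization step amounts to filtering out unsafe candidates and scoring the rest. The sketch below is a guess at how such a ranking might be structured; the weights, limits, and example sites are hypothetical, not Draper's actual criteria.

```python
from dataclasses import dataclass

MAX_TILT_DEG = 12.0        # hypothetical limit on surface tilt
MIN_HAZARD_MARGIN_M = 2.0  # hypothetical minimum clearance from rocks and craters

@dataclass
class Site:
    name: str
    tilt_deg: float         # slope of the terrain under the footpads
    fuel_cost_kg: float     # propellant needed to divert to this site
    hazard_margin_m: float  # distance to the nearest detected hazard

def rank_sites(candidates, top_n=4):
    """Return a prioritized list of safe sites (first entry = default choice)."""
    safe = [s for s in candidates
            if s.tilt_deg <= MAX_TILT_DEG and s.hazard_margin_m >= MIN_HAZARD_MARGIN_M]
    max_fuel = max(c.fuel_cost_kg for c in candidates)
    max_margin = max(c.hazard_margin_m for c in candidates)
    def score(s):           # lower is better: penalize tilt and fuel, reward margin
        return (0.4 * s.tilt_deg / MAX_TILT_DEG
                + 0.4 * s.fuel_cost_kg / max_fuel
                - 0.2 * s.hazard_margin_m / max_margin)
    return sorted(safe, key=score)[:top_n]

sites = [Site("A", 3.0, 40.0, 6.0), Site("B", 14.0, 10.0, 8.0),
         Site("C", 5.0, 25.0, 1.5), Site("D", 2.0, 60.0, 4.0)]
for priority, s in enumerate(rank_sites(sites), start=1):
    print(priority, s.name)   # B is rejected for tilt, C for hazard margin
```

If the crew is incapacitated, the lander would simply take the first entry in that list, as the article describes.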

The ability to land autonomously will enable both crewed and robotic missions to land safely, Brady says (while Apollo's lunar module had an automatic landing mode, it was never used). In addition to NASA's Altair, the system could be integrated into vehicles landing on near-Earth asteroids, Mars, and other planets, or used with other lunar vehicles built by private groups.

Another advantage of using LIDAR, Johnson says, is that it works under any lighting conditions. To deal with light at the moon's equator--where a "day" is equivalent to 14 Earth days, and a "night" lasts another 14--Apollo missions had to be timed exactly, with just one launch opportunity per month, so NASA could control the craft's exposure to light and heat. But because lighting conditions are more varied and extreme at the moon's poles, with patches of light and dark from the shadows of mountains and deep craters, it will be difficult for astronauts to see to navigate. LIDAR allows the craft to "land at night, or in shadowed regions, because the light is provided by the LIDAR sensor, not the sun," Johnson says. With real-time hazard detection, he says, the launch and landing limitations of Apollo won't apply to future missions.

The challenge for a landing system, says Brady, is getting everything to happen in about 120 seconds, including hazard-detection scans to get the data, human interaction for site approval, and then hazard-avoidance maneuvers and touchdown. His team has developed a simulator to create realistic image maps of the moon's surface, in addition to using computer code from NASA for the guidance and navigation portion of the system. So far, about 20 astronauts have sampled the Draper simulation. "They're good at going slow and easy, and they're very patient," Brady says. "They do a good job relying on the system." That's a long way from the early days when the Apollo astronauts "wanted to fly the whole thing themselves," Hall says.

The Draper team continues to develop high-fidelity models of LIDAR and terrain maps, while coordinating with NASA's crew office to determine the best way to display information for astronauts. They aim to have the technology ready by 2012.


http://www.technologyreview.com/computing/23085/

Adding Meaning to Millions of Numbers

Semantic technology could keep numbers tied to the information that explains what they mean.

By Erica Naone


Out of context, a number can be a dangerous thing. In 1999, for example, NASA's $125 million Mars Climate Orbiter was destroyed because one team of engineers used imperial units of measurement while others relied on metric ones. Consequently, the spacecraft approached the planet at a perilously low altitude and burned up in the Martian atmosphere.

Credit: Technology Review


Similar number-related mix-ups occur all the time back on Earth, albeit with slightly less catastrophic consequences. True Engineering Technology, a startup based in Cambridge, MA, has now developed semantic technology that adds meaning to numerical data to help prevent such miscommunications from occurring.

Today the company is launching a website called Numberspace that lets users upload pieces of numerical data--the distance from New York to London in kilometers, for example (5,581). Once it is semantically tagged, the information can be shared without losing its meaning. Customers can also pay for a business version that stores their information on a private server.

To store a number in the system--creating what the company calls a "truenumber"--a user simply types a short phrase into a form on the website. For the example cited above, a user might type, "The distance from New York to London in kilometers is 5,581." True Engineering Technology's software then interprets the phrase, extracting the number 5,581 and the meaning conveyed by key words and phrases such as "New York," "London," and "distance." Users can also add notes and comments to the truenumber, such as how it was calculated and whether there are queries concerning its accuracy.

Allen Razdow, CEO of True Engineering Technology, believes the technology will interest businesses that rely on accurate numbers for important decisions, particularly engineering firms. For example, workers at an automotive plant might need to access the most recent emissions figures stored from a range of different electronic documents. By storing these figures as truenumbers, any worker could search for them online and find the latest, most accurate figure. The system could also automatically perform conversions from one unit to another and flag any potential mismatches and errors. Users can also paste truenumbers onto websites or into Microsoft Office documents, keeping them linked to the contextual data stored on the Web server.

Razdow sees the technology as part of the movement toward the semantic Web, which aims to let computers store the meaning of pieces of information as well as the information itself. "I think there are natural, practical evolutions of how information gets represented that get to be more semantic by degrees," Razdow says. Numbers, he argues, are a good place to start because people follow certain rules when speaking about them. For example, it's easy to train the system to recognize which units represent distances, and to throw up error alerts if the user types that a distance is "5,581 seconds."
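
The parsing step can be approximated with a simple pattern match plus a unit check. This is a minimal sketch of the idea, not True Engineering Technology's actual parser, and the unit tables are deliberately tiny.

```python
import re

DISTANCE_UNITS = {"kilometers", "miles", "meters"}

PATTERN = re.compile(
    r"The (?P<quantity>\w+) (?P<subject>.+?) in (?P<unit>\w+) is (?P<value>[\d,.]+)",
    re.IGNORECASE,
)

def parse_truenumber(phrase):
    """Turn a phrase into a structured record, rejecting mismatched units."""
    m = PATTERN.match(phrase)
    if not m:
        raise ValueError("phrase not understood")
    record = m.groupdict()
    record["value"] = float(record["value"].replace(",", ""))
    if record["quantity"].lower() == "distance" and record["unit"].lower() not in DISTANCE_UNITS:
        raise ValueError(f"'{record['unit']}' is not a distance unit")
    return record

print(parse_truenumber("The distance from New York to London in kilometers is 5,581"))
# parse_truenumber("The distance from New York to London in seconds is 5,581")
# would raise: ValueError: 'seconds' is not a distance unit
```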

Numbers game: True Engineering Technology stores numerical information on a central server. The system can create visualizations, shown above, that illustrate how different figures relate to each other.
Credit: True Engineering Technology

"In the past, we thought of numbers as having an absolute value and an absolute measure of their authority, but of course that's not really ever the case," says Bruce Jenkins, an analyst at Ora Research in Cambridge, MA, who's been briefed about the technology.

"The pedigree and authority of numbers in a system will become so much more visible with this technology," Jenkins says, "and weaknesses, questions over the authority of numbers, will become much easier to see and correct."

Jim Hendler, a professor of computer science at Rensselaer Polytechnic Institute who studies the semantic Web, says that the approach is in line with how he expects semantic technologies to be commercialized. "The semantic Web is really an infrastructure technology that's used to enhance what we already do on the Web and to create new applications," he says.

Some observers have questioned whether users will take the time required to add contextual information to semantic systems and to keep them up to date. But Hendler says that if a system is well designed, adding contextual information can actually save users time in the long run. "It's a mistake to think that metadata makes things harder or causes extra work," he says.

Though Numberspace is being marketed primarily to engineering companies, Razdow says the website could also be used in any industry that deals heavily with numbers. Research firms, journalists, and academics, for example, might all benefit, he says.



http://www.technologyreview.com/web/23087/



A Better Way to Shoot Down Spam

Junk mail can now be identified based on a single packet of data.

By Rachel Kremen


New software developed at the Georgia Institute of Technology can identify spam before it hits the mail server. The system, known as SNARE (Spatio-temporal Network-level Automatic Reputation Engine), scores each incoming e-mail based on a variety of new criteria that can be gleaned from a single packet of data. The researchers involved say the automated system puts less of a strain on the network and minimizes the need for human intervention while achieving the same accuracy as traditional spam filters.

Credit: Technology Review


Separating spam from legitimate e-mail, also known as ham, isn't easy. That's partly because of the sheer volume of messages that need to be processed and partly because of e-mail expectations: users want their e-mail to arrive minutes, if not seconds, after it is sent. Analyzing the content of every e-mail might be a reliable method for identifying spam, but it takes too long, says Nick Feamster, an assistant professor at Georgia Tech who oversaw the SNARE research. Letting spam flow into our in-boxes unfiltered isn't a sensible option, either. According to a report released by the e-mail security firm MessageLabs, spam accounted for 90.4 percent of all e-mail sent in June.

"If you're not concerned about spam, I would suggest you turn off your spam filter for about an hour and see what happens," says Sven Krassen, senior director of data-mining research at McAfee. The Santa Clara, CA, company provided raw data for analysis by the Georgia Tech team.

The team analyzed 25 million e-mails collected by TrustedSource.org, an online service developed by McAfee to collate data on trends in spam and malware. Using this data, the Georgia Tech researchers discovered several characteristics that could be gleaned from a single packet of data and used to efficiently identify junk mail. For example, their research revealed that ham tends to come from computers that have a lot of channels, or ports, open for communication. Bots, automated systems that are often used to send out reams of spam, tend to keep open only the e-mail port, known as the Simple Mail Transfer Protocol port.

Furthermore, the researchers found that by plotting the geodesic distance between the Internet Protocol (IP) addresses of the sender and receiver--measured on the curved surface of the earth--they could determine whether the message was junk. (Much like every house has a street address, every computer on the Internet has an IP address, and that address can be mapped to a geographic area.) Spam, the researchers found, tends to travel farther than ham. Spammers also tend to have IP addresses that are numerically close to those of other spammers.
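
The geodesic distance the researchers describe is the standard great-circle distance between the two geolocated IP addresses, which the haversine formula gives directly. The coordinates below are arbitrary example values; a real system would look them up in an IP-geolocation database.

```python
from math import radians, sin, cos, asin, sqrt

def geodesic_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth (haversine formula)."""
    r = 6371.0  # mean Earth radius in kilometers
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Example: a sender geolocated to Atlanta and a receiver geolocated to Moscow.
print(round(geodesic_km(33.75, -84.39, 55.76, 37.62)), "km")  # roughly 9,000 km
```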

Dean Malmgren, a PhD candidate at Northwestern University whose research includes developing new methods for identifying spam, says he finds the work interesting. But he wonders how robust SNARE will be once its methodology is widely known. IP addresses, he notes, are easy to fake. So, if spammers got wind of how SNARE works, they might, for example, use a fake IP address close to the recipient's.

The Georgia Tech researchers also looked at the autonomous system (AS) number associated with an e-mail. (An AS number is assigned to every independently operated network, whether it's an Internet service provider or a campus network.) Knowing that a significant percentage of spam comes from a handful of autonomous systems, the researchers decided to integrate that characteristic into SNARE, too.

The end result was a system capable of detecting spam 70 percent of the time, with a 0.3 percent false positive rate. Feamster says that's comparable to existing spam filters but notes that when used in tandem with existing systems, the process should be far more efficient.

"Consider SNARE a first line of defense," says Shuang Hao, a PhD candidate in computer science at the Georgia Institute of Technology and a SNARE researcher. Each of the characteristics in the SNARE system contributes to the overall score of an e-mail. So far SNARE has been implemented only in a research environment, but if used in a corporate setting, the network administrator could set rules about what happens to e-mail based on its SNARE score. For example, e-mail that scores poorly could be dropped before it even hits the mail server. Hao says this can save considerable resources, as many companies have a policy that requires they retain a copy of every e-mail that hits the server, whether or not it's junk. Messages with mediocre scores could be further assessed by traditional content filters.

Hao is currently helping Yahoo improve its spam filter, based on what he's learned developing SNARE. He says that Cisco has also expressed interest in the work.

"It is fairly clever in the way that they combine a bunch of data that's cheap to use," says John Levine, president of the Coalition of Unsolicited Commercial Email and a senior technical advisor to the Messaging Anti-Abuse Working Group, a consortium of companies involved in fighting spam. "On the other hand, I think some of their conclusions are a bit too optimistic. Spammers are not dumb; any time you have a popular scheme [for identifying spam], they'll circumvent it."

The research team will present their work on SNARE at the Usenix Security Conference next month in Montreal. In the future, Feamster hopes to be able to apply their findings to other computer security problems, such as phishing e-mails, in which the sender pretends to be from a trusted institution to con recipients into divulging their passwords.

http://www.technologyreview.com/communications/23086/

Tuesday, July 28, 2009

Radiation Therapy for Moving Targets

Researchers have combined two devices for real-time tumor tracking and treatment.

By Katherine Bourzac


Normal tissue often gets caught in the crossfire during radiation therapy, damaged by the high-energy radiation beams used to kill tumor tissue--particularly when the patient's breathing causes the tumor to shift.

Odd couple: A prototype device combines a magnetic resonance imager with a linear accelerator, two technologies that ordinarily interfere with each other. The blue cylinders facing each other are the imaging magnets. The metal circle visible to the left at the back is a magnetic and radiation shield that protects the accelerator’s waveguide.
Credit: University of Alberta Cross Cancer Institute


To better track a tumor's position in real time and adjust the radiation accordingly, researchers at the University of Alberta in Canada have combined a linear accelerator with a magnetic resonance imager. Today in Anaheim, CA, at the annual meeting of the American Association of Physicists in Medicine, researchers will present evidence that a device that combines these technologies can accurately track and irradiate a moving target.

Radiation therapy uses high-energy x-rays from a medical linear accelerator to damage tumor tissue and treat nearly every type of cancer. In the United States, half of all patients with cancer receive this form of treatment, which typically requires 10 to 15 sessions lasting from about 15 to 30 minutes each. In order to make sure the entire tumor is irradiated, doctors have to irradiate a margin of healthy tissue around it, which leads to side effects including nausea, pain, and skin-tissue damage. In between sessions, the healthy tissue regenerates, but the tumor does not. One way to minimize the side effects is to lower the radiation dose and increase the number of sessions, sometimes to as many as 35.

"We would like to decrease the margins and increase the radiation dose, in order to control the tumor better without side effects," says Gino Fallone, director of the medical physics division at the University of Alberta department of oncology.

Another challenge is posed by tumor movement during treatment. Tumors in the lungs and the prostate especially may move by about two centimeters during treatment. Current radiotherapy deals with this challenge by combining the radiation source with a computed tomography (CT) scan. This helps doctors reduce damage to healthy tissue, but CT scans are not very good at showing soft tumor tissue, and they are too slow to track tumor movement in real time. Fallone's group has turned to magnetic resonance imaging (MRI), which provides crisp pictures of soft tissues such as tumors, in the hopes of doing better.

Until now, it hasn't been possible to use MRI to guide radiotherapy. This is because MRI machines and the linear accelerators that supply high-energy x-rays for radiotherapy interfere with each other. MRI uses a strong magnet and pulses of radio-frequency waves to excite and read a signal from protons in the water molecules inside soft tissues in the body. Medical linear accelerators also use radio-frequency pulses, in their case in order to accelerate electrons through a waveguide toward a metal target. When the electrons hit the target, high-energy x-rays come out the other side; these x-rays are then aimed at tumor tissue. If these two machines are in the same room, the magnetic field from the MRI interferes with the waveguide, preventing the electrons from being accelerated, and the radio-frequency pulses from the linear accelerator interfere with the imager's magnetic field, degrading picture quality.

To combine the technologies, the Alberta researchers had to reengineer both components. "The whole machine is designed differently," says Fallone. Special shielding is employed. And instead of using a high-strength magnetic field generated by superconducting-wire coils, as in clinical MRI, the machine uses a weak permanent magnet. The weak magnet interferes much less with the accelerator and is smaller and less expensive to operate. This December, Fallone's group published the results of imaging studies that showed it was possible to generate MRI images while running the linear accelerator without interference.

The weak magnet imposes a different challenge, however: the image quality is much lower. So researchers at Stanford University are working on computational methods for getting the necessary information from these lower-resolution images. "Diagnostic MRI requires a very high image quality, but for radiotherapy you don't need to see the tumor in exquisite detail," says Amit Sawant, an instructor in radiation oncology at the Stanford School of Medicine. "You can afford to lose [image] signal, and still get enough information to know when the tumor is moving." What's important to see during radiotherapy, says Fallone, are the edges of the tumor.

Fallone and Sawant will present initial results of image-tracking studies done with the prototype combined device at the conference in Anaheim. Sawant's group will describe imaging software that allows the machine to acquire five two-dimensional MRI images per second--much faster than conventional MRI. The Stanford researchers increased the imaging speed by decreasing the imaging area and using a technique called compressive sensing. When images are stored, about 90 percent of the data is thrown out; using compressive sensing, it's possible to acquire only the most important 10 percent of the image data in the first place.
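
Compressive sensing itself can be illustrated with a toy one-dimensional example: recover a sparse signal from far fewer random measurements than samples by iterative soft-thresholding. This generic sketch is not the Stanford group's imaging code, and every size and parameter in it is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                 # signal length, measurements (30%), nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # the compressed measurements we keep

# Iterative soft-thresholding (ISTA) to recover a sparse x from y = A x.
lam = 0.01
t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size <= 1 / (largest singular value)^2
x = np.zeros(n)
for _ in range(2000):
    z = x - t * (A.T @ (A @ x - y))        # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)  # sparsity-promoting shrinkage

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The same principle, applied to undersampled MRI data, is what lets the prototype get away with acquiring only a fraction of the usual image data per frame.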

Fallone will present results demonstrating that such real-time guidance can be used to redirect the prototype device's x-ray beam. "So far, only CT has been available for image guidance," says Bhadrasain Vikram, chief of the clinical radiation oncology branch of the National Cancer Institute's Radiation Research Program. "It's exciting that [MRI] is becoming available to start asking whether it can provide more accurate information." Better guidance for radiotherapy, says Vikram, might speed up the treatments or even "cure some cancers you can't cure today."

But before the system can be tested on patients, the researchers caution that the image-acquisition process needs to be sped up even more, so that it's possible to make 3-D images. The device will also need to be tested on animals. Fallone estimates that human tests are at least five years away.


http://www.technologyreview.com/biomedicine/23078/

Cheaper Solar Thermal Power

A simpler design could reduce the cost of solar power generated by concentrating sunlight on Stirling engines.

By Kevin Bullis


Stirling Energy Systems (SES), based in Phoenix, has decreased the complexity and cost of its technology for converting the heat in sunlight into electricity, allowing for high-volume production. It will begin building very large solar-power plants using its equipment as soon as next year.

Sun catchers: This is the latest design of a system for focusing sunlight on a Stirling engine to generate electricity.
Credit: Sandia National Laboratories/Randy Montoya


The company is currently building a 1.5-megawatt, 60-unit demonstration plant that will use the company's latest design. Stirling expects to finish that project by the end of the year. It also has contracts with two California utilities to supply a total of 800 megawatts of solar power in Southern California. The first of the plants that will supply this power could be built starting the middle of next year, pending government permits and loan guarantees from the U.S. Department of Energy (DOE).

The projects are part of a resurgence in what's known as solar thermal power. Various solar thermal technologies were developed starting in the 1970s, but a breakdown in government funding and incentives caused them to stall before they reached a scale of production large enough to drive down costs and allow them to compete with conventional sources of electricity. "It was a classic problem with solar. The market support to bring solar to high volume wasn't there," says Ian Simington, the chairman of SES and chief executive of the solar division of NTR, a company based in Dublin, Ireland, that bought a controlling share of SES last year.

Recent state mandates and incentives for renewable energy have led to a new push to commercialize the technology. There are over six gigawatts of concentrated solar power under contract in the southwestern United States right now, says Thomas Mancini, program manager for concentrated-solar-power technology at Sandia National Laboratories in Albuquerque, NM. That's equivalent to the output of about six nuclear-power plants. BrightSource Energy has contracts to provide 1.3 gigawatts of concentrated solar power, and Solar Millennium has announced a project that would generate nearly one gigawatt.

Stirling Energy Systems' technology uses mirrors arranged in a 12-meter-wide parabolic dish to concentrate sunlight onto a Stirling engine. The temperature difference between the hot and cool sides of the engine drives pistons, generating 25,000 watts of electricity per dish. The first phase of the company's large-scale projects will use 12,000 of these dishes to generate 300 megawatts of power. Simington expects electricity from the systems to cost between 12 and 15 cents per kilowatt-hour, higher than the cheapest sources of electricity--such as coal-fired power plants--but competitive in many markets, especially in the afternoon, when prices are highest.
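
The quoted figures are mutually consistent; a quick back-of-the-envelope check (a few lines of Python, using only the numbers cited above):

    watts_per_dish = 25_000               # 25 kilowatts per Stirling dish
    dishes = 12_000                       # first phase of the large-scale projects
    print(watts_per_dish * dishes / 1e6)  # -> 300.0 megawatts, as stated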

Earlier this month the company unveiled its production design. Compared with prototypes that have been tested for several years at Sandia National Laboratories, the new design cuts about two metric tons from the weight of each dish and reduces the number of mirrors in each from 80 to 40. The simplified design can be built in large quantities using equipment in existing automobile factories.

The company's design has certain advantages over other approaches to concentrated solar power. In other systems, heat is collected over a large area and used to drive turbines in a central facility. These turbines require large amounts of water for cooling, Mancini says, whereas the SES system uses a closed-radiator system that doesn't consume water. Water use is an important consideration for solar thermal technologies, Mancini adds, since they work best in areas with a lot of direct sunlight--that is, in deserts. (These concentrated-solar-power systems are quite different from solar water heaters used in homes.)

Another advantage of the SES system is its modularity. With other approaches, the entire solar collection and generation system has to be in place to start generating electricity. With the Stirling engine system, power can come online as the dishes are installed, and more generating capacity can easily be added by building more dishes, without any need to enlarge a central generating plant.

But the system also has a significant disadvantage. Other solar thermal power plants collect heat in a central place where it can easily be stored, making it possible to generate electricity when the sun isn't shining. "There's no obvious way to do this with the dishes," Mancini says.

Although there has been a resurgence in contracts for solar thermal power, obstacles remain before the plants can be built. The new projects could be stalled by slow action from the government: permits originally expected by the end of this year are now expected no sooner than next May. What's more, the current economy has made financing hard to come by, says Sean Gallagher, SES's vice president for market strategy and regulatory affairs. That has forced his company and others to rely on Department of Energy loan guarantees. But, Gallagher says, although the DOE has promised to speed up its process for issuing these, it has yet to publish even the rules for applying for the guarantees included in February's stimulus package.


http://www.technologyreview.com/energy/23079/


Sunday, July 26, 2009

Protein Treatment Repairs Heart Damage

The treatment causes adult heart-muscle cells to proliferate and cardiac function to improve.

By Amanda Schaffer


By injecting a protein into mice with heart damage, researchers in Boston have shown that it's possible to cause adult heart-muscle cells to proliferate and cardiac function to improve. The approach could eventually prove valuable for heart-attack patients who have lost cardiac-muscle cells and some cardiac function, especially since existing therapies are unable to regenerate or restore these lost cells.

http://link.brightcove.com/services/player/bcpid263777539?bctid=30331437001
Repair job: The green spots in this video show the division of cardiac-muscle cells as a result of the experimental treatment.
Credit: Bernhard Kühn/Cell

Several large research groups are working on techniques to regenerate heart tissue or shore up heart function using stem cells, and some of these projects have reached clinical trials. The Boston team's work, led by Bernhard Kühn at Children's Hospital Boston, instead focuses on stimulating adult heart cells, an alternative approach that could, in theory, lead to less invasive and less expensive treatments.

Kühn's work is "very exciting" in that it involves using "protein therapy to harness cardiac regeneration," says Roger Hajjar, director of the cardiovascular research center at Mount Sinai Medical Center in New York, who was not involved in the research.

For years, the prevailing dogma was that adult cardiac cells do not regenerate. Some researchers have shown, however, that at least some cardiac cells are, in fact, capable of dividing. But following a heart attack, they do not proliferate sufficiently to repair the resulting damage. Kühn's work suggests a novel way in which they could be stimulated to do so.

In a study published today in the journal Cell, Kühn and colleagues first showed that a protein called neuregulin1 can cause fully mature heart-muscle cells from mice to divide and proliferate in a petri dish. The researchers then injected this protein into mice with heart damage. After 12 weeks of daily injections, the animals' hearts showed less hypertrophy, or enlargement, and improved function. For instance, the hearts had about a 10 percent increase in ejection fraction--the fraction of blood pumped out of the left ventricle with each beat. The treatment "didn't make the damage go away completely," says Kühn, "but it did make the heart work significantly better."
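
For readers unfamiliar with the measure, ejection fraction is simply the stroke volume divided by the volume of the filled left ventricle. A toy calculation follows (Python, with hypothetical volumes chosen only to illustrate the definition):

    # Illustrative only: ejection fraction (EF) is the fraction of blood the
    # left ventricle pumps out with each beat. The volumes below are made up.
    end_diastolic_volume = 120.0   # mL in the ventricle before contraction
    end_systolic_volume = 70.0     # mL remaining after contraction
    stroke_volume = end_diastolic_volume - end_systolic_volume
    ejection_fraction = stroke_volume / end_diastolic_volume
    print(round(ejection_fraction, 2))   # ~0.42, i.e., an EF of about 42 percent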

One potential worry going forward is that Kühn's team injected the protein systemically, meaning that it traveled throughout the animals' bodies. In addition to the heart, cells in the breasts and nervous system also express receptors for the therapeutic protein, which raises the risk of unwanted cell division. "We were nervous about the treated mice developing breast tumors or producing milk," Kühn acknowledges. "We did not see abnormalities when we looked at the breasts macroscopically," he says, but the team plans to study breast and nervous tissue more closely in future research. A therapy that could be injected directly into the blood would be relatively easy and inexpensive to administer, he notes.

However, others say that systemic injections would be too risky in people, especially since cancer cells might already be present in some patients. If this therapy were to move forward, it would be "extremely important to deliver the protein locally," says Hajjar.

A few previous studies have also shown that proteins injected into animal models can cause division of adult heart cells and improvement in cardiac function. In 2007, Kühn found that a different molecule, a protein called periostin, also caused some cardiac-muscle cells to proliferate, improving heart function. In 2006, another group at Children's Hospital Boston used a regimen with a protein called fibroblast growth factor and found that it too resulted in heart-cell proliferation, reduced scarring, and improved function.

Most animal and human studies have focused, however, on various kinds of stem cells. Many researchers believe that the adult heart contains a small number of tissue-specific stem cells, which could potentially play a role in regeneration and repair. Piero Anversa of Brigham and Women's Hospital in Boston recently began phase-one trials for an approach in which cardiac stem cells are isolated from patients, expanded in the lab, and then reinjected. Anversa has shown that a cocktail of growth factors, injected into dogs, causes native cardiac stem cells to differentiate into mature cells and improve heart function. Meanwhile, Eduardo Marban, director of the Cedars-Sinai Heart Institute in California, has pioneered a related technique. His team removes small pieces of tissue from patients' hearts, grows a collection of cells, including cardiac stem cells, and then injects the cells into patients' coronary arteries. This work is also in phase-one trials.

Other researchers are focusing on stem cells derived from bone marrow; in studies conducted in pigs and, preliminarily, in humans, bone-marrow cell therapy has improved heart function.

One advantage of cell therapy is that the cells could be administered less often, in theory, than a drug or protein therapy, says Joshua Hare, director of the Interdisciplinary Stem Cell Institute at the University of Miami, although the administrations would also likely be more invasive. Several cell-therapy approaches are also further along in the research process and could potentially be available to patients sooner.

Still, there may be some overlap in how protein therapy and cell therapy could work in the heart. Some of the benefits of cell therapy may come from stimulating endogenous pathways similar to or the same as the one targeted by Kühn, says Hare. It's possible that part of the underlying biology is similar, he adds, and "we just have to figure out the best way to manipulate it."


http://www.technologyreview.com/biomedicine/23060/

A Contest to Train Cyber Combatants

Cyber-defense and capture-the-flag contests will help train future defenders of cyberspace.

By Robert Lemos


In the late 1950s, shocked by the Soviet Union's launch of Sputnik, the United States embarked on an initiative to boost its numbers of scientists and engineers. Now, private industry, academia, and government agencies are banding together in a similar push to educate and train at least 10,000 students to become the future defenders of cyberspace.

Credit: Technology Review

On Monday, the Center for Strategic and International Studies, the SANS Institute, the U.S. Department of Defense (DoD), and several university and private-industry partners plan to announce the U.S. Cyber Challenge, a triathlon of competitions designed to inspire students to learn the technical skills needed to defend--and, in some cases, attack--computer networks.

Alan Paller, director of research for the SANS Institute, an organization that educates and trains system administrators and computer engineers, says that schools aren't turning out enough students with the technical know-how to defend critical networks. "This shortage is as tough as the shortage of scientific people we had in the 1950s," Paller says. "The country has about 1,000 people that could compete in a cyber competition at a high level today. We actually need between 20,000 and 30,000."

The consortium behind the U.S. Cyber Challenge hopes that the competitions will boost interest in practical network-administration and computer-security skills. The aim is "training and developing that workforce and getting people excited about digital forensics and training them to work for us," says Jim Christy, director of future exploration for the U.S. Department of Defense's Cyber Crime Center (DC3).

The U.S. Cyber Challenge brings together three competitions under a single umbrella. First is the DC3's Digital Forensics Competition, which pits teams against one another to solve a number of puzzles that an expert might come across when investigating a crime. For example, entrants have to analyze file signatures, examine suspicious software, decrypt files without the password, and parse file headers for interesting information. The competition has already proven extremely popular: Nearly 600 teams have registered so far this year, compared to 199 teams last year. The DoD is also considering offering a cash prize of up to $1 million to increase interest in the top level of problems: challenges with no known solution, such as recovering data from a severely damaged hard drive.
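
As a flavor of the simplest of these puzzles, a file-signature check compares a file's first few bytes against known "magic numbers." The short Python sketch below illustrates the idea only; the signatures are standard, but the helper function and file name are hypothetical, not competition material.

    # Identify a file type from its leading "magic" bytes.
    KNOWN_SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"\xff\xd8\xff": "JPEG image",
        b"%PDF-": "PDF document",
        b"PK\x03\x04": "ZIP archive (also DOCX, XLSX, JAR)",
    }

    def identify(path):
        with open(path, "rb") as f:
            header = f.read(8)                 # longest signature here is 8 bytes
        for magic, name in KNOWN_SIGNATURES.items():
            if header.startswith(magic):
                return name
        return "unknown"

    print(identify("evidence.bin"))            # e.g. "PNG image" or "unknown"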

The second contest is a capture-the-flag competition run by the SANS Institute and designed for college students and high-achieving high-school students. Known as NetWars, the competition is played on a virtual private network over the Internet, using a custom operating-system image created by a small group that runs the game. Teams get points for attacking other teams' virtual machines and controlling certain services and files--the "flags."

"It's mostly attack to start out with," says Ed Skoudis, cofounder of security firm InGuardians and an advisor to the SANS Institute for the game. The result is a fair simulation of attack and defense in cyberspace, Skoudis asserts. Participants try to exploit weaknesses in their rivals' systems and then defend the systems they compromised from the other attackers.

A third competition aims to develop high-school students' knowledge of network defense. The CyberPatriot High School Cyber Defense Competition, which is in its second year, teaches students the difficulty of protecting computer networks against attacks. In the first contest, eight teams competed against each other. This year, 266 schools have signed up, says Gregory White, an associate professor with the University of Texas at San Antonio and the director of the university's Center for Infrastructure Assurance and Security, which runs the program along with the Air Force Association.

Earlier this week, the Partnership for Public Service and consultants at Booz Allen Hamilton released a report concluding that the lack of cybersecurity skills in the federal workforce leaves the "potential for major vulnerabilities for our national security." The Obama administration, too, in its recently released Cyberspace Policy Review, flagged the shortage of well-educated cybersecurity professionals as a problem of national importance.

Aside from potentially funding the forensics challenge, the federal government has not announced funding for the U.S. Cyber Challenge. However, companies such as Google and state governments such as Delaware's have already expressed interest in taking part. "If you wait for a committee to do something, you will be waiting for a long time," White says. "[Government officials] seem to be interested, but that has not translated to funding."



http://www.technologyreview.com/web/23066/

A Cell-Phone Microscope for Disease Detection

A cheap smart-phone microscope could bring fluorescent medical imaging to areas with limited access to health care.

By Anne-Marie Corley


In a twist on traditional smart-phone accessories, researchers have demonstrated fluorescent microscopy using a physical attachment to an ordinary cell phone. The researchers behind the device say that it could identify and track diseases like tuberculosis (TB) and malaria in developing countries with limited access to health care, or in rural areas of the U.S.

Snap diagnosis: The Cellscope uses a blue-light LED and filters for fluorescence imaging. The sample is inserted next to the metal focusing knob.
Credit: David Breslauer

The "Cellscope," which came out of an optics-class project at the University of California, Berkeley, could capture and perform simple analysis of magnified images of blood and sputum samples, or transmit the images over the cell-phone network for analysis elsewhere.

The contraption--a tube-like extension hooked onto the cell phone with a modified belt clip--works just like a traditional microscope, using a series of lenses that magnify blood or spit samples on a microscope slide. To detect TB, for example, a spit sample is infused with an inexpensive dye called auramine. An "excitation" wavelength is emitted by the light source--a blue light-emitting diode (LED) on the opposite end of the device from the cell phone--and absorbed by the auramine dye in the spit sample, which fluoresces green to illuminate TB bacteria. Then automated software can count the green bacteria for a diagnosis in real time, or the image can be transmitted via cell network to a separate facility where doctors can analyze it and respond.
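
The automated count can work like ordinary blob detection: threshold the green channel and count the connected bright regions. Below is a minimal Python sketch of that idea; it is not the Berkeley group's software, and the threshold, minimum blob size, and file name are illustrative assumptions.

    # Count bright green blobs (candidate bacteria) in a fluorescence image.
    import numpy as np
    from PIL import Image
    from scipy import ndimage

    img = np.asarray(Image.open("sputum_field.png").convert("RGB")).astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # A pixel counts as "green" if the green channel is bright and dominates red and blue.
    mask = (g > 100) & (g > 1.5 * r) & (g > 1.5 * b)

    # Label connected regions and discard specks smaller than a few pixels.
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    print("candidate bacteria:", int(np.sum(sizes >= 5)))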

"The cell phone approach is very valuable for all parts of the world where [medical] resources are scarce," says Aydogan Ozcan, an assistant professor of electrical engineering at UCLA, who is working to develop a lens-free method for mobile cell imaging. "It's a great step forward in this important area."

The researchers involved with the project, led by Berkeley bioengineering professor Daniel Fletcher, describe their work in a paper published in the journal PLoS One. They previously demonstrated a prototype device that used white light, or bright-field imaging, to capture magnified images of blood cells stained to detect malaria parasites, an approach that could also identify the oddly shaped red blood cells indicating sickle cell disease. Fluorescence adds a new capability that could be particularly useful if made cheaper and portable.

"Fluorescence microscopy in resource-poor countries is hard," says Wilbur Lam, a bioengineer and physician in the UCSF School of Medicine who worked on the project as a clinical expert. "Lab-grade [fluorescence] technology is expensive and hard to operate," he says. "You need a dark room, a mercury lamp, and a lot of training." These facilities aren't available in many areas of developing nations, which, Lam notes, are the places that most need the technology to detect common diseases like TB. The Cellscope device could be distributed to health workers in remote areas, extending the reach of fluorescence-based medical imaging.

According to Fletcher, fluorescence is increasingly preferred by the World Health Organization as a TB detection tool, because it's easier for the untrained eye to spot something green than to pick out a colored stain against a bright-field background. However, with traditional fluorescence equipment, health workers still have to count spots on a microscope slide by eye, which can be unreliable. The Berkeley group developed software that counts the green spots automatically; when installed on the smart phone, it could make the process easier and faster.

The cell-phone microscope could also be useful for TB therapy, Lam says. TB patients must be directly observed taking their medication over several weeks to prevent the buildup of drug resistance. Because the phone can store images for comparison and provides immediate feedback, patients could visit their local health worker and see their progress each week, rather than waiting a month for samples to come back from a centralized processing facility, or learning of complications only three or four months later.

That ability to transmit microscope images makes the Cellscope a new tool for telemedicine, says Lam. And because the images can have GPS tags associated with them, they could provide early warning for disease outbreaks.

Digitizing medical records is another problem for health workers in the field. Fletcher's group ran into the issue while demonstrating their technology in Bangladesh and the Democratic Republic of Congo. Pen-and-paper records are easily lost--a problem that the cell-phone microscope could solve by attaching patient-identification information to each digital image. Records could then be called up for easy reference when a patient returns to the health clinic.
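
One simple way to do that kind of tagging, sketched here purely as an illustration (the field names and files are hypothetical, not the Cellscope's actual record format), is to write a small metadata "sidecar" file alongside each image:

    # Attach patient ID and GPS data to a captured image via a JSON sidecar file.
    import datetime
    import json

    def tag_image(image_path, patient_id, lat, lon):
        record = {
            "image": image_path,
            "patient_id": patient_id,
            "gps": {"lat": lat, "lon": lon},
            "captured_at": datetime.datetime.utcnow().isoformat() + "Z",
        }
        with open(image_path + ".json", "w") as f:
            json.dump(record, f, indent=2)
        return record

    tag_image("sample_007.png", patient_id="TB-2009-0137",
              lat=23.81, lon=90.41)           # illustrative coordinates near Dhaka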

The researchers' key innovation, Lam says, was not inventing a new medical test, but rather taking a standard test and presenting it in a new way. Their technology "just happens to be smaller, cheaper, and attached to a cell phone," he says.

In a world with four billion cell phones, many in developing countries, Ozcan says, the cell-phone microscope could take advantage of existing infrastructure to fight disease on a new, more mobile front.


http://www.technologyreview.com/biomedicine/23059/