Saturday, August 22, 2009

Cheaper LEDs

Flexible arrays of bright inorganic LEDs could mean cheaper displays and lighting.

By Katherine Bourzac


A new technique makes it possible to print flexible arrays of thin inorganic light-emitting diodes for displays and lighting. The new printing process is a hybrid between the methods currently used to make inorganic and organic LEDs, and it brings some of the advantages of each, combining the flexibility, thinness, and ease of manufacture of organic polymers with the brightness and long-term stability of inorganic compounds. It could be used to make high-quality flexible displays and less expensive LED lighting systems.

Stretchy screens: Arrays of tiny red inorganic LEDs can be printed on a stretchable rubber substrate to conform to curves. The gold-colored wires are electrical connections and are also flexible.
Credit: Science/AAAS

Inorganic LEDs are bright and long lasting, but the expense of manufacturing them has confined them mainly to niche applications such as billboard-size displays for sports arenas. What's more, the manufacturing process for inorganic LED displays is complex, because each LED must be individually cut and placed, says John Rogers, a materials science professor in the Beckman Institute at the University of Illinois at Urbana-Champaign. So display manufacturers have turned to organic materials, which can be printed and are cheaper. And while LED-based lighting systems are attractive because of their low energy consumption, they remain expensive. The new printing process, developed by Rogers and described today in the journal Science, could bring down the cost of inorganic LEDs because it requires less material and simpler manufacturing techniques.

Displays based on inorganic LEDs, says Nicholas Colaneri, director of the Flexible Display Center at Arizona State University in Tempe, "are not generally economical to make." The manufacturing process involves sawing wafers of semiconducting materials such as gallium arsenide, picking and placing each piece individually using robotics, and adding electrical connections one at a time.

To make the hybrid LEDs, the Illinois researchers start by growing an inorganic semiconducting material on top of what Rogers calls a "sacrificial" layer. The group uses a chemical bath to etch out LEDs that are just 10 to 100 micrometers on each side. Each LED is then secured with polymer anchors on two of its four corners. The anchors hold the LED in place during a second chemical bath that undercuts the LED, removing the sacrificial layer. The LEDs, which are about 2.5 micrometers thick, can then be picked up on a soft stamp and printed onto a glass, plastic or rubber substrate covered in a polymer adhesive. "You can deliver thousands of LEDs in a single step," says Rogers. "And because they're so thin, they can be interconnected using the conventional processes" used for organic LEDs and liquid-crystal displays.

Rogers says the method should be cheaper than conventional methods for making inorganic LED displays because it requires less of the expensive semiconducting materials. The chemical etching and stamping techniques make it possible to shrink each individual LED. "The materials cost is a significant component of the final cost, so you have to use the minimum amount," says Rogers. LEDs made using conventional sawing techniques are about half a millimeter per side. But because they're bright, these LEDs can be much smaller and still provide the same display resolution. "To light a 100-micrometer pixel, you only need a 5-micrometer LED because to your eye, it looks the same," says Rogers.
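The materials savings are easy to check with back-of-the-envelope arithmetic, since die area scales with the square of the side length. A quick sketch using the figures quoted above (half a millimeter per side for a sawed die, 5 micrometers for a printed die lighting a 100-micrometer pixel):

```python
# Rough comparison of semiconductor area per LED die, using the
# die sizes quoted in the article. Area scales with the square of
# the side length, so shrinking the die pays off quadratically.

def die_area_um2(side_um):
    """Area of a square LED die, in square micrometers."""
    return side_um ** 2

sawed_side_um = 500    # conventional sawed die: ~0.5 mm per side
printed_side_um = 5    # printed die sufficient for a 100 um pixel

savings_factor = die_area_um2(sawed_side_um) / die_area_um2(printed_side_um)
print(f"Semiconductor area per die shrinks by a factor of {savings_factor:,.0f}")
# -> Semiconductor area per die shrinks by a factor of 10,000
```

That factor of 10,000 ignores thickness; since the printed LEDs are also only about 2.5 micrometers thick, the volume savings are larger still.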

The researchers have so far demonstrated the printing process for red LEDs made from gallium arsenide. Rogers says that the same approach can be used to make other colors of LEDs using different materials. "Conceptually it's the same process," he says. Rogers says he has also used the method to make blue LEDs using nitrides, though this work has not yet been published.

"These are conventional LEDs made by an unconventional process," says Colaneri. If Rogers and his coworkers can "demonstrate that this dramatically reduces the cost," he says, then "this is a potential competitor with OLEDs, though it's far from proven."

Rogers says the university is in talks with recently formed Canadian start-up Cool Edge to license the printing method and expects the first applications to be in lighting. Existing LED lightbulbs cost $30 to $100 for a single fixture, says Steven DenBaars, professor of materials science and co-director of the Solid-State Lighting Center at the University of California, Santa Barbara. "We've got to reduce the costs," he says.



http://www.technologyreview.com/computing/23294/?a=f

A Step Forward for Microbial Machines

A novel approach to genetic engineering could aid the creation of fuel-producing bacteria, and edges closer to artificial life.

By Emily Singer


In a deft act of genomic manipulation, researchers at the J. Craig Venter Institute, in Rockville, MD, transplanted a bacterial genome into yeast, altered it, and then transplanted it back into a hollowed bacterial shell, producing a viable new microbe. The technique provides a way to more easily genetically engineer organisms not commonly studied in the lab and could aid the growing effort to create microbes that can produce fuels or clean up toxic chemicals. "This research enhances our capabilities in genome engineering and opens new applications," says Jim Collins, a bioengineer at Boston University, who was not involved in the research. "I see this as an important advance relevant to bioenergy and biomaterials industries."

Microbial magic: Researchers at the Venter Institute, in Rockville, MD, transplanted the genome of the bacterium Mycoplasma mycoides (shown here) into yeast, altered it, and then transplanted it back into a related bacterial species.
Credit: Courtesy J. Craig Venter Institute

Thanks to decades of scientific research, microbes such as yeast and E. coli come with an arsenal of genetic tools that have enabled researchers to enact genetic overhauls of growing complexity--replacing entire chemical pathways, for example, to make microbes that can perform more complex tasks or produce materials more efficiently. But many microbes of industrial interest, such as those with unique capabilities for generating chemicals, aren't as easily hackable. Target organisms include photosynthetic microbes, which scientists hope can be engineered to efficiently turn light into fuel. By inserting the genomes of these bacteria into yeast, the researchers at the Venter Institute are more easily able to engineer them. "People want the capability of yeast or E. coli but want to have the photosynthetic apparatus there," says David Berry, a partner at Flagship Ventures and the 2007 TR35 Innovator of the Year. "Combining those two genomes would be interesting in the biofuels world."

The new technology emerged from the Venter Institute's high-profile quest to create life from scratch--generating a synthetic genome and then using it to control, or reboot, a recipient cell. In 2007, Venter researchers published a paper describing a genome transplant, in which a genome from one type of bacteria was transferred to a closely related one, giving the host the characteristics of its donor. Then last year, they created a synthetic genome by stitching together pieces of synthesized DNA.

To build a synthetic organism, however, researchers must transplant that synthetic genome into a cell and have it successfully reboot the cell. But that last step has proved problematic. The synthetic genome was assembled in yeast, which means it lacks some of the molecular markings characteristic of bacteria. Researchers discovered that without these markings, the host bacterium views the transplanted genome as a foreign invader, hacking it into pieces.

The new technique, published online in the journal Science, provides a way around that hurdle. Sanjay Vashee and colleagues first transplanted the genome of Mycoplasma mycoides into yeast. While scientists have previously grown pieces of bacterial DNA in yeast, this is the first instance of growing an entire bacterial genome this way. Using existing tools for genetically engineering yeast, researchers then chemically altered the bacterial genetic material so that it carried the molecular markings characteristic of bacteria. They transplanted the modified genome into Mycoplasma capricolum, a species closely related to the mycoides genome donor, to create a viable mycoides cell.

The researchers now aim to test the technique on other bacteria. "We want to start transferring this technology to organisms that are more relevant industrially or for biofuels," says Vashee. For example, he says, genetic pathways from organisms that can break down environmental pollutants could be engineered into bacteria that can survive in harsh and contaminated environments, such as acidic ponds, and then used to clean up those areas.

The technology will likely find its way to Synthetic Genomics, a biofuels start-up founded by Venter that is developing genetically modified algae to produce fuels and other chemicals. The company announced a $300 million partnership with ExxonMobil last month.


http://www.technologyreview.com/biomedicine/23297/

The Electric Acid Test

On the Isle of Man, the beginnings of a marketable electric motorcycle.

By Adam Fisher


The Isle of Man is a small British possession in the Irish Sea. Inland, a native breed of four-horned sheep grazes in verdant fields. On the coasts, castles touch the sea. The Manx have their own (albeit dead) language, their own money, their own laws, and--tellingly for this story--no national speed limit. This quirk of governance makes the place a natural host to a bloody ritual that has taken place nearly every spring for a century: the Tourist Trophy. The TT is not a motorcycle race but the motorcycle race: the first, the most famous, and by far the deadliest.

Team Agni: Arvind Rabadia and Cedric Lynch.
Credit: Jude Edginton

It's also a party: 40,000 bikers invade the island determined to scare the wool off the sheep while screaming through the Snaefell Mountain Course, a winding circuit of public roads cordoned off for the event. The circuit climbs from sea level to 1,380 feet, snaking for almost 38 miles through 200-some turns on country roads that cut through village, hamlet, and farm. Much of the track is hemmed in by dry-stacked fieldstone walls topped by spectators drinking their pints. There is no safe place to crash. Racers die or are maimed every year.*

As in warfare, the carnage is accompanied by technological progress. Soichiro Honda came to the race in 1959 having declared five years earlier that it was time to challenge the West. Less than a decade later, his company won the world manufacturer's title in every class: 50cc, 125cc, 250cc, 350cc, and 500cc. Not long after that, the British motorcycle industry was itself conquered, wiped out by mismanagement and superior Japanese technology. Ironically, the technical advances that made racing bikes so fast led the Fédération Internationale de Motocyclisme (FIM), the sport's governing body, to decertify the race in 1976, calling it too dangerous. Thus, pros no longer ride the TT. However, the race's bloody reputation makes the TT, if anything, even more prestigious than FIM-sanctioned events. To compete in it, in the words of the legendary FIM rider ­Valentino Rossi, "you need to have two great balls."

This year, the Manx government added a futuristic new event to the June race schedule. The TTXGP, for "Tourist Trophy eXtreme Grand Prix," was billed as the first zero-emissions motorcycle race. While any technology could enter, as a practical matter zero emissions means electric. Even the FIM got on board, making the TTXGP the first FIM-approved TT race in over 30 years and the first officially sanctioned electric-motorcycle race ever. "It is either going to be the most important day in the next hundred years of motorcycling or a complete debacle," said Aaron Frank, an editor for Motorcyclist magazine who traveled from Milwaukee to watch the race. "But either way, it's worth watching."

As the day arrives, everyone watching knows that the TTXGP will be slower than the "real" motorcycle race, the TT, because the TTXGP is an energy-limited race. In effect, the "gas tank" of an electric bike is minuscule, so to win the TTXGP the bikers must mind their energy consumption. In contrast, the gas bikers in the TT run with their throttles wide open. However, batteries' energy density has been improving at a rate of about 8 percent a year, which means that even without any other technological progress, electric bikes should run head to head with gas in about 20 years. The TTXGP is intended to make the future arrive sooner. The winner will not just be the fastest in an esoteric class but the front-runner in the greater challenge ahead: creating an electric bike that can compete in the $50 billion world motorcycle market. In that sense, the TTXGP is the proving ground for the next Honda.
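The 20-year figure follows from simple compounding. A quick sketch of the arithmetic (the 8 percent annual rate is the article's; the resulting multiplier of roughly 4.7x is the implied size of today's gap between electric and gas bikes):

```python
# Compound growth implied by ~8 percent annual improvement in
# battery energy density, the rate cited in the article.

rate = 0.08
years = 20
multiplier = (1 + rate) ** years
print(f"After {years} years at {rate:.0%}/yr: {multiplier:.1f}x the energy density")
# -> After 20 years at 8%/yr: 4.7x the energy density
```

In other words, the claim amounts to saying that today's batteries store something like a fifth of the usable energy they would need to match gasoline bikes outright.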

*This year is no exception, claiming the life of a racer named John Crellin--the 226th TT fatality.

Green Machines
Twenty-two electric bikes show up to compete. While many of the entries are experimental one-offs from technical universities or obsessive hobbyists, three entrants are so-called factory teams: Brammo, Mission Motors, and MotoCzysz. All of them hail from the West Coast of the United States. Brammo is in Ashland, OR, Mission Motors in San Francisco, and MotoCzysz in Portland. And all are entering the consumer market with an electric bike. Brammo is set to sell its motorcycle off the floor at Best Buy: it's a $12,000 runabout with a top speed of 55 miles per hour. For the TTXGP, Brammo has upgraded almost every component in its bike to create two 100-mile-per-hour crotch rockets, both entered in the race. The Brammo racers are fast, light, and nimble but under-spec'd compared with what Mission and MotoCzysz trailer in: full-size race bikes heavy with batteries, capable of reaching 150 miles per hour. The Mission bike will sell for $69,000; the MotoCzysz will probably sell for slightly less.

Mission and MotoCzysz are both targeting the high-end superbike market, and both promise to ship products in the next year or two, but that is where the similarities end. Mission's charismatic young CEO, Forrest North, is a computer geek who likes to speculate on the future of software design: he fantasizes about a wheelie-­popping autobalancing "Segway app" for a bike's control computer. (Though he hastens to say that Mission itself is not working on such an app.) MotoCzysz founder Michael Czysz is a designer--and his bike is a looker. Exposed battery packs protrude from each side, a fresh take on the naked-sportbike style of the insanely popular Ducati Monster. The packs are modular and swappable, and the bike is "green," Czysz explains, "because it's upgradable." Even Infield Capital's David Moll, one of the investors behind Mission Motors, is impressed when he sees the battery-­as-engine design. "I've got a dog in this fight, but if that doesn't excite you," he says of the MotoCzysz entry, "then there's something wrong with you."

Brammo, Mission, and MotoCzysz are directly competing for the capital that's needed, in enormous quantity, to introduce a new vehicle to the American market. Brammo has the early lead in the money race: a $10 million investment from Best Buy's venture fund and Chrysalix Energy Venture Capital, as compared with Mission's $2 million in seed capital. Bringing up the rear is MotoCzysz, a company essentially funded out of Michael Czysz's back pocket. While both Mission and Brammo hope to win the TTXGP in order to generate publicity and thus orders, ­MotoCzysz needs to win, or at least place, in order to woo enough capital to enter the marketplace. It's anyone's race to win, of course, but in most motor sports, the factory teams with access to deep corporate pockets are the first to cross the finish line. Behind them come the privateers--scrappy dreamers and shade-tree mechanics who are short on resources but long on heart.

So it's all the more surprising that in the week before the race, a dark horse emerges, freaking out all the factory teams. The fastest bike in the TTXGP prelims--two qualifying runs around the island--turns out to be from Team Agni, a total unknown, a mere privateer. Millions of American research-and-development dollars find themselves chasing the tail of a no-money ratbike engineered in India.

"Bloody Simple"
Cedric Lynch and Arvind Rabadia are the two halves of Team Agni, and their tent is the smallest in the pit area, a 10-foot-by-10-foot red E-Z Up. Their kit is equally minimal: an assortment of hand tools, a halogen work light, and a few copies of the latest issue of Battery Vehicle Review to pass out to curious visitors. The zine, which is the journal of the U.K.'s Battery Vehicle Society, is a hand-stapled, Xeroxed affair; the cover story, "Living with the G-WIZ," features one owner's evaluation of his electric quadricycle.

In their tent the day before the big race, Lynch positions the hot halogen light over a custom fiberglass battery tank that Rabadia has built by hand. The toxic smell of polyester resin fills the air. "Bloody hell, Cedric!" exclaims Rabadia from his lawn chair. "Are you trying to kill us, man?" Rabadia sports a Mohawk and a gold hoop earring, giving him an all-purpose air of menace. Lynch, on the other hand, has the otherworldly demeanor of someone who has spent the past 20 years meditating in a cave. He's barefoot, ponytailed, and dressed in little better than rags; it is unclear whether he even hears Rabadia's outbursts. Right now, Lynch is bent over double, fashioning a part from a piece of scrap metal by holding it with his bare feet and boring a hole in it with a mechanical hand drill. They're quite a pair--the pirate and the pauper. "I do all the talking and Cedric does all the working," Rabadia says. "Swearing at Cedric is my way of calming myself down."

At the center of the Agni tent is the machine that's blown through the two qualifying laps and set the pace to beat. If the factory-made machines look like the future, the Agni entry looks like Frankenstein's monster. The bike is a Suzuki GSX-R with a lopsided stack of lithium-polymer batteries where the internal-­combustion engine and gas tank would normally be. Twin DC motors, each the size and shape of a stack of pancakes, are mounted outboard of the frame and drive the rear wheel by way of a chain. The engineering is primitive, the craftsmanship nonexistent. The whole bike seems to be held together with zip ties and duct tape. Instead of a dashboard, the rider reads from a battered yellow voltmeter jammed between the handlebars. After the fiberglass tank dries, the paint job comes out of a spray can, and the stickers of Agni's sponsors--mainly Kokam, a South Korean battery company--are slapped on so haphazardly that they flap in a breeze. But Team Agni is ready for the main event.

The bike's shabbiness is, for Rabadia, a badge of honor in what he sees as a class struggle between the factory teams and the privateers. "We thought we were the underdogs," he says. The Agni bike was thrown together in only six weeks. "It could have been half that," he says. "I told Cedric 'two weeks,' but then I wasn't around to crack the whip." For Lynch, the bike's evident ugliness is not a class statement but, rather, the fruit of his rigorous antimaterialist philosophy. To Lynch, it's what's inside that's important, and nothing else. There's not much to an electric bike--just a battery bank, controllers, motors, and the wiring that connects them. But unlike all the other designers, who hide their circuit boards inside aluminum cases, Lynch showcases his wiring under Plexiglas right on top of the main battery stack, enabling his competitors to examine exactly what makes the thing go. There's not a microchip to be seen, but that's exactly the point. "Anything that's not there can't go wrong," Lynch explains. He races as he lives, on the barest minimum. "Bloody simple, it is," Rabadia adds. "Nothing to it."

Agni’s home-brew bike crushed the flashy, high-design, high-technology bikes of the American teams with a keep-it-simple strategy.
Credit: Ian Kerr

Team Agni may be a study in minimalism and eccentricity, but it also has something formidable: more than 50 years of experience. Lynch recounts how he first became interested in electricity. "I left school when I was 12 because I couldn't stand it, and I went home to read," he says. "Mostly theoretical treatises and that sort of thing." For fun, he puttered around in a workshop with his father, one of the engineers who had built the Colossus computer and broken the Nazis' war codes. As a young man, Lynch made a career of entering electric-vehicle races. The first one was in 1979, when his poverty proved to be no disadvantage. "DC motors were very expensive then," he recalls, "so I made one of my own design out of tin cans." Lynch came in second, as his tin-can design proved to be more efficient than that of the factory-made competition. In the 1980s and '90s he would come to dominate the Battery Vehicle Society races. "We won most of the things we entered," Lynch says. "It was good fun." Back in the BVS days, Rabadia was Lynch's protégé, but now it's Lynch who works for Rabadia. The latter set up Agni in his native India to commercialize his mentor's design: the so-called pancake motor. He's brought Lynch to the TTXGP "because our motor is the best, and we need to get the respect we deserve."

"Just a Miscalculation"
Meanwhile, on the opposite side of the island, Team MotoCzysz has rented out a small test track to get some last-minute performance data. Things are not looking good for the best-looking bike. In the first qualifying lap around the island, MotoCzysz blew two of its three motors, and in the second, the rider had to cross the finish line under human power, paddling with his feet like a duck. "Humiliating," Czysz admits, "but just a miscalculation."

Like Agni's machine, the bike has no software, no onboard data-logging computer, no odometer. The bike is smart enough to know how much charge it has left, but the state-of-charge meter--the "gas gauge," in essence--had yet to be calibrated. To make sure that the bike has enough juice for the race, the rider has to know what's left in the "tank." And without a dynamometer, the only way to get the calibration information is to ride the bike in a circle for a few miles and then hook it up to a digital multimeter. Czysz makes the best of it while climbing onto the bike. In full leathers, highly styled hair, and designer sunglasses, he looks like the Derek Zoolander of electric-vehicle racing. He even speaks with Zoolanderian opacity: "Other teams have data acquisition," he boasts. "We have rider acquisition."

Adrian Hawkins, the lead MotoCzysz engineer, sheepishly holds up his stopwatch and ledger. "Our acquisition system," he says.

Just before launch, the owner of the track--a practical joker--suggests to Czysz that Imperial miles and U.S. miles are different.

Czysz turns to Hawkins, and asks how long each lap is.

"One point five miles," answers Hawkins.

"U.K. miles or U.S. miles?" Czysz quizzes.

Hawkins is stumped: U.K. miles or U.S. miles?

"U.K. miles or U.S. miles!" Czysz demands, more forcefully this time. Czysz has a reputation as a screamer, and his voice is rising.

"U.S. miles," Hawkins stammers, gently telling Czysz that miles are consistent across borders.

A voice from the small crowd that's gathered to watch comes to Hawkins's rescue, politely informing Czysz there's an Imperial gallon and a U.S. gallon, and perhaps that is the source of his confusion.

"It's gallons that are different?" says Czysz to no one in particular, "Okay, I didn't know." And with that, he zips off.

"Over the Moon"
On Friday, race day, the spectators at the start/finish line are in a jocular mood. They've come to the Isle of Man to see the afternoon's Senior TT, the "real" race, in which the boys with the biggest balls race thundering liter bikes at speeds of up to 180 miles per hour. Although the fastest TTXGP bikes can hit 150 miles per hour, they can't sustain that pace for the whole course, because even the biggest, heaviest battery bombs in the field--Mission and MotoCzysz--have the energy equivalent of only about a quarter of a gallon's worth of gasoline in their tanks. The electric racers must carefully modulate their throttles to conserve energy. For the TT traditionalists, that fact makes the electric race little more than a mildly amusing morning diversion. Voices from the crowd crack jokes:
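To put "a quarter of a gallon" in battery terms, one US gallon of gasoline holds about 33.7 kWh of chemical energy (the EPA's standard equivalence figure; the article gives no kWh number, so this conversion is added here):

```python
# Converting the article's "quarter of a gallon's worth of gasoline"
# into battery capacity. 33.7 kWh per US gallon is the EPA's standard
# gasoline energy-equivalence figure (an assumption added here).

KWH_PER_US_GALLON = 33.7
pack_kwh = KWH_PER_US_GALLON / 4
print(f"Largest packs in the field: roughly {pack_kwh:.1f} kWh")
# -> Largest packs in the field: roughly 8.4 kWh
```

An electric drivetrain turns most of that into motion, while a gasoline engine wastes the bulk of its fuel's energy as heat, so the effective range gap is smaller than the raw numbers suggest--but the electric racers still have to ration every watt-hour.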

"That isn't the warm-up area anymore, then, is it?"

"No more 'Gentlemen, start your engines!' I suppose."

"They'll need some pretty long extension cords for this track."

And then, with a wave of a green flag, the electric bikes take off, not an extension cord in sight. The motors wind up, accelerating the bikes with a steadily rising whir: mix, chop, blend, crumb, aerate--until, finally, puree. They're so quiet that some spectators camped out on the sides of the road aren't even aware the bikes are coming until they're already past.

It's a good show, especially when some of the field start blowing parts under the strain. MotoCzysz is the first casualty: two of its three air-cooled DC motors disintegrate, throwing chunks of metal through its vent holes. The machine, perhaps fittingly, comes to rest in front of one of the oldest churches on the island--St. Runius. Back at the start/finish line, Michael Czysz realizes that he's a goner when the radio announcers at the first checkpoint fail to note his bike. "That's it, that's it, it's over now," he says under his breath as the meaning of failure sinks in. One of Brammo's two hopped-up race bikes is the next casualty, victim of a bump taken at 100 miles per hour that pops the rear wheel in the air ever so slightly. Suddenly free of the earth's bite, it spins even faster, the motor's RPM skyrockets, and the overclocking sensor inside does what it was programmed to do: cut the power to protect the engine. The bike eventually gets to within one mile of the finish line before giving up the ghost completely.

The rest of the pack zooms by, chains clicking furiously, on the way to their next checkpoint, the Sulby speed trap. A privateer team from Germany, XXL, pours on the juice to ring up the race's fastest recorded top speed: 106.5 miles per hour. XXL's English-­speaking engineer, Marko Werner, laughs at the grandstand crowd's stunned reaction when they hear the figure over the PA. "It vas easy," Werner says. There were no all-nighters or last-minute track days for him or his team, because instead of trying to reinvent the electric wheel, XXL spent its time and money--four months and 35,000 euros--sourcing the most trouble-free components it could find: a water-cooled motor and controller designed for a 10-year-old hybrid car, the Audi A4 Duo. "Siemens did all the verk," Werner confides.

The race is not won by top speed, of course--it's the fastest average speed that counts. And Agni is first over the finish line with a lap time of 25 minutes, 53.5 seconds. A cry goes through the grandstand crowd: "India wins!" Three minutes behind, for second place, is XXL. Brammo's good bike takes third, the only factory bike to make it to the podium. Mission comes in fourth, and MotoCzysz and the second Brammo bike are DNF: did not finish.
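That winning time works out to an average in the high 80s. A quick check (the 37.73-mile figure is the standard length of the Snaefell Mountain Course, which the article rounds to "almost 38 miles"):

```python
# Average-speed arithmetic for Agni's winning lap.

course_miles = 37.73                 # Snaefell Mountain Course length
lap_seconds = 25 * 60 + 53.5         # winning lap: 25 min 53.5 s
avg_mph = course_miles / (lap_seconds / 3600)
print(f"Winning average speed: {avg_mph:.1f} mph")
# -> Winning average speed: 87.4 mph
```

Respectable for an energy-limited race, though well short of the 130-plus mph lap averages the fastest gasoline TT bikes manage.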

If the TTXGP were a battle in which the biggest war chest determined the outcome, Brammo would have won. If it were a beauty contest, MotoCzysz would have taken the tiara and the sash. If it were chess with a crash helmet, then Mission would have had it. But in the end, reliability trumped all. Agni won the TTXGP by keeping it simple. XXL and Mission both used faster but more complicated liquid-cooled AC motors. But second-place XXL chose the tried-and-true design from Audi, while fourth-place Mission went all-out with a custom-built power plant. The experience of third-place Brammo is most telling of all: its one and only breakdown came after it bolted on an extra battery pack at the last minute.

As the Agni, XXL, and Brammo bikes glide one-two-three into the winner's circle, a scene of barely controlled mayhem erupts: the riders are draped with laurels, and shouts of "Motoguru!" go up for Cedric Lynch, who tells the television cameras that he is "absolutely delighted" with the result. Magnums of champagne are uncorked and sprayed across the crowd. TTXGP race organizer Azhar ­Hussain toasts Team Agni with a speech: "Today, a new company with no budget and no baggage came and won." The Indian ambassador to the United Kingdom is on hand to give Team Agni his personal congratulations. Arvind Rabadia receives him while wearing the Indian tricolor as a superhero-style cape. "First in the qualifier, first in the second qualifier, first in the race," boasts ­Rabadia, the Ashoka Chakra embroidered on his flag looking for all the world like a motorcycle wheel. "I'm over the moon, man!"

Lynch sees the race differently, as he does most things. In his mind, he didn't beat the rest of the field. Rather, he led it, earning a historic victory in an epic, ongoing struggle against internal combustion. "I can just imagine," Lynch muses, "what the petrol-heads would have said if we hadn't beaten the 50cc lap record set in 1966 by Ralph Bryans on a Honda works bike."

Adam Fisher is a freelance writer who lives in Sausalito, CA. He gave up motorcycling for good after crashing his sparkle-orange 1974 Honda CB750.


http://www.technologyreview.com/energy/23172/









Antivirus Protection Gets Social

Can cloud computing and social networking improve security software?

By Robert Lemos


People often rely on their circle of friends for support. Now one start-up company hopes to harness those social connections to create and deliver security software that will protect users from computer viruses and other digital threats.

Threat assessment: Immunet Protect (above) gives updates on the threats stopped by the software, which is still in development. Future versions will include information on malicious software detected on friends' computers.
Credit: Technology Review

On Wednesday, start-up Immunet announced its first product, Immunet Protect, a program that checks downloaded files against a directory of malicious software, such as viruses and Trojan horses. The company joins a handful of others in moving much of its detection capability into a service offered over the Internet, or the "cloud." Yet the real difference, say its founders, is the software's ability to protect digital communities--those users connected together via social networks such as instant messaging, Facebook, or Twitter.

Because malicious software travels quickly through messages sent to friends via e-mail, instant messaging and other social technologies, Immunet plans to use the same pathways to send the needed information to protect users, says Oliver Friedrichs, CEO and founder of the four-person firm.

"We are seeing an increase in the number of threats propagated through social networks, as well as attacks on social-networking sites in general," Friedrichs says. "Your social network is becoming a larger attack vector than it has been in the past. Our approach is to protect that social network."

Immunet users will be able to see who in their social network is a current user of the service. For example, Facebook users could see which of their friends has also installed Immunet Protect. The service will also allow users to see whether their friends have seen a greater proportion of threats than the population of Immunet Protect users as a whole.

The idea is to treat malicious programs less as an analysis problem--where a file is scrutinized to determine whether it poses a threat--and more as a data-mining problem, Friedrichs says. When a new file is downloaded to a user's computer, information on more than 100 attributes is sent to Immunet's servers. If a threat is recognized, the service will respond in a fifth of a second; otherwise, the file is allowed to run while the company attempts to analyze the file's attributes.
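The lookup model described above can be reduced to a minimal sketch. This is an assumption-laden illustration, not Immunet's protocol: the server-side directory is stood in for by a local set, and the "more than 100 attributes" are collapsed to a single hash.

```python
# Minimal sketch of a cloud-lookup antivirus check: fingerprint the
# downloaded file, ask a directory of known-bad fingerprints for a verdict,
# and let the file run if it is not recognized. The directory here is a
# local set standing in for the vendor's servers; the sample bytes and the
# single-hash "attribute list" are invented simplifications.

import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in for the attribute list a real client would send."""
    return hashlib.sha1(data).hexdigest()

# Server-side directory of known malicious fingerprints (invented sample).
KNOWN_BAD = {fingerprint(b"sample trojan bytes")}

def check_download(data: bytes) -> str:
    """Block recognized threats; allow unknown files to run pending analysis."""
    if fingerprint(data) in KNOWN_BAD:
        return "blocked"
    return "allowed"  # unknown file runs while analysis continues server-side

print(check_download(b"sample trojan bytes"))  # blocked
print(check_download(b"harmless installer"))   # allowed
```

Note the trade-off the article's "fifth of a second" figure implies: the client must tolerate an unknown verdict rather than stall every download waiting on the network.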

"There are so many threats today that an analyst cannot analyze them all, so we are using data-mining techniques to find the needles in the haystack," Friedrichs says. "We consider our user base to be a very large sensor network."

It's no secret that current antivirus software has trouble detecting the latest threats. Last week, antivirus firm Panda Security released a report showing that 52 percent of malicious software circulates for no more than 24 hours, because the cybercriminals who spread it compress and rearrange its binary code differently every day, a technique known as packing. The ability to rearrange code has put traditional antivirus companies at a disadvantage. In a research paper published last year, computer scientists at the University of Michigan found that even the best antivirus programs could detect only three-quarters of newly packed malicious code. It took three months for the best antivirus engine to detect 90 percent of the dangerous software.
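Why packing defeats signature matching can be shown in miniature. This toy example is illustrative only: real packers are far more elaborate, but the principle is the same as compressing the identical payload with a one-byte change.

```python
# Toy illustration of packing: the same payload, repacked with even a
# one-byte difference before compression, yields a completely different
# byte stream--so a hash signature taken from yesterday's sample misses
# today's build. Purely illustrative; real packers are more sophisticated.

import hashlib
import zlib

payload = b"pretend this is the same malicious program" * 10

monday_build = zlib.compress(b"\x00" + payload)   # attacker's Monday packing
tuesday_build = zlib.compress(b"\x01" + payload)  # same payload, repacked

# Signature extracted from Monday's sample by an antivirus analyst.
monday_signature = hashlib.sha256(monday_build).hexdigest()

# Tuesday's build no longer matches Monday's signature.
print(hashlib.sha256(tuesday_build).hexdigest() == monday_signature)  # False
```

This is the asymmetry Oberheide describes below: repacking costs the attacker almost nothing, while each repack obliges defenders to produce a new signature.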

Safety in numbers: Immunet Protect integrates with Facebook to allow users to see if their friends use the software.
Credit: Immunet

"There is no easy solution to the problem, unfortunately," says Jon Oberheide, a PhD student at the University of Michigan and the lead author of the paper. "The battle is quite asymmetric, with the scales being tipped heavily in the attacker's favor. We need to focus our efforts and resources on approaches that will significantly reduce this asymmetry, instead of continuing the endless game of reactive catch-up, which the vendors are obviously losing."

To process and analyze viruses faster, several companies have moved to a cloud model, where--rather than putting an intelligent analysis engine on every user's computer--the scanner is a "dumb" program that converts each new file into a list of attributes that are then sent to the software provider's servers. Those servers analyze the attributes and determine whether the file is malicious.

Other antivirus firms have already started to rebuild their antivirus software incorporating the cloud-computing model. McAfee, Panda and Prevx already provide some level of automated analysis as an online service for users.

Pedro Bustamante, senior research advisor with Panda Security, argues that community data can help antivirus firms prioritize their analysis efforts. Panda sees nearly 50,000 files a day, of which some 37,000 are samples of malicious code.

"I have not seen a product yet that is using community as a factor in detection," he says. "I think it could be a nice complement to detection technology but not a stand-alone solution."

However, Immunet's approach puts the company at the very early stages of a cloud antivirus solution, Bustamante adds. "It takes a long time to develop these technologies in the cloud."

Friedrichs underscores that Immunet's service is not complete--it's still in development. The company is working on adding generic detections and heuristics for flagging large categories of threats, which should make them easier to identify. In addition, the company is currently considering ways of handling potentially harmful files when the user's computer is not connected to the Internet.

http://www.technologyreview.com/web/23293/

OurTube

"Open video" could beget the next great wave in Web innovation--if it gets off the ground.

By David Talbot


In 2005, Michael Dale and Abram Stern, a pair of grad students in digital media arts at the University of California, Santa Cruz, decided it would be fun to make video remixes of speeches in the U.S. Congress. Their goals were artistic; Stern had notions, for example, of editing a Senate floor speech to remove everything but the pronouns. They would be following, loosely, in a tradition of video commentary that includes remixing speeches from the 2004 Republican National Convention to feature only the many utterances of "terrorism" or "September the 11th" by George and Laura Bush, Dick Cheney, Rudy Giuliani, and others. Aware that congressional proceedings are public--and that C-SPAN airs them freely--the pair went online to hunt for the raw material. But "the footage wasn't there," Dale recalls. While C-SPAN did offer archival material for a fee, he says, "if we wanted to pull together a few different clips of senators saying different things--there was no online repository for download."

Credit: Technology Review

So they bought a computer and several hard drives, which they hooked up to a television, and started unabashedly copying C-SPAN's congressional coverage. Then, in March 2006, they went live with a website called Metavid.org, hosted by the University of California, which offered the purloined legislative footage free for the downloading. Before long, C-SPAN--a nonprofit company created by the cable industry--claimed that the university was violating its copyright. When university lawyers learned that only the videos of committee hearings had been shot by C-SPAN's cameras (proceedings on the floor of the House and Senate were recorded by government cameras), a compromise was reached: floor footage could stay up (with the C-SPAN trademark removed), but the committee footage had to be taken down. C-SPAN later liberalized its policies to allow free reuse of federal-government coverage--but it excluded commercial use. This is not something Metavid could promise, so the hearings remain unavailable on the site.

As they looked for alternative sources of committee footage, Dale and Stern encountered a thicket of technical problems. It turns out that many (though not all) congressional committees do make their own videos, and some of these committees allow you to play the videos on their websites. But the technologies involved reflect the chaos of competing formats that characterizes Web video today. To pick two examples: the Senate Commerce Committee offers videos in a Flash player but offers no download link. And the House Judiciary Committee still uses RealPlayer, a format that's now largely obsolete. Any would-be users of these resources would soon run into trouble. Where download links weren't provided, they'd need special software to copy the video from the government site. Once the videos were in hand, they'd have to buy software to do any necessary format conversions and editing. And finally, they'd have to upload the results. "All of these offerings are difficult to reuse in a video project," says Dale.

Dale and Stern's difficulties offer one small glimpse into a larger problem with online video: unlike much of the rest of the Web, it is accessed through a collection of closed, proprietary formats, such as Adobe's Flash and Microsoft's Silverlight. (Try a video search engine such as Blinkx; you'll get plenty of videos pulled from around the Web, but to watch them you may need to download or update software.) Certain websites, led by YouTube, convert uploaded content to Flash for ease of viewing. Today, however, a growing number of technologists and video artists want to see Web video adopt the kind of open standards that fueled the growth of the Web at large. HTML, the markup language that describes Web pages; JavaScript, the programming language that allows forms, graphics, and various special effects to be added to them; JPEG, the standard for images--all these building blocks of the Web can be used by anyone, without paying fees or asking permission. This openness was indispensable to the creation and then the explosion of blogs, search engines, social networks, and more.

A similar transformation of video would not just allow trouble-free playback of any video you might encounter. It would also mean that any innovation, such as a new way to search, would apply to all videos, allowing new technologies to spread more rapidly. And it would make it far easier to mix videos together and create Web links to specific moments in different videos, just as if they were words and sentences plucked from disparate online text sources: imagine linking part of a politician's speech to a contradictory utterance years earlier. "In 1993 people thought AOL's newsrooms were mind-blowing, because that's all they were exposed to," says Dean Jansen, outreach director of the Participatory Culture Foundation, a nonprofit group that is developing an open-source video player called Miro. "Now they can write their own blogs and find and read hundreds of thousands of news sources and blogs, from all over the Internet. I don't think it's an exaggeration to say that this is the scale of change that would become possible if video [technologies] were totally free online, like text and images."

Today, Dale works toward realizing that vision as part of an effort by the Wikimedia Foundation, which launched and operates Wikipedia, to create video companions to the online encyclopedia's text entries. The idea is that you'll be able to search the Web for snippets of video, import them into a Wikipedia article, and keep track of edits--all using open technologies that don't require video plug-ins or software purchases. One hope is that Wikipedia, as the world's seventh-largest website, will help drive video openness generally, says Chris Blizzard, director of technical evangelism at Mozilla, which is supporting the project. But the larger point is that efforts like these will make it far easier for anyone to innovate with video and for anyone else on the Web to enjoy those innovations. The results are impossible to predict, except through the example of what the open Web has provided so far. "Nobody is going to tell you they want something before it emerges," Blizzard says. "Rather, the experience of the Web is: 'Holy cow, I can do this other thing now!' Open standards create low friction. Low friction creates innovation. Innovation makes people want to pick it up and use it. But it's not something where we can guess what 'it' is. We just create the environment that lets 'it' emerge."

Let's Go Crazy
YouTube has helped make video a mainstay of the Web, thanks largely to its simplicity and user-friendliness. Anyone can open a YouTube account and upload videos, and anyone who visits YouTube can easily find and watch videos, all free. It has become the world's third-most-popular website, with 41 percent of the video-hosting market. A recent analyst report by Credit Suisse predicts that YouTube will serve up an astonishing 75 billion video streams this year, to 375 million users. And every minute, YouTube's burgeoning servers slurp up 20 hours' worth of newly uploaded user videos, says the company's director of product management, Hunter Walk. Susan Boyle, the Scottish songstress phenom? The latest footage from Tehran's street protests? Bulldogs on skateboards? Your cousin's baby video? It's all there, available in a few clicks.

And as YouTube grows and adds features, it continues to stress simplicity and user satisfaction (much in the spirit of its current owner, Google, which bought YouTube in 2006 in a deal worth $1.65 billion). Among other features, it has introduced ways for users to add elements such as captioning to their videos, build on their social networks (by automatically alerting Twitter followers when they upload new videos, for example), and annotate videos with computer-readable tags to improve search results. Other new tools can help businesses manage their YouTube-hosted videos and learn who is watching them. "YouTube represents a unique place in the video ecosystem; the breadth, depth, and freshness of content is unparalleled," Walk says. "The best years are ahead of us." In 2009, uploads of videos from mobile devices are up 1,700 percent--400 percent just since the release of the new iPhone 3G, he says. And the only obvious price of such service is exposure to advertising.

Internet video is thriving in other respects, too. Not just YouTube but Apple TV, Windows Media Center, Hulu, and more are making it possible for computers and mobile devices to deliver programming normally associated with television. (YouTube, for example, in a bid for growth and revenue, is offering premium channels with short-form content from entertainment titans Disney, ABC, and ESPN.) Boxee, a New York City startup, is bringing things full circle with a browser that enables you to play any media available over the Internet on your TV screen; the interface is designed for easy use from across the living room.

Against this backdrop, there initially seems little to dislike about YouTube. But its sheer size makes it an easy and tempting target for filtering by national governments (Iran, for one, has done just that). The result is that video can, in some contexts, be censored more effectively than other forms of Web content. Similarly, YouTube is a convenient target for legal action by media companies trying to protect copyright, sometimes in ways that overstep the bounds of common sense. Two years ago Stephanie Lenz, a Pennsylvania mother, got an e-mail from YouTube announcing that it had taken down her shaky 29-second movie of her toddler son, Holden, giggling and dancing as the Prince hit "Let's Go Crazy" played distortedly in the background. YouTube explained that it was responding to a request from the Universal Music Publishing Group, which owns the rights to the song. She argued that her video represented "fair use," and it was reposted. But she decided to sue Universal Music, claiming that it was abusing the Digital Millennium Copyright Act. (Universal Music later said it had issued thousands of so-called takedown notices on behalf of Prince alone; the Artist himself is fanatical on the subject.)

While artists have every right to thwart wholesale copying, such crackdowns on incidental, noncommercial use--which is generally quite legal--can inflict collateral damage on innovation. "When people make their own mixes of existing material and YouTube takes that down, this is a huge inhibitor to this kind of commonplace creativity that the Web enables," says Abigail De Kosnik, a professor of new media at the University of California, Berkeley, and author of Illegitimate Media: Minority Discourse and the Censorship of Digital Remix Culture. "What people need to realize is that too much of those kinds of protections and [technology] restrictions--and right now, without open video, we have too much of both--inhibits new genres from emerging."

Finally, the need to generate revenue is driving YouTube further toward a centralized, television-like model, with advertising-supported premium content. In short, while it's never been easier for the average Internet user to find and consume video online, you can't easily adapt or reuse what you find in the vast body of video out there. "The video box that you see on YouTube is a whole bunch of different formats inside this plug-in that isn't manipulable, transformable, or remixable in the way that everything else on the Web is," says Mark Surman, executive director of Mozilla. You can't even download videos you play on YouTube--at least not without help from third-party websites or software.

YouTube sees little need to add features such as downloading tools. "We haven't gotten enough feedback that we need downloads," Nikhil Chandhok, a YouTube senior product manager, said at a recent conference in New York City. "You are mostly connected all the time ... and can access any YouTube video you want." Even if you do go to the trouble of using third-party services to download videos, if you want to do anything creative with those videos, your work has just begun. You will need to convert various formats, buy video editing tools, and learn to use them. (Walk did not want to talk about open standards, except to say that the company has "been about a lot of kinds of openness early" in terms of expanding access to video itself. Of course, making video easier to work with outside of YouTube would tend to threaten YouTube's dominance.)

Open archives: Wikipedia’s new video-collaboration effort will allow editors to mine open-source archives for content, including congressional footage from Metavid.org and diverse collections held by the Internet Archive; its holdings range from Iraq War news coverage to dating-advice videos from the late 1940s and 1950s.
Credit: www.metavid.org and www.archive.org

Wikivideo
If YouTube is the epicenter of the Web's video revolution, Wikipedia is the epicenter of online collaboration. In the eight years since its founding, it has grown to become not just the dominant online reference but an increasingly important source of real-time news, with more than 13 million frequently updated entries, including 3 million in English. But these two hubs of free, user-generated content operate as if in separate universes. Wikipedia, which makes it easy to alter content, offers few videos to play (though about 3,000 videos can be found scattered around the site). YouTube, with millions of videos available, offers few options for editing or innovating with them. Generally, each site's best qualities as an information resource are all but absent from the other.

But that could change as Wikipedia strives to add features that permit effortless open-source video editing and remixing. Michael Dale, the former Santa Cruz student, is leading the effort at Wikimedia under the sponsorship of Kaltura, a startup with offices in New York City and Israel. Kaltura is developing open-source technologies for playing, editing, and uploading videos. A major benefit of open video is that the video itself can be extracted from the player, just as an image can be extracted from a website when you right-click it. With the new version of HTML technology, HTML 5, an open-source player is included in the browser--no plug-ins required. Mozilla's newly released Firefox 3.5 browser, Apple's Safari, and Google's Chrome (see "An OS for the Cloud," p. 86) all support this feature, though Safari requires a plug-in to support a specific open video format, called Ogg Theora, that Wikipedia is using. And if history is any guide, these advances by competitors may goad Microsoft to follow suit with improvements to Internet Explorer. "Right now, when you post a Flash video, you are posting the video and also a plug-in player, and that can make it difficult to access the video file itself," says Dale. "Once video is just another asset on the Web and something browsers can natively deal with, we can pull audio, video, images, and text from anywhere on the Internet and do the kinds of sharing and editing and remixing that you want to do, all in the open Web platform." Can Wikipedia really change the way everyone uses video? "When Wiki started, people said it wouldn't work, but it worked," says Kaltura cofounder Ron Yekutiel. "The next question is: Why should it stop at simple media?"
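Dale's point that open video becomes "just another asset on the Web" can be made concrete: once a video is declared in HTML 5 markup rather than wrapped in a plug-in, its URL can be read out of a page as easily as an image's. A sketch using Python's standard html.parser; the sample page and the Ogg Theora URL are invented for illustration.

```python
# Sketch: extract video URLs from an HTML 5 page. With the open <video>
# element, the file's address sits in plain markup instead of inside an
# opaque plug-in, so any tool can find, download, or remix it. The page
# and the .ogv URL below are invented examples.

from html.parser import HTMLParser

PAGE = '<p>Floor speech:</p><video src="http://example.org/speech.ogv" controls></video>'

class VideoFinder(HTMLParser):
    """Collect the src attribute of every <video> tag on a page."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "video":
            self.sources.extend(v for k, v in attrs if k == "src")

finder = VideoFinder()
finder.feed(PAGE)
print(finder.sources)  # ['http://example.org/speech.ogv']
```

The same few lines would fail against a Flash embed, where the video's location is negotiated inside the plug-in rather than exposed in the page, which is exactly the contrast Dale draws.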

The results should start to become visible this fall. If you are editing a Wikipedia entry, you will find an "Add media" button. Clicking it will bring up an interface that will, initially, allow you to search through three repositories of free licensed multimedia files. One is Metavid, the congressional archive started by Dale and Stern. Another is the Internet Archive, the San Francisco-based digital library most famous for archiving old Web pages; it also holds hundreds of thousands of old interviews, documentaries, and films contributed from various sources. The third is Wikimedia Commons, a multimedia repository operated by the Wikimedia Foundation itself.

Some observers think Wikipedia's foray into multimedia will help move the entire Web toward open video standards. "To make video part of the fabric of Wikipedia will provide incentives to [video] producers to get their stuff out there and indexed," says Jonathan Zittrain, a Harvard Law School professor and cofounder of the Berkman Center for Internet and Society at Harvard University. Producers who want their videos excerpted and linked on a Wikipedia page--drawing more traffic to their own websites--will not just have to put much less restrictive licenses on the material; they'll also have to accept open standards rather than proprietary ones. "With no business model yet gelled, this is just the right time for Wikipedia to be experimenting, and possibly leading, the development of open tools and content for video," Zittrain says.

Jimmy Wales, Wikipedia's founder, sees the effort as the next logical advance in Web technology. "Today any computer programmer in the world can launch a website and have full-strength tools for creating new things," he says. But he points out that this is not yet true for video. No collaborative video editing process is available to all Web users. "It's a process that's a lot harder to do if all I can do is download a 60-minute video to my computer, open up some [proprietary] software to edit the video, then upload it," Wales says. "There's no easy way for other people to give direct feedback. The record of the edits isn't there. And if someone else wants to change it, they have to redo all the work on their computer."

Wikipedia's effort to promote open video standards isn't the only one; the YouTube competitor Dailymotion, for example, is making 300,000 videos available in the Theora format. But whatever the catalyst, wide acceptance of such standards could have important implications even for people who don't want to make their own video remixes. In particular, it could drive broader and faster advances in video search. Consider Blinkx, which has indexed 35 million hours' worth of videos and devised a variety of ways to search them, from simple means--metadata, or computer-readable tags that literally describe what's in a video--to advanced techniques involving speech analysis and facial recognition. One method devised by Blinkx allows searchers to draw a box around a face in a video, click it, and then search the Web for other videos containing that face. But for that trick to work with all Web videos, Blinkx must rebuild the interface code to accommodate each of a handful of dominant video formats and 80 lesser-used ones. "If open video works, then all the people doing these kinds of innovations within individual video formats--they can all talk to each other," says Suranga Chandratillake, the company's founder and CEO. "It means innovation isn't split into separate groups in separate formats. Today the video Web is written in tens of languages, causing all the usual barriers when you want to switch from one to the next. With a dominant open format, everything will link to everything else; viewers will be able to freely watch content and jump between relevant clips."

And on the copyright front, Creative Commons, the nonprofit organization that has provided usage licenses for 250 million copyrighted works, is helping to clarify which existing video works can and can't be reused. "Open licensing is a crucial part of this fairly multilayered ecosystem that will make open video take off," says Mike Linksvayer, Creative Commons' vice president. "If the video itself, and the components of the video, like music, aren't actually openly licensed, then each of the other layers is hindered."

Lately, Mozilla's Blizzard and Surman have been showing off something a Mozilla developer cooked up with open-source video tools. In their video, the two men walk in and out of the camera's field of view. A thought bubble dances over each head (tracking their movements thanks to face recognition software); inside each bubble, their real-time Twitter feeds are displayed. This was all done with Theora, HTML 5, and other new standards, Blizzard says. While such a stunt could be performed with proprietary software, it wouldn't be so easy--or so easily shared. "This is what we mean when we talk about taking video out of the plug-in prison and allowing people to create things," he says. The goal isn't to make any one application possible but to bring about the next Internet revolution--one whose specific form is hard to foresee, except that it's likely to be televised.

And webcast.

David Talbot is Technology Review's chief correspondent.



http://www.technologyreview.com/web/23173/?a=f

Ultracaps Could Boost Hybrid Efficiency

Recent studies point to the potential of ultracapacitors to augment conventional batteries.

By Kevin Bullis


Energy storage devices called ultracapacitors could lower the cost of the battery packs in plug-in hybrid vehicles by hundreds or even thousands of dollars by cutting the size of the packs in half, according to estimates by researchers at Argonne National Laboratory in Argonne, IL. Ultracapacitors could also dramatically improve the efficiency of another class of hybrid vehicle that uses small electric motors, called microhybrids, according to a recent study from the University of California, Davis.

Rugged power: This ultracapacitor is being used to capture energy from the brakes in hybrid buses. Similar devices could soon be used to reduce the cost of hybrid cars.
Credit: Maxwell Technologies

The use of ultracapacitors in hybrids isn't a new idea. But the falling cost of making these devices and improvements to the electronics needed to regulate their power output and coordinate their interaction with batteries could soon make them more practical, says Theodore Bohn, a researcher at Argonne's Advanced Powertrain Research Facility.

Although batteries have improved significantly in recent years, the cost of making them is the main reason why hybrids cost thousands of dollars more than conventional vehicles. This is especially true of plug-in hybrids, which rely on large battery packs to supply all or most of the power during short trips. Battery packs are expensive in part because they degrade over time; to compensate, automakers oversize them to ensure that they can provide enough power even after 10 years of use in a vehicle.

Ultracapacitors offer a way to extend the life of a hybrid vehicle's power source, reducing the need to oversize its battery packs. Unlike batteries, ultracapacitors don't rely on chemical reactions to store energy, and they don't degrade significantly over the life of a car, even when they are charged and discharged in very intense bursts that can damage batteries. The drawback is that they store much less energy than batteries--typically, an order of magnitude less. If, however, ultracapacitors were paired with batteries, they could protect batteries from intense bursts of power, Bohn says, such as those needed for acceleration, thereby extending the life of the batteries. Ultracapacitors could also ensure that the car can accelerate just as well at the end of its life as at the beginning.

Reducing the size of a vehicle's battery pack by 25 percent could save about $2,500, Bohn estimates. The ultracapacitors and electronics needed to coordinate them with the batteries could cost between $500 and $1,000, for a net saving of $1,500 to $2,000.
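Bohn's arithmetic, using the figures he gives, works out as follows:

```python
# Net saving implied by the estimates quoted above: a 25 percent smaller
# battery pack saves about $2,500, while the ultracapacitors and their
# coordinating electronics cost $500 to $1,000.

battery_saving = 2500                                # 25% smaller pack
ultracap_cost_low, ultracap_cost_high = 500, 1000    # added hardware

best_case = battery_saving - ultracap_cost_low
worst_case = battery_saving - ultracap_cost_high
print(f"net saving per vehicle: ${worst_case} to ${best_case}")
# net saving per vehicle: $1500 to $2000
```

Multiplied across a production run, even the worst case is a substantial incentive, which is why the durability claims for ultracapacitors matter so much to automakers.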

Ultracapacitors would also make it possible to redesign batteries to hold more energy. There is typically a tradeoff between how fast batteries can be charged and discharged and how much total energy they can store. That's true in part because designing a battery to discharge quickly requires using very thin electrodes stacked in many layers. Each layer must be separated by supporting materials that take up space in the battery but don't store any energy. The more layers used, the more supporting materials are needed and the less energy can be stored in the battery. Paired with ultracapacitors, batteries wouldn't need to deliver bursts of power and so could be made with just a few layers of very thick electrodes, reducing the amount of supporting material needed. That could make it possible to store twice as much energy in the same space, Bohn says.

Ultracapacitors could also be useful in a very different type of hybrid vehicle called a microhybrid, says Andrew Burke, a research engineer at the Institute of Transportation Studies at UC Davis. As designed today, these vehicles use small electric motors and batteries to augment a gasoline engine, allowing the engine to switch off every time the car comes to a stop and restart when the driver hits the accelerator. A microhybrid's batteries can also capture a small part of the energy that is typically wasted as heat during braking. Because ultracapacitors can quickly charge and discharge without being damaged, it would be possible to design microhybrids to make much greater use of an electric motor, providing short bursts of power whenever needed for acceleration. They could also collect more energy from braking. According to computer simulations performed by Burke, such a system would improve the efficiency of a conventional engine by 40 percent during city driving. Conventional microhybrids only improve efficiency by 10 to 20 percent.

In both plug-in hybrids and microhybrids, ultracapacitors would offer improved cold weather performance, since they do not rely on chemical reactions that slow down in the cold. "In very cold weather, you have to heat the battery, or you can't drive very fast--you'd have very low acceleration," Bohn says. In contrast, ultracapacitors could provide fast acceleration even in cold temperatures.

Mark Verbrugge, director of the materials and processes lab at GM, says that of the two uses for ultracapacitors, it will be easier to use them in microhybrids. In this case, he says, ultracapacitors would simply replace batteries, since they store enough energy to augment the gasoline engine without the help of batteries. In plug-in hybrids, which require much more energy, ultracapacitors would need to be paired with batteries, and this would require complex electronics to coordinate between the two energy storage devices. "By and large, you never want to add parts to a car," he says. "You want the simplest system possible" so that there are fewer things to go wrong.

For ultracapacitors to be practical in microhybrids, Verbrugge says, the cost of making them has to decrease by about half, which may be possible because many parts of the manufacturing process for large ultracapacitors aren't yet automated. But to justify the added complexity in plug-in hybrids, he says, the entire system would have to cost significantly less than using batteries alone.

The researchers at Argonne have already taken steps toward proving that ultracapacitors can provide these savings, having shown that they reduce the heat stress placed on batteries by a third. They are continuing to test ultracapacitors to demonstrate that they can make batteries last longer, which would allow automakers to use smaller batteries and save money.


http://www.technologyreview.com/energy/23289/

Reprogrammed Human Cells Shed Light on Rare Disease

A new study uses induced pluripotent stem cells to investigate a neurological disease and test drugs.

By Courtney Humphries


Stem cells generated from patients with a rare neurological disorder are helping scientists dissect the underlying mechanism of the disease and test several candidate drugs. The study, published today in Nature, is the realization of one of the major goals in stem cell research: using induced pluripotent stem (iPS) cells--stem cells derived from reprogrammed adult cells--to study the effects of disease in a patient's own cells, which are otherwise impossible to access.

Mirroring disease: By taking skin cells from patients with a rare neurological disease, researchers were able to create stem cells that could be turned into the specific neurons (shown here labeled with red and blue markers) affected by the disease.
Credit: Gabsang Lee and Lorenz Studer.

The work is "a blueprint for the future of using stem cells to study and treat neurological disease," says Jeanne Loring, director of the Center for Regenerative Medicine at the Scripps Research Institute. (Loring was not involved in the study.)

The idea is simple: Take skin cells from patients with a particular disease, turn those cells into stem cells, direct those stem cells to become a cell type of interest--for instance, the dopamine-releasing neurons that are affected by Parkinson's disease--and see how those cells behave and react to different drugs. A spate of recent papers has demonstrated the development of disease-specific stem cells for conditions such as Down syndrome, amyotrophic lateral sclerosis, spinal muscular atrophy, and Parkinson's disease. The new study is the first to use cells derived from iPS cells to test drugs for their effect against a disease.

Lorenz Studer, lead author of the paper and a developmental biologist at the Sloan-Kettering Institute in New York, and his team focused on a rare disease called familial dysautonomia (FD), which affects neurons that control functions such as the sensation of touch, blood pressure and tear flow. Symptoms usually arise early in life, sometimes from birth, and include a lack of muscle tone and reflex control, problems sensing pain, high blood pressure and difficulty breathing. The disease is caused by a known genetic mutation, but scientists have been unable to create an animal model of the disease.

The researchers obtained skin cells from patients with FD and created lines of iPS cells by inserting four genes into the cells using viruses. This reprogrammed the cells to behave like embryonic stem cells, which can give rise to all cell types. The researchers then directed those undifferentiated cells to turn into specific cell types, including the neural crest cells that give rise to the neurons affected by FD.

FD patients are known to have a mutation in a gene that encodes a protein called IKAP. The mutation causes part of the genetic sequence to be skipped when the gene's RNA is spliced; however, the genetic defect only affects certain tissues, for reasons not known. To understand more about the disease, the researchers looked for the normal and mutated protein in the different cell types. Studer says they expected that the neural cells would have more of the abnormal protein. But in fact, they found that the faulty splicing happened at equal rates in the different types of cells. Levels of normal IKAP were much lower in the neural cells, which may be why the disease strikes these cells.

The team also found that the cells were defective in their ability to differentiate into neurons and did not migrate as easily as normal cells in a culture dish. The researchers used these differences to measure the effect of three drugs that had been proposed as candidate treatments for FD. One of them, kinetin, a natural plant hormone often used as an antiwrinkle ingredient in cosmetics, showed promise in treating the cells. When cells were treated with the drug, Studer says, "it led to near complete reversal of the splicing defect." Further treatment reversed the defect in differentiation, although it did not affect the cells' ability to migrate.

Susan Slaugenhaupt, a neurologist at Massachusetts General Hospital who studies FD, says that these cells help alleviate a long-standing frustration in studying neurological disease. This technology provides "the ability to examine disease-relevant cell types from patients" for the first time, she says. "You can't get brains from patients and look at these cell types." Slaugenhaupt is now collaborating with the research team to further test drugs for FD using this model.

Slaugenhaupt adds that this study is the first to show that kinetin can improve disease in neural cells and that it "provides the best evidence to date that long-term treatment with kinetin may be beneficial to FD patients." Clinical trials of the drug are scheduled to start soon.


http://www.technologyreview.com/biomedicine/23288/


TR35 2009 Young Innovator

Kevin Fu, 33

University of Massachusetts, Amherst

Defeating would-be hackers of radio frequency chips in objects from credit cards to pacemakers

Stepping back: Kevin Fu takes the point of view of a malevolent hacker to uncover dangerous security flaws in wireless devices.
Credit: Steve Moors

Could implanted medical devices that use wireless communication, such as pacemakers, be maliciously hacked to threaten patients' lives? Kevin Fu is no stranger to such overblown scenarios inspired by his research, though he prefers to stick to talking about technical details. But Fu, a software engineer and assistant professor of computer science, is a security guy. And security people think differently.

"Anyone who works in the world of security--they always have an adversary in mind," Fu explains, sitting behind his desk on the second floor of the UMass Amherst computer science building. "That's how you can best design your systems to defend against it."

The threats Fu researches are chiefly those connected to the security of radio frequency identification, or RFID. RFID is an increasingly common technology, used in everything from tags for shipping containers to electronic key cards, from ExxonMobil's Speedpass key-chain wands to Chase's no-swipe "Blink" credit cards. It allows billing and personal information to be shared quickly and wirelessly. But not, Fu realized back in 2006, very securely.

After testing more than 20 such "smart" or no-swipe credit cards from MasterCard, Visa, and American Express, Fu and his colleagues found that they could lift account numbers and expiration dates from several of the cards--even cards inside a wallet--just by walking past them with a homemade scanner.

Criminals troll mailboxes, shopping malls, and airports, harvesting nearby RFID information for use in identity-theft scams. Basically, they pick your pocket without ever touching your pocket. Making these cards truly secure would require good encryption software--Fu's specialty. But encryption requires a steady supply of energy, something that the passive, externally powered RFID chips used in these applications don't have. "The inspiration was about the programming," Fu explains. "But the programming won't work without an RFID computer to program. And the RFID computer won't work without solving the energy issues." He breaks a weary smile. "So, thus far, it's been something like a two-year sideline."

The only way for Fu to resolve this catch-22 is to invent new technology--a project he's working on with a team led by Wayne Burleson, a professor of electrical and computer engineering. But even as he wrestled with this problem, Fu found himself wondering, as only a security guy can: if financial information is vulnerable, what about seemingly more obscure targets with far bigger consequences?

This is what first brought him to the heart-attack machine.

At his desk, Fu clicks through a PowerPoint slide show of bad-guy examples, from the madman who put cyanide-laced Tylenol on Chicago drugstore shelves in 1982 to the hacker who posted seizure-inducing animations on an Internet message board for epileptics.

"It might seem paranoid," Fu admits, "but from a security standpoint, you need to start with the fact that bad people do exist." And there seemed no better place to hunt such misanthropes than the world of medicine.

Fu began wondering about the security of medical devices that use RF communication, such as pacemakers and defibrillators. He discussed the problem with his longtime colleague Tadayoshi Kohno, assistant professor of computer science and engineering at the University of Washington and a veteran investigator into the vulnerabilities of computer networks and voting machines (see TR35, September/October 2007).

"Kevin is a fantastic researcher," Kohno says. "His research is now covered in almost every undergraduate computer-security course that I know of. And his insights are exceptionally deep." Together, Fu and Kohno took their questions about defibrillators far from the computer science lab--into the world of cardiologist William H. Maisel, director of the Medical Device Safety Institute at Boston's Beth Israel Deaconess Medical Center.

The two explained to Maisel's wide-eyed staff how security people think. In turn, the medical professionals introduced the security researchers to Cardiology 101--starting with pacemakers and defibrillators, devices that are implanted in some half-million people around the world every year. Basically, a pacemaker regulates aberrant heartbeats with gentle metronomic pulses of electricity, while a defibrillator provides a big shock to "reboot" a failing heart. Merged, they form an implantable cardioverter defibrillator, or ICD. The ICD is designed to stop a heart attack in a cardiac patient. But, Fu and Kohno wondered, could it create one instead?

In his UMass office, Fu pulls out a shoebox containing the works of an ICD. It looks the way the Tin Man's heart might: padlock-sized and encased in hard, silvery surgical steel, now peeled away can-opener style. I instinctively reach in, drawn like a magpie to the shiny objects. Fu quickly jerks the box away. "Um, you don't want to touch that," he says. "The coil in these things delivers 700 volts"--enough juice to stop your heart.

He points out the matchbook-sized microchip and antenna coil--technology that connects the latest-generation ICDs with the Internet, allowing doctors to reprogram a device without surgery. From the perspective of cardiologists and patients, this wireless programming is a godsend. But from Fu's viewpoint, it represents a new security risk. And so he wondered: Could black-hat hackers listen in on the wireless communication between an ICD and its programming computer? Could they make sense of what they heard and use it to inflict harm?

"Most people who make these devices don't think like this," Fu says. "But this is how the adversary thinks. He doesn't play your game; he makes his own game." To assess the security threat, the researchers needed to play the hacker's game.

Catching bugs: By exposing ways for wireless devices to be hacked, Fu has alerted manufacturers to the potential dangers that their customers face. He found that implanted cardiac devices are particularly vulnerable.
Credit: Steve Moors

Fu's team set out to create a technique to eavesdrop on defibrillator chatter. The hardware was just off-the-shelf stuff--a platform designed to allow researchers and serious hobbyists to build their own software radios. It has been made into FM radios, GPS receivers, digital television decoders--and RFID readers. All that was left was to write the software, rip the antenna coil out of an old pacemaker, solder it into the radio--and voilà, they had a transmitter.

"It worked pretty well--amazingly well," Fu says. After "nine months of blood and sweat," they could intercept digital bits from an ICD--but they had no idea what those bits meant. His students trudged back to the lab to figure out how to interpret them. Using differential analysis--basically, changing one letter of a patient's name and then listening to how the corresponding radio transmission changed--they were able to painstakingly build up a code book.
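The differential-analysis idea can be sketched in a few lines. The encoding below is entirely hypothetical (a fake frame with a header and a checksum-like byte, standing in for the actual ICD protocol, which the article does not describe): program the device twice with inputs that differ in one character, then diff the captured transmissions to locate which bytes carry that field.

```python
# Sketch of differential analysis against an unknown radio protocol.
# The frame format here is invented for illustration, not the real one.

def capture(patient_name: str) -> bytes:
    """Stand-in for recording a programming transmission: a fixed
    two-byte header, the name, and a trailing checksum-like byte."""
    payload = patient_name.encode("ascii")
    return b"\xAA\x55" + payload + bytes([sum(payload) % 256])

def diff_positions(a: bytes, b: bytes) -> list[int]:
    """Byte offsets where two captured frames differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

base = capture("JOHN DOE")
variant = capture("JOHN DOF")   # change one letter of the name
changed = diff_positions(base, variant)
print(changed)  # → [9, 10]: the changed name byte, plus the checksum
```

Offsets that move in lockstep with a known input reveal where that input is encoded; repeating the experiment across many fields is how a code book gets built up.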

Now their homemade software radio could listen in on and record ICD programming commands. The device could also rebroadcast those recordings, as fresh commands, to any nearby ICD. It had become dangerously capable of playing doctor.

Fu discovered one set of commands that would keep an ICD in a constant "awake" state, surreptitiously draining the battery to devastating effect. "We did a back-of-the-envelope calculation on this," he explains. "A battery designed to last a couple years could be drained in a couple weeks. That alone was a notable risk."
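Fu's back-of-the-envelope math is easy to reproduce. The numbers below are illustrative assumptions (the article gives no actual ICD battery figures), but they show how a modest increase in current draw collapses years of battery life into weeks:

```python
# Back-of-the-envelope battery-drain estimate with assumed figures:
# a battery sized for a couple of years at a small quiescent draw
# dies in a couple of weeks if the radio is kept forcibly awake.

BATTERY_CAPACITY_MAH = 2000   # assumed usable capacity
QUIESCENT_DRAW_MA = 0.1       # assumed average draw in normal operation
AWAKE_DRAW_MA = 5.0           # assumed draw with the radio forced awake

normal_life_years = BATTERY_CAPACITY_MAH / QUIESCENT_DRAW_MA / 24 / 365
attacked_life_weeks = BATTERY_CAPACITY_MAH / AWAKE_DRAW_MA / 24 / 7

print(f"normal:   {normal_life_years:.1f} years")   # ~2.3 years
print(f"attacked: {attacked_life_weeks:.1f} weeks")  # ~2.4 weeks
```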

Even more notable, Fu's software radio was capable of completely reprogramming a patient's ICD while it was in his or her body. The researchers were able to instruct the device not to respond to a cardiac event, such as an abnormal heart rhythm or a heart attack. They also found a way to instruct the defibrillator to initiate its test sequence--effectively delivering 700 volts to the heart--whenever they wanted.

Fu doesn't like to think of himself as having built a heart-attack machine, or even of discovering that such a thing could be built. Though he is an academic who doesn't shy away from pursuing real-world applications for his theoretical technologies, that "real world" is usually at least 10 years in the future. But the ramifications of the ICD-programming radio were both immediate and chilling: the device could be easily miniaturized to the size of an iPhone and carried through a crowded mall or subway, sending its heart-attack command to random victims.

A heart-attack machine? Really? It would be foolish, Fu says, not to recognize that there are depraved people out there, more than capable of building and using such a machine to inflict harm on random innocents "just for kicks." To this extent, the issue of protecting remote programming access to ICDs is directly related to the issue of protecting RFIDs. Encrypting the communication is the only way to shield millions of people from random risks. It doesn't take a Fu to come up with practical solutions, but by exposing the security dangers he has provided a valuable, perhaps even life-saving, alert to manufacturers.

Fu is too smart to engage in speculation about how the technology could be abused, except to say that he'd be very surprised if there weren't "people already working on this." In the best case, we'll never know how foresighted he was; medical-device makers will eliminate the threat before hackers ever exploit it. "Kevin is a computer scientist who also has the ability to look at problems like a medical doctor and like a patient," says Maisel. "The work Kevin is doing now--relating to medical-device security and privacy--has the potential to impact millions of people."

How about the more dramatic scenarios? Imagine a spy agency using printed circuitry to put a heart-attack machine into a newspaper, delivered with morning coffee to a foreign leader with a pacemaker. Or a Lex Luthor-like supervillain who retrofits a radio tower to broadcast his death ray to entire populations.

Kevin Fu--professor, researcher, scientist--rolls his eyes. "All I can say about that one," he says with a laugh, "is it might make a pretty good movie." --Charles Graeber


http://www.technologyreview.com/TR35/Profile.aspx?trid=760


Thursday, August 20, 2009

Wi-Fi via White Spaces

A network design that uses old TV spectrum could produce better long-range wireless connectivity.

By Erica Naone


Long-range, low-cost wireless Internet could soon be delivered using radio spectrum once reserved for use by TV stations. The blueprints for a computer network that uses "white spaces," which are empty fragments of the spectrum scattered between used frequencies, will be presented today at ACM SIGCOMM 2009, a communications conference held in Barcelona, Spain.

White spaces: Accessing the Internet over unused portions of TV spectrum could provide good long-range connectivity in rural areas, and help fill in gaps in city networks. Microsoft researchers tested a new protocol, called White Fi, using the device shown here.
Credit: Microsoft Research

TV stations have traditionally broadcast over lower frequencies, which carry information over longer distances. However, with the ongoing transition from analog to digital broadcasts, more unused frequencies are opening up than ever.

By tapping into these lower frequencies, it should be easier to provide broadband Internet access in rural areas and fill in gaps in city Wi-Fi networks. For example, the spectrum between 512 megahertz and 698 megahertz, which was originally allotted to analog TV channels from 21 to 51, offers a longer range than conventional Wi-Fi, which operates at 2.4 gigahertz. "Imagine the potential if you could connect to your home [Internet] router from up to a mile," says Ranveer Chandra, a member of the Networking Research Group at Microsoft Research behind the project.
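The spectrum figures above check out with a quick calculation: US TV channels are 6 MHz wide, so channels 21 through 51 account exactly for the 512-698 MHz band.

```python
# Sanity check on the article's spectrum arithmetic: 31 channels
# (21 through 51 inclusive), each 6 MHz wide, span 186 MHz.

CHANNEL_WIDTH_MHZ = 6
channels = range(21, 52)          # channels 21 through 51 inclusive
span_mhz = len(channels) * CHANNEL_WIDTH_MHZ
print(span_mhz)  # → 186, which equals 698 - 512
```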

The FCC ruled last November that companies could build devices that transmit over white spaces but also gave strict requirements that this should not interfere with existing broadcasts, both from TV stations and from other wireless devices that operate within the same spectrum. Chandra and his colleagues designed a set of protocols, which they call "White Fi," to successfully navigate the tricky regulatory and technical obstacles involved with using white spaces.

"It's a totally different paradigm for wireless networking," says Chandra. "Until now, in wireless networks, you were given a spectrum, and you would share it with everyone else. Everyone was an equal stakeholder. Now, you have this spectrum where there are certain people who are primary users."

One of the main obstacles for Chandra's group was dealing with a network of different devices; in the past, work focused on sending and receiving signals between individual devices over white spaces.

Setting up a group of devices to communicate over white-space frequencies is a more complicated proposition, because white-space devices have to find available spectrum, which can change depending on where and when a device is operating. The researchers designed a system consisting of a wireless access point, like the router used in Wi-Fi networks, and the mobile devices communicating with it.

White Fi is designed so that each device measures the spectrum conditions around it and works with the others to find available frequencies. Because interference can happen at any time, the system can move to a different slice of spectrum if need be.

Some of the group's challenges stemmed from the undefined nature of white-space frequencies. The researchers designed their algorithms to determine the ideal amount of frequency bandwidth to use for a broadcast, balancing the desire for a strong signal against the possibility of interference with neighboring frequencies. They also had to design a way for mobile devices to find a signal from an access point.
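That bandwidth trade-off can be caricatured in a few lines. The probability model and numbers below are illustrative assumptions, not White Fi's actual algorithm: a wider channel carries proportionally more data, but every extra megahertz claimed adds another chance of overlapping a primary user, so expected usable throughput peaks at a middle width.

```python
# Toy model of the width-vs-interference trade-off (illustrative only,
# not the White Fi algorithm): throughput grows linearly with width,
# while the chance the whole channel stays clear shrinks geometrically.

def expected_throughput(width_mhz: float, p_busy_per_mhz: float) -> float:
    p_whole_channel_clear = (1 - p_busy_per_mhz) ** width_mhz
    return width_mhz * p_whole_channel_clear

candidates = [5, 10, 20, 40]
best = max(candidates, key=lambda w: expected_throughput(w, 0.05))
print(best)  # → 20: beyond this, interference risk outweighs extra width
```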

One of the most important parts of the White Fi system is a protocol for dealing with collisions among different signals (particularly those from wireless microphones, which can turn on at any time). Even a single packet of interference is enough to produce audible disruptions for a microphone. Even if interference affects only one device on the network, strict regulations forbid all devices on the network from using that channel. The researchers got around this by designing the access point so that it maintains a backup channel. If another user is detected, the white-space device or access point immediately switches to the backup channel, and the access point reassigns bandwidth use as needed.
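The backup-channel behavior can be sketched as follows; the class and method names are invented for illustration, and the real system involves coordination among many devices rather than a single object.

```python
# Minimal sketch of the backup-channel idea: the access point always
# keeps one spare channel, and the network hops to it the moment a
# primary user (e.g. a wireless microphone) appears on the current one.

class WhiteSpaceAP:
    def __init__(self, free_channels):
        self.free = list(free_channels)   # channels sensed as available
        self.current = self.free.pop(0)
        self.backup = self.free.pop(0)

    def on_primary_user(self):
        """Vacate the current channel immediately; promote the backup
        and pick a new backup from whatever spectrum remains free."""
        self.current = self.backup
        self.backup = self.free.pop(0) if self.free else None
        return self.current

ap = WhiteSpaceAP([21, 30, 44])
ap.on_primary_user()          # a microphone turns on in channel 21
print(ap.current, ap.backup)  # → 30 44
```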

Peter Steenkiste, a professor of computer science at Carnegie Mellon University who specializes in networking, says that previous work on white spaces has focused on addressing one problem at a time. "The thing that I think is very interesting about this paper is that it really has looked at how you put a complete system together," he says.

Steenkiste adds that "there are a lot of practical issues that they've worried about." In particular, he says, the researchers did not assume an ideal, controlled environment for their system. Rather, they took into account such problems as measurement "noise" and the unpredictable behavior of wireless microphones. "[The research] has an answer for every question," Steenkiste says.

Chandra says that his group recently received an experimental license from the FCC that allows them to build a prototype White Fi system on the Microsoft Research Campus in Redmond, WA. They plan to send their findings to the FCC in the hope that the data will help determine future white-space regulations. Chandra notes that since the transition from analog to digital television is happening worldwide, there is a high level of international interest in US white-space experiments. Researchers and companies all over the world are looking for technologies to take advantage of the fragments of spectrum that will open up in the coming years, he says.



http://www.technologyreview.com/communications/23271/?a=f