This is the future of computing!

By Adam L. Penenberg, FastCompany.com
January 31, 2007 11:04 IST

Jefferson Han, a pale, bespectacled engineer dressed in Manhattan black, faced the thousand or so attendees on the first day of TED 2006, the annual technology, entertainment, and design conference in Monterey, California.

The 30-year-old was little more than a curiosity at the confab, where, as its ad copy goes, "the world's leading thinkers and doers gather to find inspiration." And on that day, the thinkers and doers included Google gazillionaires Sergey Brin and Larry Page, Amazon founder Jeff Bezos, and Bill Joy, who cofounded Sun Microsystems. Titans of technology. It was enough to make anyone feel a bit small.

Then Han began his presentation.

His fingertips splayed, he placed them on the cobalt blue 36-inch-wide display before him and traced playful, wavy lines that were projected onto a giant screen at his back. He conjured up a lava lamp and sculpted floating blobs that changed color and shape based on how hard he pressed. ("Google should have something like this in their lobby," he joked.)

With the crowd beginning to stir, he called up some vacation photos, manipulating them on the monitor as if they were actual prints on a tabletop. He expanded and shrank each image by pulling his two index fingers apart or bringing them together. A few oohs and aahs bubbled up from the floor.

Suppressing a smile, Han told the assembled brain trust that he rejects the idea that "we are going to introduce a whole new generation of people to computing with the standard keyboard, mouse, and Windows pointer interface."

Scattering and collecting photos like so many playing cards, he added, "This is really the way we should be interacting with the machines." Applause rippled through the room. Someone whistled. Han began to feel a little bigger.

But he was far from finished. Han pulled up a two-dimensional keyboard that floated slowly across the screen. "There is no reason in this day and age that we should be conforming to a physical device," he said. "These interfaces should start conforming to us."

He tapped the screen to produce dozens of fuzzy white balls, which bounced around a playing field he defined with a wave of the hand. A flick of a finger pulled down a mountainous landscape derived from satellite data, and Han began flying through it, using his fingertips to swoop down from a global perspective to a continental one, until finally he was zipping through narrow slot canyons like someone on an Xbox.

He rotated his hands like a clock's, tilting the entire field of view on its axis--an F-16 in a barrel roll. He ended his nine-minute presentation by drawing a puppet, which he made dance with two fingers.

He basked in the rock-star applause. This is the best kind of affirmation, he thought. The moment you live for.

Six months later, after TED posted the video on its Web site, the blogosphere got wind of Han's presentation. Word spread virally through thousands of bloggers, who either posted the video on their sites or pointed to it on YouTube, where it was downloaded a quarter of a million times. "Uaaaaaaaaaaaaaahhhhhhhhhhwwwwwwwwwwwllllllllll I want one!!!" whined one YouTuber. "Just tell me where to buy one," said another. "Holy s--t. This is the future," cried a third. Han's presentation became one of YouTube's most popular tech videos of all time.

In this Googly age, it only takes a random genius or two to conceive of a technology so powerful that it can plow under the landscape and remake it in its own image. People are already betting that Jeff Han is one of them.

For as long as he can remember, Han, a research scientist working out of New York University's Courant Institute, has been fascinated by technology. He even doodles in right angles, rectangles, and squares--hieroglyphs that look almost like circuitry, a schematic of his unconscious.

The son of middle-class Korean immigrants who came to America in the 1970s to take over a Jewish deli in Queens, Han began taking apart the family TV, VCR, "anything that was blinking," at the age of 5 (he still has a nasty scar courtesy of a hot soldering iron his little sister knocked onto his foot).

His father wasn't always happy about the houseful of half-reassembled appliances, but encouraged his son's technolust nevertheless, and even made him memorize his multiplication tables before he enrolled in kindergarten. At summer camp, Jeff hot-wired golf carts for nocturnal joy rides and fixed fellow campers' busted Walkmen in exchange for soda pop. He studied violin "like any good Asian kid." He was 12 when he built his first laser.

His parents scrimped and saved to send him to the Dalton School, an elite private high school on Manhattan's Upper East Side, then Cornell University, where he studied electrical engineering and computer science.

Han skipped out on his senior year without graduating to join a startup that bought a videoconferencing technology he developed while a student. A decade later, he's poised to change the face of computing.

Until now, the touch screen has been limited to the uninspiring sort found at an ATM or an airport ticket kiosk--basically screens with electronic buttons that recognize one finger at a time. Han's touch display, by contrast, redefines the way commands are given to a computer: It uses both movement and pressure--from multiple inputs, whether 2 fingers or 20--to convey information to the silicon brain under the display.

Already, companies and agencies as diverse as defense contractor Lockheed Martin, CBS News, Pixar, and unnameable government intelligence agencies have approached Han to get hold of his invention. And, no surprise, he has formed a startup company to market it, Perceptive Pixel.

"Touch is one of the most intuitive things in the world," Han says. "Instead of being one step removed, like you are with a mouse and keyboard, you have direct manipulation. It's a completely natural reaction--to see an object and want to touch it."

On a recent Tuesday afternoon, Han gives me a private demonstration at NYU. The 36-inch-wide drafting table he used at TED has since evolved into a giant screen: two 8-foot-by-3-foot panels. I notice the screen is not only smudge resistant but durable--or as Han says, "peanut butter-proof," a phrase he didn't invent but liked enough to co-opt.

Han teaches me the one pattern I need to know--a circular motion akin to a proofreader's delete symbol, which brings up a pie-chart menu of applications. I poke at it, and suddenly I'm inside the mapping software, overlooking an arid mountain range. Spread two fingers apart, and I'm zooming through canyons. Push them together, and I'm skying thousands of feet above.

I'm not just looking at three-dimensional terrain, I'm living in it: I'm wherever I want to be, instantly, in any scale, hurdling whole ridgelines with a single gesture, or free-falling down to any rooftop in any city on earth. This ain't no MapQuest. Han's machine is faster--much faster--because there's nothing between me and the data: no mouse, no cursor, no pull-down windows. It's seamless, immediate, ridiculously easy. No manual required.

An NYU colleague pokes his head in (Han greets him like he does most everyone: "Dude!") and tells him that a producer from the Ellen DeGeneres Show called. Han is amused but declines the invitation to appear.

Ever since he became a Web phenomenon, he has been receiving all sorts of offers, come-ons, lecture requests. An official from SPAWAR, a subdivision of the Navy focused on space and naval warfare planning, queried Han about collaborating. A producer from CBS News wondered how to make use of Han's touch screen for special events like election coverage. A dance deejay asked if he had a product to spin music at clubs. A teenager asked how he could become a computer engineer too (answer: "Study math").

Meanwhile, I get back to playing with Han's über tech. "Jesus," I say under my breath. "He's gonna get rich."

Han overhears me and laughs. The thought has occurred to him.

Before reinventing the touch screen, Han was just another dotcom refugee at a crossroads. BoxTop Interactive, an e-services firm he worked for in Los Angeles, had just flamed out with everything else (he calls the whole boom-bust era a "collusion of bulls--t"). With his father ill, and ready for a change himself, Han returned to New York.

He knew some professors at NYU and, despite his aborted stay at Cornell, landed a research position at the Courant Institute, where he has been for the past four years.

The scope of the projects he's involved in is a testament to the sheer wattage of his brain. Two are funded by DARPA, the Defense Advanced Research Projects Agency under the Department of Defense, including one involving visual odometry: Modeling his work on the brain of a honeybee, Han has been looking for ways to make a computer know where it has been and where it is going--part of an attempt to build a flying camera that would be able to find its way over long distances.

Han has also made it to the second round of a DARPA project to create an autonomous robot vehicle that can traverse terrain by learning from its own experiences. The goal: to perfect an unmanned ground combat vehicle that could operate over rough trails, in jungles or desert sand, or weave through heavy traffic as if it had a skilled driver behind the wheel.

One non-DARPA project involves reflectometry. Han came up with a way to scan materials so they are faithfully reproduced digitally. The process typically requires shining a light on a piece of fabric, a flag, say, from dozens of different angles, and scanning each one into a computer--a time-consuming proposition.

But Han developed an elegant shortcut: He built a kaleidoscope with three mirrors that reflect one another. Once a swatch of cloth is inserted, the scope yields 22 reflections mimicking different angles of light. When data from each reflection are scanned, the result is a flag that can be formed into any shape--one that looks like it's waving in the breeze, with each ripple and each slight shift in light rendered with photographic exactitude. The whole process takes a fraction of the time Hollywood's best computer animators would need.

Han brought a similarly pragmatic do-it-yourself attitude to his study of touch-screen technology. When he began looking into the idea, he discovered that a few researchers were working on interactive walls and tabletops, and there were a number of art pieces. But that was about it.

The concept hadn't advanced much from where it was in the 1980s, when Bill Buxton, now a Microsoft researcher, was experimenting with touch-screen synthesizers. "Most of it was designed with toys in mind," Han says, "something you project on-screen like Whack-a-Mole with hand gestures. But they weren't asking themselves what purpose it served. I wanted to create something useful."

Inspiration came in the form of an ordinary glass of water. Han noticed when he looked down on the water that light reflected differently in areas where his hand contacted the glass.

He remembered that in fiber optics, light bounces along the inside of the cable until it emerges from the other end miles away. If the surface were made of glass, and the light were interrupted by, say, a finger, the light wouldn't bounce anymore; it would diffuse--some of it would bleed into the finger, and some would shoot straight down, which is what was happening with his water glass. Physicists call the principle "frustrated total internal reflection" (it sounds like something your therapist might say).
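The optics behind that glass of water are textbook Snell's law, a gloss the article itself doesn't spell out: light travelling inside a transparent slab is totally reflected, and stays trapped, whenever its angle of incidence exceeds the critical angle

$$\theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right),$$

where $n_1$ is the refractive index of the slab and $n_2$ that of the air above it. Press something with a refractive index close to the slab's against the surface--a fingertip, say--and the condition fails at precisely that spot, so light leaks out there and nowhere else.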

Han decided to put these errant light beams to work. It took him just a few hours to come up with a prototype. "You have to have skills to build," he says. "You can't be strictly theoretical. I felt fortunate. I walked into a lab with crude materials and walked out with a usable model."

He did it by retrofitting a piece of clear acrylic and attaching LEDs to the side, which provided the light source. To the back, he mounted an infrared camera. When Han placed his fingers on the makeshift screen, some light ricocheted straight down, just as he thought it would, and the camera captured the light image pixel for pixel.

The harder he pressed, the more information the camera captured. Han theorized he could design software that would measure the shape and size of each contact and assign a series of coordinates that defined it. In essence, each point of contact became a distinct region on a graph.

"It's like a thumbprint scanner, blown up in scale and encapsulating all 10 or more fingers. It converts touch to light." It could also scale images appropriately, so if he pulled a photo apart with two fingers, the image would grow.

"People want this technology, and they want it bad," says Douglas Edric Stanley, inventor of his own touch-screen "hypertable" and a professor of digital arts at the Aix-en-Provence School of Art in France.

"One thing that excited me about Jeff Han's system is that because of the infrared light passing horizontally through the image surface itself, it can track not only the position of your hand but also the contact pressure and potentially even the approach of your hand to the screen. These are amazing little details, and pretty much give you everything you would need to move touchable imagery away from a purely point-and-click logic."

Han began coding software to demonstrate some of the touch screen's capabilities, running them on a standard Microsoft Windows operating system. Meanwhile, Philip Davidson, an NYU PhD candidate, got excited about the project and quickly became its lead software developer.

The first thing the pair did was to modify NASA World Wind, a free Google Earth--type open-source mapping program. (Han figured the military would be keen on anything that works faster, since split seconds mean the difference between life and death.)

Then they created the photo manipulator, which lets you upload pictures from Flickr or anywhere else on the Web (it can also make 2-D images appear as 3-D). A taxonomy tool makes it a cinch to navigate the illustrated branches of the Linnaean classification system, from animals and plants down to every known species, and see on one screen how these families are structured and interrelated. (They are thinking of extending it to genealogy and an analysis of social networks.)

Multidimensional graphing and charting help you visualize spreadsheet data and move them around from one point in time to another, while Shape Sketching lets you draw on-screen as easily as you can with a pencil on paper--then animate these shapes instantly. Down the road, it may be possible to draw Bart Simpson on-screen and instruct the computer in what you want him to do.

"As computers have become more powerful, computer graphics have advanced to the point where it's possible to create photo-realistic images," Han says. "The bottleneck wasn't, How do we make pixels prettier? It was, How do we engage with them more?"

"Today's computers assume you are Napoleon, with your left hand tucked into your suit," says Bill Buxton, whom Han considers to be the father of the multitouch screen. "But a lot of things are better performed with two hands. Multiple-sensor touch screens bridge the gap between the physical and virtual world."

Mind you, this doesn't mean touch screens will completely replace the computer mouse, QWERTY keyboard, or traditional graphic user interface (or GUI) any more than cinema made live theater disappear or television supplanted radio. Each continues to do what it does best.

Your iPod or cell phone may be fine for short music videos, but you probably wouldn't want to watch a two-hour movie on it. "These media fall into their appropriate niche and are displaced in areas where they are not the best," Buxton says.

Han really doesn't know how his mapping software, photo manipulator, or any of it will ultimately be used--these applications are really proofs of concept, not ends in themselves.

"When unexpected uses emerge that no one ever thought of, that's when it gets exciting and takes off," says Don Norman, a professor at Northwestern University and author of Emotional Design. Thomas Edison, after all, believed the phonograph would lead to the paperless office; businessmen would record letters and send the waxed discs in the post. And the Internet wasn't exactly invented to serve the masses and become the backbone to business and commerce.

Meanwhile, wherever touch-screen technology leads, Han will face stiff competition. Microsoft has been working on its own version, TouchLight, which offers echoes of the Spielberg sci-fi flick Minority Report. GE Healthcare, which manufactures MRI machines, is using TouchLight, licensed from Eon Reality, for 3-D imaging: Surgeons can swipe their hands across the screen and interact with an MRI of a brain, peel away sections, and look inside for tumors (retail price: $50,675).

Mitsubishi is targeting a completely different market with its DiamondTouch table, a collaborative tool for business that allows a group of people to interact at the same time via touch screen.

Canada-based Smart Technologies has created a nice niche selling interactive whiteboards to universities, corporations, and even to three branches of the U.S. military for briefings. Panasonic has been developing wall-size touch-screen displays, as has consulting firm Accenture, whose interactive billboards are already enticing passengers at O'Hare and JFK airports. Apple has filed for several patents in the field, and there are rumors, which the company won't confirm, of course, that it will soon offer a touch-screen iPod.

But Han isn't exactly worried. In January he was set to ship his first wall screen to one of the branches of the military (he won't say which one) "and they are paying military prices--six figures," he says.

His company will also be offering consulting services and support, which will generate even more revenue, and Han says he has a lot of other deals in the pipeline. He hasn't taken a dime of venture capital, so his company is in the black even before he has rented office space.

What's more, with the cost of cameras and screens plummeting, it is inevitable that interactive displays will be built into walls and in stores, in schools, on subways, maybe in taxicabs. In fact, a screen could be as thin as a slice of wallpaper, yet durable enough to handle the most rambunctious user.

Not everyone is sold on Han's idea. Ben Shneiderman, a computer science professor at the University of Maryland and a founding director of the Human-Computer Interaction Lab, calls Han a "great showman" who has "opened the door to exciting possibilities."

But he doesn't think Han's technology would be suitable for a large-scale consumer product, nor as useful as a mouse on a large display. If you are standing in front of the screen, Shneiderman wonders, how would people behind you be able to see what you're doing?

One way, Han counters, is for the demonstrator to simply move his ass out of the way. Another: Use a drafting-table display, as Han did at TED, and project the image on a wall-size screen.

But criticisms like these are a million light years from Han's mind. We're in his cluttered and cramped office at NYU. Books line a shelf, and a skein of wires unfurls across the floor. A computer circuit board is half taken apart (he stopped losing screws long ago), and a nearby whiteboard contains blueprints and sketches of the touch screen, plus a clever trick for hacking programming code.

Han is explaining why he formed Perceptive Pixel. "I want to create an environment where I can create technology, get it into the hands of someone to market it, and move on to other technologies so I can keep innovating," he says.

"I want to be a serial entrepreneur: Incubate an idea, get it to a good state, and make that an enabler to get to the next state. It's every researcher's fantasy."
