Monday, June 11, 2007

Lessons about Surface and Visualization with Virtualization

Morrie -

When you look at something like Surface hooked up with other technologies, you can see how valuable this bit of hardware is for Microsoft. Where they once rode on machines built to an IBM specification, back when they first licensed out their operating system and programming software, they now have an opportunity to dominate the future hardware platforms for interacting with computer services.

The mouse-and-click method is primitive and will hopefully disappear before long, replaced by ubiquitous voice and sight/sense interaction that lets the machine take its place as an appliance rather than a "data processing machine".

Virtual buttons on the display (with hardware buttons and controls disappearing) are a nice advance on touch screens, so the iPhone has a great future in the realm of portable devices, though its small display is limited by what a human can comfortably carry. In the presence of a Surface machine, the Apple device can act as an independent set of controls while the Surface handles the general needs of the local audience.

Surface takes the idea of the large-screen display and places it in a plane that can be fixed into the environment, so the human can use it as part of that environment rather than as a "station" for sitting and interacting. Surface provides the touch realm without having to touch the screen. Further enhancements of the platform could give it the ability to watch and listen to everything around it... and to interact with the humans in that space.

I said all that to say the following: here is an example of what can be done to build a 3-D experience in visual displays, and from this simple sample it is not difficult to see how the convergence of virtualized software can further involve the "hardware" of machine and human in defining the software's function and the service it renders.

Items like the following example will translate well to a Surface machine.
http://blogs.zdnet.com/emergingtech/?p=601
June 11th, 2007
Using a robotic arm to scan the Iliad
Posted by Roland Piquepaille @ 10:47 am

According to Wired News, computer scientists from the University of Kentucky (UKY) recently went to Venice, Italy, to scan ‘Venetus A,’ the 10th century manuscript of Homer’s Iliad. They’ve used a 39-megapixel Hasselblad camera to take pictures of the famous manuscript and a laser mounted on a robotic arm to create 3-D images of the 645-page parchment book. As the text is handwritten, it’s not easily readable by ordinary people. But Harvard’s Center for Hellenic Studies plans to produce XML transcriptions of the text and to put them online.

And this will not be an easy task. You can see above the first lines of Homer’s Iliad photographed from the Venetus A manuscript. (Credit for photo: Center for Hellenic Studies) [This picture is the last one of a set of eight included in this Wired News slideshow.]
Matt Field, a graduate student, and Brent Seales, an associate professor working at UKY’s Center for Visualization and Virtual Environments, were part of a team working to create a high-resolution, 3-D copy of the 645-page parchment book.

The idea is “to use our 3-D data to create a ‘virtual book’ showing the Venetus in its natural form, in a way that few scholars would ever be able to access,” says Matt Field, a University of Kentucky researcher who scanned the pages. “It’s not often that you see this kind of collaboration between the humanities and the technical fields.”

After the manuscript was — very carefully — photographed for the first time since 1901, using “a 39-megapixel digital camera, a Hasselblad H1 medium-format camera with a Phase One P45 digital back,” Field had to scan the pages to create 3-D images of each page.

But because the manuscript is so fragile, it was impossible to use an ordinary scanner. So Seales and Field decided to use the Laser ScanArm, sold by FARO Technologies Inc., mounted on a robotic arm.

Passing about an inch from the surface, the laser rapidly scanned back and forth, painting the page with laser light. The robot arm knows precisely where in space its “hand” is, creating a precise map of each page as it scans. The data is fed into a CAD program that renders an image of the manuscript page with all its crinkles and undulations.
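To make that geometry concrete, here is a minimal sketch (in Python with NumPy, not the actual FARO or UKY software) of how each laser range sample can be combined with the arm's reported pose to place a point on the page in 3-D space; the function and variable names are hypothetical, invented purely for illustration.

import numpy as np

def scan_point_to_world(arm_pose, laser_offset):
    """Map one laser range sample into a 3-D point in the room's frame.

    arm_pose     -- 4x4 homogeneous transform giving the position and
                    orientation of the arm's "hand" (the laser head),
                    known precisely from the arm's joint encoders.
    laser_offset -- (x, y, z) of the sampled surface point expressed in
                    the laser head's own frame.
    """
    p = np.append(np.asarray(laser_offset, dtype=float), 1.0)  # homogeneous coords
    return (arm_pose @ p)[:3]

# Toy sweep: the head moves along x about an inch above the page while the
# laser reports small height variations (the crinkles and undulations).
points = []
for i in range(200):
    pose = np.eye(4)
    pose[:3, 3] = [i * 0.5, 0.0, 25.4]        # head position in millimetres
    relief = 0.3 * np.sin(i / 10.0)           # pretend parchment relief
    points.append(scan_point_to_world(pose, (0.0, 0.0, -25.4 + relief)))

cloud = np.asarray(points)                    # N x 3 point cloud of the page
print(cloud.shape)
# A real pipeline would triangulate this cloud into a mesh and hand it to a
# CAD/rendering package to draw the page with all its wrinkles.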

Of course, as you have seen above, even if the images are incredibly crisp, they will not be easily readable by people like you and me. “So, this summer a group of graduate and undergraduate students of Greek will gather at the Center for Hellenic Studies in Washington, D.C., to produce XML transcriptions of the text.” Then their work will be posted online.
