John Romant's Technology Blog

If it's technology, I want to know about it.

Art using high-definition cameras, screens, software, and moving images to capture the experience of seeing.

September/October 2011 By Martin Gayford via Technology Review

A still from the 18-screen video May 12th 2011 Rudston to Kilham Road 5 PM. Credit: ©David Hockney

One of your basic contentions, I say to the British artist David Hockney, is that there is always more to be seen, everywhere, all the time. “Yes,” he replies emphatically. “There’s a lot more to be seen.” We are sitting in his spacious house in the quiet Yorkshire seaside town of Bridlington. In front of us is a novel medium, a fresh variety of moving image—a completely new way of looking at the world—that Hockney has been working on for the last couple of years.

We are watching 18 screens showing high-definition images captured by nine cameras. Each camera was set at a different angle, and many were set at different exposures. In some cases, the images were filmed a few seconds apart, so the viewer is looking, simultaneously, at two different points in time. The result is a moving collage, a sight that has never quite been seen before. But what the cameras are pointing at is so ordinary that most of us would drive past it with scarcely a glance.

At the moment, the 18 screens are showing a slow progression along a country road. We are looking at grasses, wildflowers, and plants at very close quarters and from slightly varying points of view. The nine screens on the right show, at a time delay, the images just seen on the left. The effect is a little like a medieval tapestry, or Jan van Eyck’s 15th-century painting of Paradise, but also somehow new. “A lot of people who were standing in the middle of the Garden of Eden wouldn’t know they were there,” Hockney says.

The multiple moving images have some properties entirely different from those of a projected film. A single screen directs your attention; you look where the camera was pointed. With multiple screens, you choose where to look. And the closer you move to each high-definition image, the more you see.

Hockney “draws” with images from nine cameras. Credit: David Hockney

“Norman said this was a 21st-century version of Dürer’s [Large] Piece of Turf,” Hockney says. By “Norman” he means Norman Rosenthal, the former exhibitions secretary of the Royal Academy in London and one of the doyens of the international contemporary-art world. The comparison is an intriguing one. Albrecht Dürer’s 1503 drawing (Das große Rasenstück in German) was a work of great originality.

Dürer used the media of the time—watercolor, pen, ink—to do something unprecedented: depict with great precision a little slice of wild, chaotic nature. He revealed what was always there but had never before been seen with such clarity. Hockney, in 2011, is doing the same job, using the tools of the moment: high-definition cameras and screens, computer software. Of course Hockney, too, is a painter—indeed, his grid of 18 flat screens, run by seven Mac Pro computers, looks much like one of his multipanel oil paintings. Except, of course, that every panel moves.

Hockney’s technology assistant, Jonathan Wilkinson, explains how this 21st-century medium works. “We use nine Canon 5D Mark II cameras on a rig we’ve made, mounted on a vehicle—either on the boot or on the side. Those are connected to nine monitors. I set it up initially, taking instructions from David, to block it in. At that point we decide the focal length and exposure of each camera. There are motorized heads with which we can pan and tilt, once we’ve got going, while we’re moving along. There’s a remote system he can operate from the car.”
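Wilkinson doesn’t publish the rig’s settings, but the setup he describes, nine cameras each with its own framing, exposure, and sometimes its own time offset, feeding a grid of screens, can be pictured as a small configuration structure. The sketch below is purely illustrative: every field name and value is an assumption, not the studio’s actual data.

```python
from dataclasses import dataclass

@dataclass
class CameraSetting:
    """One of the nine cameras on the vehicle-mounted rig (illustrative only)."""
    grid_position: str       # which of the nine screens this camera feeds
    pan_deg: float           # motorized-head pan, relative to the direction of travel
    tilt_deg: float          # motorized-head tilt
    focal_length_mm: float   # fixed when the shot is "blocked in"
    exposure_ev: float       # exposure offset; cameras are deliberately set differently
    delay_s: float           # optional time offset between cameras

# A hypothetical "blocking in" of part of the rig before a drive down a country road.
rig = [
    CameraSetting("top-left",   -20.0, -5.0, 35.0, +0.3, 0.0),
    CameraSetting("top-centre",   0.0, -5.0, 50.0,  0.0, 0.0),
    CameraSetting("top-right",  +20.0, -5.0, 35.0, -0.3, 2.0),
    # ...six more entries would fill out the grid of nine...
]
```

Blocking in a shot would then amount to adjusting values like these until the grid of monitors composes the way Hockney wants.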

Hockney compares that process to drawing. For him, drawing is not merely a matter of making lines with a tool; it’s fundamentally about constructing a two-dimensional image of three-dimensional space. He argues that the same is true of putting photographic images together in a collage, and also of altering a single photograph. Hockney complains that today’s media are full of badly drawn (that is, Photoshopped) photographs.

Jonathan Wilkinson, Hockney, and Dominic Elliott rig up the cameras. Credit: Jean-Pierre Goncalves de Lima

The wild plants at the side of the road are only one subject. A number of other films chart the sequence of the seasons in the quiet corner of the English countryside where Hockney now spends much of his time. These too present a subject that is centuries old (the four seasons were a feature of medieval books of hours), but with a twist made possible by technology that became available only very recently.

They offer a lesson in the startling changes in vegetation, quality of light, and patterns of shadow that a few months will bring. The left-hand side will show, say, a progression down a country road in early spring, the right-hand side the same journey taken at exactly the same speed past the identical trees, fields, and bushes in high summer: the same, but utterly transformed. Because it is in practice impossible to drive at absolutely the same speed along a road in spring, summer, and winter, the precise synchronization of these sequences is achieved by editing. “Because we’ve done things at different times of the year,” as Wilkinson puts it, “we remap time to get them in the same place simultaneously in each film.”
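Wilkinson’s “remapping time” amounts to resampling each pass down the road so that frames line up by position rather than by clock time. A minimal sketch of that idea, assuming each drive has been logged as per-frame (time, distance) samples; the function name and data here are invented:

```python
import numpy as np

def remap_to_positions(frame_times, distances, n_samples):
    """Return frame times resampled at evenly spaced positions along the road.

    frame_times: timestamp (s) of each frame in one pass down the road
    distances:   distance travelled (m) when each of those frames was captured
    n_samples:   number of evenly spaced road positions to sample
    """
    positions = np.linspace(distances[0], distances[-1], n_samples)
    # For each target position, interpolate the time at which the car was there.
    return np.interp(positions, distances, frame_times)

# Two invented passes down the same 800 m road, driven at different speeds.
spring = remap_to_positions(np.arange(0, 120, 1 / 25), np.linspace(0, 800, 3000), 2500)
summer = remap_to_positions(np.arange(0, 100, 1 / 25), np.linspace(0, 800, 2500), 2500)
# Grabbing the frame nearest spring[i] from one film and summer[i] from the
# other shows the same spot on the road in two seasons, side by side.
```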

The Camera’s Eye
“A lot of people have told me,” Hockney remarks, “that before they see these films they can’t imagine what nine cameras could do that one can’t. When they see them, they understand. It’s showing a lot more; there’s simply a lot more to see. It seems you can see almost more on these screens than if you were really there. Everything is in focus, so you’re looking at something very complicated but with incredible clarity.” In a way, this is a matter of multiplication: nine cameras see many times more than one.

Furthermore, Hockney believes that his multiscreen film collages are closer than conventional photography to the actual experience of human vision: “We’re forcing you to look, because you have to scan, and in doing so you notice all the different textures in each screen. These films are making a critique of the one-camera view of the world. The point is that one camera can’t show you that much.”

Stills from Woldgate 7 November 2010 11:30 AM (left) and Woldgate 26 November 2010 11 AM (right). Credit: ©David Hockney

You could say Hockney is using cameras to reveal the limitations of the camera. The films are the result of decades’ thought about the place of old art forms—painting and drawing—in a world dominated by rapidly evolving photographic and electronic media.

Now 74, Hockney was born in Bradford, on the other side of Yorkshire, in 1937. It was apparent from early on that he was an exceptionally brilliant draftsman. Indeed, he belongs to one of the last generations of artists to receive a rigorous training in draftsmanship before art education changed in the late 1960s.

Hockney started using photographs as a basis for paintings in the late 1960s. But he became dissatisfied with the direction his work was taking, which in some cases veered toward a form of photorealism. By the ’80s he was conducting a personal research program into the nature of pictorial and photographic space. He began to entertain the idea that what the camera sees and what the eye sees are in some ways fundamentally different. “Most people feel that the world looks like the photograph,” he says. “I’ve always assumed that the photograph is nearly right, but that little bit it misses by makes it miss by a mile. This is what I grope at.”

A camera looks through one lens; we look—most of us, at least most of the time—through two eyes. And we are not just looking at a scene from outside; we are always in it. People, you might say, are biological sensing devices, placed in an infinitely complex three-dimensional environment. What we see, subjectively, is always related to what we are interested in. Or, in Hockney’s epigram, “The eye is attached to the mind.”

Pearblossom Hwy. (1986) Credit: Collection: The J. Paul Getty Museum, Los Angeles. ©David Hockney

In the early 1980s, Hockney began a series of composite or collaged pictures made from a mosaic of Polaroid snaps, including Luncheon at the British Embassy, Tokyo, Feb. 16, 1983 and the several versions of Pearblossom Hwy. (1986). These were images with not one viewpoint but dozens, presenting—Hockney would argue—a representation of the world truer to experience than a single photograph. (Just as today he likens his multiple-screen images to drawing, he classified these Polaroid collages as drawings rather than photographs.) At the time, he wanted to make moving multiple-viewpoint images—and produced one for a television documentary—but the process was prohibitively complex and costly. Only in the last few years has the technology become available that allows him to do so with his own studio team, and in richly detailed quality.

Another result of this preoccupation with the role of lenses in the making of art was his book Secret Knowledge (2001). In it, Hockney argued that Western art had been affected by the lens-eye view for centuries before the official advent of photography in 1839. It had long been known that some artists had used the camera obscura—essentially, a filmless camera that came in portable or room-size versions (both Canaletto and Joshua Reynolds owned the former). But whereas conventional art history had tended to minimize this, Hockney maximized it.

An artist of the Renaissance or Baroque eras might have used a camera image in many ways. For Canaletto, tracing the image onto paper was evidently a handy way of noting architectural detail (such drawings by him, with a telltale traced line, exist). But other painters might have learned from observing how a camera obscura simplifies highlights and shadows onto a two-dimensional surface. There are compelling resemblances between such projections and 17th-century painting. Once you’ve seen them it is hard to believe that Caravaggio, Van Dyck, and Dutch still-life painters hadn’t looked through a camera obscura.

But there have always been ways to draw and paint that do not imitate cameras. Hockney reminds us that Far Eastern art, for example, has neither Renaissance-style single-vanishing-point perspectives nor shadows. The former is an optical property of a single-lens view; the latter result from the strong illumination that cameras tend to require.

Luncheon at the British Embassy, Tokyo, Feb. 16, 1983. Credit: Photo: Richard Schmidt. ©David Hockney

The most recent of Hockney’s nine-screen films were shot in his huge and light-filled Bridlington studio. They look like a cross between silent comedies and Chinese scrolls, filled with a characteristic range of astonishingly saturated color and—because of both the cameras and the light flooding through windows in the roof—without shadows.

Art as Technology
An exhilarating aspect of Hockney’s approach is that it widens art history into a unified account of pictures, images, of all kinds—handmade, photographic, cinematic, televisual. They are all part of the same story. He is, for example, strongly interested in the movies (after all, before coming to work in Yorkshire in 2000, he lived for three decades in Los Angeles, which he still calls his base).

A basic point for Hockney is that all art is based on technology. The paintbrush, as he says, is a technological device. And paint, a discovery tens of thousands of years old, can still produce an intensity of color that no screen or printing machine can equal.

Though drawing itself is a very old human technique—going back at least to the prehistoric cave paintings of southwestern France—Hockney has been adept at using new technology to find new ways to draw. In the 1980s he used early color photocopiers and fax machines to make art. Using the fax, he distributed art by telephone; with the photocopier he made prints that, paradoxically, could not be photocopied (if you make an intense black by putting the paper through the machine four times, it cannot be replicated by a single copying process).

Untitled, 30 November 2010, No. 1, created on an iPad. Credit: ©David Hockney

During the last three years, he has been fascinated by the possibilities of drawing on, first, an iPhone and then—as soon as it appeared—an iPad. He had tried earlier forms of computer drawing but found them too slow for practical use. Now the iPad, plus an app called Brushes, is his medium of choice. He uses it as an electronic sketchbook; it is always by his side. A steady flow of iPhone and iPad drawings—loose, free, experimental, and intimate—pop, sometimes every day, into the mailboxes of his friends and acquaintances. More than 200 are currently in mine. They add up to a visual diary, recording sights that fall under Hockney’s eyes as he moves through his day: the view from his bedroom window at dawn, the kitchen sink, a coffee cup, a candle burning in the evening. Looking at them gives clues to where Hockney is, how he’s feeling, and what the current weather is like in east Yorkshire.

Recently he has begun printing Brushes drawings out at a large scale (this requires a program that prevents the images from pixelating, as they otherwise would). Early next year a sequence of these grand-scale iPad pictures will fill the largest gallery at the Royal Academy, where there is an exhibition of Hockney’s new work depicting the very same Yorkshire landscapes that he films with his nine cameras and paints in oil. His work in all three media is interdependent. The paintings and drawings led on to the films, and the films in turn prompt new directions for the paintings and drawings.
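The article doesn’t say how that program works, but a common way to print touch-screen drawings large without pixelation is to store each stroke as vector data and re-render it at the target resolution instead of upscaling a raster image. A purely illustrative sketch of that approach, assuming strokes are kept as normalized point lists (none of this reflects the actual Brushes file format):

```python
from PIL import Image, ImageDraw  # Pillow

def render_strokes(strokes, width_px, height_px, background="white"):
    """Re-render stroke data at an arbitrary resolution (illustrative only).

    strokes: list of dicts with normalized (0..1) points, a colour, and a
             width expressed as a fraction of the canvas width.
    """
    img = Image.new("RGB", (width_px, height_px), background)
    draw = ImageDraw.Draw(img)
    for stroke in strokes:
        points = [(x * width_px, y * height_px) for x, y in stroke["points"]]
        draw.line(points, fill=stroke["color"],
                  width=max(1, round(stroke["width"] * width_px)))
    return img

# The same (hypothetical) drawing rendered for a screen and for a large print:
doodle = [{"points": [(0.1, 0.9), (0.5, 0.2), (0.9, 0.8)], "color": "red", "width": 0.01}]
render_strokes(doodle, 1024, 768).save("screen.png")
render_strokes(doodle, 8192, 6144).save("print.png")  # stays crisp at print scale
```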

All Hockney’s work and thought is dedicated to the proposition that there is always more to see in the world around us. Art is a way—you might say a set of technologies—for making images, preserving them in time, and also for showing us things we aren’t normally aware of. Those might include gods, dreams, and myths, but also hedgerows.

“Don’t we need people who can see things from different points of view?” Hockney asks. “Lots of artists, and all kinds of artists. They look at life from another angle.” Certainly, that is precisely what David Hockney is doing, and has always done. And yes, we do need it.

Visit source article.


AMD Releases Quad Buffer SDK for AMD HD3D technology to Accelerate the Development of Stereo 3D.

August 17, 2011 —

SUNNYVALE, CA — (Marketwire) — 08/18/11 — AMD (NYSE: AMD) today announced the availability of the AMD Quad Buffer SDK for AMD HD3D technology, delivering a vital tool to developers engaged in building immersive stereo 3D capabilities into upcoming game titles. Concurrently, new passive and active monitors from Acer, LG, Samsung, and Viewsonic have further expanded ecosystem support for AMD HD3D technology. End users with systems built around AMD A-Series APUs or HD3D-capable AMD Radeon™ HD 5000 or HD 6000 series graphics products now have even more choice, thanks to the Open Stereo 3D initiative, when building a stereo 3D gaming or Blu-ray 3D playback system.

“AMD HD3D technology has reached critical mass, with more games, more movies, and supporting hardware and software from many of the industry’s leading vendors,” stated Matt Skynner, corporate vice president and general manager, AMD Graphics Division. “The addition of the Quad Buffer SDK can help our many developer partners make stereo 3D a standard part of future game titles.”

AMD Quad Buffer SDK

A big part of enabling stereo 3D support is the ability of AMD graphics hardware to drive four frame buffers simultaneously. The AMD Quad Buffer SDK, available on AMD Developer Central, is designed to help game and application developers accelerate the development of stereo 3D within their titles. The SDK provides clear guidelines on how to implement stereo 3D to help ensure that it can be enjoyed across the expanding ecosystem of monitors and stereo 3D glasses supporting AMD HD3D technology. Additionally, the quad buffer can be used to add native support for stereo 3D in video games, and it supports DirectX® 9, 10, and 11.
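The press release doesn’t include sample code, and the Quad Buffer SDK’s own API isn’t reproduced here, but the four buffers in question are the left and right front/back buffer pairs of classic quad-buffered stereo, a pattern OpenGL has long exposed. A minimal, generic sketch of that pattern (not AMD’s SDK) using PyOpenGL and GLUT, assuming a stereo-capable GPU, driver, and display:

```python
from OpenGL.GL import (glClear, glDrawBuffer, glLoadIdentity, glTranslatef,
                       glBegin, glEnd, glVertex3f,
                       GL_BACK_LEFT, GL_BACK_RIGHT,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_TRIANGLES)
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutSwapBuffers, glutMainLoop,
                         GLUT_DOUBLE, GLUT_RGB, GLUT_DEPTH, GLUT_STEREO)

EYE_SEPARATION = 0.06  # arbitrary horizontal offset between the two eye views

def draw_scene():
    """Placeholder scene: a single triangle."""
    glBegin(GL_TRIANGLES)
    glVertex3f(-0.5, -0.5, 0.0)
    glVertex3f( 0.5, -0.5, 0.0)
    glVertex3f( 0.0,  0.5, 0.0)
    glEnd()

def display():
    # Quad buffering: render the scene twice, once into each back buffer.
    for buffer_id, eye_offset in ((GL_BACK_LEFT, -EYE_SEPARATION / 2),
                                  (GL_BACK_RIGHT, +EYE_SEPARATION / 2)):
        glDrawBuffer(buffer_id)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glLoadIdentity()
        glTranslatef(eye_offset, 0.0, 0.0)  # shift the view for this eye
        draw_scene()
    glutSwapBuffers()  # both left and right back buffers flip at once

if __name__ == "__main__":
    glutInit()
    # GLUT_STEREO requests the quad-buffered (left/right, double-buffered) visual.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
    glutCreateWindow(b"Quad-buffered stereo sketch")
    glutDisplayFunc(display)
    glutMainLoop()
```

The key point is the two glDrawBuffer calls per frame: the application renders once per eye into separate back buffers, and the driver presents both when the buffers swap.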

Monitors & 3D Glasses
Computer monitors supporting AMD HD3D technology are now shipping from several major vendors, including Acer, LG, Samsung, and Viewsonic. The approach to stereo 3D varies from monitor to monitor, but they all have in common the ability to enable an incredibly immersive stereo 3D experience…continue reading.

Sony Unveils HD Recording Digital Binoculars With 2D & 3D Capture


Sony has announced two new HD-recording digital binocular models, the DEV-3 and DEV-5, billed as the “World’s First Digital Binoculars With HD Video Recording, Zoom, Autofocus and SteadyShot Image Stabilization.” The new models allow users to capture “can’t miss” moments in Full HD 1080 AVCHD 2.0 video, in 2D or 3D, along with 7.1-megapixel still images and full stereo sound. According to Sony Electronics:

“Now consumers can watch birds, wildlife, sports action and more in steady, sharply-focused close-up views, while capturing their subjects in crisp Full HD. These new models add entirely new levels of flexibility and convenience to viewing, recording and enjoying your favorite images and scenes.”

The binoculars feature an ergonomic grip, a “stealth” design, a rechargeable battery pack good for about three hours of 2D recording, and a GPS receiver that allows for automatic geo-tagging of pictures (DEV-5 model only). Both models electronically autofocus at any magnification (in 2D), and the DEV-3 and DEV-5 have 10x and 20x optical zoom, respectively. The new binoculars will be available for purchase this coming November at $1,400 and $2,000.

What do you think of Sony’s HD binoculars? Pretty neat, right? Maybe we will see some higher-quality fan footage from sporting events this Christmas…visit original post.

3D Technology significantly improves interest and learning outcomes in school.

By Miriam Pia

Students show more interest in class and have better learning outcomes when 3D technology is used, according to the Boulder Valley School District.

Focused and attentive

Students focused more on the content and paid more attention in all the classrooms using 3D technology, reported Len Scrogan, director of instructional technology at the Boulder Valley School District, at the InfoComm conference in Orlando, Florida, this June.

Scrogan is also an adjunct professor at the University of Colorado at Denver and Health Sciences Center.

Teachers also reported fewer disruptions, he added, and students said they preferred the learning experience they had with 3D environments.

“It provided better visualization than a textbook,” one student said, describing a 3D cellular imaging experience. This was a typical student response, particularly in astronomy, biology, and chemistry classes, Scrogan said.

Altogether, the study covered eight math and science classrooms in middle school and high school at the Boulder Valley School District in Colorado. It covered all types of students: typical, gifted, and those with behavioral problems and learning disabilities, he said.

“We used Texas Instruments 3D-chip-ready Vivitek projectors, 3D glasses from Expand 3D, and software from Designmate, Cyber Anatomy, Bio Interactives, JTM, Eon, and Navtek,” Kristin Donley, the school district’s Science Research Seminar coordinator, told Hypergrid Business.

Better learning…continue reading at original article

THE LION KING 3D Conversion Images Show Off Depth of Field.

Original article by Bill Graham. Posted: August 9th, 2011 at 6:38 pm


Disney’s The Lion King will return to theaters this year, in 3D for the very first time, on September 16th. The film was a childhood favorite of mine, and every time I hear “The Circle of Life,” I get goosebumps. Needless to say, I look forward to viewing the film on the big screen, something I may have done when I was little but can’t recall. However, I do wonder how a film from the ’90s will hold up, animation-wise, and how a 3D conversion of it will fare on the big screen.

Today, Disney sent over some images showing just what the conversion process entails: adding notes of depth and then using filters to key in on what will be in the foreground, the background, and everywhere in between. The real process is a lot more involved than this, but the images give us a good idea of what it entails at a basic, easy-to-understand level. Hit the jump to view those images, along with a description of what we are looking at, a discussion with stereographer Robert Neuman about the procedure itself, and my impressions of the scenes they showed before Cars 2.

First, let’s get to the good stuff. Disney sent over two scenes that show off the process. The first is of Pride Rock and the second is of Scar. The basic process involves taking a finished image, adding a layer that marks depth details, and then using a layering system keyed to those depth markings to tell the computer where each part of the image should sit in depth. Here are those images [click to enlarge]:

[Images: lion-king-3d-image, and lion-king-3d-image-1 through lion-king-3d-image-5]

Here are the captions Disney sent over as well, explaining the images in more detail:

1. The original film image.

2. The 3D Depth Map created by Robert Neuman, the 3D Stereographer on the film. Positive numbers refer to the number of pixels by which the image will come out of the screen, and negative numbers refer to the number of pixels by which the image will go deeper into the screen, creating the 3D depth.

3. Grey Scale – the final in-computer representation of depth. Darker areas will be furthest away, and lighter areas will be closer to the viewer.
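Those captions describe the heart of depth-map-based 2D-to-3D conversion: each grey value becomes a signed horizontal shift (a disparity), and the left- and right-eye frames are built by pushing pixels in opposite directions. The rough sketch below illustrates that idea only; it is not Disney’s pipeline, and it ignores the occlusion filling a real converter needs:

```python
import numpy as np

def stereo_pair_from_depth(frame, depth, max_disparity_px=12):
    """Build crude left/right views from an image and a grey-scale depth map.

    frame: H x W x 3 uint8 image
    depth: H x W float map in [0, 1]; 1.0 = closest to the viewer,
           0.0 = furthest away (matching "lighter is closer" in the captions)
    """
    h, w = depth.shape
    # Map depth to a signed disparity: positive pops out of the screen,
    # negative recedes behind it, as the depth-map caption describes.
    disparity = ((depth - 0.5) * 2 * max_disparity_px).astype(int)

    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        left_cols = np.clip(cols + disparity[y] // 2, 0, w - 1)
        right_cols = np.clip(cols - disparity[y] // 2, 0, w - 1)
        left[y, left_cols] = frame[y]
        right[y, right_cols] = frame[y]
    return left, right

# Hypothetical usage, with random data standing in for a real frame and depth map:
frame = np.random.randint(0, 256, (270, 480, 3), dtype=np.uint8)
depth = np.tile(np.linspace(0.0, 1.0, 480), (270, 1))  # closer toward the right edge
left_eye, right_eye = stereo_pair_from_depth(frame, depth)
```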

…click here to continue reading the original article.

Infographic: Gender-Based Color Preference Study of 232 People from 22 Countries.

I ran across this great infographic about gender-based color preference at blog.kissmetrics.com.


Infographic: The United States of Text Message Spam.

Take a look at this interesting infographic on text messaging statistics in the U.S., created by the graphics team at Tatango.com.


Text Message Marketing by Tatango.

Astronomy’s 3D Revolution

I ran across this article at technologyreview.com

Simple 3D tools could bring astronomy alive for scientists and the public alike. But the techniques are woefully underused, argue two astronomers.

When it comes to scientific visualisations, biochemists are the undisputed champions. These guys embraced 3D techniques to represent complex molecules at the dawn of the computer age. That’s made a huge difference to the way researchers understand and appreciate each other’s work. In fact, it’s fair to say that biochemistry would be a poorer science without efficient 3D visualisation tools.
Now, Frederic Vogt and Alexander Wagner at the Australian National University argue that astronomy could benefit in a similar way from simple 3D tools.
“Stereo pairs are not merely an ostentatious way to present data, but an enhancement in the communication of scientific results in publications because they provide the reader with a realistic view of multi-dimensional data, be it of observational or theoretical nature,” they say…continue reading.
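A stereo pair in this sense is just two renderings of the same data from viewpoints a few degrees apart, printed side by side so the reader can fuse them. The paper’s own figures aren’t reproduced here, but a minimal matplotlib sketch of the idea, with a random point cloud standing in for real astronomical data, might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

# Random 3D points standing in for, say, a simulated gas cloud.
rng = np.random.default_rng(42)
points = rng.normal(size=(500, 3))

fig = plt.figure(figsize=(8, 4))
for i, azim_offset in enumerate((-2, +2)):  # views a few degrees apart, one per eye
    ax = fig.add_subplot(1, 2, i + 1, projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=4)
    ax.view_init(elev=20, azim=30 + azim_offset)
    ax.set_axis_off()
fig.suptitle("Stereo pair: cross or parallel view to see depth")
fig.savefig("stereo_pair.png", dpi=150)
```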

Technicolor Acquires LaserPacific and Cinedigm Key Assets

I got my start in the entertainment industry at Laser Pacific, during the Leon Silverman era. Under Leon, Laser Pacific developed Emmy Award-winning technologies that spurred innovation throughout the entire industry and beyond. Hopefully Technicolor can rekindle the spirit of innovation that Leon fostered at Laser Pacific.

See the article below:

Technicolor has been doing some shopping to dramatically boost its digital cinema business. The Paris-based company announced July 27 an agreement to acquire Cinedigm Digital Cinema Corp.’s physical and electronic theatrical distribution assets, and on the same day announced the acquisition of LaserPacific’s postproduction assets.

The Cinedigm deal will grow Technicolor’s satellite presence by 40 percent, expanding it to over 1,100 locations in North America. Distribution assets, replication equipment, and at least 300 satellite roof rights are also included, among other things. Additionally, Technicolor will license some of Cinedigm’s key software and become its preferred partner for related post-production services…continue to original post.

Protected: nanoD3 update 7/29/2011. (2D/3D) 2-Dimensional to 3-Dimensional Conversion System.

