Last year I heard a talk given by one Professor Kanade of Carnegie Mellon. He’s a pretty well-known figure in computer vision, and I’d say CMU is the best U.S. university in the field.
Anyway, he mentioned a bunch of projects he was working on, but one was really intriguing. The idea is, you have a bunch of cameras all over an area, and you use computers to merge all that information together, so what you end up with is the ability to view the scene from any angle, not just the ones the cameras actually cover. Computers can interpolate other “virtual” angles based on the information from the real cameras. It’s essentially the same thing they did in The Matrix, in those scenes where the camera smoothly moves around the scene, except the demos I saw from CMU didn’t have a line of cameras but a dome of cameras, so you can view it from any position.
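Just to make the idea concrete, here’s a toy sketch of my own (the function name and setup are purely hypothetical, and this is nothing like what CMU actually does – their system presumably recovers real 3-D scene geometry so objects line up correctly, not just a cross-fade): approximate a “virtual” camera angle by blending the two nearest real camera views, weighted by how close the requested angle is to each.

```python
import numpy as np

def virtual_view(camera_angles, camera_images, virtual_angle):
    """Toy novel-view synthesis by blending the two nearest real cameras.

    camera_angles: sorted 1-D array of angles (degrees) where real cameras sit.
    camera_images: list of H x W image arrays, one per camera.
    virtual_angle: the angle we want a synthesized view for.
    """
    # Find the two real cameras that bracket the requested angle.
    idx = np.searchsorted(camera_angles, virtual_angle)
    lo, hi = max(idx - 1, 0), min(idx, len(camera_angles) - 1)
    if lo == hi:
        return camera_images[lo]
    # Blend proportionally to angular distance (a real system would also
    # warp each image using estimated scene depth before blending).
    t = (virtual_angle - camera_angles[lo]) / (camera_angles[hi] - camera_angles[lo])
    return (1 - t) * camera_images[lo] + t * camera_images[hi]

# Example: eight cameras in a ring, synthesize a view at 52 degrees.
angles = np.linspace(0, 315, 8)
images = [np.random.rand(480, 640) for _ in angles]
frame = virtual_view(angles, images, 52.0)
```

Obviously a straight cross-fade between two views would look ghosty in practice, which is exactly why the real thing needs all that computer-vision machinery in the middle.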
A fascinating idea Kanade brought up was using this kind of technique at sporting events. If you have enough cameras, you can view the action from literally any angle. You can even view it from a virtual angle where you’re right on the court or field. That is incredible.
So, watching the postgame coverage at the Super Bowl, I saw them use exactly this kind of technique! Like, they’d show a play and then move the viewing angle as it unfolded. And you could tell there was computer extrapolation involved – the image was kind of blocky, stuff like that. And the movement wasn’t totally smooth, but it was still kind of cool. And I immediately thought about that Kanade talk.
Anyway, when I watched the credits, I found out that the virtual camera stuff was done by CMU! So it must have been Kanade behind all that. I just thought it was kind of cool to be in the know about that technology – how it works and who’s behind it.