Forza Silicon Debuts New Design for Defense, Security Markets Based on New CMOS, Image-Processing Technology
4K was all the rage at NAB, but that's hardly the ceiling for resolution, even in the near future. Even at NAB, you could see 8K acquisition in the Astrodesign and NHK booths, and you may have been wondering, then, how far resolution can really be pushed. For a hint, look to CMOS sensor design specialist Forza Silicon, which is introducing a new customizable video-camera platform that can reach "resolutions approaching 200 megapixels" at 60fps.
Video professionals don't generally talk about images in megapixel units, which measure the number of pixels in an image (1 megapixel is either 1 million or 1,048,576 pixels, depending on who's counting). But, to put that in perspective, HD resolution is about two megapixels (1920 pixels horizontally multiplied by 1080 pixels vertically gives you 2,073,600 pixels). 4K is a little less than nine megapixels. Even Red Dragon's vaunted 6K resolution equals only a little more than 19 megapixels. You'd need to bump the numbers to something like 18K — 18,000 horizontal pixels — in order for a 16×9 image to approach 200 megapixels total.
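For readers who want to check the arithmetic, here is a quick Python sketch; the frame sizes are the commonly quoted figures used above, and the 18K height is simply derived from a 16:9 aspect ratio:

```python
# Pixel counts for the resolutions discussed above, using the
# "1 megapixel = 1,000,000 pixels" convention.
formats = {
    "HD (1920x1080)":     (1920, 1080),
    "4K DCI (4096x2160)": (4096, 2160),
    "Red Dragon 6K":      (6144, 3160),
    "18K at 16:9":        (18000, 18000 * 9 // 16),
}

for name, (w, h) in formats.items():
    print(f"{name:20s} {w} x {h} = {w * h / 1e6:6.1f} MP")
```

Running it gives roughly 2.1 MP for HD, 8.8 MP for 4K, 19.4 MP for 6K, and about 182 MP for an 18K, 16:9 frame.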
Well, that seems to be what Forza is suggesting with its Forza 100+ MP CAM Platform, built around a customizable CMOS image sensor operating at 60fps and proprietary on-board image processing technology. The camera would be switchable between black-and-white and color, and the company said it would produce video images with minimal motion blur. The camera is debuting next month at the SPIE DDS 2014 show in Baltimore, MD, which is geared to defense and security applications.
Forza suggested the camera was ideal for surveillance applications, but said it can be configured to meet requirements in defense, aerospace, automotive, and medical-imaging markets. Would it come in handy in, say, sports production, where it could be pointed at a football field and endless regions of interest defined at HD resolution or higher? Sure — but it'll be a while before anyone in this industry is willing to deal with that much data coming off of the sensor.
Serious overkill. Nothing but an effort to get us to waste more money on data we don’t need, outside of perhaps scientific uses.
That’s kind of the point; this camera is aimed at those scientific/surveillance needs.
For example, positioning a camera like this at the entrance to a secure area would allow you to crop down to stunningly high quality photos of a suspect.
Likewise, as suggested, should a way to save the native datastream be developed, imagine covering an entire football field (or racetrack, or…) with an array of these – you would be able to have full HD instant replay of ANY inch of the field at any time.
For ordinary cinematic usage, I’m sure the folks at ILM and other FX houses are already drooling.
Conceivably, with such cameras around the field, it would be possible to fulfill many a sports fan’s dream, enabling them to view the action from any angle, pause it, and then start playing it from another angle. This already exists in a primitive form on the Wii U: you can watch these video-sphere movies, usually of tourist spots, which enable you to turn around 360 degrees, and even look up and down, while the movie is playing. When you do this, you will see the camel’s-eye view move with you, as if you were there in real time. You can also zoom in and out of these video clips. The next step will be doing this in real time, and in 3D. Imagine watching NASSAU and being able to jump between the camera views from each car and anywhere along the track. This will surely make many sports more engaging; imagine being able to view a diving competition from the diver’s point of view. It would be cool to have several of these aboard the space station, on board rovers on the surface of Mars, the Moon, space probes, etc. Imagine seeing the asteroid impact on Jupiter in high res.
That should read NASCAR, lol.
Yay!
In the surveillance world, there are a number of multi-megapixel cameras that allow a virtual “PTZ” output at some lower resolution than the native resolution of the sensor. In other words, PTZ with no moving parts. I can see a system like this being valuable for military/security applications–not just scientific needs. Also, the author’s example of pointing such a camera at a football field would not necessarily be overkill if the operator specifies a smaller (HD) Region Of Interest, and only an HD ROI is output. One could zoom or pan to any ROI instantly, perhaps with real-time object tracking. Conceivably, even multiple simultaneous outputs could be supported.
On the other hand, I wonder what type of capturing system could handle the entire sensor output in order to do after-the-fact analysis.
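To make the virtual-PTZ / region-of-interest idea above concrete, here is a minimal sketch in Python with NumPy; the frame dimensions are hypothetical, and a real camera would perform this crop on-sensor or in its on-board processing rather than hauling the full frame into host memory:

```python
import numpy as np

# Hypothetical full frame from a ~182 MP, 16:9 sensor (18000 x 10125 pixels).
# np.zeros is only a stand-in for real sensor data.
FULL_W, FULL_H = 18000, 10125
ROI_W, ROI_H = 1920, 1080

frame = np.zeros((FULL_H, FULL_W), dtype=np.uint16)

def extract_roi(frame, center_x, center_y, roi_w=ROI_W, roi_h=ROI_H):
    """Return an HD-sized crop centered on (center_x, center_y), clamped to the frame."""
    x0 = min(max(center_x - roi_w // 2, 0), frame.shape[1] - roi_w)
    y0 = min(max(center_y - roi_h // 2, 0), frame.shape[0] - roi_h)
    return frame[y0:y0 + roi_h, x0:x0 + roi_w]

# A no-moving-parts "pan" is just a different crop each frame; several
# simultaneous HD outputs would simply be several crops of the same frame.
midfield_view = extract_roi(frame, center_x=9000, center_y=5000)
print(midfield_view.shape)  # (1080, 1920)
```

The point of the sketch is only that a virtual PTZ or multi-output system is cheap once the full frame exists; the hard part is reading out and storing that frame in the first place.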
Imagine the bombing in Bangkok the other week, when they zoomed in the picture and got a grainy, bad image of the suspects/culprit. With these cameras you could zoom in and get the person’s fingerprints along with their face, and maybe lip-read exactly what they are saying at the time.
I’m holding out for 36K.
Why, because you’re scared?
No, because movies write and direct themselves at that resolution. And ultra-sky-high resolution makes all the difference in telling a great story.
Oh wait, it doesn’t. (When will the pixel madness end?)
It means they can take in more of a scene and then crop to whatever part of the scene looks best, so the camera never misses anything because it wasn’t pointed at the right place at the right time. They do it now on TV sometimes (usually on reality shows) when they zoom into a reaction shot digitally during editing, but the resolution looks terrible. These cameras would ensure the best resolution no matter what you zoom into.
I’m a DP & VFX Supervisor with 30 years’ experience, so yes, I know. But good directors will get it right 99.99% of the time the first time. Blowing up images is sometimes a compromise, even if no resolution hit is taken. Digital post, though, makes all things fairly fixable. My point was “it’s not the resolution that’s important, stupid.” (That’s a “stupid” directed in general, not at you.)
There’s also an extreme overhead price paid for such images at the resolution, color space and bit rates required. Most high end computer workstations can barely deal with 4K at the moment. And Moore’s Law has slowed down a bit.
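To put a rough number on that overhead, here is a quick back-of-the-envelope calculation; the frame size, frame rate, and bit depth are assumptions for illustration, not Forza's published specifications:

```python
# Rough raw data-rate estimate for a sensor "approaching 200 megapixels" at 60 fps.
# The 12-bit readout is an assumption; actual bit depth and any on-chip
# compression would change these numbers considerably.
pixels         = 18000 * 10125   # ~182 MP, an assumed 16:9 frame
fps            = 60
bits_per_pixel = 12

bytes_per_second = pixels * fps * bits_per_pixel / 8
gigabytes_per_s  = bytes_per_second / 1e9
terabytes_per_hr = bytes_per_second * 3600 / 1e12

print(f"Raw readout: ~{gigabytes_per_s:.0f} GB/s, ~{terabytes_per_hr:.0f} TB per hour")
# Roughly 16 GB/s, or close to 60 TB per hour before compression, which is
# why nobody in production wants to deal with this data stream just yet.
```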
Lol!
Then you can make a whole movie out of the crop of one scene.
I feel bad for all those overly made-up women in broadcast news and their greasy lips; they are going to look worse than they do today…
Haha, even more paranoid, what with botox and lip filling stuff. Bordering on paranoia already, never mind with 18k!
Overkill indeed. Isn’t the human eye only able to decipher 3K? Data mining for visual effects work I understand. Pointing a camera at some Raytheon-built military appliance for R&D makes sense, but storytelling? I dunno. Where is all this going?
I haven’t heard that but I’ve heard similar comparisons. My problem with all of these attempts to quantify what the human eye sees is… Batman. Remember the Dark Knight? They shot most of the movie in 35mm (about 3.5-4k depending on who’s counting) with special sequences in Imax (about 9ish k). In a true Imax theatre (not that faux 4k but really 2k Liemax as mentioned in the earlier comment), the 35mm sequences were projected on a fraction of the screen and the Imax sequences were projected in full-screen glory. The 35mm sequences (or 3.5k sequences if you will) looked positively muddy and blurry by comparison, even per square foot of screen. In other words, even with the 35mm footage “shrunk” down or not stretched out to fit a giant screen, the faces and landscapes just couldn’t hold the detail of the same patch of screen from one of the Imax sequences. I suspect the eye/brain combo is constantly scanning, focusing on different parts of the image, compiling over a series of scans a much higher resolution image than we give it credit for. Therefore, as weak and poorly designed as the human eye is compared to other creatures, for an immersive cinema experience (whether on a 6 story screen or on some sort of goggles) the optimal is still Imax or better. So why want even more resolution than Imax? Imagine some day 10 or 15 years from now (or much sooner, you never know) when CPUs/GPUs/SSDs allow, a filmmaker can fine-tune a problematic composition months later in post, cropping, panning, scanning, deep into the picture and still have better than Imax resolution. That would be/will be incredible. I love overkill.
Great post!
I struggle to see 2k!
The first thing I thought was–how pathetic the new Lie-Max 4k camera will look projected at Lie-Max 2k. Overkill or not, I’d love to see a serious epic filmmaker project this on a six story screen.
It’s more about cropping what they film to get the best framing.
in 25 or 30 years it will be 200 gigapixels and then 200 terapixels, and up and up.
Japan is preparing to commence 8K broadcasts in 2016. While we debate the relative merits of 4K we are being passed by.
Japan will be colonizing Mars while we’re still debating Universal Single-Payer Healthcare…
I’ll wait to buy an 18K camera for $200.
There’s only so much the human eye can focus on!
You’re overlooking genetically-engineered Humans of the future.
And that Mummy chick from the Tom Cruise reboot who can like multiply her pupils from like 2 to 4 (possibly more)… Though why you’d want or need to, who knows?
Old post, I know, but can you imagine what technology will be like in 20 years, going by the rate of advancement currently? 50k in the future, or something even more radical? Eyeballs won’t take it obviously or perhaps they might explode? I can only imagine what we will be filming and viewing on the big screen then. It is getting seriously crazy, but I’m loving getting carried away by it all! What do you think, at a guess?
Why is this even needed? Granted, I can understand that 4K is imperceptible at viewing distance and that immersive virtual reality may require 8K, but is there anything to be gained from 18K? Is there some specific type of content that 8K video won’t work for?
As a camera I can understand its benefit in allowing us to zoom around without loss of perceptive quality, but I can’t imagine a screen ever needing to display the full 18K resolution captured by such a camera.
yes, 8K video is not nearly enough for near-field immersive 3D VR. It’ll take decades and many times higher resolution before we can deliver that “you are there” effect without illusion-breaking artifacts.
Decades? Are you sure about that? You do realize how fast technology is developing, right? Now virtual reality is being developed to optimize by tracking eye movements to put the highest resolution where our focus is, just like how real vision works. All you really need to do is have a screen capable of creating the resolution required and a graphics engine capable of producing the image in just a portion of the screen. I really don’t think that’s going to take decades to figure out.
I’m not sure about the time estimate, but I know that 1440p screens placed a few inches from my eye (in VR headgear) don’t look anything close to reality, more like seeing a grainy low-res digital image through a mesh screen. I’ve heard experts say 32k resolution or higher would be needed, optimistically speaking. If my eye was freely moving around a 270 degree fixed screen, how would you ensure a minimum PPI resolution where I’m looking? Wouldn’t the whole screen have to be high PPI? Maybe if you were projecting lasers onto my retina, or wearing a high-res contact lens? But those DO both sound decades away. Also, it’s not just the super-k resolution of the display, but the entire ecosystem of content production and distribution that would have to catch up.
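For anyone who wants to put numbers on that intuition, here is a rough pixels-per-degree estimate; the field-of-view, panel, and acuity figures below are assumed round numbers, not measurements of any particular headset:

```python
# Rough pixels-per-degree (PPD) estimate for a VR display.
# Assumed figures: ~100 degrees of horizontal field of view per eye,
# a 2560x1440 panel split between two eyes, and ~60 PPD as an
# often-cited approximation of 20/20 visual acuity.
fov_deg        = 100    # assumed horizontal field of view per eye
panel_width_px = 1280   # half of a 2560x1440 panel, per eye
target_ppd     = 60     # assumed acuity target

current_ppd       = panel_width_px / fov_deg
needed_px_per_eye = target_ppd * fov_deg

print(f"Today: ~{current_ppd:.0f} pixels per degree")
print(f"Target: ~{needed_px_per_eye:,} horizontal pixels per eye for {target_ppd} PPD")
# About 13 PPD today versus ~6,000 horizontal pixels per eye at 100 degrees,
# and the required pixel count only grows as the field of view widens.
```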
Pretty sure your eyes can’t even move 180 degrees, let alone 270 degrees, without some further evolution…??
You’re a camera?
Jesus, 18K. The cinema screen will have to be like 10 times bigger.
That’s not how that works. It’s just more “blocks,” or pixels, in one area. Like how video games use more polygons to make their 3D characters more realistic, more megapixels just add more, smaller colour pixels into the same area, making the video clearer.
Now you WILL be able to see every single hair in Sly Stallone’s ear in crystal clear Hi-Def at 120 FPS!
It will just have to be a bigger and better projector.
Introducing Virtual Reality….
I’m saving my money for 128k.