Wednesday 24 February 2010

High Throughput for High Resolution

We've been using the ProSilica/AVT GE4900 recently to capture super-high-resolution 16-megapixel images at about 3Hz.  It's a nice camera, but that resolution demands a lot of performance from the processor.

That works out at about 45MB/sec of raw image data to process.  In order to chew through it all, we've been pushing the raw Bayer mosaic images onto an NVidia GTX260 GPU and performing colour conversion, gamma correction and even the sensor's flat-field correction on the GPU at high speed.  We also use the GPU to produce reduced-size greyscale images for processing and analysis alongside the regular colour-converted image for display.  The ability to process such high-resolution images on the GPU has really made the difference here; without it this application simply would not be possible.
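For the curious, here is a rough CUDA sketch of the kind of per-pixel kernel involved - this one applies the flat-field correction and gamma to the raw Bayer data before demosaicing.  The kernel name, buffer layout and 256-entry gamma lookup table are illustrative assumptions for the example, not our production code.

// Minimal sketch: per-pixel flat-field correction plus gamma, applied to the
// raw 8-bit Bayer mosaic before the demosaic/colour-conversion stage.
__global__ void flatFieldGammaKernel(const unsigned char* raw,      // raw Bayer mosaic
                                     const float*         gain,     // per-pixel flat-field gain
                                     const float*         offset,   // per-pixel dark offset
                                     const unsigned char* gammaLut, // 256-entry gamma table
                                     unsigned char*       out,
                                     int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float v = (raw[idx] - offset[idx]) * gain[idx];   // flat-field correction
    v = fminf(fmaxf(v, 0.0f), 255.0f);                // clamp back to the 8-bit range
    out[idx] = gammaLut[(int)v];                      // gamma correction via lookup table
}

// Typical launch for a 4872x3248 frame (hypothetical host-side call):
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   flatFieldGammaKernel<<<grid, block>>>(d_raw, d_gain, d_offset, d_lut, d_out, width, height);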
Vision Experts

Wednesday 10 February 2010

Interface Acceleration

Machine vision sensors are getting big, and cameras are increasingly available with pixel counts that are truly enormous by historical standards.  Cameras in the 10+ megapixel range seem to be growing in popularity for industrial inspection, possibly driven by the consumer market in which such large sensors are now the norm, partly due to falling prices, and possibly because processing and storing the data is just about feasible these days.

The bandwidth between camera and computer is also increasing, which it needs to.  Already, it seems that a single GigE connection just isn't enough bandwidth for tomorrow's applications.  For example, AVT offer a camera with dual GigE outputs to give 2Gbits/sec of bandwidth.  The CoaXPress digital interface is capable of 6.25Gbits/sec over 50m of pretty much bog-standard coax cable, a capability I find incredible.  Likewise, the HSLINK standard, proposed by DALSA, uses InfiniBand to achieve 2100MBytes/sec.  Most of these standards even permit using multiple connections to double or quadruple the bandwidth.  With all this data flying around, trying to process it on a PC is going to be like taking a drink from a hose pipe.  Or two, or four.

Think about it: at 2Gbits/sec, the computational demand will be 250Mpix/sec (assuming 8-bit pixels).  Using a 3GHz processor core, that's only 12 clock cycles available per pixel.  You can't do a whole lot of processing with that.  Even if you scale up to a quad-core and make sure you use as many SSE SIMD instructions as you can, you still aren't going to be doing anything sophisticated with that data.  It could be like machine vision development 15 years ago, when I remember the only realistic goal was to count the number of pixels above a threshold to take a measurement!
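Spelling that arithmetic out (assuming 8-bit pixels and a single 3GHz core):

\[
\frac{2\,\mathrm{Gbit/s}}{8\,\mathrm{bit/pixel}} = 250\,\mathrm{Mpixel/s},
\qquad
\frac{3\times10^{9}\,\mathrm{cycles/s}}{250\times10^{6}\,\mathrm{pixel/s}} = 12\,\mathrm{cycles/pixel}
\]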


I feel that the new generation of ultra-high resolution cameras streaming data at ultra-high bandwidths is going to require a new generation of processing solutions.  I suspect this will come in the form of massively parallel processors - such as GPUs and perhaps Intel's Larrabee processor (when it finally materialises).


In the meantime, I'm plugging away writing GPU-accelerated algorithms just for format conversion, so that we can even display and store this stuff.
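To give a flavour of what "just format conversion" looks like on the GPU, here is a minimal CUDA sketch that squeezes 16-bit mono pixels down to 8 bits for display.  The 16-bit input format and the simple shift-based scaling are assumptions chosen for the example, not details of the actual code.

// Illustrative sketch only: trivial GPU format conversion from 16-bit mono
// to 8-bit mono, e.g. to feed a display or a compressed store.
__global__ void mono16ToMono8(const unsigned short* src, unsigned char* dst, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels)
        dst[i] = (unsigned char)(src[i] >> 8);   // keep the most significant 8 bits
}

Even something this simple is worth keeping on the GPU, because at these data rates every copy and every per-pixel pass on the CPU eats into those few cycles per pixel.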


Vision Experts

Friday 5 February 2010

GPU Supercluster

I was interested to see this GPU system doing some biologically inspired processing at Harvard.  Whilst I doubt that any practical industrial applications will emerge from this, it does show how inexpensive it can be to build a minor supercomputer.  To quote from their website...


...With peak performance around 4 TFLOPS (4 trillion floating point operations per second), this little 18”x18”x18” cube is perhaps one of the world’s most compact and inexpensive supercomputers....




Vision Experts