Smartphone cameras are so good these days it can seem almost churlish to yearn for improvements. And
yet enhancements continue to shave away at imperfections as engineers turn the screw to optimize
multiple aspects of image capture hardware and software.
To wit: U.K.-based startup Spectral Edge has developed a mathematical technique for improving
photographic imagery, blending data captured by a standard camera lens with an infrared shot of the
same scene in order to enhance the depth and color of the photo.
Its computational photography technique, called Phusion, works especially well for sharpening detail in
shots taken on hazy days or when elements in the scene have been overexposed, according to
managing director Christopher Cytera.
“It’s bringing extra detail into the picture that you can’t necessarily see with a normal camera. Because
infrared penetrates through mist and fog much better than visible light. And so when you have a picture
with a little bit of mist, little bit of fog you get a much more stunning effect,” he tells TechCrunch.
“The secret sauce is being able to combine… the infrared with the visible light picture in a way that’s
pleasing,” he adds. “There’s been other techniques to combine the two in the past but they don’t end up
with pictures that are nice to look at.”
For example, military weapons systems have long used infrared to enhance visibility when identifying
targets. But in that case the resulting imagery is enhanced only in a utilitarian sense — i.e.
to help identify targets. Phusion is designed to serve up better, natural-looking pictures. You can see
some before and after examples on its website.
The hard maths underlying the technique involves mapping the rate of change across the
entire scene using differentiation calculations, says Cytera. “In simple terms we do it by transforming the
pictures into what’s called the gradient space. What we’re doing is we’re differentiating every pixel with
respect to every other pixel and color in multiple dimensions. And what that does is it preserves every
single gradient — all the gradients and edges in the picture are perfectly preserved.
“Whereas other techniques try to blend pictures and you end up with blurred effects as a result — you
lose edge definition. We don’t do that because we differentiate all these pixels.”
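To make the gradient-space idea concrete, here's a minimal sketch in Python of how this kind of gradient-domain fusion can work in general. This is an illustrative toy, not Spectral Edge's patented algorithm: it takes the stronger of the two images' gradients at each pixel (so edges are preserved rather than blurred away), then reconstructs an image matching that fused gradient field by iteratively solving a Poisson equation. The function name and parameters are invented for the example.

```python
import numpy as np

def fuse_gradients(visible, nir, iters=200):
    """Toy gradient-domain fusion (NOT Spectral Edge's actual method).

    Both inputs are 2-D float arrays in [0, 1]. At each pixel we keep
    whichever image's gradient has the larger magnitude, then recover an
    image whose gradients approximate that fused field by running Jacobi
    iterations on the Poisson equation laplacian(f) = div(fused gradients).
    """
    def grads(img):
        # Forward differences; duplicate the last row/column so shapes match
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    vx, vy = grads(visible)
    ix, iy = grads(nir)

    # Keep the stronger edge at each pixel -- this is what preserves
    # edge definition instead of blurring the two inputs together
    pick_nir = (ix**2 + iy**2) > (vx**2 + vy**2)
    gx = np.where(pick_nir, ix, vx)
    gy = np.where(pick_nir, iy, vy)

    # Divergence of the fused gradient field (backward differences)
    div = (np.diff(gx, axis=1, prepend=gx[:, :1])
           + np.diff(gy, axis=0, prepend=gy[:1, :]))

    # Jacobi iterations on the interior; boundary stays at the visible image
    f = visible.copy()
    for _ in range(iters):
        f[1:-1, 1:-1] = 0.25 * (f[2:, 1:-1] + f[:-2, 1:-1]
                                + f[1:-1, 2:] + f[1:-1, :-2]
                                - div[1:-1, 1:-1])
    return np.clip(f, 0.0, 1.0)

# Demo: a flat "hazy" visible frame plus an NIR frame containing a sharp edge
vis = np.full((16, 16), 0.5)
nir = np.zeros((16, 16))
nir[:, 8:] = 1.0
fused = fuse_gradients(vis, nir)  # the edge from the NIR frame carries over
```

A production system would use a fast Poisson solver and a perceptually tuned blend rather than a hard per-pixel max, but the core idea is the same: fuse in gradient space, then integrate back to an image.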
The technique can also have the side-effect of airbrushing/beautifying photos of people, because the
infrared filter is able to reduce the appearance of blemishes on human skin. So not only sharper
landscapes but slicker selfies too — or at least that’s the promise.
The technique can also apparently enhance low light imagery — a perennial holy grail of smartphone
makers, given the inevitable light capture constraints on small sensors.
Spectral Edge, which holds patents on the tech, was founded at the University of East Anglia in early
2011, before being spun out in March 2014. It’s announcing a new tranche of funding today — £1.5
million, following its $300,000 seed back in 2014 — with the aim of commercializing the tech. Investors
in the latest round are IQ Capital and Parkwalk Advisors, along with angels from Cambridge Angels and
the Cambridge Capital Group.
Spectral Edge’s business model is to license its IP to device makers. So the funding will be used to develop
the IP product so it’s ready for licensing, along with further development and testing of the core technology.
Cytera reckons the first commercial deployments could arrive within 18 to 24 months, likely on high-end
professional cameras initially before trickling down to smartphone hardware as the required
sensors become cheaper to produce in volume.
He confirms the startup is talking to some smartphone makers now but won’t name any names at this stage.
An infrared sensor would be needed on a mobile device for the technique to function but Cytera notes
that some smartphones already have the necessary hardware — such as Google’s Tango device, which
uses an infrared sensor for gestures. Future mobile phone camera sensors could also incorporate
infrared into their sensor mix so that a device with a single lens could grab the necessary image data,
without a handset needing to have two camera lenses, he adds.
“In the long run we think this could be in every single phone with no real cost penalty except a bit of
software,” he says. “Maybe [in] five years. Which is what happened with HDR.”
Although, at this stage, there’s plenty more work to do to pave the way for infrared-enhanced shots as
standard, with Cytera noting the tech does not yet function in real time on a mobile device.
An earlier route to market than even pro cameras could be security cameras, in Cytera’s view. He notes
the technique also works for enhancing video, so it could be used to improve detail in CCTV footage.