Let's state the obvious upfront: we are witnessing a hardware tsunami. The cost of developing hardware has gone down, the computational power accessible even on mobile platforms has gone up, connectivity is widespread, and amazing things are being done at the intersection of hardware + software + internet.
One of the areas that has benefited tremendously from this revolution is the field of imaging. Over a few short years we have gone from traditional 2D photography to advanced digital imaging, 3D imaging, augmented reality, and virtual reality. Cameras are now able to capture a tremendous amount of information at very low cost, leading to amazing innovations both in image production/processing and in mining images for data. At Lux, we have invested heavily in the computational imaging space with our investments in MatterPort (3D scanning), Planet Labs (satellite imaging), CyPhy (aerial/drone imaging), LensBricks (computational imaging), Orbital Insight (image analytics), and AltSpace (virtual reality).
But one area that I feel we are only just starting to recognize as almost equally important is sound. I am really interested in what I have been calling ‘Computational Sound’. Sound is present and adds color/data wherever we capture images (consider all the examples I listed above and how each would benefit from capturing the right sound for its application). But we have taken it for granted, or have had very complicated and expensive ways of working with it. I am interested in how we capture sound, manipulate it, analyze it, play it, and share it. To capture smart sound in an otherwise noisy environment, we need better sound collection devices. We need better microphone technology, but we also need to be able to triangulate sound from multiple sources to paint a more holistic 3D view of sound, or to capture sound only from a certain direction or from a certain distance away. We should be able to create a 3D sound stage using computational techniques (perhaps with simple hardware accessories if needed). We should be able to project sound in certain directions, selectively cancel sound in certain applications, and provide immersive experiences in sound not dissimilar from immersive experiences in virtual or augmented reality. Let me give just some examples:
- canceling the drone's own noise in your ‘follow-me’ drone video
- capturing faint sound from an interesting on-camera subject despite noise from other sound sources
- projecting surround, 3D, or holographic sound into spaces; creating elaborate sound stages across devices and environments
- optimizing sound in cars without using 6-7 speakers; canceling ‘highway noise’; a better hands-free phone experience
- immersive sound in VR, including dynamic movement of sound with the scene in view
- new musical instruments and synthetic sound
- simplified, accessible sound engineering and encoding for consumer apps and games across devices and platforms
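The "capture sound only from a certain direction" idea above is classically done with microphone-array beamforming. As an illustrative sketch only (not any particular product's method), here is a minimal delay-and-sum beamformer in Python/NumPy: each microphone channel is delayed so that wavefronts arriving from the chosen direction line up in time, then the channels are averaged, reinforcing the target and diluting off-axis noise. The function name, the plane-wave assumption, and the rounding of delays to whole samples are all simplifications for clarity.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer a microphone array toward a source by delay-and-sum beamforming.

    signals:       (n_mics, n_samples) array of synchronized recordings
    mic_positions: (n_mics, 3) microphone coordinates in meters
    direction:     unit vector pointing from the array toward the source
    fs:            sample rate in Hz
    c:             speed of sound in m/s (default: air at ~20 C)
    """
    n_mics, n_samples = signals.shape
    # For a plane wave, mics further along `direction` hear the source earlier.
    # Projecting each mic position onto the look direction gives its time lead.
    delays = mic_positions @ direction / c
    delays -= delays.min()  # hold each channel back to match the latest arrival
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))  # crude integer-sample delay
        out[shift:] += sig[:n_samples - shift or None]
    return out / n_mics  # average so on-axis amplitude is preserved
```

A real system would use fractional-delay filters and adaptive weights rather than integer shifts, but the geometry (project mic positions onto the look direction, compensate the arrival-time differences, sum) is the core of directional capture.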
As I have dug into this over the last few weeks and months, I have realized there are amazing scientists and engineers innovating in the space…but what we need now is entrepreneurs to focus on it and bring exciting companies to life. I would love to hear from anyone working in this space, and hopefully to become a partner in building a great company here. Contact me at firstname.lastname@example.org.