Creating a super-reality video experience
We have developed a wide range of signal processing technologies for video products, including super-resolution processing that converts various video formats into 4K/8K quality, noise reduction, tone and color conversion, and motion-blur elimination. These technologies bring high resolution, high dynamic range, and a wide color gamut to video of widely varying formats and quality, and they compensate well for degradation in spatiotemporal resolution, tone, contrast, and color caused by noise, data compression, and other factors. Building on this fundamental high-quality imaging technology, we aim to deliver our own distinctive sense of texture and reality while continuing to improve new video experiences.
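As a rough illustration of the kind of processing stages involved, the following minimal sketch upscales and denoises a single frame using classical OpenCV operations. It is only an assumption-laden stand-in: the actual processing described above relies on proprietary, often learned, spatiotemporal methods, and the file names and parameter values here are hypothetical.

```python
# Minimal sketch of upscaling + noise reduction with classical OpenCV operations.
# This is NOT the proprietary super-resolution pipeline described in the text;
# bicubic interpolation and non-local means stand in for learned processing.
import cv2

def upscale_and_denoise(frame_bgr, scale=2):
    """frame_bgr: HxWx3 uint8 image (BGR); scale: integer upscaling factor."""
    h, w = frame_bgr.shape[:2]
    # Bicubic interpolation as a simple stand-in for learned super-resolution.
    upscaled = cv2.resize(frame_bgr, (w * scale, h * scale),
                          interpolation=cv2.INTER_CUBIC)
    # Non-local means denoising as a simple stand-in for temporal noise reduction.
    denoised = cv2.fastNlMeansDenoisingColored(upscaled, None,
                                               h=3, hColor=3,
                                               templateWindowSize=7,
                                               searchWindowSize=21)
    return denoised

if __name__ == "__main__":
    frame = cv2.imread("input_frame.png")      # hypothetical input file
    cv2.imwrite("output_frame.png", upscale_and_denoise(frame))
```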
Visual codecs, used for data compression of 2D/3D video, are indispensable for distributing large amounts of video data over the Internet and for recording and storage. Sony has contributed to international standardization in MPEG and has developed customized codec technology for each of its products. With the spread of new video formats such as 8K, VR, and free-viewpoint video, the amount of visual media data will continue to increase. We are developing implementations of VVC, the latest video coding standard, which achieves the highest compression efficiency to date. We are also developing codecs for volumetric video formats such as point clouds and CG meshes, which enable new video experiences.
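For context, the sketch below shows how a standards-based encoder is typically invoked in practice, using FFmpeg's widely available libx265 (HEVC) encoder. A VVC encoder plays the same role with higher compression efficiency, but its availability depends on the FFmpeg build, so this is only an illustration of the encode step with hypothetical file names, not the product codec technology described above.

```python
# Minimal sketch: compressing a video with a standards-based encoder via FFmpeg.
# libx265 (HEVC) is used because it is widely available; this is illustrative
# only and does not represent the customized codecs mentioned in the text.
import subprocess

def encode_hevc(src_path, dst_path, crf=28):
    """Compress a video file with HEVC at the given constant rate factor."""
    cmd = [
        "ffmpeg", "-y",
        "-i", src_path,          # input video (any format FFmpeg can read)
        "-c:v", "libx265",       # HEVC software encoder
        "-crf", str(crf),        # quality target: lower = better quality, larger file
        "-preset", "medium",     # speed / compression-efficiency trade-off
        dst_path,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_hevc("capture_8k.mp4", "capture_8k_hevc.mp4")  # hypothetical file names
```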
We are developing a multicamera system that achieves high performance by placing multiple sensors with different characteristics in parallel. In recent years, the digitization of 3D information in the real world, the so-called digital twin, has been pursued in various fields, and the use of depth sensors that can measure the distance from sensor to subject has begun to spread. By combining a depth sensor with a conventional camera, we can acquire accurate three-dimensional information more easily from images and depth maps captured at multiple viewpoints. In addition to technology for detecting corresponding points between sensors, we are also developing fusion technology that combines data from multiple sensors, each with different characteristics.
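The following minimal sketch shows one basic step of combining a depth sensor with a conventional camera: back-projecting a depth map into a colored 3D point cloud using pinhole intrinsics. The intrinsic values are illustrative, and the depth map is assumed to be already registered to the color image, which a real fusion pipeline must itself solve.

```python
# Minimal sketch: depth map + registered RGB image -> colored 3D point cloud.
# Intrinsics (fx, fy, cx, cy) are illustrative values; real systems calibrate them.
import numpy as np

def depth_to_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """depth_m: HxW depth in meters; rgb: HxWx3 image registered to the depth map."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                         # drop pixels with no depth reading
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)   # Nx3 in meters
    colors = rgb[valid]                                          # Nx3
    return points, colors

if __name__ == "__main__":
    depth = np.random.uniform(0.5, 3.0, (480, 640))               # synthetic depth
    rgb = np.random.randint(0, 255, (480, 640, 3), np.uint8)      # synthetic colors
    pts, cols = depth_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(pts.shape, cols.shape)
```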
These sensing technologies understand the 3D real world around our users and devices by determining their position, orientation, and distances to the surroundings, and by integrating the results of multiple observations. We are working on 3D computer vision technologies such as depth estimation, visual SLAM, and 3D modeling algorithms in cameras. These technologies can potentially be utilized in a broad range of Sony business areas, from mobile and gaming AR to robot navigation. Our goal is to achieve the highest level of performance in the world not only by developing algorithms but also by linking them tightly to our proprietary image sensors.
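As one concrete piece of a visual-SLAM front end, the sketch below estimates the relative pose between two camera frames from feature matches using standard OpenCV calls. The intrinsic matrix K is a placeholder, and a complete system adds tracking, mapping, and loop closure on top; this is only an illustrative fragment, not the algorithms described above.

```python
# Minimal sketch of a visual-SLAM front-end step: relative pose from two frames.
# Standard OpenCV calls only; K is a placeholder intrinsic matrix.
import cv2
import numpy as np

def relative_pose(img1_gray, img2_gray, K):
    """Estimate rotation R and unit-scale translation t of frame 2 w.r.t. frame 1."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1_gray, None)
    kp2, des2 = orb.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robustly estimate the essential matrix, then decompose it into R and t.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```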
Free-viewpoint video technology captures the real world as 3D data, enabling the video to be viewed from any desired viewpoint. This technology comprises two capture methods: omnidirectional (inside-out) visualization and arbitrary-direction free-viewpoint (outside-in) visualization. Omnidirectional visualization currently offers 3 degrees of freedom; we are developing a version with 6 degrees of freedom that adds 3 degrees of translational freedom, as well as volumetric capture, which captures a specific area of space to achieve arbitrary-direction free-viewpoint visualization. We are also applying the video and imaging technology we have accumulated so far to create photorealistic expressions that appear to be real photographed content despite actually being computer graphics.
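To make the "view from any desired viewpoint" idea concrete, the sketch below projects a captured 3D point cloud onto a virtual camera placed at an arbitrary position and orientation. Real free-viewpoint systems render meshes or other richer representations with proper occlusion handling; R, t, and K here are user-chosen virtual-camera parameters, not values from any actual system.

```python
# Minimal sketch of free-viewpoint rendering: project a point cloud into a
# virtual camera at an arbitrary pose (R, t) with intrinsics K.
import numpy as np

def render_points(points, colors, R, t, K, width=640, height=480):
    """points: Nx3 world-space points; colors: Nx3 uint8; R, t: virtual camera pose."""
    cam = points @ R.T + t                 # world -> virtual camera coordinates
    in_front = cam[:, 2] > 0.1             # keep points in front of the camera
    cam, colors = cam[in_front], colors[in_front]
    proj = cam @ K.T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    image = np.zeros((height, width, 3), np.uint8)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # Draw far points first so nearer points overwrite them (crude depth ordering).
    order = np.argsort(-cam[ok, 2])
    image[v[ok][order], u[ok][order]] = colors[ok][order]
    return image

if __name__ == "__main__":
    pts = np.random.uniform(-1, 1, (10000, 3)) + np.array([0.0, 0.0, 3.0])
    cols = np.random.randint(0, 255, (10000, 3), np.uint8)
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    img = render_points(pts, cols, np.eye(3), np.zeros(3), K)   # front-facing viewpoint
```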
Recently, considerable attention has focused on computational photography, a field in which new features are devised by changing and controlling the materials and characteristics of imaging systems (optics, lighting, and sensors). We have combined Sony’s proprietary imaging-signal processing technology with our original polarization image sensor, multispectral image sensor, and lensless camera to offer new features such as highly accurate acquisition of shape data, measurement of the activity levels of shrubs and vegetation, and ultra-thin, ultra-wide-angle image-capture devices.
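As one small example of this kind of computation, the sketch below derives the degree and angle of linear polarization from four intensity images captured at 0/45/90/135-degree polarizer orientations, the kind of data a polarization image sensor provides. The Stokes-parameter formulas are standard; the inputs are assumed to be linear intensities already separated per orientation, and this does not represent the proprietary processing described above.

```python
# Minimal sketch: degree and angle of linear polarization from four
# polarizer-orientation images (0, 45, 90, 135 degrees) via Stokes parameters.
import numpy as np

def linear_polarization(i0, i45, i90, i135, eps=1e-6):
    """Each input: HxW float array of linear intensities for one polarizer angle."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization (rad)
    return dolp, aolp
```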