Cutting Edge

Will Sony's automotive CMOS image sensor be a key to autonomous driving?

Nov 29, 2019

Sony announced the commercialization of automotive CMOS image sensors in 2014, making it, so to speak, a latecomer to this industry. What are the strengths and the essence of Sony products designed to make a breakthrough? We asked three frontline players who are driving Sony's major advances in this field.

Profile

  • Yuichi Motohashi

    Automotive Sensor
    Sales & Marketing,
    Sony Europe B.V.

  • Satoko Iida

    Research Division 1,
    Sony Semiconductor Solutions Corporation

  • Naoya Sato

    Automotive Business Division,
    Sony Semiconductor Solutions Corporation

Like a cocoon that wraps around a car

──First, what role do you play in the research and development of automotive image sensors?

Naoya Sato: As a member of the image quality group, I study and evaluate image quality standards for image sensors. I used to be in charge of sensors for surveillance cameras, and because surveillance cameras from many manufacturers are commercially available, it was relatively easy to compare Sony's sensors with those of other companies. Automotive cameras, by contrast, are difficult to compare and evaluate: even when the performance is good, there is no established evaluation method, so we cannot easily demonstrate our advantages. We are always thinking about how to create a yardstick that proves our superiority.

Satoko Iida: I'm in charge of pixel design. Previously, I was responsible for the IMX390, Sony's first automotive CMOS image sensor. Currently, I am in charge of pixel design for the next-generation automotive image sensors and am engaged in a joint development project with a major automobile parts manufacturer.

Yuichi Motohashi: I am in charge of product planning and of negotiating specifications with customers. My team and I listen to customers' needs, turn them into plans, and determine the specifications; Iida-san and Sato-san then design the products and coordinate product development. We divide the responsibilities but cooperate closely in a streamlined manner.

──What is the development cycle for automotive image sensors?

Iida: The image sensor development cycle is two to three years, but compared with other applications it takes longer for those sensors to actually be integrated into cars on the market.

Motohashi: In fact, the negotiations we're having right now are for cars that will hit the market in five years.

──What is the concept behind the development of Sony's automotive image sensors and what is the advantage when compared with other companies' sensors?

Motohashi: First of all, regarding the development concept, our former President and CEO Kazuo Hirai gave a speech at CES (the world's largest consumer electronics trade show) in 2018 and introduced the concept of the "Safety Cocoon": by monitoring all directions with cameras, the whole car is protected as if wrapped in a cocoon. That concept is the goal we are aiming for at this point.

If autonomous driving becomes widespread in the future, we will have to monitor the front, the sides, the rear, and the interior of the car, so I think the concept meets customers' needs.

Iida: While building on "low illumination characteristics," the core competence Sony has cultivated over many years, we developed Sony's original pixel architecture based on the "dynamic range expansion technology with single exposure," which is strongly demanded for automotive image sensors. I think this technology is unbeatable.

──Could you tell me more about the "requirements for automotive image sensors"?

Motohashi: First, a high dynamic range (HDR) is required. This is technology that avoids crushed blacks and blown-out highlights (halation) when capturing scenes that range from very dark to very bright. Another requirement is a countermeasure against flicker. Flicker is a phenomenon in which LED light sources appear to flash on and off in the captured image. LED light sources are actually switched on and off rapidly, so if the shutter timing does not coincide with the emission period, the light appears to be off. If a car cannot recognize that a red light is on and crashes into another car or a pedestrian, that would be a serious problem.

In order to eliminate flicker, the exposure must be made longer. However, a longer exposure easily causes halation. In short, the two requirements are a trade-off that is hard to reconcile.
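
As a rough illustration of this timing problem only (the numbers are invented and this is not Sony's method), the following sketch models a pulse-driven LED and counts how often an exposure window of a given length happens to overlap the LED's on-time. It shows why only an exposure longer than the LED's off-time captures the light in every frame, which is exactly what pushes the exposure toward saturation.

```python
# Illustrative sketch only: a PWM-driven LED and an exposure window starting at
# a random phase. The drive frequency and duty cycle are made-up numbers, not
# Sony specifications or real traffic-light parameters.
import random

LED_PERIOD_S = 1 / 90        # assumed LED drive period (about 11 ms)
LED_DUTY = 0.20              # assumed fraction of each period the LED is lit

def led_is_on(t: float) -> bool:
    """True if the PWM-driven LED is emitting at time t."""
    return (t % LED_PERIOD_S) < LED_DUTY * LED_PERIOD_S

def exposure_sees_led(exposure_s: float, t_start: float, steps: int = 200) -> bool:
    """True if any instant inside the exposure window overlaps the LED on-time."""
    return any(led_is_on(t_start + exposure_s * i / steps) for i in range(steps))

def capture_probability(exposure_s: float, trials: int = 5000) -> float:
    """Fraction of random shutter phases in which the LED is captured as lit."""
    hits = sum(exposure_sees_led(exposure_s, random.uniform(0.0, LED_PERIOD_S))
               for _ in range(trials))
    return hits / trials

for exposure_ms in (0.1, 1.0, 5.0, 11.0):
    p = capture_probability(exposure_ms / 1000)
    print(f"exposure {exposure_ms:>5.1f} ms -> LED captured as lit in {p:.0%} of frames")
# Only an exposure at least as long as the LED's off-time (about 9 ms here)
# captures the light in every frame, but such a long exposure also collects far
# more light from bright scenes, which is the saturation trade-off above.
```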

So, Iida-san and her team are trying to devise a method to "avoid saturating pixels." To do this, it is necessary to develop a pixel that does not fill up even when a large number of electrons are collected. Each company is developing its own approach to this challenge, but we believe our method is the best, and in fact it has been well received by our customers.

──How is it different from the standard HDR?

Sato: When performing HDR processing, a camera usually shoots multiple times at different shutter speeds, which can cause "motion artifacts," or blur, on fast-moving subjects. Such motion errors look unnatural to the human eye and also degrade the values that image recognition algorithms work from. Sony's sensors, however, have their own pixel structure that produces neither motion artifacts nor flicker even when HDR is used. This is the "dynamic range expansion technology with single exposure" that Iida-san mentioned earlier, and it is a major point of differentiation from other companies.
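
To make the motion-artifact mechanism concrete, here is a toy sketch (the one-dimensional scene and merge rule are simplified illustrations, not the processing inside any real sensor) of a conventional two-exposure HDR merge in which a bright object moves between the long and short exposures.

```python
# Toy sketch of why conventional multi-exposure HDR produces motion artifacts.
import numpy as np

FULL_WELL = 1.0  # saturation level of a pixel, in arbitrary units

def capture(scene: np.ndarray, exposure: float) -> np.ndarray:
    """Simulate one exposure: signal scales with exposure time and clips at saturation."""
    return np.clip(scene * exposure, 0.0, FULL_WELL)

def merge_hdr(long_img, short_img, long_exp, short_exp):
    """Naive merge: keep the long exposure unless it saturated, otherwise
    fall back to the short exposure rescaled to the same brightness."""
    scaled_short = short_img * (long_exp / short_exp)
    return np.where(long_img < FULL_WELL, long_img, scaled_short)

# A bright object occupies pixels 2-4 during the long exposure...
scene_long = np.array([0.1, 0.1, 5.0, 5.0, 5.0, 0.1, 0.1])
# ...but has moved to pixels 4-6 by the time the short exposure is taken.
scene_short = np.array([0.1, 0.1, 0.1, 0.1, 5.0, 5.0, 5.0])

merged = merge_hdr(capture(scene_long, 1.0), capture(scene_short, 0.1), 1.0, 0.1)
print(merged)  # [0.1 0.1 0.1 0.1 5.  0.1 0.1]
# Pixels 2-3 saturated in the long frame but were dark in the short frame, so
# the merged image loses part of the object: a motion artifact. A single-exposure
# scheme samples the scene only once, so this mismatch cannot occur.
```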

Performance required at 125℃

──Are there any other requirements unique to vehicles?

Motohashi: One of the major differences from other applications is the ability to operate over a very wide temperature range. The front camera is usually mounted under the rear-view mirror, and the interior of a car gets extremely hot in summer, so the sensor is required by specification to withstand temperatures up to 125℃.

As the temperature rises, the image sensor generates more noise (dark current). The more noise there is, the harder it is to see in the dark. So the key to using sensors at high temperatures is how much that noise can be reduced, and that is where our strength lies.
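
As a rough rule of thumb for silicon sensors in general (the doubling interval below is an assumed generic value, not a Sony figure), dark current roughly doubles for every several degrees Celsius of temperature rise, so the 125℃ requirement multiplies the noise floor enormously compared with room temperature:

```python
# Rule-of-thumb sketch: silicon dark current is often said to roughly double
# every several degrees Celsius. The 7 degC doubling interval is an assumed
# generic value, not a Sony specification.
DOUBLING_INTERVAL_C = 7.0   # assumed doubling interval in degrees Celsius

def dark_current_ratio(t_hot_c: float, t_ref_c: float = 25.0) -> float:
    """Dark current at t_hot_c relative to the reference temperature."""
    return 2 ** ((t_hot_c - t_ref_c) / DOUBLING_INTERVAL_C)

for temp in (60, 85, 105, 125):
    print(f"{temp:>3} degC: ~{dark_current_ratio(temp):,.0f}x the dark current at 25 degC")
# Under this assumption, the sensor at 125 degC sees on the order of 20,000x the
# room-temperature dark current, which is why suppressing noise in the pixel and
# in the manufacturing process matters so much for automotive use.
```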

Sony has its own image sensor manufacturing facilities in Kumamoto and other areas of Japan. Our competitors manufacture their products at outsourced wafer fabs called foundries. Sony, on the other hand, has been developing CCD image sensors since 1970 and has accumulated, in its own factories over roughly half a century, a great deal of know-how, including how to reduce noise generated in the manufacturing process. I think that is one of our enormous strengths.

And this is achieved through close communication between Iida-san's pixel design team and the engineers at the manufacturing sites.

──Do the engineers communicate directly with the manufacturing sites?

Iida: Yes. When prototypes or engineering samples first come out, in particular, we sometimes go to the Kumamoto manufacturing site, weekly if necessary, to evaluate and analyze them or to respond to any problems. I think this practice of engineers visiting the manufacturing sites directly and communicating closely with the people in charge there is another of Sony's strengths.

The camera is "absolutely essential" to autonomous driving

──What perspective do you have when doing the development?

Iida: I mentioned dark-noise reduction and Sony's strengths earlier, and that history shows how Sony has won on the strength of its superior process technology. Today, however, process technologies have become commoditized and it has become difficult to differentiate with them alone. We need to differentiate through pixel architecture and demonstrate superior characteristics.

From the viewpoint of our customers, such as OEMs (automobile manufacturers) and Tier 1 manufacturers (primary suppliers), the automotive image sensor is just a single component, so we try to apply "systems thinking" and keep the overall product in view. Based on this perspective, Motohashi-san's team grasps what users are demanding, and our team proposes plans and implements them.

Motohashi: Our sensor is composed of a light-receiving silicon layer and a circuit-containing silicon layer, stacked together into one unit. Adding various functions to this circuit side is one direction we are heading in. For example, we are thinking of implementing a deep neural network so that the sensor not only outputs the image but also outputs information derived from it, such as whether an object is a person or a car running 300 meters ahead. Today this is called edge AI, and the idea is to do it on the image sensor itself.

From a system perspective, reducing the load on the downstream system and reducing the amount of transferred data will be the next big evolution.
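
As a back-of-envelope illustration of that data reduction (all numbers are hypothetical and chosen only to show the order of magnitude), compare streaming raw frames with streaming only per-object metadata:

```python
# Back-of-envelope sketch of the data-reduction argument. Every number here
# (resolution, bit depth, frame rate, detections per frame, bytes per object)
# is hypothetical.
FRAME_W, FRAME_H = 1920, 1280     # assumed automotive sensor resolution (~2.5 MP)
BITS_PER_PIXEL = 12
FPS = 30

raw_bytes_per_s = FRAME_W * FRAME_H * BITS_PER_PIXEL / 8 * FPS

OBJECTS_PER_FRAME = 20            # assumed detections per frame
BYTES_PER_OBJECT = 32             # e.g. class id, bounding box, confidence
meta_bytes_per_s = OBJECTS_PER_FRAME * BYTES_PER_OBJECT * FPS

print(f"raw video: {raw_bytes_per_s / 1e6:7.1f} MB/s")
print(f"metadata:  {meta_bytes_per_s / 1e3:7.1f} KB/s")
print(f"reduction: ~{raw_bytes_per_s / meta_bytes_per_s:,.0f}x less data to transfer")
```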

Sensor fusion is another initiative we are working on. It is commonly said that to realize a self-driving car, it is essential to combine three things: a camera, radar, and LiDAR (a laser-based ranging sensor). Each of the three has its own strengths and weaknesses, and they complement each other when combined. I think that combining sensors based on different principles is another axis of evolution.

──Does that mean the camera's sensor will become a terminal that collects the data from radar and LiDAR as well?

Motohashi: I don't know whether the camera's sensor will be that terminal, but I think a fusion system that integrates the data coming out of all the sensors and sends the processed results to the downstream system is one direction we should take.

Sato: By combining multiple sensors, sensor fusion can achieve accurate object recognition even in strong sunlight (glare), bad weather, or at night, when recognition with a single sensor is difficult. Without a camera, you cannot obtain color information, such as the color of a traffic light or the white lines on the road, so there is a broad consensus that, of the three, the camera is absolutely necessary. If the most reasonable option for automotive applications is to center the system on the camera, which has the best spatial resolution, and use the other sensors to compensate for its weak points, then I think the idea of aggregating data into the camera's sensor makes sense. Something like an integrated processor may take on that work in the future, but I don't think the camera's importance will change either way.
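
As a toy model of that complementarity (the per-condition reliability figures are invented purely for illustration and are not measurements of real devices), combining sensors with independent failure modes sharply reduces the chance that an object is missed in any one condition:

```python
# Toy model of complementary sensors; all figures are invented for illustration.
RELIABILITY = {
    #            camera  radar  LiDAR
    "daylight": (0.95,   0.70,  0.90),
    "night":    (0.60,   0.70,  0.85),
    "fog":      (0.40,   0.90,  0.50),
    "glare":    (0.50,   0.90,  0.80),
}

def fused_detection_rate(reliabilities) -> float:
    """Chance that at least one sensor detects the object, assuming independent misses."""
    p_all_miss = 1.0
    for r in reliabilities:
        p_all_miss *= (1.0 - r)
    return 1.0 - p_all_miss

for condition, rel in RELIABILITY.items():
    print(f"{condition:>8}: best single sensor {max(rel):.2f}, fused {fused_detection_rate(rel):.3f}")
```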

──How difficult is it to develop an automotive image sensor?

Motohashi: We are in the electronics industry, and our customers have traditionally been camera manufacturers and others familiar with electrical and electronics technologies, but the automotive industry has a different culture and different business practices. When we started the project, I did not even know such differences existed, and once I began to understand that they did, it then took time to get used to them.

Iida: Everything about automotive image sensors is challenging. IATF 16949 and ISO 26262 are international standards for the automotive industry, and they demand system design and reliability at a very high level. Simply reusing an existing architecture is not enough. Rather than just horizontally deploying image sensor technologies developed for other applications, we need to develop and prove technologies unique to automotive use, which is a very difficult challenge for us. And, of course, our sensors are for automotive use and directly tied to people's lives, which makes it all the more demanding.

Our strengths are technological capability and the ability to make proposals

──So far we have talked about automotive cameras for monitoring outside the vehicle, but there is another type: the in-car camera. What roles does it play?

Motohashi: There are two. One is a technology required for what is called Level 3 autonomous driving. Level 3 involves a handover between the human and the system: if the system cannot continue automated driving for any reason, it has to hand control back to the driver safely, and the camera needs to confirm that the driver is watching the road ahead and not looking away.

The other use is to monitor the inside of the car and automatically determine the system's behavior. For example, the camera looks at the passenger seats and adjusts the air conditioning when it recognizes additional occupants, or recognizes the driver's gestures so the system can respond accordingly.

──Is there any possibility that the technology of automotive cameras will be applied outside the car?

Sato: I think so, because it can be used as is for robots and autonomous systems.

Motohashi: The edge AI technology mentioned earlier will be needed in other applications as well, such as surveillance cameras. Authentication applications, such as fingerprint and iris recognition, will also need it, because from a privacy standpoint there is a demand to keep as much data as possible from being transmitted outside the device. I think many industries want to handle data on the edge side.

──What will it take for Sony to become more competitive?

Motohashi: Differentiating our technologies and strengthening our ability to make proposals. If we keep proposing to customers facing problems what our technology can do for them, I think we can move from a component-selling business to a more upstream business.

Iida: In the automotive business, of course, we aim to increase sales and gain market share. Automotive image sensors originally grew out of image sensors for surveillance cameras, which were designed mainly around low illumination characteristics; for automotive sensors we additionally focused on high illumination performance and improved the dynamic range. There is now a growing need for that dynamic range in surveillance cameras in turn, so we hope to feed the technology cultivated for automotive cameras back into them and contribute to increasing sales of Sony's image sensors as a whole.

Motohashi: The elemental technologies we have developed are often used in other categories. Conversely, we use some technologies from the mobile and camera categories. That's one of the advantages of Sony, where various technologies are developed for various purposes.

Iida: Sony's image sensors cover a wide range of categories, including mobile, camera, and surveillance applications, with a global market share of over 50%. In other words, we have customers all over the world, we develop technologies to meet their needs through the persistent efforts of individual engineers, and we can share those technologies across categories, which is a strength no other company can imitate.

Sato: Differentiating the sensor technology itself is important, but if we also improve performance from a system perspective, for example by using wavelengths other than visible light, we can make strong proposals as solutions.

──Before closing, please let us know if you have any particular types of persons in mind that you want to work with as a team member.

Sato: Developing a new technology requires evaluators with an equally high level of technical capability; as the technology improves, the ability to evaluate it becomes just as important. I would like to work with many people who can sustain that kind of relationship.

Iida: This company has a wealth of advanced image sensor technologies and many areas where you can play an active role. There are also many engineers I respect, and many senior and fellow colleagues who are role models I aspire to.

Motohashi: I think people who are interested in a wide range of things are very important. For example, someone who studies or develops semiconductor pixel technology but is also interested in AI, or someone who wants to work on technologies across categories. If people take on many challenges with that kind of motivation and curiosity, the organization will become even stronger and more creative.
