As advanced driver assistance systems (ADAS) continue to evolve and become more complex, there is an increased need for effective methods of presenting safety alerts and other important information to the driver. Head-up displays (HUDs) are an emerging solution for consolidating ADAS data into one centralized display that lets the driver keep their eyes on the road while viewing alerts and warnings on the windshield. MEMS technology plays a central role in one emerging type of HUD, which uses laser beam scanning (LBS) technology. We spoke with Jari Honkanen, Director of Technical Marketing and Applications Development at MicroVision, to learn about the latest developments.
MEMS Journal: What are the main trends for HUDs in ADAS applications? How is MEMS a key enabling technology for HUDs?
Jari Honkanen: The main motivation behind HUD technology is improved safety: by overlaying essential information on top of the driving scene ahead, the driver sees the information without taking their eyes off the road. ADAS applications share the same safety goal, automating functions to assist the driver in the driving process. Hence, a HUD is a natural means of safely relaying information collected by ADAS to the driver.
For old-school manual driving, the information a HUD relays to the driver may have included speed, gear and RPM, and GPS navigation. For ADAS-assisted driving, additional information will be relayed to the driver, such as adaptive cruise control status, lane departure warnings, blind spot warnings, obstacle-ahead alerts, and traffic signs.
A MEMS scanning mirror is the core of our laser beam scanning (LBS) display technology, which, together with relay optics, can be used to realize HUD applications. The same MEMS scanning mirror technology can also act as a sensor for ADAS applications; the difference is that IR lasers are used for sensing, whereas RGB lasers are used for display.
MEMS Journal: What are the main benefits of using MEMS based laser HUDs?
Jari Honkanen: A MEMS based laser HUD is optimized for extreme driving environments, including night driving and exposure to direct sunlight. At night, it produces a high-contrast image that creates a clear, see-through display with no background “glow,” resulting in clear visibility of the projected information. During the day, it offers high brightness, brilliant laser colors, and a wide color gamut that keep HUD information readable no matter how bright it is outside. Other benefits include low power consumption, a result of pixel-by-pixel light source modulation that produces light only when needed, and a small form factor that allows flexibility in embedded installation.
MEMS Journal: How does MicroVision’s HUD technology work? How does it compare to other solutions in the market?
Jari Honkanen: Our HUD technology architecture consists of red, green, and blue laser diodes, each with a lens near the laser output that collects the light from the laser. The light from the three lasers is combined with basic optics into a single beam. The intensity of each laser light source is varied to create a complete palette of colors and shades. The beam is then relayed onto a biaxial MEMS scanning mirror that scans the beam in a raster pattern. The projected image is created by modulating the three lasers synchronously with the position of the scanned beam. In the HUD application, specialized relay optics direct the beam of light from the scanning engine, making a virtual image viewable within the driver’s forward-looking field of view.
MEMS based laser virtual image HUD system overview. Image courtesy of MicroVision.
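To make the pixel-by-pixel modulation concrete, here is a minimal Python sketch of the idea. The frame dimensions, names, and the loop itself are our own illustrative choices, modeling the raster-synchronized laser drive described above, not MicroVision's actual pipeline:

```python
import numpy as np

# Hypothetical 640x480 RGB frame buffer, values 0.0-1.0 (names and
# dimensions are illustrative only).
frame = np.zeros((480, 640, 3))
frame[200:280, 280:360] = [1.0, 0.5, 0.0]  # e.g., an amber warning icon

def raster_scan(frame):
    """Walk the biaxial mirror's raster trajectory and yield, for each
    scan position, the drive level of each laser at that pixel."""
    lines, pixels_per_line, _ = frame.shape
    for line in range(lines):
        for px in range(pixels_per_line):
            r, g, b = frame[line, px]
            # In hardware these levels would modulate the R, G, and B
            # laser diode currents synchronously with the mirror position;
            # for black pixels all three lasers stay completely off.
            yield line, px, (r, g, b)

# Example: count how often any laser actually fires for this frame.
lit = sum(1 for _, _, rgb in raster_scan(frame) if any(rgb))
print(f"Lasers on for {lit} of {480 * 640} pixel positions")
```

This is also the source of the power advantage mentioned earlier: light is generated only for the small fraction of scan positions that carry content.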
The main differences between a MEMS laser beam scanning HUD and HUDs based on panel displays are:
1) Laser light sources produce brilliant colors and a wide color gamut, which helps the see-through HUD information stand out from the background scene.
2) MEMS LBS HUDs are high contrast, meaning the lasers are completely off for pixels in the off state. This produces the best see-through display with no background glow.
3) The lasers are directly modulated pixel by pixel, so light is only produced when needed. In panel-based displays, the light sources are always on, even for black pixels, which makes them less power efficient.
4) Panel-based displays require more complex optical designs, which increases size and can reduce reliability.
5) Laser light, due to its coherence and polarization, is collected and relayed through the optical system with lower losses.
MEMS Journal: Will we see augmented reality (AR) on HUDs anytime soon? What are some of the main challenges to implement this?
Jari Honkanen: A few companies have recently demonstrated augmented reality (AR) HUD concepts, and I would not be surprised if we start seeing first-generation versions in high-end luxury vehicles within a few years.
There are several challenges in implementing AR HUDs. For example, in order for the AR HUD to place information at the appropriate position within the driver’s field of vision, it needs to produce a very large field of view (FoV), or even create a full-windscreen display. There is also sensor fusion, the ability to intelligently combine data from multiple sensors, such as cameras, radar, and LIDAR, and then correlate the appropriate information into the display. Driver head and eye tracking must also be considered: to place the AR information at the right position on the display, the system must know where the driver’s vision is focused. Finally, there is the issue of display latency and persistence -- as with any AR display, low latency and low persistence are needed to avoid motion sickness.
MEMS Journal: Why did you choose to form a partnership to co-develop your LBS technology with STMicroelectronics and what does the partnership involve?
Jari Honkanen: We formed a close working relationship with STMicroelectronics over the past several years and recognized common interests and complementary skills in the LBS technology arena. The co-marketing agreement is a framework for us to work collaboratively on sales and marketing of our respective LBS solutions. We are also exploring the possibility of future technology development, including a joint LBS roadmap. Bringing our complementary skills together to grow the market for LBS and the applications that both companies are focusing on makes good sense, and both companies benefit from the relationship. We benefit from ST’s expertise in semiconductor technology and its global customer reach, while ST benefits from our proprietary system, LBS engine and applications knowledge, and intellectual property.
MEMS Journal: Why did you choose MEMS over other technologies to develop your HUD solution?
Jari Honkanen: We got started with MEMS development in the late 1990s. We realized through our work with retinal scanning display (RSD) technology for wearable, augmented reality, head-mounted displays that, in order to miniaturize this technology and make it cost effective, we had to base it on MEMS technology. We also heard from our customers that our LBS technology had wider applications beyond retinal scanning displays. The same requirements for high-quality, daylight-readable, high-contrast display technology that were needed for head-mounted displays are very relevant for automotive HUDs as well. So it made sense for automotive HUDs to be an application we pursued with our LBS technology.
MEMS Journal: Let’s switch topics a bit. There are many companies pursuing LIDAR. What is your take on this ADAS sensor technology and what are the main trends?
Jari Honkanen: LIDAR is one of the key enabling sensor technologies for ADAS, and eventually for self-driving vehicles, that enable cars to “see.” In current prototype systems, a single LIDAR sensor is typically used for long-range environmental mapping and modeling. However, there is an ongoing industry debate over whether camera sensors, radar, or LIDAR is the best technology for this purpose.
Camera sensors can capture video of the environment, and the ADAS system can then use computer vision algorithms to analyze and make sense of it. What’s great about camera sensors is that they can distinguish and classify complex objects such as traffic signs or lane markings, as well as pedestrians or animals. The challenge is that the camera can only see what it can see -- in other words, camera sensors struggle in low light and bright sunlight. Also, the vision algorithms require significant computing power.
When equipped with a radar sensor, the car transmits radio waves and interprets the back reflection. Radar works great for the detection of large objects and can easily calculate speed and distance. It also works in all weather and lighting conditions. However, the challenge with radar is that it cannot distinguish color or differentiate between objects of the same size.
Finally, LIDAR transmits light pulses and interprets the back reflection from objects. The major benefit of LIDAR is that it can classify and detect specific objects and calculate distance. It can also detect things like lane edges, and it works in both dark and light conditions. However, the challenge with LIDAR is that in inclement weather the light can reflect off rain, snow, or fog, reducing the sensor’s effectiveness.
These different sensor technologies have their strengths and weaknesses. Hence, for the foreseeable future, cars will have to rely on a combination of these sensors. This may also be desired for redundancy and safety. The car industry can look to the aviation industry when it comes to redundancy and backup systems for safety.
But beyond a single long-range LIDAR for environment mapping and modeling, we believe that ADAS will have many applications for cost-effective mid-range LIDAR systems. We envision future cars containing multiple mid-range LIDAR sensors performing a variety of ADAS functions, such as blind spot detection, parking assist, and lane assist and departure warnings.
MEMS Journal: How does your LIDAR technology work? And how does it measure up to other competing alternatives in terms of performance and cost?
Jari Honkanen: For LIDAR applications we use our scanning MEMS mirror, but instead of visible-light laser diodes, we use one or several invisible near-infrared (IR) laser diodes. The IR beam is reflected onto the biaxial MEMS scanning mirror, which scans the beam in a raster pattern. Our LIDAR system also contains an IR photodetector that detects reflections of the scanned IR laser beam. Since the speed of light is constant and we know when we emit a specific laser pulse and when we receive its reflection back, we can calculate the distance to the object that reflected the light.
MicroVision’s MEMS based scanning LIDAR system. Image courtesy of MicroVision.
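The time-of-flight arithmetic Honkanen describes is simple enough to show directly. The following Python snippet is an illustrative sketch of that calculation, not MicroVision's firmware; the 200 ns example round-trip time is our own:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the reflecting object from a pulse's round-trip time.
    The pulse travels out and back, so the one-way distance is half."""
    return C * round_trip_s / 2.0

# Example: a reflection received 200 ns after the pulse was emitted
print(f"{tof_distance_m(200e-9):.1f} m")  # ~30.0 m to the object
```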
Competing LIDAR technologies include mechanical scanning LIDAR, non-scanning flash LIDAR, and phased array LIDAR.
When it comes to performance, we believe that our MEMS scanning LIDAR offers significant benefits:
1) Small size: a thin scanning engine, 6 mm wide, enables a new class of form factors.
2) High resolution: we have prototypes that are capturing 5.5 million points per second.
3) Dynamic operation: MEMS scanning allows programmable resolution and frame rate, so the same sensor can perform either a very fast, lower-resolution scan or a slower, high-resolution scan, depending on the application and driving situation (see the sketch after this list).
4) Low persistence: a scanned laser system enables blur-free capture of moving objects.
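As a back-of-the-envelope illustration of that resolution versus frame rate trade-off, here is a short Python sketch using the 5.5 million points-per-second prototype figure quoted above; the frame rates chosen are our own examples, not MicroVision's specifications:

```python
POINTS_PER_SECOND = 5_500_000  # prototype figure quoted above

def points_per_frame(frame_rate_hz: float) -> int:
    """Range points available per frame at a given frame rate, assuming a
    fixed measurement budget: the faster the scan, the coarser each frame."""
    return int(POINTS_PER_SECOND / frame_rate_hz)

# The same sensor trades frame rate against per-frame resolution:
for fps in (10, 30, 60):
    print(f"{fps:>2} fps -> {points_per_frame(fps):,} points per frame")
# 10 fps -> 550,000 points per frame (slow, high-resolution scan)
# 60 fps ->  91,666 points per frame (fast, lower-resolution scan)
```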
MEMS Journal: What is the target price point range for mass-adoption of LIDAR technology?
Jari Honkanen: We believe LIDAR will be one of the key enabling technologies for fully autonomous vehicles, but as I mentioned before, self-driving cars will utilize many different sensors for redundancy and to cover all the environmental conditions a vehicle may encounter. Self-driving cars will likely utilize long-range LIDAR as well as, potentially, several mid-range LIDARs. For long-range LIDAR, the target price point for mass adoption will have to be less than $500, and for mid-range LIDAR the target price point will be less than $100.
MEMS Journal: What are some of the challenges for using MEMS technology in HUDs and LIDAR? What are some of the disadvantages?
Jari Honkanen: The big challenge for using MEMS in automotive HUDs and LIDAR is meeting automotive quality standards. The automotive operating range spec, starting from -40 °C, can be challenging for MEMS devices. So to play in this market, one needs to take these requirements into account from the very beginning, either designing to these specs or implementing other system-level solutions that allow the MEMS-based system to meet these requirements.
Also, the automotive supply chain is well established, and it is very complex and difficult to penetrate. Smaller and new potential technology suppliers really need to find the right established OEM, Tier 1, or Tier 2 companies with whom to partner.
*********************************************
This article is a part of MEMS Journal's ongoing market research project in the area of sensors and electronics for automotive applications. If you would like to receive our comprehensive market research report on this topic, please contact Dr. Mike Pinelis at [email protected] for more information about rates and report contents.
Copyright 2017 MEMS Journal, Inc.