What technologies are automotive chip makers aiming at?

In 2016, chip makers' intent to expand into autonomous driving technology was unmistakable. In the chip segment for autonomous vehicles alone, beyond established players such as Nvidia, Mobileye, NXP and Texas Instruments, many "new faces" have appeared, among them IP provider Ceva, Intel and Qualcomm. Automotive OEMs are welcoming the newcomers with open arms. "This area has suddenly become very lively," Egil Juliussen, director of research for IHS Automotive Infotainment and Advanced Driver Assistance Systems (ADAS), said at CES.

Until now, investors and the media have been enthusiastic backers of the technologies behind autonomous vehicles: sensing, cameras, radar and lidar, mapping, algorithms, deep (and non-deep) neural networks, artificial intelligence, and so on. But for most observers, it is still unclear how these technologies will play out in the evolution of autonomous vehicle design, let alone who will win and who will lose in this war. Mobileye co-founder and CEO Amnon Shashua said he initially thought competitors were deliberately spreading misinformation about these technologies to create a "fog of war." But he now realizes, "People are really confused because they really don't understand."

At this year's CES, Nvidia's "deep learning" technology and Mobileye's map-making technology were the most dazzling stars of the show, and the two companies competed fiercely in ADAS and autonomous driving. Ceva CEO Gideon Wertheizer described the open rivalry between the two powers as a "bridge of investment." Indeed, Mobileye's stock fell nearly 10% shortly after Nvidia's announcement, then rebounded after Mobileye's own press conference at CES.

Mobileye mapping technology

Mobileye's announcement, however, does carry real technical weight. In an interview, Wertheizer discussed Mobileye's newly developed mapping technology, called Road Experience Management (REM), and said it may be "the most threatening" to competing chip suppliers and Tier 1 manufacturers such as NXP, Bosch and Denso. According to Mobileye, REM creates "multi-source real-time data" for precise positioning and high-resolution lane data, an important information layer needed to support fully autonomous driving.

The technology is based on software running on Mobileye's EyeQ processing chips. It captures landmarks and road information at very low bandwidth, about 10 KB for every kilometer of travel (by contrast, Google needs roughly 1 Gbit per kilometer for localization and HD-map building). Mobileye explained that back-end software running in the cloud stitches the fragments of data sent by every car equipped with the software into a global map. Mobileye's visual-interpretation engine, which helps compress the data, should let automakers create their own RoadBooks.
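To put that 10 KB-per-kilometer figure in perspective, here is a minimal sketch in Python of how compactly a landmark observation could be encoded. The field layout (landmark type, offsets, confidence) is entirely hypothetical; Mobileye's actual REM format is proprietary.

```python
import struct

# Hypothetical compact landmark record -- the real REM format is
# proprietary, so this layout is invented for illustration:
#   type (1 B), longitudinal offset in cm (2 B), lateral offset in cm (2 B),
#   height in cm (2 B), confidence (1 B)  ->  8 bytes per landmark.
LANDMARK_FMT = "<BhhhB"  # little-endian: uint8, int16, int16, int16, uint8

def pack_landmark(kind, dx_cm, dy_cm, dz_cm, conf):
    return struct.pack(LANDMARK_FMT, kind, dx_cm, dy_cm, dz_cm, conf)

record = pack_landmark(kind=3, dx_cm=1250, dy_cm=-310, dz_cm=220, conf=200)
print(len(record))                              # 8 bytes per landmark
print(10_000 // struct.calcsize(LANDMARK_FMT))  # ~1,250 landmarks fit in 10 KB
```

At 8 bytes per record, a 10 KB-per-kilometer budget still leaves room for over a thousand landmark observations per kilometer, which is why a vision system that extracts only sparse landmarks (rather than streaming raw imagery) can map roads at a tiny fraction of the bandwidth of full HD-map collection.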

Mobileye's multi-source localization system works only in cars fitted with Mobileye EyeQ chips. In short, "Mobileye is locking in its customers," Ceva's Wertheizer pointed out. Clearly, REM grows more valuable as more cars carrying Mobileye chips reach the road. Shashua believes REM is attractive to automotive OEMs because "large car manufacturers can take advantage of their scale when creating their own road guides."

Enabling REM is not difficult for automakers, Shashua noted, because Mobileye's EyeQ chip is already established in the automotive ecosystem. All that is needed to build REM is an EyeQ chip and a communication link; General Motors, for example, can use its own OnStar system. GM and Volkswagen announced at CES that they support Mobileye's proposed REM system, and Shashua revealed that another two customers of comparable size will soon sign up for REM.

It is worth noting that one-third of the global automotive industry already uses EyeQ chips, Shashua pointed out: "We very much look forward to seeing REM used across the entire automotive industry." Currently only two companies, Toyota and Daimler, have yet to adopt Mobileye's chips.

Nvidia: DRIVE PX 2 Sensor Fusion Technology

So far, the electronics industry's chief evangelist for self-driving cars has been Nvidia CEO Jen-Hsun Huang. A champion of "deep learning," Huang often reminds his audience that autonomous vehicles need a powerful visual computing system to fuse data from cameras and other sensors. To that end, Nvidia's latest DRIVE PX 2, dubbed by Huang a "supercomputer designed for cars," is meant to become standard automotive equipment, used to sense the car's location, identify the objects around it, and compute the safest path in real time.

Nvidia also released a deep learning platform called Digits, with which it is already testing its own autonomous vehicles. "Autonomous driving technology is incredibly difficult," says Huang. "It's not as simple as programming a 'driver' with the traffic authority's rule book." To significantly shorten the development and training time that deep neural networks require, car manufacturers need tools like Digits, which run on their server supercomputers, Nvidia pointed out.

In Huang's vision, each car company will eventually run an end-to-end system, from Nvidia Digits for training deep neural networks to Nvidia DRIVE PX 2 for running those networks in the car. Laszlo Kishonti, founder and CEO of Budapest-based AdasWorks, which develops artificial intelligence software for autonomous driving, points out that the company is working with Nvidia on a system for Volvo: a GPU-based system that can process multiple streams of sensor data in real time.
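As a rough illustration of that end-to-end idea (train in the datacenter, then run the identical network in the vehicle), here is a toy sketch in Python using PyTorch. The architecture, data, and file name are all invented for illustration; this is not Nvidia's Digits toolchain or any real driving network.

```python
import torch
import torch.nn as nn

# Toy steering-angle regressor standing in for a real driving network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 14 * 47, 1),   # 66x200 input -> 14x47 feature map
)

# --- training side: the datacenter "server supercomputer" ---
frames = torch.randn(32, 3, 66, 200)   # fake camera frames
angles = torch.randn(32, 1)            # fake recorded steering angles
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    loss = nn.functional.mse_loss(model(frames), angles)
    opt.zero_grad()
    loss.backward()
    opt.step()
torch.save(model.state_dict(), "driving_net.pt")

# --- deployment side: the in-vehicle computer ---
# The same model definition and inference code run unchanged;
# only the trained weights are handed over.
model.load_state_dict(torch.load("driving_net.pt"))
model.eval()
with torch.no_grad():
    steering = model(torch.randn(1, 3, 66, 200))
print(steering.item())
```

The point of the sketch is the hand-off: the network trained on server GPUs is the very artifact deployed in the car, which is the "same code on both sides" advantage Kishonti describes below.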

Kishonti said that AdasWorks is not tied to any one processor: "We can use GPUs, FPGAs, or any other available embedded vision SoC." But one key advantage of Nvidia's solution is that the code developed and verified on the server and the code running on the in-vehicle computer are exactly the same. And in contrast to Mobileye's focus on visual processing, "our focus is on merging data from all the different sensors. Vision is just one part of the many sensor data," said Dave Anderson, senior manager of Nvidia's automotive integration division.

Nvidia's DRIVE PX 2 can process input from 12 video cameras as well as radar, lidar and ultrasonic sensors. Anderson explained: "We combine these data so the system can accurately detect and identify objects, determine the car's position relative to the world around it, and then find the optimal path for safe driving."
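To make the fusion idea concrete, here is a minimal sketch in Python. The sensor readings are made up and the greedy nearest-neighbor association rule is deliberately naive; Nvidia's actual pipeline is far more sophisticated and GPU-accelerated.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # "camera", "radar", "lidar", "ultrasonic"
    x: float         # position ahead of the car, meters
    y: float         # lateral position, meters
    confidence: float

def fuse(detections, gate=1.5):
    """Greedy nearest-neighbor association: detections from different
    sensors that fall within `gate` meters of each other are treated as
    one object, and their positions are confidence-weighted averages."""
    objects = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for obj in objects:
            if abs(obj["x"] - det.x) < gate and abs(obj["y"] - det.y) < gate:
                w = obj["conf"] + det.confidence
                obj["x"] = (obj["x"] * obj["conf"] + det.x * det.confidence) / w
                obj["y"] = (obj["y"] * obj["conf"] + det.y * det.confidence) / w
                obj["conf"] = w
                obj["sensors"].add(det.sensor)
                break
        else:
            objects.append({"x": det.x, "y": det.y,
                            "conf": det.confidence, "sensors": {det.sensor}})
    return objects

# A camera and a radar see roughly the same object; ultrasonic sees another.
scene = [Detection("camera", 22.0, 1.1, 0.9),
         Detection("radar", 22.6, 0.8, 0.7),
         Detection("ultrasonic", 2.1, -0.4, 0.5)]
for obj in fuse(scene):
    print(obj)
```

Even this naive version shows the payoff Anderson describes: an object corroborated by two sensors ends up with higher combined confidence than any single sensor could give it.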
