Bay Vision - Spring 2018

…to generate ground truth data, and with their tools Deepen AI makes the production of ground-truth training data, in both quality and quantity, as fast as possible. Their model relies on LiDAR and sensor fusion more than on a regular camera. With LiDAR, sparsity is a major issue: the farther an object is from the sensor, the sparser its returns become, to the point where an entire car might be represented by a single reflected point (the first sketch following this article gives a feel for how quickly returns thin out with range). As with camera data, segmenting LiDAR point clouds with point-level accuracy is a huge challenge: the LiDAR can only see the fragment of an object that reflects its light, and for the part that does not reflect, an assumption must be made.

One of the benefits of LiDAR over radar is that it can capture many things radar cannot. Many radars struggle to detect humans at lower frequencies, while LiDAR detects them reliably. On the other hand, LiDAR can be severely affected by snow or sandstorms. Each sensor has its advantages and disadvantages. Essentially, segmenting LiDAR and sensor-fused data carries all the challenges of segmenting camera images, and more. "It's a much larger problem," Musa says, "and we have to worry about many more factors, which is our focus now."

How much this matters depends on the usage: at Level 2 or 3, with human-assisted driving, there is no need to account for every point the LiDAR captures; at Level 4 or 5, with no human involvement, the system must be fully independent and autonomous. Deepen AI has developed the tools and models to let its customers achieve Level 4 or 5. Those customers are good at figuring out the whole environment, hardware, software, system integration, sensors and communication systems; Deepen AI makes sure they can handle the data preparation, data annotation, data labelling and data visualization process much more efficiently.

"While we focus on semantic segmentation to achieve point-level accuracy on LiDAR sensor data," Musa concludes, "the other expertise of our company is optimizing deep learning neural networks for embedded cognitive devices. As you would expect, we are making hardware devices increasingly intelligent on the edge, whether they are cars, drones, your fridge or your lights; even your key will be a smart device. Vision algorithms on these devices need to be smart enough to carry out the right tasks: the neural networks need to be small, power-efficient and compute-efficient with respect to the CPU or GPU resources available. It's a very difficult problem, because we're trying to tie together what we can do on the data side to train the neural network and what we can do on the edge side to optimize the network and make it more efficient and useful. Those are the two faces of the company, one close to the hardware and the other close to the data. Hopefully, as we grow, we will also do everything in between, but for now we are focused on the edges."
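To give a feel for the sparsity problem Musa describes, here is a minimal back-of-the-envelope sketch, not Deepen AI code: the sensor resolution figures are assumptions, loosely modelled on a 64-beam spinning LiDAR, and it estimates how many returns land on a car-sized target as range grows.

```python
import math

# Assumed parameters (illustrative only): 0.2 deg horizontal resolution
# and 64 beams spread over a 26.8 deg vertical field of view.
H_RES_DEG = 0.2
V_RES_DEG = 26.8 / 63  # vertical spacing between the 64 beams

def expected_returns(width_m, height_m, range_m):
    """Approximate point count on a flat target of the given size,
    facing the sensor at the given range."""
    # Angular size of the target as seen from the sensor.
    h_angle = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    v_angle = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    # Number of beam samples falling inside that angular window.
    return max(1, int(h_angle / H_RES_DEG) * int(v_angle / V_RES_DEG))

# A car-sized target (4.5 m wide, 1.5 m tall) at increasing distance:
for r in (10, 30, 60, 120):
    print(f"{r:>4} m: ~{expected_returns(4.5, 1.5, r)} points")
```

Under these assumptions the model predicts thousands of points on a car at 10 m but only a handful beyond 100 m, which is why a distant object can collapse toward a near single-point reflection.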
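On the edge side, the interview does not specify which optimization techniques Deepen AI uses, but one common way to make a network smaller and more compute-efficient for embedded devices is post-training quantization. The sketch below applies PyTorch's dynamic quantization to a toy model; the model itself is a placeholder, not any real architecture.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for an edge vision network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Dynamic quantization: store Linear-layer weights as 8-bit integers
# instead of 32-bit floats, dequantizing on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, smaller weight footprint
```

The quantized model keeps the same call interface while cutting the weight footprint of its Linear layers roughly fourfold, at the cost of a small accuracy loss, the kind of trade-off edge deployments have to weigh against available CPU or GPU resources.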

RkJQdWJsaXNoZXIy NTc3NzU=