Benefits, challenges of surround-view tech for automated parking systems

Park-assist and automated parking functions make mundane tasks easier for drivers: parallel parking, maneuvering into a crowded garage or a narrow parking space, or simply finding an available spot.

Surveys of drivers consistently show parallel parking to be among the most anxiety-inducing experiences we have behind the wheel. Moreover, low-speed car accidents, such as those that occur during parking maneuvers, can be a source of serious injuries, including whiplash, concussions and soft-tissue damage. Park-assist features can help avoid these accidents, and consumers love the convenience and peace of mind that these capabilities provide.

Basic surround-view systems use multiple cameras to give drivers an overhead view of the environment, along with visual cues that show the position of the car relative to objects nearby. More advanced systems add features like an animated 3D model of the car, the ability to change the vantage point, visibility through the bottom of the car, and analytics such as distance and proximity warnings and driving-path estimation.

The most advanced systems automate the parking process, from finding an open space to performing the steering, acceleration and braking of the vehicle without driver intervention.

This variety of use cases has a corresponding range of requirements, including the number and types of cameras used; additional radar and ultrasound sensors to complement visual data from the cameras; and processing to produce visualization, analytics and automation capabilities. Let’s explore each type of system and its corresponding processing needs.

Basic surround-view systems

Surround-view systems generally use four to six wide-angle cameras mounted on the front, rear and sides of a vehicle. The fish-eye lenses on these cameras produce a distorted bowl-shaped view that geometric alignment algorithms correct. The corrected images then need photometric balancing and color correction for consistency before they are stitched into a single 360-degree view around the car.
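
As a rough illustration of this pipeline, the sketch below uses OpenCV's fisheye camera model to undo the bowl-shaped distortion and then warp a corrected frame onto a shared ground-plane canvas, ready for blending with the other views. The intrinsics K, distortion coefficients D and homography H are placeholder values; a production system derives them from per-camera calibration.

```python
import cv2
import numpy as np

# Placeholder calibration data; a real system measures K (intrinsics),
# D (fisheye distortion) and the ground-plane homography per camera.
K = np.array([[320.0,   0.0, 640.0],
              [  0.0, 320.0, 400.0],
              [  0.0,   0.0,   1.0]])
D = np.array([-0.05, 0.01, -0.002, 0.0005])   # k1..k4 fisheye coefficients

def undistort_fisheye(frame):
    """Correct the bowl-shaped fisheye distortion in one camera frame."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

def warp_to_ground_plane(frame, H, canvas_size=(1000, 1000)):
    """Project a corrected frame into the common top-down canvas where
    the individual camera views are stitched into a 360-degree image."""
    return cv2.warpPerspective(frame, H, canvas_size)
```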

An animated model of the car is rendered at the center of the stitched image to give the driver a bird’s-eye view of the environment. It is also possible to add other overlays that show the car’s position relative to objects the cameras see.
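
A minimal sketch of that compositing step, assuming the car model has already been rendered into a top-down RGBA sprite (the car_rgba input is a placeholder; production systems animate a 3D model on the GPU):

```python
import numpy as np

def overlay_car_model(birdseye, car_rgba):
    """Alpha-blend a pre-rendered top-down car sprite onto the center
    of the stitched surround view."""
    H, W = birdseye.shape[:2]
    h, w = car_rgba.shape[:2]
    y0, x0 = (H - h) // 2, (W - w) // 2        # center the model
    alpha = car_rgba[:, :, 3:4].astype(np.float32) / 255.0
    roi = birdseye[y0:y0 + h, x0:x0 + w].astype(np.float32)
    blended = alpha * car_rgba[:, :, :3] + (1.0 - alpha) * roi
    birdseye[y0:y0 + h, x0:x0 + w] = blended.astype(np.uint8)
    return birdseye
```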

A system-on-chip (SoC) for this application requires capacity for multiple camera inputs; an image signal processor and hardware acceleration for image adjustment and tuning; a graphics processing unit for creating the car model and image overlays; and processing cores for algorithmic analysis of the images.

Automated parking systems

Automated parking systems use a set of cameras similar to that of surround-view systems, and usually add short-range radar, ultrasound and high-performance inertial measurement unit sensors. In addition to camera processing and surround-view image creation, the system must perform object detection and classification for parking-spot and lane detection.
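
As a simple classical stand-in for the trained detectors these systems typically use, the sketch below looks for painted parking-bay lines in the top-down image with a Canny edge detector and a probabilistic Hough transform; the thresholds are illustrative only.

```python
import cv2
import numpy as np

def detect_spot_lines(birdseye_gray):
    """Find candidate painted parking-bay line segments in a grayscale
    top-down view. Returns a list of (x1, y1, x2, y2) segments."""
    edges = cv2.Canny(birdseye_gray, 60, 180)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```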

Automated parking systems combine data from all sensors and use vision and sensor-fusion algorithms to safely maneuver the vehicle into an available space. The system may also record video streams from the cameras to log vehicle actions while parking. These logs can be used for incident reporting and to improve future algorithm performance.
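
One common fusion pattern weights each sensor by its noise characteristics. The sketch below is a minimal one-dimensional Kalman filter fusing ultrasonic and camera estimates of the range to the nearest obstacle; the variances are illustrative, not tuned values from any production system.

```python
class RangeFusion:
    """Minimal 1D Kalman filter over the distance to the nearest obstacle."""

    def __init__(self, initial_range_m):
        self.x = initial_range_m    # fused range estimate (m)
        self.p = 1.0                # estimate variance

    def predict(self, ego_speed_mps, dt, q=0.05):
        self.x -= ego_speed_mps * dt   # range shrinks as the car advances
        self.p += q                    # process noise grows uncertainty

    def update(self, z, r):
        k = self.p / (self.p + r)      # Kalman gain: trust precise sensors more
        self.x += k * (z - self.x)
        self.p *= 1.0 - k

fusion = RangeFusion(initial_range_m=3.0)
fusion.predict(ego_speed_mps=0.5, dt=0.05)
fusion.update(z=2.93, r=0.02)   # ultrasonic: precise at short range
fusion.update(z=3.10, r=0.25)   # camera: noisier range estimate
```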

An SoC for this more demanding application needs greater processing capacity for sensor fusion, along with computer vision and neural network processing for the detection and classification algorithms. The SoC may also include video-encode acceleration for recording, and access to external storage.
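
The recording path can be sketched as a ring buffer that keeps the last few seconds of frames in memory and flushes them to storage when an incident is flagged. The software codec below is a stand-in; a real system would route encoding through the SoC's hardware video encoder.

```python
import collections
import cv2

class IncidentRecorder:
    """Hold the most recent frames; write them out when an incident occurs."""

    def __init__(self, fps=30, seconds=10):
        self.fps = fps
        self.buffer = collections.deque(maxlen=fps * seconds)

    def push(self, frame):
        self.buffer.append(frame)   # oldest frames are dropped automatically

    def dump(self, path):
        h, w = self.buffer[0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 self.fps, (w, h))
        for frame in self.buffer:
            writer.write(frame)
        writer.release()
```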

Automated valet parking

Automated valet parking adds even more automated capability: finding an open space in a lot and safely parking and un-parking the vehicle without driver involvement. The driver may be outside the vehicle, perhaps even some distance away, when the automated system begins the parking or un-parking process.

The sensor types here are the same as for an automated parking system, but the algorithmic complexity increases greatly with the addition of simultaneous localization and mapping (SLAM) and path-planning algorithms. These algorithms provide the localization and intelligence required to interpret the full environment in real time and make safe decisions throughout the parking process.
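
At its core, planning over the map that SLAM maintains is often a grid search. The sketch below is a minimal A* planner on a 2D occupancy grid; a real parking planner also models the vehicle's kinematics (for example, hybrid A* with steering-angle constraints).

```python
import heapq

def astar(grid, start, goal):
    """A* search on an occupancy grid (0 = free, 1 = occupied).
    Returns a list of (row, col) cells or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):                        # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]
    g_cost = {start: 0}
    parent = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                 # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g_cost[cur] + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g_cost[cur] + 1
                parent[nxt] = cur
                heapq.heappush(open_set, (g_cost[nxt] + h(nxt), nxt))
    return None

# Tiny example: route around an occupied row.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, start=(0, 0), goal=(2, 0)))
```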

Driverless automation requires an Automotive Safety Integrity Level (ASIL) rating of ASIL-D for the full scope of sensor fusion processing, localization, path planning and drive-by-wire instruction delivery.

Making surround-view technology more widely available

Car manufacturers want to bring the most basic surround-view features to their entry-level and mid-range vehicles, and place advanced systems featuring automation on their higher-end and luxury models. Ideally, manufacturers could offer their customers some continuity among the systems – with a familiar look and feel to each – and the ability to upgrade the level of features through simple hardware or software changes.

A scalable implementation represents a challenge for Tier-1 manufacturers and their SoC vendors, however. Many SoC vendors address only part of the equation – a simple SoC with camera input and visualization capabilities but no support for analytics, or a system capable of automation that is expensive, power-hungry and impractical for lesser uses.

Manufacturers have no choice but to branch their development efforts into different systems, resulting in duplicated effort, higher development costs and no simple way to maintain continuity across a vehicle line-up.

There are some things an SoC vendor can do to address these challenges. Viewed holistically, surround-view and parking applications become an issue of delivering scale: a greater number of sensors, more processing and memory for algorithms, and a way to incorporate safety as applications evolve. Given the range of requirements, one device can’t be the answer, but a family of devices could be. Such a device family should:

  • Be built around extensive application and use-case modeling, ensuring an understanding of corner cases and how to best balance resources.
  • Aggressively use acceleration for routine but computationally intensive tasks.
  • Make smart use of processing cores that are best tuned to the specific job required (graphics, video encoding, neural network processing, computer vision processing, safety).
  • Efficiently use memory to minimize power consumption and component count while meeting performance needs.
  • Maintain common processor cores, accelerators, inputs/outputs, and memory system and chip infrastructures across the family of products to maximize reuse.
  • Deliver a common software kit that is optimized for the device components and guarantees reuse of developed software assets.

A family of SoCs built on these principles can help realize this vision, but it’s not easily done. It requires a history of providing technology to ADAS markets. Collaboration with manufacturers through generations of systems builds an appreciation for the subtle technical problems that arise in implementation.

AV surround-view block diagram: This simplified block diagram shows TI’s Jacinto TDA4VM processor in a surround-view use case, with video and other sensor inputs, display output and access to storage for compressed video files.

Such collaboration reveals where time is lost in development cycles and what can be done to improve efficiency in system development. It also provides the insight necessary to anticipate the next set of challenges and design devices and software that are ready for them. The combination of technology and system expertise will enable cars to be smarter and less stressful to operate, and help make our roads and parking lots safer as a result.
