
Have you ever wanted to build your own Android? It’s easy according to the official manual, but it gets harder on a Windows (or Mac) machine, or if you’d like to run it on physical hardware. Still too easy? Let’s build Android Automotive OS – the same source code, but with another layer of complexity. In this manual, we’ll cover all the steps needed to build and run Android Automotive OS 11 AOSP on a Raspberry Pi 4B using Windows. The solution is not perfect, however. The most significant issue is the lack of Google services, because this AAOS build comes entirely from the open-source project and Google doesn’t provide its services this way. Nevertheless, let’s build the open-source version first, and then we can face the issues as they come.
TL;DR: If you don't want to configure and build the system step-by-step, follow the simplified instruction at https://github.com/grapeup/aaos_11_local_manifest

If you want to run the system on a physical device, you need one. I use the Raspberry Pi 4 model B with 8GB of RAM ( https://www.raspberrypi.com/products/raspberry-pi-4-model-b/ ). By the way, if you want to build and run an emulator from the source, that’s also possible, but there is a small limitation – packaging the emulator into a zip file, moving it to another computer, or even running it under Android Studio was only introduced in Android 12.
To power your Raspberry, you need a power adapter (USB-C, min. 5V 3A). I use the official Raspberry Pi 5.1V 3A model. You can also power the Raspberry computer from your desktop/laptop’s USB port, especially if you’re going to debug it via a serial connection. Check the “If it doesn’t work” section below for the required hardware.
Another piece of hardware needed is an SD card. In theory, 4GB is all you need; however, I recommend buying a larger card to have some extra space for your applications on Android. I use 32GB and 64GB cards. You’ll also need a built-in or external card reader. I use the latter.
The next step is a screen. It’s optional but fancy. You can connect your mouse and optionally a keyboard to your Raspberry Pi via USB and connect any display you have via micro-HDMI, but using a touch screen is much more intuitive. I use a Waveshare 10-inch screen dedicated to the Raspberry ( https://www.waveshare.com/wiki/10.1inch_HDMI_LCD_(B)_(with_case) ). The screen housing has a place to screw the Raspberry onto, so you don’t need an extra case. You can also buy it bundled with a power adapter and a display cable.
If you don’t buy a bundle, make sure you have all necessary accessories: micro-HDMI – HDMI cable to connect a screen (Waveshare or any other), USB A – USB mini A cable to connect a touch sensor of the screen, USB mini A 5V 3A adapter to power the screen.

Of course, you need a computer. In this manual, we use a Windows machine with at least 512GB of storage (the Android source is huge) and 16GB of RAM.
You can probably build everything in pure Windows, but the recommended method is to use WSL. I assume you already have it installed, so just make sure you have the newest WSL2 version. If you have never used WSL before, see the full manual here https://learn.microsoft.com/en-us/windows/wsl/install .
The standard WSL installation uses a virtual drive that is too small and limits the available RAM, so you need to adjust both.
Let’s start with the disk. Make sure WSL is shut down by running ‘wsl --shutdown’ in the command prompt. Then open a Windows Command Prompt with admin privileges and enter ‘diskpart’. Inside diskpart, run ‘select vdisk file=”<path to WSL drive file>”’. For me, the path is “C:\Users\<user>\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_<WSL_instance_id>\LocalState\ext4.vhdx”. Now you can expand it with the command ‘expand vdisk maximum=512000’. Around 300GB is enough for Android 11, but if you want to play with multiple branches of Android at the same time, you need more space. Close diskpart with the ‘exit’ command. Next, open WSL and run ‘sudo resize2fs /dev/sdb 512000M’. I assume you have only a single drive attached to WSL and that it’s visible in the Linux subsystem as /dev/sdb. You can check it with the commands ‘sudo mount -t devtmpfs none /dev || mount | grep ext4’.
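The disk-expansion steps above can be collected into a diskpart script that you run once from an elevated Windows prompt. A sketch, assuming the default Ubuntu instance path from above (adjust <user> and <WSL_instance_id> before running):

```shell
# Generate a diskpart script that expands the WSL virtual disk to 512000 MB.
# The vdisk path below is a placeholder -- fill in your own before use.
cat > expand-wsl.txt <<'EOF'
select vdisk file="C:\Users\<user>\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_<WSL_instance_id>\LocalState\ext4.vhdx"
expand vdisk maximum=512000
EOF

# From an elevated Windows Command Prompt:
#   wsl --shutdown
#   diskpart /s expand-wsl.txt
# Then, back inside WSL, grow the ext4 filesystem to match:
#   sudo resize2fs /dev/sdb 512000M
cat expand-wsl.txt
```

The script approach makes the expansion repeatable if you later need an even larger drive.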
Now, let’s adjust the memory. Stop the WSL again. Open your home directory in Windows and open the .wslconfig file. Create it if it doesn’t exist yet. In the file, you need to create a [wsl2] section with a memory setting. The complete file should look like this:
[wsl2]
memory=16GB
As you can see, I’ve assigned 16GB to the virtual machine. Memory is allocated dynamically, according to needs, but be aware that the virtual machine can take all of it – if you allow it to eat your entire RAM, Windows will be forced to swap to the hard disk to survive, which slows everything down significantly.
Disclaimer:
Building Android on an 8-core, 16GB RAM machine takes around 4 hours. If you want to do it faster, or you don’t have a powerful enough computer at home or in the office, you can consider building in the cloud. A simple AWS EC2 instance with 32 cores and 64GB of memory does the whole job (download and build) in one hour and costs just a few bucks.
…or at least, let’s get to building.
We need some software, but not much – just install the following packages. This set of libraries allows you to build Android Automotive OS versions 11 to 13.
sudo apt update && sudo apt install gcc-aarch64-linux-gnu libssl-dev bc python3-setuptools repo python-is-python3 libncurses5 zip unzip make gcc flex bison -y
Let’s create a home directory for our Android and download the sources.
mkdir android-11.0.0_r48 && cd android-11.0.0_r48
repo init -u https://android.googlesource.com/platform/manifest -b android-11.0.0_r48 --partial-clone --clone-filter=blob:limit=10M
git clone https://github.com/android-rpi/local_manifests .repo/local_manifests -b arpi-11
repo sync
“repo init” will ask you for some personal data, which is collected by Google. To learn more about the optimizations used here, check this manual: https://docs.gitlab.com/ee/topics/git/partial_clone.html . ‘git clone’ adds custom code from the Android RPi project ( https://groups.google.com/g/android-rpi ) with drivers for your Raspberry Pi. The project is great, and it’s all you need if you want to run Android TV. To run Android Automotive OS, we’ll need to adjust it slightly (see the “Adjustments” section below). ‘repo sync’ will take some time because you need to download around 200GB of code. If you have a powerful machine with a great Internet connection, you can use more threads by adding the ‘-j X’ parameter to the command. The default thread count is 4. If you have already synchronized your source code without the android-rpi local manifest, you need to add --force-sync to the ‘repo sync’ command.
All changes from this section can be downloaded as a patch file attached to this article. See the “Patch file” section below.
Android-rpi provides Android TV for Raspberry Pi. We need to remove the TV-related configuration and add the Automotive OS one.
Let’s start with removing unnecessary files. You can safely remove the following files and directories:
To remove the user notice screen, which is not needed in Automotive OS, create a new file device/arpi/rpi4/overlay/packages/services/Car/service/res/values/config.xml with the following content:
<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
<string name="config_userNoticeUiService" translatable="false"></string>
</resources>
To replace the basic TV overlay config with the Automotive overlay config, adjust the configuration in device/arpi/rpi4/overlay/frameworks/base/core/res/res/values/config.xml.
Remove:
… and add:
Now let’s rename the android-rpi original /device/arpi/rpi4/rpi4.mk to /device/arpi/rpi4/android_rpi4.mk. We need to adjust the file a little bit.
Remove the following variable definitions. Some of them you will re-create in another file, while others are not needed.
Remove the following invocations. We’re going to call necessary external files in another mk file.
In PRODUCT_PROPERTY_OVERRIDES remove debug.drm.mode.force=1280x720 and add the following properties. This way you remove the TV launcher configuration and override the default automotive launcher configuration.
Now you need to completely remove the android-rpi TV launcher and add RenderScript support for Automotive OS. In PRODUCT_PACKAGES remove:
… and add:
Create a new rpi4.mk with the following content:
PRODUCT_PACKAGE_OVERLAYS += device/generic/car/common/overlay
$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit.mk)
$(call inherit-product, device/arpi/rpi4/android_rpi4.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/full_base.mk)
$(call inherit-product, device/generic/car/common/car.mk)
PRODUCT_SYSTEM_DEFAULT_PROPERTIES += \
android.car.number_pre_created_users=1 \
android.car.number_pre_created_guests=1 \
android.car.user_hal_enabled=true
DEVICE_PACKAGE_OVERLAYS += device/arpi/rpi4/overlay device/generic/car/car_x86_64/overlay
PRODUCT_NAME := rpi4
PRODUCT_DEVICE := rpi4
PRODUCT_BRAND := arpi
PRODUCT_MODEL := Raspberry Pi 4
PRODUCT_MANUFACTURER := GrapeUp and ARPi
Due to the license, remember to add yourself to the PRODUCT_MANUFACTURER field.
Now you have two mk files – android_rpi4.mk is borrowed from the android-rpi project and adjusted, and rpi4.mk contains all the changes for Automotive OS. You can meld the two together or split them into more files if you’d like, but keep in mind that the order of invocations matters (not always, but still).
As Android Automotive OS is bigger than Android TV, we need to increase the system partition size to fit the new image. In device/arpi/rpi4/BoardConfig.mk increase BOARD_SYSTEMIMAGE_PARTITION_SIZE to 2147483648, which means 2GB.
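You can sanity-check that value with a one-liner – 2 GB here means 2 × 1024³ bytes:

```shell
# BOARD_SYSTEMIMAGE_PARTITION_SIZE expects bytes; 2 GB = 2 * 1024^3
echo $((2 * 1024 * 1024 * 1024))   # prints 2147483648
```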
You need to apply all the changes described in https://github.com/android-rpi/device_arpi_rpi4/wiki/arpi-11-:-framework-patch too. Those changes are also included in the attached patch file.
If you use the 8GB version of the Raspberry Pi, you need to replace the device/arpi/rpi4/boot/fixup4.dat and device/arpi/rpi4/boot/start4.elf files. You can find the correct files in the attached patch file, or you may use the official source: https://github.com/raspberrypi/firmware/tree/master/boot . It’s probably not needed for the 4GB version of the Raspberry, but I don’t have such a device to verify.
If you prefer to apply all the changes described above as a single file, go to your sources directory and run ‘git apply --no-index <path_to_patch_file>’. The patch file also replaces the boot animation. If you want to create one of your own, follow the official manual here: https://android.googlesource.com/platform/frameworks/base/+/master/cmds/bootanimation/FORMAT.md .
That’s the easy part – just run the few commands below. First, we need to build a custom kernel for Android. The ‘merge_config.sh’ script configures all the required variables. The first ‘make’ command builds the actual kernel image (which can take a few minutes). Next, build the device tree configuration.
cd kernel/arpi
ARCH=arm64 scripts/kconfig/merge_config.sh arch/arm64/configs/bcm2711_defconfig kernel/configs/android-base.config kernel/configs/android-recommended.config
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make Image.gz
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- DTC_FLAGS="-@" make broadcom/bcm2711-rpi-4-b.dtb
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- DTC_FLAGS="-@" make overlays/vc4-kms-v3d-pi4.dtbo
cd ../..
The next part is building the entire system. The “envsetup.sh” script sets up variables and adds custom commands to your terminal. Then you can pick the correct pair of Android version and device with “lunch”. You can run it without parameters to see (almost) all possible configurations. In this step, you can decide to build a system for dedicated hardware (e.g. Dragonboard) and switch between phone/tablet/TV/wearable/automotive versions of Android. The last line is the actual build. We can’t run just “make” or “m”, as documented in the official manual, because we need to create three specific images to write to an SD card and run on the Raspberry Pi. Replace “X” in ‘-j X’ with the number of threads you want to use. The default value is the number of logical processors on your computer.
source build/envsetup.sh
lunch rpi4-eng
make -j X ramdisk systemimage vendorimage
I hope you have a delightful book next to you, because this final build takes a few hours depending on your hardware. Good news! If you need to adapt something and build again, in most cases you just need the last three lines (or even just the very last one) – source the environment setup, pick the lunch configuration, and make the ramdisk, system, and vendor images. It takes hours for the first time only.
This step seems easy, but it isn’t. WSL doesn’t contain drivers for a USB card reader. You can use usbip to forward a device from Windows to the subsystem, but it doesn’t work well with external storage without partitions. The solution is VirtualBox with Ubuntu installed. Just create a virtual machine, install Ubuntu, and install the Guest Additions. Then you can connect the card reader and pass it to the virtual machine. If you’re a minimalist, you can use Ubuntu Server or any other Linux distribution you like. Be aware that using a card reader built into your computer may be challenging depending on the drivers and the hardware connection type (USB or PCIe).
Now, you need to create a partition schema on the SD card. I assume the card is loaded to the system as /dev/sdb. Check your configuration before continuing to avoid formatting your main drive or another disaster. Let’s erase the current partition table and create a new one.
sudo umount /dev/sdb*
sudo wipefs -a /dev/sdb
sudo fdisk /dev/sdb
Now let’s create the partitions. First, you need a 128MB active partition of the W95 FAT32 (LBA) type, second a 2GB Linux partition, third a 128MB Linux partition, and the rest of the card for user data (also a Linux partition). Here’s how to navigate through the fdisk menu to configure all partitions.
Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x179fb9bc.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-61022207, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-61022207, default 61022207): +128M
Created a new partition 1 of type 'Linux' and of size 128 MiB.
Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): 0c
Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.
Command (m for help): n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (264192-61022207, default 264192):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (264192-61022207, default 61022207): +2G
Created a new partition 2 of type 'Linux' and of size 2 GiB.
Command (m for help): n
Partition type
p primary (2 primary, 0 extended, 2 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (3,4, default 3):
First sector (4458496-61022207, default 4458496):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (4458496-61022207, default 61022207): +128M
Created a new partition 3 of type 'Linux' and of size 128 MiB.
Command (m for help): n
Partition type
p primary (3 primary, 0 extended, 1 free)
e extended (container for logical partitions)
Select (default e): p
Selected partition 4
First sector (4720640-61022207, default 4720640):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (4720640-61022207, default 61022207):
Created a new partition 4 of type 'Linux' and of size 26,8 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Be careful with the last partition – fdisk proposes creating an extended one by default, which is not needed in our use case.
If you don’t do it for the first time on the same card, you may see a warning that some partition already contains a file system signature. You can safely agree to remove it.
Partition #4 contains a ext4 signature.
Do you want to remove the signature? [Y]es/[N]o: Y
The signature will be removed by a write command.
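If you prefer not to click through fdisk, the same layout can be applied non-interactively with sfdisk. A sketch, assuming the card is /dev/sdb – double-check the device name first, as this is destructive:

```shell
# sfdisk script matching the fdisk session above:
#   1: 128M, W95 FAT32 (LBA), bootable; 2: 2G Linux; 3: 128M Linux; 4: rest, Linux
# Each line is <start>,<size>,<type>,<bootable>; empty start means "next free sector".
cat > sd-layout.sfdisk <<'EOF'
label: dos
,128M,0c,*
,2G,L
,128M,L
,,L
EOF

# Apply it (destructive -- verify the device first!):
#   sudo sfdisk /dev/sdb < sd-layout.sfdisk
cat sd-layout.sfdisk
```

Keeping the layout in a file also makes it easy to re-image additional cards identically.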
Now, let’s supply file systems for the first and the last partitions.
sudo mkdosfs -F 32 /dev/sdb1
sudo mkfs.ext4 -L userdata /dev/sdb4
We won’t write anything to the last one, as it’s for user data only and will be filled by Android during the first boot. But we do need to write some files to the first one. Let’s create a temporary mount directory under /mnt/p1 (as in “partition 1”), mount it, and copy the necessary files from the Android build of the earlier section. It’s strange, but we’re going to copy files from one virtual machine (WSL) to another (VirtualBox). You can simply mount the WSL drive as a shared folder in VirtualBox. If you don’t see the WSL drive in your Windows Explorer, you can map it as a network drive using the “\\wsl$\Ubuntu” path.
sudo mkdir /mnt/p1
sudo mount /dev/sdb1 /mnt/p1
sudo mkdir /mnt/p1/overlays
cd <PATH_TO_YOUR_ANDROID_SOURCES_IN_WSL>
sudo cp device/arpi/rpi4/boot/* /mnt/p1
sudo cp kernel/arpi/arch/arm64/boot/Image.gz /mnt/p1
sudo cp kernel/arpi/arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb /mnt/p1
sudo cp kernel/arpi/arch/arm/boot/dts/overlays/vc4-kms-v3d-pi4.dtbo /mnt/p1/overlays/
sudo cp out/target/product/rpi4/ramdisk.img /mnt/p1
sudo umount /mnt/p1
sudo rm -rf /mnt/p1
If you’re looking at the official android-rpi project manual, there is a different path for the vc4-kms-v3d-pi4.dtbo file. That’s OK – they use a symbolic link, which we can’t use on this filesystem.
Sometimes you may see an error message when creating the “overlays” directory. It happens from time to time because “mount” returns to the console before the drive is really mounted. In such a case, just call “mkdir” again. Be aware of this, especially if you’re going to copy-paste the entire listing above.
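That "just call mkdir again" advice can be wrapped in a small retry helper; a hypothetical sketch (the function name and retry count are my own, and on the real card you would run it with sudo against /mnt/p1/overlays):

```shell
# Retry mkdir a few times to ride out the mount race described above.
# mkdir -p is idempotent, so repeating it is harmless.
mkdir_retry() {
  for i in 1 2 3 4 5; do
    mkdir -p "$1" && return 0
    sleep 1
  done
  return 1
}

# Demo on a temporary path; for the SD card: sudo mkdir -p /mnt/p1/overlays
mkdir_retry /tmp/demo/overlays && echo "created"   # prints created
```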
Now, let’s copy the two remaining partitions. If you’re struggling with dd command (it may hang), you can try to copy big *.img files from WSL to VirtualBox first.
cd <PATH_TO_YOUR_ANDROID_SOURCES_IN_WSL>/out/target/product/rpi4/
sudo dd if=system.img of=/dev/sdb2 bs=1M status=progress
sudo dd if=vendor.img of=/dev/sdb3 bs=1M status=progress
You’re done. You’ve downloaded, prepared, built, and flashed your own Android Automotive OS. Now you can put the SD card into the Raspberry and connect all the cables (make sure you connect the Raspberry power cable last). There is no “power” button, and it doesn’t matter which micro-HDMI or USB port of the Raspberry you use. It’s time to enjoy your own Android Automotive OS!
The world is not perfect and sometimes something goes terribly wrong. If you see the boot animation for a long time, or if your device crashes in a loop a few seconds after boot, you can try to debug it.

You need a USB-TTL bridge (like this one https://www.sunrom.com/p/cp2102-usb-ttl-uart-module ) to connect the correct pins from the Raspberry to the USB. You need to connect pin 6 (ground) to the GND pin in the bridge, pin 8 (RXD) to the RXD pin of the bridge and pin 10 (TXD) to the TXD pin of the bridge. If you want to power the Raspberry via the bridge, you need to also connect pin 2 to +5V pin of the bridge. It is not recommended, because of the lower voltage, so your system might be unstable. If you don’t have a power adapter, you can simply connect a USB cable between your computer port and the USB C port of the Raspberry. Warning! You can’t connect both a +5V connector here and a USB C power port of the Raspberry or you’ll burn the Raspberry board.
See the schema for the connection reference.

Depending on your bridge model, you may need an additional driver. I use this one: https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers?tab=downloads .
When you connect the +5V pin or the USB-C power adapter (again, never both at the same time!), the Raspberry starts. Now you can open Putty and connect to your Android. Pick Serial and type COMX in the serial line definition, where X is the number of your COM port. You can check it in your device manager – look for “USB to UART bridge (COM4)” or the like. The correct connection speed is 115200.

Open the connection to access the Android shell. By default, Android writes all logs to the standard output, so you should see plenty of them right away. It’s two-way communication, though, and you have full terminal access to your Android if you need to check or modify any file or run any command. Just hit Enter to see the command prompt. You can even call ‘su’ to gain superuser access on your Android running on the Raspberry.
If you want to use Android Debug Bridge to connect to your device, using a USB bridge is not enough. When running ‘adb devices’ on your computer, the Android Automotive OS running on Raspberry is not recognized. You can use a putty connection to turn on a TCP debug bridge instead.
Make sure you’ve connected the Android device and your computer to the same network. Open Putty and connect to the running Android console. Log in as root and enable ADB via TCP, then check your IP address.
su
setprop service.adb.tcp.port 5555
stop adbd
start adbd
ifconfig wlan0
Now, using your Windows command line, go to the Android SDK platform-tools directory and connect to your device. As you can see, the IP address of my Raspberry is 192.168.50.47.
cd %userprofile%\AppData\Local\Android\Sdk\platform-tools
adb connect 192.168.50.47:5555
If you want to use ADB in WSL, you can link the Windows program in WSL using the following command.
sudo ln -s /mnt/c/Users/<your_username>/AppData/Local/Android/Sdk/platform-tools/adb.exe /usr/bin/adb
You can now use ADB to read logcat without Putty or to install applications without manually transferring APK files to the SD card. Fun fact – if you use a USB bridge and a USB power supply, you have two physical connections between your computer and the one running Android, yet you still need to use ADB over WiFi for the debug bridge.
That’s all. Android Automotive OS 11 is running. You can install the apps you need, take them to your car, or do whatever you’d like with them. Using hardware instead of an emulator allows you to manually manage partitions (e.g. for an OTA update) and connect external devices like a real GPS receiver or accelerometer. The bare-metal hardware outperforms the emulator, too. And most importantly – you can easily take it to your car, power it from an in-car USB port, connect it to the OBD-II port, and run real-life tests without a laptop.
Is your project ready? Great, now you can try doing the same with AAOS 13.

In this article, we will explore the implementation of a four-zone climate control system for vehicles using Android Automotive OS (AAOS) version 14. Multi-zone climate control systems allow individual passengers to adjust the temperature for their specific areas, enhancing comfort and personalizing the in-car experience. We will delve into the architecture, components, and integration steps necessary to create a robust and efficient four-zone HVAC system within the AAOS environment.
A four-zone climate control system divides the vehicle's cabin into four distinct areas: the driver, front passenger, left rear passenger, and right rear passenger. Each zone can be independently controlled to set the desired temperature. This system enhances passenger comfort by accommodating individual preferences and ensuring an optimal environment for all occupants.
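The hvacAreaId values used in the layout below are bitmasks over seat areas. In the AOSP reference VHAL the seat bits are ROW_1_LEFT=0x01, ROW_1_RIGHT=0x04, ROW_2_LEFT=0x10, ROW_2_CENTER=0x20, and ROW_2_RIGHT=0x40 (these exact values are an assumption about your VHAL – as the layout's own comment notes, area IDs depend on each OEM's implementation). Under that assumption, the "global" area ID 117 seen on the master toggles is simply the union of all seat bits:

```shell
# Union of the reference VHAL seat-area bits (assumed values; OEM VHALs may differ)
echo $(( 0x01 | 0x04 | 0x10 | 0x20 | 0x40 ))   # prints 117
```

Zone-specific controls, by contrast, carry a single bit (e.g. 1 for the driver's seat, 16 for the row-two left seat).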
To implement a four-zone HVAC system in AAOS 14, we first need to modify SystemUI, which handles the user interface. The application is located in packages/apps/Car/SystemUI. The HVAC panel is defined in the file res/layout/hvac_panel.xml.
Here is an example definition of the HVAC panel with four sliders for temperature control and four buttons for seat heating:
<!--
~ Copyright (C) 2022 The Android Open Source Project
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<com.android.systemui.car.hvac.HvacPanelView
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:systemui="http://schemas.android.com/apk/res-auto"
android:id="@+id/hvac_panel"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="@dimen/hvac_panel_full_expanded_height"
android:background="@color/hvac_background_color">
<androidx.constraintlayout.widget.Guideline
android:id="@+id/top_guideline"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
app:layout_constraintGuide_begin="@dimen/hvac_panel_top_padding"/>
<androidx.constraintlayout.widget.Guideline
android:id="@+id/bottom_guideline"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
app:layout_constraintGuide_end="@dimen/hvac_panel_bottom_padding"/>
<!-- HVAC property IDs can be found in VehiclePropertyIds.java, and the area IDs depend on each OEM's VHAL implementation. -->
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/driver_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@+id/row2_driver_hvac"
systemui:hvacAreaId="1">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/row2_driver_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintTop_toBottomOf="@+id/driver_hvac"
app:layout_constraintBottom_toBottomOf="parent"
systemui:hvacAreaId="16">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/seat_heat_level_button_left"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintLeft_toRightOf="@+id/driver_hvac"
app:layout_constraintBottom_toTopOf="@+id/recycle_air_button"
systemui:hvacAreaId="1"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/recycle_air_button"
android:layout_width="@dimen/hvac_panel_button_dimen"
android:layout_height="@dimen/hvac_panel_group_height"
android:background="@drawable/hvac_panel_button_bg"
app:layout_constraintTop_toBottomOf="@+id/seat_heat_level_button_left"
app:layout_constraintLeft_toRightOf="@+id/driver_hvac"
app:layout_constraintBottom_toTopOf="@+id/row2_seat_heat_level_button_left"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419976"
systemui:hvacTurnOffIfAutoOn="true"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_recycle_air_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_recycle_air_off"/>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/row2_seat_heat_level_button_left"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/recycle_air_button"
app:layout_constraintLeft_toRightOf="@+id/row2_driver_hvac"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="16"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<LinearLayout
android:id="@+id/fan_control"
android:background="@drawable/hvac_panel_button_bg"
android:layout_width="@dimen/hvac_fan_speed_bar_width"
android:layout_height="@dimen/hvac_panel_group_height"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintLeft_toRightOf="@+id/seat_heat_level_button_left"
app:layout_constraintRight_toLeftOf="@+id/seat_heat_level_button_right"
android:layout_centerVertical="true"
android:layout_centerHorizontal="true"
android:orientation="vertical">
<com.android.systemui.car.hvac.referenceui.FanSpeedBar
android:layout_weight="1"
android:layout_width="match_parent"
android:layout_height="0dp"/>
<com.android.systemui.car.hvac.referenceui.FanDirectionButtons
android:layout_weight="1"
android:layout_width="match_parent"
android:layout_height="0dp"
android:orientation="horizontal"
android:layoutDirection="ltr"/>
</LinearLayout>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/ac_master_switch"
android:background="@drawable/hvac_panel_button_bg"
android:scaleType="center"
style="@style/HvacButton"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
app:layout_constraintLeft_toRightOf="@+id/row2_seat_heat_level_button_left"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419984"
systemui:hvacTurnOffIfPowerOff="false"
systemui:hvacToggleOnButtonDrawable="@drawable/ac_master_switch_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ac_master_switch_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/defroster_button"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/ac_master_switch"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="1"
systemui:hvacPropertyId="320865540"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_front_defroster_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_front_defroster_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/auto_button"
android:background="@drawable/hvac_panel_button_bg"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419978"
android:scaleType="center"
android:layout_gravity="center"
android:layout_width="0dp"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/defroster_button"
app:layout_constraintRight_toLeftOf="@+id/rear_defroster_button"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_auto_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_auto_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/rear_defroster_button"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/auto_button"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="2"
systemui:hvacPropertyId="320865540"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_rear_defroster_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_rear_defroster_off"/>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/passenger_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@+id/row2_passenger_hvac"
systemui:hvacAreaId="2">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/row2_passenger_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toBottomOf="@+id/passenger_hvac"
app:layout_constraintBottom_toBottomOf="parent"
systemui:hvacAreaId="32">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/seat_heat_level_button_right"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintRight_toLeftOf="@+id/passenger_hvac"
app:layout_constraintBottom_toTopOf="@+id/row2_seat_heat_level_button_right"
systemui:hvacAreaId="2"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/row2_seat_heat_level_button_right"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/seat_heat_level_button_right"
app:layout_constraintRight_toLeftOf="@+id/row2_passenger_hvac"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="32"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
</com.android.systemui.car.hvac.HvacPanelView>
The main changes are: a BackgroundAdjustingTemperatureControlView for each zone, with their systemui:hvacAreaId changed to match the values of VehicleAreaSeat::ROW_1_LEFT, VehicleAreaSeat::ROW_2_LEFT, VehicleAreaSeat::ROW_1_RIGHT, and VehicleAreaSeat::ROW_2_RIGHT; and a SeatTemperatureLevelButton for each zone. The layout needs to be arranged properly to match the desired design. Information on how to describe the layout in XML can be found at Android Developers - Layout resource.
The presented layout also requires changing the constant values in the res/values/dimens.xml file. Below is the diff with my changes:
diff --git a/res/values/dimens.xml b/res/values/dimens.xml
index 11649d4..3f96413 100644
--- a/res/values/dimens.xml
+++ b/res/values/dimens.xml
@@ -73,7 +73,7 @@
<dimen name="car_primary_icon_size">@*android:dimen/car_primary_icon_size</dimen>
<dimen name="hvac_container_padding">16dp</dimen>
- <dimen name="hvac_temperature_bar_margin">32dp</dimen>
+ <dimen name="hvac_temperature_bar_margin">16dp</dimen>
<dimen name="hvac_temperature_text_size">56sp</dimen>
<dimen name="hvac_temperature_text_padding">8dp</dimen>
<dimen name="hvac_temperature_button_size">76dp</dimen>
@@ -295,9 +295,9 @@
<dimen name="hvac_panel_row_animation_height_shift">0dp</dimen>
<dimen name="temperature_bar_collapsed_width">96dp</dimen>
- <dimen name="temperature_bar_expanded_width">96dp</dimen>
+ <dimen name="temperature_bar_expanded_width">128dp</dimen>
<dimen name="temperature_bar_collapsed_height">96dp</dimen>
- <dimen name="temperature_bar_expanded_height">356dp</dimen>
+ <dimen name="temperature_bar_expanded_height">200dp</dimen>
<dimen name="temperature_bar_icon_margin">20dp</dimen>
<dimen name="temperature_bar_close_icon_dimen">96dp</dimen>
The next step is to add additional zones to the VHAL configuration. The configuration file is located at hardware/interfaces/automotive/vehicle/2.0/default/impl/vhal_v2_0/DefaultConfig.h .
In my example, I modified HVAC_SEAT_TEMPERATURE and HVAC_TEMPERATURE_SET:
{.config = {.prop = toInt(VehicleProperty::HVAC_SEAT_TEMPERATURE),
.access = VehiclePropertyAccess::READ_WRITE,
.changeMode = VehiclePropertyChangeMode::ON_CHANGE,
.areaConfigs = {VehicleAreaConfig{
.areaId = SEAT_1_LEFT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_1_RIGHT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_2_LEFT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_2_RIGHT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
}},
.initialValue = {.int32Values = {0}}}, // +ve values for heating and -ve for cooling
{.config = {.prop = toInt(VehicleProperty::HVAC_TEMPERATURE_SET),
.access = VehiclePropertyAccess::READ_WRITE,
.changeMode = VehiclePropertyChangeMode::ON_CHANGE,
.configArray = {160, 280, 5, 605, 825, 10},
.areaConfigs = {VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_1_LEFT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_1_RIGHT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_2_LEFT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_2_RIGHT),
.minFloatValue = 16,
.maxFloatValue = 32,
}
}},
.initialAreaValues = {{(int)(VehicleAreaSeat::ROW_1_LEFT), {.floatValues = {16}}},
{(int)(VehicleAreaSeat::ROW_1_RIGHT), {.floatValues = {17}}},
{(int)(VehicleAreaSeat::ROW_2_LEFT), {.floatValues = {16}}},
{(int)(VehicleAreaSeat::ROW_2_RIGHT), {.floatValues = {19}}},
}},
This configuration modifies the HVAC seat temperature and temperature set properties to include all four zones: front left, front right, rear left, and rear right. The areaId for each zone is specified accordingly. The minInt32Value and maxInt32Value for seat temperatures are set to -3 and 3, respectively, while the temperature range is set between 16 and 32 degrees Celsius.
After modifying the VHAL configuration, the new values will be transmitted to the VendorVehicleHal. This ensures that the HVAC settings are accurately reflected and controlled within the system. For detailed information on how to use these configurations and further transmit this data over the network, refer to our articles: "Controlling HVAC Module in Cars Using Android: A Dive into SOME/IP Integration" and "Integrating HVAC Control in Android with DDS". These resources provide comprehensive guidance on leveraging network protocols like SOME/IP and DDS for effective HVAC module control in automotive systems.
Building the SystemUI and VHAL components requires specific commands and steps to ensure they are correctly compiled and deployed. From the root of the AOSP tree (after sourcing build/envsetup.sh and running lunch), build both modules with mmma:
mmma packages/apps/Car/SystemUI/
mmma hardware/interfaces/automotive/vehicle/2.0/default/
After building the SystemUI and VHAL, you need to upload the compiled applications to the device. The system and vendor partitions must be writable first (adb root, then adb remount). Use the following commands:
adb push out/target/product/rpi4/system/system_ext/priv-app/CarSystemUI/CarSystemUI.apk /system/system_ext/priv-app/CarSystemUI/
adb push out/target/product/rpi4/vendor/bin/hw/android.hardware.automotive.vehicle@2.0-default-service /vendor/bin/hw
In this guide, we covered the steps necessary to modify the HVAC configurations by updating the XML layout and VHAL configuration files. We also detailed the process of building and deploying the SystemUI and VHAL components to your target device.
By following these steps, you ensure that your system reflects the desired changes and operates as intended.
The automotive industry has been rapidly evolving with technological advancements that enhance the driving experience and safety. Among these innovations, the Android Automotive Operating System (AAOS) has stood out, offering a versatile and customizable platform for car manufacturers.
The Exterior View System (EVS) is a comprehensive camera-based system designed to provide drivers with real-time visual monitoring of their vehicle's surroundings. It typically includes multiple cameras positioned around the vehicle to eliminate blind spots and enhance situational awareness, significantly aiding in maneuvers like parking and lane changes. By integrating with advanced driver assistance systems, EVS contributes to increased safety and convenience for drivers.
For more detailed information about EVS and its configuration, we highly recommend reading our article "Android AAOS 14 - Surround View Parking Camera: How to Configure and Launch EVS (Exterior View System)." This foundational article provides essential insights and instructions that we will build upon in this guide.
The latest Android Automotive Operating System , AAOS 14, presents new possibilities, but it does not natively support Ethernet cameras. In this article, we describe our implementation of an Ethernet camera integration with the Exterior View System (EVS) on Android.
Our approach involves connecting a USB camera to a Windows laptop and streaming the video using the Real-time Transport Protocol (RTP). By employing the powerful FFmpeg software, the video stream will be broadcast and described in an SDP (Session Description Protocol) file, accessible via an HTTP server. On the Android side, we'll utilize the FFmpeg library to receive and decode the video stream, effectively bringing the camera feed into the AAOS 14 environment.
This article provides a step-by-step guide on how we achieved this integration of the EVS network camera, offering insights and practical instructions for those looking to implement a similar solution. The following diagram provides an overview of the entire process:

To enable RTP camera streaming on Android, the first step is to build the FFmpeg library for the platform. This section describes the process in detail, using the ffmpeg-android-maker project. Follow these steps to successfully build and integrate the FFmpeg library with the Android EVS (Exterior View System) Driver.
First, install the Android SDK. For Ubuntu/Debian systems, you can use the following commands:
sudo apt update && sudo apt install android-sdk
The SDK should be installed in /usr/lib/android-sdk .
Download the Android NDK (Native Development Kit) from the official website:
https://developer.android.com/ndk/downloads
After downloading, extract the NDK to your desired location.
Clone the ffmpeg-android-maker repository and navigate to its directory:
git clone https://github.com/Javernaut/ffmpeg-android-maker.git
cd ffmpeg-android-maker
Set the environment variables to point to the SDK and NDK:
export ANDROID_SDK_HOME=/usr/lib/android-sdk
export ANDROID_NDK_HOME=/path/to/ndk/
Run the build script:
./ffmpeg-android-maker.sh
This script will download FFmpeg source code and dependencies, and compile FFmpeg for various Android architectures.
After the build process is complete, copy the .so library files from build/ffmpeg/ to the EVS Driver directory in your Android project:
cp build/ffmpeg/*.so /path/to/android/project/packages/services/Car/cpp/evs/sampleDriver/aidl/
Edit the Android.bp file in the aidl directory to include the prebuilt FFmpeg libraries:
cc_prebuilt_library_shared {
name: "rtp-libavcodec",
vendor: true,
srcs: ["libavcodec.so"],
strip: {
none: true,
},
check_elf_files: false,
}
cc_prebuilt_library_shared {
name: "rtp-libavformat",
vendor: true,
srcs: ["libavformat.so"],
strip: {
none: true,
},
check_elf_files: false,
}
cc_prebuilt_library_shared {
name: "rtp-libavutil",
vendor: true,
srcs: ["libavutil.so"],
strip: {
none: true,
},
check_elf_files: false,
}
cc_prebuilt_library_shared {
name: "rtp-libswscale",
vendor: true,
srcs: ["libswscale.so"],
strip: {
none: true,
},
check_elf_files: false,
}
Add prebuilt libraries to EVS Driver app:
cc_binary {
name: "android.hardware.automotive.evs-default",
defaults: ["android.hardware.graphics.common-ndk_static"],
vendor: true,
relative_install_path: "hw",
srcs: [
":libgui_frame_event_aidl",
"src/*.cpp"
],
shared_libs: [
"rtp-libavcodec",
"rtp-libavformat",
"rtp-libavutil",
"rtp-libswscale",
"android.hardware.graphics.bufferqueue@1.0",
"android.hardware.graphics.bufferqueue@2.0",
"android.hidl.token@1.0-utils",
// ...
],
}
By following these steps, you will have successfully built the FFmpeg library for Android and integrated it into the EVS Driver.
In this chapter, we will demonstrate how to quickly implement RTP support for the EVS (Exterior View System) driver in Android AAOS 14. This implementation is for demonstration purposes only. For production use, the implementation should be optimized, adapted to specific requirements, and all possible configurations and edge cases should be thoroughly tested. Here, we will focus solely on displaying the video stream from RTP.
The main files responsible for capturing and decoding video from USB cameras are implemented in the EvsV4lCamera and VideoCapture classes. To handle RTP, we will copy these classes and rename them to EvsRTPCamera and RTPCapture. RTP handling will be implemented in RTPCapture. We need to implement four main functions:
bool open(const char* deviceName, const int32_t width = 0, const int32_t height = 0);
void close();
bool startStream(std::function<void(RTPCapture*, imageBuffer*, void*)> callback = nullptr);
void stopStream();
We will use the official example from the FFmpeg library, https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demux_decode.c, which decodes the specified video stream into RGBA buffers. After adapting the example, the RTPCapture.cpp file will look like this:
#include "RTPCapture.h"
#include <android-base/logging.h>
#include <errno.h>
#include <error.h>
#include <fcntl.h>
#include <memory.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cassert>
#include <iomanip>
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <sstream>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static enum AVPixelFormat out_pix_fmt = AV_PIX_FMT_RGBA;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static struct SwsContext *resize;
static const char *src_filename = NULL;
static uint8_t *video_dst_data[4] = {NULL};
static int video_dst_linesize[4];
static int video_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *frame = NULL;
static AVFrame *frame2 = NULL;
static AVPacket *pkt = NULL;
static int video_frame_count = 0;
int RTPCapture::output_video_frame(AVFrame *frame)
{
LOG(INFO) << "Video_frame: " << video_frame_count++
<< " ,scale height: " << sws_scale(resize, frame->data, frame->linesize, 0, height, video_dst_data, video_dst_linesize);
if (mCallback) {
imageBuffer buf;
buf.index = video_frame_count;
buf.length = video_dst_bufsize;
mCallback(this, &buf, video_dst_data[0]);
}
return 0;
}
int RTPCapture::decode_packet(AVCodecContext *dec, const AVPacket *pkt)
{
int ret = 0;
ret = avcodec_send_packet(dec, pkt);
if (ret < 0) {
return ret;
}
// get all the available frames from the decoder
while (ret >= 0) {
ret = avcodec_receive_frame(dec, frame);
if (ret < 0) {
if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
{
return 0;
}
return ret;
}
// write the frame data to output file
if (dec->codec->type == AVMEDIA_TYPE_VIDEO) {
ret = output_video_frame(frame);
}
av_frame_unref(frame);
if (ret < 0)
return ret;
}
return 0;
}
int RTPCapture::open_codec_context(int *stream_idx,
AVCodecContext **dec_ctx, AVFormatContext *fmt_ctx, enum AVMediaType type)
{
int ret, stream_index;
AVStream *st;
const AVCodec *dec = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Allocate a codec context for the decoder */
*dec_ctx = avcodec_alloc_context3(dec);
if (!*dec_ctx) {
fprintf(stderr, "Failed to allocate the %s codec context\n",
av_get_media_type_string(type));
return AVERROR(ENOMEM);
}
/* Copy codec parameters from input stream to output codec context */
if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
av_get_media_type_string(type));
return ret;
}
av_opt_set((*dec_ctx)->priv_data, "preset", "ultrafast", 0);
av_opt_set((*dec_ctx)->priv_data, "tune", "zerolatency", 0);
/* Init the decoders */
if ((ret = avcodec_open2(*dec_ctx, dec, NULL)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
*stream_idx = stream_index;
}
return 0;
}
bool RTPCapture::open(const char* /*deviceName*/, const int32_t /*width*/, const int32_t /*height*/) {
LOG(INFO) << "RTPCapture::open";
int ret = 0;
avformat_network_init();
mFormat = V4L2_PIX_FMT_YUV420;
mWidth = 1920;
mHeight = 1080;
mStride = 0;
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, "http://192.168.1.59/stream.sdp", NULL, NULL) < 0) {
LOG(ERROR) << "Could not open network stream";
return false;
}
LOG(INFO) << "Input opened";
isOpened = true;
/* retrieve stream information */
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
LOG(ERROR) << "Could not find stream information";
return false;
}
LOG(INFO) << "Stream info found";
if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
/* allocate image where the decoded image will be put */
width = video_dec_ctx->width;
height = video_dec_ctx->height;
pix_fmt = video_dec_ctx->sw_pix_fmt;
resize = sws_getContext(width, height, AV_PIX_FMT_YUVJ422P,
width, height, out_pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
LOG(ERROR) << "RTPCapture::open pix_fmt: " << video_dec_ctx->pix_fmt
<< ", sw_pix_fmt: " << video_dec_ctx->sw_pix_fmt
<< ", my_fmt: " << pix_fmt;
ret = av_image_alloc(video_dst_data, video_dst_linesize,
width, height, out_pix_fmt, 1);
if (ret < 0) {
LOG(ERROR) << "Could not allocate raw video buffer";
return false;
}
video_dst_bufsize = ret;
}
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!audio_stream && !video_stream) {
LOG(ERROR) << "Could not find audio or video stream in the input, aborting";
ret = 1;
return false;
}
frame = av_frame_alloc();
if (!frame) {
LOG(ERROR) << "Could not allocate frame";
ret = AVERROR(ENOMEM);
return false;
}
frame2 = av_frame_alloc();
pkt = av_packet_alloc();
if (!pkt) {
LOG(ERROR) << "Could not allocate packet";
ret = AVERROR(ENOMEM);
return false;
}
return true;
}
void RTPCapture::close() {
LOG(DEBUG) << __FUNCTION__;
}
bool RTPCapture::startStream(std::function<void(RTPCapture*, imageBuffer*, void*)> callback) {
LOG(INFO) << "startStream";
if(!isOpen()) {
LOG(ERROR) << "startStream failed. Stream not opened";
return false;
}
stop_thread_1 = false;
mCallback = callback;
mCaptureThread = std::thread([this]() { collectFrames(); });
return true;
}
void RTPCapture::stopStream() {
LOG(INFO) << "stopStream";
stop_thread_1 = true;
mCaptureThread.join();
mCallback = nullptr;
}
bool RTPCapture::returnFrame(int i) {
LOG(INFO) << "returnFrame" << i;
return true;
}
void RTPCapture::collectFrames() {
int ret = 0;
LOG(INFO) << "Reading frames";
/* read frames from the file */
while (av_read_frame(fmt_ctx, pkt) >= 0) {
if (stop_thread_1) {
return;
}
if (pkt->stream_index == video_stream_idx) {
ret = decode_packet(video_dec_ctx, pkt);
}
av_packet_unref(pkt);
if (ret < 0)
break;
}
}
int RTPCapture::setParameter(v4l2_control&) {
LOG(INFO) << "RTPCapture::setParameter";
return 0;
}
int RTPCapture::getParameter(v4l2_control&) {
LOG(INFO) << "RTPCapture::getParameter";
return 0;
}
std::set<uint32_t> RTPCapture::enumerateCameraControls() {
LOG(INFO) << "RTPCapture::enumerateCameraControls";
std::set<uint32_t> ctrlIDs;
return ctrlIDs;
}
void* RTPCapture::getLatestData() {
LOG(INFO) << "RTPCapture::getLatestData";
return nullptr;
}
bool RTPCapture::isFrameReady() {
LOG(INFO) << "RTPCapture::isFrameReady";
return true;
}
void RTPCapture::markFrameConsumed(int i) {
LOG(INFO) << "RTPCapture::markFrameConsumed frame: " << i;
}
bool RTPCapture::isOpen() {
LOG(INFO) << "RTPCapture::isOpen";
return isOpened;
}
Next, we need to modify EvsRTPCamera to use our RTPCapture class instead of VideoCapture . In EvsRTPCamera.h , add:
#include "RTPCapture.h"
And replace:
VideoCapture mVideo = {};
with:
RTPCapture mVideo = {};
In EvsRTPCamera.cpp , we also need to make changes. In the forwardFrame(imageBuffer* pV4lBuff, void* pData) function, replace:
mFillBufferFromVideo(bufferDesc, (uint8_t*)targetPixels, pData, mVideo.getStride());
with:
memcpy(targetPixels, pData, pV4lBuff->length);
This is because the VideoCapture class provides a buffer from the camera in various YUYV pixel formats, and the mFillBufferFromVideo function is responsible for converting the pixel format to RGBA. In our case, RTPCapture already provides an RGBA buffer; the conversion is done in RTPCapture::output_video_frame(AVFrame*) using sws_scale from the FFmpeg library.
Now we need to ensure that our RTP camera is recognized by the system. The EvsEnumerator class and its enumerateCameras function are responsible for detecting cameras. This function adds all video devices (/dev/video*) found in the /dev/ directory.
To add our RTP camera, we will append the following code at the end of the enumerateCameras function:
if (addCaptureDevice("rtp1")) {
++captureCount;
}
This will add a camera with the ID "rtp1" to the list of detected cameras, making it visible to the system.
The final step is to modify the EvsEnumerator::openCamera function to direct the camera with the ID "rtp1" to the RTP implementation. Normally, when opening a USB camera, an instance of the EvsV4lCamera class is created:
pActiveCamera = EvsV4lCamera::Create(id.data());
In our example, we will hardcode the ID check and create the appropriate object:
if (id == "rtp1") {
pActiveCamera = EvsRTPCamera::Create(id.data());
} else {
pActiveCamera = EvsV4lCamera::Create(id.data());
}
With this implementation, our camera should start working. Now we need to build the EVS Driver application and push it to the device along with the FFmpeg libraries:
mmma packages/services/Car/cpp/evs/sampleDriver/
adb push out/target/product/rpi4/vendor/bin/hw/android.hardware.automotive.evs-default /vendor/bin/hw/
To stream video from your camera, you need to install FFmpeg ( https://www.ffmpeg.org/download.html#build-windows ) and an HTTP server on the computer that will be streaming the video.
Start FFmpeg (example on Windows):
ffmpeg -f dshow -video_size 1280x720 -i video="USB Camera" -c copy -f rtp rtp://192.168.1.53:8554
where -f dshow selects the DirectShow capture input on Windows, -video_size 1280x720 sets the requested capture resolution, -i video="USB Camera" selects the capture device by its DirectShow name, -c copy forwards the stream without re-encoding, and rtp://192.168.1.53:8554 is the destination address and port of the RTP stream.
After starting FFmpeg, you should see output similar to this on the console:

Here, we see the input, output, and SDP sections. In the input section, the codec is JPEG, which is what we need. The pixel format is yuvj422p, with a resolution of 1920x1080 at 30 fps. The stream parameters in the output section should match.
Next, save the SDP section to a file named stream.sdp on the HTTP server. Our EVS Driver application needs to fetch this file, which describes the stream.
In our example, the Android device should access this file at: http://192.168.1.59/stream.sdp
The exact content of the file should be:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.53
t=0 0
a=tool:libavformat 61.1.100
m=video 8554 RTP/AVP 26
Now, restart the EVS Driver application on the Android device:
killall android.hardware.automotive.evs-default
Then, configure the EVS app to use the camera "rtp1". For detailed instructions on how to configure and launch the EVS (Exterior View System), refer to the article "Android AAOS 14 - Surround View Parking Camera: How to Configure and Launch EVS (Exterior View System)".
In this chapter, we will measure and compare the latency of the video stream from a camera connected via USB and RTP.
This latency is composed of several factors:
Below are the photos showing the latency:
USB Camera

RTP Camera

From these measurements, we found that the average latency for a camera connected via USB to the Android device is 200ms, while the latency for the camera connected via RTP is 150ms. This result is quite surprising.
The reason behind these results is the stream format: the RTP stream carries JPEG, which the decoding pipeline handles quickly, while this particular USB camera delivers its YUV modes more slowly.
All camera modes can be checked using the command:
ffmpeg -f dshow -list_options true -i video="USB Camera"

This article has taken you through the comprehensive process of integrating an RTP camera into the Android EVS (Exterior View System) framework, highlighting the detailed steps involved in both the implementation and the performance evaluation.
We began our journey by developing new classes, EvsRTPCamera and RTPCapture , which were specifically designed to handle RTP streams using FFmpeg. This adaptation allowed us to process and stream real-time video effectively. To ensure our system recognized the RTP camera, we made critical adjustments to the EvsEnumerator class. By customizing the enumerateCameras and openCamera functions, we ensured that our RTP camera was correctly instantiated and recognized by the system.
Next, we focused on building and deploying the EVS Driver application, including the necessary FFmpeg libraries, to our target Android device. This step was crucial for validating our implementation in a real-world environment. We also conducted a detailed performance evaluation to measure and compare the latency of video feeds from USB and RTP cameras. Using a timer displayed on a computer screen, we captured the timer with the EVS camera and compared the time shown on both the computer and Android screens. This method allowed us to accurately determine the latency introduced by each camera setup.
Our performance tests revealed that the RTP camera had an average latency of 150ms, while the USB camera had a latency of 200ms. This result was unexpected but highly informative. The lower latency of the RTP camera was largely due to the use of the JPEG format, which our particular USB camera handled less efficiently due to its slower YUV processing. This significant finding underscores the RTP camera's suitability for applications requiring real-time video performance, such as automotive surround view parking systems, where quick response times are essential for safety and user experience.
Quite a new droid around. The operating system has been on the market for a while, still missing a lot, but it is out, already implemented in cars, and coming for more. Polestar and Volvo were the first to bring Android Automotive OS to their Polestar 2 and XC40 Recharge.
Other car manufacturers like PSA, Ford, Honda, GM, and more have announced that they are going to bring Android Automotive OS to their cars, or have at least hinted at cooperating with Google Mobile Services. Some implementations come with Google Automotive Services (GAS) – Play Store, Google Maps, Google Assistant – while others ship without them, with their own app stores and assistants. What's most interesting for now is bringing your application to the store.
Creating an Android app for automotive doesn't differ that much from mobile development and is similar to Android Auto. You start in Android Studio, setting it up for canary releases to get the emulators. The first issue is that Android Automotive OS emulation needs an Intel CPU right now and doesn't support Apple M1 or AMD. Available emulators start at Android 9 (Pie), with a Google image and a custom one for Polestar 2; Android 10 (Q) also comes with a Volvo image, skinned to look like the XC40 cockpit; the Android 11 and freshly released Android 12 (API 32) emulators are Google-only. To get your hands on the custom versions for Volvo or Polestar 2, you need to add links to SDK update sites.
Diving into the details of development, and the Android Automotive Operating System in general, the main thing you are going to spot is a problem with documentation and communication with Google, as the Android Automotive platform feels like it is lacking options and solutions.
Developers and mobile groups are complaining about it, some of them trying to establish a communication channel and get Google on the other side. Google is not providing a clear roadmap for AAOS, and it is risky or at least could be expensive to develop applications right now. Some parts of the Operating System code hint at certain features, but documentation is silent about them.
Automotive applications run in a shell (Google Automotive App Host) similar to the one used for Android Auto, and they do not have an Activity, so the UI can't be changed. Apps are rendered automatically, and all of them look similar.
There is still an option to install a regular application through ADB, but this sounds easy only for an emulator. Options for app developers to brand their applications are very limited: actually, it is just the app icon at the top of the screen and the color of progress bars, like those showing how much of a podcast or song you have already listened to.
Car manufacturers and automotive OEMs have more options to reflect their brand and style of an interior. They can customize colors, typography, layouts, and more. There is still a requirement to follow design patterns for automotive, and Google is providing a whole design system page .
Applications submitted to the store undergo a mandatory additional review. Reviewers have to be able to perform a full check – logins, payments, etc. – so they need to be provided with all required data and accounts. That adds uncertainty to innovating and going beyond what is expected, as the reviewer has to agree that our app meets the requirements.
Right now, the documentation states that the supported categories for Android Automotive OS apps are focused on the in-vehicle infotainment experience: Media, Navigation, Point of Interest, and Video. Compared to Android Auto, it is missing the Messaging category and adds Video. Requirements are in place for all apps in general or for specific categories, and most of them follow the principle of keeping the app very simple so it doesn't distract the driver.
How does it work? If you don't have a payment option set on your account, it should ask you to add it on another device. You can't ask a user to agree to recurring payments or purchase multiple items at once – it is not allowed even if you are not driving. That appears inconsistent with the video app category, which is not allowed to work at all while driving but can display video normally when stopped.
The Play Store right now presents a handful of applications – it is fairly easy to count all of them – most being in-vehicle infotainment apps: media (music and podcasts) and navigation. Nothing is stated about mixing categories, and none of the existing apps seems to cover more than one.
The Android Automotive Operating System, being an integral part of the car, brings ideas about controlling the features of a car, or at least reading them and reacting accordingly within an application. Emulation provides just a few options to simulate car state: ignition, speed, gear, parking brake, low fuel level, night mode, and environment sensors (temperature, pressure, etc.). There is also an option to load a recording of sensor reads.
There are definitely more sensors that we are missing here that could have come in handy, and there is an extensive list of vehicle property ids to be read, with possible extensions from a car manufacturer and an option to subscribe for a callback informing us that property changed.
Coming to controlling a car's features leaves us with scarce information. The first thing that came to my mind was granting all the permissions through ADB, and it brought joy when permissions like car climate control appeared, but no service or anything else is provided to control those features. Documentation reveals that there is a superuser responsible for running the OEM apps that control e.g. air conditioning, but for now, there is no option for a developer to make an app that will open a window for you.
It should be possible to build an infotainment app that brings all the information you can get onto a car screen (worth mentioning: the Android Automotive Operating System should be able to control the display behind the steering wheel, which is missed in the documentation as well), but do not forget that there is no such category, so it possibly won't get through the mandatory check.
After all, AAOS is here to standardize what we will see in our cars. It brings our most used applications, without plugging in the phone. We can choose our favorite navigation application and make shortcut icons for the most visited places. Our vehicle will remember where we were with our podcast and what playlist was on.
It looks like system releases are becoming more frequent, and Google is adding the features necessary to correctly control everything across different cars. We should see the OS in more and more cars, as it cuts costs for manufacturers and saves on developing applications. Custom skins and customizations for the screens can bring a bit of your style to your car.
That summary of what is going on in the Android Automotive Operating System and Google Automotive Services might show there is a slight mess, both around the code and the documentation. That seems to be the feeling of most of the devs sharing their experiences. It is risky to develop apps without a clear understanding of which way the new droid is going and without any board or support medium, at least to gather developers together.
That being said, it is a great time to put your app in the store and be there first – to explore what could get through the check and how far they let apps develop. We would love to get in the car at some point, hover a phone over an NFC spot, and let the car quickly adjust everything for you, with your key apps ready.
Do you want to start building apps for AAOS? Here is our guide to help you create AAOS Hello World .
Reach out for tailored solutions and expert guidance.