
Purpose and Goals

We are holding a face-to-face audio workshop from September 13 - 14 in Montreal. The main purpose of the meeting is to finalize the sound manager design and code for the Electric Eel release, along with the audio roadmap for 2018.

Event Details

DATE: September 13 - 14, 2017
TIME: 9:00 am - 6:00 pm
VENUE: Audiokinetic, Inc.
215 Rue Saint-Jacques #1000 (10th floor)
Montreal, QC H2Y 1M6
Canada

VIDEO CONFERENCE: https://meet.google.com/qis-oxgk-ruv

Local participants who want to share their screen should present to the meeting instead of joining it.

ACCOMMODATIONS:

Next door to the Audiokinetic offices is the Intercontinental Hotel Montreal

Attendees

  • Francois Thibault - Audiokinetic
  • Tai Vuong - Audiokinetic
  • Fulup Ar Foll - IoT.bzh
  • Michael Fabry - Microchip
  • Yuichi Kusakabe - Fujitsu TEN
  • Karl Gladigau - Fiberdyne Systems
  • Mark Farrugia - Fiberdyne Systems
  • Toshiaki Isogai - Denso (Video)
  • Naohiro Nishiguchi - ADIT (Video)
  • Kazumasa Mitsunari - WITZ (Video)
  • Hiroshi Kojima - Mentor (Video)

Agenda

Day 1

  • 9:00-9:30 Welcome information, setup, discuss agenda
  • 9:30-10:30 High-level binding deep dive (Francois)
  • 10:30-10:45 Break
  • 10:45-12:00 HAL and controller changes (Fulup)
  • 12:00-13:00 Lunch
  • 13:00-15:00 Microchip perspective on audio efforts (Michael), Japanese OEM requirements (Yuichi)
  • 15:00-15:15 Break
  • 15:15-18:00 Setting up MOST hardware/software with M3 boards

Day 2

  • 9:00-10:30 Define demonstration component architecture, use cases, roadmap for Dresden AMM and CES 2018
  • 10:30-10:45 Break
  • 11:00-12:00 Fiberdyne demo and DSP overview
  • 12:00-13:00 Lunch
  • 13:00-15:00 Standard HAL control definition
  • 15:00-15:15 Break
  • 15:15-18:00 Technical topics for integration of AAAA components (HAL, Controller, Unicens, DSP controls, high-level API)

Technical Decisions

  • Audio policy requires substantial context. Policy will be kept as a (shared library) module, at least initially.
    • The end goal is to further isolate it behind a JSON interface so that it can be relocated into a dedicated audio policy binding implementation
  • High-level audio binding will keep application-specific context (through the session) in order to enforce that applications cannot control other applications' stream characteristics
  • We should limit the number of permissions linked to audio. A proposed initial working list: stream playback access, stream record access, priority signal (policy can affect other streams' volume), audio routing changes
  • High-level API will need to consider format information when reporting the list of potential sources/sinks for both explicit and automatic routing
  • HAL API will need to be extended to report device format capabilities to support the above
  • High-level API will provide a list of standard properties to applications and be easily extendable to new ones (e.g. echo cancellation). An initial working example will provide access to master EQ controls on the M3 DSP through the Fiberdyne lower-level service
  • Currently keeping AAAA development on DD until master stabilizes; it is expected that everyone will migrate to EE for CES (likely shortly after the Dresden AMM)
  • HAL capability to connect to vehicle signaling might be required
  • HAL should have provision to report URIs for additional capabilities (e.g. a specific meta-data interface for VU meter display)
  • Specific HAL implementations will need to be provided for generic dynamic device classes such as USB and Bluetooth
  • Discussion on handling and support for legacy applications.
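To make several of the decisions above concrete (device format reporting, standard properties, and URIs for additional capabilities), a HAL capability report might look something like the following JSON sketch. This is purely illustrative: every field name, value, and URI here is a hypothetical assumption, not an agreed-upon schema.

```json
{
  "device": "hw:ak4613",
  "class": "builtin",
  "formats": {
    "rates": [44100, 48000],
    "channels": [2, 8],
    "sample-formats": ["S16_LE", "S24_LE"]
  },
  "properties": ["master-volume", "balance", "fade", "eq-master"],
  "capability-uris": {
    "vu-meter": "ws://localhost:1234/api/hal-fddsp/vumeter"
  }
}
```

The high-level API could filter the "formats" section when listing candidate sources/sinks for routing, while the "capability-uris" map lets a HAL advertise optional services (such as the VU meter meta-data interface) without extending the core API for each one.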

Demonstrations

  • Audiokinetic will show high-level binding implementation with policy application examples at Dresden AMM BoF session (HTML5 demo). Example scenarios could be: volume control based on speed, prevent phone ringing in reverse gear, audio ducking cases, hands free call interruption, and others from https://wiki.automotivelinux.org/eg-ui-graphics-req-audiorouting
  • Desire to upgrade the audio quality of embedded demonstrations for CES. Detailed planning to occur in Dresden during audio working session
  • Embedded demonstrator should be connected directly to AGL signals (e.g. simulated CAN events). Different simulators/applications can be used to simulate those signals.
  • Consider migrating the media player back-end from Qt to an AGL service such as MPDC or an alternative (e.g. GStreamer-based)
  • Advanced audio dashboard application needed. A merge of capabilities existing in Audiokinetic and Fiberdyne demonstrator is likely a good basis
  • Common hardware platform for the demonstrator should be Kingfisher; although early availability may be a problem, we can get started with just the Renesas platform for the time being.
  • HALs will need to be available for different hardware (e.g. Renesas / Intel / Microchip / Fiberdyne)
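As an illustration of the "AGL signals" idea above, a simulator feeding the embedded demonstrator might emit events like the following JSON sketch, which the policy module could consume to drive scenarios such as speed-based volume control. The signal name, fields, and units are hypothetical assumptions for illustration only.

```json
{
  "signal": "vehicle.speed",
  "value": 72.5,
  "unit": "km/h",
  "timestamp": 1505311200
}
```

Using the same event shape for real CAN-derived signals and simulated ones would let the demonstrator run identically on the bench and in a vehicle.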

Results

  • Shared understanding and technical alignment between various components: high-level audio binding and policy, Fiberdyne DSP capabilities, Microchip Unicens, HAL and afb-controller, etc.
  • Set up MOST hardware to work in the AGL environment with the help of Microchip
  • Collective agreement that the scope of audio work necessary for secure and complete audio services is currently significantly underestimated
  • Definition of shared roadmap responsibilities between companies so that most components and functionality are demoable at the Dresden AMM, in order to build a solid embedded demonstrator for CES 2018
agl-distro/sep2017-audio-f2f.txt · Last modified: 2017/09/21 10:50 by fthibault