[RFC PATCH v2 0/7] Add audio support in v4l2 framework

Hans Verkuil hverkuil at xs4all.nl
Fri Aug 4 22:34:09 AEST 2023


On 04/08/2023 14:19, Shengjiu Wang wrote:
> On Wed, Aug 2, 2023 at 8:28 PM Hans Verkuil <hverkuil at xs4all.nl> wrote:
>>
>> On 02/08/2023 14:02, Shengjiu Wang wrote:
>>> On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai <tiwai at suse.de> wrote:
>>>>
>>>> On Wed, 02 Aug 2023 09:32:37 +0200,
>>>> Hans Verkuil wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> On 25/07/2023 08:12, Shengjiu Wang wrote:
>>>>>> Audio signal processing has a requirement for memory-to-memory
>>>>>> processing similar to video.
>>>>>>
>>>>>> This patch set adds this support to the v4l2 framework, defining the
>>>>>> new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and
>>>>>> V4L2_BUF_TYPE_AUDIO_OUTPUT, and the new format v4l2_audio_format
>>>>>> for the audio use case.
>>>>>>
>>>>>> The created audio device is named "/dev/audioX".
>>>>>>
>>>>>> It also adds memory-to-memory support for two kinds of i.MX ASRC
>>>>>> modules.
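(For reference, the additions described above would look roughly like the
fragment below. The real definitions are in the patches themselves; the
struct fields shown here are only guesses for illustration.)

/*
 * Sketch only: the series adds two entries to enum v4l2_buf_type in
 * include/uapi/linux/videodev2.h:
 *
 *   V4L2_BUF_TYPE_AUDIO_CAPTURE
 *   V4L2_BUF_TYPE_AUDIO_OUTPUT
 *
 * plus a per-queue audio format, presumably reachable through the fmt
 * union of struct v4l2_format. The field names below are invented.
 */
#include <linux/types.h>

struct v4l2_audio_format {
	__u32 rate;		/* sample rate in Hz (guess) */
	__u32 format;		/* sample format code (guess) */
	__u32 channels;		/* channel count (guess) */
	__u32 buffersize;	/* bytes per buffer (guess) */
};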
>>>>>
>>>>> Before I spend time on this: are the audio maintainers OK with doing
>>>>> this in V4L2?
>>>>>
>>>>> I do want to have a clear statement on this as it is not something I
>>>>> can decide.
>>>>
>>>> Well, I personally don't mind having some audio capability in the v4l2
>>>> layer.  But the only uncertain thing for now is whether this is a
>>>> must-have or not.
>>>>
>>>
>>> Thanks, I am also not sure about this.  I am also confused about why
>>> there is no m2m implementation for audio in the kernel.  Audio has
>>> decoding, encoding and post-processing stages similar to video.
>>>
>>>>
>>>> IIRC, the implementation on the sound driver side was never done just
>>>> because there was no similar implementation?  If so, and if the
>>>> extension to the v4l2 core layer is needed anyway, shouldn't the
>>>> possible other route be given more consideration?
>>>>
>>>
>>> Actually, I'd like someone to point me to that other route.  I'd like to
>>> try it.
>>>
>>> The reason why I chose to extend v4l2 for this audio usage is that v4l2
>>> looks the best fit for this audio m2m implementation.  v4l2 is designed
>>> for m2m usage.  If we need to implement another 'route', I don't think it
>>> can do better than v4l2.
>>>
>>> I would appreciate it if someone could share their ideas or workable
>>> solutions.  And please don't just ignore my request and my patch.
>>
>> To give a bit more background: if it is decided to use the v4l API for this
>> (and I have no objection to this from my side since API/framework-wise it is a
>> good fit for this), then there are a number of things that need to be done to
>> get this into the media subsystem:
>>
>> - documentation for the new uAPI
>> - add support for this to v4l2-ctl
>> - add v4l2-compliance tests for the new device
>> - highly desirable: have a virtual driver (similar to vim2m) that supports this:
>>   it could be as simple as just copying input to output. This helps regression
>>   testing.
>> - it might need media controller support as well. TBD.
>>
>> None of this is particularly complex, but taken all together it is a fair
>> amount of work that also needs a lot of review time from our side.
>>
>> I want to add one more option to the mix: drivers/media/v4l2-core/v4l2-mem2mem.c is
>> the main m2m framework, but it relies heavily on the videobuf2 framework for
>> the capture and output queues.
>>
>> The core vb2 implementation in drivers/media/common/videobuf2/videobuf2-core.c
>> is independent of V4L2 and can be used by other subsystems (in our case, it is
>> also used by the DVB API). One possibility is to create an ALSA version of
>> v4l2-mem2mem.c that uses the core vb2 code with an ALSA uAPI on top.
>>
>> So in drivers/media/common/videobuf2/ you would have a videobuf2-alsa.c besides
>> the already existing videobuf2-v4l2.c and -dvb.c.
>>
>> Perhaps parts of v4l2-mem2mem.c can be reused as well in that case, but I am
>> not sure if it is worth the effort. I suspect copying it to an alsa-mem2mem.c
>> and adapting it for alsa is easiest if you want to go that way.
>>
> 
> Thanks.
> 
> Does this mean that videobuf2-v4l2.c and v4l2-mem2mem.c are dedicated
> to video devices?  If audio wants to use the v4l2 framework, do we need
> to create videobuf2-alsa.c and alsa-mem2mem.c?  That may cause a lot of
> function duplication.

The videobuf2-v4l2.c sits on top of videobuf2-core.c and provides the V4L2
uAPI for the streaming functionality. If you don't want to use the V4L2
uAPI for this, then you would need a videobuf2-alsa.c that provides a
(possibly new) ALSA uAPI. Whether that makes sense is something I cannot
decide.

v4l2-mem2mem.c uses videobuf2-v4l2.c, so if you need an ALSA version, then
you probably need to create an alsa-mem2mem.c (possibly some functionality
can be shared).
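
To make the layering concrete, here is a minimal sketch of what such a
videobuf2-alsa/alsa-mem2mem user could look like at the point where it sets
up a queue directly on the vb2 core, bypassing videobuf2-v4l2.c. The vb2
symbols (struct vb2_queue, struct vb2_ops, vb2_core_queue_init(),
vb2_dma_contig_memops) exist today; everything named alsa_m2m_* and the
queue type constant are made up for illustration.

#include <media/videobuf2-core.h>
#include <media/videobuf2-dma-contig.h>

#define ALSA_M2M_BUF_TYPE	1	/* invented: vb2 core only needs a non-zero type */
#define ALSA_M2M_PERIOD_BYTES	4096	/* placeholder buffer size */

static int alsa_m2m_queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
				unsigned int *num_planes, unsigned int sizes[],
				struct device *alloc_devs[])
{
	/* one plane of raw PCM per buffer */
	*num_planes = 1;
	sizes[0] = ALSA_M2M_PERIOD_BYTES;
	return 0;
}

static void alsa_m2m_buf_queue(struct vb2_buffer *vb)
{
	/* hand the buffer over to the ASRC/DMA engine here */
}

static const struct vb2_ops alsa_m2m_vb2_ops = {
	.queue_setup	= alsa_m2m_queue_setup,
	.buf_queue	= alsa_m2m_buf_queue,
};

static int alsa_m2m_init_queue(struct vb2_queue *q, struct device *dev)
{
	q->type			= ALSA_M2M_BUF_TYPE;
	q->io_modes		= VB2_MMAP | VB2_DMABUF;
	q->ops			= &alsa_m2m_vb2_ops;
	q->mem_ops		= &vb2_dma_contig_memops;
	q->buf_struct_size	= sizeof(struct vb2_buffer);
	q->dev			= dev;

	return vb2_core_queue_init(q);
}

The part that does not exist, and that would be the real work, is the new
ALSA uAPI sitting where videobuf2-v4l2.c sits today (the REQBUFS/QBUF/DQBUF
equivalents), which is exactly where the duplication you mention would show
up.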

It's just a third option, and it can be useful if there is a strong desire
to keep the uAPI for this functionality entirely within the ALSA subsystem,
but you want to reuse the streaming I/O functionality that the videobuf2
core provides.

If the decision is that it is fine to use the V4L2 uAPI for this type
of audio functionality through a /dev/v4l-audioX device, then just ignore
this option and use V4L2.
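
For completeness, the userspace side of that V4L2 route would just be the
ordinary V4L2 mem2mem ioctl sequence. A minimal sketch, assuming the buffer
types proposed in this series (they are not in mainline videodev2.h, so this
only builds against a kernel with the RFC applied) and an illustrative device
node name:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_requestbuffers req;
	int type;
	int fd = open("/dev/v4l-audio0", O_RDWR);

	if (fd < 0)
		return 1;

	/*
	 * Set the source (OUTPUT) and sink (CAPTURE) formats with
	 * VIDIOC_S_FMT using the new v4l2_audio_format, then allocate
	 * buffers on each queue.
	 */
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;	/* proposed in this RFC */
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	/* ... and the same for V4L2_BUF_TYPE_AUDIO_CAPTURE ... */

	/*
	 * From here on it is the usual m2m flow: queue filled OUTPUT
	 * buffers and empty CAPTURE buffers with VIDIOC_QBUF, start both
	 * queues, and collect the converted audio with VIDIOC_DQBUF on
	 * the CAPTURE queue.
	 */
	type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	ioctl(fd, VIDIOC_STREAMON, &type);
	type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	return 0;
}

That is the same model every existing V4L2 m2m device (and the vim2m test
driver) already uses, which is why the API is such a good fit here.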

Regards,

	Hans

