[RFC PATCH v2 0/7] Add audio support in v4l2 framework

Hans Verkuil hverkuil at xs4all.nl
Wed Aug 2 22:28:13 AEST 2023


On 02/08/2023 14:02, Shengjiu Wang wrote:
> On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai <tiwai at suse.de> wrote:
>>
>> On Wed, 02 Aug 2023 09:32:37 +0200,
>> Hans Verkuil wrote:
>>>
>>> Hi all,
>>>
>>> On 25/07/2023 08:12, Shengjiu Wang wrote:
>>>> Audio signal processing has a memory-to-memory requirement
>>>> similar to video.
>>>>
>>>> This patch adds that support to the v4l2 framework: it defines
>>>> the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and
>>>> V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format,
>>>> for the audio use case.
>>>>
>>>> The created audio device is named "/dev/audioX".
>>>>
>>>> It also adds memory-to-memory support for two kinds of i.MX ASRC
>>>> modules.
>>>
>>> Before I spend time on this: are the audio maintainers OK with doing
>>> this in V4L2?
>>>
>>> I do want to have a clear statement on this as it is not something I
>>> can decide.
>>
>> Well, I personally don't mind having some audio capability in the v4l2
>> layer.  But the only uncertain thing for now is whether this is a
>> must-have or not.
>>
> 
> Thanks, I am also not sure about this.  I am also confused about why
> there is no m2m implementation for audio in the kernel.  Audio has
> decoder, encoder and post-processing stages similar to video.
> 
>>
>> IIRC, the implementation on the sound driver side was never done just
>> because there was no similar implementation?  If so, and if an
>> extension to the v4l2 core layer is needed, shouldn't the other
>> possible route be considered more seriously?
>>
> 
> Actually, I'd like someone to point me to that other route. I'd like to
> try it.
> 
> The reason I chose to extend v4l2 for this audio usage is that v4l2
> looks best suited for this audio m2m implementation.  v4l2 is designed
> for m2m usage.  If we need to implement another 'route', I don't think
> it can do better than v4l2.
> 
> I would appreciate it if someone could share their ideas or workable
> solutions.  And please don't ignore my request or my patch.

To give a bit more background: if it is decided to use the v4l API for this
(and I have no objection to that from my side, since API/framework-wise it is
a good fit), then there are a number of things that need to be done to get
this into the media subsystem:

- documentation for the new uAPI
- add support for this to v4l2-ctl
- add v4l2-compliance tests for the new device
- highly desirable: have a virtual driver (similar to vim2m) that supports this;
  it could be as simple as just copying input to output (see the device_run()
  sketch below). This helps regression testing.
- it might need media controller support as well. TBD.

None of this is particularly complex, but taken all together it is a fair
amount of work that also needs a lot of review time from our side.
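
To give an idea of the scale of such a virtual driver, below is a rough,
untested sketch of what its copy-only ->device_run() could look like, modelled
on vim2m. The audio_m2m_ctx structure and its dev/fh members are made up for
the example; the v4l2_m2m_*() and vb2_*() helpers are the existing
mem2mem/videobuf2 APIs. A real driver would normally defer the "processing" to
a work item (as vim2m does) and call v4l2_m2m_job_finish() from there:

#include <linux/minmax.h>
#include <linux/string.h>
#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-v4l2.h>

/* Copy-only "processing": source (OUTPUT) buffer -> destination (CAPTURE) buffer. */
static void audio_m2m_device_run(void *priv)
{
	struct audio_m2m_ctx *ctx = priv;	/* hypothetical driver context */
	struct vb2_v4l2_buffer *src, *dst;
	unsigned long bytes;

	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);

	/* Copy as much payload as fits into the capture buffer. */
	bytes = min(vb2_get_plane_payload(&src->vb2_buf, 0),
		    vb2_plane_size(&dst->vb2_buf, 0));
	memcpy(vb2_plane_vaddr(&dst->vb2_buf, 0),
	       vb2_plane_vaddr(&src->vb2_buf, 0), bytes);
	vb2_set_plane_payload(&dst->vb2_buf, 0, bytes);
	v4l2_m2m_buf_copy_metadata(src, dst, true);

	/* Return both buffers and let the m2m core schedule the next job. */
	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
	v4l2_m2m_buf_done(src, VB2_BUF_STATE_DONE);
	v4l2_m2m_buf_done(dst, VB2_BUF_STATE_DONE);
	v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
}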

I want to add one more option to the mix: drivers/media/v4l2-core/v4l2-mem2mem.c
is the main m2m framework, but it relies heavily on the videobuf2 framework for
the capture and output queues.

The core vb2 implementation in drivers/media/common/videobuf2/videobuf2-core.c
is independent of V4L2 and can be used by other subsystems (in our case, it is
also used by the DVB API). One possibility would be to create an ALSA version of
v4l2-mem2mem.c that uses the core vb2 code with an ALSA uAPI on top.

So in drivers/media/common/videobuf2/ you would have a videobuf2-alsa.c alongside
the already existing videobuf2-v4l2.c and videobuf2-dvb.c.
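
To make that a bit more concrete, here is a minimal, untested sketch of how
such a hypothetical videobuf2-alsa.c (or its first user) could set up a
vb2_queue directly on top of the core, much like the DVB side does, without
going through videobuf2-v4l2.c. The vb2_alsa_* names and the buffer size are
invented for the example; vb2_core_queue_init(), the vb2_queue fields and the
vb2_ops callbacks are the existing core API, and at this level the queue type
is just an opaque, driver-chosen non-zero tag:

#include <linux/mutex.h>
#include <media/videobuf2-core.h>
#include <media/videobuf2-vmalloc.h>

/* Hypothetical per-stream state for an ALSA wrapper around the vb2 core. */
struct vb2_alsa_stream {
	struct vb2_queue queue;
	struct mutex lock;
};

static int vb2_alsa_queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
				unsigned int *num_planes, unsigned int sizes[],
				struct device *alloc_devs[])
{
	/* One plane of raw PCM per buffer; the size is arbitrary here. */
	*num_planes = 1;
	sizes[0] = 64 * 1024;
	return 0;
}

static void vb2_alsa_buf_queue(struct vb2_buffer *vb)
{
	/* Hand the buffer over to the (hypothetical) ALSA m2m job queue. */
}

static const struct vb2_ops vb2_alsa_qops = {
	.queue_setup	= vb2_alsa_queue_setup,
	.buf_queue	= vb2_alsa_buf_queue,
};

/* 's' is assumed to be zero-initialized by the caller. */
static int vb2_alsa_init_stream(struct vb2_alsa_stream *s, struct device *dev)
{
	mutex_init(&s->lock);

	s->queue.type = 1;		/* opaque to the core, must be non-zero */
	s->queue.io_modes = VB2_MMAP | VB2_DMABUF;
	s->queue.dev = dev;
	s->queue.ops = &vb2_alsa_qops;
	s->queue.mem_ops = &vb2_vmalloc_memops;
	s->queue.buf_struct_size = sizeof(struct vb2_buffer);
	s->queue.lock = &s->lock;

	/* vb2_core_queue_init() is the V4L2-independent entry point. */
	return vb2_core_queue_init(&s->queue);
}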

Perhaps parts of v4l2-mem2mem.c could be reused in that case as well, but I am
not sure it is worth the effort. I suspect that copying it to an alsa-mem2mem.c
and adapting it for ALSA is easiest if you want to go that way.

Regards,

	Hans

