[PATCH 2/2] ftrace: prevent preemption in perf_ftrace_function_call()

王贇 yun.wang at linux.alibaba.com
Wed Oct 13 12:45:45 AEDT 2021



On 2021/10/12 7:20 PM, Peter Zijlstra wrote:
> On Tue, Oct 12, 2021 at 01:40:31PM +0800, 王贇 wrote:
> 
>> diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
>> index 6aed10e..33c2f76 100644
>> --- a/kernel/trace/trace_event_perf.c
>> +++ b/kernel/trace/trace_event_perf.c
>> @@ -441,12 +441,19 @@ void perf_trace_buf_update(void *record, u16 type)
>>  	if (!rcu_is_watching())
>>  		return;
>>
>> +	/*
>> +	 * Prevent the CPU from changing from here on.
>> +	 * RCU must be watching if the task was migrated
>> +	 * and rescheduled.
>> +	 */
>> +	preempt_disable_notrace();
>> +
>>  	if ((unsigned long)ops->private != smp_processor_id())
>> -		return;
>> +		goto out;
>>
>>  	bit = ftrace_test_recursion_trylock(ip, parent_ip);
>>  	if (bit < 0)
>> -		return;
>> +		goto out;
>>
>>  	event = container_of(ops, struct perf_event, ftrace_ops);
>>
> 
> This seems rather daft, wouldn't it be easier to just put that check
> under the recursion thing?

If the condition matches, an extra lock/unlock pair is introduced, but
I guess that's acceptable since that case seems unlikely to happen :-P

Will move the check under the recursion lock in v2.
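
Roughly what I have in mind for v2 (a sketch only, not the actual
patch, assuming the 1/2 patch of this series makes
ftrace_test_recursion_trylock() disable preemption, so that
smp_processor_id() is stable while the lock is held):

static void
perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	struct perf_event *event;
	int bit;

	if (!rcu_is_watching())
		return;

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;

	/*
	 * Preemption is disabled while the recursion lock is held,
	 * so the CPU cannot change under us; a mismatch only costs
	 * the extra lock/unlock, which should be rare.
	 */
	if ((unsigned long)ops->private != smp_processor_id())
		goto out;

	event = container_of(ops, struct perf_event, ftrace_ops);

	/* ... buffer setup and perf_trace_buf_submit() elided ... */
out:
	ftrace_test_recursion_unlock(bit);
}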

Regards,
Michael Wang


