[PATCH linux dev-4.7 v3 2/2] ARM: dts: Enable checkstop and cooling gpio keys

Joel Stanley joel at jms.id.au
Fri Apr 21 12:14:17 AEST 2017


Hi Brad,

On Thu, Apr 20, 2017 at 11:54 PM, Brad Bishop
<bradleyb at fuzziesquirrel.com> wrote:
>>> Each gpio will have an application waiting for its state to change.  My
>>> concern was waking up x number of applications every time any gpio changes
>>> state.  Is that a valid concern?
>>
>> How many applications do we expect to be reading the evdev? What are
>> our expected interrupt rates? What are the worst expected cases?
>
> For Witherspoon/Zaius or any OpenBMC platform?  I obviously can’t say for the
> latter case.  On bigger/enterprise systems I can see on the order of 10 to 100
> applications looking at these.  I don’t have a good sense of how often.
>
> On Witherspoon/Zaius, sure, we only have a handful of applications and the
> interrupt rate should be very infrequent.
>
> I think you typically see all the gpio keys mapped to a single device because
> the usual thing to do is have a single application (Xorg) treat it like a keyboard.

I've been following the discussion you've been having. I went to merge
the patches today, but first decided to satisfy myself that this was
the right way to go. I didn't arrive at the same conclusion as you and
Andrew.

The userspace use cases you've presented would be served well by a
single node. We have the flexibility to revisit this decision if
someone builds a machine with 100 GPIOs they want to monitor; at that
point I think we would also revisit the decision to have 100 userspace
programs.
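
To put that another way: with a single node each application opens the
shared event device and ignores key codes it doesn't own, so no special
library code is needed. A rough sketch (the device path and KEY_PROG1
code are made-up placeholders, not values from this series):

/*
 * Minimal sketch of a consumer on a shared gpio-keys evdev node.
 * The device path and key code are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
	struct input_event ev;
	int fd = open("/dev/input/event0", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		/* Skip key events that belong to other applications. */
		if (ev.type != EV_KEY || ev.code != KEY_PROG1)
			continue;
		printf("checkstop %s\n", ev.value ? "asserted" : "deasserted");
	}

	close(fd);
	return 0;
}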

By having separate nodes, you're creating a new instance of the kernel
driver for each node.

The driver and the binding are designed to support your use case:
multiple key events generated by the hardware.
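
For this series that would mean a single node along these lines (the
GPIO selectors and key codes below are placeholders, not the values
from your patch):

        gpio-keys {
                compatible = "gpio-keys";

                checkstop {
                        label = "checkstop";
                        gpios = <&gpio 0 GPIO_ACTIVE_LOW>;  /* placeholder */
                        linux,code = <KEY_PROG1>;           /* placeholder */
                };

                cooling {
                        label = "cooling";
                        gpios = <&gpio 1 GPIO_ACTIVE_LOW>;  /* placeholder */
                        linux,code = <KEY_PROG2>;           /* placeholder */
                };
        };

Any distinct codes will do; gpio-keys reports whatever linux,code says,
and the at91 example below just uses raw numbers.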

Finally, I looked at the other machines in the tree. None of them has
separate gpio-keys nodes. For example:

arch/arm/boot/dts/armada-xp-netgear-rn2120.dts:

        gpio-keys {
                compatible = "gpio-keys";
                pinctrl-0 = <&power_button_pin &reset_button_pin>;
                pinctrl-names = "default";

                power-button {
                        label = "Power Button";
                        linux,code = <KEY_POWER>;
                        gpios = <&gpio0 27 GPIO_ACTIVE_HIGH>;
                };

                reset-button {
                        label = "Reset Button";
                        linux,code = <KEY_RESTART>;
                        gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
                };
        };

arch/arm/boot/dts/at91sam9261ek.dts:
        gpio_keys {
                compatible = "gpio-keys";

                button_0 {
                        label = "button_0";
                        gpios = <&pioA 27 GPIO_ACTIVE_LOW>;
                        linux,code = <256>;
                        wakeup-source;
                };

                button_1 {
                        label = "button_1";
                        gpios = <&pioA 26 GPIO_ACTIVE_LOW>;
                        linux,code = <257>;
                        wakeup-source;
                };

                button_2 {
                        label = "button_2";
                        gpios = <&pioA 25 GPIO_ACTIVE_LOW>;
                        linux,code = <258>;
                        wakeup-source;
                };

                button_3 {
                        label = "button_3";
                        gpios = <&pioA 24 GPIO_ACTIVE_LOW>;
                        linux,code = <259>;
                        wakeup-source;
                };
        };

Cheers,

Joel



>
> We aren’t structured that way, so rethinking the usual approach seems reasonable.
> Another reason I went for per-gpio devices is that it prevents applications from
> reacting to each other’s events without any special library code.
>
> I’m not saying there aren’t disadvantages to this approach - I just don’t know what
> they are.
>
>>
>> It's hard to judge without knowing the numbers, but considering the
>> chips we run on I agree we should generally favour performance if
>> design is getting in the way. But to make that trade off we should be
>
> Again, what exactly is being traded off?
>
>> clear on the performance numbers.
>>
>> Andrew

