/etc/migration.d
Anton Kachalov
rnouse at google.com
Fri Oct 23 03:19:20 AEDT 2020
Hello,
any objections to a distro feature flag covering the root vs. non-root
configs & code?
Thanks.
On Tue, 20 Oct 2020 at 13:22, Anton Kachalov <rnouse at google.com> wrote:
> Hello,
>
> so, for this specific case of migrating from the root "space" to
> unprivileged users, I'm currently settling on a simple idea: guard the
> config files and the compile-time chunks of code with a distro feature
> flag. The flag would be enabled for the qemuarm target first and then
> enabled iteratively across other platforms once they are ready. Rolling
> back from non-root permissions to root is painless and easy to achieve.
> No actual migration scripts should be required, just config changes.
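A minimal sketch of how such a distro feature flag could look in BitBake. The file location, flag name, and PACKAGECONFIG option are all illustrative assumptions, and this uses the old-style "_append" override syntax current at the time:

```conf
# conf/distro/include/phosphor-nonroot.inc (hypothetical location)
# Turn the flag on for qemuarm only; other machines keep the root-only setup
# until they are ready.
DISTRO_FEATURES_append_qemuarm = " nonroot-users"

# In an affected recipe, gate the unprivileged config / code on the flag:
PACKAGECONFIG_append = " ${@bb.utils.contains('DISTRO_FEATURES', 'nonroot-users', 'nonroot', '', d)}"
```

Rolling back would then just mean removing the flag from DISTRO_FEATURES, matching the "no migration scripts, just config changes" point above.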
>
> On Fri, 16 Oct 2020 at 23:01, Anton Kachalov <rnouse at google.com> wrote:
>
>> Hello, Patrick.
>>
>> On Fri, 16 Oct 2020 at 22:25, Patrick Williams <patrick at stwcx.xyz> wrote:
>>
>>> On Wed, Oct 14, 2020 at 08:47:57PM +0200, Anton Kachalov wrote:
>>> > With the move from a root-only environment to unprivileged users'
>>> > space, we need to ensure a smooth transition. To achieve that we need
>>> > a mechanism for one-shot per-package scripts that would take care of
>>> > the migration. That's not only about groups & owners, but a general
>>> > approach. It's similar to firstboot, but has a different purpose.
>>> >
>>> > I'm going to prototype a robust / naive solution: start a service
>>> > before everything else in the system with a condition (non-empty
>>> > /etc/migration.d) and iterate through all the files there. Each
>>> > script has to run at least with "set -e" to bail out on failures.
>>> > If a script succeeds, it will be removed.
>>> >
>>> > The tricky part is: what if a script fails? Keep it, ignore the
>>> > failure, proceed with the others and then boot the system? Or run
>>> > the other scripts as well and then enter some "failure state"?
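A minimal sketch of the runner described above, as a POSIX shell function. The directory layout and the keep-on-failure policy are illustrative assumptions, not a settled design:

```shell
#!/bin/sh
# Hypothetical one-shot migration runner: iterate /etc/migration.d in lexical
# order, remove each script that succeeds, keep the ones that fail.
run_migrations() {
    dir=${1:-/etc/migration.d}
    failed=0
    for script in "$dir"/*; do
        [ -x "$script" ] || continue     # skip non-executables and empty glob
        if "$script"; then
            rm -f "$script"              # success: never run it again
        else
            echo "migration failed: $script" >&2
            failed=1                     # keep it for inspection / retry
        fi
    done
    return "$failed"
}
```

Each script is still expected to use "set -e" internally so a partial failure inside it aborts that script; the runner's nonzero exit could then feed a "failure state" decision.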
>>>
>>> Hi Anton,
>>>
>>> I have some high-level questions / ideas about this.
>>>
>>> * Would these migrations be restricted to just useradd/groupadd
>>> operations? Or are you trying to create a general framework for
>>> "upgrade scripts"?
>>>
>>
>> This might be a general framework.
>>
>>
>>>
>>> * Have you looked at any existing support by Yocto or systemd to provide
>>> what you need? Yocto has USERADD_PACKAGES, postinst_intercept.
>>> Systemd has firstboot. There might be other mechanisms I'm not
>>> remembering as well. (I guess you mentioned firstboot.) There is a
>>> hacky override to install a "@reboot" directive in the crontab.
>>>
>>
>> afaik, systemd's firstboot only runs special units right after
>> installation. Once the system is configured, the firstboot units aren't
>> executed anymore.
>> I've started this thread to find possible solutions.
>> The postinst chunks are executed during image creation (as part of the
>> rpm / deb packages' scripts).
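For comparison, the boot-time service with the non-empty /etc/migration.d condition from the original proposal maps fairly directly onto a systemd oneshot unit. This is only a sketch; the unit name and runner path are illustrative:

```ini
# /lib/systemd/system/migration.service (hypothetical)
[Unit]
Description=Run one-shot migration scripts
# Only start when there is actually something to migrate:
ConditionDirectoryNotEmpty=/etc/migration.d
# Run very early, before the rest of the system comes up:
DefaultDependencies=no
After=local-fs.target
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/run-migrations

[Install]
WantedBy=sysinit.target
```

Unlike systemd-firstboot, this condition holds on every boot until the directory is empty, so it also covers migrations delivered by later firmware updates.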
>>
>>
>>>
>>> * How long would a "migration" be kept around for? Are we expecting
>>> that packages provide them forever?
>>>
>>
>> That is a good question, because we don't know how old the firmware
>> being upgraded is. I suppose something like one or two (or more)
>> release cycles. Beyond that, the update process should either go
>> through an intermediate firmware version or force the non-volatile
>> storage to be wiped. Regardless of the migration scripts, we might have
>> incompatibilities between two releases that require a cleanup of the
>> NV storage (the overlayfs backing partition).
>>
>>
>>>
>>> * How do we handle downgrades? Some systems are set up with a "golden
>>> image" which is locked at manufacturing. Maybe simple
>>> useradd/groupadd calls are innately backwards compatible but I worry
>>> about a general framework falling apart.
>>>
>>
>> In general, that's an issue. Golden-image downgrades should be allowed
>> within a compatible release branch (without wiping data). As above,
>> golden images might be incompatible and then wouldn't allow downgrades.
>>
>> The particular migration from root-only users to unprivileged users
>> should be one-way, without wiping data. If a downgrade is requested,
>> the data will have to be wiped.
>>
>>
>>>
>>> * Is there some mechanism we should do to run the migrations as part of
>>> the upgrade process instead of waiting to the next boot? The
>>> migrations could be included in the image tarball and thus be signed.
>>> That would save time on reboots for checking if the migrations are
>>> done.
>>>
>>
>> Yes, it could be done as a set of scripts run during the update
>> process. That is one of the possible approaches, and it could also work
>> for downgrades. I'm only worried about the effort of supporting
>> downgrades from a random version to a random version. The least effort
>> for incompatible upgrades / downgrades is to keep a special transition
>> firmware that allows downgrading from the current Golden version to the
>> previous Golden version from an incompatible branch. For upgrades, the
>> latest version of the transition firmware might not be golden. This
>> will require a separate repo with an auto-generated set of scripts used
>> to build the transition firmwares.
>>
>>
>>
>>>
>>> * Rather than have a single migration script that runs before everything
>>> else (and is thus serial), you might create a template service
>>> (phosphor-migration-@.service) that can be depended on by the services
>>> needing the migration results. (i.e. service foo depends on
>>> migration-foo).
>>>
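A sketch of what such a template unit might look like. The names and the cleanup step are illustrative; %i expands to the instance name, so foo.service could declare Requires= and After= on phosphor-migration-@foo.service:

```ini
# /lib/systemd/system/phosphor-migration-@.service (hypothetical template)
[Unit]
Description=One-shot migration step %i
# Skip cleanly once this particular migration has been consumed:
ConditionPathExists=/etc/migration.d/%i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/migration.d/%i
# Remove the script on success so it never runs again:
ExecStartPost=/bin/rm -f /etc/migration.d/%i
```

With this shape, systemd runs independent migrations in parallel and each dependent service waits only on the migrations it actually needs.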
>>
>> Since the migrations are one-off, it might be safer to run them
>> serially, one by one.
>>
>>
>>>
>>> * In a follow-up email you mentioned something about hashing. I was
>>> going to ask how you know when a particular migration has been
>>> executed. Maybe some trick of recording hash values in the RWFS
>>> could prevent multiple executions.
>>>
>>
>> We can track the succeeded scripts by touching a file in a directory
>> like /var/lib/migration (e.g. create a file named after the SHA sum of
>> the executed script).
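The marker-file idea could be sketched like this in shell; the state directory and the choice of SHA-256 are assumptions:

```shell
#!/bin/sh
# Hypothetical bookkeeping for executed migrations: touch a marker file named
# after the SHA-256 of the script, so a script is skipped even if it somehow
# reappears after being run. Paths are illustrative.
STATE_DIR=${STATE_DIR:-/var/lib/migration}

already_ran() {                        # $1: migration script path
    sum=$(sha256sum "$1" | cut -d' ' -f1)
    [ -e "$STATE_DIR/$sum" ]
}

mark_done() {                          # $1: migration script path
    sum=$(sha256sum "$1" | cut -d' ' -f1)
    mkdir -p "$STATE_DIR"
    : > "$STATE_DIR/$sum"              # create an empty marker file
}
```

A side effect of hashing content rather than names: editing a failed script produces a new hash, so the fixed version is treated as not yet run.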
>>
>>
>>>
>>> --
>>> Patrick Williams
>>>
>>