[Simplicity] Questions about Simplicity
roconnor at blockstream.io
Sat Dec 29 09:23:30 AEDT 2018
This ties in with some of the objectives of using type inference. Our goal
with type inference is to determine the principal (i.e. "minimal") type of
the expression. With the principal type in hand we compute the minimal
amount of data required by witness values in order for the computation to
proceed (at least minimal from this static analysis point of view). This
eliminates witness size malleability, since a third party is unable to pad
the witness values with unused junk. Because type inference proceeds on
the pruned expression (with untaken branches eliminated), the principal
type inferred may even be smaller than the principal type of the original,
unpruned expression.
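To make the witness-size point concrete, here is a small sketch (in
Python, not the Simplicity reference code) of how a type assignment
determines the exact number of witness bits; the padding rule for sums (a
1-bit tag plus room for the larger branch) is my reading of the intended
witness encoding:

```python
# A sketch (not the Simplicity reference code) of Simplicity-style types
# and the number of witness bits a value of each type occupies.

UNIT = ("unit",)

def bit_size(ty):
    tag = ty[0]
    if tag == "unit":
        return 0                                   # unit carries no bits
    if tag == "sum":
        # a 1-bit tag, padded to make room for the larger branch
        return 1 + max(bit_size(ty[1]), bit_size(ty[2]))
    if tag == "prod":
        return bit_size(ty[1]) + bit_size(ty[2])   # both components
    raise ValueError("unknown type tag: %r" % (tag,))

BIT = ("sum", UNIT, UNIT)      # the type 2 = 1 + 1: exactly one bit
WORD2 = ("prod", BIT, BIT)     # the type 2^2: two bits
```

With the principal (smallest) type in hand, bit_size fixes the exact
number of witness bits to read, leaving no room for a third party to pad.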
With the current design, Simplicity doesn't necessarily infer the
principal type of the expression. Due to "sharing malleability", third
parties may, in some cases, be able to increase sharing so as to
manipulate the monomorphic type inference into unifying two inferred type
annotations that aren't supposed to be unified. This is largely a
theoretical issue at the moment, and I don't think it is a real problem
for practical expressions. In the few examples I've looked at there has
been no opportunity for sharing malleability (sharing malleability may
even end up decreasing costs in practice, since it reduces the program
size, which is the most significant cost type). However, using
polymorphic type inference would entirely eliminate this concern because,
even when subexpressions are shared, they can still end up with different
type instances when used in different contexts.
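As a toy illustration (not Simplicity code) of the mechanism, the sketch
below shows why a shared subexpression under monomorphic inference forces
all of its use sites to unify, while polymorphic inference gives each use
site a fresh instance; the type names are invented for the example:

```python
# Toy unifier. Convention: lowercase strings are type variables,
# uppercase strings are concrete types.

def is_var(ty):
    return isinstance(ty, str) and ty.islower()

def resolve(ty, subst):
    # Follow the substitution chain to an unbound variable or a
    # concrete type.
    while is_var(ty) and ty in subst:
        ty = subst[ty]
    return ty

def unify(a, b, subst):
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return True
    if is_var(a):
        subst[a] = b
        return True
    if is_var(b):
        subst[b] = a
        return True
    return False          # two different concrete types cannot unify

# Monomorphic: a shared subexpression has ONE type variable "t", so a
# use at BOOL and a use at WORD must unify BOOL with WORD -- and fail.
subst = {}
mono_ok = unify("t", "BOOL", subst) and unify("t", "WORD", subst)

# Polymorphic: each use site gets a fresh instance of the shared
# subexpression's type ("t1", "t2"), so both uses succeed.
subst = {}
poly_ok = unify("t1", "BOOL", subst) and unify("t2", "WORD", subst)

print(mono_ok, poly_ok)   # -> False True
```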
As mentioned before, the problem is that polymorphic type inference is
exponential time in the worst case. If we try to mitigate this by using
explicit type annotations, then we are no longer guaranteed to infer the
principal type, and we are open to type annotation malleability by third
parties that might be used to bloat witness values. We could commit to
the type annotations, but again, we wouldn't be inferring the principal
type of pruned expressions, because we do not know which branches will be
pruned. We would also leak the types of the pruned expression, both
impacting privacy and possibly spending more computational resources than
needed during evaluation. All that said, being able to commit to some
explicit types for subexpressions is near the top of my list of new
features that Simplicity might end up requiring.
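The worst-case blowup can be seen with the classic pairing example: each
nested let that duplicates the previous result doubles the size of the
principal type, even though the program itself grows only linearly. A
small sketch:

```python
# Classic Hindley-Milner worst case (a sketch, not Simplicity code):
# "dup" has the polymorphic type a -> (a, a), so each application
# doubles the size of the result's type.

def dup(ty):
    return ("prod", ty, ty)

def leaves(ty):
    # Count the leaves of a type, i.e. how large the type prints.
    if isinstance(ty, tuple):
        return leaves(ty[1]) + leaves(ty[2])
    return 1

# Five nested lets: let p1 = dup p0 in let p2 = dup p1 in ...
ty = "a"
for _ in range(5):
    ty = dup(ty)

print(leaves(ty))   # -> 32, i.e. 2**5 leaves from a 5-step program
```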
There may be virtue in metering the cost of the polymorphic type inference
and adding that to the weight of the witness data. If the stated total
cost for polymorphic type inference isn't exactly the amount actually
used, the transaction would be invalid, and thus the stated cost would
not be subject to malleability. On the other hand, because the cost of
running the polymorphic type inference would be stated upfront, if it is
too large the transaction could also be rejected. This solution involves
studying the polymorphic type inference algorithm, in the context of
Simplicity, and instrumenting it in a suitable way to assign a cost to
executing the algorithm. We also need to choose a suitable representation
of the output, and ensure we can compute a reasonable bound on the cost
of the polymorphic inference at commitment time. All of this seems fairly
reasonable, so long as the total cost of polymorphic type inference isn't
vastly larger than in the current monomorphic design.
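As a rough sketch of what such instrumentation might look like (the type
representation and cost model here are hypothetical, not Simplicity's), a
unifier can thread a step counter so that the exact number of steps used
is available to compare against a stated cost:

```python
# Hypothetical metered unifier: every constraint processed costs one
# step, so the total is deterministic and can be checked against a
# cost stated up front.

def is_var(ty):
    return isinstance(ty, str)

def resolve(ty, subst):
    while is_var(ty) and ty in subst:
        ty = subst[ty]
    return ty

def occurs(v, ty, subst):
    ty = resolve(ty, subst)
    if is_var(ty):
        return ty == v
    return any(occurs(v, arg, subst) for arg in ty[1:])

def unify(a, b, subst, steps=0):
    """Return (success, substitution, steps-used)."""
    steps += 1                      # one step per constraint processed
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return True, subst, steps
    if is_var(a) and not occurs(a, b, subst):
        subst[a] = b
        return True, subst, steps
    if is_var(b) and not occurs(b, a, subst):
        subst[b] = a
        return True, subst, steps
    if not is_var(a) and not is_var(b) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            ok, subst, steps = unify(x, y, subst, steps)
            if not ok:
                return False, subst, steps
        return True, subst, steps
    return False, subst, steps      # mismatch or occurs-check failure
```

If a transaction stated a total cost different from the steps actually
counted, it would simply be invalid, so the stated cost is not malleable.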
On Tue, Dec 25, 2018 at 10:43 PM ZmnSCPxj <ZmnSCPxj at protonmail.com> wrote:
> Good morning Russell,
> > * I have an open problem that unfortunately isn't written up in detail
> yet, but essentially asks whether Simplicity's type inference can use
> polymorphic type inference instead of monomorphic type inference without
> opening up Denial of Service attacks. Polymorphic type inference takes
> exponential time in the worst case (as opposed to monomorphic type
> inference, which can be done in linear (or more realistically
> quasi-linear) time); however, polymorphic type inference also allows for
> exponentially smaller terms in some cases too. So if polymorphic type
> inference only takes exponentially more time in those cases where we use
> exponentially smaller terms, then there is no real cost to using it and
> many benefits that go beyond the exponentially smaller amounts of
> bandwidth used in some cases.
> My PL is old and out-of-date, but if my stale local cache is correct, even
> if polymorphic type inference is potentially exponential, if type
> annotations are provided, polymorphic type verification is linear in
> expression size.
> If so, potentially inference can have some engineered limit imposed by
> consensus, and optional type annotations added to the encoding.
> A Simplicity script would then have to add some annotations if it would
> violate that limit.
> Or simply annotate all polymorphic sub-expressions.