[Skiboot] [PATCH v3 02/10] npu2: Rework NPU data structures for OpenCAPI

Alistair Popple alistair at popple.id.au
Wed Feb 7 15:16:12 AEDT 2018


> Whether you're an OpenCAPI or an NVLink device, you're going to have a 
> device tree node, BARs, an NPU pointer, a SCOM base, a lane mask, and 
> the hw procedure tracking. NVLink has an IODA cache at the NPU level and 
> the pci-virt stuff at the device level. OCAPI has a PHB per device.

As far as I can tell, though, most (if not all) of this init is done
independently in npu2-opencapi.c, so I'm not sure I see the value in just
reusing the data struct definitions. That said, there is some overlap in
the PHY sections, so I can see why things ended up structured this way.
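
For reference, the shape I understand the patch to be creating is roughly
the following. This is a sketch only -- the field names are illustrative
rather than the exact layout in the tree, apart from the nvlink sub-struct,
which is visible in the diff quoted below:

struct npu2_dev {
	/* Common to NVLink and OpenCAPI */
	uint32_t		index;
	struct dt_node		*dt_node;
	struct npu2_bar		bars[2];	/* device BARs */
	struct npu2		*npu;		/* back-pointer to the NPU */
	uint64_t		pl_xscom_base;	/* SCOM base */
	uint32_t		lane_mask;
	/* ... hw procedure tracking state ... */

	enum npu2_dev_type	type;

	/* NVLink-only state (e.g. gpu_bdfn, the pci-virt device) */
	struct npu2_dev_nvlink	nvlink;

	/* OpenCAPI-only: each device carries its own PHB */
	struct phb		phb_ocapi;
};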

> Obviously when we eventually support a mixed NVLink/OCAPI setup on the 
> same NPU we're going to need to share whole-npu data somehow.

Agreed, but I'm guessing some significant code refactoring will be
required to support that, so it might be easier to merge things once we
have explicitly worked out what is generic vs. specific to NVLink/OCAPI,
rather than having to go back later and figure out which fields/hacks
belong to which.

> The hw procedures code is shared and takes lots of npu2_dev, we'd have 
> to refactor all of that. I know you've suggested splitting the stuff 
> that's relevant to hw procedures out of npu2_dev into a separate 
> npu2_brick or npu2_link or something, though that'd also involve a fair 
> bit of code churn.

My suggestions were more in the hope that someone else would refactor all
my code for me :-) I have started looking at refactoring out the phy code,
which will create a clear split between OCAPI and NVLink. It doesn't look
too difficult.
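
To give an idea of where I'm heading with that, something like the
following -- entirely hypothetical names, but this is the sort of struct
I was alluding to with npu2_brick/npu2_link:

/* State the phy/hw procedures actually need, pulled out of npu2_dev
 * so the procedures code is identical for NVLink and OCAPI. */
struct npu2_brick {
	struct npu2	*npu;
	uint32_t	index;
	uint64_t	pl_xscom_base;	/* SCOM base for this brick */
	uint32_t	lane_mask;
	/* procedure tracking, moved verbatim from npu2_dev */
	uint32_t	procedure_number;
	uint32_t	procedure_step;
	uint64_t	procedure_data;
};

The hw procedures would then take a struct npu2_brick * instead of a
struct npu2_dev *, and neither side would need to know about the other's
device type.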

> I'd like to get this series in sooner rather than later especially given 
> the corresponding kernel driver has already been merged, so I'd like to 
> avoid as much code churn as possible unless it's obviously better.

Understood. I will post my refactoring as a patch on top of this series.
I would, however, like to see the following cleaned up:

> >> @@ -1008,7 +1008,7 @@ static int64_t npu2_set_pe(struct phb *phb,
> >>   			   uint8_t fcompare,
> >>   			   uint8_t action)
> >>   {
> >> -	struct npu2 *p = phb_to_npu2(phb);
> >> +	struct npu2 *p;
> >>   	struct npu2_dev *dev;
> >>   	uint64_t reg, val;
> >>   
> >> @@ -1025,13 +1025,19 @@ static int64_t npu2_set_pe(struct phb *phb,
> >>   		return OPAL_UNSUPPORTED;
> >>   
> >>   	/* Get the NPU2 device */
> >> -	dev = npu2_bdf_to_dev(p, bdfn);
> >> +	if (phb->phb_type == phb_type_npu_v2_opencapi) {
> >> +		dev = phb_to_npu2_dev_ocapi(phb);
> >> +		p = dev->npu;
> >> +	} else {
> >> +		p = phb_to_npu2_nvlink(phb);
> >> +		dev = npu2_bdf_to_dev(p, bdfn);
> >> +	}
> >>   	if (!dev)
> >>   		return OPAL_PARAMETER;
> >>   
> >>   	val = NPU2_CQ_BRICK_BDF2PE_MAP_ENABLE;
> >>   	val = SETFIELD(NPU2_CQ_BRICK_BDF2PE_MAP_PE, val, pe_num);
> >> -	val = SETFIELD(NPU2_CQ_BRICK_BDF2PE_MAP_BDF, val, dev->gpu_bdfn);
> >> +	val = SETFIELD(NPU2_CQ_BRICK_BDF2PE_MAP_BDF, val, dev->nvlink.gpu_bdfn);
> >>   
> >>   	if (!NPU2DEV_BRICK(dev))
> >>   		reg = NPU2_REG_OFFSET(NPU2_STACK_STCK_0 + dev->index/2,
> >> @@ -1043,7 +1049,7 @@ static int64_t npu2_set_pe(struct phb *phb,
> >>   	npu2_write(p, reg, val);
> >>   	val = NPU2_MISC_BRICK_BDF2PE_MAP_ENABLE;
> >>   	val = SETFIELD(NPU2_MISC_BRICK_BDF2PE_MAP_PE, val, pe_num);
> >> -	val = SETFIELD(NPU2_MISC_BRICK_BDF2PE_MAP_BDF, val, dev->gpu_bdfn);
> >> +	val = SETFIELD(NPU2_MISC_BRICK_BDF2PE_MAP_BDF, val, dev->nvlink.gpu_bdfn);
> >>   	reg = NPU2_REG_OFFSET(NPU2_STACK_MISC, NPU2_BLOCK_MISC,
> >>   			      NPU2_MISC_BRICK0_BDF2PE_MAP0 + (dev->index * 0x18));
> >>   	p->bdf2pe_cache[dev->index] = val;

All the if (nvlink) blocks make this function essentially two separate
functions, one for NVLink and one for OCAPI. There is a small amount of
common code, but this is the only function shared between NVLink and
OCAPI, so it would make my refactoring much easier if the OCAPI version
could just be split out into its own phb op in npu2-opencapi.c, along the
lines of the sketch below. Thanks!
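
Something like this (a rough sketch only -- the argument validation and
register programming would come straight from the existing function, and
the ops table is whatever npu2-opencapi.c already registers):

/* In npu2-opencapi.c */
static int64_t npu2_opencapi_set_pe(struct phb *phb,
				    uint64_t pe_num,
				    uint64_t bdfn,
				    uint8_t bcompare,
				    uint8_t dcompare,
				    uint8_t fcompare,
				    uint8_t action)
{
	struct npu2_dev *dev = phb_to_npu2_dev_ocapi(phb);
	struct npu2 *p = dev->npu;

	/* ... same argument checks as npu2_set_pe() ... */

	/* Program the BDF2PE mapping for this brick, keyed off bdfn
	 * rather than an NVLink GPU BDF. */

	return OPAL_SUCCESS;
}

and then in the OpenCAPI phb ops:

	.set_pe = npu2_opencapi_set_pe,

which would let the NVLink version in npu2.c drop the phb_type check
entirely.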

Regards,

Alistair
