-
srnlrsn humanpropensity Generally FL isn't really DP; DP implies particular measures taken at the collection and storage level that FL isn't intrinsically required to take. The outcome of FL is k-anonymous data, though. Google actually does a great job explaining here - federated.withgoogle.com
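To make the distinction concrete, here is a minimal Python sketch (not Google's implementation; function names and the clip_norm/noise_multiplier parameters are illustrative assumptions) of why plain federated averaging carries no DP guarantee until noise is deliberately added:

```python
import numpy as np

def federated_average(client_updates):
    """Plain FedAvg: aggregates raw client updates. No clipping, no noise,
    so no formal differential-privacy guarantee attaches to the result."""
    return np.mean(client_updates, axis=0)

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1):
    """DP-style aggregation: clip each update's L2 norm, then add Gaussian
    noise calibrated to the clip bound. These are the kinds of extra
    measures DP demands that FL does not intrinsically include."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    aggregate = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=aggregate.shape)
    return (aggregate + noise) / len(client_updates)

updates = [np.random.randn(4) for _ in range(100)]  # toy client updates
print(federated_average(updates))     # aggregate only, but not DP
print(dp_federated_average(updates))  # clipped + noised, DP-style
```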
-
srnlrsn humanpropensity Or models that are roughly analogous to having k-anon data. I think that, unless browser standards change, FL is generally beyond the capabilities of Clean Room companies because of their positioning relative to user devices. This oversimplifies a bit, even for Twitter...
-
srnlrsn humanpropensity Arguably the properties that secure the data inside the clean room against leakage to either side would be DP - noising, obscuring, data minimization, etc... while the outcome would be k-anon - cohorts that are never mapped back to individual users in any way visible to the other side of the room...
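A minimal sketch of those two properties together, assuming a cohort-count release (the threshold k and epsilon values here are illustrative assumptions, not any vendor's actual parameters): DP-style noising inside the room, k-anon cohorts as the outcome.

```python
import numpy as np
from collections import defaultdict

def release_noised_cohorts(user_to_cohort, k=50, epsilon=1.0):
    """Group users into cohorts, suppress small ones (the k-anon outcome),
    and noise the released counts (the DP-style measure inside the room)."""
    cohorts = defaultdict(set)
    for user_id, cohort in user_to_cohort.items():
        cohorts[cohort].add(user_id)
    released = {}
    for label, members in cohorts.items():
        if len(members) < k:
            continue  # too small to release without risking identification
        # Laplace noise on the count: the "noising" half of the claim
        noised = len(members) + np.random.laplace(0.0, 1.0 / epsilon)
        released[label] = max(0, round(noised))
    return released

# Toy usage: 1,000 users in 3 cohorts, plus one tiny cohort that never leaves.
users = {f"user{i}": f"cohort{i % 3}" for i in range(1000)}
users["u_solo"] = "cohort_tiny"  # below k, will be suppressed
print(release_noised_cohorts(users, k=50))
```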
-
srnlrsn humanpropensity In reality I suspect few, if any, clean room companies are actually doing any differential privacy beyond only matching on per-assigned data points. The hope is they are at least not breaching cohort integrity down to individuals for either side; if they are, there is no privacy at all.
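A hedged sketch of that bare-minimum baseline: hashed IDs in, per-segment overlap counts out, with small overlaps suppressed so neither side can resolve individuals. The hashing scheme, salt handling, and min_overlap threshold are all illustrative assumptions, not a real vendor's protocol.

```python
import hashlib

def hash_id(raw_id, salt="room-salt"):
    """Salted hash of a raw identifier; the salt is assumed to live only
    inside the room so neither party can reverse the join offline."""
    return hashlib.sha256((salt + raw_id).encode()).hexdigest()

def match_counts(advertiser_ids, publisher_segments, min_overlap=100):
    """publisher_segments: {segment_label: set of raw IDs}. Returns only
    per-segment overlap sizes, suppressing any overlap small enough that
    releasing it would breach cohort integrity down to individuals."""
    adv = {hash_id(i) for i in advertiser_ids}
    out = {}
    for segment, ids in publisher_segments.items():
        overlap = len(adv & {hash_id(i) for i in ids})
        if overlap >= min_overlap:
            out[segment] = overlap
    return out
```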
-
srnlrsn humanpropensity The question of whether data used in this way constitutes a sale or share or some other legal definition is a different consideration IMO, but it is something that at least some of the clean rooms are trying to use privacy principles to prove out.
-
srnlrsn humanpropensity (This is sort of what I mean: the application, or promise of application, of DP on one side and k-anon on the other meaningfully makes the result cohort targeting, even if clean rooms don't phrase it that way. So if CRs deliver on their privacy promise, it is cohort targeting.)