- Here's a weird question: if you have to survey users on trust, isn't that an admission that algorithms built on user data can't adequately capture what users actually need? Or is it that users' actions don't always follow their desires?
- If users act in ways that run counter to their needs, desires, or health (smoking, bad news, following exes, etc.), doesn't that make any system that bases its reactions on user actions inherently flawed?