This is a note I originally posted to Facebook in response to their “suicide AI” system described in this TechCrunch article:
Below is the original, unedited content:
So the first thing that came to mind when I read this is that it’s going to result in a lot of false positives (anyone familiar with AI knows this). That means Facebook will automatically be sending cops to people’s doors, and there are a lot of problems with that. Here are a couple of examples:
The more I thought about it, the more potentially sinister it became. If Facebook AI is reading everything you post, and they are willing to alert the authorities based on this to prevent suicide, what else are they alerting the authorities to?
Then I realized the real motivation for this technology: marketing.
Facebook is an advertising company. Think for just a moment how vulnerable someone considering suicide is to being sold happiness: in the form of material goods, activities, and of course pills.
Perhaps even worse, consider how vulnerable surviving friends and relatives are in the wake of a completed suicide.
This might sound like wild speculation, but to anyone familiar with the way Facebook willingly and repeatedly violates its users’ trust, it’s hard to imagine why they wouldn’t do exactly this.
Considering all this, and the fact that there’s no way to “opt out” of this system (it has already been reading your posts), I can no longer voluntarily interact with Facebook, and I’ll be leaving the platform permanently soon.
Unfortunately (like the mafia), you can’t just “quit” Facebook. But it’s a good first step. I urge you to read all the linked articles in this post and its comments, and seriously consider whether you should do the same.
I would like to stay in touch with you, and to that end I will be reaching out to all my Facebook friends individually so I can update my private contact book.