Aside from oil, soil, land, or plutonium (analogies people have drawn for data in the past), another analogy is that data is the new uranium. Refine it to empower people. Or refine it too far and make a weapon of mass destruction.
The question is where to draw the line between empowerment and destruction. When does informing become influencing, then predicting, then controlling?
I can share a meme or an article, tweaked in the way a machine learning model suggests, and it becomes a tool for influencing people.
Word choice, punctuation, color, font, images, layout: all of these play a part in an article's persuasiveness, and all of them can be analyzed using data and A/B testing.
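To make this concrete, here is a toy sketch of the kind of A/B test described above, comparing two headline variants of the same article. The numbers are invented for illustration; the statistic is a standard two-proportion z-test.

```python
import math

def ab_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: did variant B persuade more readers than A?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)        # pooled click rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                             # |z| > 1.96 is significant at the 5% level

# Hypothetical experiment: same article, two different headlines
z = ab_z_test(clicks_a=120, n_a=1000, clicks_b=165, n_b=1000)
print(round(z, 2))
```

Run enough of these experiments over wording, layout, and imagery, and the most persuasive combination emerges from the data rather than from any editorial judgment.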
An article made to influence can then be deployed, and predictive models can be trained to estimate whether a reader will change their mind.
When you can predict whether a reader will change their mind based on an article's form and content, is it still influence? Or does it become a form of control, a form of data-driven conditioning?
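The predictive model described above can be sketched as a tiny logistic regression. Everything here is invented for illustration: the features (whether an article uses strongman imagery, whether its tone is emotional) and the labels (whether a reader reported changing their mind) are hypothetical, and a real system would use far richer features.

```python
import math

# Hypothetical training data: each row is (uses_strongman_imagery, emotional_tone);
# the label is 1 if the reader reported changing their mind after reading.
X = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 0)]
y = [1, 1, 0, 0, 1, 0]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Fit a logistic regression with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label                  # gradient of the log loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Predicted probability that a strongman-framed, emotionally toned article sways a reader
p_sway = sigmoid(w[0] + w[1] + b)
print(round(p_sway, 2))
```

Once a model like this scores well, the pipeline closes: generate variants, predict which one sways which reader, and serve it to them.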
This is how today's targeted advertising systems work.
Amazon, for example, can predict what items you will buy before you buy them, and can stock those items on a truck headed to your area before you place an order. It can then bombard you with recommendations pushing you to buy them.
Facebook and Google ads are based on our behavior on the web. They can predict when you are sick or suffering from a backache, then show you ads for paracetamol and chiropractors.
Cambridge Analytica crossed that line entirely.
They used data in the Philippines to find who could be convinced by political propaganda, experimenting to find the memes and taglines most effective at changing opinions. Without giving them too much credit for the election of President Duterte, Cambridge Analytica's parent company, SCL, claims to have found that the Filipino electorate was most easily swayed by a no-nonsense strongman image. This, we know, is exactly how Duterte was branded during the campaign period.
Our data are being collected and used as weapons to manipulate us: to buy products, to get addicted to a platform, or to adopt a certain political viewpoint. Is this acceptable? Or does it take away our freedom to decide? Should we rethink the use of Facebook, a privately owned, profit-oriented platform, as a public space for sharing ideas? Do we need policies that explicitly define what is acceptable and unacceptable work in data science?