Yet, in a mad dash for profit, we are blowing a huge opportunity. Mobile, in its own right, is a bunch of marketing channels. But mobile data, in sum, brings the already formidable raft of data (from shopping, Web behaviors, etc.) to an inflection point. That is, Big Data now has a shot at helping people avoid security and health threats as they happen. What issues will we face? What happens when Marlboro Man meets Big Brother?
A brief review of digital marketing history will give us some clues.
In the 1980s, with help from computers, marketers transformed a personal, ubiquitous, networked device (the telephone) into a new marketing channel. They called it telemarketing. The United States government quickly moved to restrict the practice, with rules that eventually produced the national Do Not Call Registry. Those of you who experienced it know exactly why it became media non grata.
In the mid-’90s came opt-in channels, email in particular.
By 2005, anyone could purchase an email list of people who, for example, had deadly diseases. This was 100% legal, although tricks like “co-reg” -- co-registration, where a single sign-up quietly opted you into many unrelated lists -- tainted all of it.
Now, marketers can upload those email lists to Facebook. Facebook will then build lookalike models from the people who click on the resulting ads, finding millions of strangers who resemble them.
Using advanced modeling and huge amounts of data, any number of big data players could isolate individuals who may be in danger from disease, or mugging, or a tornado -- or anything else where location, lifestyle or behaviors add up to create the DNA for a bad event.
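The kind of scoring described above can be sketched in miniature. This is an illustrative toy only -- every feature name and weight below is made up, and no real platform's pipeline looks this simple -- but it shows the basic mechanic: weighted signals from location, lifestyle, and behavior combined into a single risk probability.

```python
# Toy risk-scoring sketch. All feature names and weights are hypothetical,
# chosen only to illustrate how disparate signals combine into one score.
from math import exp

def risk_score(features, weights, bias=0.0):
    """Combine weighted feature signals into a logistic score in (0, 1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + exp(-z))

# Hand-set weights: how strongly each (made-up) signal suggests danger.
weights = {"in_tornado_path": 3.0, "outdoors_now": 1.5, "smoker_years": 0.05}

# One hypothetical person's signals.
person = {"in_tornado_path": 1.0, "outdoors_now": 1.0, "smoker_years": 20.0}

score = risk_score(person, weights)
if score > 0.9:          # threshold at which a warning might be triggered
    print("send warning")
```

A real system would learn the weights from data rather than set them by hand, but the policy question is the same either way: once the score crosses a threshold, who is obligated to act on it?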
This is not hypothetical: epidemiologists have already used phone movement patterns to help track the spread of Ebola. Should the owners of such data be barred from sending warnings -- or be forced to?
It turns out the U.S. legal system already has something to say about this. From Wikipedia:
"A duty to warn is a concept that arises in the law of torts in a number of circumstances, indicating that a party will be held liable for injuries caused to another, where the party had the opportunity to warn the other of a hazard and failed to do so."
Is a company negligent, then, if it fails to warn someone who might be in danger based on advanced modeling? If so, ignorance might be the strategy of choice for companies that could help. More intriguing still, what if the imperiled person is anonymous? Do we send the warning as an ad, hoping he or she will notice?
So problem #1 is that those with the chutzpah to do this are more likely to be defensive than they are to be magnanimous. Problem #2 is balancing privacy against value. But the most concerning problem is #3: Who will believe the warning?
Without the requisite trust in the source, the value of any information is suspect. The stronger the assertion, the more trust is required. Advertisers have cried wolf too many times, sending self-serving warnings. Today, the Web is littered with clickbait suggesting, for example, I might die of a heart attack if I don’t read the article hiding behind the click.
The problem is, the organizations that have the most data are marching to the beat of money or politics. Who is both trustworthy and data-rich?
The obvious candidates are the big data companies and the government. Welcome to the world of “Minority Report.” Which of those entities do you trust never to confuse their own interests with yours? “Do the right thing” -- for whom?
None of them pass the sniff test. If they know, or almost know, will they help? Will people let them help? Will the justice system make them do it? It seems very possible that the forces of the marketplace will ultimately constrain benevolent use of society’s most powerful new tool.