The whole of CES was buzzing with AI promises, debates, and showcases.
The only thing is, I don’t think we are really talking about artificial intelligence.
When I was a teenager, I devoured books in general, and developed a fascination with science fiction in particular. My favorites were books by Heinlein, Herbert, Clarke and Asimov. The last author is of course famous for penning the three laws of robotics back in 1942, and adding a fourth (the Zeroth Law, listed below first) decades later:
-- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
-- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
-- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
-- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
A robot can only exist with some form of artificial intelligence. In fact, you could argue that any kind of algorithm that is programmed to make choices is a form of AI. This means that programmatic buying, for instance, is AI. But equally, your Roomba vacuum cleaner, Amazon's Alexa, the lane assist in your car and that bot answering your chats on a website are all forms of AI.
But here is the thing. As smart as all of these systems are, I don’t think they are artificial intelligence in the way Asimov and other science fiction writers envisioned it. I believe they are better described by what I would call automated responsive software engines. And woohoo, that spells ARSE!
Because let’s face it: what we are talking about to date are not smart, autonomous and intelligent decision-making systems that replicate human decision-making. Rather, these are incredibly sophisticated automated decision tree calculators. If (X) occurs, check (a), (b) and (c), then determine the most favorable outcome.
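That "decision tree calculator" pattern can be sketched in a few lines of code. This is a toy illustration of the if-(X)-occurs-check-(a)-(b)-(c) logic described above; the event fields, bid values and scoring function are my own made-up example, not anything from a real ad platform.

```python
# A toy "automated responsive software engine": when an event occurs,
# score a fixed set of alternatives and pick the most favorable outcome.

def choose_outcome(event, alternatives, score):
    """Evaluate each alternative against the event; return the best-scoring one."""
    return max(alternatives, key=lambda alt: score(event, alt))

# Illustrative event: an ad impression with an audience-match signal and budget.
event = {"audience_match": 0.8, "budget_left": 100}

def score(event, bid):
    # Favor the audience match, penalize spending a larger share of the budget.
    return event["audience_match"] * 10 - (bid / event["budget_left"])

best_bid = choose_outcome(event, [1, 5, 10], score)
```

No learning, no intent, no understanding: just an exhaustive check of pre-programmed alternatives, which is exactly the point being made.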
Now I am very much in favor of technical advancement, and this is not a piece to denounce or belittle the tech of today and the future. But I do believe that this kind of AI, like robotics, requires a set of laws to ensure it does no harm. So I am proposing that AI systems that govern marketing- and/or advertising-related decision-making adhere to the following laws:
-- Marketing and Advertising AI must keep a permanent and accessible record of the decisions, parameters and alternatives it considered, and the inputs that influenced its decision-making process.
-- Marketing and Advertising AI must be driven to deliver the best result possible within the parameters available.
-- Marketing and Advertising AI must protect its decision-making from manual overrides that would lead to a sub-optimal outcome.
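The first law above is essentially an audit-log requirement, and it is easy to see what a minimal version could look like in practice. The sketch below is my own illustration of one way to record a decision together with its inputs and alternatives; the field names and structure are assumptions, not an existing standard.

```python
import json
from datetime import datetime, timezone

# Hedged sketch of the first proposed law: append every decision, with the
# inputs, alternatives and parameters that shaped it, to a permanent record.

def record_decision(log, inputs, alternatives, chosen, parameters):
    """Serialize one decision into the audit log and return the entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,              # the data that informed the decision
        "alternatives": alternatives,  # everything that was considered
        "chosen": chosen,              # the outcome actually selected
        "parameters": parameters,      # the settings in force at the time
    }
    log.append(json.dumps(entry))  # stored as plain text, so it stays readable
    return entry
```

Serializing the whole context, data included, is what makes it possible later to see what drove a decision and to detect whether manipulated inputs influenced it.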
That is my first attempt. I am sure that these are very much open for improvement. I am also very aware that the chances of these actually being adopted and put in place are probably zero. And I calculated that without the help of an algorithm…
Good piece Maarten. Permission to pinch your acronym?
John: Permission granted (with proper attribution of course!). Thanks!
FWIW, your first rule is probably baked in already. One of the great things about Decision Tree models is that the parameters, alternatives, and inputs are all part of the code so they can always be reproduced.
Thanks Joshua - does that include the data that informs the decisions? My thinking was that there needs to be a record of the actual data that helped inform the decision, especially to understand what the drivers of a decision were, and to ensure the decisions aren't influenced by data manipulation.
How does "Guru Maarten" sound?