Ethics matter in business - to a degree. Take the decision by big brands such as Unilever to halt advertising on Facebook over the social media giant’s handling of hate speech and misinformation. Today, more companies are prepared to make these trade-offs to defend their reputations. You don’t want to be caught on the wrong side of history, after all.
The same cannot be said for the exponential deployment of AI and analytics. Like a gold rush, consumer data is now so readily accessible - and valuable - that most companies are too busy scrambling to exploit it to worry much about the implications or impact.
There are no benchmarks for navigating complex ethical challenges within organizations - around privacy and data usage, for example. Worse still, patchy global regulation has created an uneven playing field.
In this context, it is a brave product manager who refuses the promise of more profit through the collection of that little bit of extra data, and then just a little bit more, and so on.
Why ethics matter in tech
The companies that have weathered the pandemic crisis were able to anticipate what was happening and make swift decisions through data analytics and machine learning. These companies had the right infrastructure to collect the right kind of data and to disseminate it effectively. Crucially, they had business models that could turn data into action almost in real time.
As companies prepare for a post-pandemic world and whatever crisis comes next, the effective management of “tech ethics” will prove a crucial differentiator for companies. In the future, addressing questions such as “What data should we collect?” or “What should we do with our existing datasets?” will enhance processes and offerings rather than derail or devalue them.
Most customers might be relaxed about data collection for now, but they will grow more sensitive to the ethical positions and actions of brands. We have seen the scale of concern over the use of data analytics in elections. It may not be long before the intensive commercial use of data goes too far and changes consumer habits.
Brand ethics also matter to the best and brightest in the labour market. If companies want to attract the talent that will help them prepare for and survive the next crisis, they need to get their ethics in order.
And let’s not forget our humble product manager, dutifully accepting incremental changes in data usage or collection to deliver ever smaller profit gains. That slippery slope is already creating business models that rely too heavily on practices that may soon be deemed unacceptable. Good luck when the regulatory or legal backlash comes.
What can you do?
If companies fail to act now on tech ethics, how many will be around to see the next crisis, let alone survive it? There are three steps that organizations can take to incorporate a more ethical approach to the management of data analytics, AI and machine learning.
1. Draw your red line. Be clear on what it is you will never do. For example, Google has declared it will never make its data available for the production of weapons.
2. Make everyone responsible. Empower decision makers at all levels so that they have the authority to draw ethical boundaries. The product manager should be allowed to say “No”.
3. Appoint a Chief Ethics Officer. Put someone in charge of managing ethical challenges across your business. The individual or team handling this should have both the authority to make calls on ethical issues, such as the collection of private data, and the final responsibility if something goes wrong.