Micro Singularity & Ethics

The Guardian long read on “How algorithms rule our working lives” was a fantastic, though distressing, read about employers using algorithms to filter out candidates for reasons ranging from mental health to race to neighbourhood to income. This in itself has massive implications: it creates and widens class divides, and closes off access to folks based on biases that are arguably unfair and lacking in nuance.
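
To make the mechanism concrete, here is a minimal sketch – my own toy construction, not anything from the Guardian piece – of how a hiring filter that never sees a protected attribute can still reproduce historical bias, because a feature like a postcode acts as a proxy for it. All names and numbers here are made up.

```python
import random

random.seed(42)

# Toy setup: a protected 'group' correlates with 'postcode' (segregation),
# and the historical hiring labels the filter imitates were themselves biased.
def make_applicant():
    group = random.choice(["A", "B"])
    # Group strongly predicts postcode: A is mostly "north", B mostly "south".
    postcode = "north" if (group == "A") == (random.random() < 0.9) else "south"
    skill = random.gauss(0, 1)  # identically distributed across groups
    # Biased historical decision: skill matters, but group B was penalised.
    hired = skill + (0.5 if group == "A" else -1.0) + random.gauss(0, 0.5) > 0
    return {"group": group, "postcode": postcode, "skill": skill, "hired": hired}

history = [make_applicant() for _ in range(10_000)]

def hire_rate(rows):
    return sum(r["hired"] for r in rows) / len(rows)

# A naive "blind" filter: shortlist anyone whose postcode historically cleared
# a 50% hire rate. The group feature is never used -- the bias rides in on
# the postcode proxy.
by_postcode = {
    pc: hire_rate([r for r in history if r["postcode"] == pc])
    for pc in ("north", "south")
}
shortlisted = [r for r in history if by_postcode[r["postcode"]] > 0.5]

for g in ("A", "B"):
    in_group = [r for r in history if r["group"] == g]
    rate = sum(1 for r in shortlisted if r["group"] == g) / len(in_group)
    print(f"group {g}: shortlist rate {rate:.0%}")
# Despite equal skill distributions, group A is shortlisted at roughly 90%
# and group B at roughly 10%, purely through the postcode proxy.
```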

If we zoom out beyond work and jobs, it’s fairly easy to see that algorithms have an increasing impact on our consumption and on life in general. The biggest services around – Facebook (M, newsfeed items), Google (search results, Google Now), Amazon (Echo, recommended products), Apple (Siri) – all lean heavily on algorithms. And that brings us to biases in algorithms. Factor Daily had a couple of posts on teaching bots ‘good values’. Slate had a great read on the subject too – on how Amazon’s computerized decision-making can also deliver a strong dose of discrimination. Both offer perspectives on how biases, intentional and unintentional, creep into algorithms, and the Slate article also brings out some excellent nuances on what we expect from algorithms, and how offline retail chains (in the selection of store locations, for instance) and human decisions compare to them.

In a different domain – money – I recently read a superb thought exercise on whether capital was approaching its own singularity. The noble idea of a standard currency has been corrupted, and the market has become the single greatest machine for growing capital for capital’s sake. That caused me to think: a tech singularity might be a few decades away, but in more limited domains (narrow AI) built by specific corporations (the examples above), could algorithms reach singularity faster – because they are, after all, optimised for efficiency, which is quite tangible and computable? Micro singularities in various domains: I wonder if these are the vague equivalents of unicellular organisms, which eventually led to the dominant species on the planet.

All of this also reminded me of something I had written on morality in artificial intelligence a couple of years ago. While writing that post, I had learned that even morality has gradations in the context of AI – amorality, operational morality, functional morality, and full moral agency. The difference between operational and functional morality is ethical sensitivity, which makes the latter more complex. For example, parental control software vs MedEthEx, which is designed to help physicians navigate the complexities of difficult medical ethics questions. (via) I had also realised that morality is quite a subjective thing, as opposed to, say, ethics.
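
A toy way to see that difference in code – this is my own caricature with made-up duty names and weights, not MedEthEx’s actual method: operational morality is a fixed rule baked in by its designers, while functional morality scores how strongly an option satisfies or violates competing duties.

```python
BLOCKLIST = {"gambling", "violence"}  # hypothetical categories

def operational_filter(category: str) -> bool:
    """Parental-control style: the ethics live entirely in the designers' fixed list."""
    return category not in BLOCKLIST

# Functional morality, loosely in the spirit of prima facie medical duties.
# Scores run from -2 (severe violation) to +2 (strong satisfaction); the
# weights are invented for illustration.
DUTY_WEIGHTS = {"patient_autonomy": 1.0, "non_maleficence": 1.5, "beneficence": 1.0}

def functional_advice(duty_scores: dict) -> str:
    """Weigh how much accepting the patient's choice satisfies or violates
    each duty, instead of applying one hard-coded rule."""
    total = sum(DUTY_WEIGHTS[d] * s for d, s in duty_scores.items())
    return "accept the patient's decision" if total >= 0 else "try again to persuade"

# Example: a patient refuses a strongly recommended treatment. Respecting the
# refusal satisfies autonomy (+2) but violates non-maleficence (-1) and
# beneficence (-1); under these weights the total is -0.5.
print(functional_advice(
    {"patient_autonomy": 2, "non_maleficence": -1, "beneficence": -1}
))  # -> "try again to persuade"
```

The ethical sensitivity lives in the scoring: change the case and the same system can come down on the side of autonomy, which a fixed blocklist never could.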

Maybe the semantics are important, and in our discourse we should be using ethics and not morals. After all, when humans themselves don’t really score an A+ on morality, do we have the moral right to demand that of another ‘species’? 🙂 But more importantly, as applications advance, we should be demanding that corporations show us how they build functional morality into their systems and practices, and thus move away from “opaque algorithms” that we only consume and don’t really know. These are arguably the building blocks of a “strong AI”, and we’d thus be shaping its evolutionary protocol. And if efficiency is the dictum we’re building into it, evolution, I’d think, couldn’t be happier.

One could argue that from an evolutionary perspective, we are overreaching and should let evolution take care of itself. But while evolution seems quite unconcerned about which species dominates, humanity might not want it left to chance.

(Image: “The Creation” by eliant – via)
