
Four points on AI

Most people have heard about Zuckerberg’s and Musk’s argument about AI. Here are four observations that probably render this argument moot:

  1. Here’s a nice summary of the original argument, “Artificial Intelligence: Are Elon Musk of Tesla and Facebook’s Mark Zuckerberg arguing about the same thing? I don’t think so…”, along with an interesting point of view about the timing of all of that. So, let’s look at the timing and at where we can control the evolution of AI. (By the way… who is “we” in the previous sentence?)

  2. The advent of the Tensor Processing Unit (Wikipedia) and the TensorFlow framework (Wikipedia) gives “everybody” access to a degree of AI processing power that was unthinkable a mere few years ago. Much like the PC, which put processing power into the hands of the individual (an effect the smartphone repeated and amplified a few decades later), the Web, which gave everybody a chance to “publish” anything (and thus created the “fake news” problem), and SaaS infrastructures, which made not just quality but also quantity of processing power affordable, TPUs and TensorFlow already put AI into the hands of “everybody”: the creation of chatbots and face recognition is now covered in tutorials in supermarket-level computer magazines. (A minimal code sketch follows this list.)
  3. Bloomberg writes “China’s Plan for World Domination in AI Isn’t So Crazy After All”.
    China has all it takes: paying customers (the government), a wealth of data (from the government – privacy laws are far less strict than here in Europe) and a huge IT sector with plenty of smart people. And I personally don’t believe the Chinese are going to have their AI efforts distracted by Western concerns about potential unwanted side effects on society.
  4. AIs (chatbots, specifically) have already started to evolve their own languages. (Fast Co Design)
    … and we don’t stand a chance of understanding those languages. If we hand over responsibility to AIs, we already have no practical way to verify their conclusions; once they start negotiating with each other as well, mere humans are locked out for good.
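
To make point 2 concrete, here is roughly what those supermarket-magazine tutorials contain: a minimal sketch, assuming a current TensorFlow 2.x installation (which bundles the Keras API and the MNIST dataset). With it, a working handwritten-digit classifier takes about a dozen lines of Python.

```python
# Minimal sketch: a handwritten-digit classifier with TensorFlow/Keras.
# Assumes TensorFlow 2.x is installed; dataset is downloaded automatically.
import tensorflow as tf

# Load the bundled MNIST dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network: flatten the image, one hidden layer, softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)   # train on 60,000 digit images
model.evaluate(x_test, y_test)          # evaluate on the held-out test set
```

That is the whole program; no special hardware, no background in machine learning required, which is precisely why “everybody” can now do this.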

In other words, while I’m philosophically on the side of Musk (“higher forms of AI could in the long run turn human beings into second-class citizens, dominated by machines that have become capable of taking their own decisions.”), I believe it is already too late: if AIs are already creating opaque interactions (like Facebook’s negotiating chatbots) and China has decided to become the world leader in AI by 2030, we are already in the middle of an uncontrollable rat race.

To come back to the earlier question: no matter who we assume “we” to be, “we” won’t be able to control what’s going on. We can’t control China, we can already see that AIs create uncontrollable outcomes, and we can’t control the masses of script kiddies who now set out to create something “cool”, just because they can.

Instead of (or at least in addition to) discussing how to control the evolution of AI, we should probably start thinking carefully about how to deal with the inevitable.

What do you think?