
Human Algorithms, the human side of algorithms

Runar Vestrheim, Chief Services Officer at Cegal. Runar has overall responsibility for Cegal's global consulting services.
09/17/2020

Artificial intelligence/Machine Learning/Deep Learning are collective terms for algorithms that behave intelligently or learn on their own. Because people build and train these algorithms, they also have a human side, which is what we call "Human Algorithms".


Artificial intelligence is an umbrella term for technologies that make computer systems "intelligent" in the sense that they can solve problems and learn from their own experiences. Machine Learning (learning software) is one of the best-known and most widely used of these technologies today.

Algorithms use mathematics, statistics, rules, computer science, neurology, and linguistics to systematize and recognize patterns in large amounts of data.

In simple terms, an algorithm describes a series of operations used to solve one or more problems. More precisely, it is a set of instructions that performs a sequence of mathematical operations according to a defined hierarchy and logic.
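
As a concrete illustration, below is a minimal sketch of an algorithm as a fixed sequence of operations with logic and hierarchy, written in Python. The credit-limit rule, its thresholds, and the numbers are entirely hypothetical and only serve to illustrate the idea.

    # A minimal illustration of an "algorithm": a fixed sequence of operations
    # with logic and hierarchy. The rule and all thresholds are hypothetical.
    def credit_limit(income: float, credit_score: int) -> float:
        """Return an illustrative credit limit from income and credit score."""
        # Step 1: start from a base limit proportional to income
        base = 0.2 * income
        # Step 2: adjust according to the credit score (the "logic and hierarchy")
        if credit_score >= 750:
            factor = 1.5
        elif credit_score >= 650:
            factor = 1.0
        else:
            factor = 0.5
        # Step 3: combine the operations into a single result
        return round(base * factor, 2)

    print(credit_limit(income=500_000, credit_score=700))  # prints 100000.0

The same structure scales from simple hand-written rules like this one to learned models; the difference with Machine Learning is that the "rules" are derived from data rather than written by hand.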

Now is the time to bring out the human (and kind) side of algorithms. Because they are advanced computer programs, algorithms are often considered objective or neutral.

However, algorithms are programmed (Machine Learning/Deep Learning) by humans or influenced by human behavior. This is what we at Cegal call the human side of algorithms, or Human Algorithms.

Often, algorithms are able to see through our prejudices and make better decisions than people do. Other times, algorithms reinforce our prejudices, either because they have been programmed with them, because we have trained them on biased data, or because of a combination of the two.

Here are three examples of the latter:

  1. Initially, face recognition algorithms (Machine Learning) were poor at recognizing dark-skinned faces because the training data set was too homogeneous: the majority of the images in the data set were of white faces. (A simple check for this kind of data-set imbalance is sketched after these examples.)


    https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

  2. Apple Card and credit rating: A Danish developer discovered that he was given a credit limit 20 times higher than his wife's, even though they had roughly the same income and she had a better credit rating than he did. The gender bias is likely caused by bias in the data behind the algorithms.

    https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/

  3. Tay and Rinna, Microsoft's artificial intelligence chatbots, evolved quite differently. Learning from American Twitter accounts, Tay became racist, quoting Hitler, denying the Holocaust, and developing an unpleasant personality before Microsoft had to take the bot down. Rinna, who was trained in Japan, also spoke positively of Hitler and railed against Microsoft, but was mostly positive and pleasant before "she" suddenly slid into a "teenage depression".

    A number of online sources refer to Tay and Rinna, including the following:


    https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
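
As the first example shows, such problems often start with an unbalanced training set. Below is a minimal sketch, in Python, of how the balance of a labelled data set could be checked before training; the group labels and counts are purely hypothetical.

    # A minimal sketch of checking how balanced a labelled data set is.
    # The group labels and counts are hypothetical; in practice they would
    # come from the data set's metadata.
    from collections import Counter

    labels = ["light-skinned"] * 900 + ["dark-skinned"] * 100  # hypothetical
    counts = Counter(labels)
    total = sum(counts.values())

    for group, n in counts.items():
        print(f"{group}: {n} images ({n / total:.0%})")

    # A strongly skewed distribution (here 90% / 10%) is an early warning that
    # the trained model may perform worse on the under-represented group.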

The human side, or bias, of the algorithms means that we have to measure and test them to make them better.

Due to this potential human bias, we must measure and test the algorithms to make them better and more accurate. How?
• Analyze the data and the results of training, and identify and correct any bias in the data (see the sketch after this list).
• Identify possible biases in advance.
• Use synthetic data for training and testing.
• Develop algorithms with a greater degree of diversity: they should be built by teams with diverse backgrounds, genders, and origins.
• Improve regulations in general. Would it make sense to test algorithms in the same way that society tests drugs?
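
As a sketch of the first bullet point, the snippet below compares a model's accuracy per group on a hypothetical evaluation set; the predictions, ground truths, and group names are made up purely for illustration.

    # A minimal sketch of measuring a model's results per group to surface bias.
    # Predictions, ground truths, and group names are hypothetical.
    def accuracy(pairs):
        """Share of (prediction, ground truth) pairs that match."""
        return sum(p == t for p, t in pairs) / len(pairs)

    # Hypothetical (prediction, ground truth) pairs per demographic group
    results = {
        "group_a": [(1, 1), (1, 1), (0, 0), (1, 0)],  # 75% correct
        "group_b": [(0, 1), (1, 1), (0, 0), (0, 1)],  # 50% correct
    }

    scores = {group: accuracy(pairs) for group, pairs in results.items()}
    for group, score in scores.items():
        print(f"{group}: {score:.0%} accuracy")

    # A large accuracy gap between groups signals that the data or the model
    # should be corrected before the algorithm is put into production.
    print(f"Gap: {max(scores.values()) - min(scores.values()):.0%}")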



Cegal and Human Algorithms

At Cegal, we have several consultants with experience from a wide range of machine learning projects and artificial intelligence solutions. Our Data Scientists help customers build tailored machine learning and analytics solutions.

Read more about Cegal's services within Data and Analytics >

Related words:

Data Scientist, Artificial Intelligence, Deep Learning, Chatbot, RPA, Robots, Machine Learning

 

Would you like a non-binding chat with us? We are ready to help you.