Artificial intelligence, machine learning and deep learning are umbrella terms for "intelligent" or learning algorithms. These algorithms are either programmed by humans or, in the case of machine learning and deep learning, learned from humans and human behaviour. This is what we at Cegal call the human side of algorithms, or "Human Algorithms". With Human Algorithms, Cegal focuses on the human aspects of algorithms.
Artificial intelligence is an umbrella term for technologies which make computer systems "intelligent", in the sense that they are able to solve problems and learn from their own experiences. Today, machine learning is one of the best-known and most widely used of these technologies.
In simple terms, machine learning is about using mathematics, statistics, rules, computer science, neurology and linguistics to collect, systematise and recognise patterns in large amounts of data.
An algorithm is a description of a series of operations used to solve one or more problems. More precisely, an algorithm is a sequence of mathematical and logical steps with a defined order and hierarchy. On a computer, an algorithm is basically the execution of a series of commands that perform a task.
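To make this concrete, here is a hypothetical example (not from the original text): binary search, a classic algorithm, written as a short Python function. Each line is one step in the defined sequence of operations described above.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # look at the middle element
        if sorted_items[mid] == target:
            return mid                    # found it
        elif sorted_items[mid] < target:
            low = mid + 1                 # discard the lower half
        else:
            high = mid - 1                # discard the upper half
    return -1                             # not in the list
```

For example, `binary_search([1, 3, 5, 7, 9], 7)` returns `3`, because the value 7 sits at index 3. The same problem could be solved by checking every element in turn; the two algorithms give the same answer but describe different sequences of operations.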
It is now time to highlight the human aspect of algorithms. Because algorithms are (in this context) advanced computer programs, it is easy to think of them as objective or "true".
Sometimes, algorithms can see through our prejudices and make better decisions than we can. At other times, algorithms reinforce our prejudices, either because we have programmed them in, because we train the algorithms on data that contains them, or a combination of the two.
Here are three examples of the latter:
The first facial recognition algorithms (machine learning) were poor at recognising darker-skinned faces, because the people who developed them were white and the data set that taught the algorithms was not sufficiently diverse. In other words, the data set mostly contained images of white people. You can read more about the case in this article from the New York Times >
Apple Card and credit ratings: A Danish developer discovered that he was granted 20 times more credit than his wife, even though their finances were very similar and she had a higher credit rating than him. Gender bias can originate from bias in the data underlying the algorithms. In this case, the models were trained on historical credit data from the 1970s, without taking account of women's financial independence. You can read more about the case in this article from The Washington Post >
Microsoft's artificial intelligence chatbot Tay and its cousin Rinna evolved quite differently. Tay, who learned from American Twitter accounts, soon became a racist and a Holocaust denier, quoted Hitler and developed a rather unpleasant personality, before Microsoft realised that it had to remove Tay from the web. Rinna, who was trained in Japan, also referred to Hitler in positive terms and criticised Microsoft, but was mostly positive and pleasant, until "she" suddenly became a "depressed teenager". Tay and Rinna are referred to in many places on the web, including in this case from techcrunch.com >
This possible human side, or bias, of algorithms means that we must measure and test algorithms to make them better and fairer. How?
Analysis of data and training results. Correct the data if it contains bias.
Detect possible instances of bias in advance.
Training and testing using synthetic data.
Greater diversity among those who develop algorithms. Algorithms must be created by teams composed of people of different backgrounds, genders, origins, etc.
Better regulations generally. Perhaps algorithms should be tested in the same way as medicines?
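The first two measures above, analysing data for bias and detecting it in advance, can be started with very simple checks. As a hypothetical sketch (the function names and the demographic parity metric are our illustration, not a standard from the original text), the Python snippet below measures how well each group is represented in a data set and how much a model's approval rate differs between groups:

```python
from collections import Counter

def representation(group_labels):
    """Share of each group in a data set, given a list of group labels."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return {group: count / total for group, count in counts.items()}

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, approved) pairs, where approved is a bool.
    Returns the largest difference in approval rate between any two groups;
    0.0 means all groups are approved at the same rate."""
    approved, seen = Counter(), Counter()
    for group, ok in outcomes:
        seen[group] += 1
        if ok:
            approved[group] += 1
    rates = {g: approved[g] / seen[g] for g in seen}
    return max(rates.values()) - min(rates.values())
```

If `representation` shows that one group makes up only a few per cent of the training data, or `demographic_parity_gap` shows a large gap in approval rates, that is a signal to correct the data or retrain, before the model is put in front of customers. Real projects would use richer fairness metrics, but the principle is the same: measure first, then correct.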
Cegal and Human Algorithms
At Cegal, we have a team of consultants with experience from machine learning projects and artificial intelligence solutions. Cegal also has many experts in the field of data science. Our data scientists help customers create tailor-made solutions for machine learning and analysis.