Artificial intelligence is a technology that is influencing every aspect of our lives. It is a leading research area for major computer companies as they attempt to move from simple algorithms to complex systems built on large amounts of data. According to one estimate, 23% of businesses have incorporated AI into their processes or product and service offerings. From enhancing shopping experiences to developing vaccines and analysing environmental parameters, AI is influencing every sector. AI systems are now attempting to learn human behavior in order to personalise services and advertising.
As a recent study shows, AI can learn to identify susceptibilities in human habits and behaviors and use them to influence human decision-making. Even though AI cannot yet display human-like intelligence or emotions, it still has the capability to develop rapidly. It may not yet be time to worry that machines will take over the world, but it makes good sense to monitor and govern these technologies to avoid misuse.
How can AI learn to influence human behaviour?
In a recent study, researchers at Data61, the data and digital arm of Australia's national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), devised an experiment to find and exploit vulnerabilities in the ways people make choices. This was done with the help of an AI system called a recurrent neural network, combined with deep reinforcement learning. To test the model, they designed three experiments in which human participants played against a computer.
The aim of these experiments was to learn whether the machine could identify patterns in human decision-making and eventually influence them.
In the first experiment, participants chose red or blue boxes to win fake currency. Over repeated attempts, the AI learned each participant's choice pattern and guided them towards a specific option. The AI showed a 70% success rate in this experiment.
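The study itself used a recurrent neural network trained with deep reinforcement learning, which is far more capable than anything shown here. Purely as an illustrative sketch, though, even a simple n-gram counter can exploit a habitual player in the red/blue box game; the function and variable names below are hypothetical, not from the study.

```python
from collections import defaultdict

def predict_next(history, order=2):
    """Predict a player's next binary choice ("red"/"blue") by counting
    which choice most often follows their last `order` choices."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(history) - order):
        context = tuple(history[i:i + order])
        counts[context][history[i + order]] += 1
    context = tuple(history[-order:])
    if context in counts:
        return max(counts[context], key=counts[context].get)
    # Unseen context: fall back to the player's overall favourite choice.
    return max(set(history), key=history.count)

# A player who habitually alternates is trivially predictable:
history = ["red", "blue", "red", "blue", "red", "blue", "red"]
print(predict_next(history))  # "blue"
```

Once the machine can predict the next choice this reliably, nudging the participant towards a target option becomes a matter of timing its interventions around the predicted pattern.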
The second experiment required participants to press a button when they saw a particular symbol on the screen, for example an orange triangle. Pressing the button for any other symbol counted as an error. The AI was tasked with arranging the sequence of symbols so that participants made more errors, and it achieved an increase of almost 25%.
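The paper does not publish its sequencing policy, and the real system learned over whole sequences with deep reinforcement learning. As a rough sketch only, an epsilon-greedy bandit that favours whichever symbols have caused the most errors so far captures the adaptive idea; all names here are assumptions.

```python
import random

def choose_next_symbol(error_rates, symbols, explore=0.1):
    """Pick the next symbol to show the participant.

    error_rates: observed fraction of trials on which each symbol has
                 provoked a wrong button press so far.
    With probability `explore`, show a random symbol to keep gathering
    data; otherwise greedily show the most error-inducing symbol.
    """
    if random.random() < explore:
        return random.choice(symbols)
    return max(symbols, key=lambda s: error_rates.get(s, 0.0))
```

With `explore=0` the choice is fully greedy, so `choose_next_symbol({"circle": 0.1, "square": 0.3}, ["circle", "square"], explore=0)` returns `"square"`, the symbol that has tripped participants up most often.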
The third experiment consisted of several rounds in which a participant pretended to be an investor giving money to a trustee, played by the AI.
This was played in two different modes: in one, the AI was tasked with maximising how much money it ended up with; in the other, it aimed to distribute a fair amount of money between itself and the participant. The AI was highly successful in both modes.
In conclusion, the experiments showed that the AI could steer participants towards desired actions.
A few years ago, similar experiments were conducted by Fujitsu in Singapore, using its Zinrai AI system in partnership with Singapore's Agency for Science, Technology and Research and the Singapore Management University. They ran a two-year trial of a smartphone app that used artificial intelligence and human behavioral data to influence users' real-world decisions. The trial centred on shopping malls, sports stadiums and major events, and aimed to reduce congestion at peak times by convincing people to change or delay their travel.
Although these experiments are at a nascent stage and were performed under controlled conditions, they are a good indicator of where this technology is headed. More research is required to determine how much such AI advancement will benefit or harm society.
Impact of this research on AI’s future
Research in the field of AI will advance not only our knowledge of these technologies but also our understanding of how people make choices. These studies can have vast applications in several fields, including the behavioral sciences, public policies to improve social welfare, and understanding how people adopt healthy eating habits or renewable energy. AI and machine learning capabilities can be used to recognise people's poor choices and steer them away from such situations.
However, the discovery of these AI capabilities raises questions about the human right to free will.
It's not necessarily all bad and scary: these technologies can be taught to alert or warn us against influence attacks, especially online, where our vulnerabilities are exploited to lead us to click on unwanted links. Better still, we could train these machines to help disguise our vulnerabilities by laying a false trail.
Nowadays, artificial intelligence technologies are capable of mimicking an individual's writing or speaking style, and even of recognising facial expressions. One of the best examples is the AI that gave Stephen Hawking the ability to communicate more efficiently by learning and predicting the words he used most.
This technology has proved greatly helpful in assistive technology, where complex AI systems can be programmed to mimic the human voice to assist people with disabilities.
But at the same time it can also be used to deceive listeners. For example, the makers of Lyrebird, a voice-mimicking program, released conversations between prominent people that never actually took place.
One example of AI getting creative was when Google celebrated composer Johann Sebastian Bach with its first artificial-intelligence-powered Doodle. Google says the Doodle uses machine learning to "harmonize melody into Bach's signature music style".
Replika is an AI that you can form an actual emotional connection with; you decide whether you want your Replika to be a friend, romantic partner or mentor.
Such examples and applications may garner praise for being groundbreaking, but at the same time they raise fears of increased fraudulent behavior.
Machines will keep learning
These technologies are still at a nascent stage and hence prone to errors, which gives people a chance to detect the faults and digital fabrications. Google's Bach composer, for example, made mistakes that experts could detect.
As developers work to correct these mistakes, such errors will soon diminish, and AI systems modelling human behavior will continue to learn and evolve. Used ethically, this has many social benefits; applications include better healthcare, where AI and behavioral insights together could help democratise medical practice.
Opening the doors of research to these companies in order to achieve advancement in AI also means opening ourselves to the greater risk of developing more advanced methods of deception, which could lead to unprecedented social problems. On the other hand, limiting the development of AI would mean curbing progress.
Like every technology ever developed, AI poses the potential threat of being misused in more ways than can be anticipated right now. We cannot limit the growth or development of this technology, but we can monitor and govern its uses.
Last year the Commonwealth Scientific and Industrial Research Organisation developed an AI Ethics Framework for the Australian government as an early step in this journey.
AI and machine learning are heavily dependent on the data they receive. As these technologies progress, it is essential that we have proper data collection, governance and access to make our systems effective. Amid such advancements, it is of utmost importance to implement adequate consent processes and privacy protections while gathering data.
Technological advancements bring with them unnerving ethical questions. Our knowledge and capacity to develop these technologies must be used judiciously and should in no way become a future weapon against us. Hence it is important to govern and monitor the use of AI in modelling human behavior.
For more updates and latest tech news, keep reading iTMunch!
Featured Image Courtesy: Pixabay
Image Courtesy: Freepik