Abstract
The expectation surrounding the spread of autonomous vehicles lies in the hope of significantly reducing the 1.3 million road deaths worldwide each year, roughly 90% of which are caused by human error. Insurance company policies assume a human reaction time of two seconds: the time needed to recognize a dangerous situation, react to it, and apply the brakes. Artificial intelligence, which can process the vast amount of data coming from the sensors, could make a decision about the situation much faster than a human, reducing this reaction time. The aim of this research is to identify several situations and techniques capable of deceiving, diverting, or capturing a self-driving car, or even turning it against other vehicles, by influencing the decision-making of its artificial intelligence. In this paper I discuss several situations that might confuse the artificial intelligence of an autonomous vehicle or lead it to an inadequate decision. Safe decision-making depends both on the method used to train the artificial intelligence and on the correctness of the data it is given. A further aim of the research is to demonstrate how a Manchurian artificial intelligence could operate in autonomous vehicles. I introduce the idea of a Manchurian artificial intelligence, which can be activated by a certain event and can pose a threat to the passengers of the vehicle. If it is present in the software of many vehicles, a chain of accidents worldwide could be triggered at a given time.
