{"id":22111,"date":"2019-02-12T09:00:38","date_gmt":"2019-02-11T22:00:38","guid":{"rendered":"https:\/\/www.kaspersky.com.au\/blog\/?p=22111"},"modified":"2019-02-12T05:11:25","modified_gmt":"2019-02-11T18:11:25","slug":"when-ai-decides","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com.au\/blog\/when-ai-decides\/22111\/","title":{"rendered":"When Artificial Intelligence affects lives"},"content":{"rendered":"<p>Despite our previous coverage of some <a target=\"_blank\" href=\"https:\/\/www.kaspersky.com\/blog\/machine-learning-nine-challenges\/23553\/\" rel=\"noopener nofollow\">major issues with AI<\/a> in its current form, people still entrust very important matters to robot assistants. Self-learning systems are already helping judges and doctors make decisions, and they can even predict crimes that have not yet been committed. Yet users of such systems are often in the dark about how the systems reach conclusions.<a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2019\/02\/12050709\/when-ai-decides-featured.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2019\/02\/12050709\/when-ai-decides-featured-1024x673.jpg\" alt=\"Artificial intelligence assists judges, police officers, and doctors. But what guides the decision-making process?\" width=\"1024\" height=\"673\" class=\"aligncenter size-large wp-image-22112\"><\/a><\/p>\n<h2>All rise, the court is now booting up<\/h2>\n<p>In US courts, AI is deployed in decisions relating to <a target=\"_blank\" href=\"https:\/\/www.wired.com\/2017\/04\/courts-using-ai-sentence-criminals-must-stop-now\/\" rel=\"noopener nofollow\">sentencing, preventive measures, and mitigation<\/a>. 
After studying the relevant data, the AI system assesses whether a suspect is prone to recidivism, and the decision can turn probation into a real sentence, or lead to bail refusal.<\/p>\n<p>For example, US citizen Eric Loomis was <a target=\"_blank\" href=\"https:\/\/www.bbc.com\/news\/magazine-37658374\" rel=\"noopener nofollow\">sentenced to six years in jail<\/a> for driving a car in which a passenger fired shots at a building. The ruling was based on the COMPAS algorithm, which assesses the danger posed by individuals to society. COMPAS was fed the defendant\u2019s profile and track record with the law, and it identified him as an \u201cindividual who is at high risk to the community.\u201d The defense challenged the decision on the grounds that the workings of the algorithm were not disclosed, making it impossible to evaluate the fairness of its conclusions. The court rejected this argument.<\/p>\n<h3>Electronic clairvoyants: AI-powered crime prediction<\/h3>\n<p>Some regions of China have gone a step further, <a target=\"_blank\" href=\"https:\/\/www.theglobeandmail.com\/news\/world\/china-using-big-data-to-detain-people-in-re-education-before-crime-committed-report\/article38126551\/\" rel=\"noopener nofollow\">using AI to identify potential criminals<\/a>. Facial-recognition cameras monitor the public and report to law enforcement authorities if something suspicious swims into view. For example, someone who makes a large purchase of fertilizer might be preparing a terrorist attack. Anyone deemed to be acting suspiciously can be arrested or sent to a reeducation camp.<\/p>\n<p>Pre-crime technology is being developed in other countries as well. Police in some parts of the United States and Britain <a target=\"_blank\" href=\"https:\/\/www.bbc.com\/news\/business-46017239\" rel=\"noopener nofollow\">use technology<\/a> to predict where the next incident is most likely to occur. 
Many factors are considered: the area\u2019s criminal history, its socioeconomic status, and even the weather forecast. Remarkably, since the tools\u2019 deployment in Chicago districts, gun crime there has <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/article\/us-chicago-police-technology\/as-shootings-soar-chicago-police-use-technology-to-predict-crime-idUSKBN1AL08P\" rel=\"noopener nofollow\">dropped by about a third<\/a>.<\/p>\n<h3>The computer will see you now<\/h3>\n<p>New technologies are also widely used in healthcare. Artificial doctors <a target=\"_blank\" href=\"https:\/\/www.artificialintelligence-news.com\/2018\/06\/01\/opinion-ai-remote-medical-consulting\/\" rel=\"noopener nofollow\">consult patients<\/a>, <a target=\"_blank\" href=\"https:\/\/www.telegraph.co.uk\/news\/world\/china-watch\/technology\/artificial-intelligence-in-medicine\/\" rel=\"noopener nofollow\">make diagnoses<\/a>, analyze checkup results, and <a target=\"_blank\" href=\"https:\/\/news.microsoft.com\/apac\/features\/ai-in-the-operating-theater-technology-transforms-cosmetic-surgery-in-korea\/\" rel=\"noopener nofollow\">assist surgeons during operations<\/a>.<\/p>\n<p>One of the best-known self-learning systems in healthcare is <a target=\"_blank\" href=\"https:\/\/www.ibm.com\/watson\/uk-en\/health\/\" rel=\"noopener nofollow\">IBM Watson Health<\/a>. Doctors coach the AI to diagnose diseases and prescribe therapy. Watson Health has had a lot of positive feedback. 
Back in 2013, for example, the probability that the <a target=\"_blank\" href=\"https:\/\/www.wired.co.uk\/article\/ibm-watson-medical-doctor\" rel=\"noopener nofollow\">supercomputer would select the optimal treatment plan<\/a> was put at 90%.<\/p>\n<p>However, in the summer of 2018, it was revealed that some of the <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2018\/7\/26\/17619382\/ibms-watson-cancer-ai-healthcare-science\" rel=\"noopener nofollow\">system\u2019s cancer treatment advice was unsafe<\/a>. In particular, Watson recommended that a cancer patient with severe bleeding be given a drug that could cause even more blood loss. Fortunately, the scenarios were hypothetical, not real cases.<\/p>\n<p>Sure, human doctors make mistakes too, but when AI is involved, the lines of responsibility are blurred. Would a flesh-and-blood doctor risk contradicting a digital colleague whose creators have crammed it with hundreds of thousands of scientific articles, books, and case histories? And if not, would the doctor shoulder any negative consequences?<\/p>\n<h3>AI must be transparent<\/h3>\n<p>One of the main problems with using AI to decide the fate of humankind is that the algorithms are often opaque, and tracing the cause of errors so as to prevent a repeat isn\u2019t easy at all. From the viewpoint of developers of self-learning systems, that is understandable: Who wants to share knowhow with potential competitors? But when people\u2019s lives are at stake, should commercial secrets take priority?<\/p>\n<p>Politicians worldwide are trying to come to grips with regulating <a target=\"_blank\" href=\"https:\/\/en.wikipedia.org\/wiki\/Right_to_explanation\" rel=\"noopener nofollow\">nontransparent AI<\/a>. In the European Union, \u201cdata subjects\u201d have the right to know on what basis AI decisions affecting their interests are made. 
<a target=\"_blank\" href=\"https:\/\/asia.nikkei.com\/Business\/Science\/Japan-to-hold-companies-accountable-for-AI-decisions\" rel=\"noopener nofollow\">Japan<\/a> is going down a similar route, but the relevant law is still only being considered.<\/p>\n<p>Some developers are in favor of transparency, but they are thin on the ground. One is tech company CivicScape, which in 2017 <a target=\"_blank\" href=\"https:\/\/qz.com\/938635\/a-predictive-policing-startup-released-all-its-code-so-it-can-be-scoured-for-bias\/\" rel=\"noopener nofollow\">released the source code of its predictive-policing system<\/a>. But this is very much the exception, not the rule.<\/p>\n<p>Now that the AI genie is out of the bottle, there is little chance of humankind ever putting it back. That means until AI-based decisions become provably fair and accurate, AI\u2019s use must rely on well-crafted laws and the competence of both the creators and the users of self-learning systems.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence assists judges, police officers, and doctors. 
But what guides the decision-making process?<\/p>\n","protected":false},"author":2049,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1789],"tags":[1140,960,1876,321],"class_list":{"0":"post-22111","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-technology","7":"tag-ai","8":"tag-artificial-intelligence","9":"tag-machine-learning","10":"tag-technology"},"hreflang":[{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/when-ai-decides\/22111\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/when-ai-decides\/15239\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/when-ai-decides\/12808\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/when-ai-decides\/17177\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/when-ai-decides\/15338\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/when-ai-decides\/14056\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/when-ai-decides\/17854\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/when-ai-decides\/16903\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/when-ai-decides\/22243\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/when-ai-decides\/5693\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/when-ai-decides\/25607\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/when-ai-decides\/11430\/"},{"hreflang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/when-ai-decides\/11514\/"},{"hreflang":"pl","url":"https:\/\/plblog.kaspersky.com\/when-ai-decides\/10353\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/when-ai-decides\/18553\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/when-ai-decides\/22440\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/when
-ai-decides\/17938\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/when-ai-decides\/22046\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com.au\/blog\/tag\/artificial-intelligence\/","name":"artificial intelligence"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/22111","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/users\/2049"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/comments?post=22111"}],"version-history":[{"count":1,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/22111\/revisions"}],"predecessor-version":[{"id":22113,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/22111\/revisions\/22113"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/media?parent=22111"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/categories?post=22111"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/tags?post=22111"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}