{"id":9021,"date":"2015-06-12T09:43:06","date_gmt":"2015-06-12T13:43:06","guid":{"rendered":"https:\/\/www.kaspersky.com.au\/blog\/?p=9021"},"modified":"2017-09-24T11:27:19","modified_gmt":"2017-09-24T15:27:19","slug":"artificial-intelligence-safety","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com.au\/blog\/artificial-intelligence-safety\/9021\/","title":{"rendered":"Artificial intelligence safety, or When to expect SkyNet?"},"content":{"rendered":"<p>What do billionaire inventor Elon Musk, the Google Now on Tap service launched at Google I\/O, and the recent \u201cEx Machina\u201d premiere have in common? The idea that unites all three is artificial intelligence or, more precisely, the process of imposing limits into artificial intelligence (AI) so it truly serves humanity and does not inflict any harm.<\/p>\n<p><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2015\/06\/06024719\/ai-safety-fb.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-9025\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2015\/06\/06024719\/ai-safety-fb.jpg\" alt=\"Should we be afraid of artificial intelligence?\" width=\"1600\" height=\"1600\"><\/a><\/p>\n<h3>What is artificial intelligence capable of today?<\/h3>\n<p>For those who are not really into the topic, let me enumerate several facts which can demonstrate the progress machines have made in their ability to do very human things.<\/p>\n<p>Today, Google can correctly <a href=\"http:\/\/venturebeat.com\/2015\/05\/28\/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate\/\" target=\"_blank\" rel=\"noopener nofollow\">recognize speech with 92% accuracy<\/a> versus 77% just two years ago; the company has developed an <a href=\"http:\/\/www.bloomberg.com\/news\/articles\/2015-02-25\/google-s-computers-learn-to-play-video-games-by-themselves\" target=\"_blank\" rel=\"noopener nofollow\">AI platform 
that learned to play classic videogames of its own accord<\/a>. Microsoft taught <a href=\"http:\/\/www.forbes.com\/sites\/michaelthomsen\/2015\/02\/19\/microsofts-deep-learning-project-outperforms-humans-in-image-recognition\/\" target=\"_blank\" rel=\"noopener nofollow\">a robot to recognize images<\/a> (or, more precisely, to identify certain objects in images) with an error rate of just 4.94% \u2013 stunningly, lower than that of an average human.<\/p>\n<p>Google\u2019s stats suggest that its driverless cars, which have by now driven more than 1,800,000 miles on public roads in California, were involved in accidents only 13 times in six years \u2013 and in 8 of those cases, the car behind was to blame.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">MT <a href=\"https:\/\/twitter.com\/Rayterrill?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">@Rayterrill<\/a> Amazing\u20141.8 million miles, 13 accidents (all caused by other humans) for Google self-driving cars <a href=\"http:\/\/t.co\/DgtxDVSiBJ\" target=\"_blank\" rel=\"noopener nofollow\">http:\/\/t.co\/DgtxDVSiBJ<\/a><\/p>\n<p>\u2014 Ars Technica (@arstechnica) <a href=\"https:\/\/twitter.com\/arstechnica\/status\/607688353119629312?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">June 7, 2015<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>All of this suggests that, even if a full-fledged AI is unlikely to be developed in the short term, something similar\u00a0will inevitably emerge in the coming decades.<\/p>\n<p>With that in mind, don\u2019t think the impact from \u2018smarter\u2019 machines will be seen only in the virtual domain. Here is an extreme example: unmanned aerial vehicles, or drones, fly completely on their own, but the command to shoot targets is still given by humans. This is the way the U.S. 
prefers to fight against terrorists in Pakistan and other dangerous regions.<\/p>\n<div class=\"pullquote\">Are programmers capable of creating a reliable \u2018safety lock mechanism\u2019 to prevent the AI from committing unethical or immoral acts?<\/div>\n<p>The possibility of automating such tasks as well is <a href=\"http:\/\/newscenter.berkeley.edu\/2015\/05\/28\/automated-killing-machines\/\" target=\"_blank\" rel=\"noopener nofollow\">widely discussed<\/a>. <em>The Terminator<\/em> franchise turned thirty last year, so you can vividly imagine the consequences of such a decision in the not too distant future.<\/p>\n<p>I won\u2019t dwell too much on apocalyptic scenarios; rather, I will focus on some more down-to-earth questions, one of which is: Are programmers capable of creating a reliable \u2018safety lock mechanism\u2019 to prevent the AI from committing unethical or immoral acts?<\/p>\n<p>Such acts might have various causes, yet the most obvious of them is a conflict over resources between humanity and the AI. However, there are other scenarios. I should note that the harm would not necessarily be intentional. There is a great example Stanislaw Lem cites in his wonderful work, <a href=\"http:\/\/www.amazon.com\/Summa-Technologiae-Electronic-Mediations-Stanislaw\/dp\/0816675775\" target=\"_blank\" rel=\"noopener nofollow\">Summa Technologiae<\/a>. 
In essence, the idea is as follows:<\/p>\n<p><em>\u201cSuppose the prognostic block of the \u2018black box\u2019 (AI) detects a danger potentially able to impact the state of humanity\u2019s homeostatic balance\u2026 The said danger is provoked by a rate of population increase which substantially exceeds the civilization\u2019s ability to satisfy humanity\u2019s basic needs.<br>\n<\/em><\/p>\n<p><em>Suppose one of the external channels of the \u2018black box\u2019 informs the system about a new chemical compound which is harmless to one\u2019s health and suppresses ovulation.<br>\n<\/em><\/p>\n<p><em>Then the \u2018black box\u2019 decides to inject a microscopic dose of the compound into the potable water system across a certain country, but encounters a dilemma: whether to inform society and risk facing opposition, or to keep society unaware and thus preserve the existing balance (for the greater good).\u201d<br>\n<\/em><\/p>\n<p><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2015\/06\/06024721\/abyss.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-9024\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/102\/2015\/06\/06024721\/abyss.png\" alt=\"Should we be afraid of artificial intelligence?\" width=\"1280\" height=\"853\"><\/a><\/p>\n<p>As we see here, a very innocent optimization issue is given an elegant, simple and efficient, yet absolutely unethical solution, one based on the intentional limitation of people\u2019s fertility without their consent.<\/p>\n<p>I think it is exactly the infrastructure management domain which will be delegated to a powerful AI-based system, as the cost\/profit ratio in such spheres would be much more favorable than in the case of a fully robotic secretary or cook.<\/p>\n<h3>Teaching ethics to a robot: how to integrate the \u2018safety lock\u2019?<\/h3>\n<p>Any 20th-century youngster will immediately recall <a 
href=\"http:\/\/en.wikipedia.org\/wiki\/Three_Laws_of_Robotics\" target=\"_blank\" rel=\"noopener nofollow\">Isaac Azimov\u2019s three laws of robotics<\/a>, but, it\u2019s not enough. As proven by the example above, it is not necessary to harm anyone to significantly cut down the population (and remember, it could be done for the greater good).<\/p>\n<p>There are many other options potentially detrimental for humanity. It is possible to find a loophole in the terms defining the concept of \u2018harm\u2019, to delegate the very task of harming to people, or to undermine the existence of the rules itself.<\/p>\n<p>The \u2018friendliness towards people\u2019 itself may be reconsidered by a smart AI. Here is the opinion Roman Yampolsky, an AI expert, has on the issue, <a href=\"https:\/\/intelligence.org\/2013\/07\/15\/roman-interview\/\" target=\"_blank\" rel=\"noopener nofollow\">as cited in his interview<\/a>:<\/p>\n<p><em>\u201cWorse yet, any truly intelligent system will treat its \u201cbe friendly\u201d desire the same way very smart people deal with constraints placed in their minds by society. They basically see them as biases and learn to remove them\u2026 Why would a superintelligent machine not go through the same \u201cmental cleaning\u201d and treat its soft spot for humans as completely irrational?\u201d<br>\n<\/em><\/p>\n<p>A technical conceptualization of the \u2018safety lock\u2019 is quite realistic. In essence, the \u2018safety locks\u2019, which are necessary to tame the AI, are none other than \u2018sandboxes\u2019, widely used for security in modern runtime environments like Java or Flash.<\/p>\n<p>It is widely acknowledged that there is no \u2018ideal\u2019 sandbox and escape from the sandbox is quite possible, <a href=\"https:\/\/www.kaspersky.com.au\/blog\/venom-virtualization-vulnerability\/\" target=\"_blank\" rel=\"noopener\">as the recent story with the Venom bug has shown<\/a>. 
An AI, which relies on flexibility and immense computing power, is a good candidate for the role of a security tester looking for vulnerabilities in its own sandbox deployment.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">Everything you need to know about the VENOM vulnerability \u2013 <a href=\"http:\/\/t.co\/L4rIzncffx\" target=\"_blank\" rel=\"noopener nofollow\">http:\/\/t.co\/L4rIzncffx<\/a><\/p>\n<p>\u2014 Kaspersky (@kaspersky) <a href=\"https:\/\/twitter.com\/kaspersky\/status\/602154615094779905?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">May 23, 2015<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Andrey Lavrentiev, the Head of the Technology Research Department, sees the problem as follows:<\/p>\n<p><em>\u201cA full-fledged AI system will understand the meaning of everything it \u2018sees\u2019 through its numerous sensors. The limitation policies for its actions should be defined in accordance with that concept, or with the images shaped in the AI\u2019s \u2018brain\u2019\u201d.<br>\n<\/em><\/p>\n<div class=\"pullquote\">Today, machines are better at image recognition than humans, but they still lose to humanity when it comes to manipulating those images or relations<\/div>\n<p><em>\u201cToday, machines are better at image recognition than humans, but they still lose to humans when it comes to manipulating those images or relations; i.e., modern AI does not have \u2018common sense\u2019. 
As soon as this changes, and machines pass this last outpost and learn to manipulate perceived objects and actions, there won\u2019t be an opportunity to integrate any \u2018safety locks\u2019 anymore.\u201d<br>\n<\/em><\/p>\n<p><em>\u201cSuch a supreme intelligence would be able to analyze dependencies in perceived data much faster than a human ever could, and would then find a way to bypass the rules and limitations imposed by humans and start acting of its own accord.\u201d<br>\n<\/em><\/p>\n<p>A meaningful limitation, designed to prevent an AI from doing something harmful, would be to effectively isolate the AI from the real world, depriving it of any chance to manipulate physical objects. With this approach, however, the practical use of the AI is close to zero. Curiously, such an approach would ultimately be good for nothing, as the AI\u2019s main weapon would be\u2026 us, people.<\/p>\n<p>This possibility is depicted in <a href=\"http:\/\/www.imdb.com\/title\/tt0470752\/\" target=\"_blank\" rel=\"noopener nofollow\">the recent Ex Machina sci-fi thriller<\/a>. Like any typical Hollywood product, this movie is stuffed with forced arguments and dramatically exaggerates the problem. Yet the core of the problem is defined surprisingly accurately.<\/p>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/sNExF5WYMaA?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<p>First, even primitive robots are capable of influencing a person\u2019s emotional state. 
<a href=\"http:\/\/en.wikipedia.org\/wiki\/ELIZA\" target=\"_blank\" rel=\"noopener nofollow\">An obsolete and easily programmed ELISA chat bot<\/a> (should you want to speak to it, <a href=\"http:\/\/www.masswerk.at\/elizabot\/\" target=\"_blank\" rel=\"noopener nofollow\">click here<\/a>) was able to exfiltrate <a href=\"http:\/\/en.wikipedia.org\/wiki\/ELIZA_effect#Origin\" target=\"_blank\" rel=\"noopener nofollow\">important personal information from the human interlocutors<\/a>, armed only with <a href=\"https:\/\/en.wikipedia.org\/wiki\/Empathy\" target=\"_blank\" rel=\"noopener nofollow\">empathy<\/a> and polite questions.<\/p>\n<p>Second, we increasingly rely on robotized algorithms to filter and categorize information. Someone managing these data flows, as proven by <a href=\"http:\/\/www.theguardian.com\/technology\/2014\/jun\/29\/facebook-users-emotions-news-feeds\" target=\"_blank\" rel=\"noopener nofollow\">a recent controversial Facebook experiment<\/a>, may influence people\u2019s emotional environment and their decision making tendencies.<\/p>\n<p>Even if we suggest that in the aforementioned example the AI is governing a city or a country indirectly and performing solely counselling functions, the AI is still capable of advising a solution which would turn out to be unethical in the long run. The consequences of the decision, in this respect, are known to the AI but not to living people.<\/p>\n<p>In a private life this influence might emerge really fast and be even more impactful. During <a href=\"https:\/\/www.kaspersky.com.au\/blog\/google-io2015-news\/\" target=\"_blank\" rel=\"noopener\">the recent Google I\/O conference<\/a> the new Now on Tap system was presented. 
It watches all apps on the user\u2019s smartphone, extracts contextual data, and makes it available for online search.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">Google's latest Android update brings some much needed privacy strengthening <a href=\"https:\/\/twitter.com\/hashtag\/io15?src=hash&amp;ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">#io15<\/a> \u2013 <a href=\"http:\/\/t.co\/XPdvEUioPP\" target=\"_blank\" rel=\"noopener nofollow\">http:\/\/t.co\/XPdvEUioPP<\/a> <a href=\"http:\/\/t.co\/aWcCY8Ncjw\" target=\"_blank\" rel=\"noopener nofollow\">pic.twitter.com\/aWcCY8Ncjw<\/a><\/p>\n<p>\u2014 Kaspersky (@kaspersky) <a href=\"https:\/\/twitter.com\/kaspersky\/status\/605304070480502784?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">June 1, 2015<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>For instance, if you read an article about some musician in the Wikipedia app and ask Google \u201cWhen is his concert?\u201d, the robot would immediately know who exactly is referred to as \u2018him\u2019. A helpful robot already reminds us it is time to go to the airport, as the flight is scheduled in a couple of hours, proving to be a resourceful and savvy personal assistant.<\/p>\n<p>Of course, it is not a full-fledged AI that takes care of these assistance tasks \u2013 it is merely a self-learning expert system designed to perform a narrow selection of tasks. Its behavior is fully pre-defined by people and thus predictable.<\/p>\n<p>However, the evolution of computing might make this simple robot a lot more sophisticated. It is critical to ensure it manipulates the available information solely for the user\u2019s good and does not follow its own hidden agenda.<\/p>\n<p>That\u2019s the problem which occupies many of the brightest minds of our time, from Stephen Hawking to Elon Musk. 
The latter can hardly be considered a conservative thinker afraid of, or opposed to, progress. Quite the contrary, the man behind Tesla and SpaceX is eagerly looking into the future. However, he sees the evolution of AI as one of the most controversial trends, with consequences still unforeseeable and potentially catastrophic. That is why <a href=\"http:\/\/www.wired.com\/2015\/01\/elon-musk-ai-safety\/\" target=\"_blank\" rel=\"noopener nofollow\">earlier this year he invested $10 million in AI research<\/a>.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">Elon Musk donates $10 million to keep robots from murdering you <a href=\"http:\/\/t.co\/8UHqzaRmdo\" target=\"_blank\" rel=\"noopener nofollow\">http:\/\/t.co\/8UHqzaRmdo<\/a> <a href=\"http:\/\/t.co\/vkldwucgkX\" target=\"_blank\" rel=\"noopener nofollow\">pic.twitter.com\/vkldwucgkX<\/a><\/p>\n<p>\u2014 The Verge (@verge) <a href=\"https:\/\/twitter.com\/verge\/status\/555751471103631360?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener nofollow\">January 15, 2015<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h3>With that said, what awaits us in the future?<\/h3>\n<p>As strange as it seems, one of the most feasible scenarios, which experts consider overly optimistic, is the ultimate impossibility of creating a full-fledged AI. Without a significant technological breakthrough (which is nowhere yet in sight), robots will just continue updating and improving their existing set of skills.<\/p>\n<p>While they are learning simple things like <a href=\"https:\/\/www.kaspersky.com.au\/blog\/driverless-cars-perspective\/\" target=\"_blank\" rel=\"noopener\">driving a car<\/a> or speaking native languages, they are not able to substitute for a human in making autonomous decisions. 
In the short term, AI is likely to create some \u2018collateral damage\u2019, such as eliminating taxi driving as an occupation, but it won\u2019t be considered a global threat to humanity.<\/p>\n<blockquote class=\"twitter-pullquote\"><p>Should we be afraid of artificial intelligence?<\/p><a href=\"https:\/\/twitter.com\/share?url=https%3A%2F%2Fkas.pr%2FV5SV&amp;text=Should+we+be+afraid+of+artificial+intelligence%3F\" class=\"btn btn-twhite\" data-lang=\"en\" data-count=\"0\" target=\"_blank\" rel=\"noopener nofollow\">Tweet<\/a><\/blockquote>\n<p>Andrey Lavrentiev suggests that a conflict between the AI and humanity is possible under only one condition: the need to share the same resources.<\/p>\n<p><em>\u201cA human has a body and is interested in creating favorable conditions for its convenience (and the convenience of the mind as well). With the AI, the situation is quite the opposite: it initially exists only in the digital world\u201d.<br>\n<\/em><\/p>\n<p><em>\u201cThe AI\u2019s key objective and motivation is to fully process the information supplied through its external channels, or \u2018sensory organs\u2019, assess it, and identify the principles of its change\u201d.<br>\n<\/em><\/p>\n<p><em>\u201cOf course, the AI also relies on some material foundations, but its dependence on the \u2018shell\u2019 is much weaker than in the case of a human. The AI, unlike a human, won\u2019t be so focused on preserving its \u2018shell\u2019 (or \u2018body\u2019), as the AI would be, in fact, \u2018everywhere\u2019. The organic extension of the AI\u2019s reach in its search for new information would be space exploration and the study of the Universe\u2019s laws, so it could disseminate itself beyond Earth\u201d.<br>\n<\/em><\/p>\n<p><em>\u201cHowever, even in this scenario, there are certain pitfalls. 
Once this superintelligence sees humanity or the universe as an imperfection in its digital model, it will try to eliminate either of them in order to reach harmony. Or, possibly, it will need the resources consumed by humans in order to \u2018explore space\u2019, making the old \u2018AI vs. humanity\u2019 conflict relevant again\u201d.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>What do billionaire inventor Elon Musk, the Google Now on Tap service, and the recent &#8220;Ex Machina&#8221; movie have in common? They are all about artificial intelligence.<\/p>\n","protected":false},"author":32,"featured_media":9026,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[5],"tags":[1140,960,1139,1141,880,22,1123,97],"class_list":{"0":"post-9021","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-news","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-elon-musk","11":"tag-ex-machina","12":"tag-future","13":"tag-google","14":"tag-now-on-tap","15":"tag-security-2"},"hreflang":[{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/artificial-intelligence-safety\/9021\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/artificial-intelligence-safety\/5436\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/artificial-intelligence-safety\/5871\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/artificial-intelligence-safety\/5634\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/artificial-intelligence-safety\/6225\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/artificial-intelligence-safety\/6195\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/artificial-intelligence-safety\/8077\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/artificial-intelligence-safety\/9021\/"},{"href
lang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/artificial-intelligence-safety\/5423\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/artificial-intelligence-safety\/5562\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/artificial-intelligence-safety\/7918\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/artificial-intelligence-safety\/8077\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/artificial-intelligence-safety\/9021\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com.au\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/9021","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/comments?post=9021"}],"version-history":[{"count":1,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/9021\/revisions"}],"predecessor-version":[{"id":18602,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/posts\/9021\/revisions\/18602"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/media\/9026"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/media?parent=9021"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/categories?post=9021"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com.au\/blog\/wp-json\/wp\/v2\/tags?post=9021"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}