An uptick in corporate espionage, security as friction, and the continuing effort to deceive our robot overlords:
- China is circling back around for another pass: One of the great advancements in cybersecurity in 2015 was the agreement between the U.S. and China to stop cyber-enabled espionage for commercial purposes. The agreement was accompanied by a substantial drop in commercially motivated nation-state hacking, and helped the U.S. cybersecurity conversation expand beyond the then-overwhelming challenge of intellectual property theft. Two years on, we're starting to see reports that the agreement is losing some of its force. If that trend is borne out, it would be extremely troubling. Where nation-state hacking is concerned, consistent diplomatic pressure is one of the few tools we've seen produce a meaningful deterrent effect. If that control is weakening, that's one fewer arrow in the quiver, and the remaining deterrents need to be carefully reassessed. If anything, this reinforces how important it is for organizations of every stripe to take the security of their institutions, processes, and intellectual property seriously.
I'm reading: "Chinese hackers starting to return focus to U.S. corporations."
- They hid a microphone where?: We often forget that the point of security isn't to build impenetrable walls – there is no such thing. It's to put enough friction in the way of intruders that you have time to find and catch them before they reach something truly dangerous. The physical security community has learned this lesson through years of painful education, and this tweet was the perfect reminder of what those lessons look like. If you can fit a microphone there, at that size, nowhere is safe. The same goes for digital surveillance – security on the network is about friction, not walls.
I'm reading: "Microphone embedded in RJ45 plug. Where is your god now?"
- What happens when you mix AI and Magritte's The Treachery of Images ...: The story of technology over the last decade has been the discovery of new applications (and new unforeseen consequences) with greater and greater physical-world implications. Teaching AI systems to identify objects on sight is the newest frontier in this exercise, because sight is an essential component of many autonomous functions. Unfortunately, it's also the subject of many creative opportunities for nefarious activity. Recently, researchers proved that they could easily "perturb" an image of a cat so that an AI system would identify it instead as guacamole. Sound silly? In another example, researchers tricked an AI system into misidentifying a turtle as a rifle. Every day we build more and more autonomous systems that rely on sight. Of particular concern, the researchers here tricked the systems with 3D-printed objects – they didn't need to access or manipulate the code itself. At some point in the sci-fi future, we may need tricks like this to hide from our robot overlords, but in the meantime, imagine the havoc that could be wreaked by a malicious hacker who figured out how to manipulate the results of an autonomous weapons system, or a traffic camera.
I'm reading: "Fooling Neural Networks in the Physical World with 3D Adversarial Objects."
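For readers curious how a "perturbation" attack works mechanically, here is a toy sketch in pure Python. It uses a made-up linear classifier with random weights, not the deep networks (or the 3D-printing pipeline) from the cited research: the point is only to show that nudging every input "pixel" by a tiny, bounded amount in the right direction can flip the model's answer.

```python
import random

random.seed(0)

# A toy "classifier": 3 classes, each scored by a random linear weight row
# over a 64-feature flattened "image". Purely illustrative weights.
n_features, n_classes = 64, 3
W = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_classes)]

def predict(x):
    """Return the class whose weight row gives the highest score."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    return scores.index(max(scores))

x = [random.gauss(0, 1) for _ in range(n_features)]  # the original "image"
original_label = predict(x)
target_label = (original_label + 1) % n_classes      # any other class will do

# For a linear model, the gradient of (target score - original score)
# with respect to x is just the difference of the two weight rows.
grad = [wt - wo for wt, wo in zip(W[target_label], W[original_label])]

# Fast-gradient-style attack: nudge every feature a small fixed step in the
# sign of the gradient until the classifier changes its mind.
epsilon = 0.05
x_adv = list(x)
while predict(x_adv) == original_label:
    x_adv = [xi + epsilon * (1 if g > 0 else -1 if g < 0 else 0)
             for xi, g in zip(x_adv, grad)]

print("original label:", original_label)
print("adversarial label:", predict(x_adv))
print("max per-feature change:", max(abs(a - b) for a, b in zip(x_adv, x)))
```

Real attacks on vision models follow the same recipe, just with gradients computed through a deep network – which is why the 3D-printed-object result is so alarming: the perturbation survives being photographed from many angles in the physical world.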