Biblio

Filters: Keyword is Journalism
Shere, A. R. K., Nurse, J. R. C., Flechais, I.  2020.  "Security should be there by default": Investigating how journalists perceive and respond to risks from the Internet of Things. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :240–249.
Journalists have long been the targets of both physical and cyber-attacks from well-resourced adversaries. Internet of Things (IoT) devices are arguably a new avenue of threat towards journalists through both targeted and generalised cyber-physical exploitation. This study comprises three parts. First, we interviewed 11 journalists and surveyed 5 more to determine the extent to which journalists perceive threats through the IoT, particularly via consumer IoT devices. Second, we surveyed 34 cyber security experts to establish if and how laypeople can combat IoT threats. Third, we compared these findings to assess journalists' knowledge of threats, and whether their protective mechanisms would be effective against experts' depictions and predictions of IoT threats. Our results indicate that journalists are generally unaware of IoT-related risks and do not adequately protect themselves, whether they own IoT devices or merely enter IoT-enabled environments (e.g., at work or home). Expert recommendations spanned both immediate and long-term mitigation methods, including practical actions that are technical and socio-political in nature. However, all proposed individual mitigation methods are likely to be short-term solutions: 26 of 34 (76.5%) cyber security experts responded that within the next five years it will not be possible for the public to opt out of interaction with the IoT.
Aliman, N.-M., Kester, L.  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130–137.
Advancements in the AI field unfold tremendous opportunities for society, and it is becoming increasingly important to address their emerging ramifications. The focus is often on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instances of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has emerged regarding security and safety for virtual reality (VR). While the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. As a simplified illustration, this paper analyzes a conceivable use case: generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.