Keika Mori (Deloitte Tohmatsu Cyber LLC, Waseda University), Daiki Ito (Deloitte Tohmatsu Cyber LLC), Takumi Fukunaga (Deloitte Tohmatsu Cyber LLC), Takuya Watanabe (Deloitte Tohmatsu Cyber LLC), Yuta Takata (Deloitte Tohmatsu Cyber LLC), Masaki Kamizono (Deloitte Tohmatsu Cyber LLC), Tatsuya Mori (Waseda University, NICT, RIKEN AIP)

Companies publish privacy policies to improve transparency regarding the handling of personal information. A discrepancy between the description in a privacy policy and the user's understanding of it risks eroding trust. Therefore, when creating a privacy policy, the user's understanding of it should be evaluated. However, periodically evaluating privacy policies through user studies takes time and incurs financial costs. In this study, we investigated how well large language models (LLMs) understand privacy policies and the gaps between their understanding and that of users, as a first step towards replacing user studies with LLM-based evaluation. We prepared obfuscated privacy policies along with questions to measure the comprehension of LLMs and users. Comparing the two, the average correct answer rates were 85.2% for LLMs and 63.0% for users. The questions that LLMs answered incorrectly were also answered incorrectly by users, indicating that LLMs can detect descriptions that users tend to misunderstand. By contrast, LLMs understood the technical terms used in privacy policies, whereas users did not. The identified gaps in comprehension between LLMs and users provide insights into the potential of automating privacy policy evaluations using LLMs.
