I advocate for including the human factor in the design of computer and AI technologies. Using empirical methods, both qualitative and quantitative, I inform the design of future technologies. Since 2018, I have studied privacy and security technologies aimed at software developers, with the goal of evaluating, designing, and building tools that help developers create privacy-friendly and secure systems. Recently, I have started exploring how to make AI technologies that respect individuals, societies, and our future.
Ph.D. in Informatics, 2021
University of Edinburgh
M.Sc. in Computer Science, 2017
University of Bonn
M.Sc. in Information Technology, 2013
University of Tehran
While the literature on permissions from the end-user perspective is rich, there is little empirical research on why developers request permissions, how they conceptualize permissions, and how their perspectives compare with end-users’. Our study addresses these gaps using a mixed-methods approach.
Through interviews with 19 app developers and a survey of 309 Android and iOS end-users, we found that both groups shared similar concerns about unnecessary permissions breaking trust, damaging the app’s reputation, and potentially allowing access to sensitive data. We also found that developer participants sometimes requested multiple permissions because they were confused about the scope of certain permissions or because third-party libraries required them. Additionally, most end-user participants believed that granting a permission request was their responsibility and their choice, a belief shared by many developer participants. Our findings have implications for improving the permission ecosystem for both developers and end-users.
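As a minimal illustration of the scope nuances developers wrestle with, the Kotlin sketch below requests location permissions at runtime on Android; since Android 12, precise location must be requested together with the coarse scope, and the user may grant only the latter. This is my own example, not taken from the study, and the class and helper names are hypothetical.

```kotlin
import android.Manifest
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class LocationActivity : AppCompatActivity() {

    // The launcher receives a map of permission -> granted?
    private val locationPermissionLauncher = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { grants ->
        when {
            grants[Manifest.permission.ACCESS_FINE_LOCATION] == true ->
                startPreciseLocationUpdates()
            grants[Manifest.permission.ACCESS_COARSE_LOCATION] == true ->
                startApproximateLocationUpdates() // user granted only the coarse scope
            else ->
                degradeGracefully() // avoid re-prompting; work without location
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Fine location must be requested together with coarse location;
        // the system dialog then lets the user pick the narrower scope.
        locationPermissionLauncher.launch(
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.ACCESS_COARSE_LOCATION
            )
        )
    }

    private fun startPreciseLocationUpdates() { /* ... */ }
    private fun startApproximateLocationUpdates() { /* ... */ }
    private fun degradeGracefully() { /* ... */ }
}
```

Handling the partial grant explicitly, as above, keeps the app functional with the narrower scope instead of repeatedly prompting the user.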
To make privacy a first-class citizen in software, we argue for equipping developers with usable tools as well as providing support from organizations, educators, and regulators. We discuss challenges and propose solutions for stakeholders to help developers perform privacy-related tasks.
Health data is considered sensitive and personal; both governments and software platforms have enacted specific measures to protect it. Consumer apps that collect health data are becoming more popular but raise new privacy concerns when they collect unnecessary data, share it with third parties, and track users. However, developers of these apps are not necessarily knowingly endangering users’ privacy; some may simply face challenges working with health features.
To scope these challenges, we qualitatively analyzed 269 privacy-related posts on Stack Overflow by developers of health apps for Android- and iOS-based systems. We found that health-specific access control structures (e.g., enhanced requirements for permissions and authentication) underlie several of the privacy-related challenges developers face. The specific nature of the problems often differed between the platforms: for example, additional verification steps for Android developers, or confusing feedback about incorrectly formulated permission scopes for iOS. Developers also faced problems introduced by third-party libraries. Official documentation plays a key part in understanding privacy requirements but, in some cases, may itself cause confusion.
We discuss the implications of our findings and propose ways to improve developers’ experience of working with health-related features and, consequently, the privacy of their apps’ end users.
Mobile apps enable ad networks to collect data about and track users. App developers are given “configurations” on these platforms to limit data collection and adhere to privacy regulations; however, the prevalence of apps that violate privacy regulations because of third parties, including ad networks, raises the question of how developers work through these configurations and how easy they are to use. We study privacy regulation-related interfaces on three widely used ad networks using two empirical studies, a systematic review and think-aloud sessions with eleven developers, to shed light on how ad networks present privacy regulations and how usable the provided configurations are for developers.
We find that information about privacy regulations is scattered across several pages, buried under multiple layers, and written in terms and language developers do not understand. While ad networks put the burden of complying with regulations on developers, our participants see ad networks as responsible for ensuring compliance. To assist developers in building apps that comply with privacy regulations, we suggest dedicating a documentation section to privacy, offering easily accessible configurations (both graphical and in code), building testing systems for privacy regulations, and creating multimedia materials such as videos to promote privacy values in the ad networks’ documentation.
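To make the code-level side of such configurations concrete: with Google’s Mobile Ads SDK, for instance, serving non-personalized ads is controlled per ad request through the documented `npa` extra, as in the Kotlin sketch below. This is my own illustration, not part of the study materials.

```kotlin
import android.os.Bundle
import com.google.ads.mediation.admob.AdMobAdapter
import com.google.android.gms.ads.AdRequest

// Request non-personalized ads from AdMob via the documented "npa" extra.
// If the extra is omitted, the request defaults to personalized ads.
fun buildNonPersonalizedAdRequest(): AdRequest {
    val extras = Bundle().apply { putString("npa", "1") }
    return AdRequest.Builder()
        .addNetworkExtrasBundle(AdMobAdapter::class.java, extras)
        .build()
}
```

Note that the privacy-friendlier option requires explicit developer action on every request; the default, when the flag is omitted, is personalized ads.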
Privacy tasks can be challenging for developers, which has led the research community to produce privacy frameworks and guidelines designed to help developers consider privacy features and apply privacy-enhancing technologies in the early stages of software development. However, how developers engage with privacy design strategies is not yet well understood. In this work, we look at the types of privacy-related advice developers give each other and how that advice maps to Hoepman’s privacy design strategies.
We qualitatively analyzed 119 privacy-related accepted answers on Stack Overflow from the past five years and extracted 148 pieces of advice from them. We find that the advice mostly concerns compliance with regulations and ensuring confidentiality, with a focus on the inform, hide, control, and minimize strategies from Hoepman’s framework. The other strategies (abstract, separate, enforce, and demonstrate) are rarely advised. Answers often include links to official documentation and online articles, highlighting the value of both official documentation and informal materials such as blog posts. We make recommendations for promoting the underrepresented strategies through tools and detail the importance of providing better developer support for handling third-party data practices.
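To make one of the prominent strategies concrete, the sketch below shows a typical Android realisation of the hide strategy (keeping locally stored data encrypted at rest) using the AndroidX security-crypto library. This is my own illustration rather than advice quoted from the analysed answers, and the preference file name is arbitrary.

```kotlin
import android.content.Context
import android.content.SharedPreferences
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// "Hide" strategy: encrypt locally stored user data at rest instead of
// writing it to plain-text SharedPreferences.
fun encryptedPrefs(context: Context): SharedPreferences =
    EncryptedSharedPreferences.create(
        context,
        "user_secrets", // arbitrary file name
        MasterKey.Builder(context)
            .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
            .build(),
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

// Usage: values are transparently encrypted before hitting disk, e.g.
// encryptedPrefs(context).edit().putString("auth_token", token).apply()
```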
Reliably recruiting participants with programming skills is an ongoing challenge for empirical studies involving software development technologies, often leading to the use of crowdsourcing platforms and computer science (CS) students.
In this work, we use five existing survey instruments to explore the programming skills, privacy and security attitudes, and secure development self-efficacy of participants from a CS student mailing list and four crowdsourcing platforms (Appen, Clickworker, MTurk, and Prolific). We recruited 613 participants who claimed to have programming skills and assessed the recruitment channels in terms of cost, data quality, programming skill, and privacy and security attitudes.
Advertising networks enable developers to generate revenue, but using them potentially impacts user privacy and requires developers to make legal decisions. To understand what privacy information ad networks give developers, we conducted a walkthrough of four popular ad networks’ guidance pages with a senior Android developer, examining the privacy-related information presented to developers.
We found that the information focuses on complying with legal regulations and puts the responsibility for such decisions on the developer. Also, sample code and settings often have privacy-unfriendly defaults, laced with dark patterns that nudge developers towards options such as sharing sensitive data to increase revenue. We conclude by discussing future research around empowering developers and minimising the negative impacts of dark patterns.
Mobile advertising networks present personalized advertisements to developers as a way to increase revenue. These ads use data about users to select potentially more relevant content, but the framing of the choice also impacts developers’ decisions, which in turn affect their users’ privacy. Currently, ad networks provide choices in developer-facing dashboards that control the types of information collected by the ad network as well as how users will be asked for consent. Framing and nudging have been shown to impact users’ choices about privacy; we anticipate that they have a similar impact on choices made by developers. We conducted a survey-based online experiment with 400 participants with experience in mobile app development.
Across six conditions, we varied the choice framing of options around ad personalisation. Participants in the condition where the privacy consequences of ad personalisation were highlighted in the options were significantly (11.06 times) more likely to choose non-personalized ads than participants in the Control condition, which contained no information about privacy. Participants’ choices of an ad type were driven by the impact on revenue, user privacy, and relevance to users. Our findings suggest that developers’ choices are shaped by interface framing and that they need transparent options.
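For readers parsing the “11.06 times” figure: assuming, as is common for such condition comparisons, that it is an odds ratio from a regression over the experimental conditions, it compares odds rather than raw probabilities:

$$\mathrm{OR} = \frac{p_{\text{privacy}}/(1-p_{\text{privacy}})}{p_{\text{control}}/(1-p_{\text{control}})} \approx 11.06$$

where \(p_{\text{privacy}}\) and \(p_{\text{control}}\) denote the probability of choosing non-personalized ads in the privacy-highlighting and Control conditions, respectively (the symbols are mine, introduced only for this illustration).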
Software development teams are responsible for making and implementing software design decisions that directly impact end-user privacy, a challenging task to do well. Privacy Champions—people who strongly care about advocating privacy—play a useful role in supporting privacy-respecting development cultures. To understand their motivations, challenges, and strategies for protecting end-user privacy, we conducted 12 interviews with Privacy Champions in software development teams.
We find that common barriers to implementing privacy in software design include: negative privacy culture, internal prioritisation tensions, limited tool support, unclear evaluation metrics, and technical complexity. To promote privacy, Privacy Champions regularly use informal discussions, management support, communication among stakeholders, and documentation and guidelines. They perceive code reviews and practical training as more instructive than general privacy awareness and on-boarding training. Our study is a first step towards understanding how Privacy Champions work to improve their organisation’s privacy approaches and improve the privacy of end-user products.
Static analysis tools (SATs) have the potential to assist developers in finding and fixing vulnerabilities in the early stages of software development, provided developers can understand and act on the tools’ notifications. To understand how helpful such SAT guidance is to developers, we ran an online experiment (N=132) in which participants were shown four vulnerable code samples (SQL injection, hard-coded credentials, encryption, and logging sensitive data) along with SAT guidance and were asked to indicate the appropriate fix.
Participants had a positive attitude towards both SATs’ notifications and particularly liked the example solutions and vulnerable code samples. Seeing SAT notifications also led to more detailed open-ended answers and slightly improved code-correction answers. Still, most SAT (SpotBugs 67%, SonarQube 86%) and Control (96%) participants answered at least one code-correction question incorrectly. Prior software development experience, perceived vulnerability severity, and answer confidence all positively impacted answer accuracy.
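To ground the first of those vulnerability classes, the Kotlin/JDBC sketch below contrasts an injectable query with the parameterized fix that tools such as SpotBugs and SonarQube typically point towards; it is my own illustration, not one of the study’s code samples.

```kotlin
import java.sql.Connection
import java.sql.ResultSet

// Vulnerable: user input is concatenated into the SQL string, so an input
// like "x' OR '1'='1" changes the query's meaning (SQL injection).
fun findUserUnsafe(conn: Connection, name: String): ResultSet =
    conn.createStatement()
        .executeQuery("SELECT * FROM users WHERE name = '$name'")

// The usual fix: a parameterized query, where the driver treats the input
// strictly as data, never as executable SQL.
fun findUserSafe(conn: Connection, name: String): ResultSet =
    conn.prepareStatement("SELECT * FROM users WHERE name = ?")
        .apply { setString(1, name) }
        .executeQuery()
```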
Computer programs operate and control our personal devices, cars, and infrastructure. These programs are written by software developers who use tools, software development platforms, and online resources to build systems used by billions of people. As we move towards societies that rely on computer programs, the need for private and secure systems increases. Developers, the workforce behind the data economy, impact these systems’ privacy and, consequently, their users and society. Therefore, understanding the developer factor in software privacy provides invaluable input to software companies, regulators, and tool builders.
This thesis includes six research papers that look at the developer factor in software privacy. We find that developers impact software privacy and are also influenced by external entities such as tools, platforms, academia, and regulators. For example, changes in regulations create challenges and hurdles for developers, such as creating privacy policies, managing permissions, and keeping user data private and secure. Developers’ interactions with tools and software development platforms shape their understanding of what privacy means, such as consent and access control. The presentation of privacy information and options on platforms also heavily impacts developers’ decisions about their users’ privacy, and platforms may sometimes nudge developers into sharing more of their users’ data through dark patterns in their designs.
Other places developers learn about privacy include universities, though they may not learn how to include privacy in software. Some organisations are making efforts to champion privacy as a concept inside development teams, and we find that this direction shows promise, as it gives developers direct access to a champion who cares about privacy. However, we also find that their organisation or the wider community may not always support these privacy champions. Privacy champions face an uphill battle to counter many of the same privacy misconceptions seen in the general population, such as the ‘I’ve got nothing to hide’ attitude.
Overall, I find that research in developer-centred privacy is improving and that many of the approaches tried show promise. However, future work is still needed to understand how to best present privacy concepts to developers in ways that support their existing workflows.
Payment cultures around the globe are diverse and have significant implications for security, privacy, and trust. We study usable security aspects of payment cultures in four culturally distinct societies. Based on a qualitative study in Germany and Iran, we developed an online survey and deployed it in Germany, Iran, China, and the United States.
The results reveal significant differences between the studied countries. For example, we found that participants from Iran and China were more comfortable with credential sharing, and German participants were the most accepting of cryptocurrencies. We suggest that these kinds of differences in payment culture need to be considered in HCI research when evaluating current payment mechanisms or designing new ones.
We analyse Stack Overflow (SO) to understand the challenges and confusion developers face when dealing with privacy-related topics. We apply topic modelling techniques to 1,733 privacy-related questions to identify topics and then qualitatively analyse a random sample of 315 privacy-related questions.
Identified topics include privacy policies, privacy concerns, access control, and version changes. The results show that developers do ask SO for support on privacy-related issues. We also find that platforms such as Apple and Google are defining privacy requirements for developers by specifying what “sensitive” information is and what types of information developers need to communicate to users (e.g., privacy policies). We also examine the accepted answers in our sample and find that 28% of them link to official documentation and more than half are answered by SO users without references to any external resources.
The security attitudes and approaches of software developers have a large impact on the software they produce, yet we know very little about how and when these views are constructed. This paper investigates the security and privacy (S&P) perceptions, experiences, and practices of current Computer Science students at the graduate and undergraduate level using semi-structured interviews.
We find that the attitudes of students already match many of those observed in professional developers. Students have a range of hacker and attack mindsets, a lack of experience with security APIs, a mixed view of who is in charge of S&P in the software life cycle, and a tendency to trust other people’s code as a convenient way to rapidly build software. We discuss the impact of our results on both curriculum development and support for professional developers.
Software developers are key players in the security ecosystem as they produce code that runs on millions of devices. Yet we continue to see insecure code being developed and deployed on a regular basis despite the existence of support infrastructures, tools, and research into common errors. This work provides a systematised overview of the relatively new field of Developer-Centred Security, which aims to understand the context in which developers produce security-relevant code and to provide tools and processes that better support both developers and secure code production.
We report here on a systematic literature review of 49 publications on security studies with software developer participants. We provide an overview of both the methodologies currently in use and the current research in the area. Finally, we provide recommendations for future work in Developer-Centred Security.
In my free time, I enjoy walking in nature, travelling to new places, watching movies, and hanging out with family and friends. I spent about four years working as a software engineer in industry before starting my research career (see my CV).
Informal chats: if you’re interested in my research and would like to chat, I’m happy to have a 20-minute video call about anything related to my work. I can often squeeze in a call within 5-7 days of your email. Topics can include, but are not limited to, what I do, future research avenues, possible collaborations, my experience in academia/industry, and doing a degree in my research area (I don’t offer any positions). I speak Farsi and English :)