Author of BIG DATA, BIG DESIGN, Helen Armstrong is a professor and Director of the Master of Graphic & Experience Design program at NC State University, as well as a researcher at the forefront of digital rights, human-machine collaboration, and inclusivity in design.
By:
Gregor Mittersinker
November 29, 2023
Helen Armstrong is a researcher and author at the forefront of digital rights, human-machine collaboration, and inclusivity in design. She holds an MFA in Graphic Design from the Maryland Institute College of Art (MICA) and an MA in English Literature from the University of Mississippi. Through collaborative research, she has engaged with many for-profit and nonprofit organizations, including IBM, Red Hat, REI, Advance Auto Parts, SAS Analytics, Sealed Air, and the Laboratory for Analytic Sciences. As the mother of a child with disabilities and a staunch advocate, she is deeply committed to creating interfaces and experiences that champion inclusivity and accessibility.
We caught up with Helen via Zoom between classes at NC State.
Loft: You have been exploring UX and AI systems for years. How do you approach the design of human-AI interfaces to build user trust, given that AI systems can sometimes hallucinate? And how can we mitigate not only harm within the immediate application but also its broader societal impacts?
Helen Armstrong: That's a great question and one that is very relevant to designers right now. I really love it because it highlights the significant role we can play in creating systems that users find trustworthy. Let me step back for a moment and identify two challenges that we, as designers, can address. First, humans tend to over-trust these systems. We tend to give autonomous systems a lot of authority, partly because these systems are fairly invisible. Examples include someone following a GPS system into a lake or having their tastes swayed by recommendation systems. When we over-trust these systems, we don't question their biases or flaws. For instance, if someone is hired after an AI job interview, they're likely to praise the system without considering any potential biases. The flip side is that when we disagree with an AI's predictions or decisions, trust can quickly erode because, in most cases today, we lack the means to challenge or question the system. Returning to the AI interview example: if a candidate is rejected, they often have no recourse and no way to understand why, or what factors influenced the decision. Currently, we are not designing interfaces that encourage people to question these systems. I believe designers now need to focus on creating systems that not only encourage healthy skepticism but also provide avenues for recourse when disagreements arise. If we can achieve this, we enable users to 'talk back' to the systems, thereby protecting and supporting their digital rights.
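As a concrete, if simplified, illustration of that last point: a recourse-aware interface might expose each automated decision together with its reasons and a path to contest it. The sketch below is hypothetical; none of the type or field names come from the interview.

```typescript
// Hypothetical sketch of a "contestable" AI decision surfaced to the user.
// The idea: the interface exposes reasons and a path to recourse, so the
// system can be questioned rather than remaining opaque.

interface ContestableDecision {
  outcome: "accepted" | "rejected";
  // Plain-language factors shown to the user.
  reasons: string[];
  // A concrete path to "talk back": appeal the outcome or correct the inputs.
  recourse: {
    canAppeal: boolean;
    appealUrl: string;      // where the user can contest the outcome
    correctDataUrl: string; // where the user can fix inputs the model used
  };
}

function renderDecision(d: ContestableDecision): string {
  const lines = [
    `Decision: ${d.outcome}`,
    `Why: ${d.reasons.join("; ")}`,
  ];
  if (d.recourse.canAppeal) {
    lines.push(`Disagree? Appeal at ${d.recourse.appealUrl}`);
    lines.push(`Think our data is wrong? Correct it at ${d.recourse.correctDataUrl}`);
  }
  return lines.join("\n");
}
```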
Loft: Designers cannot blindly apply technology without the risk of creating totalitarian systems that discriminate, surveil, and manipulate not just individuals but societies at scale. How can designers ensure their creations are not contributing to the demise of democratic societies at large? Is there a methodology for creating these systems in a safe manner?
Helen Armstrong: I am a huge fan of the Blueprint for an AI Bill of Rights that the US White House released a year ago. It's an excellent document focused on protecting the digital rights of citizens in ways that preserve democratic values. To me, the entire document reads like a design brief. It outlines five main areas where we need to preserve rights, prompting us as designers to think about how we can design interfaces or systems that achieve these objectives. This document is an excellent starting point for thinking about critical AI issues. For instance, one of the topics, 'Notice and Explanation,' is particularly interesting. This section discusses providing people with timely explanations. As designers, we might think about, for example, providing users with information at the moment of use. If we ask people to protect themselves up front by reading through long notices or opting out of data sharing, they tend to skip through this content because they don't understand the impact of their responses.
However, if we offer choices at the moment of use, someone can understand what is being asked and respond thoughtfully. This is an interface challenge, one we can take on in the systems we create so that people can engage with them deliberately. So, there's a lot we can do in these spaces.
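One simplified way to picture this "moment of use" pattern: each data request carries its own in-context purpose and is surfaced only when the feature actually needs it. This is a hypothetical sketch, not a pattern drawn from the interview or the Blueprint.

```typescript
// Hypothetical sketch of "moment of use" consent: instead of one up-front
// blanket notice, each data request is surfaced when the feature needs it,
// with the purpose stated in context.

type ConsentDecision = "allow" | "deny";

interface DataRequest {
  field: string;   // e.g. "age"
  purpose: string; // why the system needs it, right now
}

// askUser is a stand-in for whatever UI actually surfaces the prompt.
async function requestAtMomentOfUse(
  req: DataRequest,
  askUser: (message: string) => Promise<ConsentDecision>
): Promise<ConsentDecision> {
  // The user sees what is being asked and why, at the point of use.
  return askUser(`Share your ${req.field}? We use it to ${req.purpose}.`);
}
```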
Loft: So you believe that the government plays an important role, which I think is fascinating. The idea of regulatory frameworks is quite prevalent in industries like financial services, med tech, and life sciences. It can feel like a constraint for designers to be told to follow rules and guidelines or adhere to rigorous regulatory boundaries. How can we navigate these challenges, especially in the age of AI, while still providing intuitive and proactive UX interactions that enhance user engagement?
Helen Armstrong: Well, that's a complex question, and there are various things we can do. For example, we can consider designing for the edge, where processing happens on personal devices to reduce the amount of data sent to the cloud. We can also design systems that avoid collecting unnecessary data in the first place, a restraint that has not been common practice in recent decades. As designers, we should educate ourselves about these back-end methods because we are responsible for the data collected by the systems we design.
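To make the edge idea concrete: in the sketch below, raw readings never leave the device, and only a derived summary crosses the network. The data shape and endpoint are invented for illustration.

```typescript
// Hypothetical sketch of "designing for the edge": the raw signal stays on
// the device; only a minimal, derived result is sent to the server.

interface HeartRateSample {
  timestamp: number;
  bpm: number;
}

// Runs locally on the device. Raw samples never leave it.
function summarizeOnDevice(samples: HeartRateSample[]): { avgBpm: number } {
  const avg = samples.reduce((sum, s) => sum + s.bpm, 0) / samples.length;
  return { avgBpm: Math.round(avg) };
}

// Only the aggregate crosses the network; the endpoint here is made up.
async function syncSummary(samples: HeartRateSample[]): Promise<void> {
  const summary = summarizeOnDevice(samples);
  await fetch("https://example.com/api/summary", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(summary), // one number instead of the raw stream
  });
}
```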
Loft: As designers, we need to be much more knowledgeable about backend systems than ever before. At the same time, the human side of data science tends to be underrepresented in both public policy and corporate decision-making. It's crucial for designers to understand how data works, how it's presented, how it's explained, and what the regulatory boundaries for that data are. Grasping all of these aspects can be challenging for someone coming out of school as a UX designer. Could you expand on the importance of human-centered thinking in the context of data innovation?
Helen Armstrong: Human-centered design thinking offers a distinct perspective on data. The short answer is that as long as humans use these systems, it's crucial for the creators of AI systems to consider how humans interact with them. The longer answer involves acknowledging the role of data scientists and developers. They are skilled at creating optimal systems with high functionality, innovating with data, and designing systems for effective data use. These skills are vital and highly valuable. However, it's equally important to consider how these systems benefit people and the experiences people have with them.
Systems may be optimal, elegant, and function flawlessly, but if they don't benefit people or uplift society, their value is diminished. As long as humans are part of the equation, using these systems, the need for designers to craft experiences that bridge humans and machines remains essential. If a future arises where humans are no longer a part of this equation, perhaps the need for such designers might fade. But for now, as long as humans exist, our role in designing these human-machine interactions is indispensable.
Loft: There's another aspect to consider when humans interact with AI systems using their data. They need to be incentivized to share their information. Essentially, they should understand what benefits they receive in return. For example, when you provide data such as age and weight, it could lead to better health outcome predictions. This is where the human aspect of data becomes significant. It's crucial to comprehend the objectives of human interfaces within the context of these systems. Understanding the goals of the interface is important, not just in terms of monetizing a business or its value proposition, but also in how it serves and benefits the user.
Helen Armstrong: I completely agree. It circles back to empowering people to ask questions like, "Why do you need my age? What will you do with it?" And then enabling these same people to easily understand the answers through the design of the interface. This is where designers excel. They are adept at prototyping possible, positive futures. Such a vision is very much needed right now.
Loft: Your book is fascinating because it covers both aspects: how designers create interface systems and how systems influence interface creation. Tools to replace conventional UX designers are evolving rapidly. On one hand, these tools are disrupting the traditional UX design process. On the other hand, machine learning is becoming central to the future of UX development. How does UX and design education have to evolve as AI systems replace conventional UX methodologies? And is there a role for UX designers without a computer science degree in this changing landscape, one that demands navigating large language models and a deep understanding of data, including its backend aspects?
Helen Armstrong: First, as educators, we are responsible for teaching our students about AI's capabilities. A computer science degree isn't necessary to understand basic algorithms and the capabilities of machine learning systems. Designers need to view machine learning as a design material and gain a deeper understanding of data than in the past. For instance, at our large research university, data science minors and certificates are being integrated into our graphic and experience design degree. This allows students to easily engage with data. We encourage them to pursue these minors or certificates throughout their undergraduate and graduate studies. This represents a basic educational need.
Returning to your initial question about tools, there is an ongoing debate about automation potentially eliminating our current tasks. However, our roles will inevitably evolve. We weren't doing the same things 20 years ago as we are now. For example, in the 1970s, graphic designers didn't focus on designing interfaces, software applications, or websites. We can't yet envision what we will design in the future, but it's certain that we won't be designing the same things we are today. Much of what we do now may be automated, but that's okay because new opportunities will arise. It's the natural evolution of the discipline in response to technology.
We shouldn't fear these changes. Instead, we should seize the moment and take responsibility for the direction technology takes, as we represent a significant voice for humanity. There's no need to fear that our jobs or industry will disappear; instead, we should embrace the evolution and adapt accordingly.
Loft: The real question is whether UX design is like typesetting in the 80s, a field that has largely disappeared due to technological advancements in desktop publishing. Should UX evolve into a data science or engineering degree? Do we need to completely overhaul the UX curriculum to be more science-focused, data-centric, and oriented towards backend development?
Helen Armstrong: I believe that UXers are becoming more data literate, and this trend is likely to continue. However, as we enter a phase where we are shaping the future of AI systems, human-centric disciplines like psychology, sociology, and philosophy are regaining prominence. These fields are crucial for truly understanding and building systems for people. I don't foresee UX becoming a strictly hard science or developer-focused field in the future. Although our methods will inevitably change as technology evolves, it's important to note that our methods have already been evolving. We don't use the same methods as we did a couple of decades ago. Our methods will continue to change at a rapid pace, but they won't all shift toward quantitative methods. I think the qualitative aspects, the humanities and social sciences, will become more important, not less.
Loft: How do we ensure that society puts the right safeguards in place to ensure these positive outcomes prevail? I'm curious about your stance on design ethics. In the last decade, we've seen the rise in the importance of Chief Privacy Officers and the establishment of privacy as a relevant C-suite discipline. How do you envision the future of design ethics, and in what form might it take?
Helen Armstrong: On that topic, I'm a great admirer of Rumman Chowdhury, an AI ethicist who has led several significant initiatives in recent years. She's a strong advocate for social science and often discusses the vast amount of existing knowledge in the social sciences that could be applied to complex issues around data ethics. Historically, there has been a significant divide between the social science and technology worlds, and bridging this gap is essential for tackling these issues effectively. If this collaboration occurs, there's much hope for the future. From an institutional university perspective, this integration is already happening. It seems like every day a new AI center opens, uniting the humanities and sciences to confront challenging questions surrounding AI. In many ways, AI is becoming a focal point that brings different disciplines together, breaking down silos that have been rigid in past generations.
Loft: How do you see these think tanks and AI-centered policy centers influencing societal policy in the future?
Helen Armstrong: These centers are indeed influencing government policy. However, it's not something the government can handle alone; industry involvement is crucial. We need to see the same kind of energy around AI ethics in industry as we see in government. While the government side has made significant strides, there seems to be a lull on the industry side. It's essential for the industry to re-engage and actively participate in this discourse. Without industry involvement, we are unlikely to succeed in effectively addressing these challenges.
Loft: I feel there's an overhyped gold rush happening right now, a massive land grab on the technology side with very little consideration of the consequences. It's a 'take no prisoners' approach to capturing these markets, which honestly worries me a bit.
Helen Armstrong: Yeah, scary.
Loft: One large worry is data privacy and making sure these large language models don't leak data. How do we help companies collaborate on data? We know that getting everyone onto the same data model is challenging, but it's also not feasible for every pharmaceutical company to develop its own machine-learning model. Do you have any thoughts on how to develop foundational datasets that are safe, protected, and usable by many people?
Helen Armstrong: That's a great question, which, in my opinion, leans more toward data science. There's a lot of interesting work happening in the data science world on this. One approach is to use open-source foundational models as a basis for building personal models, rather than relying on industry-owned large language models. It's possible to build privacy guardrails around these, though how effective those guardrails are remains contested. Some people are also working on creating smaller models that are easier to secure and develop.
There are many debates in data science right now that touch on bias: how to secure models, control bias within them, and protect individual personal data. While I don't mean to shift the responsibility onto data science, this is a significant challenge within that field, and that's where the real expertise lives and where new strategies are being developed.
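A rough sketch of the "smaller, self-hosted model" direction Armstrong mentions: route prompts containing sensitive data to an open-weights model served inside your own network rather than to a third-party hosted LLM. Everything here, from the endpoint to the response shape, is a placeholder assumption.

```typescript
// Hypothetical sketch: sensitive prompts go to a locally hosted open model
// instead of a third-party hosted LLM, so data never leaves the boundary
// the team controls. The endpoint, route, and response shape are invented.

interface CompletionClient {
  complete(prompt: string): Promise<string>;
}

// A small open-weights model served inside the company's own network.
class LocalModelClient implements CompletionClient {
  constructor(private baseUrl: string) {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const data = await res.json();
    return data.text; // shape depends on the local serving stack
  }
}

// Prompts sent through this client never cross to an external provider.
const client: CompletionClient = new LocalModelClient("http://localhost:8080");
```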
Loft: We appreciate your valuable thoughts and viewpoints. Thank you so much for your candid insights!
We want to add that Helen's book is a useful resource if you are interested in learning more about the subject. It distills the intricacies of AI and machine learning for those in the design industry, encouraging designers to explore AI's potential in their work while keeping human experience at the center of technology's advancement. The book provides an overview of how AI and big data are changing design and suggests ways designers can adapt to these shifts. Aimed at both experienced designers and those with a burgeoning interest in the field, it serves as a thoughtful examination of AI's role and potential in design practices.