AI has been “the story” of 2023 (and the headlines just keep coming!). However, as companies race to roll out AI products, or to incorporate AI into existing products, many are skipping one of the most essential steps in AI development: the human factor. (We get the irony here.)
InterQ has been conducting qualitative market research on AI for companies since our founding in 2015, and the goal of these projects remains the same: How will humans respond to AI? Is there a perception of privacy invasion? Will the AI help or hinder the processes they’re working on? And, crucially, does the AI product account for the diversity of human experiences?
To answer these questions, you can’t just apply more AI or computer modeling to the product development cycle. If you’re developing AI that affects humans (and it all does, in some way), you need to conduct market research that includes both qualitative and quantitative methods.
We want to stress here the importance of qualitative research in particular, as surveys alone are simply not robust enough to gather the feedback required. In other words, you can’t package responses into a multiple-choice format and expect a true picture of how people respond to and perceive the AI technology in your product. Let’s look at some examples of how we build qualitative research into AI product development.
Qualitative research and AI: How do employees feel about AI prompts?
In one study we conducted for a client rolling out new AI technology in its healthcare software, we were first asked to test how HR managers (the gatekeepers) would feel about AI prompts being sent to employees who were considering expensive healthcare procedures (which often don’t have positive outcomes). If employees Googled or looked up information about these procedures, they would be sent AI-generated articles warning against the potential dangers of these surgeries.
We first interviewed HR directors about this technology. From them, we learned about the privacy protocols already in place and how a technology like this might violate certain privacy standards. They also told us how it could be developed and administered ethically (and what language to use around it).
Next, we hosted focus groups with employees at companies that use the healthcare software in question. We explained the technology and showed examples. Perhaps not surprisingly, the reactions were not positive. We did learn that there are instances where employees appreciate educational materials, but the way the AI was being developed felt too “Big Brother,” and they felt it would build distrust between them and their employer.
Armed with this research, the client went back to the drawing board and came up with an improved product that still achieved some of the original goals (cutting costs on unnecessary healthcare procedures) but without the invasive AI element that could unsettle employees. Had the company simply gone to market with the technology without this feedback, we can imagine the backlash would have been swift and potentially damaging to its reputation.
The moral of the story? Anytime you are developing AI, first talk (through rigorous qualitative research, done by a team of experts) to the humans who will be affected by it. We’ll leave you with one more example.
Qualitative research and AI: How do you position AI in your product?
Frequently, we have clients who have incorporated AI into their products (sales software, cybersecurity), and their instinct is to put AI front and center in their branding narrative. Interestingly, when we test this with end users and buyers of the software or hardware, the response is not as positive as the companies expect. Sure, we all know AI hype is overdone right now, but the story is more nuanced than that when we interview potential customers of these products.
How the AI story is told matters just as much as saying that AI “drives” or “empowers” the product. Through qualitative interviews with these end users, we have learned the nuances of language that can either turn users off or catch their attention. Yes, AI is a great technology, but where and how it’s applied, and what’s emphasized, can make or break a sale.
The moral of this story? Don’t assume that splashing “AI” all over your product and marketing will automatically be received well. Make sure you invest in qualitative research that includes detailed discussions with the end users of the product. You may be very surprised at what you learn, and if you’re like other companies we’ve worked with, this knowledge could drastically change your go-to-market strategy.