On user research
My master's degree was in human-computer interaction (HCI). I was drawn to the field because it sat at the intersection of computer science, design, and social science, and for whatever reason I really liked (and still like) thinking about computer user interfaces.
One area of focus within HCI is user research. Initially I was drawn to it because it felt like science. The idea that we could prove that a certain approach to building software is better than another was intriguing to me (keep in mind that this was 2010-11, before the replication crisis in psychology and medicine). One class in my program was essentially a social science research course where we spent time looking at HCI studies and critiquing the design of those studies. This was fun because it showed how, even when you set out to prove something, it was easy to miss something in your study design that would ultimately invalidate your results.
My first job had a user research component to it, which I was initially thrilled about. One of my responsibilities was to do usability studies on software that the company used (it was a very large company with a lot of custom, customized, and off-the-shelf software).
My enthusiasm quickly abated. I found myself doing studies on software that was obviously not serving its users' needs. The software I was evaluating was often clunky and slow and paid no regard to information design or usability. Pleasant visual design was generally non-existent.
Usability studies seemed to me a very strange technique for evaluating this software. Usability studies are time consuming, and doing them well requires recruiting enough users and narrowing the scope of the study enough for the results to be meaningful. Management liked these studies, though, because the findings could be distilled to numbers, which could be reported on and scored. It was honestly Sisyphean to evaluate all this software. I could have spent a few hours on each application and made up a grade from A to F and arrived at something comparable to the scores produced by weeks and weeks of studies. Of course, this conclusion shouldn't be a surprise coming from me.
I quickly found myself favoring heuristic evaluations, which basically means evaluating software against a set of known heuristics about good design. That was better, as I was able to deliver more complete conclusions more quickly without wasting everyone's time, but it was still kind of bleak, because I had to judge an endless sea of garbage software, while all I wanted to do was design and build something better.
I was able to move to a hybrid design/engineering role where I did equal parts design and engineering with some occasional research. This was super enjoyable and remains one of my all-time favorite jobs. Over the course of this job, I validated and honed my conclusion that most software design problems can be solved by an understanding of design principles and heuristics without any rigorous research process. Research is most helpful early on in the project. Even calling it “research” feels too fancy - it’s literally just talking to people to figure out what they want to do with the software, then getting frequent feedback from your users as you iterate.
In the 2010s at least, it was in vogue for designers to throw other disciplines like engineering, management, and product management under the bus for not caring about their users because they didn't do user research. This kind of vilification of other disciplines seems to appeal to some, but not to me. As I came to understand other disciplines better, I found that almost everyone wanted to build better software for their customers and users; they were just prevented from doing so by management, budgets, or available skills. Having a designer lecture them about not caring about the end user just added insult to injury.
Once I got further into my career and started working at better tech companies, I heard the term "user research" less, though I still heard it. I feel like the term carries a lot of baggage and is exclusionary. "Have we talked to any of our users recently" or "have we gotten any good or bad feedback from our users" are straightforward questions with straightforward answers. "Have we done any user research" leaves people wondering what exactly you are asking. Maybe a team talks to their users every week, and they have analytics set up to watch usage of their product's key features and workflows. That's a pretty healthy situation, but they haven't "done user research" recently.
Talking to users is great, do that. Be wary of anyone who tries to make it sound any fancier than that.