I contributed by providing user experience research, product design, prototyping, critique and generally by being as helpful as I could.
"Data Browser is the single most awesome feature for which we get a lot of positive feedback every time we demo the platform."
— Anurag Sethi, Senior Bioinformatics Scientist, Seven Bridges


The team had built Data Browser, a powerful tool for querying petabytes of phenotypic and genotypic data. It was hard to use, though, and they needed help identifying the key usability challenges and improving the design.
To achieve this, I ran usability tests based on tasks formulated with the team’s product manager. To gain a better understanding, I analysed feedback tickets and usage recordings. This allowed me to identify core mismatches between the mental model users had and the way the app worked. Based on this research, I was able to suggest valuable improvements.


Better user autonomy

Before the redesign, there was a constant stream of support requests from researchers. After the redesign, support requests dropped sharply and now come in only occasionally.

One of the strongest parts of the Seven Bridges platform

We received a lot of positive feedback and praise for Data Browser from external users, and many people internally consider it one of the platform’s top selling points.

Foundation for future upgrades

The modular approach used to build Data Browser enabled gradual improvement on top of a solid foundation. Since the redesign there have been numerous improvements focused on onboarding, flexibility, search, ease of use, suggestions and guidance, which improved the experience even further.

Understanding the domain

Data Browser, as its creative name suggests, helps cancer researchers mine data relevant to their research. The flow a researcher usually goes through consists of a few steps:
One commonly used dataset holds about 250 thousand files, some 500 terabytes (512,000 GB) in size. This data is tagged with relevant metadata - properties stating who is female, Asian or a smoker, what was diagnosed, what kind of treatment was given, and so on. There are dozens, sometimes hundreds of these fields for each file, describing it in detail.
Researchers were often interested only in a small subset of files that matched their criteria - one orders of magnitude smaller than the starting dataset.
To test a hypothesis such as “Does smoking introduce mutations connected to lung cancer?”, a researcher has to filter this dataset for smokers from a certain age bracket who were diagnosed with lung cancer, and compare their genomes with those of non-smokers from the same bracket. Competing products enabled this, but struggled with more complex questions. To answer those, researchers had to download manifest files containing lists of files and related metadata, then write scripts that cross-referenced different lists and fields to get the matching files. Coding was a skill only a subset of the target users had; many of them had strong biological and genomic knowledge but relied on visual tools and on colleagues who knew how to code.
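To make that manual workflow concrete, here is a minimal sketch of the kind of cross-referencing script researchers had to write. The manifest format and column names (`file_id`, `smoking_status`, `diagnosis`, `age_bracket`) are hypothetical; real manifests carry dozens to hundreds of metadata fields.

```python
import csv
import io

def filter_manifest(manifest_text, criteria):
    """Return file IDs from a TSV manifest whose metadata matches all criteria.

    manifest_text: TSV content with a header row (columns are hypothetical).
    criteria: dict mapping column name -> required value.
    """
    reader = csv.DictReader(io.StringIO(manifest_text), delimiter="\t")
    return [row["file_id"] for row in reader
            if all(row.get(col) == val for col, val in criteria.items())]

# A tiny illustrative manifest excerpt.
manifest = (
    "file_id\tsmoking_status\tdiagnosis\tage_bracket\n"
    "f1\tsmoker\tlung cancer\t50-60\n"
    "f2\tnon-smoker\tlung cancer\t50-60\n"
    "f3\tsmoker\tmelanoma\t50-60\n"
)

# Cohorts for the hypothesis above: smokers vs. non-smokers with lung cancer.
smokers = filter_manifest(manifest, {"smoking_status": "smoker",
                                     "diagnosis": "lung cancer"})
non_smokers = filter_manifest(manifest, {"smoking_status": "non-smoker",
                                         "diagnosis": "lung cancer"})
```

Even this toy version shows why the approach excluded non-coders: every new question meant editing a script rather than adjusting a visual filter.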
This process was also disconnected from the next step - analysis - which required researchers to download the data (anywhere from gigabytes to terabytes), set up complex tools manually and only then proceed. The Seven Bridges platform already provided cloud storage and analysis, and Data Browser was built to fill the gap before them.
It started as a prototype built by two engineering teams. As resources were strapped and deadlines loomed, it had to be launched. Some time went by, and it earned praise for its power - along with a stream of feedback about its steep learning curve: people didn’t know how to use it. I was brought in as the team’s first designer, to help understand users’ issues and how to address them.

Discovering core problems

Before the redesign, Data Browser behaved and looked like this:
In order to understand the product and the problem it was built to solve, I conducted interviews with different stakeholders - product managers, engineers and domain experts. This helped me get a feel for the territory.
Together with the product manager, we devised a script for the usability tests - a set of tasks that would help us answer the questions we deemed important and reveal how users thought about troublesome areas.
We ran the tests with internal bioinformatics folks, who were kind enough to help us and whose roles were similar enough to those of external researchers. The results:
Some of the main takeaways: starting a search was a big obstacle; exporting results for further analysis was hard; the tool was visually similar to other Seven Bridges tools that worked differently. There were a lot of frustrations - concepts that were hard to grasp, controls that didn’t behave as expected, in some cases different meanings for the same signifiers - and the overall experience was poor.
The results of this study had an amazing effect on the whole team. Instead of discussing what-ifs, the team started discussing how to solve concrete issues that real people had. It was a great experience to witness and be a part of this transformation.
There was also a round of gathering feedback from other sources - tickets users had reported, and sales and training folks with first-hand experience explaining and using the tool - all collected in a single place:
Once there was enough buy-in to address the core problems instead of sticking a bunch of tutorials on top of the existing interface, exploration started.


Before jumping into details, let’s compare the product before and after the redesign. During research I found that the mental model people had differed from the one the product presented. The bottom example illustrates a semi-complex query - sequencing data from both tumor and normal tissue for the same cases. This enables researchers to compare the two tissues and find differences in the genome.
Among other things, the redesign provided better grouping, a new naming convention, color coding and a unified mechanism for adding additional properties and filters.

Evolving the filters

One of the main pain points was the filtering mechanism. It was the crucial part of the product - users came to Data Browser to find certain files, and this was the main step.
The existing interface consisted of nodes that matched certain concepts, like files and cases, and sub-nodes that described properties and values (e.g. data format: txt) of their parent nodes.
However, this functionality was built upon an existing interface inherited from another Seven Bridges tool used for orchestrating workflows. This introduced a mismatch between two mental models - a linear one, for which users already had expectations of how it should work, and a pattern-based one, whose workings went against the workflow-like foundation.
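Conceptually, a query in the node-based model can be thought of as a small tree of entities with property constraints, rather than a linear pipeline. This sketch is purely illustrative - the entity names, properties and structure are my assumptions, not the product’s actual data model:

```python
# Hypothetical node-based query: entities (cases, files) carry property
# constraints, and child nodes narrow the selection further.
query = {
    "entity": "case",
    "properties": {"diagnosis": "lung cancer", "smoking_status": "smoker"},
    "children": [
        {
            "entity": "file",
            "properties": {"data_format": "txt"},
            "children": [],
        },
    ],
}

def count_constraints(node):
    """Count property constraints across the whole query tree."""
    return (len(node["properties"])
            + sum(count_constraints(child) for child in node["children"]))
```

The tree shape is exactly what clashed with the inherited workflow UI: workflows read left to right as a sequence of steps, while a query like this is a pattern to be matched all at once.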
These problems were addressed through a few starting points the redesign was supposed to reflect:
  • Data Browser needed to look and behave differently than the workflow tool
  • Filtering mechanism should be clear and effortless
  • It should provide better grouping
  • Everything should be named properly
After a few iterations, the team was happy with the new filter structure.

Counting items

The number of cases, files and other items that matched the filter was important and useful to researchers. However, those numbers were hard to get: the relationship between the objects wasn’t obvious, and backend limitations required users to manually refresh each number.
This was improved by providing color coding and a global refresh. I tried advocating for an automatic refresh on every filter change, but due to backend limitations a sequential global refresh was the best we could do. It wasn’t perfect, but it worked better than the original.
These cards had a dual purpose - they communicated the number of items and served as a tabbed header for the result details.

Learning the ropes

Data Browser introduced a way of filtering that was unfamiliar to users. A set of templates, called Example Queries, was intended to show how to harness its power.
The redesign addressed some of the problems users had with this feature: templates were separated from queries saved by users, a preview was added, and a few adjustments to the flow were made.