Catalant is a marketplace that helps businesses find the right talent for their critical projects. Our platform gives clients access to consultants and consulting firms across industries around the world. In just a few years, we have helped more than 30% of the Fortune 100 accelerate and deliver their pivotal initiatives and projects.
However, we also felt growing pains. Many clients had to seek help from our customer success team because they couldn't find the right talent on their own, and we saw a high abandonment rate among project owners after they browsed experts. Losing customers at such an early stage makes business opportunities very costly to seize.
One challenge was helping new clients figure out "who are these experts?" Unfortunately, Catalant Marketplace didn't have the right features to support every decision point for clients. Instead, clients had to carry the mental work on their own and felt helpless while browsing experts.
Project owners do not always have domain knowledge of the role they are recruiting for. They do not know which skills or industries to search for unless they reach out to our internal teams for help. Our Browse experience was "flat," built on the assumption that users already knew what they wanted to look up.
From conversations with clients, we found that they usually go through 4 steps before hiring a consultant: discover, learn, compare, and execute. A good Browse experience would serve the right information to clients at every stage. However, Catalant Marketplace missed opportunities to provide in-context help, which cost us projects.
We scheduled interviews and conducted surveys to drive the problem-solving process. After the research, we divided the Browse experience into 3 phases. Clients have a goal at the end of each phase, and making sure they can achieve that goal is critical to moving them to the next one.
The UX strategy is simple: help clients with the decision-making process by providing only the effective information and necessary features that matter at each decision point.
When clients have new projects, they start searching for experts who meet the project needs. At this stage, their goal is to quickly browse experts' info and identify the ones they are interested in learning more about.
After this brief browsing, clients try to shortlist a manageable number of experts to contact. They ask a few questions when reviewing the experts' profiles: Does the expert have the required skills? Can this expert work onsite...
Sometimes a few experts seem equally qualified. At that point, deciding whom to hire is hard, and a tiny detail can make an expert stand out. Clients' goal is to find the best expert through a side-by-side comparison.
Using insights from the participants and previous research, we drafted design guidelines before getting to interface and interaction design. These principles also served as a qualitative benchmark when I made design decisions.
To support every decision point, the new search experience focuses on lifting the mental burden from clients and helping them visualize information better. Clients can browse and compare experts easily with the most relevant information and clear, actionable next steps.
Clients can view a summary of the work experience, the reputation of the expert, and an overview of the industry experience at a glance. The information presented on the result cards is the most important at this phase of browsing, validated by user interviews and testing. By using "great match" and "good match," we make recommendations in clients' language.
After browsing experts, clients can easily review how each expert meets the project requirements in the slide-out. Rather than a confusing quantitative value, we presented the match score qualitatively so users can quickly understand why Catalant recommends these experts.
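Conceptually, this qualitative framing can be thought of as a simple thresholding step that translates a numeric score into the labels clients see. The sketch below is purely illustrative: the function name, score range, and cutoff values are assumptions, not Catalant's actual scoring logic.

```python
def match_label(score: float) -> str:
    """Map a raw match score (assumed to be in 0.0-1.0) to a
    qualitative label shown to clients.

    The thresholds here are hypothetical; in practice cutoffs
    like these would be tuned from user research and testing.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= 0.85:
        return "Great match"
    if score >= 0.65:
        return "Good match"
    return "Potential match"
```

Presenting "Great match" instead of a raw number like 0.87 removes the need for clients to interpret what the score means relative to other experts.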
Clients who have difficulty deciding which experts to move forward with can compare a few experts side-by-side. The comparison feature lays out the most important information in one view, so clients can focus on weighing experts and making decisions.
Let me walk you through how we made this happen. One of the biggest challenges in designing the search experience was understanding users' information needs at various levels.
The strategy was to prepare as much as possible before going into field research. I worked with my team and started the research early, so when we talked with real customers, we always had specific goals in mind. That saved us time discussing design decisions when there was not enough data to support them.
Before kicking off the exploratory user research, I analyzed the current search interface and match score. The heuristic evaluation surfaced multiple usability issues, which I categorized as cosmetic, minor, and major usability problems.
The major problems that needed to be solved included low satisfaction of exploratory search needs, a lack of explanation for search filters, and missing interpretation of the match score.
While I worked on the heuristic evaluation, other team members focused on competitive analysis to see how competitors design for their users. We identified and analyzed a few products that solve similar user needs, including LinkedIn, Entelo, and Upwork.
We studied the pros and cons of each competitor's search design, mapped them to our unique use cases, and discussed what to adopt or avoid and how we could provide a better experience.
Once we gained a solid understanding of how our product and our competitors' products perform, we started interviewing project owners while they searched for experts. I led the semi-structured interviews, planning the tasks to perform and the questions to ask.
The questions I asked focused on the information project owners need at various phases. I also observed how they use the product today in a realistic setting. Through the interviews, we learned that project owners are not always the final decision-makers on which experts to hire.
The next step, once we had a good understanding, was to visualize the user goals and pain points. To save time on back-and-forth discussion about how to divide the journey, we used the universal job map from the jobs-to-be-done framework.
At every phase, we listed the users' goals and pain points. We also listed what we had done to address the pain points and how well we addressed them.
Understanding every edge case was important, and creating an intuitive interface spanning the 3 phases of browsing was critical. I therefore created a flow map to guide the design of the actual screens; it also served as a communication tool within our team.
The UI design iterations were heavily shaped by user feedback. For every iteration, I scheduled sessions with both internal teams and external stakeholders to gather their input.
Some valuable feedback: the "Top 10%" label created confusion by suggesting the score was relative to the entire expert pool rather than to the project needs, and the "x" icon indicating "not a match" made users think it was a button.
We fixed the major usability issues through these iterations and launched the features on the platform.