Making Hive Insights 2.0 – The User Experience Perspective
This article describes the design process behind the making of Hive Insights 2.0.
Hive Streaming provides a highly efficient video distribution service, utilizing minimal network load while ensuring a high quality of experience.
Hive Insights is Hive’s interactive analytics solution. It provides an easy, intuitive way for users to assess an individual streaming event.
Who I am and why I am writing this
I am Sondos Seif El-Din, a front-end developer in the Customer Facing Tools Team (CFTT) at Hive Streaming.
In our team, we strive to improve our design process and always experiment with the latest and greatest. We believe that keeping an open mind and getting our hands dirty is absolutely crucial to ensuring the quality of our products. Needless to say, our environment, processes, and tools move really fast, which makes proper documentation an essential part of our work: it allows us to assess our past endeavors, our current status, and eventually where we are heading next.
This article tells the story of the process that we at Hive Streaming went through to implement Insights. It has two main sections: Conceptualization and Insights 2.0 in Action. In the Conceptualization section, I go through the process of designing the workflow, show how we organized the various pieces of information during event decomposition, and share some sketches. In the In Action section, I take you through the final product, showing its various interactions. Finally, I wrap up with a conclusion.
Streaming event: This refers to a video distribution event, whether live or on-demand.
Catering to different user groups, we decided to follow a simple pattern of progressive disclosure: a basic “Overview” and a more detailed “Explore”.
The app would be composed of two main views, an “Overview” and an “Explore”
The overview provides a bird’s-eye view of the whole event, giving the user a quick snapshot of its status. For further details, the user can easily navigate to Explore, where he/she can drill down into specifics, depending on the problem at hand.
The typical workflow goes as follows: a user lands on Overview, where he/she can investigate the overall status of a streaming event. Then, if needed, he/she can navigate to Explore to drill down to a certain viewer set of the event, investigating it at a micro scale.
The clearest and most direct way to navigate is through the main navbar, which displays a link for each of the tool’s two main views: Overview and Explore. Furthermore, for more advanced users, we decided to create various shortcuts in Overview that redirect to Explore with an active filter based on the clicked hotspot.
The navigation paths from “Overview” to “Explore”
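Conceptually, such a shortcut can be a deep link that encodes the clicked hotspot's filter in the Explore route's query string, so Explore can restore it on load. Here is a minimal sketch of that idea; all names (`ExploreFilter`, `buildExploreLink`) are illustrative assumptions, not the actual Hive Insights implementation:

```typescript
// Hypothetical shape of a filter a hotspot click would carry.
interface ExploreFilter {
  location?: string;
  quality?: string;
}

// Encode the filter into the Explore route's query string. An empty
// filter yields the plain /explore route.
function buildExploreLink(filter: ExploreFilter): string {
  const params = new URLSearchParams();
  if (filter.location) params.set("location", filter.location);
  if (filter.quality) params.set("quality", filter.quality);
  const query = params.toString();
  return query ? `/explore?${query}` : "/explore";
}
```

On the Explore side, the same query string would be parsed back into a filter object and applied before the first render, so the user lands directly in the pre-filtered state.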
After settling on a workflow, we started to lay out all of the information that could describe a streaming event. Realizing how much information we could display, we had to introduce some form of grouping to ensure that we didn’t overwhelm the user. During the grouping process – which was based on relevancy – the concept of having multiple perspectives on an event emerged.
A sample of the information we have that describes an event
The perspectives that emerged for a streaming event were: viewers, event quality, and network performance. These three perspectives propagate through the whole tool – in both Overview and Explore – and will be tackled in the next section.
Three perspectives emerged to assess a streaming event
Sketching the views
At that point, we started to dive into sketching. The starting point was the Overview.
Designing the Overview was challenging in two aspects:
- Presenting the user with concise yet comprehensible & sufficient information.
- Wiring Overview up to Explore via various shortcuts, to cater to advanced users without overwhelming the novice ones.
We decided to push the wiring-up piece to the end of the development process, as it is not a core feature. Therefore, in this section, I will only tackle challenge #1: presenting a huge amount of information in a clear way.
We started by asking a question, “What summarizes a streaming event?”, and answered it with the first component in the overview: a row of labels at the top, containing the metrics most important to a user observing an event. Then, for the rest of the view, the basic idea was to have a simple panel for each perspective we had previously identified: one for viewers, one for event quality, and finally one for network performance. Each panel contains the most important data from that perspective.
Designing Explore was a bit more complicated, however. While Overview provides the user with a less flexible, yet quick, snapshot of the different aspects of a streaming event, Explore is an advanced and flexible debugging tool targeted at more advanced users. When attempting to articulate the core functions of this view, we came up with a workflow. It was very basic: it allows the user to apply various filters to the viewers participating in the event at hand, updating various metrics based on those filters.
The Explore workflow
Using that workflow, we were able to derive the following components to form the view:
- Viewers: The filtered set represented in a list or on a map.
- Metrics: Various aggregation charts based on the filtered set of viewers, from the same three perspectives adopted in Overview.
- Filter: The currently applied filter to the viewers set.
Although the view was a bit crowded, identifying its basic components enabled us to sketch it down with ease.
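The core of that workflow is that one filter drives both the viewer representation and the metrics. A minimal sketch of the idea, with all types and names illustrative rather than taken from the real product:

```typescript
// Illustrative viewer record; the real data model is richer.
interface Viewer {
  id: string;
  location: string;
  quality: string; // e.g. "good", "fair", "poor"
}

// A filter is a subset of viewer fields to match on.
type ViewerFilter = Partial<Pick<Viewer, "location" | "quality">>;

// Keep only viewers matching every field set on the filter.
function applyFilter(viewers: Viewer[], filter: ViewerFilter): Viewer[] {
  return viewers.filter(v =>
    (filter.location === undefined || v.location === filter.location) &&
    (filter.quality === undefined || v.quality === filter.quality)
  );
}

// Aggregate the filtered set for a chart, e.g. viewers per quality level.
function countBy(viewers: Viewer[], key: "location" | "quality"): Map<string, number> {
  const counts = new Map<string, number>();
  for (const v of viewers) {
    counts.set(v[key], (counts.get(v[key]) ?? 0) + 1);
  }
  return counts;
}
```

Feeding the output of `applyFilter` into both the list/map component and the aggregations is what keeps the whole view consistent with the currently applied filter.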
Insights 2.0 In Action
Based on the sketching phase, this view is composed of four components. The first one is the row of labels summarizing key metrics for an event.
“What summarizes a streaming event?”
For the rest of the page, we exposed the most important information for a streaming event based on the three previously mentioned perspectives.
The first perspective is the Viewers.
The viewers’ panel in the overview page
This perspective shows the highlights of viewer-related data: the top 15 largest locations, the viewer count over the time of the event, and the geographical distribution of viewers.
The second perspective is the Event Quality.
The event quality panel in the overview page
This panel shows the overall quality of the viewer experience, requests per bitrate, and locations with quality lower than “good” (in the example shown, there were none).
The third and final perspective is the Network.
The network component in the Overview page
This panel shows the distribution type, savings, connection types, and source load over time, all of which is crucial information for assessing an event from a network performance point of view.
Assembling all the previous components together eventually composes the Overview.
Similar to Overview, Explore was designed to be composed of three main components. The first one is the viewers’ representation. Since this is a more advanced view, we wanted to make it as flexible as possible, enabling the user to choose between different representations: a list or a map. We also added a couple of subviews to each. For the list representation, the original plan was to show a list of data aggregated by location alongside the plain viewers’ list. However, for performance reasons, we resorted to implementing them as two separate views.
The two subviews can be accessed via a toggle button at the top right corner.
Viewers’ list representation in Explore: viewers aggregated by locations on the left and plain viewers’ list on the right
As for the map representation, we added two views: viewer count and quality of experience. Using these two views, a user can easily answer questions such as “What is the location with the highest viewer count?” and “What are the locations with the worst quality of experience?”.
Similarly to the list view, the two subviews can be accessed via a toggle button at the top right corner.
Viewers’ map representation in Explore, using viewer count and quality of experience
The second component in Explore is the metrics panel. It contains graphs (timelines and/or aggregated pie charts) for the filtered viewer list, grouped by the three recurring perspectives.
Metrics panel in Explore, using the three recurring perspectives on an event
The final component in Explore is the applied filter. Its first purpose is to serve as a constant reminder of the currently viewed set of viewers; for that, we added a simple tag-based panel displaying the current filter. Its second purpose emerged after some preliminary user testing: some users were not aware that applying a filter (through the various paths discussed in the next section) affects the whole view. So, we added a panel that shows some statistics for the currently filtered set of viewers in relation to the whole set, and it animates when a filter is applied. The combination of the tags in the current filter and the stats panel made it clear that a state change had occurred, which solved the problem.
Current filter in Explore
Filter data stats in Explore
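The stats panel boils down to describing the filtered set relative to the whole set. A minimal sketch of that calculation (names are illustrative, not the actual implementation):

```typescript
// Illustrative shape of what the stats panel displays.
interface FilterStats {
  filtered: number;   // viewers passing the current filter
  total: number;      // all viewers in the event
  percentage: number; // share of all viewers passing the filter, 0-100
}

// Compute the filtered-vs-total summary, guarding against an empty event.
function filterStats(filteredCount: number, totalCount: number): FilterStats {
  const percentage =
    totalCount === 0 ? 0 : Math.round((filteredCount / totalCount) * 100);
  return { filtered: filteredCount, total: totalCount, percentage };
}
```

Animating this panel whenever the numbers change is what signals the state change to the user.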
Assembling these three components together yields Explore.
Before moving on to the wiring-up of Overview with Explore, I will dissect the various filtration paths in Explore. Since this view was designed with extreme flexibility in mind, we tapped into some not-so-obvious filtration paths.
How to filter the original set of viewers in Explore
The most obvious way is using the search fields in the viewer list.
Filtering using the search fields in the list of viewers
As for the locations list, it was a conscious decision not to add search fields, because we thought that would unnecessarily complicate filtration at this phase. However, we exposed a subtle filtration path through the links in the location column: clicking one of these links applies a filter showing only the viewers in the clicked location.
Filtering using the locations table
Moving to the more graphical paths, we find the map. Implementing this part was a bit tricky because the map supports two types of interaction: drilling down and filtering. Since we didn’t want to resort to overloaded interactions such as double-clicking, we added a prominent toggle button to switch between the two modes.
Filtering using the map
The default mode is drill-down, so clicking on the US, for example, loads the country’s map with its states highlighted. When switched to filtration mode, however, clicking on a region applies a filter showing only the viewers in the selected region.
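The toggle resolves each click into exactly one unambiguous action. A minimal sketch of that dispatch, with all names illustrative:

```typescript
// The two interaction modes the toggle switches between.
type MapMode = "drilldown" | "filter";

// Illustrative action the map component would emit on a region click.
interface MapAction {
  kind: "loadSubMap" | "applyFilter";
  region: string;
}

// Resolve a region click into one action based on the current mode,
// avoiding overloaded interactions such as double-click.
function onRegionClick(mode: MapMode, region: string): MapAction {
  return mode === "drilldown"
    ? { kind: "loadSubMap", region }
    : { kind: "applyFilter", region };
}
```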
The final path and the most subtle one is through the metrics panel.
Filtering using the charts in the metrics panel
Clicking on the various charts in the metrics panel applies a corresponding filter as well. For example, clicking on a slice of the quality-of-experience pie chart shows only the viewers who reported that quality of experience.
Overview to Explore Shortcuts
In this section, I will go back a bit to the Overview and discuss its subtle wiring with Explore. For an advanced user, a very common scenario is wanting to further investigate a piece of information from the Overview using the more flexible Explore. Without shortcuts from Overview to Explore, the user would have to navigate to Explore and apply a certain filter just to reach the state initially spotted on Overview. That’s a bit tedious and could interrupt what should be a smooth flow; hence the shortcuts.
The shortcuts are implemented using the now-familiar filtration paths from Explore. The first one uses the lists: in the two lists in the Overview, clicking on a location link redirects to Explore with the corresponding location filter applied.
Filtering using largest locations and locations with quality issues lists in Overview
The second one uses the map, with the same interaction pattern implemented in Explore. The one difference is that clicking on a region in filtration mode redirects to Explore with the corresponding location filter applied.
Filtering using the map in Overview
The final one uses the pie charts. As in Explore, clicking on a slice redirects to Explore with an already-filtered set of viewers.
Filtering using charts in Overview
An Inevitable Conclusion
Looking back at the whole thing, I admit it didn’t feel as smooth as it sounds writing about it now. The initial push of conceptualizing and prototyping Insights took a bit longer than planned, and we weren’t able to involve users in the design process the way we had wished to in the early phases of development. However, despite the initial turmoil, we delivered very close to our original delivery date. Furthermore, what was unique about building this app was that our whole organization was truly invested in its development. All of us at Hive were (and still are) proactive in giving feedback, whether on design, on usability, or simply by going on a bug hunt.
Insights 2.0 is up and running. It has been well received by our customers, but we always welcome suggestions on how it can be improved.
In the CFTT at Hive, our job is to create a multitude of tools for our customers to be able to interact with our various streaming solutions. Hive Insights was our first product in a way, and we are very happy with the results to date. However, we have lots of experiments to explore and improvements to make.
Also, building on the process from Insights 2.0, we recently completed our first design sprint for another tool we are developing. In a week, we conceptualized the entire thing, created a mockup, and tested it on many users: a vast improvement over what we did previously. Eventually, we were able to implement a robust tool in less than a month. Furthermore, we started to involve more users in our design process, conducting periodic user interviews, which always provide valuable insights to consider when planning our future sprints.
What you just read
In this article, I discussed the process we went through to design and implement the first iteration of Hive Insights. I showcased both the obvious and subtle interactions in the applied product. Finally, I concluded with a retrospective, where we are now, and where we are heading from here. Please email firstname.lastname@example.org if you’d like to make suggestions or get further information.