This article describes the design process behind the making of Hive Insights 2.0.
Hive Streaming provides a highly efficient video distribution service, utilizing minimal network load while ensuring a high quality of experience.
Hive Insights is Hive’s interactive analytics solution. It provides an easy, intuitive way for users to assess an individual streaming event.
Who I am, and why I am writing this
I am Sondos Seif El-Din, a front-end developer in the Customer Facing Tools Team (CFTT) at Hive Streaming.
In our team, we strive to improve our design process and always experiment with the latest and greatest. We believe that keeping an open mind and getting our hands dirty is absolutely crucial to ensuring the quality of our products. Needless to say, our environment, processes, and tools move really fast, which makes proper documentation an essential part of our work: it lets us assess our past endeavors, our current status, and, eventually, where we are heading next.
This article tells the story of the process we at Hive Streaming went through to implement Insights. It has two main sections: Conceptualization and Insights 2.0 in action. In the conceptualization section, I go through the process of designing the workflow, explain how we organized various information during event decomposition, and finally show some sketches. In the “in action” section, I take you through the final product, showing various interactions. Finally, I wrap up with a conclusion.
Streaming event: This refers to a video distribution event, whether live or on-demand.
Catering to different user groups, we decided to follow a simple, basic pattern of progressive disclosure: a basic “Overview” and a more detailed “Explore”.
The app would be composed of two main views, an “Overview” and an “Explore”
The overview provides a bird’s-eye view of the whole event, giving the user a quick snapshot of the status of the event under investigation. For further details, the user can easily navigate to Explore, where he/she can drill down into specific details, depending on the problem at hand.
The typical workflow goes as follows: a user lands on Overview, where he/she can investigate the overall status of a streaming event. Then, if needed, he/she can navigate to Explore to drill down to a certain set of the event’s viewers, investigating the event at a micro scale.
The clearest and most direct way to navigate is through the main navbar, which displays a link for each of the tool’s two main views: Overview and Explore. Furthermore, for more advanced users, we decided to create various shortcuts in Overview that redirect to Explore with an active filter based on the clicked hotspot.
The navigation paths from “Overview” to “Explore”
After settling on a workflow, we started to lay out all of the information that could describe a streaming event. Realizing how much information we could display, we had to introduce some form of grouping to ensure we didn’t overwhelm the user. During the grouping process, which was based on relevance, the concept of having multiple perspectives on an event emerged.
A sample of the information we have that describes an event
The perspectives that emerged for a streaming event were: viewers, event quality, and network performance. These three perspectives propagate through the whole tool, in both Overview and Explore, which are tackled in the next section.
Three perspectives emerged to assess a streaming event
Sketching the views
At that point, we started to dive into sketching. The starting point was the Overview.
Designing the Overview was challenging in two aspects:
1. Presenting the user with concise yet comprehensible & sufficient information.
2. Wiring up Overview with Explore with various shortcuts, to cater for advanced users, without overwhelming the novice ones.
For the wiring-up part, we decided to push it to the end of the development process, as it is not a core feature. So, in this section, I will only tackle challenge #1: presenting a huge amount of information in a clear way.
We started by asking a question, “What summarizes a streaming event?”, and answered it with the first component in the overview: a number of labels at the top, showing the metrics most important for a user observing an event. Then, for the rest of the view, the basic idea was to have a simple panel for each perspective we had previously identified: one for the viewers, one for the event quality, and one for the network performance. Each panel would contain the most important data from that perspective.
Designing Explore was a bit more complicated, though.
While Overview provides the user with a less flexible yet quick snapshot of the different aspects of a streaming event, Explore is an advanced and flexible debugging tool targeted at more advanced users. When attempting to articulate the core functions of this view, we came up with a workflow. It was actually very basic: a user applies various filters to the viewers participating in the event at hand, and based on that filter, various metrics are updated.
The Explore workflow
Using that workflow, we were able to derive the components forming the view:
1. Viewers: The filtered set represented in a list or on a map.
2. Metrics: Various aggregation charts based on the filtered set of viewers, from the same three perspectives adopted in Overview.
3. Filter: The currently applied filter to the viewers set.
Although the view was a bit crowded, identifying its basic components enabled us to sketch it out with ease.
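The core of the Explore workflow, filtering the set of viewers and recomputing aggregated metrics from the filtered set, can be sketched roughly as follows. Note that the `Viewer` shape, the `Filter` type, and the metric names here are illustrative assumptions, not the actual Insights data model:

```typescript
// Illustrative viewer shape; the real Insights data model differs.
interface Viewer {
  id: string;
  location: string;
  quality: "good" | "fair" | "poor";
  connection: "lan" | "wifi";
}

// A filter is a partial match on viewer fields.
type Filter = Partial<Pick<Viewer, "location" | "quality" | "connection">>;

// Apply a filter: keep only viewers matching every specified field.
function applyFilter(viewers: Viewer[], filter: Filter): Viewer[] {
  return viewers.filter((v) =>
    Object.entries(filter).every(
      ([key, value]) => v[key as keyof Filter] === value
    )
  );
}

// Recompute an aggregated metric (here: viewer count per quality level)
// from the filtered set, as the metrics panel would.
function countByQuality(viewers: Viewer[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const v of viewers) {
    counts[v.quality] = (counts[v.quality] ?? 0) + 1;
  }
  return counts;
}
```

Every filtration path described later in this article ultimately reduces to calling something like `applyFilter` and re-aggregating the result.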
Insights 2.0 in action
Based on the sketching phase, the Overview is composed of four components. The first is the labels summarizing an event’s key metrics.
“What summarizes a streaming event?”
For the rest of the page, we exposed the most important information from the previously mentioned three perspectives to a streaming event.
The first perspective is the Viewers.
The viewers’ panel in the overview page
It shows the highlights of viewer-related data: the 15 largest locations, the viewer count over the course of the event, and finally the geographical distribution of viewers.
The second perspective is the Event quality.
The event quality panel in the overview page
It shows the overall quality of experience, requests per bitrate, and finally the locations with quality lower than good (in the example shown, there were none).
The third & final perspective is the Network.
The network component in the Overview page
It shows the distribution type, savings, connection types, and source load over time, all of which are crucial for assessing the event from a network-performance point of view.
Assembling all the previous components together eventually composes the Overview.
As with the Overview, starting from its individual components, Explore was designed around three main components. The first is the viewers’ representation. Since this is a somewhat advanced view, we wanted to make it as flexible as possible, enabling the user to choose between different representations: simply a list or a map. However, we added a couple of subviews to each as well. For the list representation, the initial plan was to show a list of data aggregated by location, accompanied by the plain viewers’ list. However, for performance reasons, we resorted to implementing them separately as two views.
The two subviews can be accessed via a toggle button at the top right corner.
Viewers’ list representation in Explore; viewers aggregated by locations on the left, and plain viewers’ list on the right
As for the map representation, we added two views to it: viewer count and quality of experience. Using these two views, a user can easily answer questions like “Which location has the highest viewer count?” and “Which location has the worst quality of experience?”.
As with the list view, the two subviews can be accessed via a toggle button at the top right corner.
Viewers’ map representation in Explore; using viewers count and quality of experience
The second component in Explore is the metrics panel. It contains graphs, both timelines and aggregated pie charts, for the filtered viewers’ list, grouped according to the recurring three perspectives.
Metrics panel in Explore; using the recurring 3 perspectives to an event
The final component in Explore is the applied filter. Its first purpose is to serve as a constant reminder of the currently viewed set of viewers; for that, we added a simple tag-based panel displaying the current filter. Its second purpose emerged after some preliminary user testing: some users were not aware that applying a filter (in the various ways discussed in the next section) affects the whole view. So, we added a panel showing statistics for the currently filtered set of viewers relative to the whole set, which animates when a filter is applied. The combination of the tags in the current filter and the stats panel made it clear that a state change of some sort had occurred, which solved the problem.
Current filter in Explore
Filter data stats in Explore
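The kind of relative statistic that stats panel surfaces can be sketched like this. The function name, label format, and return shape are hypothetical; the idea is simply to express the filtered set as a share of the whole set so the user notices the state change:

```typescript
// Hypothetical stats for the filtered-set panel: how much of the whole
// viewer set the currently applied filter selects.
function filterStats(total: number, filtered: number) {
  const share = total === 0 ? 0 : filtered / total;
  return {
    filtered,
    total,
    // A human-readable summary, e.g. "42% of 200 viewers".
    label: `${Math.round(share * 100)}% of ${total} viewers`,
  };
}
```

Recomputing this whenever the filter changes (and animating the update) is what makes the state change visible.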
Assembling the prior three components together yields Explore.
Before moving on to the wiring-up of Overview with Explore, I will dissect the various filtration paths in Explore, because this view was designed with extreme flexibility in mind and we tapped into some not-so-obvious filtration paths.
So how do you filter the original set of viewers in Explore?
The most obvious way is using the search fields in the viewers’ list.
Filtering using the search fields in the list of viewers
As for the locations list, it was a conscious decision not to add search fields, because we thought that would unnecessarily complicate filtration at this phase. However, we exposed a subtle filtration path using the links in the location column: clicking one applies a filter showing only the viewers in the clicked location.
Filtering using the locations table
Moving on to the more graphical path: the map. The implementation here was a bit tricky because the map supports two types of interaction, drilling down and filtering. Since we didn’t want to get caught up in double-click hell, we resorted to a prominent toggle button to switch between the two interaction modes.
Filtering using the map
The default is drill-down, so clicking on the US, for example, loads the country’s map with its states highlighted. In filtration mode, however, clicking on a region applies a filter showing only the viewers in that region.
The final path, and the most subtle one, is through the metrics panel.
Filtering using the charts in the metrics panel
Clicking on various charts in the metrics panel applies a corresponding filter as well. For example, clicking on a slice in the quality-of-experience pie chart shows only the viewers who reported the clicked quality of experience.
Overview to Explore shortcuts
In this section, I will go back to the Overview and discuss its subtle wiring with Explore. A very common scenario is that an advanced user wants to further investigate a piece of information on the Overview using the more flexible Explore. Without shortcuts from the Overview to Explore, the user would have to navigate to Explore and apply a certain filter just to reach a state initially spotted on the Overview. That is tedious, of course, and could interrupt what should be a smooth flow; hence the shortcuts.
The shortcuts are implemented using the now-familiar filtration paths in Explore. The first one uses the lists: in the two lists in the Overview, clicking on a location link redirects to Explore with the corresponding location filter applied.
Filtering using largest locations and locations with quality issues lists in Overview
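A common way to implement such shortcuts is to encode the filter in the target view’s URL so it survives the redirect. The article doesn’t describe Hive’s mechanism, so the route, parameter names, and functions below are purely illustrative:

```typescript
// Hypothetical shortcut: an Overview hotspot links to Explore with the
// corresponding filter encoded in the query string.
function exploreShortcutUrl(filter: Record<string, string>): string {
  const query = new URLSearchParams(filter).toString();
  return `/explore?${query}`;
}

// On load, Explore would decode the query string back into a filter
// and apply it before rendering.
function parseFilterFromUrl(url: string): Record<string, string> {
  const query = url.split("?")[1] ?? "";
  return Object.fromEntries(new URLSearchParams(query));
}
```

The round trip means any Overview hotspot only needs to know which filter it stands for, not how Explore applies it.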
The second one uses the map, with the same interaction pattern implemented in Explore, except that clicking on a region in filtration mode redirects to Explore with the corresponding location filter applied.
Filtering using the map in Overview
The final one uses the pie charts. As in Explore, clicking on a slice redirects to Explore with an already-filtered set of viewers.
Filtering using charts in Overview
An Inevitable Conclusion
Looking back at the whole thing, I admit it didn’t feel as smooth as I am making it sound right now. The initial push of conceptualizing and prototyping Insights took a bit longer than planned, and we weren’t able to involve users in the design process the way we wished. However, despite the initial turmoil, we delivered nearly on time. Furthermore, what was unique about building this app was that our whole organization was truly invested in its development. All of us at Hive were, and still are, proactive in giving feedback, whether from a design or usability perspective, or simply by going on a bug hunt.
Insights 2.0 is up and running. It has been well received by our customers, but we always welcome suggestions on how it can be improved.
In the CFTT at Hive, our job is to create a multitude of tools that let our customers interact with our various streaming solutions. Hive Insights was, in a way, our first product. We are very happy with the results to date, but we have lots of experiments to explore and improvements to make. As for the process itself, we just finished our first design sprint for another tool we are developing. In a week, we conceptualized the whole thing, made a mockup, and tested it on many users. It was a vast improvement over what we did previously, and eventually we were able to implement a robust tool in less than a month. Furthermore, we have started to involve more users in our design process, conducting periodic user interviews, which always provide valuable insights to consider when planning our future sprints.
What you just read
In this article, I discussed the process we went through to design and implement the first iteration of Hive Insights. I showcased both the glaring and the subtle interactions in the shipped product. Finally, I concluded with a retrospective: where we are now and where we are heading from here. Please email firstname.lastname@example.org if you’d like to make suggestions or get further information.