Behind the product: Miro
Our guest, Anna Boyarkina, has been leading Miro's product function from its first few users to today, supporting over 60M users across 200K organizations.
Brought to you by:
CustomerIQ is the AI platform to turn customer feedback into value. CustomerIQ surfaces actionable insights from channels like CRM notes, surveys, support tickets, and call transcripts to help your product and GTM teams generate fact-based content. Drive revenue, increase retention, and align every team with the needs of your customer with one powerful AI platform. Learn more here.
Before writing a line of code for CustomerIQ, we interviewed over 25 product leaders to learn how they worked and what their biggest challenges were. We were working to validate some of our assumptions about how product teams use customer feedback.
We learned two main things during those conversations:
The first, of course, was that teams struggle to aggregate and synthesize their customer feedback.
But separately, on the topic of tools these teams used and loved, Miro came up time and again. Teams we interviewed used Miro for everything from brainstorming to managing product roadmaps and seemed to love every aspect of it.
Naturally, I was so excited to sit down and chat with Anna Boyarkina, Head of Product Excellence at Miro, to learn more about how Miro builds products people love.
Today Miro supports over 60M users across 200K organizations including Nike, IKEA, Deloitte, WPP, and Cisco. But Anna has been leading the product management function in its various shapes and sizes since Miro was merely a prototype.
Over her 13-year tenure with Miro, the company has grown from a fledgling startup to a $17.5B enterprise (as of its January 2022 valuation).
Here’s what we learned:
How Anna discovered Miro’s first use cases
Miro’s “Painted Picture” planning strategy
Their AMPED team structure
How Miro runs experiments
The challenges with synthesizing customer feedback
The importance of trust if you want to move fast
What Anna looks for in new hires
Please enjoy our conversation with Anna Boyarkina, Head of Product Excellence at Miro.
Behind the Product: Miro
Presented by CustomerIQ
CustomerIQ is the AI platform to help teams aggregate, search, and synthesize customer feedback.
CustomerIQ aligns teams with insights from channels like CRM notes, surveys, support tickets, and call transcripts, to help drive retention, product improvements, and revenue. Align every team with the needs of your customer and deliver an exceptional experience with one powerful AI platform.
Tell us about your first role at Miro
I've been at Miro for almost 13 years, basically since the beginning. I initially joined on a part-time basis as the company was in its exploration phase, and my skills are not in engineering but rather somewhere in the mix of marketing, product, and UX. Eventually, I transitioned to full-time as we worked to figure out the business side of things.
I started as a PR and community manager, which didn't quite capture what I was doing, but it involved a lot of reaching out to customers and trying to understand how they were using our product, which at the time was called RealtimeBoard. I had to figure out who would buy it and why, so I spent a lot of time talking to customers, answering support tickets, and just generally seeking to understand their experience and needs.
As the company started to scale up, it became clear that my skills were best suited to focusing on product development. This wasn't a role I consciously pursued – it was more of a natural progression. I think it caught me by surprise when I realized that I was essentially doing product management.
How did you identify the best use cases for Miro?
When we first built Miro, it wasn't initially clear what the specific focus would be. We toyed around with the idea of it being a B2C product, possibly even integrating some social network elements, for example, allowing people to subscribe to view other people's boards. However, through engaging with our users and observing their interaction with our product, we realized that most individuals were actually using Miro for work rather than personal use. This marked our first significant insight and milestone.
Following that, we also noted that Miro functioned more effectively as a team-based tool rather than an individual one. This influenced our perspective on our business model and product functionality. It became apparent that building a B2C product differed vastly from building and evolving it into an enterprise-level solution.
Who were your first users?
Our first audience was primarily people working in areas adjacent to design thinking, especially UX researchers conducting workshops. At that time, design and customer centricity were quite the trend. So naturally, these were the people who found major use in our product, using it for things like customer journeys, facilitating workshops, and so forth.
Product managers and engineering teams were also a significant user base. Instead of using complex software, they found it more convenient and efficient to use our product for visual tracking on the board - you know, kind of like good old sticky notes.
The education sector did have a notable presence, but it wasn't the core focus of our business. But all things considered, I'd say our early users were quite a diverse group!
Were there any big inflection points over the years that led to where Miro is today?
There were three significant inflection points over the years that helped shape the Miro we know today. The first was when we switched our platform from Flash to HTML in 2015. Frankly, it was a big leap. The thing is, Flash was falling out of use, and it was almost impossible to pass enterprise security reviews with it. Plus, we had to manage multiple versions of our product, as we already had hundreds of thousands of users.
The second turning point happened when we dove into the enterprise field around 2016. At that juncture, we weren't sure if we were equipped or even had the potential to enter the enterprise market. We were unsure if our product was exclusively targeted towards small teams for miscellaneous tasks. However, our experimentation provided us with enough evidence that our product could be utilized in enterprises. So that informed our strategy for the next few years.
Lastly, the COVID-19 pandemic presented an unforeseen but impactful shift in our market. As remote work and collaboration became the norm, our product suddenly became indispensable for teams as they tried to define their shared workspace in a virtual setting. Our product essentially became their shared space.
How far out do you plan and how has that changed over the years?
When we started, planning was very basic. It was just a spreadsheet with weeks as columns, so everyone involved knew what the goals were for each week. Every week we released experiments or new functionality, and we would have company all-hands meetings with the 12 to 15 of us. We'd discuss what we were doing and what every person's focus was.
But then, a few years later, when we felt the company had become more sustainable, we began to use OKRs. They were becoming popular in the market around 2013, so we thought, why not? At first, OKRs were set at a personal level, done quarterly, but then we switched to company-level OKRs and removed the personal ones.
A couple of years after that, we saw a need to think about a longer-term vision for the company. We were greatly inspired by Atlassian and admired their company culture, growth, and focus. That's why we started planning three years ahead. We called this our “Painted Picture”: a snapshot of what we envision the company will look like in the future, covering different aspects like company culture and business direction. This then influenced our strategy and biannual OKRs.
What percentage of ideas come from the top down and what percentage from the bottom up?
I wouldn't definitively quantify it as 50% top-down or 50% bottom-up. What I can tell you is that our strategy largely comes from the top down, but solutions are significantly informed from the bottom up.
We somewhat operate under the concept of a double-loop model. When we put forward a direction, we invite our teams to validate it and also provide their own valuable input. It's not so much that we follow this process rigidly, but rather that there's always a continuous feedback loop in operation.
When we're formulating our strategy, for example, we hold share-out sessions where feedback can be freely given. It is this input that helps inform a lot of our strategic thinking. An important point to keep in mind is that whatever strategic decisions we take, they should be appealing not just to the end-users of our product, but also the admins who are effectively the product's custodians within the organization.
But it's not just about setting a direction and feeding off of internal feedback. We then take the proposed direction to our customers to try to understand the practical use cases and prioritize them accordingly. We recently launched a new product which had already seen a lot of demand and received excellent feedback during its private beta testing phase. This, I believe, is testament to the incredible work of our team and that definitely makes me proud.
How are the product teams structured?
Yeah, so in our setup, we have several product streams. These streams focus on different personas or specific use cases. One key aspect is that every product stream is united by a common mission and vision. You know, I strongly believe that teams are much more engaged when they're working within the same domain and have a shared mission. It gives a real sense of purpose.
However, we try to ensure a product stream isn't too big. Coordinating 200 people can be a real challenge. So, as a rule of thumb, we try to keep it under 100, but admittedly, sometimes that's just not possible.
We're also really big on cross-functional work. Within each stream, there are three EPD (engineering, product, and design) leaders who work together on a day-to-day basis. In addition, they have cross-functional partners in analytics and product marketing. We've tagged this setup with the term AMPED, an acronym covering all of these functions: analytics, marketing, product, engineering, and design.
Every decision is driven by these people because they're the ones in charge, and it's essential they're all in sync. We experimented with different setups, including ones with separate leaders who weren't communicating. You'd often end up with design handing something off to engineering without context, and then it would rebound to product marketing. We learned it's more effective to include everyone involved in the collaboration from beginning to end.
Everyone understands the challenges of each function, the customer problem we're tackling, and how we're going to solve it. Honestly, promoting this interdepartmental collaboration was a game-changer for how we operate.
What’s the ratio of engineers to PMs?
On average we have around eight engineers for every one product manager (PM). But, it's important to underline that teams can vary quite a bit. We usually operate with one designer per two to three teams.
And then there's the matter of analytics and marketing - these are shared resources among the teams, and it's an evolving situation. We've recently adjusted the operating model of the teams, even changing the focus of some roles.
In general terms, it shakes out to about one analyst for every three to five teams. Product marketing roles work in a similar way. They usually zero in on a specific domain, and inside that domain you might have one or two people working across anywhere from four to ten teams.
How are you managing behavioral tracking and how has that evolved?
From the get-go, we've been all about measuring everything. And that data needs to be cross-referenced with customer feedback, another huge element we've always considered. So essentially, we scrutinize every single click, every step - you name it, we're on top of it.
We’re also big on experimentation; we've integrated that into our approach over time. Now, we've got this beautiful fusion of using customer feedback to validate the signals we spot in the data, and vice versa. So, in a nutshell, that's how we're managing behavioral tracking. It's been an evolution, but I like to think we're always improving, always morphing into something better.
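Anna doesn't go into the tooling behind this, but if you're curious what "scrutinizing every click" can look like in practice, here's a minimal, hypothetical sketch of a behavioral event emitter in Python. The event names, properties, and `track` function are our illustration, not Miro's actual pipeline, which would batch events and ship them to a warehouse or analytics tool.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class ProductEvent:
    """One behavioral event: who did what, where, and when."""
    user_id: str
    org_id: str
    name: str                      # e.g. "board.sticky_note.created"
    properties: dict = field(default_factory=dict)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)


def track(event: ProductEvent) -> None:
    # A real pipeline would batch these and send them to a warehouse;
    # printing the JSON stands in for that here.
    print(json.dumps(asdict(event)))


# Example: a user drops a sticky note on a board.
track(ProductEvent(
    user_id="u_123",
    org_id="org_456",
    name="board.sticky_note.created",
    properties={"board_id": "b_789", "source": "toolbar"},
))
```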
What’s an example of experimentation at Miro?
One interesting example of experimentation at Miro is our website experimentation. This involves changing the copy, altering how pages work and look, and even experimenting with our brand's visual identity. We recently did a restyling with a new visual identity. That was a case where we had to test things beforehand to ensure we didn't see a dip in our metrics. You see, when we did a rebranding or restyling in the past, there was usually an initial period where people didn't respond very well, so it was important to keep an eye on that.
The other facet of our experimentation is within the product itself. It could involve introducing new functionality and then gauging user response. I consider this an experiment as well, because we're testing things to understand if it's garnering a positive response and if the overall use case is improving. Essentially, we're trying to ascertain if people are spending more time in the product than before and whether it's enabling them to work faster, among other things.
We've also dabbled with pricing experiments, testing different things within that area. I'm always curious to see the outcomes and learn from them.
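Anna doesn't describe Miro's experimentation system itself, but tests like these typically rest on two mechanics: deterministically bucketing users into variants, and watching guardrail metrics for the dip she mentions. Here's a minimal sketch under those assumptions; the function names and thresholds are ours, not Miro's.

```python
import hashlib


def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


def guardrail_ok(control_rate: float, treatment_rate: float,
                 max_dip: float = 0.002) -> bool:
    """Flag the test if the treatment's conversion dips more than we tolerate."""
    return (control_rate - treatment_rate) <= max_dip


# Example: bucket a visitor for a hypothetical homepage restyling test,
# then check a signup-conversion guardrail.
print(assign_variant("visitor_42", "homepage_restyle"))
print(guardrail_ok(control_rate=0.041, treatment_rate=0.040))  # True: within tolerance
```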
People most often talk about successful experiments, but how do you handle failed experiments?
When it comes to handling failed experiments or features in product operations, first you need to understand whether the experiment or feature has market potential. Then we usually test it with a subset of customers before releasing it at full scale. We label this stage 'private beta.' That way, customers get a high-touch experience with the new feature and understand clearly that what they're interacting with is experimental. So even if an experiment fails, or we decide that we're probably not going to work on it further, the message and expectations were clear right from the outset.
How do you manage qualitative customer feedback and what are the challenges?
This part is definitely a challenge. I still remember the good ol’ days when we could manage feedback manually. We would manually cluster, in a spreadsheet, all the feedback that came in through support tickets and social media. The beauty of that approach was being able to directly trace the release of a feature back to the exact users who asked for it.
However, when this spreadsheet reached more than 1000 rows, it became virtually unmanageable. That was probably a decade ago. We began experimenting with different solutions, such as Productboard amongst others, to capture the information.
The main challenge was that these solutions still required a significant amount of manual tagging. Understanding the information and making sense of it, especially if you weren't the person who input it, can be difficult.
We then decided to build a customer feedback system in a BI platform, Looker. That helped, but we are still exploring more efficient solutions to manage it. We get over 100,000 pieces of feedback monthly, which makes manual processing simply impossible.
At the end of the day, the goal is to extract valuable insights from the feedback - to understand customers’ needs and wants. And that's challenging in an ocean of thousands of comments. So, we are looking into more efficient solutions that can facilitate this process.
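Anna doesn't share how the Looker setup works under the hood, but the core problem she describes — grouping tens of thousands of free-text comments into themes without hand-tagging each one — is essentially a clustering problem. Here's a toy sketch using scikit-learn with invented feedback snippets; at 100,000+ items a month this would of course run in a proper data pipeline rather than a script.

```python
# Toy illustration: cluster raw feedback comments into rough themes.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "Please add dark mode to boards",
    "The Jira sync keeps dropping my updates",
    "Would love a dark theme option",
    "Exporting large boards to PDF is slow",
    "Jira integration doesn't update ticket statuses",
    "PDF export times out on big boards",
]

# Turn each comment into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, feedback)):
    print(cluster, text)
```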
Timely note from CustomerIQ
CustomerIQ helps B2B teams aggregate and synthesize feedback from sales, CS, support, and product with AI. Every note is processed, categorized, quantified, and completely searchable so teams can easily get a pulse on customer experiences and prioritize their work.
How do you go from feedback to product?
Initially, we think about the strategy, looking closely at insights such as trends and patterns. We have research and insights teams that support us with analysis and interpretation so everyone can access and validate the assumptions.
We heavily prioritize our customers and want to interact with them, delving into their insights, which then shape our hypotheses. Once we have this feedback, we try to validate whether it's something that makes sense. This is done by describing the idea, usually in a product review process. In this process, our product managers, designers, and various other managers come together to propose something in what we call a kick-off.
The aim is to figure out if the problem is worth solving, and if there is evidence to support it, such as customer feedback. If the idea is supported, the next steps include designing a solution and a testing phase where we talk to customers to validate the product. Sometimes, if monetization is a significant aspect, people are asked to pay beforehand.
Lastly, the product goes to the user, and we aim to close the feedback loop thereafter. I don’t think we’re doing especially well in this last area currently, but we're aiming to refine the process and get better at communicating with those who requested specific features.
How does Miro use Miro?
We use it for many things, though of course, we operate within its appropriate use-cases and avoid using it for tasks it's not designed for. For example, we use Slack for communication, even though Miro has a chat function.
In terms of specifics, we utilize Miro extensively for ideation, for strategic thinking using different mental models, for sticky notes, and for workshops, both virtual and in person. In fact, presentations are an area where we use Miro extensively and choose not to use any other platform. Solution designs, wireframes, user flows - they all have a place in our Miro workflow, and it gives us a comprehensive visual platform.
We have also implemented Miro at a micro-level, allowing teams to track tasks, particularly benefiting from the Jira bi-directional sync Miro offers. This helps teams collaborate on work in Miro, synchronize it in Jira, and ensure everyone is up to date.
How do you manage the communication of new features with users?
Our process for communicating new features is pretty sophisticated because a lot of teams are involved and it's crucial to keep everyone in the loop. Marketing, sales, customer success, and support teams all need to be aware of what’s going on. We also make sure our customers know; those who are sensitive to change management especially should be informed of upcoming changes beforehand, not afterwards.
Most, if not all, features are released to Miro employees initially because we want to test things first. This allows us to assess all aspects of the product, from bugs and user experience to the specific use of the product. Once that internal phase is over and the feature has been thoroughly tested, we usually have a private and then a public beta stage before it goes into general availability.
We don't launch anything without completing an extensive pre-release checklist to ensure it's accessible, compliant, and localized. This is really important because we have a global array of customers with different needs.
Our customer-facing teams are then notified about the new features. They typically have access to the customer-facing roadmap, so they understand when new things are coming. We also notify our enterprise customers via a special portal, so admins can stay updated with changes.
It’s a strategic move we’ve made to prevent any unexpected surprises. It’s crucial to keep everyone informed, because the people who handle change management can't do it efficiently if they aren't aware of what's coming.
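Miro hasn't said what powers these rollout stages, but a progression from internal dogfooding to private beta, public beta, and general availability is commonly gated with feature flags. The sketch below is a hypothetical illustration of that gating logic, not Miro's actual system.

```python
from enum import Enum


class Stage(Enum):
    INTERNAL = 1      # employees only (dogfooding)
    PRIVATE_BETA = 2  # hand-picked customer orgs
    PUBLIC_BETA = 3   # anyone who opts in
    GA = 4            # everyone


def is_enabled(stage: Stage, *, is_employee: bool,
               in_beta_list: bool, opted_in: bool) -> bool:
    """Decide whether a user sees the feature at the current rollout stage."""
    if stage is Stage.GA:
        return True
    if stage is Stage.PUBLIC_BETA:
        return opted_in or in_beta_list or is_employee
    if stage is Stage.PRIVATE_BETA:
        return in_beta_list or is_employee
    return is_employee  # INTERNAL


# Example: during private beta, a customer outside the beta list doesn't see it yet.
print(is_enabled(Stage.PRIVATE_BETA, is_employee=False,
                 in_beta_list=False, opted_in=True))  # False
```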
If there were one lesson you could send to everyone who wants to learn how you think and operate, what would you want to include?
The organization moves at the speed of trust.
Over the course of my experience at Miro I've worn a lot of hats. And with every new project, I've learned that without trust, all your energy is spent wondering who’s aligned with what, what the agendas are, and trying to understand the goals of differently incentivized people.
So invest in trust, especially when setting up a new team or a new project. Create a safe space where we’re all pursuing the same goals. This ties into the idea of vulnerability. As a leader, it’s absolutely crucial for me to model this trait because without it, teammates don't feel safe. I believe vulnerability is an essential leadership trait, irrespective of your title - it's more about behavior.
Lastly, there's the importance of continuous learning. Look at what's happening in the market right now. I've heard people say it feels like we're in the early days of the Internet with the advent of AI. You can't avoid this. You have to upgrade your knowledge consistently, focusing not just on big things but on smaller ones too. It allows you to expand different avenues of thinking about customer problems and gain new insights. Keep talking to people, keep reasoning about things that might not be in your usual area of business focus. Embracing new evolutions is just a must.
What do you look for in new product hires?
I always look for curiosity, empathy, attention to detail and a strong alignment with our company's values.
Curiosity, because a product manager should always be driven by an intrinsic motivation to learn. Empathy, because to be customer-focused, which I consider essential, one needs to empathize with the consumers' needs.
We also need people with attention to detail because our product is very visual, and everything we do is seen by the customer. This focus on detail extends to the whole craft of product management. How you do anything is how you do everything.
The final key point is alignment with our company's values. This is not just a box to tick during recruiting or performance reviews but something that’s evaluated continuously.
Our hiring process, which runs through recruiter and hiring manager interviews, often culminates in a test case that aligns with what the candidate will potentially be focusing on. Usually, it has a visual component, like asking candidates to create wireframes of solutions, just to gauge their product craft and attention to detail.
A favorite question of mine is asking candidates to name products they admire in certain areas and exploring why with follow-up queries.
In the end, recruiting is a careful balance of evidence-based questioning against our four values and selected behaviors. The stories people tell us in the interview process can reveal much about a candidate's fit with our company.
What are fun traditions or rituals you have on your team?
One fun tradition is what we call "Friday wins." It's a weekly gathering where the builders on our team present a quick, five-minute demonstration of what they've delivered to our customers over the past week. It’s like show and tell, only less salesy. It provides the team with a sense of realism and accomplishment, seeing the fruits of their labor in action.
Another favorite event is our biannual hackathon. In our last one held in December, we had over 50 teams participating. These hackathons take us out of our day-to-day work and give rise to innovation and creativity. Being able to accomplish something unique creates an incredible sense of motivation.
We are also reviving the tradition of "fails night." After all, it’s not all about successes – we want to normalize the concept that it's okay to fail. This event helps us learn from each other’s missteps.
Lastly, one of the more serious, yet vital, traditions is our product reviews. Though they can sometimes be stressful, they're key to our product-focused company. These reviews serve as a significant learning platform, as colleagues get to learn from each other. What we're currently changing is the size of the meetings, since decision-making becomes challenging with too many people involved.
A huge thank you to Anna Boyarkina for sitting down with us and sharing her time and expertise. You can follow along with Anna on LinkedIn here.
If you’re finding this newsletter valuable, consider subscribing and sharing it with a friend.