This is a continuing series of articles documenting, month by month, a full year of building a brand-new Product Operations function at HeliosX, a rapidly scaling, £150m+ HealthTech. It is designed to show you the journey; it is not a warts-and-all exposé.
I make no apologies that this focuses on the achievements, the methods and how we overcame challenges in a positive light, showing a practical journey with details to reuse and take away. It is still my job, after all!
This series is not an advertisement for the company, but is published with the blessing of HeliosX.
Read Month 6 here
May has comprised a greater volume of strategic initiatives than has been typical in this series. I am a firm believer that Product Ops involves a lot of tactical effort, particularly when building 0-1, but now, 8 months in, several areas of the division feel more stable thanks to that tactical work (more on this in this edition), giving me the capacity to look longer term at how we want the division to operate in a year or two's time.
Capacity Surfacing
Last month, I detailed how we have evolved the implementation of our Sprint Goals to surface more detail on the types and size of effort needed to complete the Goals. This gives both product teams and leadership a clear picture of what is in flight. The next iteration is to both see and prove the ability to deliver on that picture, by surfacing in detail the engineering capacity available per squad, per Sprint. Whilst demonstrating this with numbers to leadership is one outcome (and one that in time I hope will fade in importance), far more valuable is simply surfacing this to the product teams when they are planning the Sprints/Goals - seeing what capacity they have to play with before they even start planning.
What have I implemented? In my trusty Airtable product database, which continues to grow, a simple scheduler of sorts collects the availability - or rather unavailability - of each engineer individually. When an engineer is on leave, or scheduled onto our support desk (which takes them out of circulation for the Sprint), a record is added, linking their ‘profile’ record and the unavailability record to the Sprint in question. Through a series of formula fields, I can then calculate how much front-end specialist engineering capacity a squad has for a given Sprint, and how much back-end too (full stack was fun to play with, but for ease of use the engineering managers and I agreed to simply split any full-stack engineer’s capacity 50:50).
Capacity is measured in days; a Sprint is 10 days (2 working weeks). If an engineer is on the support desk for a week (which is usual), their capacity for that Sprint is reduced to 5 days. If they are also on leave for 2 days, it drops to 3. So now each PM has access to these stats at the point of planning the Sprint, as both the support desk rota and leave days are typically known well in advance.
We use these stats purely as a guide; they are NOT an input for further calculation. The headlines are percentages of full capacity, so if a Sprint is down 25% on front-end capacity, the team knows to plan around 25% less FE work for that Sprint. We also use these totals in the Sprint review reports to justify any reduction in planned and completed work that is simply down to capacity, and ultimately to paint a clearer, more dependable picture of performance and velocity.
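For anyone who wants to replicate the arithmetic outside Airtable, here is a minimal sketch in Python of the same calculation. The record shapes, names and example numbers are my own placeholders rather than the actual base schema; the logic is simply Sprint days minus unavailability days, with full-stack capacity split 50:50 between front-end and back-end.

```python
from dataclasses import dataclass

SPRINT_DAYS = 10  # a Sprint is 2 working weeks

@dataclass
class Engineer:
    name: str
    squad: str
    discipline: str  # "FE", "BE" or "Full-stack"

@dataclass
class Unavailability:
    engineer: str   # engineer name
    sprint: str     # Sprint identifier
    days: int       # e.g. 5 for a week on the support desk, 2 for leave

def squad_capacity(engineers, unavailability, squad, sprint):
    """Available FE/BE days and percentage of full capacity for one squad and Sprint."""
    available = {"FE": 0.0, "BE": 0.0}
    full = {"FE": 0.0, "BE": 0.0}
    for eng in engineers:
        if eng.squad != squad:
            continue
        days_out = sum(u.days for u in unavailability
                       if u.engineer == eng.name and u.sprint == sprint)
        days_in = max(SPRINT_DAYS - days_out, 0)
        if eng.discipline == "Full-stack":
            # agreed with the engineering managers: split full-stack capacity 50:50
            for d in ("FE", "BE"):
                available[d] += days_in / 2
                full[d] += SPRINT_DAYS / 2
        else:
            available[eng.discipline] += days_in
            full[eng.discipline] += SPRINT_DAYS
    percent = {d: round(100 * available[d] / full[d]) if full[d] else None
               for d in available}
    return available, percent

# Example: Sam is on the support desk for a week (5 days) and on leave for 2 days
engineers = [Engineer("Sam", "Squad A", "FE"), Engineer("Alex", "Squad A", "Full-stack")]
unavailability = [Unavailability("Sam", "Sprint 12", 5), Unavailability("Sam", "Sprint 12", 2)]
print(squad_capacity(engineers, unavailability, "Squad A", "Sprint 12"))
# -> ({'FE': 8.0, 'BE': 5.0}, {'FE': 53, 'BE': 100})  Sam has 3 of 10 FE days left
```

The percentage output is the headline figure the PMs see at the point of planning; the day totals sit behind it if anyone wants to dig in.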
Maturity Matrix - Update
A small update on the Maturity Matrix since last month - we have continued to refine the criteria through several reviews, now encompassing the entire product leadership (CPO, 2x Directors of Product, VP Engineering and myself), and have begun to self-assess independently of each other (so as not to be influenced by the scoring of others). Each of us has taken a couple of rounds of refinement and reassessment to settle on scores, and we now have averages for each criterion. Scoring is from 0 (no maturity) to 5 (highest maturity).
Maturity, of course, has connotations of getting old - whereas I prefer to think of it as fully developed, like a sprawling oak tree!
Our next phase is a round of estimation-poker-style discussion on any scores that are not aligned, to understand the different perspectives. Where the range between the lowest and highest score for a criterion is greater than 1, those behind the scores will give some context to the group at a F2F session coming up in June, and we will decide what actions to take, and in what priority order, to improve our maturity over time.
The results so far have been very interesting, with a 50:50 split between criteria where we are very aligned (everyone scoring largely the same) and those where we differ significantly - and, in the latter, it is striking how different perspectives based on role and tenure at the organisation influence the scores.
For the Excel geeks: plenty of cross-workbook links automatically pull, and then average, the scores from each colleague’s copy into the master workbook - as each score changes, I can instantly see it filter in and affect the averages.
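If spreadsheets aren’t your thing, the same mechanics are easy to sketch in a few lines of Python. The criteria names and scores below are invented for illustration; the logic is just an average per criterion, plus a flag wherever the spread between the lowest and highest score exceeds 1 - our trigger for discussion at the June session.

```python
from statistics import mean

# Hypothetical criteria and scores (0 = no maturity, 5 = highest), one value per assessor
scores = {
    "Discovery":       [3, 3, 4, 3, 3],
    "Data & insights": [2, 4, 2, 1, 3],
    "Ways of working": [4, 4, 4, 5, 4],
}

for criterion, values in scores.items():
    spread = max(values) - min(values)
    status = "discuss at the F2F" if spread > 1 else "aligned"
    print(f"{criterion:<16} avg={mean(values):.1f} spread={spread} -> {status}")
```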
The long-term plan here will be to assess at least once a year and chart these scores to show our progress.
Capability Matrix
Going somewhat hand in hand with the Maturity Matrix, I was asked to get involved in producing and aligning a career framework/capability matrix for product managers - this time focusing on individual skills and promotion pathways (rather than on the division, as with the Maturity Matrix). This has been in demand from our PMs for some time, and with the expansion of our People Team at HeliosX, we had the capacity to make progress here, too.
I’ll admit - I cheated. Rather than reinvent the wheel completely, I combined a number of existing frameworks (one from the inter-web, my own Product Ops one for core skills, and one my colleague had built herself previously). The end result was a fairly quick turnaround, yet a very detailed capability matrix levelled across all IC roles from Associate PM to Principal PM (including how and where this overlaps with the management track). We have descriptors for every headline skill for each role type, giving a clear indication of the expectations at each level.
Aside from some negotiation with the People Team over level numbers vs role names, to align with other departments (ongoing at the time of writing), this is both done and nothing particularly special. Other than that - and here is some advice for readers - don’t reinvent the wheel; the hard work has already been done by others, so use it!
What I have done additionally is map the capabilities to the elements on the Maturity Matrix. Around 60% of items are mapped to capabilities, and I believe this will help give wider context to the PMs on the importance of the skills they are measured against and how this fits into a maturing division.
Maybe…
Onsite Planning
May saw our Q2 Digital Experience Division onsite day, the second event like this we have run this year - and ever. 100 product managers, engineers, designers, data engineers and leaders came together to share plans, projects in flight, new ideas for ways of working and initiatives being played with, and several opportunities for feedback on the journey we are on as a division and a business.
Planning wasn’t as smooth as it could have been due to some external factors, meaning the venue, format and date changed several times, which also impacted the agenda until quite close to the day. However, the day itself was very smooth, the agenda was packed with (mostly) useful sessions, and the overall feedback was extremely positive, as indicated by the post-event feedback surveys (my Airtable to the rescue once again!).
Our agenda and the feedback we received:
CPTO Update and Tech Strategy: Liked and useful to hear in an organised way (as opposed to 1:1 or third hand).
Lightning Talks: Best session of the day, where 10 different colleagues from across product, engineering and design put together short informative talks on what they are working on and new ideas. Very positive feedback, particularly for those who were not used to speaking in front of a large audience.
Leadership AMA: The team liked hearing directly from leadership on their individual (pre-submitted) questions, even where the answers may not have been what they wanted to hear. Next time, we’re opening it up to live questions/follow-ups as well.
CRO Insights: A great deep-dive session on how the CRO team operates and the experiments they have planned. Very detailed and well-received.
Breakout Sessions (x2): Groups provided feedback on different topics. This was hit and miss. The concept was good, but there was no time for a live playback of the ideas, and no clear explanation of what would be done with the feedback itself. Plenty to learn from here.
Overall feedback from the onsite day: very positive to all be together and have that F2F time; most sessions hit the mark, with some detailed feedback to make it even better next time. It was also noted that this was a significant improvement on our Q1 event.
AI Tools - Update
Events are moving at pace when it comes to me, my role at HeliosX and AI for the business. I am writing this on the train home from our Business Strategy afternoon, where it was announced that we are adopting a more AI-first mindset in how we work internally - start using AI tools in all aspects of our work to learn, and (conceptually) to be more efficient…
In the preceding week, I’ve experimented with a number of additional tools to find useful new ways of working, using AI, that I can try, refine, and iterate on before providing to my teams as a more robust ‘hack’ for an element of their work.
In Confluence, I have been using their Rovo AI tool to produce release notes for the current Sprint for a specific squad. Piloted with 2 squads to verify the output - it’s not bad! It’s formatted well, it’s suitable for a less technical audience, and it brings out the sentiment of tasks well. It can be a bit grandiose (a bug fix comes out as ‘delivering product excellence’!), but pretty good. Ultimately, this would save around 80% of the effort, with the remaining 20% being a top and tail and rewriting perhaps 1 in 10 descriptions.
In Jira Service Desk, I have been using the AI tools (it is Rovo, but not yet labelled as such) to identify and categorise common support ticket types, and to either use a chatbot to draft a response from our growing knowledge base, or use ‘Intents’ - a fixed branching approach - to provide standard answers (e.g. please contact the IT Helpdesk with your laptop issue). TL;DR - this is to reduce the number of support tickets reaching our human engineers by providing self-serve answers or signposting. The branching works well (particularly how it identifies and categorises the common ticket types by analysing past tickets!). The chatbot is OK - IF your knowledge base is comprehensive - as it cannot make up answers.
Using SuperWhisper - the great Dave Killeen has sold many a folk on speaking rather than typing to produce written communication and documentation, in this instance using SuperWhisper. I am (now off the train!) using the tool to write this paragraph. It takes some getting used to - dictating your words and then letting it transform what you say into different sentiments and for different audience types. I’m still experimenting, and I’m simply using dictation for this paragraph, but as a big, big fan of being audience aware, I love the concept of speaking your mind and then letting AI turn those words into different outputs based on the audience you tell it to write for!
Tech Demo - Success
Last month, I mentioned the Tech Demo we were spinning up (Link here). This happened. It was pretty successful! It is basically the same format as the bi-weekly Product Demo, just with curated content for a technical audience.
Hats off to the team leaders who rallied to bring suitable content of the right technical level (high) to the session, meaning that this content did not need to be shared at the more widely attended Product Demo the preceding or following week. Feedback after the event was overwhelmingly positive, and the value was felt. In short, this will continue as a regular event. I would recommend this type of tech-focused session to any organisation or product division that currently runs a Product or Sprint Demo: be audience aware, and have more tailored content for a tailored audience.
That is it for May… phew!
Graham
My thanks to HeliosX (obviously for employing me!) for blessing this continuing series - to my colleagues and specifically to Joe Tarragano for his support on this.