This is a continuing series of articles documenting, month by month, a full year of building a brand-new Product Operations function at HeliosX, a rapidly scaling, now £750m+ HealthTech. It is designed to show you the journey; it’s not a warts-and-all exposé.
I make no apologies that this focuses on the achievements, the methods and how we overcame challenges in a positive light, showing a practical journey with details to reuse and take away. It is still my job, after all!
This series is not an advertisement for the company, but is published with the blessing of HeliosX.
Read Month 7 here
June has been a month of introspection and reflection, particularly on how we as a division are organised, how far we have come, and where we want to focus our efforts for further transformation of the digital teams. This coincides with our overall strategy refresh as a business, as we take stock of our strengths and where there are opportunities to advance.
Elsewhere, AI features heavily with multiple separate projects and tools in the mix.
Maturity Matrix - Outcomes
Last month, I spoke about the Maturity Matrix I’ve been working on, which allows us to self-assess where we are across dozens of criteria, identify where we want to improve, and have some measure of success over time.
In June, the product leadership, including me, individually self-assessed before coming together for the better part of a day, across two sessions, to discuss where there was significant disparity in those scores. The purpose here was not to influence one another or force a consensus on a score, but to share the different perspectives that help others understand (which did, on occasion, result in a rethink of scores), of particular value where a criterion focused on a more niche characteristic - engineering principles, for example.
An incredibly valuable exercise, with the end result being a set of maturity scores covering all aspects of the division’s work, and the identification of the areas we want to focus on first to investigate, develop and improve. Interestingly, these were not necessarily the lowest-scoring criteria, but the ones deemed most important to improve now, which in some cases we believe will unlock other improvements down the line.
The plan for the matrix is to re-run the self-assessment in 6 months, and 6 months after that, and to track the changes over time… to measure success.
Psychological Safety
One of the criteria identified for improvement, and one I instantly recognised as important and wanted to take on to investigate, is Psychological Safety. This is personally close to my heart in connection with the work I do on mental health in product management, via the Product Mind Community and the Female Product Lead.
Our assessment of this area did not reflect there being a significant problem; rather, we did not know exactly how our teams felt, and we needed to know more before taking action.
I could wax lyrical about the importance of having a safe, open environment where teams know they can share, challenge, and ideate freely without fear of blame, and turn failure into learning. There is so much already documented on this, and despite my passion, I am no expert - though I highly recommend reading the work, thoughts and opinions of Beks Yelland and Meg Porter.
By the end of June, I was focused on how I could gather feedback from the teams in a way that is supportive and allows them to share openly, with useful ideas from the aforementioned Meg Porter, and from Gayle Silverman, on utilising surveys in a particular way… more on this in coming publications.
Airtable Demand in the Wider Business
Back now to my ever-favourite platform of 2025, Airtable: the work I have been doing for the product division for the better part of 8 months has been noticed by other divisions in the business. Four separate teams have asked for something similar to what we have built, largely task- and project-management facilities with a hierarchy of objective → project → task → task update, ultimately to facilitate reporting to leadership on progress and contribution to company goals and outcomes.
Interesting, too, is the demand to be able to link between the initiatives/projects the product division is working on and their own, which will facilitate greater awareness of what different parts of the business are working on, what stage they are at, and alignment on things like launch planning.
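For a sense of the shape of that hierarchy, here is a minimal sketch in Python. The real build is Airtable tables and linked-record fields, not classes, and every name and field below is hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative sketch only - the real build is Airtable tables and
# linked-record fields. All names and fields are hypothetical.

@dataclass
class Objective:
    name: str                       # company goal the work rolls up to

@dataclass
class Project:
    name: str
    objective: Objective            # parent objective, for leadership reporting
    linked_projects: list["Project"] = field(default_factory=list)
    # ^ the cross-division links other teams are asking for, e.g. a
    #   marketing project tied to a product initiative for launch planning

@dataclass
class Task:
    name: str
    project: Project
    status: str = "To do"

@dataclass
class TaskUpdate:
    task: Task
    note: str                       # progress commentary surfaced in reports
```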
Airtable is rapidly becoming the platform of choice for project/task management across the wider business. Some teams use Asana and are significantly embedded in it for their workflows; as yet, I do not know the desire to migrate these, or whether it is even necessary (if it is not broken…). Of course, we use Jira in the product division for engineering task and sprint management, and there is ZERO chance, or need, to move away from that.
For the teams that want ‘their Airtable setup’, I’m consulting with them at the time of writing, and it is great to be extending the influence of, and trust in, Product Operations across the business.
Velocity & Recommended Story Points
While we are on the subject of Airtable: a small but significant enhancement to the Product Database is the ability to accurately* track sprint velocity, and to recommend the volume of work for forthcoming sprints.
*Accurately is, of course, a subjective term whenever it comes to task estimation… let’s not get hung up on the level of accuracy.
For those following my progress on sprint stats and reporting, we recently implemented the facility to record when our engineers are ‘unavailable’ during each sprint, to understand the capacity of engineers against the work planned. Knowing the capacity of each sprint (and asking the teams to backdate these records a few months), we can measure this against the completion rates of tasks in sprints to understand the average velocity of the squads individually.
Now that we know our performance to date, and we know future engineer capacity per sprint per squad, we can simply make recommendations for how much work (in terms of story points - leave the hate for story points at the door, dear reader!) should be planned in the next sprint. This is a guide, a suggestion: the maximum points that should be allocated based on the stats alone, not taking into account qualitative, external factors, meaning the decision absolutely still rests with the squad leaders.
The feedback from the teams piloting this has been positive - firstly, that the numbers largely reflect the manual calculations they were already doing individually, in silos, and that the figure is now calculated and simply presented to them for each sprint plan. They also appreciate that this is a guide, not necessarily a target to hit - though there is some small concern that the stats may turn into one. That is not the intention at all; in fact, surfacing the fact that these stats exist (rather than the stats themselves) will give leaders confidence that there is process and logic in how the teams plan.
Behind the scenes, this is a bunch of rollup and formula fields building on top of the work done so far on capacity planning.
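The underlying arithmetic is simple. Here is a rough sketch of it in Python, with entirely made-up numbers, assuming capacity is recorded as engineer-days per sprint:

```python
# Rough sketch of the arithmetic behind the rollup/formula fields.
# All numbers are made up; 'capacity' means engineer-days per sprint
# after 'unavailable' time is deducted.

past_sprints = [
    # (story points completed, capacity in engineer-days)
    (34, 48),
    (29, 45),
    (38, 50),
]

# Average velocity per engineer-day across the backdated sprints
velocity_per_day = sum(p for p, _ in past_sprints) / sum(c for _, c in past_sprints)

# Future capacity for the next sprint is already known from the
# 'unavailable' records the teams keep
next_sprint_capacity = 42  # engineer-days

# The recommendation: a guide and a maximum, never a target
recommended_points = round(velocity_per_day * next_sprint_capacity)
print(f"Recommended maximum for next sprint: {recommended_points} points")
# -> Recommended maximum for next sprint: 30 points
```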
Gemini Summary of the Product Demo
Moving away from Airtable, I have been continually experimenting with a variety of off-the-shelf AI tools to streamline simple tasks, and, as a Google business, Gemini sits right in front of much of the work we do.
A task I perform bi-weekly is to prep a summary of the Product Demo - what was presented, in headlines, and in a format the wider business can quickly absorb. Here, I simply went into the Slides file and asked Gemini to provide a summary. The output was actually really good: accurate and, even without refinement to focus on a specific audience, at the right level of detail. Like much of my focus with AI tools currently, I am not expecting perfection before the tool learns more about the outputs we regularly produced (prior to AI) or the audience’s needs; I want 80% of the work done so I just need to review, not create. Gemini provided about that here (just, if I am being generous).
A task that would take me 30-45 minutes now takes 5-10. Win.
AI Research Tool - Newswire
Moving further into custom territory, I have also been exploring the idea of using AI to do the heavy lifting when it comes to keeping on top of the latest news and market changes related to our products, competitors and regulatory issues. In fact, this is something I’ve been playing with in the background for some months now, refining how it might work with the tools at our disposal. Airtable (sorry!) provides the ability to create an interface suited to our needs, with AI tool integration (Gemini, ChatGPT, etc), allowing me to build what I need without everything being hardcoded or a copy/paste of prompts.
What I call Newswire is an app allowing subjects and criteria to be added to a database, with a schedule for the AI agents to run regularly and return the results, which so far range from news on our weight-loss medication market, competitors and suppliers, to essentially open internet searches. There are dozens of variations of each topic, filtered by the regions of the world we operate in.
This is a business analyst co-pilot of sorts (a junior one at that) to do the heavy lifting for our teams and present a summary of the findings, for exactly what they need, when they need it, fully referenced back to the sources too. Hours or days of searching are now minutes, meaning it can be done weekly, or daily if we really wanted to.
Though not enabled yet, these summaries can be pushed to our Slack platform for wider publication. I’m experimenting too with extracting discrete figures from each crawl’s results, such as investments made by suppliers or competitors, or market prices of similar products in different regions, and feeding those automatically into analytics dashboards.
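Newswire itself lives entirely inside Airtable’s interfaces and AI integrations, but for the curious, the loop it runs looks roughly like this Python sketch, where pyairtable, the OpenAI client and slack_sdk stand in for the real integrations, and every table name, field and prompt is hypothetical:

```python
# Hypothetical sketch of the Newswire loop. The real build lives inside
# Airtable's interfaces and AI integrations; these libraries stand in
# purely to show the shape of the flow.
from pyairtable import Api
from openai import OpenAI
from slack_sdk import WebClient

topics = Api("AIRTABLE_TOKEN").table("appXXXXXXXXXXXXXX", "Topics")
llm = OpenAI()                      # assumes a model/agent with web access
slack = WebClient(token="SLACK_TOKEN")

for record in topics.all():         # one row per subject/criteria
    subject = record["fields"]["Subject"]
    region = record["fields"].get("Region", "Global")

    prompt = (
        f"Summarise the latest news on '{subject}' in {region}: "
        "competitors, suppliers and regulatory changes. "
        "Reference every source you draw on."
    )
    summary = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # The not-yet-enabled push to Slack for wider publication
    slack.chat_postMessage(channel="#newswire", text=f"*{subject}*\n{summary}")
```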
Geek level: 11
Airtable Demo
Sticking with Airtable: for many months now, I have been asked for a look under the hood at what I have built for our product division - the architecture, the facilities, the successes. So I finally got my bottom in gear and organised a demo for around 50 product ops and product professionals from all over, including some Airtable staff too.
I pre-recorded the demo to ensure it went well, then took plenty of questions. All in all, I was very pleased with it, and I hope attendees felt the same (those I have since followed up with seemed to like it!).
At the time of writing, I’m excited to be speaking with Airtable’s Head of Product, Anthony Maggio, one of the attendees, who, as a result of this webinar, is keen to chat all things Product Ops.
And it seems this went down well more widely, as I have once again been invited to speak at the PLA Product Ops Summit: London in December about my developments using Airtable. I have yet to decide what to focus on: this, or the AI tools (which by then I hope will be much more refined).
AI PM Platform (Zentrik)
Finally, on the AI front, I’ve been chatting with various individuals for some time about new AI tools that can slot in and solve real-world problems for my teams. One of those is the lovely Jorge Alacanta, CEO at Zentrik, an AI platform specifically designed for product teams, helping to lift the bulk of the effort of creating PRDs, feature documents, tasks, epics, etc.
We started chatting about all things Product Ops, we aligned instantly, and there was learning on both sides. Jorge spoke about how the Zentrik platform operates: how it needs to learn about your business, your teams, your operating style, how you write Jira tickets, and so on. As I have learnt more and more about AI, I have realised how critical this step is, and I was impressed to see it in action in real time.
Here is the important part, and my measure for success with this platform: I do not expect it to write all the tickets perfectly for all squads. If it can write 80% of them with 80% of the content correct, readable, sensible, and including our key assets (acceptance criteria, for example), then this is a huge win. If I can get my product managers just reviewing, topping/tailing tickets, rewriting 20% and picking up on outliers, then this is a massive saving in time, effort AND cognitive load.
At the time of writing, we’re kicking off a pilot with two squads… more on this next month.
Graham
My thanks to HeliosX (obviously for employing me!) for blessing this continuing series, to my colleagues, and specifically to Joe Tarragano for his support on this.