What’s more intriguing is how many companies are seeking to rebuild the SaaS products they've grown increasingly reliant on to sustain their operations — the products of their vendors.
Many companies rely on specialized solutions to manage mission-critical context or even core aspects of their operations. Initially developed as custom internal solutions for some other company, these products were fine-tuned for internal business goals and operational needs, then morphed into SaaS offerings for customers, often to recoup the significant investment in custom software. Although these solutions fulfill around 80% of immediate new customer needs, the remaining 20% often necessitates process or system workarounds and manual effort. Despite hefty fees and lengthy commitments, the functionality leaves much to be desired.
The crux of the issue is that these SaaS providers are proficient at delivering a service but struggle to transition into software product companies. They often fail to heed customer feedback, iterate, or rectify shortcomings beyond the initial development phase. Instead, they hastily implement quick-fix enhancements and coast on past accomplishments or exclusive features. This is partly because their roadmap is preset by internal needs: investing in new features that aren't used internally costs more than the revenue a few additional customers would bring in.
This approach invariably results in high per-seat fees and a disconnect between promised features and user utilization. Dissatisfied customers, facing looming contract renewals and fee hikes, often contemplate alternatives like in-house development and support. It also often means that customers are asked to foot the bill for new feature development. With that, the cycle continues.
However, amidst this seemingly endless cycle of big spend, success stories such as Slack stand out as beacons of hope. Slack's transformation from a failed gaming endeavor to a disruptive communication tool underscores the importance of a product-centric mindset and customer value proposition. It also underscores a level of investment required for success. Killing the original mission to focus on Slack was a complete organizational pivot; one many companies are not willing to undertake.
When companies are thinking about rebuilding the software they get from vendors, they need to ask themselves some important questions:
- Should we try to copy what our vendor made in-house?
- What problems could we run into if we try to turn our internal tools into products we sell?
- How do we balance building our own solutions versus buying from others so that it makes sense for our business?
These aren’t simple questions, but they’re worth digging into. Let’s take a closer look.
Yet, the question persists: should companies embark on replicating their vendors' offerings? The answer lies in a strategic reassessment of your business needs and organizational workflows in order to steer away from blind replication toward a tailored solution that drives the business. Failing to do so will likely lead to a very expensive failure. Understanding and addressing operational inefficiencies through user feedback and iterative development are pivotal in achieving operational excellence. It's easy to replicate workarounds that have been in place for any length of time and forget that they were only workarounds in the first place.
A testament to this strategic approach is Grubhub's deliberation between enduring steep vendor fee hikes or embarking on a ground-up rebuild. Opting for the latter, the company embraced design-thinking and event-storming to reveal hidden efficiencies and surpass vendor limitations. Their system has become a competitive advantage in onboarding drivers and is now core to how they grow their delivery business.
Transitioning internal tools into SaaS offerings requires careful evaluation, considering the allure of recurring revenue against the burden of ongoing support and maintenance. Crucially, catering to external users demands a paradigm shift in support mechanisms and user engagement strategies, underscoring the importance of customer-centricity.
It doesn't make sense to open a core system to your competitors, no matter the revenue potential. We wouldn't advise Grubhub to sell its applicant tracking system and give away part of its winning strategy. It would also be a major distraction from the core business and would not further its customers' priorities.
Can your offering achieve customer goals — both roadmap direction and SLAs — while offering a reasonable price point? If not, you could put yourself in the same spot as your vendor, having to increase costs while the system requires workarounds and potentially doesn’t match the customers’ use cases.
Considerations for rebuilding a vendor’s offering go beyond mere technical execution. The key lies in striking the right balance between four areas: innovation, customer satisfaction, operational excellence, and the total cost of ownership.
If you're deciding whether to build or buy, you need to think about how well your business can come up with new ideas and set itself apart. Building your own solutions in-house gives you more control over the development process and lets you create unique features tailored to your specific needs. However, this approach takes a lot of resources and expertise. On the flip side, buying ready-made solutions can give you access to the latest technology and features, but it might limit the customization options that set you apart from the competition.
At the end of the day, any software solution needs to meet the needs of the people using it. When you’re deciding between building and buying, you have to think about how each choice will affect the user experience. Building your own solutions gives you more control over how things look and work, so you can create an intuitive experience. But, you need to really understand what your users need and want. Buying ready-made solutions can give you a user experience that’s been proven to work, but it may not completely align with your company’s unique requirements or branding.
Choosing between building and buying software solutions can have a major impact on how effectively your business runs. Developing solutions in-house lets you create things that work seamlessly with your existing systems and processes, which can streamline workflows and cut down on manual work. However, it takes a lot of resources to develop and maintain. Buying ready-made solutions can be faster to implement and give you access to best practices, but you might have to change some of your processes or find workarounds to make them fit your unique needs.
Lastly, weighing the financial side of the build-versus-buy choice is a crucial step in this process. Building your own solutions in-house requires a big upfront investment in development resources, infrastructure, and ongoing maintenance. But this approach can save you money in the long run by eliminating recurring license fees and giving you more control over future improvements. Buying ready-made solutions can have a lower upfront cost and predictable ongoing expenses, but it might end up costing more in the long run because of vendor lock-in and limited negotiating power.
As companies navigate this landscape, the emphasis should be on developing a deep understanding of their unique operational needs and leveraging technology not as a shortcut to industry competition but as a strategic tool for crafting bespoke solutions that propel them forward.
The success stories of companies such as Slack and Grubhub underscore the importance of a product-centric approach and the value of listening to user feedback to drive continuous improvement. Ultimately, the decision to build or buy should not be taken lightly, as it involves the potential for significant cost savings and efficiency gains, and the risk of diverting focus from core business objectives. By prioritizing customer-centricity and embracing iterative development, companies can avoid the pitfalls of vendor lock-in and create a competitive edge that is both sustainable and aligned with their long-term vision.
Interested in breaking free from vendor limitations and building software that effectively meets your needs? Let’s talk about how our strategic approach and technical expertise can help you navigate this buy versus build landscape and create solutions that drive your business forward.
Planning for the realities of interpersonal harm, such as domestic violence, means first acknowledging the reality that our users are having their tech turned against them by partners, parents, and even employers. A first-of-its-kind study in Australia found that 99% of victims of domestic violence have experienced technology-facilitated abuse. This harm includes well-documented issues such as using AirTags for stalking, installing stalkerware on a target's devices, and accessing texts and emails, but it also extends to a nearly endless list of more insidious uses of technology: using the Amazon Echo drop-in feature to listen in on conversations, using IoT devices such as smart doorbells and thermostats to surveil, harass, and torment victims, and remotely taking control of modern Internet-connected cars. You can find more in-depth examples of technology-facilitated interpersonal harm in my book, Design For Safety.
| Aspect | Percentage |
| --- | --- |
| Users experiencing online harassment | 41% |
| Forms of harassment leading to distress | 66% |
| Users expecting companies to intervene | 79% |
The table above provides food for thought. A significant number of users have experienced online harassment, with a majority of these incidents causing real distress. Even more strikingly, the overwhelming expectation is for tech companies to play an active role in making their products safer.
Each company, and each team within it (depending on the size and nature of the organization), can be well-served by defining its own set of safety principles. A product manager might set aside two hours for a workshop consisting of people from each department (design, engineering, QA, product, sales, marketing, and leadership) to co-create these principles. Examples of safety principles might be 'Location privacy as a default' for a product with location-based features, or 'Make power imbalances transparent' at a fintech company whose banking software requires one user to be designated the admin of a joint bank account. This guide about writing design principles is a helpful reference for writing your own set of principles.
Once the realities of tech-facilitated interpersonal harm are acknowledged and safety principles have been established, product managers should integrate a practice of preventing and mitigating these harms within their products.
Product managers can support their teams in designing and developing safe products by building time into roadmaps for each team member to do the necessary work. Writing stories for each of the teams’ pieces of safety-focused work is a key part of baking safety into the process rather than making it an afterthought. Here’s a breakdown of what this looks like:
Designers should have dedicated time (and stories in the roadmap) for:
- Research into the harms of similar existing products
- Novel abuse case brainstorming
- Creation of archetypes
- Designing solutions to identified harms
- Testing those solutions
Developers should be encouraged to learn about interpersonal harm perpetuated through tech, and product managers should make sure to keep up with the ongoing changes to regulations and standards, especially when it comes to emerging technologies such as AI. The people writing the code are often the ones who identify strange, exceptional edge cases and uncover safety and privacy issues through workarounds. “If the user sets themselves to private, doesn’t save, hits the back button … what should happen?” In my experience, developers are amazing resources for these nitty-gritty details that are difficult or outright impossible for a designer to anticipate.
Product managers play a pivotal role in promoting user safety and privacy among developers. They can achieve this by allocating sufficient time for developers to fully understand the intricacies of safety and privacy. Additionally, sharing the safety and privacy issues that developers often uncover can further highlight the importance of these concerns. These scenarios should be considered just as important as the other work developers do; when product managers treat them the same as any other card on the Jira board, it goes a long way toward legitimizing the work.
Additionally, product managers can ensure that QA teammates test safety features just as rigorously as they test core features. For example, suppose a fitness app has various sharing settings. The product manager might create a card for QA to thoroughly verify that absolutely no data about a user who sets their sharing to the most private option available can be viewed by any other sort of user (those who follow them, those who don't, etc.).
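One way to make such a QA card exhaustive is to sweep every viewer type against the most private setting. The sketch below uses a toy, invented visibility model (the fitness app, its field names, and the `visible_fields` function are all hypothetical), but the pattern of asserting over the full viewer matrix carries over directly to tests against real endpoints.

```python
PRIVATE_FIELDS = {"workouts", "location", "heart_rate"}

def visible_fields(owner_setting: str, viewer: str) -> set[str]:
    """Toy visibility model: 'private' hides everything from everyone."""
    if owner_setting == "private":
        return set()                # most private option: expose nothing
    if owner_setting == "public":
        return set(PRIVATE_FIELDS)  # public profiles expose all fields
    if owner_setting == "followers" and viewer == "follower":
        return set(PRIVATE_FIELDS)
    return set()

# QA sweep: a fully private user must expose nothing to ANY viewer type.
for viewer in ("follower", "non_follower", "anonymous", "blocked"):
    assert visible_fields("private", viewer) == set()
```

Enumerating viewer types explicitly, rather than testing one or two, is what catches the edge cases (blocked users, logged-out visitors) that abusers tend to exploit.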
Similarly to protecting developers' time for this work, product managers can prioritize the time QA spends on this type of testing by making it a regular part of the process, with its own stories on the roadmap, instead of something that gets tacked onto other cards.
Designing for safety goes beyond a single product iteration; it's an ongoing commitment. Product managers can ensure a process for gathering feedback on safety issues and the ability to identify new harms that weren’t previously considered, just like you ensure there’s a way to react to bugs and gather ongoing feedback from users. Although the ideal workflow involves anticipating harms and taking a proactive approach, there’s also enormous benefit to users in vulnerable situations when we set systems in place to learn about abuse that is currently being perpetuated with our tech. Ensure there’s a method for gathering this information and responding to it with updates to the product.
Designing for safety is not a solitary endeavor; it thrives on integrated product-design teams. Product managers should actively engage with the designers, developers, and QA teammates to create a holistic safety strategy. Remember:
Designers — who hold the blueprint of the product — play a key role. They should always strive to incorporate safety functions into the initial graphical representation of the product itself. When this doesn't initially happen, make it a goal to systematically integrate safety mechanisms into the design, keeping in mind that simplicity can be synonymous with safety.
On the other hand, developers transform ideas into tangible, functional products. With their deep knowledge of the codebase and its underlying models, developers significantly reduce the risk of safety glitches by embedding safety provisions within the code. Experience has shown that safety integrators (developers with a strong background in and understanding of safety protocols) are valuable assets in improving project safety.
Quality assurance testers are the gatekeepers of user experience and must ensure that the final product is safe from inadvertent harm and malicious activity. Security should be as vital as functionality in a QA tester's checklist.
Collaboration has power, and you should always appreciate your team's insights. By continuously evaluating and iterating upon safety measures throughout the product development cycle, product managers can minimize the risks of interpersonal harm and launch safe and inclusive products, meaning that more people can use them.
Contact us to learn more about how we can help you build safer products for everyone.
To make well-informed decisions, delve deeper into the intricate details of these platforms and determine how they would fit into your enterprise model. Here, we take an engaging, straightforward approach to elaborate on the crucial differences between low-code and no-code solutions. Then, we explore the world of rapid development tooling, an entirely different realm from low-code/no-code solutions.
Neither low-code/no-code solutions nor traditional coding is a one-size-fits-all answer. There are scenarios where traditional coding expertise becomes necessary, while in other cases a low-code or no-code platform might be the most efficient choice. It's all about evaluating what fits your specific business needs most effectively.
You may wonder: how do low-code and no-code differ in their features and functionality?
Low-code platforms, as the name suggests, allow developers to create applications with a minimal amount of actual coding. They use a visual interface where developers can drag and drop application components and then integrate them without having to write extensive code. This can lead to remarkable savings in time and resources, and it democratizes the process of application development to a certain extent. However, this simplicity can sometimes limit the flexibility and customization capabilities that come with traditional development methods.
On the other hand, no-code platforms further simplify the process by enabling app development without any coding at all. They also use a visual interface but are designed to let nontechnical professionals, or 'citizen developers,' build functional applications. No- and low-code solutions are best used for internal-facing, data- and workflow-driven applications. The speed and ease of development are these platforms' unique selling point, but they come at a cost: per-user licensing requirements and performance limitations. Additionally, more advanced features and customization may still require traditional development and integrations.
The synergy between the two approaches is not to be underestimated. By using no-code/low-code platforms for rapid prototyping and development, and traditional coding for customization and flexibility, teams can leverage the strengths of both methodologies. This can result in optimized resources, increased productivity, and symbiotic progress in the world of product development.
Rapid development tooling is known primarily for its deployment flexibility. Unlike low-code/no-code platforms, which are typically aimed at citizen developers creating internal, data-driven applications with limited public-facing functionality, rapid development tooling can be used virtually anywhere. Whether you need a simple proof of concept (POC) or a complex enterprise-level build, you can deploy it as your requirements dictate, just like traditional coding.
Additionally, rapid development tooling generally avoids the licensing conundrum. Understanding these differences might just be the key to unlocking your business potential.
Although low-code and no-code solutions are indeed revolutionizing the coding world, traditional coding techniques maintain their place at the helm of software development. Yes, they require greater time and professional programmers for implementation, but they also grant a level of granularity and flexibility that is unparalleled.
Scripting complex algorithms, building intensive data processing systems, or orchestrating ingenious error-handling measures is feasible with traditional coding — facets that low-code or no-code solutions may find challenging to handle. These platforms — though excellent for building quick solutions — often fall short in sophisticated functionality and intricate design elements that experienced developers can skillfully craft through traditional coding.
Opting for a low-code/no-code solution or sticking with traditional coding is a crucial decision. The best way to make the right choice? Understand the pros and cons of all available options.
Low-code/no-code platforms:

| Pros ✅ | Cons ❌ |
| --- | --- |
| Rapid, visual development with minimal coding | Limited flexibility and customization |
| Accessible to nontechnical 'citizen developers' | Per-user licensing costs |
| Savings in time and resources | Performance limitations |

Traditional coding:

| Pros ✅ | Cons ❌ |
| --- | --- |
| Unparalleled granularity and flexibility | Greater time investment |
| Handles complex algorithms, data processing, and error handling | Requires professional programmers |
| Full control over design and functionality | Higher upfront cost |
Recognition of these potential business outcomes creates an unmistakable understanding of the scale of the decisions in your hands.
Here's a way to make the decision-making process a tad smoother. Consider how hands-on you want to be with your application development. If you have tech specialists on your team who are fluent in complex coding languages, traditional coding may serve your goals more effectively. However, if you're an entrepreneur or small business owner without a substantial development team, choosing a low-code or no-code development platform might just be the silver bullet you’re looking for.
With low-code/no-code platforms, you have the freedom of customization without needing to break a sweat going over lines of coding syntax. Pre-designed templates, point-and-click interfaces, and drag-and-drop environments all come together to provide a user-friendly experience that accelerates the development process.
In the grand scheme of things, it might not be a question of which approach replaces the other, but rather of how to productively blend them to address the unique needs of the project, the end users, and the organization as a whole.
If you’re grappling with these critical decisions and need some guidance, we’re here to help. And it’s not just about choosing the right development tool for your needs. It’s also about aligning those tools with your strategic business objectives. We offer professional assessments to help steer you in the right direction. Choosing between traditional coding, low-code/no-code solutions, or rapid development tooling doesn’t have to be an uphill task. Let’s streamline your software development process together.
This is Part 2 in our blog series on how the design process works alongside the Agile methodology to produce results within an integrated team. If you haven’t read Part 1 — which lays the foundation for these practical activities that the team can collaborate on — please check that out before reading further.
Finding Out is where it all starts, but it most certainly doesn’t only happen at the start! We 'find out' in continuous cycles throughout the project, not just from our users, but also from our stakeholders, and product and development teammates. After all, though the end product may be ultimately for our end users, design is also influenced by viability (the business) and feasibility (the developers). So it makes sense to 'find out' collaboratively.
Stakeholder interviews are an integral early step for determining what your stakeholders care about: their hopes and fears, past project experiences that might drive their decision-making, key metrics they need to hit, and any assumptions they may have. Not only will you build rapport, you'll also learn how to engage your stakeholders through the design process. In my experience, not all stakeholders have immediate faith in design, so understanding how they measure success and engaging with their success criteria throughout your work can be crucial to earning their trust.
Developers must also 'find out' from stakeholders; in addition to familiarizing themselves with the existing codebase and infrastructure, they must understand organizational capabilities that would affect feasibility and timelines. Of course, it is sensible to share learnings from stakeholder interviews with your dev teammates so that they, too, can understand how to appeal to the business’ sense of success.
As domain experts, your client has an excellent understanding of their competitors. Leverage this when performing competitive analysis. Developers should simultaneously perform a competitive assessment of the tech landscape of the product domain to determine feasible options that would best realize the experience we end up defining through design and infrastructure.
What does your client believe gives their contemporaries a competitive edge? Their answer also identifies non-competing businesses that solve different problems similarly, informing analogous research. Analogous research, in turn, informs development spikes and experiments.
Getting a stakeholder (ideally a product owner) involved in user interviews and testing puts someone in your corner who can translate user feedback to the rest of the business more compellingly. However, ensure that your chosen stakeholder is well-prepped for their observer role throughout the sessions so that they do not unduly influence your users. Also, avoid inviting more than one stakeholder to any session where possible; the fewer overall contributors, the less intimidating it will be for your user participants. Follow up with your collaborators after each session to discuss your initial impressions and get everyone on board.
Is there an existing product that you're redesigning? Developers can set up automated accessibility testing tools such as axe or Lighthouse to audit the current state. Are you building a greenfield product from the ground up? Developers can additionally set up continuous integration pipelines with such accessibility testing tools so that there is a cycle of feedback between design and development with user needs at the center.
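As one possible shape for such a pipeline step, the sketch below gates a build on the accessibility score from a Lighthouse JSON report (generated with, e.g., `lighthouse <url> --only-categories=accessibility --output=json`). The 0.90 threshold is an assumption to tune to your own baseline; Lighthouse itself reports each category's score on a 0-to-1 scale.

```python
import json

MIN_SCORE = 0.90  # assumed threshold; set this to your team's agreed baseline

def accessibility_score(report: dict) -> float:
    # Lighthouse stores each category's score (0.0-1.0) under report["categories"].
    return report["categories"]["accessibility"]["score"]

# Stand-in for a parsed report file; a real one contains many more fields and audits.
sample_report = json.loads('{"categories": {"accessibility": {"score": 0.94}}}')

score = accessibility_score(sample_report)
assert score >= MIN_SCORE, f"accessibility regressed: {score:.2f} < {MIN_SCORE}"
```

In CI, a failed assertion (or a nonzero exit code) fails the build, turning accessibility from a one-time audit into an ongoing contract between design and development.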
It is imperative that you have your teammates onside for your design endeavors, and this all comes down to alignment through your ability to Translate. Bring your team into what you learned in language that appeals to them. If translation is successful, progress can be made quickly and priorities are decided confidently based on evidence. It allows efficiency and provides motivation even through ambiguity.
The UX team is involved in the first step of translation: translating all your user research and feedback through synthesis. The second step of translation happens when that synthesized research is presented to the rest of the team. Consider your audience! Convey to stakeholders how insights interact with their business goals. Convey to developers how user expectations might have feasibility implications. Focus on outcomes rather than process. If a stakeholder was involved in your user interviews and/or testing, invite them to give their take on insights to bolster your translation.
I have often found that client product teams lack adequate documentation of the service they provide — a gap that results in team misalignment and inefficiency. Communicating the product service through a service blueprint presents not just the customer journey through a process (including where they interact with your product) but also where they interact with other actors (perhaps other users or service providers), along with the software frontend and backend infrastructure that supports it.
Therefore, it shows your team members how they each influence the product that underpins the user experience. In addition to providing clarity through product evolution, service blueprints present new opportunities for service improvements across team responsibilities.
In order to keep 'finding out,' we need to Create something to put in front of users, keeping the wheel turning toward product success. Creation outcomes could include thoughtful interview questions, wireframe concepts, compelling ideas, prototypes, or fully refined and branded UIs built from consistent components. Most of these outcomes benefit from team collaboration.
Use “assumptions” and metrics gathered in stakeholder interviews to inform moderation guides for user interviews and testing. Better yet, hold a workshopping session to understand what your team members really want to know from users. Examine these team concerns through your designer lens to ask open, non-leading questions that generate insightful user responses.
No one person has all the best ideas — draw from your teammates’ expertise and experiences in your ideation sessions.
Developers can apply what they learned from code audits, competitive assessments, experiments, and spikes to inform the feasibility of your proposed solutions.
Stakeholders can unlock awareness of project history before your time. A proposed solution may bring back memories of past failures that lead to design principles and constraints. These act as guardrails to help prevent similar mistakes with the latest design.
Prototypes allow teams to make ideas and concepts tangible rather than just words. Prototypes also lead to discoveries and new hypotheses. This is where the development team really shines.
Though designers are adept at creating prototypes with software such as Figma, AdobeXD, or Sketch, there are limits to the fidelity of interactions. Often, the best way to validate designs is to build prototypes in code. This also means that any successful designs are already in place for further developer iteration.
For a recent client in youth sports tournament scheduling, the UI needed to be richly interactive and highly intuitive, yet it was deceptively complex to design. Dragging and dropping games across time slots while viewing relationships, conflicts, and field availability was easily sketched in Figma, but it needed to be prototyped in code in order to be user-tested faithfully. Close collaboration between design and development rapidly produced concepts in code whose interactions could be understood and tweaked for improvement.
To build prototypes quickly and react to feedback efficiently, UI libraries and the design systems that support them accelerate progress.
Developers can begin building (or configuring an existing) UI library in code at the start of any project, even before research is complete. A basic UI library contains the most commonly used atomic UI elements users need (e.g., a button). Competitive analysis and initial research also inform more domain-specific additions to the library. If customers expect certain UI elements or interactions because that’s what all client competitors have, it is reasonable to incorporate them for use in your prototypes. If no design system exists, keep components low-fidelity. Branding and styling can easily update as your product becomes more refined.
In a recent project with GeneDX, designers and developers collaborated to build a library of UI components that could be consumed by initial prototypes and eventually by a higher fidelity app in production. If this library is uncoupled from the app, then the infrastructure for the prototype can be as simplistic as possible and completely sacrificial. The dev team gets to work on prototypes without worrying about final infrastructure and scalability in the preliminary phases of the project. Deploying this UI library also allows for visibility from the business — early concepts can even be user-tested in this way!
Integrating the design process with Agile development ultimately expedites innovation and assures that our solutions are user-centric and capable of evolving within a dynamic market. Instead of viewing design and agile development as conflicting processes, use this opportunity to leverage the wide range of expertise within your integrated team for product success. If you’re interested in transforming the outcomes of your projects through Agile and design collaboration, please reach out for a consultation — we’d love to create something impactful together!
This issue is eloquently described by design leader Maria Giudice in *Changemakers: How Leaders Can Design Change in an Insanely Complex World*:
Agile started as a pure development process, which has made it difficult to weave in traditional design processes, like big picture strategic thinking and up-front research that may not fit nicely into a sprint cycle. It requires flexibility and adaptability to incorporate those elements into the process. Without experienced guidance, it can easily spiral into chaos.
To avoid this chaos, your team must become integrated and collaborative: the design process can absolutely work alongside the Agile methodology to produce results in tandem with development and product team members.
What seems like a tension between an Agile development process, user experience tasks, and a sustainable business is actually a representation of the three design-thinking ingredients for any successful product: feasibility, desirability, and viability. All three must be considered in the product’s design, lest it fail: it can’t be implemented (unfeasible), the market rejects it (undesirable), or the business tanks (unviable). In other words, the differing concerns and expertise within your integrated team actually make you better placed to build something great!
Therefore, design is not a waterfall process but rather a cross-functional practice that your whole team participates in, whether they know it or not.
→ For a tangible example of how this balance can be achieved, take a look at our work with the Royal Academy of Arts. This project exemplifies how thoughtful UX research, combined with Agile development, can lead to exceptional results that serve both the client's goals and the users' needs.
The design process could be described as a wheel turning along a track — and much like the Agile methodology, it advances in iterative cycles. A cycle can be split into three categories describing the activities within them: Find Out, Translate, Create. Activities in each category inform the next.
Design is more than just an upfront investment. A UX professional engaged throughout the entire design-through-delivery cycle helps the team work in a more iterative and Agile manner by testing each new iteration in a controlled, low-stakes environment. Each new cycle validates team assumptions and dives deeper into understanding a user’s desires, the problems they want to solve, and how they expect to solve them. This results in a higher degree of confidence that the solution has product-market fit.
Involving developers and stakeholders in Find Out, Translate, and Create positively influences the design direction, while also challenging and redressing viability and feasibility hypotheses.
→ Interested in seeing the power of this approach in practice? Check out our work on Axus Travel, where our team took a full-service approach from initial concept through brand development to the creation of user-friendly interfaces packed with extensive functionality.
You would be right to think that it’s not always practical to completely occupy your development and product teammates’ time in every design activity. However, it is imperative that your findings inform each other’s work.
Productive activities for collaboration include (but are not limited to):
By viewing design as an opportunity to join forces and include the entire team’s expertise on feasibility, desirability, and viability, teams enjoy the creation of a successful product much more. To learn more about how your team members can collaborate on each of these activities, be sure to check out Part II in this series.
Consider how a design-led, Agile approach could revolutionize your projects. If you’re ready to see these results in your work, give us a shout!
We’ve seen it before and what usually follows: Leadership wonders why software development velocity is slow (“Just ship it faster!”), customers are not adopting the product (“Make it cooler!”), and teams frequently miscommunicate or misunderstand requirements (“Provide more documentation!”). And yet, simply changing the mechanics of how the product is built may not achieve the desired result, making development slower or features less delightful.
What levers can we pull to solve “slow delivery”? Is it the number of people on the team? The requirements process or lack thereof? Is it tooling and infrastructure? There is a wealth of information surrounding product development, delivery frameworks, and an industry of SaaS products that bake in their own development workflows as best practice. So, when faced with uncertainty about how to solve a problem like “slow delivery,” we often default to what is most known to us and thus perceived as less risky. We pull on the levers that we know and are tractable.
Organizations employ a myriad of software development practices and team structures with no two companies following the exact same form. The most successful software product companies master connecting their customers with the value their product offers. What do these companies have in common? Product mindset.
Instead of focusing on processes and tools, organizations have shifted toward focusing on the mentality of approaching a problem to solve. The organizations able to hone this skill across their teams are more likely to achieve their desired outcomes.
A product mindset places the customer at the center of every decision, values continuous improvement, and emphasizes the long-term success and sustainability of a product. Before diagnosing your organization’s particular issues, observe how your teams work together, make decisions, and frame problems within four pillars of product mindset:
The necessity of prioritizing features based on customer feedback and market demand. Evaluate how priorities are defined within each workstream. Do the parameters being considered, such as level of effort and addressable market, fit the target customer?
The importance of delivering value to the customer. Has your team attended a user research call? Does your team see customer support tickets? Building empathy for the customer can improve how we decide what to build.
The need for a long-term vision for the product. Does your team make near-term technical decisions that balance the long-term vision? Does the long-term vision provide the team with tractable problems to solve?
The significance of cross-functional collaboration and teamwork. How are the teams communicating with each other? Is there transparency, or are there silos?
Customer feedback is the lifeblood of a product-driven mindset. To develop this mindset, your teams should actively seek, analyze, and act upon customer feedback. This requires a shift from a "build it and they will come" mentality to a "listen, iterate, and improve" approach.
Collect feedback through surveys, user interviews, and data analytics.
Create a feedback loop that channels insights back to the development teams.
Use feedback to prioritize feature development and product improvements.
Continuously iterate based on customer input.
A product mindset requires a clear understanding of what success looks like. Establish key performance indicators (KPIs) and metrics that align with your product's objectives. This enables your teams to measure progress and make data-driven decisions incrementally. The continuous measurement and milestones towards the long-term vision allow for course corrections along the way.
Define measurable objectives for your product.
Establish realistic timelines and milestones.
Monitor KPIs and adjust strategies accordingly.
Celebrate achievements and learn from setbacks.
Fostering a product mindset starts with breaking down silos within your organization. Encourage cross-functional collaboration among product, design, development, and quality assurance teams. This multidisciplinary approach ensures that all stakeholders are aligned with the product vision and share responsibility for its success.
Create cross-functional teams where members work together on the same product.
Establish clear communication channels to facilitate information sharing.
Encourage regular meetings and brainstorming sessions.
Celebrate team achievements rather than individual accomplishments.
By focusing our improvements on mindset and less on promoting “foolproof processes,” we promote the agility of the team. This agility allows organizations to react quickly to changing market conditions, leveraging the empowered team to extract and deliver continuous value to customers.
Interested in learning more about how your organization can adopt a product mindset? Give us a shout!
But over time, as threats exploded and password chaos reigned supreme across most average consumers’ digital lives, I’ve become more pragmatic. Now, I believe dedicated password managers — for all their potential risks — encourage better habits. They simplify people’s lives while providing additional guardrails around our virtual identities.
In my webinar, I break down the landscape of options out there. I’ve assessed browsers, open-source self-hosted systems, and turnkey cloud-based managers. Each has pros and cons. Fundamentally, cloud services seem best positioned for mass adoption. Built-in password generation, dark web monitoring, and secure sharing — these make safe habits the easiest path for non-techies.
Do I still have some lingering doubts? Of course. Attention and skepticism are still required! Your data leaving your direct control means you surrender an element of fate to the vendor. Autofill can be hijacked under certain conditions as well. And no one wants to lose everything due to a software glitch or lapse of memory.
But my goal is to provide guidance rooted in real-world experience rather than perfection. I want people empowered to evaluate tradeoffs themselves and commit to better personal password hygiene within their comfort zone. Something — almost anything — is better than the widespread password reuse I’ve witnessed daily.
If this quick read piques your interest, I hope you’ll carve out 20 minutes for my full talk! I pack a ton of nuts-and-bolts detail into an easily digestible format. Protect yourself, your business, and your family as threats only accelerate across our digital lives. Here’s to online safety and security!
Occasionally, delivering value quickly is impossible. Customers need a feature that involves multiple teams or services, which can only feasibly be delivered after six months. In these situations, we adapt by investing more rigor into upfront analysis. Business cases or more in-depth research identify the return on the investment and justify the work. Work associated with product features is relatively easy to justify and prioritize, as it is more easily tied to revenue. This includes work that isn’t directly customer-facing, like adding a new database.
Some work is much harder to justify through revenue — refactoring, dependency upgrades, and unit testing are good examples. This work is often small and repetitive but over the course of a project, represents a significant time commitment.
A possible solution is to consider that by doing this technical work, we create more confidence to add customer features more easily in the future. How much easier, though? Will it be delivered 5% faster? 20%? 50%?
It’s difficult to predict, and impossible to measure, as you can’t A/B test feature delivery speed. Theoretically, you could pursue two different approaches to a problem in parallel, but that would require different people to do it, so you couldn’t separate the difference in delivery speed from the individuals completing the work. And if you tried to do it in series, you’d be unable to separate beneficial learnings from approach A in approach B. In the future, there will be an AI solution for this. Two identical AI models could A/B test some solutions, but when they’re able to do that, they’ll be writing the code as well as this blog post!
In the face of revenue-justified feature work, it can be very difficult to prioritize this technical work in a quantitative way. The result? Technical work gets prioritized when it has to be — when it’s too late and there’s a fire.
Nassim Nicholas Taleb, in his book Antifragile, outlines the concept of concave and convex decisions. The former suffer in the face of uncertainty, whereas the latter benefit. People, he says, prefer to consider the ordered, predictable side of the world around them rather than the scary, unpredictable one. In his world of economics, this manifests as a preference for stocks that are steady and stable but suffer hugely in economic downturns, over volatile stocks that occasionally skyrocket.
Rather than dig into economics, let’s consider a more relatable idea: flossing your teeth. It’s become largely a cultural norm that brushing your teeth is a good thing. However, flossing hasn’t received the same cultural acceptance: fewer than 25% of adults floss. Why is this? Because in the short term, the value uplift over just brushing is pretty small. First-time flossers also face the prospect of bleeding gums. Despite this, most of us have been advised to floss, initially by caregivers, later by dentists. The reason is long-term gains: people who floss are statistically less likely to have gum disease and plaque buildup, and have fresher breath.
Unlike our teeth and flossing, we don’t grow up with people telling us to “write unit tests, do the simplest thing that could possibly work, be considerate of naming.” Many business leaders remain unaware that over the long-term, ignoring these things can lead to problems. And if leadership doesn’t see the value in something, it will always struggle to be prioritized.
How can we define and justify the value of a set of behaviors on a development team that drives quality into a code base instead of faster short-term delivery? The benefit is revenue not lost in the future, when feature delivery slows and outages take longer to resolve. And unlike gum disease, typical employee churn means the negative side effects of poor software hygiene won’t necessarily be seen by the people who made the decisions.
The solution needs to be approached from multiple angles, including:
Educate leadership on the long-term benefits, referencing books such as Accelerate for evidence.
Give teams the training they need to learn the skills and the time to implement them. A good rule of thumb is 20% of a team's work should be focused on preventative technical efforts.
Create a culture of quality, open feedback, and good software hygiene.
Those three points are simple to write down, but by no means simple to implement. Changes of this size and scale, especially to culture, are difficult, but the results are worth their weight in gold. Leadership needs to drive this change, not through mandates but through demonstration. By embodying the new culture, leaders allow others to learn it through osmosis and start to demonstrate it themselves. And before you know it, people will react to colleague PR feedback about adding unit tests the same way someone does when they are the only person being offered a mint.
For a long time, if you wanted to do load or stress testing of your application, your first choice may have been to go to Apache JMeter. JMeter was first released in 1998, making it 25 years old. Happy Birthday, JMeter!
If software ages like dogs, that makes it as old as your grandpa. He’s no longer flashy or nimble, but he can still get the job done. However, there is always a young whippersnapper who thinks they can do the job better.
K6* (which comes from Grafana Labs) is this young upstart. It’s a command line tool written in Go that runs tests written in JavaScript. So all those young'uns will like it. No XML here, Grandpa! Unlike JMeter, there is no graphical user interface; you just write JavaScript. This makes it super simple to run as part of a Continuous Delivery pipeline.
This is what a basic test looks like:
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  discardResponseBodies: true,
  scenarios: {
    example: {
      executor: 'constant-vus',
      vus: 50,          // Run 50 Virtual Users
      duration: '60s',  // for a duration of 60 seconds
    },
  },
};

export default function () {
  const result = http.get('insert your URL to test here!');
  check(result, {
    'http response status code is 200': result.status === 200,
  });
}
See the comments inline in the above test, which hopefully make it self-explanatory. You can run it by installing K6 (brew install k6 on a Mac) and then running the test:
k6 run test.js
Please remember to only performance test services that you have permission to test.
After a minute, you’ll get a test report in your console.
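If you want that report to produce a hard pass/fail rather than just numbers, K6’s options also accept thresholds on its built-in metrics. This is a sketch; the limits below are arbitrary examples, not recommendations:

```javascript
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if the 95th percentile exceeds 500ms
    http_req_failed: ['rate<0.01'],   // fail the run if more than 1% of requests error
  },
};
```

A failed threshold makes k6 exit with a non-zero code, which is exactly what a CI pipeline needs.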
Similar to JUnit, you can do setup and teardown as part of your tests. You might use setup to put your system in a certain state before you hammer it, then use teardown to put it back again to clean up after yourself. Setup looks like this and runs once:
export function setup() {
  const result = http.post(``); // target URL omitted here
  check(result, {
    "Successfully set up test": result.status === 200,
  });
}
Teardown is very similar and also runs once:
export function teardown() {
  const result = http.post(``); // target URL omitted here
  check(result, {
    "Successfully back in original state": result.status === 200,
  });
}
There’s a whole bunch of different executors you can use to shape the traffic as you wish; they’re all described on K6's own docs page.
What I’ve found particularly helpful is the ‘ramping-vus’ executor, which allows you to control your test in a fine-grained fashion. Example:
export const options = {
  scenarios: {
    example: {
      executor: 'ramping-vus',
      startVUs: 0, // Start off with 0 virtual users
      stages: [
        { duration: '20s', target: 1000 }, // Ramp up to 1000 VUs over 20 seconds
        { duration: '10s', target: 10 },   // Back down to 10 VUs over 10 seconds
        { duration: '30s', target: 10 },   // Maintain 10 VUs for 30 seconds
      ],
    },
  },
};
Nice and easy. To give this a go with JMeter, you have to use “Thread Groups,” which represent your users. Each thread group has a number of users you can specify and the ramp-up time. This sounds similar to what we are doing in the code above. The screenshot shows the “ramp up."
The problem comes when we want to control it a bit more and reduce our users to 10. You have to start digging around in the documentation. Then you realise you need a plugin to achieve what you want to do. But before you can do that, you need to install a plugin manager by downloading a ‘jar’ file and copying it into the installation! Only then can you install the “Ultimate Thread Group” plugin, which allows you to make finer adjustments to your plan and shows a graph as you build it.
After messing around with the default Thread Group, installing a plugin, and then fiddling with that graph to get the test plan to do what I wanted, I realised I just wanted an easy life and to write a bit of code. Everything is ‘as-code’ these days; who wants to use a GUI? That’s for old people!
As previously mentioned, you can run K6 on the command line, so we can put it in a CI/CD pipeline with ease. It may be wise to have it as a manual step in the pipeline so you can run it in an ad-hoc fashion, or run it when your code is deployed to a performance test environment, for example. GitLab CI and GitHub Actions will let you do this easily. You will, however, need a test report of some form, and you may find the textual test report from K6 a bit lacking. On the other hand, JMeter allows you to get an HTML test report that you can publish as part of your pipeline. One up for JMeter there.
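As a sketch, a manual pipeline step in GitLab CI could look something like the following. The container image, file path, and job name here are assumptions, not a prescribed setup:

```yaml
load-test:
  stage: test
  image:
    name: grafana/k6   # official k6 container image
    entrypoint: [""]
  script:
    - k6 run test.js
  when: manual         # trigger ad hoc rather than on every pipeline run
```

Because k6 fails with a non-zero exit code when a test errors, the job status reflects the result without extra wiring.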
K6 can output the test results to other formats, such as a JSON file, so you can transform it into something you want. It also supports the ability to send the output to Prometheus using ‘Remote Write.’ This then allows you to run a Prometheus query on your test result metrics. Building yourself a Grafana dashboard from these metrics to display test results becomes possible.
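For example, the JSON output is newline-delimited, with each line describing a metric sample, so a small script can reduce it to whatever summary you need. The sample lines below are hand-written stand-ins for real k6 output (real records carry more fields), so treat the exact layout as an assumption:

```javascript
// Summarize k6-style JSON output: keep http_req_duration points and average them.
// The sample imitates k6's newline-delimited JSON; real output has more fields.
const sample = [
  '{"type":"Point","metric":"http_req_duration","data":{"value":120.5}}',
  '{"type":"Point","metric":"http_req_duration","data":{"value":95.5}}',
  '{"type":"Point","metric":"http_reqs","data":{"value":1}}',
].join('\n');

const durations = sample
  .split('\n')
  .map((line) => JSON.parse(line))             // one JSON record per line
  .filter((p) => p.type === 'Point' && p.metric === 'http_req_duration')
  .map((p) => p.data.value);

const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
console.log(`avg http_req_duration: ${avg}ms`);
```

The same shape of script could just as easily emit CSV for a spreadsheet or a payload for a dashboard.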
Given K6’s low memory requirements compared to the alternatives, you can generate quite a lot of load with it. You will reach a point, though, where your laptop cannot generate enough load due to factors such as network bandwidth, CPU load, or memory. You could run it from multiple laptops, but the more sensible approach is running it in the cloud.
One option is Grafana Cloud, where scaling is taken care of for you, at a cost. Another is Kubernetes: you can install a “K6 operator” into your cluster, which lets you treat your K6 tests as normal Kubernetes resources, so it works great if you are already on that platform.
JMeter requires heavy use of the UI, which still looks like something made in the 1990s and hasn’t embraced ‘as-code.’ In JMeter 5.6, there is an experimental feature to write tests as code, so it’ll be interesting to see how this develops.
K6 is written in Go, so it has a very small executable and starts up quickly. You install it and run it. This makes it very easy to run in a CI/CD pipeline.
JMeter, on the other hand, is significantly larger and has a bigger memory footprint due to the GUI. Also, running on the JVM adds an overhead in terms of startup time and memory. The GUI is meant to be used for designing tests and not running a load test, however. There is a CLI version of JMeter, so you can use it in a CI/CD pipeline.
K6 has a very small learning curve. Even though JMeter has a GUI, it’s not as easy to learn, counterintuitively.
If you want test reports from K6, you’re going to need to do some work to get a nice report (or spend some money). With JMeter, it’s built-in.
Gatling is another tool that allows you to write tests ‘as-code’ in Scala, which is not to everyone’s taste; it now also supports Kotlin, which is probably more palatable. Like JMeter, it runs on the Java Virtual Machine, so it carries some of Java’s baggage. It can also create HTML test reports and provides a paid plan to let you scale your tests.
Along with unit testing and other types of testing commonly found in the test pyramid, performance testing should complement your current testing strategy, and K6 is an easy-to-use tool to achieve this.
As for JMeter, there’s life in the old dog yet!
* Note: If you turn the 6 upside down in K6, you get K9. Coincidence? :-)
Unlike some of my colleagues, I’m not a data expert, so this was an opportunity for me to get a feel for the industry first hand. With the advent of accessible AI and LLMs, data engineering is quickly increasing in importance. And although there’s a market need for all teams to be more literate in this space, the ability to capture and harness good quality data continues to be the real work that needs doing.
Inspiring sessions from Jaguar TCS, the Department of Education, and Women in Data prove that great innovation is already happening. Jaguar are using Formula E track data to improve not only electric car performance but also efficiency for their upcoming exclusively electric consumer vehicles. The DoE now has accurate data on nationwide school attendance by midday, every single day. The Women in Data panel showcased the organization’s growth to more than 33,000 members, strengthened by the presence of key female role models and evidenced by representation from the Met Police, Dstl, and the MoD.
A central theme among several speakers was the number of data teams struggling with their budgets. Niamh O’Brien from Fivetran stated that tech departments typically have a budget 10 times that of their data teams. Strange that the level of investment in AI and LLMs doesn’t match the current hype, especially with everyone looking to integrate AI into their products. It’s likely the growth potential of leveraging AI will force companies to invest more in data.
The pushback to investing in data projects seems to be the unreliable return on investment. Even if we acknowledge there is some inherent risk in data projects due to the exploratory nature of their outcomes, it doesn’t account for the staggering statistic that 85% of data projects fail. Leveraging AI does nothing to address the failure rate either, so everyone’s left wondering what can be done.
Enter Jesse Anderson with one of the most popular talks, “Why Most Data Projects Fail.” Speaking to a packed, standing-room-only audience as fervent as a pack of Swifties, Jesse gave what amounted to Agile project management 101: deliver what you can first, watch out for scope creep, and make sure you’ve got a balanced team. It was all absolutely correct, absolutely obvious, and, to my surprise, a topic that still needed covering.
“Choice of data platform isn’t the issue, projects don’t fail because you chose Snowflake over Databricks” said Jesse. However, it’s easy to see why people might get bogged down by that decision.
This diagram from FirstMark Venture Capital appeared in at least five presentations, and for good reason. It represents probably the most significant challenge facing beginners to the industry. As I walked around the exhibit, it was impossible to distinguish the offerings of many of the vendors, with a number of stands offering the “solution to all data problems”. Exhibitors fell generally into one of two camps: “we can do everything end to end” and “we can do one specific thing well, but we can also do everything else”. Although the diagram suggests everyone falls into distinct categories, I don’t think many of the vendors would put themselves in those restrictive boxes.
It struck me as an opportunity, as demonstrated by Jesse Anderson, to be an advisor in this space. For struggling data teams, and especially for newcomers, finding a partner to help cut through the noise in this increasingly complex space is essential.
I look forward to going back next year, a little better educated, and more equipped to cut through the noise. It’ll be exciting to see how much has changed in a space that is poised to expand exponentially with the advent of AI.
As the industry moves towards more accessible data science, I’m sure we’ll see continued emphasis on the fundamentals of data quality and engineering. The industry wants business stakeholders to be using AI/LLMs to complete tasks that were previously only feasible by engineers. Excitingly, OpenAI’s DevDay proves that we are already close to that reality.