Free webinar in English | Two sessions in May 2026 | Online
Choose the session that fits your time zone — same content, same real experiment.
$3/min for the psychic hotline. $17/min for your meeting with 10 developers.
And that's only half the equation.
Over recent years, I've witnessed this across countless organisations: meetings cost you twice. Once for everyone's salary in the room. And once for the work that doesn't happen during that time.
Not because your team is too slow. But because your QA sits in the same location as your developers.
Every Scrum Master knows the pattern: the last two days of a sprint belong to QA. Developers twiddle their thumbs or start new stories that are guaranteed not to finish. The result: carry-over. Sprint after sprint.
The usual response? Form a separate QA team that works "vertically" across multiple Scrum teams. That solves one problem—and creates three new ones.
Over recent years, I've repeatedly observed the same pattern across my projects. It became particularly evident when I was managing multinational teams building a company-wide data platform in automotive development, where vertical QA structures showed their full impact: the QA bottleneck at sprint end isn't a capacity problem. It's a timezone problem.
Fixed price or Time & Material? Over the last few years, I've accompanied dozens of IT projects where this question became a point of contention – often before a single line of code was written. Purchasing departments insist on fixed prices because budget certainty simply isn't negotiable. Project leaders, meanwhile, know that requirements in complex software projects will inevitably change. The result: contracts that are formally fulfilled "successfully" but deliver something that no longer matches the actual need.
When 25 minutes of downtime can mean seven-figure revenue losses
Friday morning, 5 December 2025. Once again, thousands of websites worldwide display nothing but "500 Internal Server Error". Once again, Cloudflare is the culprit. And once again – just as on 18 November – it catches businesses completely unprepared.
The latest outage lasted officially about 25 minutes, but affected around 28% of all HTTP traffic that Cloudflare processes. The November outage was significantly more severe: for over four hours, services like ChatGPT, X (formerly Twitter), Discord, PayPal and countless other platforms were unreachable. Cloudflare themselves called it the worst outage since 2019.
The mandate is clear: "We need a new ERP system." Perhaps support for your current system is ending. Perhaps your processes have become so convoluted that nobody can make sense of them anymore. Or you're a growing company starting with a proper ERP for the first time. The goal is clear—but how do you get from a blank page to a tender that actually delivers what you need?
Over the past few years, I have supported several OKR implementations – from retail corporations to mid-sized companies. A recurring pattern emerged: the enthusiasm for OKRs was there and the methodology was convincing, but implementation stalled at an unexpected point – selecting the tool.