Most health analytics startups build a dashboard first and figure out the analytics later. We did the opposite. We built three validated prediction models first (diabetes risk, breast cancer recurrence, and osteoporosis risk) and deployed them as live demos before building any general-purpose platform features.
Each demo proves something specific
The diabetes demo proves we can take a published logistic regression model, implement the exact coefficients and standardisation parameters, deploy it as a live interactive tool, and produce results that match the original R Shiny application. That is a validation test, not a demo.
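The core computation described here is a standard logistic regression score: standardise each input with the published mean and standard deviation, apply the published coefficients, and pass the linear predictor through the logistic function. The sketch below illustrates that workflow with placeholder numbers; the coefficients, means, and standard deviations are invented for illustration and are not the published model's values.

```python
import math

# Placeholder parameters -- illustrative only, NOT the published model.
INTERCEPT = -5.2
COEFFS = {"age": 0.045, "bmi": 0.09, "glucose": 0.03}
# (mean, sd) pairs used to standardise each input, as in the source publication.
STANDARDISE = {"age": (50.0, 12.0), "bmi": (27.0, 4.5), "glucose": (100.0, 15.0)}

def predicted_risk(inputs: dict) -> float:
    """Standardise inputs, apply coefficients, return a probability."""
    linear = INTERCEPT
    for name, value in inputs.items():
        mean, sd = STANDARDISE[name]
        linear += COEFFS[name] * (value - mean) / sd
    return 1.0 / (1.0 + math.exp(-linear))
```

Matching the original application then reduces to checking that this function, loaded with the exact published parameters, reproduces the reference probabilities.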
The breast cancer demo proves the same workflow works for a different disease, different clinical variables, and a different modelling context. The osteoporosis demo proves it works for a third condition with more complex inputs, including categorical variables such as gender and smoking status.
Three demos across three diseases, three populations, and three variable structures establish a pattern. That pattern becomes the blueprint for every future model deployment on the platform.
Each demo also tested a different aspect of the presentation layer. The diabetes demo established the core pattern: input form, score ring, risk band, key drivers, interpretation text, and reference citations. The breast cancer demo tested whether the same pattern worked for a recurrence prediction (a different clinical question from risk assessment). The osteoporosis demo tested categorical variable handling and a larger variable set. By the third demo, the pattern was validated across enough variation to be confident it would generalise to new models without fundamental redesign.
The validation process for each demo was explicit. We compared the demo output against the original R Shiny application (for the diabetes model) or against manual calculations from the published coefficients (for all three models) using a set of 20 test cases per model. Each test case specified input values and the expected probability to four decimal places. The demo had to match within rounding tolerance before it was considered ready to deploy. This is not a formality. It is how we ensure the live tool computes the same thing as the published research.
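A minimal harness for this kind of check might look like the following. The test case values and the tolerance constant are hypothetical stand-ins; the real validation used 20 cases per model with expected probabilities taken from the original R Shiny application or manual calculation.

```python
TOLERANCE = 5e-5  # "match to four decimal places within rounding tolerance"

TEST_CASES = [
    # (inputs, expected probability) -- placeholder values for illustration
    ({"age": 55, "bmi": 30.0, "glucose": 110}, 0.0061),
]

def validate(model_fn, cases, tol=TOLERANCE):
    """Return every case where the model output misses the expected value."""
    failures = []
    for inputs, expected in cases:
        got = model_fn(inputs)
        if abs(got - expected) > tol:
            failures.append((inputs, expected, got))
    return failures
```

A deployment gate is then a one-liner: the demo ships only if `validate(model, cases)` returns an empty list.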
Demos establish product patterns
Each demo created reusable architecture: a model specification format, a workbench UI pattern, a result display with risk bands and key drivers, interpretation text, and reference citations. When the fourth model arrives, the engineering work is mostly done. The new model fits into the established pattern.
This is not just efficiency. It is product discipline. Every demo looks and behaves consistently. Users who understand one demo can use the next one without relearning the interface. Partners who have evaluated one demo can trust that the next one meets the same standard.
The model specification format is worth describing. Each model is defined as a JSON-like configuration object that includes: the variable names and their display labels, the coefficient for each variable, the intercept, the mean and standard deviation for standardisation, the valid input range for each variable, the risk band thresholds, and the reference citation. Adding a new model means creating a new specification object and writing the interpretation text. The platform handles everything else: input validation, score computation, key driver analysis, and result display.
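To make the description concrete, here is a sketch of what such a specification object and its generic scorer could look like. The field names, example model, cutoffs, and citation text are all invented for illustration, not the production schema.

```python
import math

# Hypothetical specification object; every value below is a placeholder.
SPEC = {
    "name": "diabetes_risk",
    "intercept": -5.2,
    "variables": {
        "age": {"label": "Age (years)", "coef": 0.045,
                "mean": 50.0, "sd": 12.0, "range": (18, 90)},
        "bmi": {"label": "BMI (kg/m2)", "coef": 0.09,
                "mean": 27.0, "sd": 4.5, "range": (15, 60)},
    },
    "risk_bands": [(0.10, "low"), (0.25, "moderate"), (1.01, "high")],
    "reference": "placeholder citation",
}

def score(spec, inputs):
    """Validate inputs, compute probability, risk band, and key drivers."""
    contributions = {}
    linear = spec["intercept"]
    for name, var in spec["variables"].items():
        value = inputs[name]
        lo, hi = var["range"]
        if not lo <= value <= hi:
            raise ValueError(f"{name} out of range [{lo}, {hi}]")
        contributions[name] = var["coef"] * (value - var["mean"]) / var["sd"]
        linear += contributions[name]
    prob = 1.0 / (1.0 + math.exp(-linear))
    band = next(label for cutoff, label in spec["risk_bands"] if prob < cutoff)
    drivers = sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True)
    return {"probability": prob, "band": band, "key_drivers": drivers}
```

The point of the design is visible here: `score` never mentions a specific disease, so adding a fourth model is a data change, not a code change.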
This architecture also supports model versioning. When a researcher publishes updated coefficients, we create a new specification object with the updated values. The previous version remains accessible for anyone who needs to reproduce results generated with the earlier model. Version history is visible to users so they know which version of the model they are using and when it was last updated.
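One way to sketch the versioning scheme is a registry keyed by model name and version, where each published update becomes a new immutable entry and older entries stay retrievable. The names, version strings, and values below are hypothetical.

```python
# Hypothetical version registry; entries are never mutated, only added.
REGISTRY = {
    ("diabetes_risk", "1.0"): {"intercept": -5.2},
    ("diabetes_risk", "1.1"): {"intercept": -5.05},
}

def get_spec(name, version=None):
    """Return the requested version, or the latest one if none is given."""
    versions = sorted(v for n, v in REGISTRY if n == name)
    if not versions:
        raise KeyError(name)
    return REGISTRY[(name, version or versions[-1])]
```

Reproducing an old result means pinning the version explicitly, e.g. `get_spec("diabetes_risk", "1.0")`, while new scoring defaults to the latest entry.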
In summary, each demo contributed to a shared pattern:

- Standardised model specification: coefficients, means, standard deviations, variable names
- Consistent UI: input fields, score ring, risk bands, key drivers, interpretation text
- Reference citations on every demo
- Disclaimer and scope statements appropriate to each condition
What we learned from the demo-first approach
Building demos before the platform forced us to solve the hard problems first. The hardest problem in health analytics is not building a dashboard. It is ensuring that the analytical output is correct, calibrated, appropriately scoped, and presented in a way that supports responsible decision-making. The demos required us to solve all of those problems before we wrote a single line of platform code.
The demo-first approach also gave us something concrete to show institutional partners during early conversations. A live, validated prediction tool is a stronger demonstration of capability than a slide deck or a product roadmap. Partners could interact with the tool, enter their own test cases, and verify the output against published literature. That kind of hands-on evaluation builds confidence faster than any marketing material.
We also discovered that the demo pattern defined the requirements for the platform. The need for project-scoped access came from thinking about how a partner would deploy a custom model (their data, their coefficients, their patients). The need for file exchange came from thinking about how model specifications would be delivered. The need for audit logging came from thinking about how an institution would verify that the deployed model matched the published research. The platform was shaped by the demos, not the other way around.
The dashboard comes after the analytics
Now that the analytical foundation is proven, the platform features (project management, file exchange, invoicing, messaging, team collaboration) are built around it. The dashboard serves the analytics, not the other way around.
This order matters because it means the platform was designed to support validated analytical workflows from the start. It is not a generic project management tool that had analytics bolted on as an afterthought.
The platform's architecture reflects its analytics-first origin. Every project has a lifecycle that mirrors a real analytical engagement: inquiry, quoting, data receipt, analysis, review, delivery, and completion. File categories distinguish between data uploads, results, and reports because those distinctions matter for access control and workflow automation. The messaging system supports internal notes (visible only to the team) and client-facing messages (visible to the partner) because analytical work requires both internal coordination and external communication.
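The lifecycle stages listed above can be sketched as a simple forward-moving state machine. This is an assumption-laden simplification: the stage names come from the text, but the strictly linear transition rule is invented here, and the real platform may well allow moves backwards (for example, from review back to analysis).

```python
from enum import Enum

class Stage(Enum):
    INQUIRY = "inquiry"
    QUOTING = "quoting"
    DATA_RECEIPT = "data receipt"
    ANALYSIS = "analysis"
    REVIEW = "review"
    DELIVERY = "delivery"
    COMPLETION = "completion"

ORDER = list(Stage)  # declaration order defines the lifecycle sequence

def advance(stage: Stage) -> Stage:
    """Move a project to the next lifecycle stage (linear sketch only)."""
    i = ORDER.index(stage)
    if i == len(ORDER) - 1:
        raise ValueError("project already complete")
    return ORDER[i + 1]
```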
None of these features were designed in the abstract. They emerged from the experience of building and deploying three validated analytical tools and thinking carefully about what the next ten deployments would require. That is why the platform feels purpose-built for health analytics rather than adapted from a generic template.