Healthcare data is growing in volume, velocity and complexity. Systems designed for yesterday’s workloads are increasingly difficult to scale, optimize, and operate cost-efficiently.
At HIMSS ’26, our CTO and Co-Founder James Agnew showed how Smile is architecting technology capable of supporting petabytes of healthcare data while maintaining performance and enabling real-time insights.
Did you miss James’ Planetary Scale video? Watch it here.
It is time to go beyond simply storing massive volumes of data. Intelligent data that works for us is not one-dimensional.
Use the questions below to evaluate how your current architecture performs across all three dimensions of scale: data volume, query velocity and mixed data complexity (a downloadable version is available below).
Can your architecture scale from hundreds of terabytes to petabytes without replatforming?
Are you optimizing storage by separating high-cost database storage from lower-cost object storage for large binaries and historical data (PDFs/CDAs)?
Do you have a strategy for managing "cold" data while keeping it accessible?
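To make the storage-separation question concrete, here is a minimal sketch of the pattern it describes: the repository's primary database keeps only a lightweight FHIR DocumentReference, while the large binary (a PDF or CDA) lives in low-cost object storage. The object-store URL and patient ID below are hypothetical, and the helper function is ours for illustration, not a specific product API.

```python
import json

def document_reference_for_object_store(patient_id: str, object_url: str,
                                        content_type: str = "application/pdf") -> dict:
    """Build a FHIR DocumentReference whose attachment points at a binary in
    object storage, instead of embedding the base64 payload in the database.
    The object_url here is a hypothetical object-store path."""
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "subject": {"reference": f"Patient/{patient_id}"},
        "content": [{
            "attachment": {
                "contentType": content_type,
                # Pointer to "cold" storage; the primary database stores only
                # this small reference, keeping high-cost storage lean.
                "url": object_url,
            }
        }],
    }

doc = document_reference_for_object_store(
    "12345", "s3://clinical-archive/cda/12345/discharge-summary.pdf")
print(json.dumps(doc, indent=2))
```

Because the document stays addressable through a standard FHIR resource, "cold" data remains searchable and retrievable even though its bytes live outside the database.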
As your user base grows, can your repository maintain lightning-fast response times while servicing thousands of concurrent queries and live users every second?
How does your system perform under extreme concurrency?
Can your system execute highly expressive, detailed FHIR queries without performance degradation?
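As an example of what "highly expressive" means here, the FHIR search specification supports token searches, comparator prefixes on quantities and dates, chained reference parameters, sorting and paging in a single request. The sketch below composes such a query; the server base URL, practitioner ID and clinical values are illustrative assumptions, while the parameter syntax follows the FHIR R4 search spec.

```python
from urllib.parse import urlencode

FHIR_BASE = "https://fhir.example.org/fhir"  # hypothetical FHIR server base URL

def build_search_url(resource: str, params: dict) -> str:
    """Compose a FHIR search URL from standard search parameters."""
    return f"{FHIR_BASE}/{resource}?{urlencode(params)}"

# HbA1c observations above 9% since the start of the year, restricted to
# patients whose general practitioner is a given clinician (chained search).
url = build_search_url("Observation", {
    "code": "http://loinc.org|4548-4",                    # token search on LOINC code
    "value-quantity": "gt9",                              # comparator prefix on a quantity
    "date": "ge2025-01-01",                               # date range prefix
    "subject.general-practitioner": "Practitioner/789",   # chained reference parameter
    "_sort": "-date",
    "_count": "100",
})
print(url)
```

A query like this touches multiple indexes at once, which is exactly where performance under concurrency gets tested.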
Does your organization currently use separate silos for structured clinical data and unstructured documents like PDFs or faxes?
Are you able to search, retrieve and act on data from clinical documents, CDAs and PDFs?
Can you perform population-scale intelligence without data extraction?
Can you execute analytics directly against your repository without exporting data?
Can your current platform execute complex Clinical Quality Language (CQL) analysis—such as HEDIS measure calculations or care gap analysis—at population scale?
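Running CQL-backed measures in place, rather than exporting data, is what FHIR's `$evaluate-measure` operation is for: the server evaluates the measure and returns a MeasureReport. The sketch below builds such a request; the endpoint and measure ID are hypothetical, while the operation name and its `periodStart`, `periodEnd` and `reportType` parameters come from the FHIR R4 specification.

```python
from urllib.parse import urlencode

FHIR_BASE = "https://fhir.example.org/fhir"  # hypothetical repository endpoint

def evaluate_measure_url(measure_id: str, period_start: str, period_end: str) -> str:
    """Build the URL for FHIR's $evaluate-measure operation, which asks the
    server to run a CQL-backed measure (e.g. a HEDIS-style quality measure)
    in place and return a MeasureReport, with no bulk export required."""
    params = urlencode({
        "periodStart": period_start,
        "periodEnd": period_end,
        "reportType": "population",  # population-level aggregate report
    })
    return f"{FHIR_BASE}/Measure/{measure_id}/$evaluate-measure?{params}"

url = evaluate_measure_url("hedis-cdc-hba1c", "2025-01-01", "2025-12-31")
print(url)
```

With `reportType=population`, a single call yields numerator/denominator counts across the whole cohort, which is the shape a care-gap or HEDIS calculation needs.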
Are you receiving quarterly, named releases that provide incremental improvements to transaction speeds and parallel processing as your data volume grows?
Is your data platform cost-effective as volume grows?
Does your architecture require duplication of data across systems to support analytics?
We are always looking for partners to push the boundaries of what is possible with data.