Architecture

PHC Data Flow

LifeOmic is Cloud Native

  • Designed for the cloud using true multi-tenant architecture

  • Auto scaling across multiple data centers in multiple regions around the world

  • LifeOmic services deployed inside private subnets of Virtual Private Cloud (VPC)

  • Adheres to strict security and compliance standards (HIPAA, HITRUST, etc.)

Benefits of Cloud Architecture

  • Infrastructure is tailored to our customers' goals and usage patterns

  • "shared use" model reduces cost

  • Nearly infinite compute and data capacity via AWS cloud provider

  • Customers can focus on solving business problems and not worry about infrastructure

  • Automatic backup and recovery

  • Continuous improvements via change control process

  • Faster adoption of new technology

  • Increased security due to a secure base platform and best practices

Evolution of Cloud Computing

  1. Bare metal

    • A computer in someone else's data center
  2. Virtual Machine

    • A portion of a computer in someone else's data center
    • In AWS, a Virtual Machine is created from Amazon Machine Image (AMI)
  3. Container

    • A package of essential application source code and libraries, but not the core OS or platform libraries

    • Easier to scale container image than an entire virtual machine

    • No duplication of core OS processes (networking, filesystem, etc)

    • Docker is the typical container format today; other formats may become viable

  4. Function

    • An independent piece of code that runs in a pre-built container

LifeOmic strives to leverage AWS Lambda functions as the primary building blocks for the following reasons:

  • Functions deploy more quickly than containers and virtual machines.

  • AWS can automatically scale Lambda functions based on the number of incoming invocations and our concurrency settings.

  • Functions are short-lived processes which minimize attack surface.
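
A minimal sketch of such a function, written as an AWS Lambda handler in Python. The event shape and function name are illustrative (an API Gateway-style request is assumed); real events depend on the trigger.

```python
import json

def handler(event, context):
    """Minimal Lambda handler sketch: parse the request, do a small
    unit of work, and return an API Gateway-style response."""
    # Illustrative payload shape; actual events depend on the trigger.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is stateless, AWS can run many copies of it concurrently and retire each process after use, which is what enables both the automatic scaling and the short-lived-process security property above.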

Advanced Analytics and Machine Learning

  • Big data batch jobs via Apache Spark and other tools
  • Machine learning
  • Run unsupervised and supervised learning models
  • Data visualization
  • Store output in S3 or indexed datastore
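
The batch-job pattern can be sketched in plain Python as a map/reduce over records; this is a toy stand-in for illustration (a production job would run on Apache Spark, and the record fields here are hypothetical).

```python
from collections import defaultdict

def batch_aggregate(records):
    """Toy stand-in for a Spark-style batch job: group observation
    records by patient and average a numeric value."""
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in records:  # map phase: accumulate per key
        sums[rec["patient"]] += rec["value"]
        counts[rec["patient"]] += 1
    # reduce phase: emit one averaged row per patient
    return {p: sums[p] / counts[p] for p in sums}
```

The output of a job like this would then be written to S3 or an indexed datastore for visualization.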

Security and Data Protection

  1. Zero-trust security model

    Granular segregation and policy enforcements with no "keys to the kingdom" and therefore no single points of compromise.

  2. "Air-Gapped" environments meet short-lived processes

    Fully isolated sandboxes to prevent accidental or malicious access. No direct administrative or broad network connectivity, such as VPN or SSH access, into production. Processes are short-lived and killed after use, minimizing the persistent attack surface and making compromise extremely difficult.

  3. Need-based temporary access

    Access to critical systems and resources is closed by default, granted on demand, and protected by strong multi-factor authentication.
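
The "closed by default, granted on demand" idea can be sketched as a grant object with a time-to-live; the class and TTL below are illustrative, not LifeOmic's actual access-control implementation.

```python
import time

class TemporaryGrant:
    """Illustrative need-based access grant: opened on demand and
    expired automatically once its TTL elapses."""

    def __init__(self, resource: str, ttl_seconds: float):
        self.resource = resource
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        # Access is denied the moment the grant expires.
        return time.monotonic() < self.expires_at

grant = TemporaryGrant("prod-database", ttl_seconds=0.05)
```

In practice the same pattern appears in AWS as short-lived STS credentials rather than a hand-rolled class.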

  4. Immutable builds

    Infrastructure as code. Security scan of every build. Full traceability from code commit to production. "Hands-free" deployment ensures each build is free from human error or malicious contamination.
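
One ingredient of build traceability is tying each artifact to the commit that produced it via a content hash, so any later tampering is detectable. A minimal sketch (the record format and commit ID are hypothetical):

```python
import hashlib

def build_record(commit_sha: str, artifact_bytes: bytes) -> dict:
    """Bind a build artifact to its source commit by recording a
    SHA-256 digest; modifying the artifact changes the digest."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {"commit": commit_sha, "artifact_sha256": digest}

record = build_record("a1b2c3d", b"example build output")
```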

  5. End-to-end data protection

    Data is safe both at rest and in transit, using strong encryption and key management.
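
For data in transit, the standard building block is TLS with certificate and hostname verification enforced. A minimal sketch using Python's standard library (the TLS 1.2 floor is an assumption about policy, not a statement of LifeOmic's exact configuration):

```python
import ssl

# Enforce modern TLS for data in transit: certificate validation on,
# hostname checking on, and TLS 1.2 as the minimum protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Data at rest is handled separately, typically with provider-managed encryption and key management (e.g., AWS KMS).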

  6. Strong yet flexible user access

    Our platform supports OpenID Connect, SAML and multi-factor authentication, combined with fine-grain attribute-based authorization.
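
Attribute-based authorization decides access by comparing user attributes against resource attributes rather than consulting a fixed role list. A deny-by-default sketch (the attribute names `tenant`, `sensitivity`, and `clearances` are hypothetical):

```python
def abac_allow(user_attrs: dict, resource_attrs: dict, action: str) -> bool:
    """Attribute-based check sketch: a read is allowed only when the
    tenants match and the user holds the resource's sensitivity level."""
    if action == "read":
        return (user_attrs.get("tenant") == resource_attrs.get("tenant")
                and resource_attrs.get("sensitivity", "low")
                    in user_attrs.get("clearances", []))
    return False  # deny by default for unknown actions
```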

  7. Watch everything, even the watchers

    All environments are monitored; all events are logged; all alerts are analyzed; all assets are tracked. Security agents are baked into standard system images and auto-installed on all active systems and endpoints. No privileged access without prior approval or full auditing. We even have redundant solutions to "watch the watchers".

  8. Usable security

    All employees receive security awareness training not annually, but monthly. Combined with simplicity and usability, we ensure our security policies, processes, and procedures are followed without the need to get around them. No "Shadow IT".

  9. Centralized and automated operations

    API-driven cloud-native security fabric that centrally monitors security events, automates compliance audits, and orchestrates near real-time risk management and remediation.

  10. Regulatory compliant and hacker verified

    Fully compliant with HIPAA / HITECH / HITRUST. Verified by white-hat hackers.

Logging, Metrics, and Alerts

  • Track everything
  • Report alerts
  • Developers on-call for high error rates or critical alerts
  • Public-facing status page
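
A simple illustration of the alerting side: log everything, then page on-call when an error-rate threshold is crossed. The 5% threshold and logger name below are assumptions for the sketch.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("phc.example")

def check_error_rate(errors: int, total: int, threshold: float = 0.05) -> bool:
    """Alerting rule sketch: return True (page on-call) when the error
    rate over a window exceeds the threshold."""
    rate = errors / total if total else 0.0
    if rate > threshold:
        log.error("error rate %.1f%% exceeds threshold", rate * 100)
        return True
    log.info("error rate %.1f%% within normal bounds", rate * 100)
    return False
```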

Usage-based Billing

  • The LifeOmic platform automatically provisions compute resources based on the job criteria.

  • Pay for what you use

    • API calls
    • Storage
    • Compute resources
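
Pay-for-what-you-use means the bill is a straight sum of metered usage across those dimensions. The unit prices below are hypothetical placeholders for illustration, not LifeOmic's actual rates.

```python
# Hypothetical unit prices for illustration only.
PRICES = {
    "api_call": 0.000004,          # per call
    "storage_gb_month": 0.023,     # per GB-month
    "compute_gb_second": 0.0000167 # per GB-second
}

def monthly_bill(api_calls: int, storage_gb: float, compute_gb_seconds: float) -> float:
    """Usage-based billing sketch: charge only for metered consumption."""
    return round(
        api_calls * PRICES["api_call"]
        + storage_gb * PRICES["storage_gb_month"]
        + compute_gb_seconds * PRICES["compute_gb_second"],
        2,
    )
```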

Uninterrupted Service

The LifeOmic PHC complies with the Amazon Web Services Well-Architected Framework guidelines for high availability, which include:

  • All server clusters (databases, Elasticsearch, etc.) have at least two instances in an active/active setup spread across at least two availability zones. Each availability zone is a geographically separate data center.

  • Data at rest is stored in high-availability AWS services (DynamoDB, S3, Aurora) that store the data redundantly across at least three availability zones, which keeps the data highly available and durable even in the face of the loss of an entire cluster.

  • Wherever possible, a serverless architecture that uses AWS CloudFront, API Gateway, and Lambda to implement stateless microservices, so that the loss of a server or an entire data center is irrelevant. API Gateway and Lambda run on distributed clusters spread across multiple availability zones, and CloudFront runs on a distributed cluster spread across multiple regions.

  • "Design for Failure" - Always plan for failure and provide mechanisms to retry or rollback changes.

  • Rolling updates to services for zero downtime.

  • Load balancing and health checks to automatically remove and replace unhealthy instances.
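
The "Design for Failure" retry mechanism can be sketched as exponential backoff with jitter; the attempt count and delays below are illustrative defaults.

```python
import random
import time

def with_retries(operation, max_attempts: int = 3, base_delay: float = 0.01):
    """'Design for Failure' sketch: retry a transient operation with
    exponential backoff and jitter before surfacing the failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted - surface the failure
            # Exponential backoff with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))
```

The jitter matters in a distributed system: without it, many clients that failed together would retry together and overwhelm the recovering service.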


Last update: January 13, 2020