Unified Capture Design Language (UCDL)

Company: Advanced Health Intelligence (AHI)

Complexity: 9.231/10

Fun factor: 9.632/10

Project details

  • Platform: iOS, Android, Web
  • Design system management: Supernova.io
  • Product type: Design system (SDK)
  • Design tool: Figma

What is it?

  • A design system. The Unified Capture Design Language (UCDL) provides a robust set of modular building blocks for all new and existing platforms, creating a familiar experience for biometric capture.

What did I do?

  • Discovery, research, documentation, prototypes, presentations.

What problem does it solve?

  • Addressed systemic interface fragmentation across licensed third-party technologies (FaceScan, DermaScan) which violated the Law of Similarity, resulting in high extraneous cognitive load and diluted brand integrity.
  • Resolved the Gulf of Execution inherent in remote biometric capture, where users struggled to translate system goals into physical positioning tasks due to a lack of perceptible affordances.

What components were created?

  • All Scans: On-screen messaging, camera frame, camera outline, countdown (circular, numeric pre-countdown, timer)
  • FaceScan: Scene indicator (stars)
  • BodyScan: Alignment
  • BodyScan, DermaScan: Camera flash

Challenges & resolution

  • Proxemic visual acuity: Standard UI components lost legibility when transitioning between near-field (FaceScan, ~30cm) and far-field (BodyScan, ~3m) contexts.
    • Resolution: Engineered algorithmic scaling modifiers (multipliers) within the token system, ensuring optical consistency and accessible contrast ratios regardless of physical user distance.
  • White-label governance risks: Unrestricted partner customisation threatened to compromise usability heuristics and biometric capture accuracy.
    • Resolution: Architected a semantic token layer that permits brand alignment (colour, typography) while locking critical functional properties, preserving the integrity of the capture experience.
  • Heterogeneous tech stack integration: Amalgamating rigid third-party SDKs with proprietary tech created significant technical debt and visual dissonance.
    • Resolution: Developed an abstraction layer that wraps external dependencies in UCDL-compliant containers, enforcing a unified front-end logic without altering core backend mechanics.
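The abstraction-layer idea above can be sketched in code. This is a hypothetical illustration only: the interface names, SDK shape, and status codes are invented for the example, not AHI's actual APIs.

```typescript
// Hypothetical sketch: wrapping a rigid third-party capture SDK in a
// UCDL-compliant container. All names here are illustrative.

// The unified contract every capture surface exposes to the host app.
interface UCDLCaptureView {
  start(): void;
  stop(): void;
  onStatus(cb: (status: "aligning" | "capturing" | "done") => void): void;
}

// Shape of a third-party SDK we cannot modify (assumed for the example).
interface ThirdPartyFaceSDK {
  begin(config: { cameraId: string }): void;
  end(): void;
  registerCallback(cb: (code: number) => void): void;
}

// Adapter: translates the vendor's numeric status codes into UCDL
// semantics without altering the vendor's core capture mechanics.
class FaceScanContainer implements UCDLCaptureView {
  constructor(private sdk: ThirdPartyFaceSDK) {}

  start(): void {
    this.sdk.begin({ cameraId: "front" });
  }

  stop(): void {
    this.sdk.end();
  }

  onStatus(cb: (status: "aligning" | "capturing" | "done") => void): void {
    this.sdk.registerCallback((code) => {
      const map = { 0: "aligning", 1: "capturing", 2: "done" } as const;
      cb(map[code as 0 | 1 | 2] ?? "aligning");
    });
  }
}
```

Because each vendor SDK hides behind the same `UCDLCaptureView` contract, the front-end logic stays unified while backend mechanics remain untouched.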

Strategic Unification

  • Synthesised disparate biometric modalities into a cohesive design ecosystem to resolve interface inconsistencies and establish cross-platform semantic continuity.
  • Executed a heuristic evaluation of legacy interfaces to define a token-based unified visual syntax, enhancing brand uniformity and strengthening product differentiation.

System Architecture & Tokenisation

  • Architected a semantic design token system to abstract visual primitives, enabling programmatic scalability and robust white-label configurations for B2B partner integrations.
  • Standardised atomic components and interaction patterns to ensure functional interoperability across web and mobile SDKs, optimising engineering integration and maintainability.
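A two-tier token model like the one described could look roughly like this. The token names and values are hypothetical placeholders, not UCDL's real tokens; the point is the split between a brand tier partners may override and a locked functional tier they may not.

```typescript
// Hypothetical sketch of a white-label-safe token system.
// Token names and values are illustrative.

type BrandTokens = {
  "brand.color.primary": string;
  "brand.font.family": string;
};

// Locked functional tokens: never exposed to partner configuration,
// because they affect legibility and capture accuracy.
const FUNCTIONAL_TOKENS = {
  "capture.frame.strokeWidth": 4,       // px
  "capture.countdown.durationMs": 3000, // tied to the capture pipeline
} as const;

const DEFAULT_BRAND: BrandTokens = {
  "brand.color.primary": "#0A84FF",
  "brand.font.family": "Inter",
};

// Partners may patch only the brand tier; spreading FUNCTIONAL_TOKENS
// last guarantees functional values always win.
function resolveTokens(overrides: Partial<BrandTokens>) {
  return { ...DEFAULT_BRAND, ...overrides, ...FUNCTIONAL_TOKENS };
}
```

Spreading the functional tier last is the enforcement mechanism: even a malformed override cannot displace a locked value.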

Cognitive Ergonomics

  • Exploited existing mental models of photography (e.g. the “Viewfinder” metaphor) to minimise extraneous cognitive load and lower the barrier to entry for complex physical alignment tasks.
  • Engineered context-aware component specifications that dynamically adjust visual affordances—such as stroke weights and scale—based on proxemics and environmental constraints.
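The proxemic scaling modifier can be reduced to a small function: scale a base token value by viewing distance so the component holds roughly constant angular size. The reference distance, clamp bounds, and linear curve here are assumptions for illustration, not AHI's actual multipliers.

```typescript
// Hypothetical sketch of a distance-aware scaling modifier.
// Constants are illustrative, not production values.

const REFERENCE_DISTANCE_M = 0.3; // near-field FaceScan baseline (~30 cm)

function scaleFactor(distanceM: number): number {
  // Scaling linearly with distance keeps angular size roughly constant;
  // clamp to sane bounds so extreme inputs cannot break the layout.
  const raw = distanceM / REFERENCE_DISTANCE_M;
  return Math.min(Math.max(raw, 1), 12);
}

function scaledStrokeWidth(basePx: number, distanceM: number): number {
  return Math.round(basePx * scaleFactor(distanceM));
}
```

Under this sketch, a 2 px stroke legible at 30 cm becomes 20 px at the ~3 m BodyScan distance, preserving optical weight and contrast.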

Interaction & Feedback Loops

  • Designed continuous feedback loops utilising fluid state transitions to maintain system status visibility, bridging the Gulf of Evaluation and mitigating perceived latency during data capture.
  • Applied Gestalt principles (Law of Closure/Common Region) to create predictive interface elements that guide user behaviour and heighten perceived system reliability.
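The continuous feedback loop above is, in effect, a small state machine in which every event drives a visible state change, so system status is never ambiguous. The states and events below are invented for illustration; the real capture flow may differ.

```typescript
// Hypothetical sketch of the capture feedback loop as a state machine.
// State and event names are illustrative.

type CaptureState = "idle" | "aligning" | "countdown" | "capturing" | "complete";
type CaptureEvent =
  | "userInFrame" | "alignmentHeld" | "countdownFinished"
  | "frameCaptured" | "userLost";

const TRANSITIONS: Record<CaptureState, Partial<Record<CaptureEvent, CaptureState>>> = {
  idle:      { userInFrame: "aligning" },
  aligning:  { alignmentHeld: "countdown", userLost: "idle" },
  countdown: { countdownFinished: "capturing", userLost: "aligning" },
  capturing: { frameCaptured: "complete" },
  complete:  {},
};

function next(state: CaptureState, event: CaptureEvent): CaptureState {
  // Irrelevant events leave the state unchanged, so feedback stays
  // continuous rather than flickering through undefined states.
  return TRANSITIONS[state][event] ?? state;
}
```

Each state maps to a distinct visual treatment (frame colour, countdown ring, flash), which is what keeps the Gulf of Evaluation closed during capture.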