Voice-First AI Assistant

Abyss

The assistant that lives on your iPhone and reaches your Mac when the work gets real.

SwiftUI iPhone client
TypeScript WebSocket conductor
Permissioned macOS bridge
Cursor, Gmail, Calendar, Canvas

Why It Stands Out

Product polish up front, serious capability underneath

Abyss is designed to feel lightweight on the phone while still being able to escalate into real work across coding, productivity, and secure local execution.

Flagship Workflow

Coding from your phone, with your Mac on call

Abyss starts on the iPhone, then escalates into real development work through Cursor Cloud Agents, terminal execution, file operations, git workflows, and browser automation on a paired Mac. A sketch of that escalation event follows the list below.

  • Voice-first coding flows with typed fallback
  • Cursor agent orchestration and repo-connected work
  • Terminal, file, and git actions through the bridge
  • Optional Nova Act browser automation for deeper tasks
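To make the escalation concrete, here is a minimal TypeScript sketch of what a phone-initiated terminal command heading for the bridge could look like. The event name, fields, and workspace path are illustrative assumptions, not Abyss's actual protocol.

```typescript
// Hypothetical escalation event: the conductor relays this to the paired
// bridge, which enforces permissions before anything runs.
type BridgeCommand = {
  type: "bridge.terminal.run";     // assumed event name
  requestId: string;               // correlates the result card in the transcript
  cwd: string;                     // must resolve inside the paired workspace root
  command: string;
  requiresConfirmation: boolean;   // risky mutations wait for an explicit yes
};

function escalateToMac(socket: WebSocket, command: string): string {
  const event: BridgeCommand = {
    type: "bridge.terminal.run",
    requestId: crypto.randomUUID(),
    cwd: "/Users/me/Projects/demo", // illustrative workspace path
    command,                        // e.g. "git status"
    requiresConfirmation: true,
  };
  socket.send(JSON.stringify(event));
  return event.requestId;          // the UI can await a matching result event
}
```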

Personal Assistant

Real integrations, not a toy demo

The same assistant that can kick off coding work can also triage Gmail, manage Google Calendar, check Canvas, search the web, and keep the conversation readable with inline result cards. A sketch of the draft-confirmation flow follows the list below.

  • Editable Gmail draft confirmations before send
  • Calendar create, update, and delete flows
  • Canvas courses, assignments, grades, and announcements
  • Inline cards for email, calendar, canvas, bridge output, and agents
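As a rough illustration of the editable-draft pattern, the sketch below models a confirmation card whose send is deferred until the user approves. Every type and field name here is an assumption.

```typescript
// Hypothetical inline card: the draft sits in the transcript, editable,
// and nothing reaches Gmail until the card is explicitly confirmed.
type EmailDraftCard = {
  type: "card.email.draft";
  draftId: string;
  to: string[];
  subject: string;
  body: string;
  status: "awaiting_confirmation" | "sent" | "cancelled";
};

function confirmDraft(
  card: EmailDraftCard,
  edits: Partial<Pick<EmailDraftCard, "subject" | "body">>,
): EmailDraftCard {
  // Apply any last-minute edits the user made on the card, then mark it
  // confirmed; the server performs the actual send only after this state.
  return { ...card, ...edits, status: "sent" };
}
```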

Security

Permissioned local execution by design

Abyss does not pretend trust is free. Risky local actions live behind a separately paired macOS bridge with workspace boundaries, capability toggles, and explicit confirmation patterns for sensitive mutations; the sketch after this list shows what those gates could look like.

  • Workspace-root restrictions for local file access
  • Independent permissions for shell, write, git push, and browser automation
  • Confirmation cards before email and other risky actions finalize
  • Safer than giving an agent unrestricted machine access
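A minimal sketch of what those two gates could look like, assuming a per-capability toggle table and a resolved-path containment check; the capability names mirror the list above, but the implementation is illustrative.

```typescript
import * as path from "node:path";

// Illustrative capability set; each toggle can be granted independently.
type Capability = "shell" | "fileWrite" | "gitPush" | "browserAutomation";

const enabled: Record<Capability, boolean> = {
  shell: true,
  fileWrite: true,
  gitPush: false,          // pushes stay off until explicitly enabled
  browserAutomation: false,
};

function isAllowed(capability: Capability, workspaceRoot: string, target: string): boolean {
  if (!enabled[capability]) return false;
  // Resolve the target against an already-resolved workspace root and require
  // it to stay inside, which blocks "../" escapes and stray absolute paths.
  const resolved = path.resolve(workspaceRoot, target);
  return resolved === workspaceRoot || resolved.startsWith(workspaceRoot + path.sep);
}
```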

Continuity

Built to remember context and stay usable

Abyss supports multi-chat sessions, inline transcript cards, conversation summaries, user preferences, and optional long-term memory infrastructure so the assistant can pick up real threads over time. A summarization sketch follows the list below.

  • Multi-chat support with voice, push-to-talk, and text
  • Auto-generated chat titles and transcript-friendly cards
  • Context summarization for longer sessions
  • Optional Neptune Analytics plus Titan Embeddings context-graph retrieval
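One plausible shape for the summarization step, sketched below: once the transcript outgrows a token budget, older turns fold into a running summary. The budget and the four-characters-per-token estimate are placeholder assumptions.

```typescript
type Turn = { role: "user" | "assistant"; text: string };

const TOKEN_BUDGET = 6000;                                     // assumed budget
const estimateTokens = (t: string) => Math.ceil(t.length / 4); // rough heuristic

function compactIfNeeded(
  summary: string,
  turns: Turn[],
  summarize: (prior: string, old: Turn[]) => string,
) {
  const used =
    estimateTokens(summary) +
    turns.reduce((n, t) => n + estimateTokens(t.text), 0);
  if (used <= TOKEN_BUDGET) return { summary, turns };
  // Fold the oldest half of the conversation into the summary and keep
  // the recent half verbatim so the live thread stays precise.
  const cut = Math.floor(turns.length / 2);
  return { summary: summarize(summary, turns.slice(0, cut)), turns: turns.slice(cut) };
}
```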

Judge Demo Story

Start with voice. Escalate into real tools. Keep the trust boundary visible.

The strongest Abyss demo path is simple: speak to the iPhone, trigger a useful assistant task, then step up into a permissioned coding workflow on the paired Mac. The product story is not just “voice chat”; it is voice as the front door to secure execution.

Voice, push-to-talk, and typed interaction patterns
Inline transcript cards that keep tool output readable
Non-blocking confirmation flows for sensitive actions
Multi-chat sessions with summaries and generated titles

Architecture

Built like a product, explained like a system

The architecture matters because the product promise depends on it: natural voice on the phone, strong tool orchestration on the server, and privileged local actions behind an explicit bridge.

System architecture

[Diagram: Abyss system architecture]

The product is split across an iPhone-native client, a TypeScript conductor, and a paired macOS bridge so voice stays natural while privileged local execution stays gated.

Core data flow

[Diagram: Abyss core data flow]

Speech or text becomes structured events over WebSocket, tools run in the right place, and every result flows back into the transcript as readable inline cards and assistant messages.
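A hedged sketch of what a strict event envelope could look like as a TypeScript discriminated union; the concrete event names are assumptions inferred from the features described above, not the real wire format.

```typescript
// Every message over the WebSocket carries a "type" tag, so handlers can
// narrow it exhaustively instead of sniffing loose JSON.
type Envelope =
  | { type: "user.text"; chatId: string; text: string }
  | { type: "user.transcript"; chatId: string; text: string; final: boolean }
  | { type: "assistant.message"; chatId: string; text: string }
  | { type: "card.result"; chatId: string; card: unknown } // email, calendar, bridge, agents
  | { type: "bridge.result"; requestId: string; stdout: string; exitCode: number };

function handle(raw: string): string {
  const event = JSON.parse(raw) as Envelope;
  switch (event.type) {
    case "user.transcript":
      // Partial transcripts update the live caption; final ones become a turn.
      return event.final ? "commit turn" : "update caption";
    case "bridge.result":
      return `render bridge card (exit ${event.exitCode})`;
    default:
      return "render inline";
  }
}
```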

Infrastructure and deployment

[Diagram: Abyss infrastructure]

The stack is production-minded end to end: AWS Bedrock for models, ECS Fargate for the server, optional memory infrastructure, and Apple-native clients on the front line.
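For a sense of what that deployment could look like, here is a hedged AWS CDK sketch of the conductor running on ECS Fargate behind a load balancer (which supports WebSockets); the stack name, sizing, port, and image path are all illustrative, not the project's actual infrastructure.

```typescript
import * as cdk from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";

const app = new cdk.App();
const stack = new cdk.Stack(app, "AbyssConductorStack"); // hypothetical stack name

// One load-balanced Fargate service for the TypeScript conductor; the
// construct provisions a default VPC and cluster when none are supplied.
new ecsPatterns.ApplicationLoadBalancedFargateService(stack, "Conductor", {
  cpu: 512,
  memoryLimitMiB: 1024,
  desiredCount: 1,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry("example/abyss-conductor"), // placeholder image
    containerPort: 8080, // assumed WebSocket port
  },
});

app.synth();
```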

Tech and Connections

Enough technical depth to be credible, without losing the product story

Abyss combines Apple-native UX, a TypeScript orchestration layer, Amazon Bedrock model routing, a permissioned Mac bridge, and a growing set of integrations that make the assistant feel useful across both development and day-to-day work.

Cursor Cloud Agents · GitHub-connected developer flows · Gmail · Google Calendar · Canvas · Brave Search · Amazon Bedrock · Nova Sonic · Neptune Analytics · Titan Embeddings

Apple-native front end

  • SwiftUI iPhone app
  • AVFoundation audio handling
  • WhisperKit transcription paths
  • ElevenLabs text-to-speech with fallback voice behavior
  • URLSession WebSockets and secure local storage

Conductor and model layer

  • Node.js 20+ and TypeScript
  • WebSocket orchestration with strict event envelopes
  • Amazon Bedrock routing across Nova Lite and Nova Pro (sketched below)
  • Nova Sonic voice support
  • Server-side integrations and tool dispatch
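A minimal routing sketch using the Bedrock Converse API: the needsDepth heuristic is an assumption about how the conductor might pick a model, while the model IDs and client calls follow standard @aws-sdk/client-bedrock-runtime usage.

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

async function ask(prompt: string, needsDepth: boolean): Promise<string | undefined> {
  // Cheap, fast model for light turns; the stronger model for deep work.
  const modelId = needsDepth ? "amazon.nova-pro-v1:0" : "amazon.nova-lite-v1:0";
  const response = await client.send(new ConverseCommand({
    modelId,
    messages: [{ role: "user", content: [{ text: prompt }] }],
  }));
  return response.output?.message?.content?.[0]?.text;
}
```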

Bridge and memory stack

  • Paired macOS bridge in Swift and SwiftUI
  • Granular capability permissions and workspace constraints
  • Optional Nova Act browser automation
  • Optional summarization, S3 memory, Neptune Analytics, and Titan Embeddings
  • Shared Swift and TypeScript protocol libraries

The Pitch

“Abyss makes the phone the primary interface, keeps local execution permissioned, and turns voice into a serious surface for coding and everyday work.”

This is the north star: the most capable, trusted, and personally useful assistant in your pocket, with the architecture to back up the claim.