Arkem — Reimagining News

At a glance

Arkem is a mobile-first news and analysis app built for people who want to understand what’s happening—without losing trust in the information.
Instead of giving you one “take,” Arkem helps you see the same event through multiple viewpoints, and makes it clear why a story says what it says.

Role: Product / App Developer
Tools: React + TypeScript, Supabase, SQL, Figma (rapid prototyping)
Timeline: Aug 2025 – Jan 2026

Problem

Modern news apps are optimized for volume, not understanding.

  • Headlines arrive faster than context.
  • Stories feel emotionally “loud,” but cognitively thin.
  • Even when sources exist, the reader has to do the work of reconciliation.

“I don’t need more news. I need someone to say: here’s what actually changed and who should care.”

At the same time, AI can summarize anything—but often hides its assumptions, cherry-picks sources, or feels like a generic chatbot bolted onto the news.
Arkem is my attempt to design a calmer, more honest news experience: one that is rigorously governed and combines structured storytelling with transparent AI analysis.

Product Vision

Arkem’s core promise is simple:

  1. Every story is structured (not just summarized).
  2. Every claim is anchored (sources are first-class).
  3. Every output has provenance (how it was made, confidence signals, and traceable AI runs).
  4. Every story is navigable across dimensions (category, topic, entity, region, tracking).

Audience:

  • News-engaged readers who want context and actions, not just headlines
  • Early adopters comfortable with AI, but skeptical of “black box” answers

Constraints:

  • Designed mockups in Figma, built the prototype in React / TypeScript, and used Figma Make for rapid iteration
  • Supabase as backend for stories, topics, entities, watchlist, notes, and AI provenance
  • Mobile-first, WCAG 2.2 AA, 44pt tap targets, reduced-motion handling

What I Set Out To Build

Design and ship an MVP that lets users:

  1. Scan a calm feed of stories organized by what changed and why it matters.
  2. Dive deep into a single story: timeline, who’s affected, open questions, and perspectives, all analyzed and compiled by an AI model under responsible governance.
  3. Ask AI follow-up questions with transparent sources and confidence.
  4. Track what matters (topics, entities, stories, keywords) over time.

The MVP experience

Intro—First-time user to first meaningful win

Goal: A new user should understand what Arkem is for and get a first “this is different” moment in under two minutes.

Feed—facets without overwhelming users

Calm, structured story cards with confidence dots, category & region tags. The Feed supports two layers:

  • Quick filters: All / Local / Tracking
  • FacetTray: Category (single), Regions (multi), Topics (multi), Entities (multi)

The result: fast scanning, but also “precision mode” when you want it.
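A minimal sketch of how the quick filters and FacetTray selections could combine into a single predicate over story cards. The field names (`category`, `regions`, `isTracked`, etc.) are illustrative assumptions, not Arkem’s actual schema:

```typescript
// Quick filters are mutually exclusive; facets narrow within them.
type QuickFilter = "all" | "local" | "tracking";

interface FeedFacets {
  category?: string;   // single-select
  regions: string[];   // multi-select
  topics: string[];
  entities: string[];
}

interface StoryCard {
  id: string;
  category: string;
  regions: string[];
  topics: string[];
  entities: string[];
  isLocal: boolean;
  isTracked: boolean;
}

function matchesFacets(story: StoryCard, quick: QuickFilter, facets: FeedFacets): boolean {
  if (quick === "local" && !story.isLocal) return false;
  if (quick === "tracking" && !story.isTracked) return false;
  if (facets.category && story.category !== facets.category) return false;
  // Multi-selects are OR within a facet, AND across facets.
  const anyOverlap = (selected: string[], present: string[]) =>
    selected.length === 0 || selected.some((s) => present.includes(s));
  return (
    anyOverlap(facets.regions, story.regions) &&
    anyOverlap(facets.topics, story.topics) &&
    anyOverlap(facets.entities, story.entities)
  );
}
```

The OR-within / AND-across rule is one reasonable reading of “precision mode”: selecting several topics widens the net, while adding a region tightens it.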

Ask Arkem—a story-aware AI assistant

Ask Arkem is not “chat for the sake of chat.” It’s built for:

  • asking about a specific story (“Ask Arkem about this story”)
  • getting answers with suggested follow-ups and save/copy/share actions
  • saving answers as notes into a Notes Library
  • tracing provenance via stored AI runs

Under the hood, every AI call is recorded (ai_runs) with:

  • Model, version, prompt, snapshot of the answer.
  • Metrics JSON (latency, confidence indicators, safety flags).
  • Sources JSON.
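The list above can be sketched as a record shape. Column names beyond those explicitly mentioned (model, version, prompt, answer snapshot, metrics JSON, sources JSON) are assumptions:

```typescript
// Hypothetical shape of one ai_runs row; field names are illustrative.
interface AiRun {
  id: string;
  model: string;
  modelVersion: string;
  prompt: string;
  answerSnapshot: string;
  metrics: {
    latencyMs: number;
    confidence?: number;     // e.g. the 0–1 signal behind the confidence dot
    safetyFlags: string[];
  };
  sources: { url: string; title: string }[];
  createdAt: string;         // ISO timestamp
}

// A compact line the Provenance Viewer could surface per answer.
function provenanceSummary(run: AiRun): string {
  return `${run.model}@${run.modelVersion} · ${run.sources.length} sources · ${run.metrics.latencyMs}ms`;
}
```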

Library—one place for tracking and saving

The Library tab brings together long-term engagement:

  • Tracked items (topics/entities/keywords/stories)
  • Saved stories
  • Saved notes

Intent: Capture the mental model of “things I’m keeping an eye on” without overwhelming users with complex folder/tag systems.

I used consistent patterns (cards, chips, View buttons) to make cross-surface navigation predictable.

Key decisions that shaped the MVP

Stories are “documents,” not posts

Instead of infinite article text, each story has consistent sections:

What changed → Why it matters → Who’s affected → Open questions → Timeline → Perspectives → Sources → Provenance.

This directly addressed frequent user concerns like “I just want someone to tell me what actually changed and who should care.”

This isn’t just UI — it’s a contract between the database, the renderer, and the AI story writer: all three must agree on the same set of sections.
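A minimal sketch of what that contract could look like as a shared type. Section names follow the structure above; the exact field names are assumptions:

```typescript
// Hypothetical shared "story contract" type; if the database, the
// renderer, and the AI writer all accept this shape, a missing section
// surfaces as a type error rather than a blank screen.
interface StoryDocument {
  whatChanged: string;       // one-line summary, Markdown
  whyItMatters: string;
  whosAffected: string;
  openQuestions: string[];
  timeline: { date: string; event: string }[];
  perspectives: { group: string; stance: string; actions: string[] }[];
  sources: { url: string; publisher: string }[];
  provenanceRunId: string;   // points at the ai_runs entry
}

// Runtime guard for data arriving from outside the type system.
function missingSections(story: StoryDocument): string[] {
  const missing: string[] = [];
  if (!story.whatChanged) missing.push("whatChanged");
  if (!story.whyItMatters) missing.push("whyItMatters");
  if (story.timeline.length === 0) missing.push("timeline");
  if (story.sources.length === 0) missing.push("sources");
  return missing;
}
```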

[Image gallery — story view walkthrough: a clear chain of events; quick scan of who’s affected and how; multiple, expandable perspectives from different groups; how each group should respond; transparency about any sources used; see exactly what AI did and how.]

Accessibility & Resilience—for users and developers

I defined a design system and component library in CSS and React:

  • Tokens mapped in /styles/globals.css:
    • Color roles (primary, surface, text, etc.).
    • Type ramp.
    • Radii, spacing, elevation, glass effects.
  • Atoms: Button, Input, Chip, Badge, Switch, etc.
  • Molecules: StoryCard, EntityImpactGrid, ActionPanel, AIResponseBlock, modals.
  • Screens: Feed, Search, Ask Arkem, Library, Settings, hubs, auth.

This allowed me to:

  • Iterate quickly in React while keeping visual consistency.
  • Make structural changes (like rethinking the story layout) without rewriting everything.

From non-dimming modals to 44pt tap targets and reduced-motion support, every major pattern was checked against WCAG 2.2 AA and mobile accessibility considerations.

Tracked items can be canonical or personalized

Users can track:

  • canonical topics
  • canonical entities
  • free-text keywords
  • specific stories

…but only topics/entities are curated taxonomies. Keywords remain user-owned phrases.
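The canonical-versus-personalized split can be sketched as a discriminated union; the variant and field names are illustrative assumptions:

```typescript
// Canonical topics/entities reference curated taxonomy rows by id;
// keywords stay as user-owned free text with no taxonomy row at all.
type TrackedItem =
  | { kind: "topic"; topicId: string }     // canonical
  | { kind: "entity"; entityId: string }   // canonical
  | { kind: "story"; storyId: string }
  | { kind: "keyword"; phrase: string };   // user-owned phrase

function isCanonical(item: TrackedItem): boolean {
  return item.kind === "topic" || item.kind === "entity";
}
```

Keeping the variants in one union lets the Library render all tracked items in a single list while still treating taxonomy-backed items differently (e.g. linking them to a TopicHub or EntityHub).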

Provenance as a First-Class Surface

Every story and major Ask Arkem answer is backed by an ai_runs entry: prompt, model, metrics, and sources.

Provenance Viewer makes this visible for users who want to inspect “how this was made,” instead of hiding AI behind a generic label.

Use Markdown where humans write, use tables where systems relate

I resisted over-modeling every paragraph.

  • Narrative sections live as MD strings (easy to author, consistent rendering).
  • Relationships live in normalized tables (topics, entities, regions, sources, events).

That split kept the system: (a) structured enough for queries and facets, (b) flexible enough for real stories.

Engineering architecture

Why Supabase

Supabase gave me:

  • Postgres schema that matches the story contract
  • Auth (OTP) for gated actions like feedback and watchlist
  • RLS policies for user-owned tables
  • a fast iteration loop without reinventing infra

Adapter pattern for resilience

A theme in the build: never let the UI collapse just because data is imperfect.

Adapters:

  • try Supabase
  • fall back gracefully (localStorage / sample data)
  • emit telemetry for errors (so failures are visible)

This matters because the MVP was prototyped inside Figma Make, where debugging UX is constrained.
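The adapter steps above can be sketched as a small generic helper. The function and event names are illustrative stand-ins, not Arkem’s actual API:

```typescript
// Try the primary source, fall back gracefully, and always emit
// telemetry on failure so errors stay visible instead of silent.
type Telemetry = (event: string, detail: string) => void;

async function loadWithFallback<T>(
  primary: () => Promise<T>,   // e.g. a Supabase query
  fallback: () => T,           // e.g. localStorage or sample data
  telemetry: Telemetry
): Promise<T> {
  try {
    return await primary();
  } catch (err) {
    telemetry("adapter.fallback", String(err));
    return fallback();
  }
}
```

Because the UI only ever sees a resolved `T`, a flaky backend degrades to cached or sample content rather than a collapsed screen.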

Data model decisions

A few “spine” choices:

  • stories contains narrative MD fields + cover image metadata
  • join tables connect stories to canonical dimensions:
    • story_topics
    • story_regions
    • story_entities
    • story_sources
  • story_events powers the Timeline
  • perspectives stores structured perspective blocks (group + stance + actions)
  • ai_runs stores provenance snapshots for story generation and Ask Arkem

I also built a seeding function (fn_seed_story(payload jsonb)) so a story writer (human or AI) can insert complete stories consistently.
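A hedged sketch of assembling the `jsonb` payload for `fn_seed_story`. The payload shape here is an assumption based on the tables listed above; the real contract lives in the authoring documentation:

```typescript
// Hypothetical payload shape for fn_seed_story(payload jsonb).
interface SeedPayload {
  story: { title: string; what_changed_md: string; why_it_matters_md: string };
  topics: string[];
  regions: string[];
  entities: string[];
  sources: { url: string; publisher: string }[];
  events: { date: string; event: string }[];
}

// A story writer (human or AI) builds one payload per story, so every
// insert goes through the same validated path.
function buildSeedPayload(title: string, whatChanged: string, whyItMatters: string): SeedPayload {
  return {
    story: { title, what_changed_md: whatChanged, why_it_matters_md: whyItMatters },
    topics: [],
    regions: [],
    entities: [],
    sources: [],
    events: [],
  };
}

// With supabase-js, the function would then be invoked as an RPC call:
// await supabase.rpc("fn_seed_story", { payload: buildSeedPayload(...) });
```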

AI UX + governance (L²)

Arkem is built on my L² framework (LENS + LEVER): a “see + move” system for guiding AI behavior.

In Arkem, L² shows up as product principles:

  • sources-first (citations are not optional)
  • perspective-aware (who is affected, what actions follow)
  • confidence as a signal (not a vibe)
  • provenance by default (how it was made is inspectable)

Importantly: I treated “AI output quality” as a design problem, not just a prompt problem.

User Research

My research blended:

  • Early interviews about how people follow news today, and how they use mobile apps for news updates.
  • Quick prototype walkthroughs of Arkem (remote and in-person).
  • Continuous observation of where people hesitated in the flows.

Concept interviews — “How do you follow news now?”

Top pains:

  • Firehose, no hierarchy: “I just scroll headlines forever. It’s hard to tell what actually matters.”
  • No sense of change over time: “Two weeks later I have no idea where things landed.”
  • Fragmented context: “I end up with 10 tabs open and still don’t know what to do with the information.”

Implications for the design:

  • Every story should have one clear one-liner: what changed.
  • The story view needs an explicit Timeline and Open questions section.
  • There should be space for actions and perspectives, not just facts.

Prototype walkthroughs — First sessions with Arkem

I walked people through early prototypes on mobile.

Delights:

  • Story structure. “This is nice — it literally spells out ‘What changed’ and ‘Why it matters’ instead of burying it.”
  • Who’s affected + actions. “This part about who’s affected makes me think about people I wouldn’t usually.” “These Do/Don’t bullets feel like a mini playbook.”
  • Ask Arkem as follow-up. “Being able to ask questions about this story feels more useful than a generic chatbot.”

Friction:

  • Facets vs Search vs Ask. People needed a bit of guidance on when to filter, when to search, and when to ask.
  • Library mental model. “What’s the difference between tracking a keyword here and just saving the story?”
  • Confidence dot. The confidence indicator was clear visually, but needed a legend or hint.

I used these reactions to prioritize clarity in a few key places: the story layout, the Ask Arkem entry points, and the Library naming and IA.

Outcomes & What’s Next

What I shipped:

  • Working, mobile-first MVP containing:
    • Feed with quick filters + modal facets
    • Search with Stories/Topics/Entities modes
    • TopicHub + EntityHub navigation
    • StoryDetail with structured sections, MD rendering, and provenance entry point
    • Ask Arkem: story-aware chat + history + save-to-notes
    • Library: tracked items + saved stories + saved notes
  • Supabase auth (OTP) + RLS policies
  • Telemetry hooks for debugging and iteration
  • Full documentation set (README, design-system, ai-ux, data-model, authoring contract).
  • A reusable L² story schema (Markdown + JSON) that’s now used to seed and render stories.

What I’m proud of:

  • The story contract (DB ↔ renderer ↔ AI writer) is the core innovation.
  • The product treats AI like a governed collaborator, not a magic box.

Next steps:

  • Continue evolving the L² framework using live usage data.
  • Expand story volume and run further user tests on Library and Ask Arkem.
  • Experiment with digest emails and notifications based on tracked items.