Time | Entities | Headline | Source

Anthropic Cowork plugins disrupt vertical software as OpenAI–Anthropic IPO race accelerates; ServiceNow crosses $10B with OpenAI deal

  • Anthropic launches Cowork domain plugins for legal, finance, and sales — RELX and Wolters Kluwer plunge 10%+, signaling platform AI entering vertical enterprise workflows.
  • OpenAI and Anthropic both accelerate IPO preparations at $500B and $183B valuations respectively; Anthropic targets $18B 2026 revenue (4× YoY).
  • ServiceNow crosses $10B annual revenue, signs 3-year OpenAI deal integrating GPT-5.2 for agentic AI, and acquires Armis for $7.75B. Accenture completes Faculty acquisition (~$1B, 400 AI specialists).
  • China AI wave intensifies with DeepSeek V4, ByteDance Doubao 2.0, and Moonshot K2.5 (1T params) all targeting February launches. OpenAI retires GPT-4o Feb 13, consolidating around GPT-5.2.

Strategic Themes

Select a theme to view synthesis, source articles, and trend analysis.

Recent Alerts


Current Assessment

Key Developments

Pattern Analysis

Strategic Implications

Watch Items

Source Coverage

Date | Source | Article | Signal Score

Explore Other Themes

High-Impact Signals

Anthropic Cowork plugins trigger market shock — legal and data software stocks plunge 10%+ as platform AI enters vertical enterprise workflows.

OpenAI and Anthropic both accelerating IPO preparations with $500B and $183B valuations respectively.

ServiceNow-OpenAI 3-year strategic deal for GPT-5.2 agentic AI. China AI race intensifies with ByteDance, Alibaba, DeepSeek all preparing February model launches.

OpenAI retires GPT-4o on Feb 13, consolidating around GPT-5.2.

TL;DR

High-Priority Signals

Signal score ≥30 from the past 48 hours.

Notable Coverage

Signal score 20-29 from the past 48 hours.

Condensed Overview

How It Works

1. Curation — News content is manually gathered from tiered sources (company newsrooms, major publications, trade press, research). Each source is assigned a tier based on authority and signal quality. Content is reviewed for relevance to the coverage universe.

2. Classification — Each article is classified with: primary theme, secondary themes, entities, confidence (from source tier), impact, irreversibility, signal type, and a summary. A composite signal score is calculated as Confidence × (Impact + Irreversibility).

3. Synthesis — Classified signals are aggregated by theme and synthesized during the refresh cycle: 24-48 hour digest (The Digest), 10-15 day rolling executive briefing (The Assessment), and per-theme source coverage with key developments, pattern analysis, strategic implications, and watch items.

4. Presentation — Signals scoring ≥30 (high-priority threshold) are highlighted in The Digest and added to ALERTS[]. Critical signals (40-50) are featured in TL;DR and executive summaries. Notable coverage (20-29) is shown separately. All signals are ranked by temporal decay-adjusted score for freshness.

Source Tiers

Tier 1: Primary sources — company newsrooms, official blogs, filings
Tier 2: Major analysis — Reuters, Bloomberg, WSJ, FT
Tier 3: Trade press — CRN, Channel Futures, SiliconANGLE
Tier 4: Research — Gartner, IDC, Forrester (public content)

Theme Taxonomy

Content is classified into eight strategic themes: AI Infrastructure & Partnerships, Hyperscaler Dynamics, GSI Ecosystem Movements, Silicon Roadmap & Supply, Competitive Alliance Activity, Channel & Route-to-Market, Regulatory & Policy, and Market Signals.

Coverage Universe

The system monitors 69 entities across 10 industry segments: AI-Native (9), Hyperscalers (5), GSIs (11), Silicon (11), Custom Silicon (4), Foundry & Memory (3), OEM (4), Sovereign (2), Infrastructure (5), Enterprise Software (14).

Signal Types

Announcement — Official first-party announcements
Analysis — Third-party interpretation and context
Speculation — Predictions, rumors, unconfirmed reports
Background — Foundational context and market sizing

Signal Scoring Model

Each signal receives a composite score based on three dimensions:

Confidence (2-5): Source credibility — derived from source tier (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2)
Impact (1-5): Magnitude of ecosystem effect
Irreversibility (1-5): Duration and permanence of the signal

Composite Score = Confidence × (Impact + Irreversibility)
Range: 4–50 in practice (Confidence is never below 2; see Source Tiers). Scores ≥30 trigger high-priority alerts.
Temporal Decay: Score_adj = Score × e^(-λ × age_days). Default λ = 0.03 (~50% weight after 23 days). Used for ranking; raw scores are preserved.

Current Implementation

  • Vanilla ES5 single-page application with AWS serverless backend.
  • Passwordless authentication via 45-day persistent magic link tokens (type: invite) stored in DynamoDB (EcosystemEdgeMagicLinks), with login activity tracking (first_login_at, last_login_at, login_count).
  • Access gated by auth/gate.js; unauthenticated users redirected to splash.html.
  • Manual curation workflow with 12-stage refresh protocol.
  • Data loaded asynchronously from JSON files via loader.js (9 core files + per-entity details); client-side rendering with Chart.js for visualization.
  • Temporal decay scoring for signal freshness.
  • Schema validation and threshold sensitivity analysis run on page load (console-logged, non-gating).
  • Monte Carlo simulation (5,000 iterations) based on real sample data for historical trend estimation, with convergence diagnostics (2-chain median comparison) for simulation quality.
  • Strategic Intelligence Briefs loaded from data/dossiers/ JSON files, supporting live (full analytical content) and planned (stub outline) render modes.
  • Outbound email via SendGrid for transactional invite and beta sequence delivery.
  • Platform config centralized in config/platform.json (coverage stats, signal thresholds, segment maps, data manifest); content manifest in config/reports.json (18 reports + 2 tools).

Target Technical Stack (Planned — Automated Pipeline)

Python ingestion scripts · SQLite storage · Claude API for classification (Haiku) and synthesis (Sonnet) · GitHub Actions scheduling · Jinja2 templating · Automated alerting. Current infrastructure (AWS Cognito, Lambda, DynamoDB, SendGrid, API Gateway) would persist; the pipeline automates content curation only.

Full Methodology

Complete documentation of the strategic theme analysis system architecture, implementation, and operation.

Illustrative Example

This example demonstrates the classification and scoring methodology applied to a real signal. Classification and scoring are performed manually during the 12-stage refresh protocol (see Refresh Protocol chapter).

Source Articles Identified

News from defined sources is reviewed for relevance to the coverage universe:

NVIDIA Newsroom — NVIDIA and Accenture Expand Partnership to Accelerate Enterprise AI Adoption
Reuters — Accenture to train 30,000 consultants on NVIDIA AI Enterprise platform
CRN — Channel partners eye new opportunities as NVIDIA-Accenture deal deepens
Bloomberg — AI consulting race heats up as Accenture doubles down on NVIDIA

Classification Output

Each article is classified using the scoring model:

Article: "NVIDIA and Accenture Expand Partnership..."
Source: NVIDIA Newsroom (Tier 1)
Primary Theme: GSI Ecosystem Movements
Secondary Themes: AI Infrastructure & Partnerships
Entities: NVIDIA, Accenture, AI Enterprise, DGX Cloud
Confidence: 5 (Tier 1 source)
Impact: 5 (reshapes ecosystem)
Irreversibility: 4 (multi-year commitment)
Signal Type: Announcement
Signal Score: 45 [5 × (5 + 4)]
Summary: NVIDIA and Accenture expanding partnership to deploy AI solutions across enterprise clients, including consultant training and joint go-to-market.

Theme Synthesis Generated

During the refresh cycle, all "GSI Ecosystem Movements" articles are synthesized into key developments, pattern analysis, strategic implications, and watch items (see Synthesis chapter).

Alert Triggered

Because the NVIDIA-Accenture article scored 45 (≥30 threshold), it is flagged as a high-priority alert in The Digest and added to the ALERTS[] array during the refresh cycle.

Architecture Overview

Current Implementation

The platform operates as a self-contained analytical application with manual curation workflow, client-side computation, and integrated data validation:

┌──────────────────────────────────────────────────────────────────────
│ MANUAL CURATION (12-Stage Refresh Protocol)
│ News Monitoring │ Source Review │ Relevance Assessment
│ Entity Activation (L1/L2/L3) │ Content Gates │ Progress Markers
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ CLASSIFICATION & SCORING
│ Theme Assignment │ Entity Tagging │ Impact Scoring │ Summary
│ Temporal Decay: Score_adj = Score × e^(-λ × age_days)
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ JSON DATA LAYER (loader.js 4-phase pipeline)
│ SIGNALS[] │ THEMES[] │ ALERTS[] │ COMPANIES[] │ TRENDS{}
│ THEME_SYNTHESES{} │ COVERAGE_STATS{} │ COMPANY_DATA{}
│ COMPANY_PROFILES{} │ DOSSIERS{} │ REPORT_MANIFEST[]
│ config/platform.json │ config/reports.json │ data/entities/*.json
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ ANALYTICAL ENGINE (Client-Side)
│ Temporal Decay Ranking │ Monte Carlo Simulation (5K iterations)
│ Convergence Diagnostics │ Heavy Cycle Detection │ Event Impulse
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ VALIDATION & INTEGRITY
│ Schema Validation │ Range Checks │ Cross-Reference Integrity
│ Sensitivity Analysis (±5 offsets) │ Methodology Staleness Audit
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ CLIENT-SIDE RENDERING
│ Dynamic HTML Generation │ Chart.js Visualization │ Filter/Sort
│ Interactive Navigation │ Dynamic Stat Derivation
└──────────────────────────────────────────────────────────────────────

Key Architectural Properties

Property | Implementation
SPA + Serverless | Vanilla ES5 single-page application with AWS backend (Cognito, Lambda, DynamoDB, API Gateway). Passwordless auth via magic link tokens. Access gated by auth/gate.js.
Data-driven | All content loaded from JSON files via async loader.js pipeline. Views are projections of data, not hardcoded HTML.
Analytically active | Temporal decay, Monte Carlo simulation, convergence diagnostics, and sensitivity analysis run client-side on page load.
Schema-validated | validateData() checks structure, ranges, and cross-references on page load. Results logged to console; rendering is not gated on validation pass.
Methodology-aware | METHODOLOGY_REGISTRY tracks documentation freshness. Dynamic stats derived from live data arrays.

Target Architecture (Automated Pipeline)

The target architecture replaces manual curation with an automated ingestion and classification pipeline. The analytical engine and rendering layers would carry forward largely unchanged.

┌──────────────────────────────────────────────────────────────────────
│ SOURCE LAYER
│ RSS Feeds │ News APIs │ Press Releases │ Filings │ Blogs
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ INGESTION PIPELINE
│ Fetch → Dedupe → Parse → Normalize → Store
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ CLASSIFICATION ENGINE
│ Theme Assignment │ Entity Extraction │ Sentiment │ Relevance
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ SYNTHESIS LAYER
│ Per-Theme Aggregation │ Cross-Source Comparison │ Trend Detection
└───────────────────────────────────┬──────────────────────────────────
                                    ▼
┌──────────────────────────────────────────────────────────────────────
│ OUTPUT LAYER
│ Static Site │ Executive Briefings │ Alerts │ Trend Reports
└──────────────────────────────────────────────────────────────────────

Source Definition Framework

Source Tiers

Not all sources carry equal weight. Tiering helps prioritize signal over noise:

Tier | Description | Examples | Confidence
1 | Primary / First-party | Company newsrooms, SEC filings, official blogs, PRNewswire/BusinessWire (official releases) | 5
2 | Major Analysis | Reuters, Bloomberg, WSJ, FT | 4
3 | Trade Press | CRN, Channel Futures, SiliconANGLE, The Information | 3
4 | Research (Public) | Gartner, IDC, Forrester snippets | 2

Note: Wire services (PRNewswire, BusinessWire, GlobeNewswire) are classified as Tier 1 when distributing official company announcements. Confidence score range is 2-5; a score of 1 is not used in practice.

Recommended Sources by Category

Silicon Partners: NVIDIA Newsroom, Intel Newsroom, AMD News, Qualcomm News, Arm Newsroom

Hyperscalers: AWS News Blog, Google Cloud Blog, Microsoft Azure Blog

GSI / Consulting: Accenture Newsroom, Deloitte Press, IBM Newsroom, Infosys News

Competitive Set: Dell Technologies Newsroom, HPE Newsroom

General Tech/Business: Reuters Technology, Bloomberg Technology, WSJ Tech, FT Technology

Current Source Management

Sources are tracked as a narrative list within this chapter. No structured SOURCES[] array exists in the current implementation. The footer references "54 news sources," which requires manual verification when sources are added or removed during refreshes.

Target Source Configuration Schema

The automated pipeline will store sources as structured records:

{
  "source_id": "nvidia_newsroom",
  "source_name": "NVIDIA Newsroom",
  "url": "https://nvidianews.nvidia.com/rss",
  "tier": 1,
  "category": "silicon",
  "refresh_interval_minutes": 60,
  "relevance_keywords": ["partnership", "alliance", "collaboration"],
  "enabled": true
}
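A minimal sketch of how the planned Python pipeline might load and validate these records. `SourceConfig`, `load_sources`, and the tier check are illustrative assumptions, not part of the current implementation.

```python
import json
from dataclasses import dataclass

@dataclass
class SourceConfig:
    """One source record, mirroring the schema fields above."""
    source_id: str
    source_name: str
    url: str
    tier: int                      # 1 (primary) through 4 (research)
    category: str
    refresh_interval_minutes: int
    relevance_keywords: list
    enabled: bool

    def __post_init__(self):
        # Guard against tiers outside the 1-4 range defined by the framework.
        if not 1 <= self.tier <= 4:
            raise ValueError(f"tier must be 1-4, got {self.tier}")

def load_sources(raw_json):
    """Parse a JSON array of source records, keeping only enabled sources."""
    return [SourceConfig(**rec) for rec in json.loads(raw_json) if rec.get("enabled")]
```

Dropping disabled sources at load time, rather than filtering later, means downstream stages only ever see active feeds.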

Coverage Universe

The monitoring system tracks 69 entities across 10 industry segments. Signals mentioning these entities receive priority classification and scoring.

Hyperscalers

Company | Notes
Google (incl. Gemini) | Cloud + TPU-based AI infra, foundation models, enterprise AI
Meta | Social + consumer AI, custom MTIA inference silicon
Microsoft (Azure) | Cloud + Copilot, Maia/Cobalt custom AI silicon
Amazon / AWS | Cloud + Bedrock, Trainium/Inferentia AI accelerators
Oracle (OCI) | Enterprise-focused AI cloud, database-led workloads

Global System Integrators (GSIs)

Company | Notes
Accenture | Large-scale AI transformation, co-innovation with CSPs
Deloitte | AI consulting, risk, and enterprise transformation
PwC | Assurance, tax, and AI-enabled transformation
IBM | Hybrid cloud + AI services (Consulting, watsonx)
KPMG | Assurance and advisory with AI focus
EY | Tax, assurance, and AI advisory
Capgemini | Global delivery + AI/industry solutions
TCS | Large-scale IT + AI services
Infosys | AI and automation-led managed services
Wipro | Cloud + AI and engineering services
NTT DATA | Global SI with AI/infra focus

Silicon – Merchant CPUs/GPUs

Company | Notes
Intel | CPUs, GPUs, Gaudi AI accelerators
NVIDIA | GPUs, networking, full-stack AI platform
AMD | GPUs, AI accelerators (Instinct)
Qualcomm | Edge/phone AI, NPUs
ARM | CPU IP for AI-enabled SoCs
Broadcom | Networking and custom ASICs for AI DCs

Silicon – Hyperscaler Custom

Company / Chip | Notes
Google – TPU | Custom training/inference ASICs for Google Cloud
Amazon – Trainium / Inferentia | Custom training/inference ASICs for AWS
Microsoft – Maia / Cobalt | Custom AI accelerator + CPU for Azure
Meta – MTIA | Inference accelerators for Meta's workloads

Silicon – AI Accelerator Startups

Company | Notes
Cerebras | Wafer-scale engines for large-model training
Groq | LPU-based, low-latency inference hardware
SambaNova | Dataflow AI accelerators for enterprise/gov
Graphcore | IPU accelerators (specialized AI compute)
Tenstorrent | RISC-V–based AI compute platform

Foundry & Memory (AI-Enabling)

Company | Notes
TSMC | Advanced-node foundry for leading AI chips
Samsung Electronics | HBM/memory and foundry for AI silicon
Micron | HBM and DRAM for AI workloads

KSA / Sovereign Actors

Entity | Notes
PIF | Capital allocator into global AI/infra
HUMAIN | KSA-linked AI initiative

Infrastructure (Colo / Data Center / GPU Cloud)

Company | Notes
Equinix | Global colocation + interconnect for AI clouds
Digital Realty | Data center and interconnect for AI workloads
CoreWeave | GPU cloud provider, NVIDIA $2B investment, 5GW AI factory build-out
Lambda Labs | GPU cloud for ML training and inference
Crusoe Energy | Sustainable AI infrastructure, clean-energy data centers

AI-Native (Model Labs / Platforms)

Company | Notes
OpenAI | Foundation models, ChatGPT / platform
Anthropic | Claude models, enterprise safety-focused AI
xAI | Model lab with X ecosystem integration
DeepSeek | Cost-optimized LLMs, alternative stack
Mistral | Open-weight and enterprise-focused models
Cohere | Enterprise language models and APIs
Hugging Face | Model hub, tooling, and ecosystem

Services / Enterprise Control Planes

Company | Notes
ServiceNow | Workflow + AI system of action
Databricks | Lakehouse + ML/AI platform
Snowflake | Data cloud + AI/ML workloads
Palantir | Data/decision platform; vertical AI
SAP | ERP + AI business processes
Oracle (Apps) | SaaS + database + AI embedded in applications
Rubrik | Data security/backup as AI-resilient control plane
Veeam | Backup/DR for AI-era infra
Salesforce | Data Cloud + Einstein "system of intelligence"
Celonis | Process mining / execution management decision layer
UiPath | Automation and agentic orchestration
Blue Yonder | AI-native supply chain and planning

OEM Competitive Set

Company | Notes
Dell Technologies | AI Factory, PowerEdge, enterprise infrastructure
HPE | GreenLake, AI infrastructure, HPC
Supermicro | GPU-optimized servers, liquid cooling
Cisco | Networking, data center infrastructure

Theme Taxonomy Design

Principles

Themes should be:

  • Distinct enough for clear classification
  • Comprehensive enough to capture all relevant signals
  • Aligned to strategic priorities (not generic news categories)
  • Stable over time to enable trend analysis

Theme Definitions

AI Infrastructure & Partnerships
Training infrastructure deals, inference deployment partnerships, and AI platform integrations. Tracks how silicon vendors and platform providers are partnering to deliver enterprise AI capabilities.
Hyperscaler Dynamics
Cloud provider strategy shifts, on-premises and hybrid positioning, and partner program changes. Monitors AWS, Azure, and Google Cloud competitive positioning and partnership structures.
GSI Ecosystem Movements
Practice launches and investments, technology partnerships, and go-to-market alignment across major consulting firms. Tracks Accenture, Deloitte, IBM Consulting, and others.
Silicon Roadmap & Supply
Product announcements, supply and capacity signals, and competitive positioning from NVIDIA, Intel, AMD, and other semiconductor vendors.
Competitive Alliance Activity
Partnership announcements and alliance strategy moves from Dell, HPE, and other OEMs. Direct competitive intelligence for alliance strategy.
Channel & Route-to-Market
Partner program changes, distribution shifts, and as-a-service model evolution. Tracks how technology reaches end customers.
Regulatory & Policy
Trade and tariff developments, AI governance initiatives, and antitrust activity. Policy context affecting partnership and market strategy.
Market Signals
Demand indicators, pricing dynamics, and segment shifts. Broader market context for strategic planning.

Tuning Over Time

The taxonomy is intentionally stable to support trend analysis, but adjustments are expected as the landscape evolves. Themes with persistently low article counts may be candidates for merging, while themes that consistently exceed coverage capacity may warrant splitting. Any taxonomy changes are tracked in METHODOLOGY_LOG to preserve analytical continuity.

Classification Engine

What Classification Produces

Field | Description
Primary Theme | Single best-fit theme
Secondary Themes | Up to 2 additional relevant themes
Entities | Companies, products, people mentioned
Confidence | 2-5, derived from source tier (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2)
Impact | 1-5, magnitude of ecosystem effect
Irreversibility | 1-5, duration/permanence of signal
Signal Type | Announcement, Analysis, Speculation, or Background
Summary | 2-sentence synthesis

Classification Prompt Template

You are classifying news content for a strategic alliance intelligence system.

ARTICLE:
Title: {title}
Source: {source} (Tier {tier})
Date: {date}
Content: {content}

THEMES:
1. AI Infrastructure & Partnerships
2. Hyperscaler Dynamics
3. GSI Ecosystem Movements
4. Silicon Roadmap & Supply
5. Competitive Alliance Activity
6. Channel & Route-to-Market
7. Regulatory & Policy
8. Market Signals

TASK:
1. Assign PRIMARY THEME (single best fit)
2. Assign up to 2 SECONDARY THEMES if clearly relevant
3. Extract ENTITIES mentioned (companies, products, people)
4. Rate IMPACT (1-5): Magnitude of ecosystem effect
   1 = Minor/routine
   3 = Notable shift
   5 = Ecosystem-reshaping
5. Rate IRREVERSIBILITY (1-5): Duration/permanence
   1 = Easily reversed
   3 = Medium-term commitment
   5 = Long-term/structural
6. Identify SIGNAL TYPE: Announcement | Analysis | Speculation | Background

Note: CONFIDENCE is derived from source tier (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2)

Return JSON:
{
  "primary_theme": "",
  "secondary_themes": [],
  "entities": [],
  "impact": 0,
  "irreversibility": 0,
  "signal_type": "",
  "summary": ""
}
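As one concrete step toward the automated pipeline, the template above can be filled programmatically before being sent to the classification model. This is a sketch under assumptions: `build_classification_prompt` is a hypothetical helper, and the TASK section is abbreviated in the string for brevity.

```python
# The eight strategic themes, in template order.
THEMES = [
    "AI Infrastructure & Partnerships", "Hyperscaler Dynamics",
    "GSI Ecosystem Movements", "Silicon Roadmap & Supply",
    "Competitive Alliance Activity", "Channel & Route-to-Market",
    "Regulatory & Policy", "Market Signals",
]

def build_classification_prompt(title, source, tier, date, content):
    """Fill the classification prompt template for one article."""
    theme_list = "\n".join(f"{i}. {t}" for i, t in enumerate(THEMES, 1))
    return (
        "You are classifying news content for a strategic alliance intelligence system.\n\n"
        f"ARTICLE:\nTitle: {title}\nSource: {source} (Tier {tier})\nDate: {date}\n"
        f"Content: {content}\n\nTHEMES:\n{theme_list}\n\n"
        "TASK: (classification instructions as in the template above)\n\n"
        'Return JSON: {"primary_theme": "", "secondary_themes": [], "entities": [], '
        '"impact": 0, "irreversibility": 0, "signal_type": "", "summary": ""}'
    )
```

Keeping the theme list in one constant means the classification prompt and any validation of the model's `primary_theme` output draw from the same source of truth.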

Signal Scoring Model

The scoring model produces a single composite score that enables ranking and prioritization across all signals.

Formula

Signal Score = Confidence × (Impact + Irreversibility)
Range: 4 to 50 in practice (Confidence is never below 2)
High-priority threshold: ≥30

Scoring Dimensions

Dimension | Scale | Criteria
Confidence | 2-5 | Derived from source tier: Tier 1 = 5, Tier 2 = 4, Tier 3 = 3, Tier 4 = 2
Impact | 1-5 | 1 = Minor/routine announcement; 2 = Incremental development; 3 = Notable strategic shift; 4 = Significant ecosystem effect; 5 = Ecosystem-reshaping event
Irreversibility | 1-5 | 1 = Easily reversed (pilot, MOU); 2 = Short-term commitment (<1 year); 3 = Medium-term (1-3 years); 4 = Long-term contract (3+ years); 5 = Structural/permanent (M&A, major capex)

Example Calculation

Signal: "NVIDIA and Accenture Expand Partnership"
Source: NVIDIA Newsroom (Tier 1)
Confidence: 5 (Tier 1 source)
Impact: 5 (reshapes GSI-silicon ecosystem)
Irreversibility: 4 (multi-year training commitment, 30K consultants)

Score = 5 × (5 + 4) = 45

Interpretation: High-priority signal warranting immediate attention.
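The calculation above can be expressed directly in code. A minimal sketch in the target pipeline's Python stack; the function and constant names are illustrative.

```python
# Source tier -> confidence score, per the Source Definition Framework.
TIER_CONFIDENCE = {1: 5, 2: 4, 3: 3, 4: 2}

def signal_score(tier, impact, irreversibility):
    """Composite score: Confidence x (Impact + Irreversibility)."""
    for name, value in (("impact", impact), ("irreversibility", irreversibility)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be 1-5, got {value}")
    return TIER_CONFIDENCE[tier] * (impact + irreversibility)
```

`signal_score(1, 5, 4)` reproduces the worked example's 45; the practical floor is `signal_score(4, 1, 1)`, which is 4.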

Score Interpretation

Score Range | Priority | Color | Action
40-50 | Critical | Claret (#9F1D35) | Immediate alert, leadership briefing
30-39 | High | Mandarin (#FF8833) | Same-day review, stakeholder notification
20-29 | Medium | Wheat (#F2DFCE) | Include in 48-hour digest
10-19 | Low | Grey (#E6D9CE) | Weekly summary only
2-9 | Background | (none) | Archive for reference

Visual Examples

Signal scores appear throughout the platform with consistent color coding:

Badge examples: 48 and 45 (Critical), 38 and 32 (High), 28 and 22 (Medium), 15 (Low).

Temporal Decay Adjustment

For ranking and display ordering, signals are adjusted by a time-decay factor that reduces the weight of older signals while preserving the raw score for reference:

Score_adjusted = Score_raw × e^(-λ × age_days)

Where:
  λ (lambda) = decay rate constant (default: 0.03)
  age_days   = days between signal date and reference date

Half-life at λ = 0.03: ~23 days (signal retains 50% weight)
Quarter-life: ~46 days (signal retains 25% weight)

Temporal decay is configurable via COVERAGE_STATS.temporalDecay:

Parameter | Default | Description
enabled | true | Toggle decay on/off
lambda | 0.03 | Decay rate constant. Higher = faster decay
referenceDate | Current date | ISO date string. All ages calculated relative to this

Design rationale: Raw scores remain the permanent record of a signal's assessed importance. Temporal decay affects only ranking within views (Digest, Notable Coverage). This prevents stale signals from crowding out fresh developments while preserving the analytical judgment embedded in the raw score.

Note: The lambda parameter (0.03) was set by editorial judgment based on the typical news cycle cadence for alliance intelligence. External calibration against outcomes has not been performed.
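The decay adjustment is a one-line computation. A sketch of how it might look in the target Python pipeline (the current implementation computes the same quantity client-side in JavaScript); the function names are illustrative.

```python
import math

def adjusted_score(raw_score, age_days, lam=0.03):
    """Decay-adjusted score used only for ranking; raw_score remains the record."""
    return raw_score * math.exp(-lam * age_days)

def half_life_days(lam=0.03):
    """Age at which a signal retains 50% of its weight (~23 days at lambda = 0.03)."""
    return math.log(2) / lam
```

A score-40 signal at exactly one half-life of age ranks like a fresh score-20 signal, which is why raw scores are kept alongside the adjusted ones.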

Condensed Event Summary Specification

Executive briefings in The Assessment and The Digest follow a structured specification to ensure consistent, high-quality summaries. Summary length adapts based on news cycle intensity.

Heavy News Cycle Detection

A heavy news cycle triggers expanded summaries (5-6 sentences) when ANY threshold is met:

Runtime: These thresholds are evaluated and stored in COVERAGE_STATS.heavyCycle (isHeavy, trigger, thresholds) during each refresh cycle. The heavyCycle object drives summary length decisions in both The Digest and The Assessment.

Metric | Standard | Heavy Threshold
Articles processed | <100 | ≥100
High-priority signals (≥30) | <15 | ≥15
Critical signals (≥40) | <4 | ≥4
New partnerships announced | <5 | ≥5
Themes with activity spike (>25%) | <3 | ≥3
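The ANY-threshold rule reduces to a single check. A sketch under assumptions: `heavy_cycle` and the metric key names are illustrative, roughly corresponding to the COVERAGE_STATS.heavyCycle fields described above.

```python
# Heavy-cycle thresholds from the table above (metric key names are assumptions).
HEAVY_THRESHOLDS = {
    "articles_processed": 100,
    "high_priority_signals": 15,   # signals scoring >= 30
    "critical_signals": 4,         # signals scoring >= 40
    "new_partnerships": 5,
    "theme_spikes": 3,             # themes with a >25% activity spike
}

def heavy_cycle(metrics):
    """Return (is_heavy, trigger): heavy if ANY metric meets its threshold."""
    for key, threshold in HEAVY_THRESHOLDS.items():
        if metrics.get(key, 0) >= threshold:
            return True, key
    return False, None
```

Returning the first triggering metric mirrors the `trigger` field stored alongside `isHeavy`, so the digest can explain why it switched to the expanded summary format.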

Qualitative Triggers

Tentpole events — CES, MWC, GTC, re:Invent, Google I/O, Build, Ignite, Dreamforce
Policy announcements — Executive orders, regulatory rulings, trade actions
Mega-deals — Any partnership >$10B or consortium announcement
Market-moving — IPOs, major M&A, leadership changes at coverage entities

Segment-Specific Triggers

AI-Native — Model release, funding >$1B, market share shift >5%
Hyperscalers — Region launch, pricing change, major customer win/loss
GSIs — Business group launch, headcount >10K, new practice
Silicon — Architecture announcement, fab deal, supply constraint
OEM — Product launch, design win, channel program change
Sovereign — National AI strategy, infrastructure >$1B
Infrastructure — Data center >500MW, new geography entry
Enterprise SaaS — Platform integration, AI feature, major migration

Summary Length by Cycle

Cycle Type | Sentences | Words | Entities | Themes
Standard | 3-4 | 50-80 | 6-10 | 3+
Heavy | 5-6 | 80-120 | 8-12 | 4+

Structure — Standard Cycle

Sentence | Function | Pattern
1 | Lead signal — highest-impact development | [Entity] [action verb] [outcome].
2 | Secondary signals — 2-3 developments | [Entity-Entity] [deal], [Entity-Entity] [deal] signal [pattern].
3 | Tertiary signal or emerging trend | [Entity] [action] [implication].
4 | Watch item (optional) | [Trend] intensifies around [developments].

Structure — Heavy Cycle (additional sentences)

Sentence | Function | Pattern
5 | Segment impact — ecosystem implications | [Segment] [consequence] as [specific development].
6 | Competitive positioning | [Entity] response/positioning signals [strategic direction].

Style Rules

No articles — Drop "the," "a," "an" where possible
Entity-first — Lead with company/organization names
Action verbs — triggers, signals, accelerates, expands, formalizes, unveils, closes
Noun phrases — "Siemens-NVIDIA expansion" not "Siemens and NVIDIA expanded"
Implication language — "signals," "marks," "indicates" to connect facts to meaning
Compressed attribution — "per Menlo Ventures" not "according to a report from"

Constraints — Standard Cycle

Length — 3-4 sentences, 50-80 words total
Entities — 6-10 mentioned
Themes — Minimum 3 of 8 strategic themes
Score threshold — Only include signals with score ≥35
Tense — Present for ongoing; past for completed deals

Content Priority

1. Partnership announcements — New alliances, expansions, JVs
2. Infrastructure deals — Capacity, investment, deployment
3. Market position shifts — Share changes, competitive moves
4. Policy/regulatory — Government programs, compliance
5. Product launches — Only if partnership-relevant

Vocabulary Reference

Action Verbs (by intensity):

High Impact — triggers, accelerates, reshapes, dominates, surges
Medium Impact — expands, formalizes, closes, unveils, launches
Low Impact — continues, maintains, supports, updates, extends

Pattern Verbs: signals (future direction), marks (inflection point), reflects (underlying trend), intensifies (escalating), converges (trends meeting)

Deal Type Nouns: expansion, alliance, partnership, JV, MOU, win, closing, deal, agreement, licensing, launch, rollout, availability, deployment

Quality Checklist

3-4 sentences, 50-80 words
Minimum 6 entities named
Minimum 3 themes represented
Lead signal has highest score (≥45)
No articles unless necessary for clarity
Entity-first sentence construction
At least one implication verb
Present tense for ongoing, past for completed

Example Output

NVIDIA's "ChatGPT moment for robotics" declaration triggers cascade of industrial AI partnerships. Siemens-NVIDIA Industrial AI Operating System expansion, Anthropic's Allianz enterprise win, and Deloitte-Pearson workforce development alliance signal accelerating GSI-hyperscaler convergence. Disney-OpenAI's $1B Sora licensing deal marks entertainment AI's commercial inflection point.

Analysis: Lead: NVIDIA Physical AI (Score 50) · Secondary: 3 deals across GSI Ecosystem, AI Infrastructure · Tertiary: Disney-OpenAI market signal · Entities: 8 total · Themes: 3 covered

Synthesis Methodology

Synthesis Levels

Level | Scope | Frequency | Status
The Digest | Last 24-48 hours, score ≥30 only | Per refresh | Active
The Assessment | Rolling 10-15 days, cross-theme | Per refresh | Active
Theme Summary | Per-theme, rolling 7 days | Per refresh | Active (via THEME_SYNTHESES)
Notable Coverage | Score 20-29, last 48 hours | Per refresh | Active
Strategic Assessment | Quarter-over-quarter shifts | Monthly | Planned (not yet implemented)

Current Synthesis Workflow

In the current manual implementation, synthesis is performed during Stages 4-7 of the refresh protocol. The curator reviews classified signals, identifies cross-theme patterns, and writes narrative synthesis covering key developments, pattern analysis, strategic implications, and watch items. Synthesis output is stored in THEME_SYNTHESES{} with structured fields (meta, keyDevelopments, patternAnalysis, strategicImplications, watchItems, articles).

Target Synthesis Prompt Template

The automated pipeline will use this prompt structure for Claude API-driven synthesis:

You are synthesizing news for strategic intelligence.

THEME: {theme_name}
PERIOD: {start_date} to {end_date}

ARTICLES:
{for each article, sorted by signal_score DESC then recency}
---
Source: {source_name} (Tier {tier})
Date: {published_at}
Title: {title}
Summary: {summary}
Entities: {entities}
---
{end for}

Produce a synthesis with:

1. KEY DEVELOPMENTS (3-5 bullets)
   Most significant announcements or shifts
2. PATTERN ANALYSIS
   What patterns emerge? Where do sources agree or diverge?
3. STRATEGIC IMPLICATIONS
   What does this mean for alliance strategy?
4. WATCH ITEMS
   Emerging signals that warrant monitoring

Keep synthesis concise and actionable.
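The template orders articles by signal score descending, then recency. That ordering can be sketched with two stable sorts (ISO-8601 date strings compare correctly as plain text); the function name and article record shape are assumptions.

```python
def order_for_synthesis(articles):
    """Order articles for the synthesis prompt: score DESC, then most recent first."""
    # Python's sort is stable: sort by the tie-breaker first, then the primary key.
    recent_first = sorted(articles, key=lambda a: a["date"], reverse=True)
    return sorted(recent_first, key=lambda a: a["signal_score"], reverse=True)
```

Putting the highest-scoring, freshest articles first matters for long article lists, since the synthesis model weights early context more reliably than material near a truncation boundary.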

Temporal Analysis

Why Time Matters

The key differentiator from static synthesis is tracking how signals evolve:

Metric | What It Reveals
Theme volume over time | Rising or falling attention
Source concentration | Echo chamber vs. genuine consensus
Entity frequency | Who's driving the narrative
Signal type mix | Noise (speculation) vs. signal (announcements)
Week-over-week delta | Acceleration or deceleration of coverage

Comparison Windows

CURRENT: Last 7 days
COMPARISON: Prior 7 days
BASELINE: 90-day rolling average

Calculate per theme:
  - Volume delta (% change)
  - New entities (appeared this period, not prior)
  - Dropped entities (appeared prior, not this period)
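The per-theme calculations reduce to count and set operations. A minimal sketch under an assumed record shape (each article carries an `entities` list); `window_comparison` is an illustrative name.

```python
def window_comparison(current_articles, prior_articles):
    """Volume delta and entity churn between the CURRENT and COMPARISON windows."""
    cur_n, prior_n = len(current_articles), len(prior_articles)
    delta_pct = None if prior_n == 0 else 100.0 * (cur_n - prior_n) / prior_n
    cur_entities = {e for a in current_articles for e in a["entities"]}
    prior_entities = {e for a in prior_articles for e in a["entities"]}
    return {
        "volume_delta_pct": delta_pct,   # None when the prior window is empty
        "new_entities": sorted(cur_entities - prior_entities),
        "dropped_entities": sorted(prior_entities - cur_entities),
    }
```

An empty prior window yields `None` rather than a divide-by-zero or a misleading infinite spike, which matters for newly added themes.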

Alert Triggers

These classification rules are applied during the refresh protocol to determine which signals are promoted to the ALERTS[] array. In the current implementation, alerts are created manually based on these criteria.

Trigger | Condition | Action
Critical signal | Signal score ≥40 | Immediate alert + leadership briefing
High-priority signal | Signal score 30-39 | Same-day review + stakeholder notification
Theme volume spike | Volume >2x comparison period | Daily highlight
Competitive move | Theme = Competitive Activity + score ≥20 | Immediate alert
New entity emergence | Entity not in 90-day baseline + 3+ mentions | Weekly flag
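The trigger table translates directly into a classification routine. An illustrative Python sketch; the signal field names and context parameters are assumptions about how the data would be passed in, not the platform's actual interface:

```python
def alert_triggers(signal, theme_volume, comparison_volume,
                   baseline_entities, mention_counts):
    """Apply the trigger rules to one classified signal; returns the triggers that fire."""
    fired = []
    if signal["score"] >= 40:
        fired.append("critical_signal")
    elif 30 <= signal["score"] <= 39:
        fired.append("high_priority_signal")
    if theme_volume > 2 * comparison_volume:
        fired.append("theme_volume_spike")
    if signal["theme"] == "Competitive Activity" and signal["score"] >= 20:
        fired.append("competitive_move")
    for entity in signal["entities"]:
        # Not in the 90-day baseline and mentioned 3+ times this period.
        if entity not in baseline_entities and mention_counts.get(entity, 0) >= 3:
            fired.append("new_entity_emergence")
            break
    return fired
```

In the current manual workflow these checks are applied by the curator; an automated pipeline would run them after classification, before writing to ALERTS[].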

Refresh Protocol

The content refresh process follows a 12-stage execution model governed by four design principles. This is the primary operational procedure for every content update.

Design Principles

Principle | Name | Rule
P1 | Atomic Stages | Each stage contains one type of work. If sub-steps require fundamentally different effort (e.g. updating narrative text vs. constructing sourced article records), they belong in separate stages. Asymmetric effort within a stage is a skip risk.
P2 | Hardest First | When a stage has multiple sub-steps, order by descending effort. The heaviest sub-step runs while context is freshest. Completion momentum on light tasks should carry you out of a stage, not trick you into skipping the hard part.
P3 | Content Gates | Every validation step must include at least one assertion that can only pass if the actual work was done, not just if the data structure parses. Structural checks catch broken code. Content checks catch skipped work.
P4 | Compaction Resilience | After completing each stage, write a progress marker so that if context is compacted mid-refresh, resume can target the exact next stage. Never rely on context memory alone for tracking which stages are done.

Stage Execution Sequence

Stage | Scope | Content Gate
1 | Web Research (Digest) — 6-8 targeted queries across coverage universe | Signal candidates collected
2 | Update Digest Data Arrays — SIGNALS[], ALERTS[], COVERAGE_STATS periods | Newest SIGNALS[].date ≥ refresh window start
3 | Update Digest Hardcoded HTML — lead section, stats sidebar | Grep confirms new date strings in HTML
4 | Web Research (Assessment Themes) — additional depth searches | Patterns and implications documented
5 | Update Assessment Narratives — THEMES[] metadata + THEME_SYNTHESES{} narrative fields. Does NOT touch articles[]. | All meta.lastUpdate values match refresh date
6 | Update Source Coverage Tables — THEME_SYNTHESES{}.articles[] for all 8 themes. Separate stage because article record construction is different work than narrative writing (P1). | For each theme, newest articles[0].date within refresh window
7 | Update Assessment Hardcoded HTML — lead section, stats sidebar | Grep confirms new dates and stats
8 | Update Ecosystems — COMPANIES[] entries with new data | lastRefresh date matches current refresh date
9 | L2 Entity Propagation — update PARTNERSHIPS[], HEADLINES[], TIMELINE[], PROFILES for qualifying entities. Hardest-first: partnerships before profiles. | Newest headline date within refresh window for each modified entity
10 | Update Media Trends — TRENDS{} weekly data, entities, distributions | weekLabels[] includes current week
11 | Footer + REFRESH_LOG — dates, change log entries | Footer date matches refresh date
12 | Final Validation + Deliver — structural + content validation sweep, methodology audit, present_files | All content-level assertions pass

Progress Tracking

// PROGRESS: S1 ✓ S2 ✓ S3 ✓ S4 pending
// Update this line after each stage completes.
// On compaction or resume, read to determine where to pick up.
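Resume logic only needs to find the lowest stage without a check mark. A small sketch of that parse (the marker format is taken from the comment above; the helper itself is illustrative):

```python
import re

def next_stage(marker, total_stages=12):
    """Return the lowest-numbered stage without a completion mark, or None if all done."""
    done = {int(n) for n in re.findall(r"S(\d+) ✓", marker)}
    for stage in range(1, total_stages + 1):
        if stage not in done:
            return stage
    return None
```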

Refresh Log

All refresh activity is tracked in the REFRESH_LOG object:

Field | Purpose
lastRefresh | ISO timestamp of last completed refresh
lastRefreshDisplay | Human-readable date for UI display
changes[] | Array of change descriptions from current refresh
previousRefreshes[] | Archive of prior refresh summaries

Entity Activation Protocol

Entities in the Ecosystems tracker move through three activation levels. Each level builds on the previous and requires specific data structures.

Activation Levels

Level | Name | Minimum Data | Result
L1 | Card Activation | COMPANIES[] entry: disabled=false, lastRefresh set, type/partnerships/capital/trend updated | Entity card shows as active with refresh badge. No detail page.
L2 | View Details | COMPANY_PROFILES[id], [ID]_PARTNERSHIPS[] (min 5), [ID]_HEADLINES[] (min 3), [ID]_TIMELINE[] (min 3 quarters), [ID]_SECTORS[], COMPANY_DATA[id] registered | "View details →" arrow appears. Full detail page with Headlines, Partnerships, Timeline, Sectors tabs.
L2+ | Strategic Brief | L2 requirements + COMPANY_PROFILES[id].dossier{} with status, brief, profile fields, sections[]. For live briefs: corresponding <template id="tpl-dossier-[id]"> with full analytical content. | "Strategic Brief" tab appears in detail page tab strip. Live briefs render full inline report (threat assessment, financials, competitive forces, executive tracking, timeline). Planned briefs render stub outline with analysis roadmap.
L3 | Media Trends | TRENDS.trendingEntities[] entry, optional volumeByTheme[] and weekOverWeek[] updates | Entity appears in Media Trends trending list and charts.

L2 Data Structure Requirements

Array | Record Fields | Minimum Records
[ID]_PARTNERSHIPS[] | id, partner, sector, type, commitment, date, status, headline, summary, terms[] | 5
[ID]_HEADLINES[] | title, value, sector, date, source, url, partnerId, impact | 3
[ID]_TIMELINE[] | quarter, events: [{ date, partners[], detail }] | 3 quarters
[ID]_SECTORS[] | id, name, partners, value, focus | Derived from partnerships
COMPANY_PROFILES[id] | title, date, metrics[] | 1 entry

L2+ Dossier Data Structure

Property | Fields | Notes
COMPANY_PROFILES[id].dossier{} | status ('live'|'planned'), brief, hq, founded, ticker, ceo, marketCap, sections[] | Controls Strategic Brief tab visibility and render mode
dossier.sections[] | title, desc | Analysis roadmap items. Rendered as numbered list for planned briefs.
<template id="tpl-dossier-[id]"> | Full HTML content (sections, tables, charts, timelines) | Required only for status='live'. Rendered inside .dossier-content wrapper with scoped CSS.

Activation During Refresh

Entity activation typically occurs at Stage 8 (L1) or Stage 9 (L2 propagation) of the refresh protocol. New L2 activations require dedicated web research for partnership data. L2 propagation (updating existing detail pages with new signals) is triggered when an active entity has ≥1 new SIGNAL or ALERT in the current refresh window.

Activation Checklist

[ ] L1: COMPANIES[] disabled: false, lastRefresh set
[ ] L1: Type, partnerships, capital, trend updated
[ ] L2: COMPANY_PROFILES[id] created
[ ] L2: [ID]_PARTNERSHIPS[] created (min 5 records)
[ ] L2: [ID]_HEADLINES[] created (min 3 records)
[ ] L2: [ID]_TIMELINE[] created (min 3 quarters)
[ ] L2: [ID]_SECTORS[] created
[ ] L2: COMPANY_DATA[id] registered
[ ] L2+: COMPANY_PROFILES[id].dossier{} created (status, brief, profile fields, sections[])
[ ] L2+ (live): <template id="tpl-dossier-[id]"> added with full analytical content
[ ] L2+ (live): Scoped CSS covers all content classes (.dossier-content prefix)
[ ] L3: TRENDS.trendingEntities[] updated (if applicable)
[ ] Validation: JS syntax check passes
[ ] Validation: validateData() reports no new warnings

Theme Volume Chart Methodology

The Theme Volume Chart (TVC) visualizes article volume by strategic theme across two views: a full 26-month timeline and a close-up weekly view. It blends observed data with simulated estimates to present a continuous trend narrative.

Data Sources

Period | Data Type | Method
Jan 2024 – May 2025 | Estimated | Reverse-trajectory Monte Carlo simulation from Jun 2025 anchor points
Jun 2025 – Oct 2025 | Estimated (weekly) | Backward interpolation from Nov anchor points with event-driven impulses
Nov 2025 – Feb 2026 | Observed | Actual weekly intake volumes from manual curation

Monte Carlo Simulation

The full-view monthly trajectory is generated by reverse-trajectory Monte Carlo:

Simulation Parameters:
  Iterations: 5,000 per theme
  Noise model: Log-normal (σ = 0.12 full view, 0.08 close-up)
  Anchor points: Aggregated weekly data (Jun 2025 onward)
  Growth rates: Theme-specific, derived from observed data
  Event boosts: Multiplicative impulse functions at known events

Reverse trajectory formula:
  traj[i] = traj[i+1] / ((1 + growthRate) × eventBoost × noise)

Output per theme: P25 (lower quartile), Median (P50), P75 (upper quartile)
Confidence bands shown for estimated period only
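A compact sketch of the reverse-trajectory sampler. Python's seeded random stands in for the production Mulberry32 PRNG, and the anchor, growth rate, and boost values are illustrative, not the platform's calibrated parameters:

```python
import math
import random

def reverse_trajectory(anchor, months, growth_rate, event_boosts, sigma, seed):
    """Walk backward from the anchor: traj[i] = traj[i+1] / ((1+g) × boost × noise)."""
    rng = random.Random(seed)
    traj = [anchor]
    for step in range(months - 1):
        month_index = months - 2 - step          # month currently being estimated
        boost = event_boosts.get(month_index, 1.0)
        noise = math.exp(rng.gauss(0.0, sigma))  # log-normal multiplicative noise
        traj.append(traj[-1] / ((1 + growth_rate) * boost * noise))
    traj.reverse()
    return traj

def simulate(anchor, months, growth_rate, event_boosts=None, sigma=0.12, iters=5000):
    """Per-month P25 / median / P75 bands across all iterations."""
    event_boosts = event_boosts or {}
    runs = [reverse_trajectory(anchor, months, growth_rate, event_boosts, sigma, seed)
            for seed in range(iters)]
    bands = []
    for i in range(months):
        col = sorted(run[i] for run in runs)
        bands.append({"p25": col[iters // 4],
                      "median": col[iters // 2],
                      "p75": col[3 * iters // 4]})
    return bands
```

Because every run starts from the same anchor and walks backward, the final month has zero spread and the confidence band widens toward the earliest estimated month.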

Event-Driven Impulse Functions

Known market events (earnings, product launches, conferences, regulatory milestones) are modeled as multiplicative boosts applied to the corresponding time period. Each theme defines its own event boost map:

Event Type | Typical Boost Range | Duration
Major earnings season | 1.2x – 1.5x | 1-2 weeks
Tentpole conference (CES, GTC, re:Invent) | 1.3x – 1.8x | 1-2 weeks
Major fundraise or M&A | 1.2x – 1.6x | 1 week
Regulatory milestone | 1.1x – 1.4x | 1-2 weeks

Convergence Diagnostics

Simulation quality is verified using a Gelman-Rubin inspired diagnostic:

Method:
  Run two independent chains of 2,500 iterations each with offset PRNG seeds.
  Compare median values at the earliest estimated time point
  (furthest from anchor — maximum estimation uncertainty).
  Divergence = |median_chain1 - median_chain2| / average
  Pass criterion: divergence < 5%
  Warning: divergence ≥ 5% suggests insufficient iterations

Diagnostics run automatically on page load and log results to the browser console.
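The same diagnostic can be reproduced outside the browser. The chain below mirrors the reverse-trajectory walk with illustrative parameters; Python's seeded random again stands in for the Mulberry32 PRNG used in charts.js:

```python
import math
import random
import statistics

def chain_median_at_origin(seed_offset, iters, anchor=100.0, months=12,
                           growth_rate=0.05, sigma=0.12):
    """Median of the earliest estimated point (furthest from anchor) for one chain."""
    values = []
    for s in range(iters):
        rng = random.Random(seed_offset + s)
        v = anchor
        for _ in range(months - 1):
            v /= (1 + growth_rate) * math.exp(rng.gauss(0.0, sigma))
        values.append(v)
    return statistics.median(values)

def two_chain_divergence(iters=2500):
    """Gelman-Rubin-inspired check: |m1 - m2| / average, for two offset-seeded chains."""
    m1 = chain_median_at_origin(seed_offset=0, iters=iters)
    m2 = chain_median_at_origin(seed_offset=10_000_000, iters=iters)
    return abs(m1 - m2) / ((m1 + m2) / 2)
```

At 2,500 iterations per chain the two medians should land well inside the 5% pass criterion; a result near or above it would suggest raising the iteration count.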

View Modes

View | Resolution | Period | Features
Full View | Monthly | Jan 2024 – Feb 2026 | All 8 themes, P25-P75 confidence bands on estimated region, event annotation markers
Close-Up | Weekly | Jun 2025 – Feb 2026 | ~35 weeks, observed region shaded in teal, dashed lines for estimated / solid for observed

Technical Implementation

Current Stack (Manual Curation)

Component | Implementation | Purpose
Data Layer | JSON files (async loader.js) | SIGNALS, THEMES, ALERTS, COMPANIES, COVERAGE_STATS, TRENDS, THEME_SYNTHESES, DOSSIERS
Auth | AWS Cognito + Lambda | Passwordless magic links, 45-day invite tokens, login tracking
Access Control | auth/gate.js | JWT-based client gate; redirects unauthenticated users to splash.html
Rendering | Vanilla JavaScript | Dynamic DOM generation from data arrays
Visualization | Chart.js 4.x | Theme Volume Chart with Monte Carlo simulation
Styling | CSS custom properties | FT-inspired design system, responsive layout
Curation | 12-stage refresh protocol | Source monitoring, classification, scoring, content gates
Validation | validateData() + runSensitivityAnalysis() | Schema checks, range checks, cross-reference integrity, threshold sensitivity
Ranking | Temporal decay function | Score_adj = Score × e^(−λ × age_days)
Frontend Fx | Three.js (dot-matrix.js), word-transition.js | Animated dot-grid background, cross-fade word reveals
Email | SendGrid | Transactional invite emails, 3-email beta sequence
Config | config/platform.json | Coverage stats, signal thresholds, segment maps, data manifest
Output | SPA (index.html + JS + JSON) | Static-hostable, serverless backend for auth and tracking
Hosting | Static host + AWS | Frontend served statically; backend on API Gateway + Lambda
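The decay function in the Ranking row is simple to sketch. The λ = 0.03 value comes from the Limitations section; the signal record shape here is an assumption for illustration:

```python
import math

def adjusted_score(score, age_days, decay_lambda=0.03):
    """Score_adj = Score × e^(−λ × age_days)."""
    return score * math.exp(-decay_lambda * age_days)

def rank_signals(signals, today, decay_lambda=0.03):
    """Order signals by decay-adjusted score so fresh signals outrank stale ones."""
    return sorted(signals,
                  key=lambda s: adjusted_score(s["score"], today - s["day"], decay_lambda),
                  reverse=True)
```

At λ = 0.03 a score halves roughly every 23 days (ln 2 / 0.03 ≈ 23.1), so a 30-point signal from a month ago ranks below a fresh 15-point one.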

Data Architecture (View-to-Source Map)

Each view in the platform is driven by specific data structures. The DATA_MANIFEST object in the codebase documents this mapping programmatically.

View | Primary Data Sources | Render Functions
The Digest | SIGNALS, ALERTS, COVERAGE_STATS.digestPeriod | renderSignals, renderAlerts, renderTLDR, renderNotableCoverage
The Assessment | THEMES, THEME_SYNTHESES | renderThemeCards, renderThemeSynthesis
Ecosystems | COMPANIES, COMPANY_DATA, COMPANY_PROFILES | renderSegmentFilters, renderCompanyCards
Strategic Brief | COMPANY_PROFILES[id].dossier{}, <template tpl-dossier-[id]> | renderDossier
Media Trends | TRENDS | renderVolumeChart, renderTrendingEntities, renderSourceDistribution
Processing Stats | COVERAGE_STATS | renderProcessingSummary, updateNavDate

Data Structure Schema

Array | Required Fields | Feeds Views
SIGNALS[] | id, score, source, sourceTier, timestamp, displayTime, title, summary, theme, signalType, entities, url | Digest, Processing
ALERTS[] | id, type, title, source, theme, timestamp, displayTime, url, signalId | Digest
THEMES[] | id, name, priority, articleCount, topScore, trendDirection, trendPercent, updatedAgo, summary, enabled | Assessment, Digest
COMPANIES[] | id, name, segment, type, partnerships, capital, sectors, trend, trendText, disabled | Ecosystems
COVERAGE_STATS{} | coveragePeriod, digestPeriod, thresholds, heavyCycle | Processing, Digest, Trends
COMPANY_PROFILES[id].dossier{} | status, brief, hq, founded, ticker, ceo, marketCap, sections[] | Ecosystems (Strategic Brief tab)
<template tpl-dossier-[id]> | Full HTML content (live briefs only) | Ecosystems (Strategic Brief tab)

Data Validation

The validateData() function runs automatically on page load and performs three categories of checks. Results are logged to the browser console. Validation is currently non-gating: the page renders regardless of warnings. In a future automated pipeline, these checks should become pipeline gates that block publication on failure.

Check Type | What It Validates | Example
Schema | All required fields present in every record | SIGNALS[i] must have id, score, source, etc.
Range | Values within valid bounds | Score 2-50, sourceTier 1-4
Cross-reference | Foreign key integrity between arrays | ALERTS[i].signalId must exist in SIGNALS

Zero warnings indicates a clean dataset.
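The three check categories can be sketched in a few lines. This is an illustrative Python analogue of the in-browser validateData(), with the required-field set abridged; the real function checks the full schema table above:

```python
def validate_data(signals, alerts):
    """Schema, range, and cross-reference checks; returns a list of warning strings."""
    warnings = []
    required = ("id", "score", "source", "sourceTier")   # abridged field set
    for i, s in enumerate(signals):
        for field in required:                           # schema check
            if field not in s:
                warnings.append(f"SIGNALS[{i}] missing field: {field}")
        if not 2 <= s.get("score", 0) <= 50:             # range check
            warnings.append(f"SIGNALS[{i}] score out of range 2-50")
        if not 1 <= s.get("sourceTier", 0) <= 4:
            warnings.append(f"SIGNALS[{i}] sourceTier out of range 1-4")
    signal_ids = {s.get("id") for s in signals}
    for i, a in enumerate(alerts):                       # cross-reference check
        if a.get("signalId") not in signal_ids:
            warnings.append(f"ALERTS[{i}].signalId not found in SIGNALS")
    return warnings
```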

Sensitivity Analysis

The runSensitivityAnalysis() function tests how signal classification shifts under different threshold assumptions. It evaluates three threshold offsets (-5, 0, +5) and reports how many signals fall into each priority tier at each offset.

Purpose: If small threshold changes dramatically shift signal counts between tiers, the scoring calibration may need adjustment. Stable distributions across offsets indicate robust threshold selection.
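The mechanics are easy to illustrate. The 30/40 defaults below are taken from the Alert Triggers table; the tier names are illustrative labels, not the platform's exact terminology:

```python
def sensitivity_analysis(scores, high=30, critical=40, offsets=(-5, 0, 5)):
    """Tier counts at each threshold offset; stable counts suggest robust calibration."""
    results = {}
    for off in offsets:
        h, c = high + off, critical + off
        tiers = {"standard": 0, "high": 0, "critical": 0}
        for s in scores:
            if s >= c:
                tiers["critical"] += 1
            elif s >= h:
                tiers["high"] += 1
            else:
                tiers["standard"] += 1
        results[off] = tiers
    return results
```

If the counts swing sharply between offsets, many signals sit right at the boundaries and the thresholds deserve a second look.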

Target Stack (Automated Pipeline)

Component | Tool | Purpose
Scheduler | GitHub Actions | Trigger daily/hourly runs
Fetcher | Python + feedparser | Pull RSS content
Parser | BeautifulSoup | Extract clean text
Storage | SQLite | Store articles + metadata
Classification | Claude API (Haiku) | High-volume classification
Synthesis | Claude API (Sonnet) | Weekly summaries
Output | Jinja2 templates | Generate static HTML
Hosting | Static host | Deploy site

Target Database Schema

CREATE TABLE articles (
  id TEXT PRIMARY KEY,
  source_id TEXT,
  source_tier INTEGER,
  fetched_at TIMESTAMP,
  published_at TIMESTAMP,
  title TEXT,
  url TEXT UNIQUE,
  content TEXT,
  primary_theme TEXT,
  secondary_themes JSON,
  entities JSON,
  confidence INTEGER,
  impact INTEGER,
  irreversibility INTEGER,
  signal_score INTEGER GENERATED ALWAYS AS (confidence * (impact + irreversibility)) STORED,
  signal_type TEXT,
  summary TEXT
);

CREATE INDEX idx_theme ON articles(primary_theme, published_at);
CREATE INDEX idx_published ON articles(published_at);
CREATE INDEX idx_score ON articles(signal_score DESC);
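Because signal_score is a GENERATED ALWAYS AS column, the pipeline never writes it; SQLite derives it from the component scores on insert. A quick demonstration with Python's built-in sqlite3, with the schema abridged to the scoring columns (generated columns require SQLite ≥ 3.31):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE articles (
        id TEXT PRIMARY KEY,
        confidence INTEGER,
        impact INTEGER,
        irreversibility INTEGER,
        signal_score INTEGER GENERATED ALWAYS AS
            (confidence * (impact + irreversibility)) STORED
    )
""")
# The pipeline inserts only the component scores...
con.execute("INSERT INTO articles (id, confidence, impact, irreversibility) "
            "VALUES ('a1', 4, 5, 5)")
# ...and reads back the derived score: 4 × (5 + 5) = 40.
score = con.execute("SELECT signal_score FROM articles WHERE id = 'a1'").fetchone()[0]
```

This also keeps the 2-50 range check honest: with confidence 1-5 and impact + irreversibility summing 2-10, the derived score cannot leave that range.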

Estimated Costs

Component | Monthly Cost
Claude API (classification) | $20-50
Claude API (synthesis) | $30-60
Static hosting | $0-20
Total | $50-130

Implementation Roadmap

Complete: Presentation Layer

  • Static site architecture with section-based navigation
  • Theme index with priority indicators and trend signals
  • Theme detail pages with synthesis structure
  • Daily digest format with relevance filtering
  • Trends dashboard with volume charts and entity tracking
  • Theme Volume Chart with Monte Carlo simulation and convergence diagnostics
  • Full methodology documentation with dynamic stat derivation

Complete: Information Architecture

  • Eight-theme taxonomy aligned to strategic priorities
  • Four-tier source classification framework
  • Signal type definitions (Announcement, Analysis, Speculation, Background)
  • Relevance scoring criteria (1-5 scale) with temporal decay adjustment
  • Synthesis templates for theme summaries, including heavy cycle detection
  • DATA_MANIFEST documenting view-to-data-source mappings and schema

Complete: Operational Processes

  • 12-stage refresh protocol with 4 design principles (Atomic Stages, Hardest First, Content Gates, Compaction Resilience)
  • 3-level entity activation protocol (L1 Card, L2 Detail Page, L3 Trends Integration)
  • Data validation framework (schema, range, cross-reference integrity checks)
  • Sensitivity analysis for threshold parameters (±5 offset testing)
  • Methodology Registry for tracking documentation freshness

Complete: Analytical Engine

  • Temporal decay scoring for signal freshness ranking
  • Monte Carlo simulation engine (5,000 iterations, log-normal noise) for trend estimation
  • Convergence diagnostics (2-chain median comparison) for simulation quality
  • Event-driven impulse functions for known market catalysts

Complete: Authentication & Access Control

  • Passwordless magic link auth via AWS Cognito Custom Auth + Lambda (requestLink.js, verifyToken.js)
  • Dual token model: standard 15-minute single-use tokens + 45-day persistent invite tokens (type: invite) for beta users
  • Login activity tracking: first_login_at, last_login_at, login_count per token in DynamoDB (EcosystemEdgeMagicLinks)
  • Client-side auth gate (auth/gate.js) with JWT validation, token expiry checks, and redirect to splash.html
  • Beta user registry (data/beta-users.json) with status and delivery tracking per user

Complete: Email & Notification System

  • Outbound email migrated from AWS SES to SendGrid for transactional invite and sequence emails
  • 3-email beta sequence defined in data/email-sequence.json: Welcome (day 0), Pre-Flight Assignment (day 3), Weekly Friday Dispatch (12:12pm ET through May 16, 2026 launch)
  • 5-channel email subscription system with server-side persistence (DynamoDB EcosystemEdgeSubscriptions)
  • Mobile push notifications disabled and out of scope for beta program

Complete: Frontend Modules

  • js/dot-matrix.js — reusable Three.js animated dot-grid background with shiftColor() and bgShift() API
  • js/word-transition.js — reusable cross-fade word reveal utility with anchor, word, color, duration, exitFade, fadeTarget, and onComplete options
  • 4-phase async data loader (loader.js) replacing inline data arrays with JSON file fetches

Complete: Strategic Intelligence Briefs (v32)

  • L2+ activation level: dossier{} object in COMPANY_PROFILES with status, brief, profile fields, sections[]
  • Dynamic "Strategic Brief" tab injection in entity detail pages (visible only when dossier data exists)
  • Live brief rendering via <template> elements with scoped CSS (.dossier-content prefix)
  • Planned brief rendering with stub outlines: strategic brief, profile grid, status banner, numbered analysis roadmap
  • Intel dossier live with full analytical content (Executive Summary, Threat Assessment, Financials, Competitive Forces, Executive Departures, Timeline)
  • NVIDIA and Cerebras dossiers registered as planned with section roadmaps
  • Silicon Partners companion file (silicon-partners.html) with 15-entity index covering merchant silicon, custom silicon, foundry, and memory segments

In Progress: Entity Coverage

  • 17 of 72 entities at Level 2 detail depth (partnerships, headlines, timeline, sectors)
  • 3 of 17 L2 entities at Level 2+ with Strategic Brief tab (Intel live, NVIDIA and Cerebras planned)
  • Remaining 55 entities at Level 1 card-level data only
  • Priority: high-signal entities activated first based on news coverage density

Next: Data Pipeline (Automated)

  • Configure RSS feeds and API connections for defined sources
  • Build ingestion scripts with deduplication logic
  • Implement classification prompts via Claude API
  • Create SQLite database with defined schema
  • Test end-to-end fetch → classify → store flow

Next: Automation

  • Schedule daily ingestion runs
  • Automate theme synthesis generation
  • Configure alert triggers for high-relevance signals
  • Build static site generation from database

Next: Deployment

  • Select hosting environment
  • Configure CI/CD pipeline for automated publishing
  • Set up monitoring and error alerting
  • Establish backup and recovery procedures

Useful Database Queries

These queries are designed for the target SQLite database schema (see Implementation chapter, Target Database Schema). In the current implementation, equivalent analysis is performed through JavaScript functions operating on the in-memory data arrays.

Theme volume by week

SELECT primary_theme, COUNT(*) as articles
FROM articles
WHERE published_at > datetime('now', '-7 days')
GROUP BY primary_theme
ORDER BY articles DESC;

Trending entities

SELECT json_each.value as entity, COUNT(*) as mentions
FROM articles, json_each(articles.entities)
WHERE published_at > datetime('now', '-7 days')
GROUP BY entity
HAVING mentions >= 3
ORDER BY mentions DESC;

Week-over-week comparison

WITH current AS (
  SELECT primary_theme, COUNT(*) as cnt
  FROM articles
  WHERE published_at > datetime('now', '-7 days')
  GROUP BY primary_theme
),
prior AS (
  SELECT primary_theme, COUNT(*) as cnt
  FROM articles
  WHERE published_at BETWEEN datetime('now', '-14 days') AND datetime('now', '-7 days')
  GROUP BY primary_theme
)
SELECT c.primary_theme,
       c.cnt as this_week,
       p.cnt as last_week,
       ROUND((c.cnt - p.cnt) * 100.0 / p.cnt, 1) as pct_change
FROM current c
LEFT JOIN prior p ON c.primary_theme = p.primary_theme;

Limitations

Classification & Analysis

  • Classification accuracy depends on source content quality and may misassign edge cases between overlapping themes
  • Synthesis reflects patterns in source coverage, not ground truth market dynamics
  • Paywalled content from premium sources (The Information, Gartner full reports, IDC trackers) is not included
  • Entity extraction may miss new companies or products not in the coverage universe

Operational

  • Manual curation workflow limits refresh frequency to 2-3 updates per week at current operational tempo
  • Real-time alerts have 15-60 minute latency depending on source refresh intervals
  • Entity detail pages (Level 2) are available for approximately 24% of tracked entities; the remainder show card-level data only
  • Strategic Briefs (L2+) are available for 3 of 17 L2 entities (1 live, 2 planned); remaining L2 entities show standard detail tabs only
  • The "54 news sources" figure (42 active in current period) is configured in config/platform.json; no structured SOURCES[] array backs this count
  • Mobile push notifications are disabled and out of scope for the beta program
  • Beta invite tokens (45-day TTL) are reusable but not rotated automatically; expired tokens require manual re-issuance

Analytical Model

  • Theme Volume Chart data for Jan 2024 through May 2025 is synthetically reconstructed via Monte Carlo simulation, not observed
  • Temporal decay parameter (λ = 0.03) was set by editorial judgment; external calibration against outcomes has not been performed
  • Sensitivity analysis runs at fixed ±5 threshold offsets; broader parameter sweeps are not automated
  • Monte Carlo convergence diagnostics use 2-chain Gelman-Rubin principle with 2,500 iterations per chain; edge cases with high event-boost variance may require more iterations
  • Live dossier content is point-in-time analytical work; it does not auto-refresh from SIGNALS[] or ALERTS[] data and requires manual updates when material changes occur

Methodology Maintenance

  • Coverage universe counts appear as static text in multiple HTML locations; dynamic derivation covers some but not all instances
  • Methodology changes are tracked in METHODOLOGY_LOG; retroactive auditing begins at v31 (Feb 2026)
  • Dossier content is loaded from data/dossiers/ JSON files; new dossier content classes must be covered in the .dossier-content CSS scope
  • Platform config (config/platform.json) centralizes thresholds and segment maps but is not auto-synced to methodology text; manual verification required on config changes

Technology Reference

As-built architectural blueprint of every operational system: authentication, tracking, subscriptions, data loading, personalization, and platform governance.

System Architecture Overview

Ecosystem Edge is a self-contained single-page application with an AWS serverless backend. All analytical logic runs client-side; the backend handles authentication, user preferences, and interaction tracking.

Component Map

  Browser (Vanilla ES5 SPA)
  ├── index.html ............. Single-page shell, all section markup
  ├── app.js ................. Core application (~3,900 lines)
  │   ├── Navigation & routing
  │   ├── Signal rendering & filtering
  │   ├── Theme synthesis rendering
  │   ├── Company grid & detail pages
  │   ├── Dossier rendering (10 section formatters)
  │   ├── Subscription preferences UI
  │   ├── Watchlist data model
  │   ├── Temporal decay scoring
  │   ├── Data validation & sensitivity analysis
  │   └── Methodology navigation & governance
  ├── charts.js .............. Theme Volume Chart (~900 lines)
  │   ├── Monte Carlo simulation engine (5,000 iterations)
  │   ├── Mulberry32 seeded PRNG
  │   ├── Convergence diagnostics (2-chain)
  │   └── Chart.js rendering (full-range & zoom views)
  ├── loader.js .............. 4-phase async data pipeline
  ├── session-tracker.js ..... Beacon-based analytics
  ├── js/dot-matrix.js ....... Three.js animated dot-grid background
  ├── js/word-transition.js .. Cross-fade word reveal utility
  ├── auth/gate.js ........... JWT auth gate
  └── styles.css ............. FT-inspired stylesheet

  AWS Backend
  ├── Cognito User Pool (Custom Auth)
  │   ├── defineAuthChallenge.js
  │   ├── createAuthChallenge.js
  │   └── verifyAuthChallenge.js
  ├── API Gateway + Lambda
  │   ├── requestLink.js ........ Magic link generation → SendGrid
  │   ├── verifyToken.js ........ Token validation + login tracking
  │   ├── trackPulseAction.js ... Signal interaction tracking
  │   ├── getPulseActions.js .... Retrieve user interactions
  │   ├── saveSubscriptions.js .. Email preference persistence
  │   ├── getSubscriptions.js ... Email preference retrieval
  │   └── trackShare.js ......... Briefing share attribution
  ├── DynamoDB Tables
  │   ├── EcosystemEdgeMagicLinks ... Auth tokens (15-min standard / 45-day invite TTL)
  │   ├── EcosystemEdgePulseStream .. User signal actions
  │   ├── EcosystemEdgeSubscriptions  Email preferences
  │   └── EcosystemEdgeShareTracking  Share opens (90-day TTL)
  └── SendGrid ............... Transactional email delivery (invites, sequences)

Technology Stack

Layer | Technology | Purpose
Frontend | Vanilla ES5 JavaScript | No framework dependency; runs in all modern browsers
Visualization | Chart.js (CDN) | Theme Volume Chart canvas rendering
Auth | AWS Cognito (Custom Auth) | Passwordless magic link flow
Compute | AWS Lambda (Node.js, SDK v3) | 9 serverless functions
Database | AWS DynamoDB | 4 tables: auth tokens (EcosystemEdgeMagicLinks), pulse stream, subscriptions, share tracking
Email | SendGrid | Transactional email delivery (invites, beta sequences)
API | AWS API Gateway | REST endpoints, JWT authorization
Data | JSON files (9 core + entity details) | Static data layer, cache-busted via ?v= query string

File Inventory

File | Size | Role
index.html | ~113 KB | SPA shell: all sections, methodology, briefing templates
app.js | ~213 KB | Core application logic
charts.js | ~42 KB | TVC Monte Carlo + Chart.js rendering
loader.js | ~14 KB | 4-phase data loader
session-tracker.js | ~3 KB | Beacon analytics
auth/gate.js | ~2 KB | JWT auth gate
styles.css | ~110 KB | Complete stylesheet
config/platform.json | | Thresholds, segment maps, source URLs, decay config
config/aliases.json | | Entity aliases + detail entity list
js/dot-matrix.js | | Three.js animated dot-grid background (init, shiftColor, bgShift)
js/word-transition.js | | Cross-fade word reveal utility (wordTransition)
config/reports.json | | Briefing manifest (18 reports + 2 tools)
data/signals.json | | SIGNALS[] + ALERTS[]
data/themes.json | | THEMES[] + THEME_SYNTHESES{}
data/trends.json | | TRENDS (weekly volumes, trending entities)
data/entities.json | | COMPANIES[]
data/chart-config.json | | TVC theme config + weekly history
data/beta-users.json | | Beta user registry (status, sent_at per user)
data/beta-tokens.json | | Local cache of DynamoDB invite tokens keyed by email
data/email-sequence.json | | 3-email beta sequence definition (Welcome, Pre-Flight, Weekly Dispatch)

Data Loading Pipeline

The loader.js module orchestrates a 4-phase async startup sequence. Script load order is: Chart.js CDN → app.jscharts.jsloader.js. The loader runs last and calls initApp() after all data is assembled.

Phase 1: Parallel Core Fetch

Nine JSON files are fetched concurrently via Promise.all(). Each URL is cache-busted with ?v={cacheKey}.

File | Global(s) Assigned
data/signals.json | SIGNALS, ALERTS
data/themes.json | THEMES, THEME_SYNTHESES
data/trends.json | TRENDS
data/entities.json | COMPANIES
data/refresh-log.json | REFRESH_LOG
config/platform.json | DATA_MANIFEST, SEGMENT_NAMES, COVERAGE_STATS, SECTOR_LABELS, STRATEGIC_VALUES, SOURCE_URL_MAP, + 3 more
config/aliases.json | DETAIL_ALIAS
data/chart-config.json | TVC_CONFIG
config/reports.json | REPORT_MANIFEST (sorted: live first, then by sortKey desc)

Phase 2: Entity Detail Files (Sequential)

For each entity ID listed in aliases.detailEntities, a per-entity JSON is fetched from data/entities/{id}.json. Each file populates four global arrays ({ID}_PARTNERSHIPS, {ID}_HEADLINES, {ID}_TIMELINE, {ID}_SECTORS) and registers a profile in COMPANY_PROFILES[id] with computed count getters (partnershipCount, headlineCount, sectorCount). Failures are non-fatal (console warning).

Phase 2B: Unified Dossier Files

Seven dossier files are fetched in parallel from data/dossiers/{id}.json. Current manifest: intel, google, qualcomm, deloitte, amazon, ey, servicenow. Each dossier is stored in DOSSIERS[id] and its brief block is merged into COMPANY_PROFILES[entityLink].dossier. Live dossiers supersede planned stubs.

Phase 3: COMPANY_DATA Closures

For each entity with a loaded profile, a COMPANY_DATA[id] object is created with four getter functions (partnerships(), headlines(), timeline(), sectors()) that reference the global arrays. DETAIL_ENABLED is built as the union of data IDs and alias IDs.

Phase 4: App Initialization

Calls initApp() to boot the SPA, then tvcInit() to render the Theme Volume Chart. If initApp is undefined, a fatal error is logged.

Error Handling

If any Phase 1 fetch fails, the entire page is replaced with a data-load error message suggesting python -m http.server 8000 for local development. Phase 2/2B failures are non-fatal per entity.

Authentication & Session Management

Ecosystem Edge uses a passwordless magic link flow built on AWS Cognito Custom Auth. No passwords are stored or transmitted anywhere in the system.

Magic Link Flow

Step | Component | Action
1 | User | Enters email on splash.html
2 | requestLink.js (Lambda) | Generates 32-byte hex token + 6-char base32 verification code. Stores in DynamoDB EcosystemEdgeMagicLinks table. Standard tokens: 15-minute TTL, single-use. Invite tokens (type: invite): 45-day TTL, reusable within TTL. Sends HTML email via SendGrid with magic link + code.
3 | User | Clicks magic link or enters verification code
4 | Cognito Custom Auth | defineAuthChallenge.js → createAuthChallenge.js → verifyAuthChallenge.js
5 | verifyToken.js | Checks DynamoDB: token exists, not expired. For standard tokens: deletes after use (one-time). For invite tokens: persists and records first_login_at, last_login_at, and login_count on each successful authentication. Returns Cognito JWT tokens.
6 | Browser | Receives JWT tokens (id, access). Stores in localStorage.

Token Model

Token | 32-byte hex (crypto.randomBytes(32))
Verification Code | 6-character base32 (charset: ABCDEFGHJKLMNPQRSTUVWXYZ23456789 — no 0/1/I/O to prevent confusion)
Standard TTL | 15 minutes from generation (single-use, deleted after verification)
Invite TTL | 45 days from generation (type: invite). Reusable within TTL — no code entry required for beta users.
Login Tracking | Invite tokens record first_login_at, last_login_at, and login_count per token on each successful authentication.
DynamoDB Table | EcosystemEdgeMagicLinks

Client Auth Gate (auth/gate.js)

Runs before app initialization. Checks localStorage for ee_idToken and ee_tokenExpiry. If missing or expired, redirects to /splash.html. On success, parses JWT claims (email, tokens) and fires an authReady custom event.

Beta Access Control

The requestLink.js Lambda enforces a beta user registry (data/beta-users.json, currently 23 users). Non-registered requests receive a silent HTTP 200 (no error message) to prevent email enumeration. Beta invite status and delivery timestamps are tracked per user. A local token cache (data/beta-tokens.json) mirrors DynamoDB invite tokens keyed by email.

Session Analytics

The session-tracker.js module captures user navigation and engagement events via a beacon-based telemetry system.

Architecture

Session ID: UUID generated via crypto.randomUUID() (with fallback); persisted in localStorage key ee_session_id
Event Queue: in-memory array, max batch size 50
Flush Interval: 30 seconds (setInterval)
Transport: navigator.sendBeacon('/api/track') with fetch(keepalive: true) fallback
Lifecycle Hooks: visibilitychange (hidden) and beforeunload trigger an immediate flush
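The queue discipline can be sketched as a small class. This is a sketch, not the session-tracker.js source: the transport is injected so the browser can pass a sendBeacon wrapper while tests can pass a stub, and the timer and lifecycle hooks are omitted.

```javascript
// Buffer events in memory; flush when the batch cap is reached or on demand.
class EventQueue {
  constructor(send, maxBatch = 50) {
    this.send = send;       // transport callback, e.g. a sendBeacon wrapper
    this.maxBatch = maxBatch;
    this.events = [];
  }
  track(type, data) {
    this.events.push({ type, ts: new Date().toISOString(), data });
    if (this.events.length >= this.maxBatch) this.flush();
  }
  flush() {
    if (this.events.length === 0) return; // empty queue: no-op
    this.send(this.events.splice(0, this.events.length));
  }
}
```

The production module would additionally call `flush()` from a 30-second `setInterval` and from the `visibilitychange`/`beforeunload` handlers.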

Event Taxonomy

session_start: data { url }; triggered on page load
section_view: data { section }; triggered by a click on a [data-section] nav element
section_exit: data { section, duration }; triggered on navigation away from a section (duration in seconds)
entity_detail: data { entity }; triggered by a click on a [data-company-id] element
briefing_open: data { report }; triggered by a click on a [data-report-id] element
settings_toggle: data { setting, value }; triggered by toggling any element with id="toggle-*"

Payload Format

{
  "sessionId": "a1b2c3d4-...",
  "events": [
    { "type": "section_view", "ts": "2026-03-23T14:30:00Z", "data": { "section": "assessment" } },
    { "type": "section_exit", "ts": "2026-03-23T14:32:15Z", "data": { "section": "assessment", "duration": 135 } }
  ]
}

Pulse Interaction Tracking

The Pulse feed (top 3 signals on the Digest page) supports user interaction tracking that feeds a personalization loop. Interactions are persisted server-side and used to shape subsequent signal presentation.

User Actions

click: user clicked the signal to read details; recorded for engagement analytics
save: user bookmarked the signal; added to retainedIds[]
dismiss: user explicitly dismissed the signal; excluded from future retainedIds[]

Personalization Loop

On page load, getPulseActions retrieves the user's previous actions (up to 200 most recent). Dismissed signal IDs are filtered out, producing a retainedIds[] set that excludes content the user has already rejected. This narrows the Pulse feed to signals the user hasn't explicitly dismissed.
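The filtering step can be sketched as a pure function. A minimal sketch under stated assumptions: the function name and exact shapes are illustrative, but the logic follows the description (cap at 200 recent actions, drop dismissed signal IDs).

```javascript
// From the user's recent actions, drop any signal they have dismissed.
function computeRetainedIds(signals, actions, cap = 200) {
  const dismissed = new Set(
    actions
      .slice(0, cap)                        // consider up to 200 most recent
      .filter(a => a.action === 'dismiss')
      .map(a => a.signalId)
  );
  return signals.map(s => s.id).filter(id => !dismissed.has(id));
}
```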

Lambda Endpoints

trackPulseAction (POST): writes the action to DynamoDB with Bearer token auth; stores signalId, action, title, url, source, actionAt, email
getPulseActions (GET): queries the user's actions by email, deduplicates signal IDs, and returns actions[] plus retainedIds[]

DynamoDB Schema (EcosystemEdgePulseStream)

Partition Key: ACTION#{email}
Sort Key: {action}#{isoTimestamp}#{signalId}
Attributes: signalId, action, title, url, source, actionAt, email
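The key shapes can be sketched directly from the schema. The helper name is hypothetical; the PK/SK templates are the documented ones (the partition key groups one user's actions, the sort key orders them by action type, then time, then signal).

```javascript
// Build the EcosystemEdgePulseStream item keys for one recorded action.
function pulseKeys({ email, action, signalId }, isoTimestamp) {
  return {
    PK: `ACTION#${email}`,
    SK: `${action}#${isoTimestamp}#${signalId}`,
  };
}
```

With this layout, a `begins_with(SK, 'dismiss#')` key condition on one partition returns all of a user's dismissals in timestamp order.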

Email Subscription System

Users configure email notification preferences across five content channels. Preferences are persisted server-side and used to control future email delivery via SendGrid. A 3-email beta onboarding sequence is defined in data/email-sequence.json: Welcome (day 0), Pre-Flight Assignment (day 3), and Weekly Friday Dispatch (12:12pm ET, recurring through May 16, 2026 launch). Mobile push notifications are disabled and out of scope for the beta program.

Channels

Digest: enabled by default, daily. Frequencies: daily, weekly, off. Filters: none.
Assessment: enabled by default, weekly. Frequencies: daily, weekly, off. Filters: theme multi-select (8 valid IDs).
Ecosystems: enabled by default, weekly. Frequencies: daily, weekly, off. Filters: segment multi-select (10 IDs), entity search (regex: ^[a-z0-9-]+$).
Briefings: enabled by default, per-event. Frequencies: per-event, weekly, off. Filters: category multi-select (Event Briefing, Perspectives, Strategic Framework, etc.).
Media Trends: disabled by default, weekly. Frequencies: daily, weekly, off. Filters: none.

Lambda Endpoints

saveSubscriptions (POST): validates input against whitelisted theme/segment/category IDs; entity IDs are regex-checked; preserves the createdAt timestamp; writes to DynamoDB
getSubscriptions (GET): retrieves preferences by email; returns sensible defaults if no record exists

DynamoDB Table: EcosystemEdgeSubscriptions

Keyed by user email. Stores per-channel enabled/frequency settings plus content filter arrays. createdAt is preserved across updates; updatedAt is set on each save.
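The saveSubscriptions input gate can be sketched as two small helpers. The helper names are illustrative; the whitelist filtering and the entity-ID regex (^[a-z0-9-]+$) are from the channel table above.

```javascript
// Entity IDs must be lowercase alphanumerics and hyphens only.
const ENTITY_ID_RE = /^[a-z0-9-]+$/;

// Keep only IDs that appear in the server-side whitelist.
function sanitizeFilters(ids, whitelist) {
  const allowed = new Set(whitelist);
  return ids.filter(id => allowed.has(id));
}

function isValidEntityId(id) {
  return ENTITY_ID_RE.test(id);
}
```

Silently dropping unknown IDs (rather than rejecting the whole request) keeps the endpoint forgiving of stale client-side filter lists.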

Share & Attribution Tracking

When users share briefing links, the system tracks both the share event and subsequent opens to provide attribution analytics.

Ref Token Structure

Share links contain a ref query parameter that encodes share metadata as base64 JSON:

{
  "e": "sharer@example.com",   // Sharer email
  "r": "nvidia-68b-quarter",   // Report ID
  "t": 1711036800000           // Share timestamp (epoch ms)
}
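The encode/decode round trip can be sketched as follows. One assumption is flagged: the spec above says "base64 JSON" without specifying the variant, so this sketch uses base64url, which is safe inside a query parameter without extra escaping.

```javascript
// Pack share metadata into a ref token (base64url JSON; variant is an assumption).
function encodeRef(sharerEmail, reportId, sharedAt = Date.now()) {
  return Buffer.from(
    JSON.stringify({ e: sharerEmail, r: reportId, t: sharedAt })
  ).toString('base64url');
}

// Unpack a ref token back into named fields, as trackShare would.
function decodeRef(ref) {
  const { e, r, t } = JSON.parse(Buffer.from(ref, 'base64url').toString('utf8'));
  return { sharerEmail: e, reportId: r, sharedAt: t };
}
```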

Open Tracking

When a recipient opens a shared link, the trackShare Lambda decodes the ref token and records the open event. It detects whether the viewer is authenticated (has a valid JWT) or anonymous.

DynamoDB Schema (EcosystemEdgeShareTracking)

Partition Key: SHARE#{refPrefix}
Sort Key: OPEN#{timestamp}#{viewer}
Attributes: sharerEmail, reportId, sharedAt, openedAt, viewerEmail, viewerAuthenticated
TTL: 90 days from the open event

Data Validation & Quality Gates

Two validation systems run automatically on page load. Both are informational (console-logged, non-gating) — they warn about data quality issues without blocking the application.

Schema Validation (validateData())

Checks all core data arrays against the DATA_MANIFEST.structures schema definition:

Required fields (SIGNALS, ALERTS, THEMES, COMPANIES, COVERAGE_STATS): all fields listed in requiredFields must be present on every item
Score range (SIGNALS): score must be 2-50
Source tier range (SIGNALS): sourceTier must be 1-4
Cross-reference (ALERTS): every ALERTS[].signalId must exist in SIGNALS
Theme synthesis keys (THEME_SYNTHESES): keys must match THEMES[].id
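The SIGNALS checks can be sketched as a warning collector. A sketch, not the validateData() source: warnings are accumulated and returned rather than thrown, matching the non-gating, console-logged design described above.

```javascript
// Validate SIGNALS items: required fields, score 2-50, sourceTier 1-4.
function validateSignals(signals, requiredFields) {
  const warnings = [];
  for (const s of signals) {
    for (const f of requiredFields) {
      if (!(f in s)) warnings.push(`${s.id ?? '?'}: missing field "${f}"`);
    }
    if (s.score < 2 || s.score > 50) {
      warnings.push(`${s.id}: score ${s.score} out of range 2-50`);
    }
    if (s.sourceTier < 1 || s.sourceTier > 4) {
      warnings.push(`${s.id}: sourceTier ${s.sourceTier} out of range 1-4`);
    }
  }
  return warnings;
}
```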

Sensitivity Analysis (runSensitivityAnalysis())

Tests how signal classification distribution shifts if scoring thresholds are adjusted by ±5 points. Runs at three offsets: [-5, 0, +5]. For each offset, counts how many signals fall into each priority tier (critical, high, medium, low, background). Output is console-logged for diagnostic purposes.
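The re-bucketing can be sketched as below. Note the cutoff values here are illustrative placeholders: the real thresholds live in COVERAGE_STATS.thresholds and are not stated in this chapter. Only the offset mechanics ([-5, 0, +5]) and the five tier names are from the description above.

```javascript
// Illustrative tier cutoffs, highest first; real values come from
// COVERAGE_STATS.thresholds.
const TIERS = [
  ['critical', 40],
  ['high', 30],
  ['medium', 20],
  ['low', 10],
  ['background', -Infinity],
];

// Classify one score against cutoffs shifted by the given offset.
function classify(score, offset) {
  for (const [tier, cutoff] of TIERS) {
    if (score >= cutoff + offset) return tier;
  }
}

// Count signals per tier at each threshold offset.
function sensitivity(signals, offsets = [-5, 0, 5]) {
  return offsets.map(offset => {
    const counts = { critical: 0, high: 0, medium: 0, low: 0, background: 0 };
    for (const s of signals) counts[classify(s.score, offset)]++;
    return { offset, counts };
  });
}
```

A large swing in counts between offsets flags signals clustered near a threshold, which is exactly what the diagnostic is for.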

Methodology Staleness Check

During app refresh, METHODOLOGY_REGISTRY.getStaleSections(30) is called to identify any methodology chapters not verified within 30 days. Stale chapters are logged as console warnings.

Watchlist & Personalization

The USER_WATCHLIST object is the central personalization model. It tracks user preferences across multiple content dimensions and drives both signal filtering and email subscription scoping.

Data Model

entities ({ id: true }): watched entities from the Ecosystems section
segments ({ id: true }): watched industry segments
relationships ([{ entities, label }]): entity pair relationship watches
themes ({ id: true }): watched assessment themes
analystItems ({ key: { text, theme } }): specific analyst watch items
synthesisAlerts (boolean): notify on synthesis updates
briefingReports ({ id: true }): watched briefing reports
briefingCategories ({ cat: true }): watched briefing categories
customItems ([{ text, theme }]): user-defined custom watch items
scoreThreshold (number): minimum signal score filter (default: 0)
signalTypes ({ type: true }): filtered signal types
sourceTiers ({ tier: true }): filtered source tiers
digestCadence (string): digest delivery frequency
smsNumber (string): SMS notification number
smsEnabled (boolean): SMS notifications toggle

Hydration

On app load, hydrateWatchlist(prefs) merges server-side subscription preferences into the client-side USER_WATCHLIST object. Theme, entity, and segment selections are populated from the channels object. Direct watchlist fields (analystItems, customItems, scoreThreshold, signalTypes, etc.) are copied if present.

Watchlist Count

getWatchlistCount() returns the total number of watched items across all dimensions (entities + segments + relationships + themes + analyst items + briefing reports + briefing categories + custom items). This count drives the alerts badge in the navigation.
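Given the data model's mix of map-shaped ({ id: true }) and array-shaped dimensions, the count can be sketched as below; a sketch consistent with the eight dimensions listed above, not the app.js source.

```javascript
// Sum watched items across map-shaped and array-shaped watchlist dimensions.
function getWatchlistCount(w) {
  const mapDims = ['entities', 'segments', 'themes', 'analystItems',
                   'briefingReports', 'briefingCategories'];
  const arrDims = ['relationships', 'customItems'];
  let count = 0;
  for (const d of mapDims) count += Object.keys(w[d] ?? {}).length;
  for (const d of arrDims) count += (w[d] ?? []).length;
  return count;
}
```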

Methodology Self-Governance

Two internal tracking systems ensure the methodology documentation stays synchronized with the codebase.

METHODOLOGY_REGISTRY

A structured object in app.js that tracks every methodology chapter with metadata:

title: human-readable chapter name
codeDependencies: array of code symbols the chapter documents (e.g., ['COVERAGE_STATS.thresholds', 'getDecayedScore()'])
dataArrays: data structures the chapter references (e.g., ['SIGNALS'])
lastVerified: ISO date of last manual verification
autoDerivable: whether chapter content can be auto-derived from data (e.g., coverage universe from COMPANIES[])
derivedFrom: source expression for auto-derivable chapters

Staleness Detection

getStaleSections(maxAgeDays) iterates all chapters and returns those where lastVerified is either missing or older than maxAgeDays (default: 30). During each app refresh, stale chapters are logged as console warnings. This is checked in Stage 12b of the refresh protocol.
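The staleness rule can be sketched as follows; the chapter-map shape is assumed from the registry fields listed above, and the logic matches the stated rule (missing or older than maxAgeDays, default 30).

```javascript
// Return IDs of chapters whose lastVerified is missing or past the cutoff.
function getStaleSections(chapters, maxAgeDays = 30, now = Date.now()) {
  const cutoff = now - maxAgeDays * 24 * 60 * 60 * 1000;
  return Object.entries(chapters)
    .filter(([, ch]) => !ch.lastVerified || Date.parse(ch.lastVerified) < cutoff)
    .map(([id]) => id);
}
```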

METHODOLOGY_LOG

An append-only change log tracking every methodology edit. Each entry records:

date: ISO date of the change
chapter: affected chapter ID (e.g., ch-scoring, condensed)
changeType: expansion | correction | new
description: free-text description of what changed

New entries are added via METHODOLOGY_LOG.addChange(chapter, changeType, description), which auto-stamps the current date. The log provides a traceable audit trail from the first tracked audit (v31, Feb 2026) onward.
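The append-only log and its auto-stamping addChange() can be sketched as below; a minimal sketch of the documented interface, not the app.js implementation.

```javascript
// Append-only methodology change log; addChange() stamps today's ISO date.
const METHODOLOGY_LOG = {
  entries: [],
  addChange(chapter, changeType, description) {
    this.entries.push({
      date: new Date().toISOString().slice(0, 10), // YYYY-MM-DD
      chapter,
      changeType,
      description,
    });
  },
};
```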

Intelligence Reports

Deep-dive analysis, event briefings, and strategic frameworks beyond the daily signal cycle.

Technology Change Curve

Adoption pace prediction and disruption timing for AI infrastructure and custom silicon.

Coming Q2 2026

Report Preview

This briefing will analyze technology adoption trajectories across four tier-aligned dimensions:

  • T1: Infrastructure Maturity — Custom silicon deployment timelines, inference platform readiness
  • T2: Platform Consolidation — Neocloud viability vs. hyperscaler lock-in trajectories
  • T4: Model Evolution — Foundation model capability curves, open vs. closed adoption pace
  • T1↔T4: Silicon-Intelligence Convergence — Custom chips for model architectures (Cerebras + OpenAI case study)

Macro Business Factors

External forces shaping ecosystem strategy: regulation, geopolitics, and capital flows.

Coming Q2 2026

Report Preview

This briefing will analyze macro forces through four tier-aligned dimensions:

  • T1-T2: Capital & Infrastructure — VC/PE flows, IPO readiness signals (OpenAI, Anthropic), capex cycles
  • T1: Geopolitical & Supply Chain — Export controls, TSMC concentration, sovereign fab initiatives
  • T5-T6: Regulatory Landscape — EU AI Act enforcement, sector-specific rules on enterprise apps
  • Cross-Tier: Talent & Labor — AI hiring velocity by tier, skill premium shifts, geographic patterns

Workforce Intelligence

Role evolution: Product Managers, Solutions Architects, and Alliance Managers in AI.

Coming Q3 2026

Report Preview

This briefing will analyze workforce evolution across four tier-aligned dimensions:

  • T5-T6: Product Managers — AI-native tooling changing PM scope at enterprise and integrator firms
  • T6: Solutions Architects — Role evolution as pre-built AI services (T4) replace custom builds
  • T6: Alliance Managers — New partnership models across T1↔T4↔T6
  • T4-T5: Emerging Hybrid Roles — AI Engineers, Prompt Engineers at intelligence and app layers

Wearables & Hyper-Personal AI

The convergence of on-body computing, ambient intelligence, and personalized AI agents.

Coming Q2 2026

Report Preview

This briefing will analyze the wearables-AI convergence across four tier-aligned dimensions:

  • T1: On-Body Silicon — Chip requirements for glasses, earbuds, rings (power, thermals, edge inference)
  • T4: Personal AI Agents — Always-on assistants with persistent memory and multimodal sensing
  • T2: Platform Integration — Ambient compute requiring new cloud↔edge architecture patterns
  • T1↔T4↔T5: Ecosystem Formation — New alliances between chipmakers, model providers, device OEMs, health platforms

The 18A Gambit

Intel's foundry strategy, process node race, and the highest-stakes turnaround in semiconductor history.

Coming Q2 2026

Report Preview

This Perspectives report will analyze Intel's turnaround across four dimensions:

  • T1: Foundry Strategy — 18A process node timeline, IFS customer pipeline, and the TSMC diversification narrative
  • T1: AI Silicon — Gaudi 3 positioning, data center GPU ambitions, and the inference-first strategy
  • T1↔T2: Ecosystem Dependencies — Hyperscaler custom silicon threat, CHIPS Act funding execution, and sovereign fab opportunities
  • T1: Geopolitical Position — Western foundry capacity, export control implications, and national security as competitive advantage

The GPU Landlord

CoreWeave's IPO, GPU-as-a-service economics, and what happens when AI infrastructure becomes a commodity.

Coming Q2 2026

Report Preview

This Perspectives report will analyze CoreWeave's market position across four dimensions:

  • T2: Neocloud Economics — GPU-as-a-service unit economics, contract structures, and the infrastructure margin question
  • T1↔T2: NVIDIA Dependency — Preferred access to Blackwell supply, the symbiotic risk, and what happens if AMD supply diversifies
  • T2↔T4: Customer Concentration — Microsoft, OpenAI, and foundation model lab reliance on neocloud capacity
  • T2: IPO & Market Signal — Public market validation of the neocloud thesis, capital requirements, and competitive moat durability

The $500 Billion Question

OpenAI's enterprise pivot, the foundation model business model, and the economics of artificial general intelligence.

Coming Q2 2026

Report Preview

This Perspectives report will analyze OpenAI's trajectory across four dimensions:

  • T4: Model Economics — GPT-5 capability ceiling, inference cost curves, and the path from revenue to profitability
  • T4↔T5: Enterprise Platform — ChatGPT Enterprise adoption, API ecosystem, and the build-vs-buy dynamic for enterprise AI
  • T1↔T4: Infrastructure Ambitions — Custom silicon with Broadcom, Stargate data center project, and vertical integration strategy
  • T4↔T6: Channel Strategy — GSI partnerships, Microsoft relationship evolution, and the go-to-market for AGI

The Agent Platform

ServiceNow's AI agent strategy, workflow automation moat, and the enterprise application layer's transformation.

Coming Q2 2026

Report Preview

This Perspectives report will analyze ServiceNow's AI transformation across four dimensions:

  • T5: Workflow AI — Now Assist adoption, AI agent deployment in ITSM/HR/CSM, and the platform intelligence layer
  • T4↔T5: Model Integration — Multi-model strategy (NVIDIA, OpenAI, Anthropic), domain-specific fine-tuning, and LLM orchestration
  • T5↔T6: Channel & Ecosystem — GSI implementation partnerships, ISV marketplace, and the platform-as-ecosystem strategy
  • T5: Competitive Moat — Workflow data advantage, process mining for AI training, and defensibility against horizontal AI agents

Alerts

Curate your watchlist across entities, themes, and analyst watch items. Tune signal filters to surface what matters.