AI Infrastructure & Partnerships
Training deals, inference deployments, and platform integrations.
Select a theme to view synthesis, source articles, and trend analysis.
| Date | Source | Article | Signal | Score |
|---|---|---|---|---|
Training deals, inference deployments, and platform integrations.
GPU platforms, chip architectures, and supply chain dynamics.
GSI acquisitions, partnerships, and enterprise delivery shifts.
Anthropic Cowork plugins trigger market shock — legal and data software stocks plunge 10%+ as platform AI enters vertical enterprise workflows.
OpenAI and Anthropic both accelerating IPO preparations with $500B and $183B valuations respectively.
ServiceNow-OpenAI 3-year strategic deal for GPT-5.2 agentic AI. China AI race intensifies with ByteDance, Alibaba, DeepSeek all preparing February model launches.
OpenAI retires GPT-4o on Feb 13, consolidating around GPT-5.2.
Signal score ≥30 from the past 48 hours.
Signal score 20-29 from the past 48 hours.
Cross-theme analysis, entity tracking, and temporal patterns across the intelligence landscape.
Companies and products with significant mention increases in the past 7 days.
Article volume by source tier over the past 7 days.
Distribution of signal types indicates noise-to-signal ratio.
| Theme | This Week | Last Week | Change |
|---|---|---|---|
A manually curated intelligence system that monitors news from tiered sources, classifies content by strategic theme, scores signals for priority, and synthesizes insights for alliance and partnership intelligence. Includes temporal decay ranking, Monte Carlo simulation of historical trends based on real sample data, and data validation.
1. Curation — News content is manually gathered from tiered sources (company newsrooms, major publications, trade press, research). Each source is assigned a tier based on authority and signal quality. Content is reviewed for relevance to the coverage universe.
2. Classification — Each article is classified with: primary theme, secondary themes, entities, confidence (from source tier), impact, irreversibility, signal type, and a summary. A composite signal score is calculated as Confidence × (Impact + Irreversibility).
3. Synthesis — Classified signals are aggregated by theme and synthesized during the refresh cycle: 24-48 hour digest (The Digest), 10-15 day rolling executive briefing (The Assessment), and per-theme source coverage with key developments, pattern analysis, strategic implications, and watch items.
4. Presentation — Signals scoring ≥30 (high-priority threshold) are highlighted in The Digest and added to ALERTS[]. Critical signals (40-50) are featured in TL;DR and executive summaries. Notable coverage (20-29) is shown separately. All signals are ranked by temporal decay-adjusted score for freshness.
| Tier | Sources |
|---|---|
| Tier 1 | Primary sources — company newsrooms, official blogs, filings |
| Tier 2 | Major analysis — Reuters, Bloomberg, WSJ, FT |
| Tier 3 | Trade press — CRN, Channel Futures, SiliconANGLE |
| Tier 4 | Research — Gartner, IDC, Forrester (public content) |
Content is classified into eight strategic themes: AI Infrastructure & Partnerships, Hyperscaler Dynamics, GSI Ecosystem Movements, Silicon Roadmap & Supply, Competitive Alliance Activity, Channel & Route-to-Market, Regulatory & Policy, and Market Signals.
The system monitors 69 entities across 10 industry segments: AI-Native (9), Hyperscalers (5), GSIs (11), Silicon (11), Custom Silicon (4), Foundry & Memory (3), OEM (4), Sovereign (2), Infrastructure (5), Enterprise Software (14).
| Signal Type | Description |
|---|---|
| Announcement | Official first-party announcements |
| Analysis | Third-party interpretation and context |
| Speculation | Predictions, rumors, unconfirmed reports |
| Background | Foundational context and market sizing |
Each signal receives a composite score based on three dimensions:
| Dimension | Definition |
|---|---|
| Confidence (2-5) | Source credibility — derived from source tier (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2) |
| Impact (1-5) | Magnitude of ecosystem effect |
| Irreversibility (1-5) | Duration and permanence of the signal |
Composite Score = Confidence × (Impact + Irreversibility)
Range: 2–50. Scores ≥30 trigger high-priority alerts.
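The scoring arithmetic can be sketched as a small function. This is a minimal illustration of the documented formula; the function names are ours, not identifiers from the codebase:

```javascript
// Map source tier to confidence (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2).
function confidenceFromTier(tier) {
  return { 1: 5, 2: 4, 3: 3, 4: 2 }[tier];
}

// Composite Score = Confidence × (Impact + Irreversibility); range 2–50.
function compositeScore(tier, impact, irreversibility) {
  return confidenceFromTier(tier) * (impact + irreversibility);
}

// A Tier 1 announcement with impact 5 and irreversibility 4 scores
// 5 × (5 + 4) = 45, well above the ≥30 high-priority threshold.
var score = compositeScore(1, 5, 4); // 45
```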
Temporal Decay: Score_adj = Score × e^(−λ × age_days). Default λ = 0.03 (~50% weight after 23 days). Used for ranking; raw scores are preserved.
Vanilla ES5 single-page application with AWS serverless backend.
- Auth: passwordless authentication via 45-day persistent magic link tokens (type: invite) stored in DynamoDB (EcosystemEdgeMagicLinks), with login activity tracking (first_login_at, last_login_at, login_count). Access gated by auth/gate.js; unauthenticated users redirected to splash.html.
- Curation: manual workflow with 12-stage refresh protocol.
- Data: loaded asynchronously from JSON files via loader.js (9 core files + per-entity details). Client-side rendering with Chart.js for visualization.
- Analytics: temporal decay scoring for signal freshness. Schema validation and threshold sensitivity analysis run on page load (console-logged, non-gating). Monte Carlo simulation (5,000 iterations) based on real sample data for historical trend estimation, with convergence diagnostics (2-chain median comparison) for simulation quality.
- Briefs: Strategic Intelligence Briefs loaded from data/dossiers/ JSON files, supporting live (full analytical content) and planned (stub outline) render modes.
- Email: outbound delivery via SendGrid for transactional invite and beta sequence emails.
- Config: platform config centralized in config/platform.json (coverage stats, signal thresholds, segment maps, data manifest). Content manifest: config/reports.json (18 reports + 2 tools).
Python ingestion scripts · SQLite storage · Claude API for classification (Haiku) and synthesis (Sonnet) · GitHub Actions scheduling · Jinja2 templating · Automated alerting. Current infrastructure (AWS Cognito, Lambda, DynamoDB, SendGrid, API Gateway) would persist; the pipeline automates content curation only.
Complete documentation of the strategic theme analysis system architecture, implementation, and operation.
This example demonstrates the classification and scoring methodology applied to a real signal. Classification and scoring are performed manually during the 12-stage refresh protocol (see Refresh Protocol chapter).
News from defined sources is reviewed for coverage universe relevance:
| Source | Headline |
|---|---|
| NVIDIA Newsroom | NVIDIA and Accenture Expand Partnership to Accelerate Enterprise AI Adoption |
| Reuters | Accenture to train 30,000 consultants on NVIDIA AI Enterprise platform |
| CRN | Channel partners eye new opportunities as NVIDIA-Accenture deal deepens |
| Bloomberg | AI consulting race heats up as Accenture doubles down on NVIDIA |
Each article is classified using the scoring model:
During the refresh cycle, all "GSI Ecosystem Movements" articles are synthesized into key developments, pattern analysis, strategic implications, and watch items (see Synthesis chapter).
Because the NVIDIA-Accenture article scored 45 (≥30 threshold), it is flagged as a high-priority alert in The Digest and added to the ALERTS[] array during the refresh cycle.
The platform operates as a self-contained analytical application with manual curation workflow, client-side computation, and integrated data validation:
| Property | Implementation |
|---|---|
| SPA + Serverless | Vanilla ES5 single-page application with AWS backend (Cognito, Lambda, DynamoDB, API Gateway). Passwordless auth via magic link tokens. Access gated by auth/gate.js. |
| Data-driven | All content loaded from JSON files via async loader.js pipeline. Views are projections of data, not hardcoded HTML. |
| Analytically active | Temporal decay, Monte Carlo simulation, convergence diagnostics, and sensitivity analysis run client-side on page load. |
| Schema-validated | validateData() checks structure, ranges, and cross-references on page load. Results logged to console; rendering is not gated on validation pass. |
| Methodology-aware | METHODOLOGY_REGISTRY tracks documentation freshness. Dynamic stats derived from live data arrays. |
The target architecture replaces manual curation with an automated ingestion and classification pipeline. The analytical engine and rendering layers would carry forward largely unchanged.
Not all sources carry equal weight. Tiering helps prioritize signal over noise:
| Tier | Description | Examples | Confidence |
|---|---|---|---|
| 1 | Primary / First-party | Company newsrooms, SEC filings, official blogs, PRNewswire/BusinessWire (official releases) | 5 |
| 2 | Major Analysis | Reuters, Bloomberg, WSJ, FT | 4 |
| 3 | Trade Press | CRN, Channel Futures, SiliconANGLE, The Information | 3 |
| 4 | Research (Public) | Gartner, IDC, Forrester snippets | 2 |
Note: Wire services (PRNewswire, BusinessWire, GlobeNewswire) are classified as Tier 1 when distributing official company announcements. Confidence score range is 2-5; a score of 1 is not used in practice.
Silicon Partners: NVIDIA Newsroom, Intel Newsroom, AMD News, Qualcomm News, Arm Newsroom
Hyperscalers: AWS News Blog, Google Cloud Blog, Microsoft Azure Blog
GSI / Consulting: Accenture Newsroom, Deloitte Press, IBM Newsroom, Infosys News
Competitive Set: Dell Technologies Newsroom, HPE Newsroom
General Tech/Business: Reuters Technology, Bloomberg Technology, WSJ Tech, FT Technology
Sources are tracked as a narrative list within this chapter. No structured SOURCES[] array exists in the current implementation. The footer references "54 news sources," a count that requires manual verification whenever sources are added or removed during refreshes.
The automated pipeline will store sources as structured records:
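A structured source record could take a shape like the following. The field names and values here are illustrative assumptions, not the pipeline's actual schema:

```javascript
// Hypothetical structured SOURCES[] record for the automated pipeline.
var exampleSource = {
  id: 'nvidia-newsroom',
  name: 'NVIDIA Newsroom',
  tier: 1,                                   // Tier 1 = primary / first-party
  confidence: 5,                             // derived from tier (Tier 1 = 5)
  url: 'https://nvidianews.nvidia.com',
  segment: 'Silicon',
  active: true
};
```

Storing tier and derived confidence on the record would let the classifier assign Confidence mechanically instead of by manual lookup.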
The monitoring system tracks 69 entities across 10 industry segments. Signals mentioning these entities receive priority classification and scoring.
| Company | Notes |
|---|---|
| Google (incl. Gemini) | Cloud + TPU-based AI infra, foundation models, enterprise AI |
| Meta | Social + consumer AI, custom MTIA inference silicon |
| Microsoft (Azure) | Cloud + Copilot, Maia/Cobalt custom AI silicon |
| Amazon / AWS | Cloud + Bedrock, Trainium/Inferentia AI accelerators |
| Oracle (OCI) | Enterprise-focused AI cloud, database-led workloads |
| Company | Notes |
|---|---|
| Accenture | Large-scale AI transformation, co-innovation with CSPs |
| Deloitte | AI consulting, risk, and enterprise transformation |
| PwC | Assurance, tax, and AI-enabled transformation |
| IBM | Hybrid cloud + AI services (Consulting, watsonx) |
| KPMG | Assurance and advisory with AI focus |
| EY | Tax, assurance, and AI advisory |
| Capgemini | Global delivery + AI/industry solutions |
| TCS | Large-scale IT + AI services |
| Infosys | AI and automation-led managed services |
| Wipro | Cloud + AI and engineering services |
| NTT DATA | Global SI with AI/infra focus |
| Company | Notes |
|---|---|
| Intel | CPUs, GPUs, Gaudi AI accelerators |
| NVIDIA | GPUs, networking, full-stack AI platform |
| AMD | GPUs, AI accelerators (Instinct) |
| Qualcomm | Edge/phone AI, NPUs |
| ARM | CPU IP for AI-enabled SoCs |
| Broadcom | Networking and custom ASICs for AI DCs |
| Company / Chip | Notes |
|---|---|
| Google – TPU | Custom training/inference ASICs for Google Cloud |
| Amazon – Trainium / Inferentia | Custom training/inference ASICs for AWS |
| Microsoft – Maia / Cobalt | Custom AI accelerator + CPU for Azure |
| Meta – MTIA | Inference accelerators for Meta's workloads |
| Company | Notes |
|---|---|
| Cerebras | Wafer-scale engines for large-model training |
| Groq | LPU-based, low-latency inference hardware |
| SambaNova | Dataflow AI accelerators for enterprise/gov |
| Graphcore | IPU accelerators (specialized AI compute) |
| Tenstorrent | RISC-V–based AI compute platform |
| Company | Notes |
|---|---|
| TSMC | Advanced-node foundry for leading AI chips |
| Samsung Electronics | HBM/memory and foundry for AI silicon |
| Micron | HBM and DRAM for AI workloads |
| Entity | Notes |
|---|---|
| PIF | Capital allocator into global AI/infra |
| HUMAIN | KSA-linked AI initiative |
| Company | Notes |
|---|---|
| Equinix | Global colocation + interconnect for AI clouds |
| Digital Realty | Data center and interconnect for AI workloads |
| CoreWeave | GPU cloud provider, NVIDIA $2B investment, 5GW AI factory build-out |
| Lambda Labs | GPU cloud for ML training and inference |
| Crusoe Energy | Sustainable AI infrastructure, clean-energy data centers |
| Company | Notes |
|---|---|
| OpenAI | Foundation models, ChatGPT / platform |
| Anthropic | Claude models, enterprise safety-focused AI |
| xAI | Model lab with X ecosystem integration |
| DeepSeek | Cost-optimized LLMs, alternative stack |
| Mistral | Open-weight and enterprise-focused models |
| Cohere | Enterprise language models and APIs |
| Hugging Face | Model hub, tooling, and ecosystem |
| Company | Notes |
|---|---|
| ServiceNow | Workflow + AI system of action |
| Databricks | Lakehouse + ML/AI platform |
| Snowflake | Data cloud + AI/ML workloads |
| Palantir | Data/decision platform; vertical AI |
| SAP | ERP + AI business processes |
| Oracle (Apps) | SaaS + database + AI embedded in applications |
| Rubrik | Data security/backup as AI-resilient control plane |
| Veeam | Backup/DR for AI-era infra |
| Salesforce | Data Cloud + Einstein "system of intelligence" |
| Celonis | Process mining / execution management decision layer |
| UiPath | Automation and agentic orchestration |
| Blue Yonder | AI-native supply chain and planning |
| Company | Notes |
|---|---|
| Dell Technologies | AI Factory, PowerEdge, enterprise infrastructure |
| HPE | GreenLake, AI infrastructure, HPC |
| Supermicro | GPU-optimized servers, liquid cooling |
| Cisco | Networking, data center infrastructure |
Themes should be:
The taxonomy is intentionally stable to support trend analysis, but adjustments are expected as the landscape evolves. Themes with persistently low article counts may be candidates for merging, while themes that consistently exceed coverage capacity may warrant splitting. Any taxonomy changes are tracked in METHODOLOGY_LOG to preserve analytical continuity.
| Field | Description |
|---|---|
| Primary Theme | Single best-fit theme |
| Secondary Themes | Up to 2 additional relevant themes |
| Entities | Companies, products, people mentioned |
| Confidence | 2-5, derived from source tier (Tier 1=5, Tier 2=4, Tier 3=3, Tier 4=2) |
| Impact | 1-5, magnitude of ecosystem effect |
| Irreversibility | 1-5, duration/permanence of signal |
| Signal Type | Announcement, Analysis, Speculation, or Background |
| Summary | 2-sentence synthesis |
The scoring model produces a single composite score that enables ranking and prioritization across all signals.
| Dimension | Scale | Criteria |
|---|---|---|
| Confidence | 2-5 | Derived from source tier: Tier 1 = 5, Tier 2 = 4, Tier 3 = 3, Tier 4 = 2 |
| Impact | 1-5 | 1 = Minor/routine announcement; 2 = Incremental development; 3 = Notable strategic shift; 4 = Significant ecosystem effect; 5 = Ecosystem-reshaping event |
| Irreversibility | 1-5 | 1 = Easily reversed (pilot, MOU); 2 = Short-term commitment (<1 year); 3 = Medium-term (1-3 years); 4 = Long-term contract (3+ years); 5 = Structural/permanent (M&A, major capex) |
| Score Range | Priority | Color | Action |
|---|---|---|---|
| 40-50 | Critical | Claret (#9F1D35) | Immediate alert, leadership briefing |
| 30-39 | High | Mandarin (#FF8833) | Same-day review, stakeholder notification |
| 20-29 | Medium | Wheat (#F2DFCE) | Include in 48-hour digest |
| 10-19 | Low | Grey (#E6D9CE) | Weekly summary only |
| 2-9 | Background | — | Archive for reference |
Signal scores appear throughout the platform with consistent color coding.
For ranking and display ordering, signals are adjusted by a time-decay factor that reduces the weight of older signals while preserving the raw score for reference:
Temporal decay is configurable via COVERAGE_STATS.temporalDecay:
| Parameter | Default | Description |
|---|---|---|
| enabled | true | Toggle decay on/off |
| lambda | 0.03 | Decay rate constant. Higher = faster decay |
| referenceDate | Current date | ISO date string. All ages calculated relative to this |
Design rationale: Raw scores remain the permanent record of a signal's assessed importance. Temporal decay affects only ranking within views (Digest, Notable Coverage). This prevents stale signals from crowding out fresh developments while preserving the analytical judgment embedded in the raw score.
Note: The lambda parameter (0.03) was set by editorial judgment based on the typical news cycle cadence for alliance intelligence. External calibration against outcomes has not been performed.
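The decay adjustment can be sketched in a few lines using the documented defaults. The function name is ours; only the formula and the λ = 0.03 default come from the spec:

```javascript
// Score_adj = Score × e^(−λ × age_days); the raw score is preserved.
function decayAdjustedScore(rawScore, ageDays, lambda) {
  lambda = lambda === undefined ? 0.03 : lambda; // documented default
  return rawScore * Math.exp(-lambda * ageDays);
}

// With λ = 0.03 the half-life is ln(2) / 0.03 ≈ 23.1 days, matching the
// documented "~50% weight after 23 days".
var fresh = decayAdjustedScore(45, 0);  // 45 — no decay on day zero
var aged  = decayAdjustedScore(45, 23); // ≈ 22.6 — roughly half weight
```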
Executive briefings in The Assessment and The Digest follow a structured specification to ensure consistent, high-quality summaries. Summary length adapts based on news cycle intensity.
A heavy news cycle triggers expanded summaries (5-6 sentences) when ANY threshold is met:
Runtime: These thresholds are evaluated and stored in COVERAGE_STATS.heavyCycle (isHeavy, trigger, thresholds) during each refresh cycle. The heavyCycle object drives summary length decisions in both The Digest and The Assessment.
| Metric | Standard | Heavy Threshold |
|---|---|---|
| Articles processed | <100 | ≥100 |
| High-priority signals (≥30) | <15 | ≥15 |
| Critical signals (≥40) | <4 | ≥4 |
| New partnerships announced | <5 | ≥5 |
| Themes with activity spike (>25%) | <3 | ≥3 |
| Event Trigger | Examples |
|---|---|
| Tentpole events | CES, MWC, GTC, re:Invent, Google I/O, Build, Ignite, Dreamforce |
| Policy announcements | Executive orders, regulatory rulings, trade actions |
| Mega-deals | Any partnership >$10B or consortium announcement |
| Market-moving | IPOs, major M&A, leadership changes at coverage entities |
| Segment | Trigger Examples |
|---|---|
| AI-Native | Model release, funding >$1B, market share shift >5% |
| Hyperscalers | Region launch, pricing change, major customer win/loss |
| GSIs | Business group launch, headcount >10K, new practice |
| Silicon | Architecture announcement, fab deal, supply constraint |
| OEM | Product launch, design win, channel program change |
| Sovereign | National AI strategy, infrastructure >$1B |
| Infrastructure | Data center >500MW, new geography entry |
| Enterprise SaaS | Platform integration, AI feature, major migration |
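The any-threshold trigger can be sketched as a single function. This is an illustration of the documented rule (heavy if ANY metric meets its threshold); the exact construction of COVERAGE_STATS.heavyCycle may differ:

```javascript
// Evaluate heavy-cycle status: isHeavy is true if ANY threshold is met.
// Returns an object shaped like the documented heavyCycle record.
function evaluateHeavyCycle(metrics) {
  var thresholds = {
    articles: 100,        // articles processed ≥100
    highPriority: 15,     // signals scoring ≥30
    critical: 4,          // signals scoring ≥40
    newPartnerships: 5,   // new partnerships announced
    themeSpikes: 3        // themes with >25% activity spike
  };
  var trigger = Object.keys(thresholds).filter(function (k) {
    return metrics[k] >= thresholds[k];
  });
  return { isHeavy: trigger.length > 0, trigger: trigger, thresholds: thresholds };
}
```

The returned `trigger` list records which metrics fired, which is useful when explaining why a refresh switched to expanded 5-6 sentence summaries.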
| Cycle Type | Sentences | Words | Entities | Themes |
|---|---|---|---|---|
| Standard | 3-4 | 50-80 | 6-10 | 3+ |
| Heavy | 5-6 | 80-120 | 8-12 | 4+ |
| Sentence | Function | Pattern |
|---|---|---|
| 1 | Lead signal — highest-impact development | [Entity] [action verb] [outcome]. |
| 2 | Secondary signals — 2-3 developments | [Entity-Entity] [deal], [Entity-Entity] [deal] signal [pattern]. |
| 3 | Tertiary signal or emerging trend | [Entity] [action] [implication]. |
| 4 | Watch item (optional) | [Trend] intensifies around [developments]. |
| Sentence | Function | Pattern |
|---|---|---|
| 5 | Segment impact — ecosystem implications | [Segment] [consequence] as [specific development]. |
| 6 | Competitive positioning | [Entity] response/positioning signals [strategic direction]. |
| Style Rule | Guidance |
|---|---|
| No articles | Drop "the," "a," "an" where possible |
| Entity-first | Lead with company/organization names |
| Action verbs | triggers, signals, accelerates, expands, formalizes, unveils, closes |
| Noun phrases | "Siemens-NVIDIA expansion" not "Siemens and NVIDIA expanded" |
| Implication language | "signals," "marks," "indicates" to connect facts to meaning |
| Compressed attribution | "per Menlo Ventures" not "according to a report from" |
| Constraint | Specification |
|---|---|
| Length | 3-4 sentences, 50-80 words total |
| Entities | 6-10 mentioned |
| Themes | Minimum 3 of 8 strategic themes |
| Score threshold | Only include signals with score ≥35 |
| Tense | Present for ongoing; past for completed deals |
| Content Priority | Includes |
|---|---|
| 1. Partnership announcements | New alliances, expansions, JVs |
| 2. Infrastructure deals | Capacity, investment, deployment |
| 3. Market position shifts | Share changes, competitive moves |
| 4. Policy/regulatory | Government programs, compliance |
| 5. Product launches | Only if partnership-relevant |
Action Verbs (by intensity):
| High Impact | Medium Impact | Low Impact |
|---|---|---|
| triggers, accelerates, reshapes, dominates, surges | expands, formalizes, closes, unveils, launches | continues, maintains, supports, updates, extends |
Pattern Verbs: signals (future direction), marks (inflection point), reflects (underlying trend), intensifies (escalating), converges (trends meeting)
Deal Type Nouns: expansion, alliance, partnership, JV, MOU, win, closing, deal, agreement, licensing, launch, rollout, availability, deployment
|  | Quality Checklist |
|---|---|
| ☐ | 3-4 sentences, 50-80 words |
| ☐ | Minimum 6 entities named |
| ☐ | Minimum 3 themes represented |
| ☐ | Lead signal has highest score (≥45) |
| ☐ | No articles unless necessary for clarity |
| ☐ | Entity-first sentence construction |
| ☐ | At least one implication verb |
| ☐ | Present tense for ongoing, past for completed |
Analysis: Lead: NVIDIA Physical AI (Score 50) · Secondary: 3 deals across GSI Ecosystem, AI Infrastructure · Tertiary: Disney-OpenAI market signal · Entities: 8 total · Themes: 3 covered
| Level | Scope | Frequency | Status |
|---|---|---|---|
| The Digest | Last 24-48 hours, score ≥30 only | Per refresh | Active |
| The Assessment | Rolling 10-15 days, cross-theme | Per refresh | Active |
| Theme Summary | Per-theme, rolling 7 days | Per refresh | Active (via THEME_SYNTHESES) |
| Notable Coverage | Score 20-29, last 48 hours | Per refresh | Active |
| Strategic Assessment | Quarter-over-quarter shifts | Monthly | Planned (not yet implemented) |
In the current manual implementation, synthesis is performed during Stages 4-7 of the refresh protocol. The curator reviews classified signals, identifies cross-theme patterns, and writes narrative synthesis covering key developments, pattern analysis, strategic implications, and watch items. Synthesis output is stored in THEME_SYNTHESES{} with structured fields (meta, keyDevelopments, patternAnalysis, strategicImplications, watchItems, articles).
The automated pipeline will use this prompt structure for Claude API-driven synthesis:
The key differentiator from static synthesis is tracking how signals evolve:
| Metric | What It Reveals |
|---|---|
| Theme volume over time | Rising or falling attention |
| Source concentration | Echo chamber vs. genuine consensus |
| Entity frequency | Who's driving the narrative |
| Signal type mix | Noise (speculation) vs. signal (announcements) |
| Week-over-week delta | Acceleration or deceleration of coverage |
These classification rules are applied during the refresh protocol to determine which signals are promoted to the ALERTS[] array. In the current implementation, alerts are created manually based on these criteria.
| Trigger | Condition | Action |
|---|---|---|
| Critical signal | Signal score ≥40 | Immediate alert + leadership briefing |
| High-priority signal | Signal score 30-39 | Same-day review + stakeholder notification |
| Theme volume spike | Volume >2x comparison period | Daily highlight |
| Competitive move | Theme = Competitive Activity + score ≥20 | Immediate alert |
| New entity emergence | Entity not in 90-day baseline + 3+ mentions | Weekly flag |
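In code, the score-driven triggers reduce to simple threshold checks. A sketch of the promotion rule (names are illustrative; the theme string follows the taxonomy's "Competitive Alliance Activity" label):

```javascript
// Classify a signal against the alert triggers.
// Returns the alert action, or null if no trigger fires.
function alertAction(signal) {
  if (signal.score >= 40) return 'immediate-alert';   // Critical (40-50)
  if (signal.score >= 30) return 'same-day-review';   // High priority (30-39)
  if (signal.theme === 'Competitive Alliance Activity' && signal.score >= 20) {
    return 'immediate-alert';                         // Competitive move
  }
  return null;                                        // no alert promotion
}
```

Volume-spike and new-entity triggers need aggregate state (comparison-period counts, a 90-day entity baseline) and are not captured by this per-signal check.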
The content refresh process follows a 12-stage staged execution model governed by four design principles. This is the primary operational procedure for every content update.
| Principle | Name | Rule |
|---|---|---|
| P1 | Atomic Stages | Each stage contains one type of work. If sub-steps require fundamentally different effort (e.g. updating narrative text vs. constructing sourced article records), they belong in separate stages. Asymmetric effort within a stage is a skip risk. |
| P2 | Hardest First | When a stage has multiple sub-steps, order by descending effort. The heaviest sub-step runs while context is freshest. Completion momentum on light tasks should carry you out of a stage, not trick you into skipping the hard part. |
| P3 | Content Gates | Every validation step must include at least one assertion that can only pass if the actual work was done, not just if the data structure parses. Structural checks catch broken code. Content checks catch skipped work. |
| P4 | Compaction Resilience | After completing each stage, write a progress marker so that if context is compacted mid-refresh, resume can target the exact next stage. Never rely on context memory alone for tracking which stages are done. |
| Stage | Scope | Content Gate |
|---|---|---|
| 1 | Web Research (Digest) — 6-8 targeted queries across coverage universe | Signal candidates collected |
| 2 | Update Digest Data Arrays — SIGNALS[], ALERTS[], COVERAGE_STATS periods | Newest SIGNALS[].date ≥ refresh window start |
| 3 | Update Digest Hardcoded HTML — lead section, stats sidebar | Grep confirms new date strings in HTML |
| 4 | Web Research (Assessment Themes) — additional depth searches | Patterns and implications documented |
| 5 | Update Assessment Narratives — THEMES[] metadata + THEME_SYNTHESES{} narrative fields. Does NOT touch articles[]. | All meta.lastUpdate values match refresh date |
| 6 | Update Source Coverage Tables — THEME_SYNTHESES{}.articles[] for all 8 themes. Separate stage because article record construction is different work than narrative writing (P1). | For each theme, newest articles[0].date within refresh window |
| 7 | Update Assessment Hardcoded HTML — lead section, stats sidebar | Grep confirms new dates and stats |
| 8 | Update Ecosystems — COMPANIES[] entries with new data | lastRefresh date matches current refresh date |
| 9 | L2 Entity Propagation — update PARTNERSHIPS[], HEADLINES[], TIMELINE[], PROFILES for qualifying entities. Hardest-first: partnerships before profiles. | Newest headline date within refresh window for each modified entity |
| 10 | Update Media Trends — TRENDS{} weekly data, entities, distributions | weekLabels[] includes current week |
| 11 | Footer + REFRESH_LOG — dates, change log entries | Footer date matches refresh date |
| 12 | Final Validation + Deliver — structural + content validation sweep, methodology audit, present_files | All content-level assertions pass |
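A content gate (P3) is an assertion that can only pass if the work was actually done. A sketch of the Stage 2 gate, which requires the newest SIGNALS[] date to fall inside the refresh window (function name and error text are ours):

```javascript
// Stage 2 content gate: the newest SIGNALS[] date must be on or after the
// refresh window start, proving new records were actually added (P3).
// Dates are ISO strings, so lexicographic comparison is chronological.
function stage2Gate(signals, windowStartIso) {
  var newest = signals.map(function (s) { return s.date; }).sort().pop();
  if (!(newest >= windowStartIso)) {
    throw new Error('Stage 2 gate failed: newest signal ' + newest +
                    ' predates window start ' + windowStartIso);
  }
  return true;
}
```

A purely structural check (does SIGNALS[] parse?) would pass even if the stage were skipped; anchoring the assertion to the refresh date catches skipped work.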
All refresh activity is tracked in the REFRESH_LOG object:
| Field | Purpose |
|---|---|
| lastRefresh | ISO timestamp of last completed refresh |
| lastRefreshDisplay | Human-readable date for UI display |
| changes[] | Array of change descriptions from current refresh |
| previousRefreshes[] | Archive of prior refresh summaries |
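Put together, a REFRESH_LOG object with these fields might look like the following. All values are illustrative, not taken from a real refresh:

```javascript
// Illustrative REFRESH_LOG object (values are examples only).
var REFRESH_LOG = {
  lastRefresh: '2026-02-13T09:00:00Z',       // ISO timestamp of completion
  lastRefreshDisplay: 'February 13, 2026',   // human-readable UI date
  changes: [                                 // change log for current refresh
    'Stage 2: new SIGNALS[] records added',
    'Stage 9: L2 propagation for qualifying entities'
  ],
  previousRefreshes: []                      // archive of prior summaries
};
```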
Entities in the Ecosystems tracker move through three activation levels. Each level builds on the previous and requires specific data structures.
| Level | Name | Minimum Data | Result |
|---|---|---|---|
| L1 | Card Activation | COMPANIES[] entry: disabled=false, lastRefresh set, type/partnerships/capital/trend updated | Entity card shows as active with refresh badge. No detail page. |
| L2 | View Details | COMPANY_PROFILES[id], [ID]_PARTNERSHIPS[] (min 5), [ID]_HEADLINES[] (min 3), [ID]_TIMELINE[] (min 3 quarters), [ID]_SECTORS[], COMPANY_DATA[id] registered | "View details →" arrow appears. Full detail page with Headlines, Partnerships, Timeline, Sectors tabs. |
| L2+ | Strategic Brief | L2 requirements + COMPANY_PROFILES[id].dossier{} with status, brief, profile fields, sections[]. For live briefs: corresponding <template id="tpl-dossier-[id]"> with full analytical content. | "Strategic Brief" tab appears in detail page tab strip. Live briefs render full inline report (threat assessment, financials, competitive forces, executive tracking, timeline). Planned briefs render stub outline with analysis roadmap. |
| L3 | Media Trends | TRENDS.trendingEntities[] entry, optional volumeByTheme[] and weekOverWeek[] updates | Entity appears in Media Trends trending list and charts. |
| Array | Record Fields | Minimum Records |
|---|---|---|
| [ID]_PARTNERSHIPS[] | id, partner, sector, type, commitment, date, status, headline, summary, terms[] | 5 |
| [ID]_HEADLINES[] | title, value, sector, date, source, url, partnerId, impact | 3 |
| [ID]_TIMELINE[] | quarter, events: [{ date, partners[], detail }] | 3 quarters |
| [ID]_SECTORS[] | id, name, partners, value, focus | Derived from partnerships |
| COMPANY_PROFILES[id] | title, date, metrics[] | 1 entry |
| Property | Fields | Notes |
|---|---|---|
| COMPANY_PROFILES[id].dossier{} | status ('live' \| 'planned'), brief, hq, founded, ticker, ceo, marketCap, sections[] | Controls Strategic Brief tab visibility and render mode |
| dossier.sections[] | title, desc | Analysis roadmap items. Rendered as numbered list for planned briefs. |
| <template id="tpl-dossier-[id]"> | Full HTML content (sections, tables, charts, timelines) | Required only for status='live'. Rendered inside .dossier-content wrapper with scoped CSS. |
Entity activation typically occurs at Stage 8 (L1) or Stage 9 (L2 propagation) of the refresh protocol. New L2 activations require dedicated web research for partnership data. L2 propagation (updating existing detail pages with new signals) is triggered when an active entity has ≥1 new SIGNAL or ALERT in the current refresh window.
The Theme Volume Chart (TVC) visualizes article volume by strategic theme across two views: a full 26-month timeline and a close-up weekly view. It blends observed data with simulated estimates to present a continuous trend narrative.
| Period | Data Type | Method |
|---|---|---|
| Jan 2024 – May 2025 | Estimated | Reverse-trajectory Monte Carlo simulation from Jun 2025 anchor points |
| Jun 2025 – Oct 2025 | Estimated (weekly) | Backward interpolation from Nov anchor points with event-driven impulses |
| Nov 2025 – Feb 2026 | Observed | Actual weekly intake volumes from manual curation |
The full-view monthly trajectory is generated by reverse-trajectory Monte Carlo:
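One way to sketch a single reverse trajectory: start at a known anchor value and step backwards in time, dividing out an assumed monthly growth rate and jittering with multiplicative noise. The names, growth model, and uniform noise here are assumptions for illustration, not the shipped implementation:

```javascript
// Simulate one reverse trajectory from an anchor value backwards in time.
// monthlyGrowth > 1 means volume was lower in earlier months.
function reverseTrajectory(anchorValue, months, monthlyGrowth, noise) {
  var values = [anchorValue];
  for (var i = 1; i <= months; i++) {
    // Step backwards: divide out growth, jitter with uniform noise.
    var jitter = 1 + (Math.random() * 2 - 1) * noise;
    values.unshift((values[0] / monthlyGrowth) * jitter);
  }
  return values; // oldest month first, anchor value last
}
```

Running many such trajectories (the spec cites 5,000 iterations) and taking per-month percentiles across the ensemble yields the P25-P75 confidence bands shown on the estimated region of the chart.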
Known market events (earnings, product launches, conferences, regulatory milestones) are modeled as multiplicative boosts applied to the corresponding time period. Each theme defines its own event boost map:
| Event Type | Typical Boost Range | Duration |
|---|---|---|
| Major earnings season | 1.2x – 1.5x | 1-2 weeks |
| Tentpole conference (CES, GTC, re:Invent) | 1.3x – 1.8x | 1-2 weeks |
| Major fundraise or M&A | 1.2x – 1.6x | 1 week |
| Regulatory milestone | 1.1x – 1.4x | 1-2 weeks |
Simulation quality is verified using a Gelman-Rubin-inspired diagnostic: two independent simulation chains are run and their per-point medians compared.
Diagnostics run automatically on page load and log results to the browser console.
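The two-chain median comparison could be implemented along these lines. This is a sketch inspired by the Gelman-Rubin idea, not the exact shipped code; the tolerance value is an assumption:

```javascript
// Median of a numeric array (non-mutating).
function median(arr) {
  var s = arr.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Compare the medians of two independent simulation chains.
// Converged if their relative disagreement stays under a tolerance.
function chainsConverged(chainA, chainB, tolerance) {
  var mA = median(chainA), mB = median(chainB);
  var rel = Math.abs(mA - mB) / ((mA + mB) / 2);
  return rel <= tolerance;
}
```

If two chains seeded independently land on materially different medians, the ensemble has not stabilized and the estimated region should not be trusted for trend reading.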
| View | Resolution | Period | Features |
|---|---|---|---|
| Full View | Monthly | Jan 2024 – Feb 2026 | All 8 themes, P25-P75 confidence bands on estimated region, event annotation markers |
| Close-Up | Weekly | Jun 2025 – Feb 2026 | ~35 weeks, observed region shaded in teal, dashed lines for estimated / solid for observed |
| Component | Implementation | Purpose |
|---|---|---|
| Data Layer | JSON files (async loader.js) | SIGNALS, THEMES, ALERTS, COMPANIES, COVERAGE_STATS, TRENDS, THEME_SYNTHESES, DOSSIERS |
| Auth | AWS Cognito + Lambda | Passwordless magic links, 45-day invite tokens, login tracking |
| Access Control | auth/gate.js | JWT-based client gate; redirects unauthenticated users to splash.html |
| Rendering | Vanilla JavaScript | Dynamic DOM generation from data arrays |
| Visualization | Chart.js 4.x | Theme Volume Chart with Monte Carlo simulation |
| Styling | CSS custom properties | FT-inspired design system, responsive layout |
| Curation | 12-stage refresh protocol | Source monitoring, classification, scoring, content gates |
| Validation | validateData() + runSensitivityAnalysis() | Schema checks, range checks, cross-reference integrity, threshold sensitivity |
| Ranking | Temporal decay function | Score_adj = Score × e^(−λ × age_days) |
| Frontend Fx | Three.js (dot-matrix.js), word-transition.js | Animated dot-grid background, cross-fade word reveals |
| Email | SendGrid | Transactional invite emails, 3-email beta sequence |
| Config | config/platform.json | Coverage stats, signal thresholds, segment maps, data manifest |
| Output | SPA (index.html + JS + JSON) | Static-hostable, serverless backend for auth and tracking |
| Hosting | Static host + AWS | Frontend served statically; backend on API Gateway + Lambda |
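The temporal decay ranking in the table above can be sketched as a small function. The λ value here is an illustrative assumption; the actual constant lives in the decay config of config/platform.json, and the function name getDecayedScore() follows the symbol referenced elsewhere in this methodology.

```javascript
// Temporal decay sketch: Score_adj = Score × e^(−λ × age_days).
// DECAY_LAMBDA is an assumed placeholder -- the real value is loaded
// from config/platform.json at startup.
const DECAY_LAMBDA = 0.1;

function getDecayedScore(signal, now = Date.now()) {
  // Age of the signal in days, clamped at zero for clock skew.
  const ageDays = Math.max((now - new Date(signal.timestamp).getTime()) / 86400000, 0);
  return signal.score * Math.exp(-DECAY_LAMBDA * ageDays);
}
```

With λ = 0.1, a score-40 signal decays to roughly 36 after one day and roughly 20 after a week, which is what pushes older items down the Digest ranking.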
Each view in the platform is driven by specific data structures. The DATA_MANIFEST object in the codebase documents this mapping programmatically.
| View | Primary Data Sources | Render Functions |
|---|---|---|
| The Digest | SIGNALS, ALERTS, COVERAGE_STATS.digestPeriod | renderSignals, renderAlerts, renderTLDR, renderNotableCoverage |
| The Assessment | THEMES, THEME_SYNTHESES | renderThemeCards, renderThemeSynthesis |
| Ecosystems | COMPANIES, COMPANY_DATA, COMPANY_PROFILES | renderSegmentFilters, renderCompanyCards |
| Strategic Brief | COMPANY_PROFILES[id].dossier{}, <template tpl-dossier-[id]> | renderDossier |
| Media Trends | TRENDS | renderVolumeChart, renderTrendingEntities, renderSourceDistribution |
| Processing Stats | COVERAGE_STATS | renderProcessingSummary, updateNavDate |
| Array | Required Fields | Feeds Views |
|---|---|---|
| SIGNALS[] | id, score, source, sourceTier, timestamp, displayTime, title, summary, theme, signalType, entities, url | Digest, Processing |
| ALERTS[] | id, type, title, source, theme, timestamp, displayTime, url, signalId | Digest |
| THEMES[] | id, name, priority, articleCount, topScore, trendDirection, trendPercent, updatedAgo, summary, enabled | Assessment, Digest |
| COMPANIES[] | id, name, segment, type, partnerships, capital, sectors, trend, trendText, disabled | Ecosystems |
| COVERAGE_STATS{} | coveragePeriod, digestPeriod, thresholds, heavyCycle | Processing, Digest, Trends |
| COMPANY_PROFILES[id].dossier{} | status, brief, hq, founded, ticker, ceo, marketCap, sections[] | Ecosystems (Strategic Brief tab) |
| <template tpl-dossier-[id]> | Full HTML content (live briefs only) | Ecosystems (Strategic Brief tab) |
The validateData() function runs automatically on page load and performs three categories of checks. Results are logged to the browser console. Validation is currently non-gating: the page renders regardless of warnings. In a future automated pipeline, these checks should become pipeline gates that block publication on failure.
| Check Type | What It Validates | Example |
|---|---|---|
| Schema | All required fields present in every record | SIGNALS[i] must have id, score, source, etc. |
| Range | Values within valid bounds | Score 2-50, sourceTier 1-4 |
| Cross-reference | Foreign key integrity between arrays | ALERTS[i].signalId must exist in SIGNALS |
A run with zero warnings indicates a clean dataset.
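The three check categories can be sketched as follows. This is a simplified stand-in for the actual validateData() implementation: the real function reads its schema from DATA_MANIFEST.structures, and the field lists here are abbreviated.

```javascript
// Simplified validation sketch. Schema, range, and cross-reference checks
// accumulate warnings; an empty result means a clean dataset.
function validateData(SIGNALS, ALERTS) {
  const warnings = [];

  SIGNALS.forEach((s, i) => {
    // Schema check: required fields present on every record (abbreviated list).
    ['id', 'score', 'source', 'sourceTier'].forEach((f) => {
      if (!(f in s)) warnings.push(`SIGNALS[${i}] missing ${f}`);
    });
    // Range checks: score 2-50, sourceTier 1-4.
    if (s.score < 2 || s.score > 50) warnings.push(`SIGNALS[${i}] score out of range`);
    if (s.sourceTier < 1 || s.sourceTier > 4) warnings.push(`SIGNALS[${i}] tier out of range`);
  });

  // Cross-reference check: every alert must point at a known signal.
  const ids = new Set(SIGNALS.map((s) => s.id));
  ALERTS.forEach((a, i) => {
    if (!ids.has(a.signalId)) warnings.push(`ALERTS[${i}] dangling signalId`);
  });

  return warnings;
}
```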
The runSensitivityAnalysis() function tests how signal classification shifts under different threshold assumptions. It evaluates three threshold offsets (-5, 0, +5) and reports how many signals fall into each priority tier at each offset.
Purpose: If small threshold changes dramatically shift signal counts between tiers, the scoring calibration may need adjustment. Stable distributions across offsets indicate robust threshold selection.
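The offset sweep can be sketched as below. The tier cutoff values are illustrative assumptions; the real thresholds come from COVERAGE_STATS.thresholds.

```javascript
// Threshold sensitivity sketch: re-bucket every signal at each offset and
// report the per-tier counts. Cutoffs here are placeholders, not the
// production values.
const TIERS = { critical: 40, high: 30, medium: 20, low: 10 };

function runSensitivityAnalysis(signals, offsets = [-5, 0, +5]) {
  return offsets.map((offset) => {
    const counts = { critical: 0, high: 0, medium: 0, low: 0, background: 0 };
    signals.forEach(({ score }) => {
      if (score >= TIERS.critical + offset) counts.critical++;
      else if (score >= TIERS.high + offset) counts.high++;
      else if (score >= TIERS.medium + offset) counts.medium++;
      else if (score >= TIERS.low + offset) counts.low++;
      else counts.background++;
    });
    return { offset, counts };
  });
}
```

A stable distribution across the three offsets suggests the thresholds sit away from dense clusters of scores; large swings would flag a calibration problem.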
| Component | Tool | Purpose |
|---|---|---|
| Scheduler | GitHub Actions | Trigger daily/hourly runs |
| Fetcher | Python + feedparser | Pull RSS content |
| Parser | BeautifulSoup | Extract clean text |
| Storage | SQLite | Store articles + metadata |
| Classification | Claude API (Haiku) | High-volume classification |
| Synthesis | Claude API (Sonnet) | Weekly summaries |
| Output | Jinja2 templates | Generate static HTML |
| Hosting | Static host | Deploy site |
| Component | Monthly Cost |
|---|---|
| Claude API (classification) | $20-50 |
| Claude API (synthesis) | $30-60 |
| Static hosting | $0-20 |
| Total | $50-130 |
These queries are designed for the target SQLite database schema (see Implementation chapter, Target Database Schema). In the current implementation, equivalent analysis is performed through JavaScript functions operating on the in-memory data arrays.
As-built architectural blueprint of every operational system: authentication, tracking, subscriptions, data loading, personalization, and platform governance.
Ecosystem Edge is a self-contained single-page application with an AWS serverless backend. All analytical logic runs client-side; the backend handles authentication, user preferences, and interaction tracking.
```
Browser (Vanilla ES5 SPA)
├── index.html ............. Single-page shell, all section markup
├── app.js ................. Core application (~3,900 lines)
│   ├── Navigation & routing
│   ├── Signal rendering & filtering
│   ├── Theme synthesis rendering
│   ├── Company grid & detail pages
│   ├── Dossier rendering (10 section formatters)
│   ├── Subscription preferences UI
│   ├── Watchlist data model
│   ├── Temporal decay scoring
│   ├── Data validation & sensitivity analysis
│   └── Methodology navigation & governance
├── charts.js .............. Theme Volume Chart (~900 lines)
│   ├── Monte Carlo simulation engine (5,000 iterations)
│   ├── Mulberry32 seeded PRNG
│   ├── Convergence diagnostics (2-chain)
│   └── Chart.js rendering (full-range & zoom views)
├── loader.js .............. 4-phase async data pipeline
├── session-tracker.js ..... Beacon-based analytics
├── js/dot-matrix.js ....... Three.js animated dot-grid background
├── js/word-transition.js .. Cross-fade word reveal utility
├── auth/gate.js ........... JWT auth gate
└── styles.css ............. FT-inspired stylesheet

AWS Backend
├── Cognito User Pool (Custom Auth)
│   ├── defineAuthChallenge.js
│   ├── createAuthChallenge.js
│   └── verifyAuthChallenge.js
├── API Gateway + Lambda
│   ├── requestLink.js ........ Magic link generation → SendGrid
│   ├── verifyToken.js ........ Token validation + login tracking
│   ├── trackPulseAction.js ... Signal interaction tracking
│   ├── getPulseActions.js .... Retrieve user interactions
│   ├── saveSubscriptions.js .. Email preference persistence
│   ├── getSubscriptions.js ... Email preference retrieval
│   └── trackShare.js ......... Briefing share attribution
├── DynamoDB Tables
│   ├── EcosystemEdgeMagicLinks ... Auth tokens (15-min standard / 45-day invite TTL)
│   ├── EcosystemEdgePulseStream .. User signal actions
│   ├── EcosystemEdgeSubscriptions  Email preferences
│   └── EcosystemEdgeShareTracking  Share opens (90-day TTL)
└── SendGrid ............... Transactional email delivery (invites, sequences)
```
| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Vanilla ES5 JavaScript | No framework dependency; runs in all modern browsers |
| Visualization | Chart.js (CDN) | Theme Volume Chart canvas rendering |
| Auth | AWS Cognito (Custom Auth) | Passwordless magic link flow |
| Compute | AWS Lambda (Node.js, SDK v3) | 9 serverless functions |
| Database | AWS DynamoDB | 4 tables: auth tokens (EcosystemEdgeMagicLinks), pulse stream, subscriptions, share tracking |
| Email | SendGrid | Transactional email delivery (invites, beta sequences) |
| API | AWS API Gateway | REST endpoints, JWT authorization |
| Data | JSON files (9 core + entity details) | Static data layer, cache-busted via ?v= query string |
| File | Size | Role |
|---|---|---|
| index.html | ~113 KB | SPA shell: all sections, methodology, briefing templates |
| app.js | ~213 KB | Core application logic |
| charts.js | ~42 KB | TVC Monte Carlo + Chart.js rendering |
| loader.js | ~14 KB | 4-phase data loader |
| session-tracker.js | ~3 KB | Beacon analytics |
| auth/gate.js | ~2 KB | JWT auth gate |
| styles.css | ~110 KB | Complete stylesheet |
| config/platform.json | — | Thresholds, segment maps, source URLs, decay config |
| config/aliases.json | — | Entity aliases + detail entity list |
| js/dot-matrix.js | — | Three.js animated dot-grid background (init, shiftColor, bgShift) |
| js/word-transition.js | — | Cross-fade word reveal utility (wordTransition) |
| config/reports.json | — | Briefing manifest (18 reports + 2 tools) |
| data/signals.json | — | SIGNALS[] + ALERTS[] |
| data/themes.json | — | THEMES[] + THEME_SYNTHESES{} |
| data/trends.json | — | TRENDS (weekly volumes, trending entities) |
| data/entities.json | — | COMPANIES[] |
| data/chart-config.json | — | TVC theme config + weekly history |
| data/beta-users.json | — | Beta user registry (status, sent_at per user) |
| data/beta-tokens.json | — | Local cache of DynamoDB invite tokens keyed by email |
| data/email-sequence.json | — | 3-email beta sequence definition (Welcome, Pre-Flight, Weekly Dispatch) |
The loader.js module orchestrates a 4-phase async startup sequence. Script load order is: Chart.js CDN → app.js → charts.js → loader.js. The loader runs last and calls initApp() after all data is assembled.
Nine JSON files are fetched concurrently via Promise.all(). Each URL is cache-busted with ?v={cacheKey}.
| File | Global(s) Assigned |
|---|---|
| data/signals.json | SIGNALS, ALERTS |
| data/themes.json | THEMES, THEME_SYNTHESES |
| data/trends.json | TRENDS |
| data/entities.json | COMPANIES |
| data/refresh-log.json | REFRESH_LOG |
| config/platform.json | DATA_MANIFEST, SEGMENT_NAMES, COVERAGE_STATS, SECTOR_LABELS, STRATEGIC_VALUES, SOURCE_URL_MAP, + 3 more |
| config/aliases.json | DETAIL_ALIAS |
| data/chart-config.json | TVC_CONFIG |
| config/reports.json | REPORT_MANIFEST (sorted: live first, then by sortKey desc) |
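The Phase 1 fetch pattern can be sketched as below. The file list is abbreviated and the loadCore() name is hypothetical; the sketch uses modern JS for brevity, whereas the shipped loader targets ES5 in the browser.

```javascript
// Phase 1 sketch: fetch the core JSON files concurrently, each URL
// cache-busted with ?v={cacheKey}. Any failure here is fatal, matching
// the loader's all-or-nothing Phase 1 behavior.
async function loadCore(cacheKey) {
  const files = ['data/signals.json', 'data/themes.json', 'data/trends.json'];
  const payloads = await Promise.all(
    files.map((f) =>
      fetch(`${f}?v=${encodeURIComponent(cacheKey)}`).then((r) => {
        if (!r.ok) throw new Error(`Failed to load ${f}`);
        return r.json();
      })
    )
  );
  // The real loader assigns each payload to its globals (SIGNALS, THEMES, ...).
  return Object.fromEntries(files.map((f, i) => [f, payloads[i]]));
}
```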
For each entity ID listed in aliases.detailEntities, a per-entity JSON is fetched from data/entities/{id}.json. Each file populates four global arrays ({ID}_PARTNERSHIPS, {ID}_HEADLINES, {ID}_TIMELINE, {ID}_SECTORS) and registers a profile in COMPANY_PROFILES[id] with computed count getters (partnershipCount, headlineCount, sectorCount). Failures are non-fatal (console warning).
Seven dossier files are fetched in parallel from data/dossiers/{id}.json. Current manifest: intel, google, qualcomm, deloitte, amazon, ey, servicenow. Each dossier is stored in DOSSIERS[id] and its brief block is merged into COMPANY_PROFILES[entityLink].dossier. Live dossiers supersede planned stubs.
For each entity with a loaded profile, a COMPANY_DATA[id] object is created with four getter functions (partnerships(), headlines(), timeline(), sectors()) that reference the global arrays. DETAIL_ENABLED is built as the union of data IDs and alias IDs.
Calls initApp() to boot the SPA, then tvcInit() to render the Theme Volume Chart. If initApp is undefined, a fatal error is logged.
If any Phase 1 fetch fails, the entire page is replaced with a data-load error message suggesting python -m http.server 8000 for local development. Phase 2/2B failures are non-fatal per entity.
Ecosystem Edge uses a passwordless magic link flow built on AWS Cognito Custom Auth. No passwords are stored or transmitted anywhere in the system.
| Step | Component | Action |
|---|---|---|
| 1 | User | Enters email on splash.html |
| 2 | requestLink.js (Lambda) | Generates 32-byte hex token + 6-char base32 verification code. Stores in DynamoDB EcosystemEdgeMagicLinks table. Standard tokens: 15-minute TTL, single-use. Invite tokens (type: invite): 45-day TTL, reusable within TTL. Sends HTML email via SendGrid with magic link + code. |
| 3 | User | Clicks magic link or enters verification code |
| 4 | Cognito Custom Auth | defineAuthChallenge.js → createAuthChallenge.js → verifyAuthChallenge.js |
| 5 | verifyToken.js | Checks DynamoDB: token exists, not expired. For standard tokens: deletes after use (one-time). For invite tokens: persists and records first_login_at, last_login_at, and login_count on each successful authentication. Returns Cognito JWT tokens. |
| 6 | Browser | Receives JWT tokens (id, access). Stores in localStorage. |
| Token | 32-byte hex (crypto.randomBytes(32)) |
| Verification Code | 6-character base32 (charset: ABCDEFGHJKLMNPQRSTUVWXYZ23456789 — no 0/1/I/O to prevent confusion) |
| Standard TTL | 15 minutes from generation (single-use, deleted after verification) |
| Invite TTL | 45 days from generation (type: invite). Reusable within TTL — no code entry required for beta users. |
| Login Tracking | Invite tokens record first_login_at, last_login_at, and login_count per token on each successful authentication. |
| DynamoDB Table | EcosystemEdgeMagicLinks |
The auth gate (auth/gate.js) runs before app initialization. It checks localStorage for ee_idToken and ee_tokenExpiry; if either is missing or expired, it redirects to /splash.html. On success, it parses the JWT claims (email, tokens) and fires an authReady custom event.
The requestLink.js Lambda enforces a beta user registry (data/beta-users.json, currently 23 users). Non-registered requests receive a silent HTTP 200 (no error message) to prevent email enumeration. Beta invite status and delivery timestamps are tracked per user. A local token cache (data/beta-tokens.json) mirrors DynamoDB invite tokens keyed by email.
The session-tracker.js module captures user navigation and engagement events via a beacon-based telemetry system.
| Session ID | UUID generated via crypto.randomUUID() (with fallback). Persisted in localStorage key ee_session_id. |
| Event Queue | In-memory array, max batch size 50 |
| Flush Interval | 30 seconds (setInterval) |
| Transport | navigator.sendBeacon('/api/track') with fetch(keepalive: true) fallback |
| Lifecycle Hooks | visibilitychange (hidden) + beforeunload trigger immediate flush |
| Event Type | Data | Trigger |
|---|---|---|
| session_start | { url } | Page load |
| section_view | { section } | Click on [data-section] nav element |
| section_exit | { section, duration } | Navigation away from section (duration in seconds) |
| entity_detail | { entity } | Click on [data-company-id] element |
| briefing_open | { report } | Click on [data-report-id] element |
| settings_toggle | { setting, value } | Toggle any element with id="toggle-*" |
```json
{
  "sessionId": "a1b2c3d4-...",
  "events": [
    { "type": "section_view", "ts": "2026-03-23T14:30:00Z", "data": { "section": "assessment" } },
    { "type": "section_exit", "ts": "2026-03-23T14:32:15Z", "data": { "section": "assessment", "duration": 135 } }
  ]
}
```
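The queue-and-flush mechanics can be sketched as below. The transport is injected here so the logic is testable; in the browser, session-tracker.js uses navigator.sendBeacon with a fetch(keepalive: true) fallback, and wires flush to a 30-second interval plus the visibilitychange and beforeunload hooks. The createTracker() name is an illustrative assumption.

```javascript
// Event queue sketch: buffer events in memory and flush them as one
// batched payload of the shape shown above.
function createTracker(sessionId, transport, maxBatch = 50) {
  let queue = [];
  function flush() {
    if (queue.length === 0) return; // nothing to send
    const batch = { sessionId, events: queue.splice(0, queue.length) };
    transport('/api/track', JSON.stringify(batch));
  }
  return {
    track(type, data) {
      queue.push({ type, ts: new Date().toISOString(), data });
      if (queue.length >= maxBatch) flush(); // cap batch size at 50
    },
    flush,
  };
}
```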
The Pulse feed (top 3 signals on the Digest page) supports user interaction tracking that feeds a personalization loop. Interactions are persisted server-side and used to shape subsequent signal presentation.
| Action | Meaning | Effect |
|---|---|---|
click | User clicked the signal to read details | Recorded for engagement analytics |
save | User bookmarked the signal | Added to retainedIds[] |
dismiss | User explicitly dismissed the signal | Excluded from future retainedIds[] |
On page load, getPulseActions retrieves the user's previous actions (up to 200 most recent). Dismissed signal IDs are filtered out, producing a retainedIds[] set that excludes content the user has already rejected. This narrows the Pulse feed to signals the user hasn't explicitly dismissed.
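The dismissal filter can be sketched as a pure function. The buildRetainedIds() helper name is hypothetical; it mirrors the retainedIds[] behavior described above.

```javascript
// Personalization sketch: drop signals the user has explicitly dismissed,
// keeping everything else in the Pulse feed.
function buildRetainedIds(pulseSignals, actions) {
  const dismissed = new Set(
    actions.filter((a) => a.action === 'dismiss').map((a) => a.signalId)
  );
  return pulseSignals
    .map((s) => s.id)
    .filter((id) => !dismissed.has(id));
}
```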
| Lambda | Method | Function |
|---|---|---|
trackPulseAction | POST | Writes action to DynamoDB with Bearer token auth. Stores signalId, action, title, url, source, actionAt, email. |
getPulseActions | GET | Queries user's actions by email. Deduplicates signal IDs. Returns actions[] + retainedIds[]. |
DynamoDB Schema (EcosystemEdgePulseStream)
| Partition Key | ACTION#{email} |
| Sort Key | {action}#{isoTimestamp}#{signalId} |
| Attributes | signalId, action, title, url, source, actionAt, email |
Users configure email notification preferences across five content channels. Preferences are persisted server-side and used to control future email delivery via SendGrid. A 3-email beta onboarding sequence is defined in data/email-sequence.json: Welcome (day 0), Pre-Flight Assignment (day 3), and Weekly Friday Dispatch (12:12pm ET, recurring through May 16, 2026 launch). Mobile push notifications are disabled and out of scope for the beta program.
| Channel | Default | Frequencies | Filters |
|---|---|---|---|
| Digest | Enabled, daily | daily, weekly, off | None |
| Assessment | Enabled, weekly | daily, weekly, off | Theme multi-select (8 valid IDs) |
| Ecosystems | Enabled, weekly | daily, weekly, off | Segment multi-select (10 IDs), entity search (regex: ^[a-z0-9-]+$) |
| Briefings | Enabled, per-event | per-event, weekly, off | Category multi-select (Event Briefing, Perspectives, Strategic Framework, etc.) |
| Media Trends | Disabled, weekly | daily, weekly, off | None |
| Lambda | Method | Function |
|---|---|---|
saveSubscriptions | POST | Validates input against whitelisted theme/segment/category IDs. Entity IDs regex-checked. Preserves createdAt timestamp. Writes to DynamoDB. |
getSubscriptions | GET | Retrieves preferences by email. Returns sensible defaults if no record exists. |
DynamoDB Table (EcosystemEdgeSubscriptions)
Keyed by user email. Stores per-channel enabled/frequency settings plus content filter arrays. createdAt is preserved across updates; updatedAt is set on each save.
When users share briefing links, the system tracks both the share event and subsequent opens to provide attribution analytics.
Share links contain a ref query parameter that encodes share metadata as base64 JSON:
```
{
  "e": "sharer@example.com",   // Sharer email
  "r": "nvidia-68b-quarter",   // Report ID
  "t": 1711036800000           // Share timestamp (epoch ms)
}
```
When a recipient opens a shared link, the trackShare Lambda decodes the ref token and records the open event. It detects whether the viewer is authenticated (has a valid JWT) or anonymous.
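The encode/decode round trip for the ref token can be sketched as below. The function names are illustrative; only the base64-JSON payload shape comes from the system described above.

```javascript
// Share-link ref sketch: the { e, r, t } metadata is serialized as JSON
// and base64-encoded into the ref query parameter.
function encodeShareRef(sharerEmail, reportId, sharedAt = Date.now()) {
  const payload = { e: sharerEmail, r: reportId, t: sharedAt };
  return Buffer.from(JSON.stringify(payload)).toString('base64');
}

function decodeShareRef(ref) {
  // trackShare decodes this on open to attribute the event to the sharer.
  return JSON.parse(Buffer.from(ref, 'base64').toString('utf8'));
}
```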
DynamoDB Schema (EcosystemEdgeShareTracking)
| Partition Key | SHARE#{refPrefix} |
| Sort Key | OPEN#{timestamp}#{viewer} |
| Attributes | sharerEmail, reportId, sharedAt, openedAt, viewerEmail, viewerAuthenticated |
| TTL | 90 days from open event |
Two validation systems run automatically on page load. Both are informational (console-logged, non-gating) — they warn about data quality issues without blocking the application.
Schema validation (validateData()) checks all core data arrays against the DATA_MANIFEST.structures schema definition:
| Check | Targets | Rule |
|---|---|---|
| Required fields | SIGNALS, ALERTS, THEMES, COMPANIES, COVERAGE_STATS | All fields listed in requiredFields must be present on every item |
| Score range | SIGNALS | score must be 2–50 |
| Source tier range | SIGNALS | sourceTier must be 1–4 |
| Cross-reference | ALERTS | Every ALERTS[].signalId must exist in SIGNALS |
| Theme synthesis keys | THEME_SYNTHESES | Keys must match THEMES[].id |
Threshold sensitivity (runSensitivityAnalysis()) tests how the signal classification distribution shifts if scoring thresholds are adjusted by ±5 points. It runs at three offsets: [-5, 0, +5]. For each offset, it counts how many signals fall into each priority tier (critical, high, medium, low, background). Output is console-logged for diagnostic purposes.
During app refresh, METHODOLOGY_REGISTRY.getStaleSections(30) is called to identify any methodology chapters not verified within 30 days. Stale chapters are logged as console warnings.
The USER_WATCHLIST object is the central personalization model. It tracks user preferences across multiple content dimensions and drives both signal filtering and email subscription scoping.
| Field | Type | Purpose |
|---|---|---|
| entities | { id: true } | Watched entities from Ecosystems section |
| segments | { id: true } | Watched industry segments |
| relationships | [{ entities, label }] | Entity pair relationship watches |
| themes | { id: true } | Watched assessment themes |
| analystItems | { key: { text, theme } } | Specific analyst watch items |
| synthesisAlerts | boolean | Notify on synthesis updates |
| briefingReports | { id: true } | Watched briefing reports |
| briefingCategories | { cat: true } | Watched briefing categories |
| customItems | [{ text, theme }] | User-defined custom watch items |
| scoreThreshold | number | Minimum signal score filter (default: 0) |
| signalTypes | { type: true } | Filtered signal types |
| sourceTiers | { tier: true } | Filtered source tiers |
| digestCadence | string | Digest delivery frequency |
| smsNumber | string | SMS notification number |
| smsEnabled | boolean | SMS notifications toggle |
On app load, hydrateWatchlist(prefs) merges server-side subscription preferences into the client-side USER_WATCHLIST object. Theme, entity, and segment selections are populated from the channels object. Direct watchlist fields (analystItems, customItems, scoreThreshold, signalTypes, etc.) are copied if present.
getWatchlistCount() returns the total number of watched items across all dimensions (entities + segments + relationships + themes + analyst items + briefing reports + briefing categories + custom items). This count drives the alerts badge in the navigation.
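The count aggregation can be sketched as below, using the field types from the table above. This is a simplified stand-in for the actual getWatchlistCount() implementation.

```javascript
// Watchlist count sketch: sum watched items across all dimensions.
// Keyed dimensions are { id: true } maps; listed dimensions are arrays.
function getWatchlistCount(w) {
  const keyed = ['entities', 'segments', 'themes', 'analystItems',
                 'briefingReports', 'briefingCategories'];
  const listed = ['relationships', 'customItems'];
  return keyed.reduce((n, k) => n + Object.keys(w[k] || {}).length, 0)
       + listed.reduce((n, k) => n + (w[k] || []).length, 0);
}
```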
Two internal tracking systems ensure the methodology documentation stays synchronized with the codebase.
A structured object in app.js that tracks every methodology chapter with metadata:
| Field | Purpose |
|---|---|
| title | Human-readable chapter name |
| codeDependencies | Array of code symbols the chapter documents (e.g., ['COVERAGE_STATS.thresholds', 'getDecayedScore()']) |
| dataArrays | Data structures the chapter references (e.g., ['SIGNALS']) |
| lastVerified | ISO date of last manual verification |
| autoDerivable | Whether chapter content can be auto-derived from data (e.g., coverage universe from COMPANIES[]) |
| derivedFrom | Source expression for auto-derivable chapters |
getStaleSections(maxAgeDays) iterates all chapters and returns those where lastVerified is either missing or older than maxAgeDays (default: 30). During each app refresh, stale chapters are logged as console warnings. This is checked in Stage 12b of the refresh protocol.
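The staleness check can be sketched as below. The registry is passed in as a plain object here for illustration; the real getStaleSections() lives on METHODOLOGY_REGISTRY in app.js.

```javascript
// Staleness sketch: a chapter is stale when lastVerified is missing or
// older than maxAgeDays (default 30) relative to `now`.
function getStaleSections(chapters, maxAgeDays = 30, now = Date.now()) {
  const cutoff = now - maxAgeDays * 86400000;
  return Object.entries(chapters)
    .filter(([, ch]) => !ch.lastVerified || new Date(ch.lastVerified).getTime() < cutoff)
    .map(([id]) => id);
}
```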
An append-only change log tracking every methodology edit. Each entry records:
| date | ISO date of the change |
| chapter | Affected chapter ID (e.g., ch-scoring, condensed) |
| changeType | One of: expansion, correction, new |
| description | Free-text description of what changed |
New entries are added via METHODOLOGY_LOG.addChange(chapter, changeType, description), which auto-stamps the current date. The log provides a traceable audit trail from the first tracked audit (v31, Feb 2026) onward.
Deep-dive analysis, event briefings, and strategic frameworks beyond the daily signal cycle.
Adoption pace prediction and disruption timing for AI infrastructure and custom silicon.
This briefing will analyze technology adoption trajectories across four tier-aligned dimensions:
External forces shaping ecosystem strategy: regulation, geopolitics, and capital flows.
This briefing will analyze macro forces through four tier-aligned dimensions:
Role evolution: Product Managers, Solutions Architects, and Alliance Managers in AI.
This briefing will analyze workforce evolution across four tier-aligned dimensions:
The convergence of on-body computing, ambient intelligence, and personalized AI agents.
This briefing will analyze the wearables-AI convergence across four tier-aligned dimensions:
Intel's foundry strategy, process node race, and the highest-stakes turnaround in semiconductor history.
This Perspectives report will analyze Intel's turnaround across four dimensions:
CoreWeave's IPO, GPU-as-a-service economics, and what happens when AI infrastructure becomes a commodity.
This Perspectives report will analyze CoreWeave's market position across four dimensions:
OpenAI's enterprise pivot, the foundation model business model, and the economics of artificial general intelligence.
This Perspectives report will analyze OpenAI's trajectory across four dimensions:
ServiceNow's AI agent strategy, workflow automation moat, and the enterprise application layer's transformation.
This Perspectives report will analyze ServiceNow's AI transformation across four dimensions:
Curate your watchlist across entities, themes, and analyst watch items. Tune signal filters to surface what matters.
Intel is undergoing its most aggressive restructuring since inception. Following the forced retirement of CEO Pat Gelsinger in December 2024, the Board installed Lip-Bu Tan in March 2025 to execute a manufacturing revival strategy.
Surface Reality: Cash flow concerns alleviated by massive Q2-Q3 2025 OpEx cuts. 2025 revenue stabilized at approximately $52B (down from a $79B peak in 2021). Cash reserves of $26.2B provide a 12-18 month runway at the current burn rate.
Second-Order Stress: CFO David Zinsner operates under structural contradiction: maintain investment-grade credit rating (BBB- floor) while funding $25B annual capex and $8B R&D. Every quarter forces binary choice between dividend restoration (investor pressure) and 18A yield investment (survival imperative). This creates 90-day decision cycles misaligned with 36-month fab timelines. For OEM partners, Intel's capex volatility translates to supply allocation uncertainty. OEM procurement teams face 6-9 month lead time variability on high-end SKUs, forcing dual-source strategies that dilute Intel volume commitments and weaken pricing power.
Surface Reality: Fabrication capacity in Germany and Poland paused. Investment consolidated to Arizona and Ohio to secure US government payouts. CHIPS Act funding unlocked $8.5B in grants, $11B in loans.
Strategic Asymmetry: Geographic concentration in Arizona/Ohio creates single-point-of-failure risk masked as "focus." TSMC operates 12 fabs across Taiwan, Arizona, Japan, and (planned) Germany. Intel's consolidation reduces geopolitical optionality: if US-China tensions escalate and Chinese market access deteriorates (30% of 2024 revenue), Intel cannot credibly threaten capacity reallocation to Beijing-friendly jurisdictions. For China GM Srini Iyengar, every export control tightening (entity list additions, advanced node restrictions) fragments his addressable TAM while AMD/NVIDIA retain TSMC hedges. Meanwhile, PC OEMs face bifurcated supply chains: Intel-only for US government/defense, AMD for China domestic, creating SKU complexity that increases BOM costs 8-12% versus single-vendor scenarios.
Surface Reality: 18A process yield remains the primary existential metric. Failure to hit volume manufacturing by H2 2025 or early 2026 would likely result in breakup of the company. Current reported yields: sub-70% (Q3 2025), target: 90%+ for profitability.
Organizational Load Distribution: CTO Stuart Pann's May 2025 departure signals yields are failing at integration layer, not transistor physics. Remaining engineering leaders (Central Engineering under Iyengar) inherit accumulated technical debt from IDM 2.0's "four nodes in five years" roadmap compression. Each node (Intel 7 → Intel 4 → 20A → 18A) introduced untested process innovations (RibbonFET, PowerVia backside power delivery) without adequate learning cycles. Engineering teams now troubleshoot three simultaneous yield issues: gate-all-around transistor defects, backside power routing failures, and EUV multi-patterning misalignment. This creates 80-100 hour work weeks for senior process engineers, triggering retention crisis. For products division CEO (post-Holthaus vacancy), every 18A delay forces product launches onto Intel 3 (trailing TSMC N3 by 18 months), guaranteeing market share losses that make next quarter's targets unachievable before the quarter begins.
Surface Asymmetry: Intel historically dictated PC roadmaps through platform specifications (Centrino, Ultrabook, Evo). AMD now sets performance benchmarks; ARM defines power efficiency; Microsoft controls ISA direction (Windows on ARM).
Bargaining Power Collapse: When Intel held 85%+ x86 market share (2016), OEMs accepted 15% ASP increases to secure launch allocation. At 76% share (Q4 2025), pricing power inverts. Large OEM procurement teams can credibly threaten "AMD-only quarter" to extract rebates, because tier-1 OEMs with 20%+ global PC market share are critical to Intel's volume forecasts. For Intel's Client Computing GM (currently recruiting post-Holthaus departure), this creates impossible optimization: preserve volume (accept 8-10% price cuts) or preserve margin (lose share to AMD, trigger Wall Street downgrade). The role requires navigating OEM relationships where Intel needs tier-1 OEMs more than they need Intel—a power dynamic unseen since the Pentium 4 era. Operationally, this manifests as procurement chief Gokul Subramaniam experiencing buyer aggression in every negotiation cycle: tier-1 OEMs now demand marketing development funds (MDF) paid upfront rather than upon achievement, shifting financial risk to Intel.
Hidden Attrition Dynamics: Public departures (Gelsinger, Holthaus, Koduri) are visible; the silent exodus of VP-level technical leaders to NVIDIA, AMD, and Broadcom goes unreported. Stock compensation is underwater (INTC -60% from 2021 peaks), eliminating the golden handcuffs.
Performance Demand Mismatch: New CEO Lip-Bu Tan demands "radical efficiency" (20% additional headcount cuts, April 2025) while simultaneously expecting 18A yield improvements requiring *more* engineering hours, not fewer. Process development VPs face contradictory directives: reduce team size 25% (cost target) while accelerating debug cycles 40% (schedule target). This creates adverse selection: top performers with external options depart; remaining teams skew toward risk-averse execution. For Foundry Services GM Naga Chandrasekaran (appointed Sept 2025), customer acquisition requires credible delivery commitments, but thinned engineering ranks mean every commitment carries execution risk that could trigger customer exodus if missed. The role's success metrics—sign 3+ external customers by end 2026—assume organizational capacity that no longer exists post-layoffs.
Supplier Concentration Risk: Large OEM commercial PC segments remain 70%+ Intel dependent (Q4 2025). Enterprise IT buyers specify "Intel vPro" in RFPs, creating lock-in. But Intel's 18A delays mean 2026-2027 business PC refreshes launch on trailing-node silicon versus AMD Zen 5/ARM competitors, compressing OEM commercial margins 3-5 percentage points.
Portfolio Rebalancing Tension: OEM Product Marketing VPs navigate contradiction: expanding AMD Ryzen presence (risk mitigation, performance leadership) alienates Intel co-marketing funds ($200M+ annually for tier-1 players) that subsidize channel incentives. Every AMD design win reduces Intel MDF allocation, forcing OEMs to self-fund promotions or accept lower promotional intensity. This creates strategic oscillation: OEMs publicly commit to "x86 diversity" while procurement quietly negotiates Intel volume rebates contingent on 75%+ share maintenance.
Innovation Pipeline Asymmetry: Intel's AI PC strategy (Core Ultra with integrated NPU) trails Qualcomm Snapdragon X by 12-18 months in battery life, creating cannibalization risk for OEM premium segments. OEM Industrial Design teams must now develop parallel ARM and x86 chassis (non-compatible thermals, different port configurations), doubling NRE costs. For OEM VP Consumer PCs, this manifests as P&L pressure: ARM-based premium models achieve target margins but Intel co-op funds subsidize x86 losses, creating cross-subsidy that masks true SKU profitability and distorts portfolio optimization decisions.
| Quarter | Revenue | Net Income | Gross Margin | Operating Margin |
|---|---|---|---|---|
| Q4 2024 | $14.3B | -$0.3B | 39.6% | -2.1% |
| Q1 2025 | $12.7B | -$0.4B | 38.4% | -3.2% |
| Q2 2025 | $12.8B | -$1.6B | 38.0% | -12.5% |
| Q3 2025 | $13.3B | -$0.2B | 40.3% | -1.5% |
| Q4 2025 | $14.3B | $0.4B | 42.5% | 2.8% |
Equipment Concentration: ASML monopoly on EUV lithography systems (€350M per tool). Intel dependent on TSMC-priority allocation.
Materials Suppliers: Limited sources for high-purity silicon wafers, specialty gases, and photoresists. Single-source risk for critical inputs.
Switching Costs: Process qualification takes 18-24 months. Equipment lock-in due to proprietary interfaces and support requirements.
Assessment: Moderate supplier power concentrated in capital equipment and specialty materials.
Customer Concentration: Top 10 OEMs (Dell, HP, major PC manufacturers) represent 60%+ of PC volume. Hyperscalers (AWS, Azure, Google Cloud) dominate data center.
Switching Availability: AMD Ryzen/EPYC provide direct substitutes. ARM-based alternatives (Apple M-series, Graviton, Qualcomm) gaining PC/server share.
Price Sensitivity: OEMs operate on 5-8% net margins. Every 1% CPU price increase pressures system-level profitability.
Assessment: Buyers wield significant power through credible alternatives and volume concentration.
Architecture Shift: ARM-based chips (Ampere, AWS Graviton, Microsoft Cobalt) capture 15%+ data center share. Apple M-series eliminates x86 in MacBook/iMac.
Specialized Accelerators: GPU compute (NVIDIA H100/H200) dominates AI training. Google TPU and AWS Trainium/Inferentia bypass the CPU for training and inference workloads.
Performance Parity: AMD EPYC Genoa/Bergamo match or exceed Xeon on performance-per-watt. Qualcomm Snapdragon X challenges Core Ultra in mobile.
Assessment: Substitution threat intensifying across PC, data center, and AI segments.
Software Moat Erosion: Microsoft's Windows on ARM initiative (Snapdragon X Elite/Plus) and Linux dominance in cloud infrastructure structurally degrade x86 software lock-in. Each ARM-native application reduces Intel's ecosystem switching cost advantage. Developer mindshare increasingly follows GPU/accelerator toolchains (CUDA, ROCm) rather than CPU ISA.
OEM Co-Investment Fragility: Intel's $200M+ annual MDF allocation to tier-1 OEMs creates mutual dependency, but every AMD design win reduces Intel co-marketing funds, forcing OEMs to self-fund promotions. vPro certification requirements lock enterprise procurement cycles to Intel roadmaps, but 18A delays risk breaking the refresh cadence that sustains this lock-in.
Foundry Ecosystem Gap: Intel Foundry Services must attract EDA vendors (Synopsys, Cadence), IP block providers, and design partners to compete with TSMC's mature ecosystem of 500+ design partners. TSMC's Open Innovation Platform has 20+ years of compounding network effects that Intel cannot replicate through capital investment alone.
Hyperscaler Disintermediation: AWS (Graviton4), Microsoft (Maia/Cobalt), Google (TPU/Axion) are simultaneously Intel's largest data center customers and its emerging competitors. Each custom silicon generation reduces hyperscaler dependence on Intel Xeon, creating a structural demand ceiling that tightens with every cloud generation cycle.
Assessment: Intel's complementor relationships are inverting from asset to liability. The ecosystem that once reinforced x86 dominance now accelerates diversification away from it. Alliance value increasingly flows toward enabling partners' independence rather than deepening Intel dependency.
AMD Share Gains: x86 server market share increased from 8% (2020) to 24% (Q4 2025), driven by EPYC Genoa/Bergamo's superior performance-per-watt and competitive pricing. Client CPU share expanded from 16% to 21% over the same period. Intel has been forced into aggressive price cuts, compressing margins across the product stack.
Process Technology Gap: TSMC N3E (3nm) process demonstrates 15-20% performance advantage and 30% power efficiency improvement versus Intel 18A in independent benchmarks. Intel's 18A yield issues (reported sub-70% in Q3 2025) delay volume production. Samsung 3nm GAA technology challenges Intel Foundry Services positioning for external customers.
Capital Expenditure Arms Race: TSMC 2025 capex $32B versus Intel $25B. TSMC expanding Arizona capacity (Fab 21 Phase 2), Japan facilities, and European operations. Samsung investing $17B in Texas fab. Combined capacity additions risk 2026-2027 supply glut in trailing-edge nodes (7nm, 5nm), pressuring ASPs.
Design Talent Competition: Intel lost senior architects to AMD, NVIDIA, and ARM over the 2023-2025 period. Raja Koduri's departure (2023) signaled a GPU competency gap. Intel Foundry Services struggles to attract tier-1 design wins beyond limited government contracts.
Ecosystem Lock-In Erosion: Microsoft Windows on ARM initiative (Snapdragon X Elite/Plus) and Linux dominance in cloud infrastructure reduce x86 software moat. Hyperscaler custom silicon (Graviton3/4, Azure Maia) bypass Intel entirely for specific workloads.
Assessment: Intel fights a multi-front competitive battle with eroding technological differentiation, market share losses, and margin compression. Rivalry intensity is at its highest in 30 years.
Structural Position: High barriers to entry protect oligopoly rents, but incumbent rivalry, buyer power, substitution threats, and ecosystem inversion compress margins. Intel's vertically integrated model faces profitability pressure versus fabless competitors (AMD, Qualcomm) leveraging TSMC, while the complementor relationships that once reinforced x86 dominance now actively enable diversification away from it.
Strategic Implication: Foundry separation (Intel 18A external customers) and process leadership (vs TSMC N3/N2) are necessary but insufficient. Intel must simultaneously rebuild ecosystem gravity through design partner acquisition, ISV toolchain investment, and OEM co-engineering depth. Failure risks margin structure collapsing toward 25-30% (foundry economics) from current 38-42% (IDM model), compounded by shrinking addressable market as hyperscalers and ARM vendors capture workloads that once defaulted to x86.
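The margin-collapse risk stated above can be roughly sized. A minimal sketch, using the four 2025 quarters from the financial table as an annual revenue base and the two gross-margin bands cited above (the annualization is an illustrative assumption, not a forecast):

```python
# Rough sizing of the margin-compression scenario: annual gross profit at the
# current IDM-model margin band vs. the foundry-economics band cited above.
annual_revenue_b = 12.7 + 12.8 + 13.3 + 14.3  # 2025 quarters from the table, $B

idm_band = (0.38, 0.42)      # current IDM-model gross margin range
foundry_band = (0.25, 0.30)  # foundry-economics gross margin range

def gross_profit_range(revenue_b, margin_band):
    """Low/high gross profit in $B for a revenue base and margin band."""
    return tuple(round(revenue_b * m, 1) for m in margin_band)

print("IDM model ($B):    ", gross_profit_range(annual_revenue_b, idm_band))
print("Foundry model ($B):", gross_profit_range(annual_revenue_b, foundry_band))
```

On this base, the shift implies roughly $6-7B less annual gross profit, which is the scale of the structural risk the implication describes.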
| Name | Former Role | Departure Date | Notes |
|---|---|---|---|
| Michelle Johnston Holthaus | CEO, Intel Products | Sept 08, 2025 | Resigned. Retained as "Strategic Advisor" until Mar 2026. |
| Pat Gelsinger | Chief Executive Officer | Dec 01, 2024 | "Retired" following board pressure. |
| Stuart Pann | SVP, Foundry Services | May 2025 | Replaced during restructuring of Foundry leadership. |
| Keyvan Esfarjani | EVP, Chief Global Ops | Late 2024 | Transitioned out as manufacturing oversight shifted to Tan. |
| Raja Koduri | EVP, AXG (Graphics) | Mar 2023 | Early indicator of GPU strategy failure. |
Intel's trajectory from 2021-2025 reveals a systematic pattern: strategic pivots consistently lag operational reality by 12-18 months, while executive departures signal hidden strategic failures 6-9 months before public acknowledgment.
The IDM 2.0 announcement (March 2021) preceded meaningful foundry separation by two years. Gelsinger's "Black August" restructuring (August 2024) came three years after the Arizona fab commitment, suggesting capital allocation decisions were disconnected from market demand signals.
The executive churn—five C-level departures in 18 months—tracks not to quarterly performance but to inflection points where strategic narrative collapsed into operational reality. Tan's March 2025 appointment represents the Board's implicit admission that manufacturing execution, not vision articulation, determines survival.