Signal, condensed.

Curated editorial intelligence on AI transformation in high-stakes domains. Each note distills a consequential observation into dense, actionable insight. For principals who value clarity over volume.

3 notes published

ogram | STANDARDOGM-20251212-PUB-003
12 December 2025

Compression and decompression of information in LLMs

LLMs are machines that compress and dilate the information contained in messages. From the GPT-5 family onward, or with Gemini-3 and Claude 4.5, LLMs became so good at information compression that a single response is often too dense for a human reader.

With the first generations of language models, we were often confronted with redundant messages. Specialists in many fields criticized the stereotyped responses of AI systems, which seemed interesting on the surface but were severely lacking in substance.

Today, LLMs have become so capable that human readers often have to find ways to decompress the information contained in a response. Finding the right density/comprehensibility ratio is essential, but this ratio varies from one individual to another, from one domain to another, from one situation to another. LLMs are precision tools: when calibrated correctly, their results are markedly better, and the same applies to this ratio.

Implications

Important: the theoretical background for these reflections appears in thinkers such as Claude Shannon and Andrey Kolmogorov. The two mathematicians sought, each in his own way, to quantify the maximum information a given message can contain. LLMs force us to revisit their research from a new angle: they can modulate the density of information in a message automatically. What can that be used for? The model is still theoretical, but it points toward new communication protocols in which the message is "optimized" for its reader.
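One way to make the Shannon side of this concrete: empirical entropy gives a crude, measurable proxy for the density of a message. The sketch below is illustrative only (character-level entropy, not a real readability metric); the function name and sample strings are our own, not part of any cited work.

```python
import math
from collections import Counter

def shannon_entropy_per_char(message: str) -> float:
    """Empirical Shannon entropy of a message, in bits per character.

    A rough proxy for information density: highly redundant text
    scores near zero, while dense, varied text scores higher.
    """
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

dense = "Q3 EBITDA fell 4.2%; FX drag 110bp; guidance cut to 7-9%."
redundant = "very very very very very very very very good results"

# The dense summary carries more bits per character than the redundant one.
assert shannon_entropy_per_char(dense) > shannon_entropy_per_char(redundant)
```

A reader-optimizing protocol would tune a target like this per recipient rather than maximizing it: Kolmogorov-style maximal compression is exactly what makes a response too dense for a given human.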

ogram | STANDARDOGM-20251205-PUB-001
5 December 2025

On the epistemics of model uncertainty in consequential decisions

The confidence scores displayed by AI systems rarely correspond to epistemic reliability. This asymmetry creates measurable risk in domains where decisions compound.

Large language models produce outputs with probabilistic fluency that masks fundamental uncertainty. When a model generates text with apparent conviction, it offers no meaningful signal about whether the underlying claim corresponds to external reality. This distinction—between linguistic confidence and epistemic confidence—remains poorly understood by most institutional adopters.

The practical consequence: professionals who integrate AI outputs into high-stakes workflows must develop calibration frameworks that their tools cannot provide. A model that states "the regulatory deadline is 31 March" with identical fluency to "the regulatory deadline is likely in Q1" provides no mechanism for distinguishing fact from inference. Both statements emerge from the same generative process, weighted by training distributions rather than verified knowledge.

Swiss financial institutions navigating this terrain have begun implementing structured verification protocols: AI-generated intelligence marked as provisional until cross-referenced against authoritative sources, confidence indicators derived from source plurality rather than model self-assessment, and clear delineation between synthesis (where models excel) and factual assertion (where they remain unreliable).
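The source-plurality idea can be sketched in a few lines. This is a minimal illustration of the protocol shape described above, not any institution's actual implementation; the two-source threshold, status labels, and source names are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """An AI-generated assertion tracked through a verification workflow."""
    text: str
    corroborating_sources: set[str] = field(default_factory=set)

    @property
    def status(self) -> str:
        # Confidence comes from source plurality, never from the model's
        # self-assessment: a claim stays provisional until at least two
        # independent authoritative sources corroborate it (threshold assumed).
        if len(self.corroborating_sources) >= 2:
            return "verified"
        if len(self.corroborating_sources) == 1:
            return "single-source"
        return "provisional"

claim = Claim("The regulatory deadline is 31 March")
assert claim.status == "provisional"          # fluent, but unverified

claim.corroborating_sources |= {"regulator circular", "counsel memo"}
assert claim.status == "verified"             # plurality reached
```

The point of the design is that the status field is computed from evidence external to the model, so a fluent but unsupported assertion can never present itself as verified.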

Implications

Institutional AI adoption without epistemic scaffolding creates liability exposure. The question is not whether AI-generated errors will occur in regulated contexts, but whether organizations have demonstrable processes for catching them before they compound into decisions.

ogram | STANDARDOGM-20251201-PUB-002
1 December 2025

The quiet reorganization of legal knowledge work

Major law firms are restructuring their associate leverage models. The shift is not yet visible in partnership announcements, but it is evident in hiring patterns and matter economics.

Document review, the traditional province of junior associates and contract attorneys, has reached an inflection point. AI systems now process disclosure sets at speeds and accuracy levels that make human-only review economically indefensible for commoditized matters. The firms adapting fastest are not those deploying the most sophisticated technology, but those restructuring compensation and advancement frameworks to reflect new value distributions.

The emerging pattern: junior lawyers who develop judgment about when AI outputs require scrutiny become disproportionately valuable. Those who treat AI as either infallible or useless find themselves in shrinking demand. The skill is calibration: knowing which outputs to trust, which to verify, and which to regenerate with refined prompts.

This reorganization extends to client relationships. General counsel increasingly request matter budgets that specify AI-assisted versus human-only work streams. Firms that cannot articulate their methodology (what gets automated, what gets reviewed, what gets elevated to senior judgment) lose competitive positioning. The opacity that once protected billable hour models now signals operational immaturity.

Implications

Law firm economics are being repriced around judgment rather than time. Partners who understand this shift are restructuring their practices accordingly. Those who wait for industry consensus will find the reallocation already complete.

Intelligence requirements vary.

These public notes represent a fraction of our analytical output. Client-specific intelligence briefs are tailored to your domain, your jurisdiction, and your decision timeline.

Discuss your intelligence requirements