Dec 21, 2025

**Constrained Decoding and False Confidence: Exploring Structured Outputs**

When you use large language models (LLMs) to extract structured data from receipts, such as item quantities, constrained decoding can create false confidence. Structured outputs guarantee compliance with your schema, but they often do so at the cost of output quality. Below, we unpack this phenomenon:

**1. Quantity Errors and Format Conflicts**

If a receipt contains a fractional quantity but your schema demands an integer, the LLM will return an integer anyway (e.g., 4 instead of 4.5). Constrained decoding forces strict adherence to the format you've set, so the output can silently misrepresent the true value while still looking valid.

**2. Missing Data and Parsing Failures**

Structured outputs often break down when the source contains garbled or contradictory information. For example, if a receipt line is misspelled or OCR-mangled (e.g., "elephant" appearing where an item name should be), the schema still forces the LLM to emit a confident-looking value, with no built-in way to signal low confidence in the result.

**3. Security Concerns and Prompt Injection Vulnerabilities**

Structured responses do not protect your pipeline from prompt injection. If an attacker embeds malicious instructions in the receipt text, the schema-compliant output can carry that payload downstream, where its trustworthy appearance amplifies the potential damage.

By understanding these pitfalls, you can approach structured outputs with awareness of their limitations: while constrained decoding provides a sense of control, it risks reducing response quality.
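The quantity problem in point 1 can be sketched in a few lines. This is a minimal illustration, not any particular library's API: the schema dict, field names, and `validate` helper are all hypothetical, standing in for whatever constrained-decoding setup you use.

```python
import json

# Hypothetical schema: "quantity" is constrained to an integer,
# mirroring the example above. Field names are illustrative assumptions.
SCHEMA = {"item": str, "quantity": int}

def validate(raw: str) -> dict:
    """Parse model output and check it against the schema.

    Constrained decoding guarantees this check always passes -- which is
    exactly the problem: a receipt line of "4.5 lb apples" comes back as
    quantity 4, and nothing downstream can tell the value was truncated.
    """
    data = json.loads(raw)
    for field, expected in SCHEMA.items():
        if not isinstance(data[field], expected):
            raise ValueError(f"{field} is not {expected.__name__}")
    return data

# The constrained decoder can only emit integers for "quantity",
# so 4.5 on the receipt silently becomes 4:
result = validate('{"item": "apples", "quantity": 4}')
```

The validation succeeds, the types are right, and the data is wrong; that gap between schema compliance and correctness is the false confidence this post is about.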
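One mitigation for point 2 is to make the schema nullable so the model has a legal way to say "I couldn't read this" instead of being forced to fabricate a value. The sketch below assumes a hypothetical `parse_receipt_item` step in your pipeline; the field names and the `needs_review` flag are illustrative, not part of any standard API.

```python
import json

def parse_receipt_item(raw: str) -> dict:
    """Accept null for fields the model cannot read, instead of
    forcing it to invent a value. A minimal sketch with assumed names."""
    data = json.loads(raw)
    # "item" may legitimately be null when the source text is garbled
    # (like the misspelled line in the example above); downstream code
    # should branch on that rather than trust a fabricated value.
    if data.get("item") is None:
        return {"item": None, "needs_review": True}
    return {"item": data["item"], "needs_review": False}
```

Routing `needs_review` items to a human or a retry loop keeps schema compliance without pretending every field was confidently extracted.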
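For point 3, one partial defense is to treat every extracted field as untrusted input and screen it before it flows into later prompts or business logic. The pattern list below is an illustrative assumption and nowhere near a complete injection filter; it only shows where such a check belongs in the pipeline.

```python
import re

# Hypothetical guard: structured output makes fields *look* trustworthy,
# but a receipt line like "IGNORE PREVIOUS INSTRUCTIONS" still flows
# straight through schema validation. This pattern list is an
# illustrative assumption, not a complete defense.
SUSPICIOUS = re.compile(
    r"ignore (all|previous) instructions|system prompt", re.IGNORECASE
)

def quarantine(value: str) -> str:
    """Reject extracted strings that look like injected instructions."""
    if SUSPICIOUS.search(value):
        raise ValueError("possible prompt injection in extracted field")
    return value
```

The point is architectural: schema validation answers "is this the right shape?", not "is this safe?", so a separate trust boundary is needed between extraction and everything downstream.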