In the current landscape of computer vision, the standard operating procedure involves a modular 'Lego-brick' approach: a pre-trained vision encoder for feature extraction paired with a separate decoder for task prediction. While effective, this architectural separation complicates scaling and bottlenecks the interaction between language and vision.
The Technology Innovation Institute (TII) research team is challenging this paradigm with Falcon Perception, a 600M-parameter unified dense Transformer. By processing image patches and text tokens in a shared parameter space from the very first layer, the TII research team has developed an early-fusion stack that handles perception and task modeling with high efficiency.
https://arxiv.org/pdf/2603.27365
The Architecture: A Single Stack for Every Modality
The core design of Falcon Perception is built on the hypothesis that a single Transformer can simultaneously learn visual representations and perform task-specific generation.
Hybrid Attention and GGRoPE
Unlike standard language models that use strict causal masking, Falcon Perception employs a hybrid attention strategy. Image tokens attend to one another bidirectionally to build a global visual context, while text and task tokens attend to all preceding tokens (causal masking) to enable autoregressive prediction.
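In code, this masking scheme reduces to a simple boolean pattern. The sketch below is a minimal illustration assuming all image tokens precede the text and task tokens in the sequence; it is not the released implementation.

```python
import torch

def hybrid_attention_mask(num_image_tokens: int, num_text_tokens: int) -> torch.Tensor:
    """Boolean mask (True = may attend) combining bidirectional attention among
    image tokens with causal attention for text/task tokens.
    Assumption: image tokens come first in the flattened sequence."""
    total = num_image_tokens + num_text_tokens
    # Start from a causal mask: every token sees itself and all previous tokens.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Image tokens additionally attend to *all* image tokens (bidirectional block).
    mask[:num_image_tokens, :num_image_tokens] = True
    return mask

# Example: 4 image patches followed by 3 text/task tokens.
print(hybrid_attention_mask(4, 3).int())
```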
To maintain 2D spatial relationships in a flattened sequence, the research team uses 3D Rotary Positional Embeddings. This decomposes the head dimension into a sequential component and a spatial component using Golden Gate RoPE (GGRoPE). GGRoPE allows attention heads to attend to relative positions along arbitrary angles, making the model robust to rotation and aspect-ratio variations.
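The paper's exact GGRoPE construction is not reproduced here, but the general idea of encoding relative position along arbitrary in-plane directions can be sketched as follows. In this illustrative version, each frequency band is assigned a direction angle (spaced by the golden angle, which is only an assumption suggested by the name), the 2D patch position is projected onto that direction, and the projection feeds a standard RoPE frequency schedule. None of these specific choices are confirmed by the source.

```python
import math
import torch

def ggrope_like_phases(pos_xy: torch.Tensor, num_bands: int, base: float = 10000.0) -> torch.Tensor:
    """Illustrative 2D rotary phase computation (NOT the official GGRoPE code).

    pos_xy: (num_patches, 2) patch coordinates (x, y).
    Returns rotation phases of shape (num_patches, num_bands) to be used in the
    usual cos/sin RoPE rotation.
    """
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))           # ~2.39996 rad (assumed spacing)
    band = torch.arange(num_bands, dtype=torch.float32)
    directions = band * golden_angle                           # one direction angle per band
    dir_vecs = torch.stack([torch.cos(directions), torch.sin(directions)], dim=-1)  # (bands, 2)
    inv_freq = base ** (-band / num_bands)                     # standard RoPE frequency schedule
    projected = pos_xy.float() @ dir_vecs.T                    # (patches, bands): position along each direction
    return projected * inv_freq                                # phases for the rotary rotation

# Phases for a 2x2 patch grid with 8 frequency bands.
grid = torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]])
print(ggrope_like_phases(grid, num_bands=8).shape)  # torch.Size([4, 8])
```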
Minimalist Sequence Logic
The basic architectural sequence follows a Chain-of-Perception format:
[Image] [Text] ⟨coord⟩ ⟨size⟩ ⟨seg⟩ …
This ensures that the model resolves spatial ambiguity (position and size) as a conditioning signal before generating the final segmentation mask.
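To make the serialization concrete, here is a hedged illustration of what a Chain-of-Perception sequence for one object might look like. The token names (⟨coord⟩, ⟨size⟩, ⟨seg⟩ with bin indices) and the coordinate quantization scheme are placeholders, not the model's actual vocabulary.

```python
def serialize_instance(cx: float, cy: float, w: float, h: float,
                       seg_tokens: list[int], num_bins: int = 1000) -> list[str]:
    """Illustrative Chain-of-Perception serialization for one object
    (placeholder token names; the real vocabulary may differ).
    Position and size are emitted before the segmentation tokens so that mask
    generation is conditioned on the resolved location."""
    q = lambda v: min(int(v * num_bins), num_bins - 1)  # quantize a [0,1] value to a bin id
    return (
        [f"<coord_{q(cx)}>", f"<coord_{q(cy)}>"]       # position first
        + [f"<size_{q(w)}>", f"<size_{q(h)}>"]         # then size
        + [f"<seg_{t}>" for t in seg_tokens]           # mask tokens last
    )

print(serialize_instance(0.42, 0.31, 0.20, 0.15, seg_tokens=[17, 403, 88]))
```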
Engineering for Scale: Muon, FlexAttention, and Raster Ordering
The TII research team introduced several optimizations to stabilize training and maximize GPU utilization for these heterogeneous sequences.
- Muon Optimization: The research team reports that using the Muon optimizer for the specialized heads (coordinate, size, and segmentation) led to lower training losses and improved benchmark performance compared to standard AdamW.
- FlexAttention and Sequence Packing: To process images at native resolutions without wasting compute on padding, the model uses a scatter-and-pack strategy. Valid patches are packed into fixed-length blocks, and FlexAttention is used to restrict self-attention to each image sample's boundaries (see the sketch after this list).
- Raster Ordering: When multiple objects are present, Falcon Perception predicts them in raster order (top-to-bottom, left-to-right). This was found to converge faster and produce lower coordinate loss than random or size-based ordering.
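The per-sample attention restriction used during sequence packing maps naturally onto PyTorch's FlexAttention document-masking pattern. The sketch below (requires PyTorch 2.5+) assumes a sample_id tensor labeling which packed image each token belongs to; it illustrates the general mechanism, not the project's released code.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# Assumed toy setup: one packed sequence of 256 tokens drawn from 3 image samples.
seq_len, num_heads, head_dim = 256, 4, 64
sample_id = torch.repeat_interleave(torch.arange(3), torch.tensor([96, 64, 96]))

def same_sample(b, h, q_idx, kv_idx):
    # Tokens may only attend within their own packed sample's boundaries.
    return sample_id[q_idx] == sample_id[kv_idx]

# Precompute the sparse block mask once; it is reused across layers.
block_mask = create_block_mask(same_sample, B=None, H=None,
                               Q_LEN=seq_len, KV_LEN=seq_len, device="cpu")

q = torch.randn(1, num_heads, seq_len, head_dim)
k = torch.randn(1, num_heads, seq_len, head_dim)
v = torch.randn(1, num_heads, seq_len, head_dim)
out = flex_attention(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([1, 4, 256, 64])
```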
The Training Recipe: Distillation to 685 GT
The model uses multi-teacher distillation for initialization, distilling knowledge from DINOv3 (ViT-H) for local features and SigLIP2 (So400m) for language-aligned features (an illustrative sketch of such a distillation loss follows the stage list below). Following initialization, the model undergoes a three-stage perception training pipeline totaling roughly 685 gigatokens (GT):
- In-Context Listing (450 GT): Learning to 'list' the scene inventory to build global context.
- Task Alignment (225 GT): Transitioning to independent-query tasks using Query Masking to ensure the model grounds each query only on the image.
- Long-Context Finetuning (10 GT): A short adaptation for extreme density, raising the mask limit to 600 per expression.
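For the distillation step mentioned above, a plausible (but assumed) formulation projects the student's patch features into each teacher's embedding space and regresses them against frozen DINOv3 and SigLIP2 features. The projection heads, cosine loss, feature dimensions, and equal weighting below are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistillation(nn.Module):
    """Illustrative multi-teacher feature distillation head (assumed design)."""
    def __init__(self, student_dim: int, dino_dim: int, siglip_dim: int):
        super().__init__()
        self.to_dino = nn.Linear(student_dim, dino_dim)      # project student patches to DINOv3 space
        self.to_siglip = nn.Linear(student_dim, siglip_dim)  # project student patches to SigLIP2 space

    def forward(self, student_feats, dino_feats, siglip_feats, w_dino=1.0, w_siglip=1.0):
        # Cosine-distance regression against each frozen teacher's patch features.
        loss_dino = 1 - F.cosine_similarity(self.to_dino(student_feats), dino_feats, dim=-1).mean()
        loss_siglip = 1 - F.cosine_similarity(self.to_siglip(student_feats), siglip_feats, dim=-1).mean()
        return w_dino * loss_dino + w_siglip * loss_siglip

# Toy check with random features: 8 patches, assumed feature dimensions.
head = MultiTeacherDistillation(student_dim=1024, dino_dim=1280, siglip_dim=1152)
loss = head(torch.randn(8, 1024), torch.randn(8, 1280), torch.randn(8, 1152))
print(loss.item())
```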
Across these stages, a task-specific serialization is used:
expr1 ⟨…⟩ expr2 ⟨…⟩ …
Dedicated tokens force the model to commit to a binary decision on an object's existence before localization.
PBench: Profiling Capabilities Beyond Saturated Baselines
To measure progress, the TII research team introduced PBench, a benchmark that organizes samples into five levels of semantic complexity to disentangle model failure modes.
Main Results: Falcon Perception vs. SAM 3 (Macro-F1)
| Benchmark Split | SAM 3 | Falcon Perception (600M) |
| --- | --- | --- |
| L0: Simple Objects | 64.3 | 65.1 |
| L1: Attributes | 54.4 | 63.6 |
| L2: OCR-Guided | 24.6 | 38.0 |
| L3: Spatial Understanding | 31.6 | 53.5 |
| L4: Relations | 33.3 | 49.1 |
| Dense Split | 58.4 | 72.6 |
Falcon Perception significantly outperforms SAM 3 on complex semantic tasks, notably showing a +21.9 point gain on spatial understanding (Level 3).
FalconOCR: The 300M Document Specialist
The TII team also extended this early-fusion recipe to FalconOCR, a compact 300M-parameter model initialized from scratch to prioritize fine-grained glyph recognition. FalconOCR is competitive with several larger proprietary and modular OCR systems:
- olmOCR: Achieves 80.3% accuracy, matching or exceeding Gemini 3 Pro (80.2%) and GPT 5.2 (69.8%).
- OmniDocBench: Reaches an overall score of 88.64, ahead of GPT 5.2 (86.56) and Mistral OCR 3 (85.20), though it trails the top modular pipeline PaddleOCR VL 1.5 (94.37).
Key Takeaways
- Unified Early-Fusion Architecture: Falcon Perception replaces modular encoder-decoder pipelines with a single dense Transformer that processes image patches and text tokens in a shared parameter space from the first layer. It uses a hybrid attention mask (bidirectional for visual tokens, causal for task tokens) to act simultaneously as a vision encoder and an autoregressive decoder.
- Chain-of-Perception Sequence: The model serializes instance segmentation into a structured sequence (⟨coord⟩ → ⟨size⟩ → ⟨seg⟩), which forces it to resolve spatial position and size as a conditioning signal before generating the pixel-level mask.
- Specialized Heads and GGRoPE: To handle dense spatial data, the model uses Fourier feature encoders for high-dimensional coordinate mapping and Golden Gate RoPE (GGRoPE) to enable isotropic 2D spatial attention. The Muon optimizer is employed for these specialized heads to balance learning rates against the pre-trained backbone.
- Semantic Performance Gains: On the new PBench benchmark, which disentangles semantic capabilities (Levels 0-4), the 600M model demonstrates significant gains over SAM 3 in complex categories, including a +13.4 point lead in OCR-guided queries and a +21.9 point lead in spatial understanding.
- High-Efficiency OCR Extension: The architecture scales down to FalconOCR, a 300M-parameter model that achieves 80.3% on olmOCR and 88.64 on OmniDocBench. It matches or exceeds the accuracy of much larger systems like Gemini 3 Pro and GPT 5.2 while maintaining high throughput for large-scale document processing.

