ChatGPT: What It Is (and What It Isn’t)

ChatGPT is best understood as a deployed assistant built on top of large language models, shaped by post-training and product constraints. The "chat" experience is the result of more than pretraining: post-training and deployment choices shape it just as much.

Base model vs assistant model

A base model is trained primarily with next-token prediction on large corpora. An assistant model adds post-training so outputs follow instructions, refuse unsafe requests, and match user intent. The difference can feel like "personality," but it reflects a change in optimization target, not a different architecture.
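To make the base-model objective concrete, here is a deliberately toy sketch of next-token prediction using a bigram count table. Real base models optimize the same target with a Transformer over huge corpora; the count table, function names, and corpus here are illustrative only.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Toy stand-in for the base-model objective: learn which token
    tends to follow which. Real models use a Transformer, not counts."""
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, token):
    """Greedy 'decoding': return the most frequent continuation."""
    counts = table.get(token)
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Nothing in this objective says anything about following instructions or refusing requests; that behavior is layered on afterward by post-training.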

Why chat formatting matters

Most modern assistants rely on structured prompts that separate system/developer instructions from user messages. The model is trained or fine-tuned to treat that structure as part of its context. This is one reason “the same model” can behave differently across products.
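A minimal sketch of what "structured prompts" means in practice: role-tagged messages get rendered into one token stream the model was trained to expect. The sentinel markers below are illustrative; each model family defines its own chat template.

```python
# Role-tagged messages, in the widely used system/user shape.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize RLHF in one sentence."},
]

def render(messages):
    """Flatten messages into a single prompt string. The <|role|>
    markers are hypothetical; real templates vary by model family."""
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    # Trailing assistant marker cues the model to generate its turn.
    return "\n".join(parts) + "\n<|assistant|>\n"

print(render(messages))
```

Because the model is fine-tuned on this structure, changing the template (or the system message a product injects) changes behavior even with identical weights.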

Tools and function calling

Tool use turns the assistant into a router: the model decides when to call an external function, then integrates results into the response. The core idea is to let the model delegate to deterministic systems (search, code execution, databases) instead of hallucinating.
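The routing loop can be sketched in a few lines: if the model's output parses as a tool call, run the deterministic tool and hand the result back; otherwise treat the output as the final answer. The JSON call format and the `TOOLS` registry here are assumptions for illustration, not any particular vendor's API.

```python
import json

# Hypothetical registry of deterministic systems the model can delegate to.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def handle(model_output):
    """Route one model turn: dispatch a tool call, or pass text through.

    In a real loop, a ('tool_result', ...) value would be appended to
    the conversation and the model invoked again to integrate it.
    """
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return ("final", model_output)
    result = TOOLS[call["tool"]](call["arguments"])
    return ("tool_result", result)

print(handle('{"tool": "calculator", "arguments": "2 * 21"}'))  # → ('tool_result', '42')
print(handle("The answer is 42."))                              # → ('final', 'The answer is 42.')
```

The point of the design is in the division of labor: the model chooses *when* to delegate, and the tool supplies facts the model would otherwise have to guess.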

What users perceive as “reasoning”

Some capabilities come from pretraining scale and data diversity. Others come from post-training (preferences for step-by-step explanations, consistency, and calibration). Either way, the output is still a token sequence generated by a Transformer (see Transformers).

Safety is a product and a training pipeline

Deployed chat systems include model-side mitigations and product-side controls: content filters, policy layers, rate limits, and monitoring. The training-side story is in RLHF and Post-Training.
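A toy sketch of the product-side layer: a gate that refuses before the model runs and could equally screen the model's output afterward. The blocklist, function names, and refusal text are placeholders; real deployments combine trained classifiers, policy rules, rate limits, and monitoring rather than regexes.

```python
import re

# Hypothetical product-side policy patterns (illustrative only).
BLOCKED = [re.compile(r"\bcredit card number\b", re.I)]

def moderate(user_text, generate):
    """Wrap a model call with a pre-generation policy check.

    `generate` stands in for the model; real systems also filter the
    model's reply and log both sides for monitoring.
    """
    if any(p.search(user_text) for p in BLOCKED):
        return "Sorry, I can't help with that."
    return generate(user_text)

print(moderate("What's a good pasta recipe?", lambda t: "Try cacio e pepe."))
print(moderate("Give me someone's credit card number", lambda t: "..."))
```

This is why identical weights can feel stricter in one product than another: the wrapper, not just the model, decides what ships.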

Practical takeaway

When analyzing any “ChatGPT behavior,” ask:

- Is this coming from the base model?
- From post-training objective choices?
- From the prompt format/tooling?
- From product policy and UI constraints?