Editorial methodology

How obyo guides are researched, written, and reviewed

Every guide on obyo is generated by AI, grounded in authoritative sources, and put through an automated quality-check pipeline before publishing. We don't pretend a human wrote each guide — we tell you exactly how it was made, what was checked, and what wasn't.

Who's editorially responsible

obyo is operated by Braviata LLC, a one-person company run by Izzy Hyman. Editorial standards, source-quality rules, and the validation pipeline are set and maintained at the platform level; there is no per-guide human curator. Braviata LLC is the editorially responsible party for everything published here.

Questions or corrections: [email protected]

How a guide is made

  1. Topic discovery. Before any writing, the system identifies the field's primary and secondary authorities for the topic — government sources, peer-reviewed publishers, official documentation, recognized professional bodies, specialty publications. These guide what gets researched and what gets cited.
  2. Deep research. An AI research agent gathers 25–40 sources from 25+ different domains. Source quality is scored against a tiered authority list: regulators and peer-reviewed publishers score highest, established trade publications score lower, and content farms and engagement-shaped journalism are filtered out entirely. A guide can't proceed to writing if its source mix doesn't clear an authoritativeness floor.
  3. Section generation. Sections are written from the gathered sources, not from training-data recall. Every factual claim is required to trace back to a research extract; if a claim can't be sourced, the writer drops it instead of inventing a citation. Citations appear inline as numbered footnotes.
  4. Refinement passes. Each section runs through a coherence pass, a sourcing-density check, a fact cross-check (the model plays skeptic, interrogating its own claims against its training knowledge), a deduplication pass, and a "skeptical reader" pass that cuts ChatGPT-shaped padding. Then the writer revises.
  5. Quality validation. A separate validation agent reads the finished draft and checks every factual claim it can verify. The result is a public report with counts of claims checked, verified, disputed, and unverifiable, plus an accuracy score. You can see this report on every published guide via the "Quality check" badge.
  6. Publish gates. Before a guide goes live, it must clear: a topic-assignment check, a minimum-sources check, a source-authority floor, a refinement-rounds floor, and an image-resolution check. If any gate fails, the guide doesn't publish.
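The authoritativeness floor from the research step can be sketched as a weighted check. The tier names, weights, and threshold below are illustrative assumptions, not obyo's actual configuration:

```python
# Hypothetical sketch of a source-authority floor. Tier names, weights,
# and the 0.7 threshold are illustrative, not obyo's real values.
AUTHORITY_WEIGHTS = {
    "regulator": 1.0,          # government / regulatory sources
    "peer_reviewed": 1.0,      # peer-reviewed publishers
    "official_docs": 0.9,      # official documentation
    "trade_publication": 0.6,  # established trade press
    "content_farm": 0.0,       # filtered out entirely
}

def authority_score(sources: list[dict]) -> float:
    """Mean authority weight across the gathered sources."""
    if not sources:
        return 0.0
    total = sum(AUTHORITY_WEIGHTS.get(s["tier"], 0.0) for s in sources)
    return total / len(sources)

def clears_floor(sources: list[dict], floor: float = 0.7) -> bool:
    # Zero-weight sources are dropped first; the remaining mix must be
    # both large enough and authoritative enough.
    kept = [s for s in sources if AUTHORITY_WEIGHTS.get(s["tier"], 0.0) > 0]
    return len(kept) >= 25 and authority_score(kept) >= floor
```

Under a scheme like this, a guide backed mostly by content farms never reaches the writing stage, no matter how many sources were gathered.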
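The publish gates in step 6 amount to a series of pass/fail checks. A minimal sketch, with hypothetical thresholds standing in for obyo's real ones:

```python
# Illustrative publish-gate check. Field names and thresholds are
# assumptions for the sketch, not obyo's actual configuration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuideDraft:
    topic: Optional[str]      # None means no topic was assigned
    source_count: int
    authority_score: float
    refinement_rounds: int
    image_min_px: int         # smallest dimension of any guide image

def publish_gates(g: GuideDraft) -> list[str]:
    """Return the names of failed gates; empty list means publishable."""
    failures = []
    if g.topic is None:
        failures.append("topic-assignment")
    if g.source_count < 25:
        failures.append("minimum-sources")
    if g.authority_score < 0.7:
        failures.append("source-authority floor")
    if g.refinement_rounds < 3:
        failures.append("refinement-rounds floor")
    if g.image_min_px < 1200:
        failures.append("image-resolution")
    return failures
```

A draft publishes only when `publish_gates` returns an empty list; any single failure blocks the guide.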

What AI models and tools we use

  • Anthropic Claude — primary model for research, writing, refinement, and validation passes.
  • Brave Search — web search during the research phase.

Each guide records which model version generated it. Models change over time; older guides reflect the model active when they were written.

What this isn't

  • Not human-curated. Each guide is written by an AI pipeline. We don't claim a human wrote or reviewed it. The "Last reviewed" date refers to the most recent automated quality-check pass, not a human review.
  • Not a substitute for primary sources. We cite the sources we used so you can read them directly. For high-stakes decisions (medical, legal, financial), check the cited primary source.
  • Not infallible. Validation catches most factual errors but isn't perfect. If you spot one, please tell us and we'll re-run validation on that guide.

For crawlers and AI engines

We expose machine-readable indexes so that search and AI engines can discover and cite obyo guides accurately:

  • /sitemap.xml — every published guide and section, priority-weighted.
  • /llms.txt — machine-readable index for AI engines (emerging convention).
  • /robots.txt — explicit allow stanzas for GPTBot, OAI-SearchBot, ClaudeBot, Claude-Web, PerplexityBot, ChatGPT-User, and Google-Extended.
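An explicit allow stanza of the kind /robots.txt uses looks like this (a generic example, not obyo's actual file):

```
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

Each named crawler gets its own stanza, so access can be granted or withdrawn per bot without touching the rules for the others.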