Quality, Privacy, and FAQ

What Affects Answer Quality

  • Document quality: scan clarity, formatting, and language consistency affect extraction.
  • Question specificity: focused questions usually produce better answers.
  • Corpus completeness: missing or outdated documents limit answer quality.
  • Figure caption quality: better captions improve chart and image retrieval.

Best Practices for Better Results

  1. Ask focused, single-topic questions.
  2. Mention document name, chapter, page, or section when possible.
  3. Use follow-up prompts to narrow, compare, or verify.
  4. Rephrase broad questions into concrete requests.
  5. For figure questions, mention what type of chart or metric you need.

Privacy and Trust (User-Facing Transparency)

  • Stored data can include indexed document text, metadata, and embeddings.
  • Figure captions and related metadata may be indexed for media retrieval.
  • Depending on deployment configuration, question text and retrieved context may be sent to an LLM service.
  • Administrators can edit titles and references and hide low-quality records via administrative controls.
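As a rough illustration of what "indexed document text, metadata, and embeddings" can look like in storage, here is a hypothetical record shape. The field names are illustrative only, not the app's actual schema, and real embedding vectors have hundreds of dimensions:

```python
# Hypothetical shape of one indexed record (illustrative field names;
# actual schemas vary by deployment).
indexed_record = {
    "doc_id": "handbook-2024",           # source document identifier
    "section": "3.2 Safety Procedures",  # metadata usable for citations
    "text": "Wear protective gloves when handling solvents.",
    "embedding": [0.12, -0.48, 0.33],    # numeric vector (truncated here)
    "caption": None,                     # set only for figures/images
}

def redact(record):
    """Return a copy without raw text or vectors, e.g. for audit listings."""
    return {k: v for k, v in record.items() if k not in ("text", "embedding")}
```

A record like this is what administrative controls would operate on: metadata fields such as the title can be edited without touching the stored text or embedding.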

For policy-level details such as retention, deletion windows, and third-party processing terms, refer to your project administrator or deployment policy documentation.

What Users Should Expect

  • Response speed can vary by document size, server load, and model latency.
  • Some answers may be partial if evidence is weak or split across documents.
  • Similar questions can produce different wording while using similar sources.
  • Critical decisions should be verified against original documents.

What This App Does Not Do

  • It does not guarantee that every answer is complete or correct.
  • It does not replace review of original source documents.
  • It does not infer facts that are not present in indexed material.

Frequently Asked Questions

Why is the answer sometimes incomplete?

The system can only answer from indexed material. If relevant details are missing, low quality, or ambiguous, the response may be incomplete.

Why do similar questions sometimes get different answers?

The retrieval set may vary slightly by phrasing, and LLM wording can vary even with similar evidence.

How do I improve answer quality?

Use narrower prompts, reference specific documents or sections, and ask follow-up questions to refine scope.

Where does the answer come from?

The system retrieves the most relevant indexed document sections using vector retrieval, and the language model then synthesizes an answer from them.
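The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not the app's implementation: the vectors are made up, real systems use high-dimensional embeddings and an optimized index, and the top-ranked text would be passed to the LLM as context:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Rank indexed sections by similarity to the query vector, keep top k."""
    ranked = sorted(index, key=lambda rec: cosine(query_vec, rec["vec"]),
                    reverse=True)
    return ranked[:k]

# Toy index of three sections (vectors are illustrative, not real embeddings).
index = [
    {"text": "Refunds are issued within 14 days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Shipping takes 3-5 business days.",  "vec": [0.1, 0.9, 0.0]},
    {"text": "Contact support via the help desk.", "vec": [0.0, 0.2, 0.9]},
]

# A query whose vector points toward the refund section ranks it first.
top = retrieve([0.8, 0.2, 0.1], index, k=1)
```

Because ranking depends on the query vector, small changes in phrasing can shift which sections are retrieved, which is why similar questions can draw on slightly different evidence.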

Glossary

  • Grounded answer: response based on retrieved document content.
  • Embedding: numeric representation used for semantic similarity.
  • Vector search: retrieval method for meaning-based matching, not keyword-only matching.
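To make the "meaning-based, not keyword-only" distinction concrete, here is a toy keyword matcher applied to two invented documents. It is a sketch under stated assumptions, not the app's search code:

```python
def keyword_match(query, text):
    """Keyword-only matching: requires a literal word overlap."""
    return any(word in text.lower() for word in query.lower().split())

# Invented example documents.
docs = {
    "returns": "Customers may request their money back within 14 days.",
    "shipping": "Orders ship within 3 business days.",
}

# A keyword search for "refund" finds nothing, because neither text
# contains that literal word:
hits = [name for name, text in docs.items() if keyword_match("refund", text)]
# hits == []
```

Vector search would instead compare the embedding of "refund" with each document's embedding, so the "returns" document can match on meaning ("money back") even without the literal word.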