
Overview

Knowledge Base stores the content AI Puffer can search before it generates an answer or a piece of content. Use it for support answers, product details, documentation, policies, posts, pages, WooCommerce products, uploaded documents, and other source text you want AI Puffer to use as context.

Sections in this guide:
  - Providers: Use OpenAI Vector Stores, Pinecone, or Qdrant.
  - Vector Stores: Create, select, or delete vector targets.
  - Add Data: Add Q&A, text, files, or WordPress content.
  - Manage Data: View, edit, retrain, or delete source records.
  - Settings: Configure visibility, chunking, and indexing controls.
  - Semantic Search: Publish a frontend vector search form.
  - Troubleshooting: Fix missing targets, dimension errors, and empty results.

Providers

  - OpenAI (target: vector store): AI Puffer sends the source data to OpenAI Vector Stores. No separate embedding model is selected in AI Puffer for this target.
  - Pinecone (target: index): AI Puffer creates embeddings with the model you choose, then stores the vectors in a Pinecone index.
  - Qdrant (target: collection): AI Puffer creates embeddings with the model you choose, then stores the vectors in a Qdrant collection.
For Pinecone and Qdrant, the index or collection dimension must match the embedding model. For example, if your Pinecone index or Qdrant collection is 3072 dimensions, use a 3072-dimension embedding model when adding data and when searching that data later.
If the dimension does not match, Pinecone or Qdrant can reject the data or return unusable search results.
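The dimension rule can be sketched as a simple lookup-and-compare check. The model names and dimensions below are those of OpenAI's public embedding models; the check itself applies to any provider:

```python
# Sketch: confirm an embedding model's output dimension matches the
# target index/collection dimension before adding or searching data.
EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

def check_dimension(model: str, target_dim: int) -> bool:
    """Return True if the model's vectors fit the index/collection."""
    model_dim = EMBEDDING_DIMS.get(model)
    if model_dim is None:
        raise ValueError(f"Unknown embedding model: {model}")
    return model_dim == target_dim

# A 3072-dimension Pinecone index needs a 3072-dimension model:
print(check_dimension("text-embedding-3-large", 3072))  # True
print(check_dimension("text-embedding-3-small", 3072))  # False
```

Run this kind of check once when creating the target and again before searching; a mismatch at either point produces the rejection or unusable-results behavior described above.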

Manage Vector Stores

Use the provider and target selectors at the top of AI Puffer > Knowledge Base to create, select, or delete vector targets.
Knowledge Base provider selector

OpenAI Vector Stores

  1. Add your OpenAI API key in AI Puffer > Settings > AI.
  2. Go to AI Puffer > Knowledge Base.
  3. Select OpenAI as the provider.
  4. Click Create new vector store.
  5. Enter a store name.
  6. Click Create.
OpenAI handles the vector store search on its side. AI Puffer stores a local source record so you can see what was added.
OpenAI Create Vector

Pinecone Indexes

  1. Add your Pinecone API key in AI Puffer > Settings > Integrations.
  2. Go to AI Puffer > Knowledge Base.
  3. Select Pinecone as the provider.
  4. Select the embedding model you plan to use.
  5. Click Create new index.
  6. Enter an index name.
  7. Enter the dimension for the selected embedding model.
  8. Click Create.
Use the same embedding model when you add data to the index and when a module searches that index.
Pinecone Create Index

Qdrant Collections

  1. Add your Qdrant URL and API key in AI Puffer > Settings > Integrations.
  2. Go to AI Puffer > Knowledge Base.
  3. Select Qdrant as the provider.
  4. Select the embedding model you plan to use.
  5. Click Create new collection.
  6. Enter a collection name.
  7. Enter the dimension for the selected embedding model.
  8. Click Create.
Use the same embedding model when you add data to the collection and when a module searches that collection.
Qdrant Create collection
To delete a target, select the provider, open the target selector, choose the target, then use Delete in the selector footer.

Add Data

Before adding data:
  1. Select a provider.
  2. Select the target vector store, index, or collection.
  3. For Pinecone or Qdrant, select the embedding model.
  4. Click Add data.
Knowledge Base Add data panel

Q&A

Use Q&A for short answers that should be easy to retrieve later.
  1. Select Q&A.
  2. Enter the question.
  3. Enter the answer.
  4. Click Train.
AI Puffer stores the pair as text:
Q: question text
A: answer text
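The stored text for a Q&A pair can be reproduced with a one-line helper. The `Q:`/`A:` prefixes come from the format shown above; anything beyond that is an illustrative assumption:

```python
def format_qa(question: str, answer: str) -> str:
    # Store the pair as plain text with Q:/A: prefixes, as described above.
    return f"Q: {question}\nA: {answer}"

print(format_qa("What is the refund window?", "30 days from delivery."))
# Q: What is the refund window?
# A: 30 days from delivery.
```

Because the question and answer are embedded together as one short text, a search for the question tends to retrieve the answer alongside it.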
Knowledge Base Q&A tab

Text

Use Text for policies, instructions, product notes, support snippets, or any source text that does not already exist as WordPress content.
  1. Select Text.
  2. Paste the source text.
  3. Click Train.
Knowledge Base Text tab

Files

Use Files when the source is already in a document.
  1. Select Files.
  2. Click Choose files.
  3. Select one or more files.
Files start uploading and training after selection. Supported file extensions:
.pdf, .docx, .txt, .md, .csv, .json
For Pinecone and Qdrant, AI Puffer extracts text, splits large files into chunks, creates embeddings, and stores each chunk in the selected index or collection. File size is limited by your WordPress/PHP upload settings. OpenAI Vector Store uploads also use OpenAI’s file limits.
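A pre-upload check against the supported extensions can be sketched in a few lines (the extension set is the one listed above; the helper itself is illustrative, not part of AI Puffer):

```python
from pathlib import Path

# The supported file extensions listed above.
SUPPORTED_EXTENSIONS = {".pdf", ".docx", ".txt", ".md", ".csv", ".json"}

def is_supported(filename: str) -> bool:
    """Return True if the file extension is accepted for upload."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

print(is_supported("manual.PDF"))  # True (extension match is case-insensitive here)
print(is_supported("photo.png"))   # False
```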
Knowledge Base Files tab

Website

Use Website when the source is WordPress content.
  1. Select Website.
  2. Choose All or Specific.
  3. Choose the content status: Published, Draft, or Any.
  4. Select post types.
  5. If using Specific, select the individual items.
  6. Click Train.
Posts and pages are selected by default. WooCommerce products appear when WooCommerce is active. Public custom post types can also appear. When WordPress content is indexed, AI Puffer builds the source text from the URL, title, excerpt, content, public custom fields, public taxonomies, and available WooCommerce product data.
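Conceptually, the indexed source text is a labeled concatenation of those fields. A minimal sketch of that assembly, where the labels and field keys are illustrative and not AI Puffer's actual internals:

```python
def build_source_text(item: dict) -> str:
    """Assemble labeled source text from WordPress content fields.

    The labels echo the Basic Labels idea from Indexing Controls;
    the exact format AI Puffer emits is an assumption.
    """
    parts = []
    for label, key in [
        ("URL", "url"),
        ("Title", "title"),
        ("Excerpt", "excerpt"),
        ("Content", "content"),
    ]:
        value = item.get(key)
        if value:  # skip empty fields rather than emit blank labels
            parts.append(f"{label}: {value}")
    # Public custom fields (and taxonomies) are appended the same way.
    for name, value in item.get("custom_fields", {}).items():
        parts.append(f"{name}: {value}")
    return "\n".join(parts)

post = {
    "url": "https://example.com/blue-widget",
    "title": "Blue Widget",
    "content": "A durable widget in blue.",
    "custom_fields": {"SKU": "BW-100"},
}
print(build_source_text(post))
```

Labeling each field keeps the embedded text self-describing, so a search hit still carries its title and URL context.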
Knowledge Base Website all mode
Knowledge Base Website specific mode

Manage Data

The source table shows the local records created while adding data.
  - Time: When the source was added or updated.
  - Index: Provider and target vector store, index, or collection.
  - Type: Site Content, Text, or File Upload.
  - Source: Post title, text preview, file name, or source identifier.
  - Status: Trained, Processing, Failed, or another provider status.
  - Actions: Available actions for the source.
Knowledge Base source table
Available actions:
  - View: Review the stored source preview.
  - Edit: Edit a text source and save it again.
  - Retrain: Re-index a WordPress content source after the content changes.
  - Delete: Remove the source from the external provider and from the local source table.
Knowledge Base source preview
Knowledge Base edit source

Settings

Click Settings in AI Puffer > Knowledge Base to open Knowledge Base settings.
Knowledge Base settings

General

General controls how Knowledge Base records and indexing buttons appear in the admin.
  - Hide user uploads: Hides chatbot upload records from the main Knowledge Base source table.
  - Show index button: Shows vector indexing controls on supported WordPress list screens.
Knowledge Base general settings

Document Chunking

Document chunking controls how AI Puffer splits large uploaded files before embedding them for Pinecone or Qdrant.
  - Avg chars per token (default 4, range 2 to 10): Estimates how many characters equal one token.
  - Max tokens per chunk (default 3000, range 256 to 8000): Sets the maximum chunk size before embedding.
  - Overlap tokens (default 150, range 0 to 1000): Repeats a small part of the previous chunk so context does not break sharply.
Use smaller chunks when an embedding provider rejects long input. Keep some overlap for long documents where meaning continues across sections.
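The three settings combine into a sliding-window split. A sketch using the defaults above (character-based windows with token counts estimated at 4 chars/token; this is the idea, not AI Puffer's code):

```python
def chunk_text(text: str, avg_chars_per_token: int = 4,
               max_tokens_per_chunk: int = 3000,
               overlap_tokens: int = 150) -> list[str]:
    """Split text into overlapping character windows.

    Token counts are estimated as characters / avg_chars_per_token,
    matching the Document Chunking defaults above.
    """
    max_chars = max_tokens_per_chunk * avg_chars_per_token      # 12000
    overlap_chars = overlap_tokens * avg_chars_per_token        # 600
    step = max_chars - overlap_chars                            # 11400
    if step <= 0:
        raise ValueError("Overlap must be smaller than the chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += step
    return chunks

doc = "".join(str(i % 10) for i in range(30000))  # ~7500 estimated tokens
chunks = chunk_text(doc)
print(len(chunks), [len(c) for c in chunks])  # 3 [12000, 12000, 7200]
```

Note how each chunk starts 600 characters (150 estimated tokens) before the previous one ended, so a sentence that straddles a boundary appears whole in at least one chunk.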
Knowledge Base document chunking settings

Indexing Controls

Indexing controls define which WordPress fields are included when Website training or list-screen indexing sends WordPress content to a vector target.
  1. Click Settings in AI Puffer > Knowledge Base.
  2. Open Indexing controls.
  3. Select a post type.
  4. Adjust Basic Labels if you want different labels for source URL, title, excerpt, or content.
  5. Enable or disable custom fields.
  6. Enable or disable taxonomies.
  7. For WooCommerce products, enable or disable product data such as SKU, price, stock, dimensions, and attributes.
  8. Save.
If the Save button shows Upgrade, activate Pro before saving indexing rules.
Knowledge Base indexing controls
When Show index button is enabled, supported WordPress list screens can show vector indexing controls.
  - Index Status column: Shows whether a post has already been indexed.
  - Index Status filter: Filters content by indexed or not indexed.
  - Add to Vector Store action: Sends selected posts to a vector target.

Semantic Search

Semantic Search publishes a search form that queries a Pinecone index or Qdrant collection from the frontend. Open AI Puffer > Knowledge Base > Settings > Semantic search.
  1. Select Vector DB: Pinecone or Qdrant.
  2. Select the index or collection.
  3. Select the embedding model.
  4. Set Number of Results.
  5. Set No Results Text.
  6. Test a query in Try semantic search.
  7. Copy the shortcode.
[aipkit_semantic_search]
Semantic Search uses the global settings from this panel. It does not use OpenAI Vector Stores in the current UI. Use the same embedding model that was used when the Pinecone or Qdrant data was added.
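Under the hood, semantic search embeds the query with the selected model and returns the nearest stored vectors. A toy sketch of that ranking with cosine similarity over in-memory vectors (a real deployment queries Pinecone or Qdrant instead; the record texts here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_search(query_vec, records, number_of_results=3,
                    no_results_text="Nothing found."):
    # records: (text, vector) pairs already embedded with the SAME
    # model that embedded the query, so the dimensions match.
    scored = sorted(records, key=lambda r: cosine(query_vec, r[1]),
                    reverse=True)
    top = scored[:number_of_results]
    return [text for text, _ in top] if top else no_results_text

records = [
    ("Refund policy", [0.9, 0.1]),
    ("Shipping times", [0.2, 0.8]),
    ("Blue widget specs", [0.7, 0.3]),
]
print(semantic_search([1.0, 0.0], records, number_of_results=2))
# ['Refund policy', 'Blue widget specs']
```

This also shows where Number of Results and No Results Text plug in: the first caps the slice of ranked matches, the second is returned when the target has no trained data.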
Knowledge Base Semantic Search settings

Troubleshooting

  - Missing vector targets: Configure the provider credentials, then sync or create the vector target again.
  - Dimension errors when adding or searching data: Confirm the embedding model dimension matches the index or collection dimension.
  - Expected fields missing from indexed content: Check Settings > Indexing controls for that post type.
  - Index button not visible on list screens: Enable Settings > General > Show index button and confirm the user role can access the vector content indexer module.
  - Empty search results: Confirm the selected target contains trained data and the same embedding model is selected.